WorldWideScience

Sample records for rocmas code prediction

  1. Program ROCMAS: introduction and user's guide

    International Nuclear Information System (INIS)

    Noorishad, J.; Ayatollahi, M.S.

    1980-09-01

    This model is for the study of coupled fluid flow and stress in deformable fractured rock masses. The effective mass theory of Biot is used to relate the pressure changes with the displacements of the rock matrix. The deformation of the fracture surfaces in turn affects the fracture flow through the sensitive dependence of permeability on aperture. The code combines techniques of fluid flow modeling and stress-strain analysis. The model is based on general theory which is of fundamental interest and practical importance. The code has the capability of handling a range of complex problems in fluid flow, induced rock mass deformation and soil consolidation. Further developments to couple the fluid flow with heat transfer, or to incorporate dynamic stress analysis, can increase the range of applicability. More extensive application of the code is called for.

  2. Program ROCMAS: introduction and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Noorishad, J.; Ayatollahi, M.S.

    1980-09-01

    This model is for the study of coupled fluid flow and stress in deformable fractured rock masses. The effective mass theory of Biot is used to relate the pressure changes with the displacements of the rock matrix. The deformation of the fracture surfaces in turn affects the fracture flow through the sensitive dependence of permeability on aperture. The code combines techniques of fluid flow modeling and stress-strain analysis. The model is based on general theory which is of fundamental interest and practical importance. The code has the capability of handling a range of complex problems in fluid flow, induced rock mass deformation and soil consolidation. Further developments to couple the fluid flow with heat transfer, or to incorporate dynamic stress analysis, can increase the range of applicability. More extensive application of the code is called for.

  3. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
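The central quantity in this abstract, the reward prediction error, can be sketched as a simple error-driven value update (a generic Rescorla-Wagner-style illustration, not a model of the dopamine data themselves; the learning rate and reward magnitude are arbitrary):

```python
# Illustrative Rescorla-Wagner-style update driven by a reward prediction
# error (RPE).  Learning rate and reward magnitude are arbitrary choices.

def update_value(v_pred, reward, lr=0.1):
    """Return (updated prediction, prediction error) for one trial."""
    rpe = reward - v_pred           # positive RPE: more reward than predicted
    return v_pred + lr * rpe, rpe

v, rpe = 0.0, None
for _ in range(100):                # the same reward delivered repeatedly
    v, rpe = update_value(v, 1.0)
# Once the reward is fully predicted, the prediction error returns to baseline.
```

After repeated delivery of the same reward, the prediction converges on the reward value and the error term approaches zero, mirroring the "baseline activity for fully predicted rewards" described above.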

  4. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly promising new framework for understanding perception and action, may solve pending theoretical inconsistencies in agency detection research, account for the puzzling experimental findings mentioned above, and provide hypotheses for future experimental testing. Predictive coding explains how the brain, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high...

  5. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  6. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
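The hierarchical exchange of top-down prediction and bottom-up prediction error that both versions of this abstract describe can be illustrated with a deliberately minimal single-layer linear sketch (the generative matrix, input, step size, and iteration count are illustrative assumptions, not anything from the paper):

```python
import numpy as np

# Minimal single-layer predictive coding loop: a linear generative model
# predicts the input, and the prediction error is fed "forward" to update
# the latent representation.  W, the input, and the step size are assumed.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))         # generative weights: latent -> sensory
x = W @ np.array([1.0, -0.5])       # sensory input with a known hidden cause

r = np.zeros(2)                     # latent representation ("percept")
for _ in range(300):
    prediction = W @ r              # top-down prediction of the input
    error = x - prediction          # bottom-up prediction error
    r = r + 0.1 * (W.T @ error)     # gradient step reducing squared error

residual = float(np.linalg.norm(x - W @ r))
```

Over the iterations the representation settles so that the prediction explains the input and the forward error signal vanishes, which is the "optimization over a series of iterations" the abstract refers to, stripped down to one linear layer.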

  7. Speech coding: code-excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students as well as professionals with a background in speech, audio, and/or digital signal processing. It provides a clear connection between the whys, hows, and whats, thus enabling a clear view of the necessity, purpose, and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially, why is this goal important? Resource Information: What information is available, and how can it be useful? Resource Platform: What kind of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory, and acoustic properties, and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals, and ...
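As a generic illustration of the "linear prediction" at the heart of CELP (not an example from the book, and far from a full codec), one can fit short-term predictor coefficients to a toy signal by least squares and observe that the residual a coder would have to encode carries much less energy than the signal itself:

```python
import numpy as np

# Short-term linear prediction on a toy signal: fit an order-8 predictor by
# least squares and measure the prediction gain (signal energy / residual
# energy).  The signal, predictor order, and noise level are arbitrary.

rng = np.random.default_rng(1)
n = np.arange(800)
signal = np.sin(0.3 * n) + 0.05 * rng.normal(size=n.size)

order = 8
# Regression: s[t] ~ a1*s[t-1] + ... + a8*s[t-8]
rows = np.stack([signal[order - k - 1:-k - 1] for k in range(order)], axis=1)
target = signal[order:]
coeffs, *_ = np.linalg.lstsq(rows, target, rcond=None)

residual = target - rows @ coeffs   # the prediction error a coder would encode
gain = float(np.sum(target**2) / np.sum(residual**2))
```

The large prediction gain is exactly why speech coders transmit the (cheap) predictor parameters plus a low-energy excitation rather than the raw waveform; CELP then spends its bits searching a codebook for that excitation.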

  8. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  9. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    ... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept ... of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given ...

  10. Predictive Coding Strategies for Developmental Neurorobotics

    Science.gov (United States)

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to the brain at any given moment of time. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function. PMID:22586416

  11. Predictive Coding Strategies for Developmental Neurorobotics

    Directory of Open Access Journals (Sweden)

    Jun-Cheol ePark

    2012-05-01

    Full Text Available In recent years, predictive coding strategies have been proposed as a possible account of how the brain might make sense of the truly overwhelming amount of sensory data available to the brain at any given moment of time. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expected to happen and what actually happens. In this paper, we present a potpourri of developmental neurorobotics experiments in which minimalist prediction-error based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function.

  12. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN, were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that were used to develop the model. In addition, a brief literature search was performed to determine if more recent data suitable for model comparison have become available since the original model development.

  13. Evolving a Dynamic Predictive Coding Mechanism for Novelty Detection

    OpenAIRE

    Haggett, Simon J.; Chu, Dominique; Marshall, Ian W.

    2007-01-01

    Novelty detection is a machine learning technique which identifies new or unknown information in data sets. We present our current work on the construction of a new novelty detector based on a dynamical version of predictive coding. We compare three evolutionary algorithms, a simple genetic algorithm, NEAT and FS-NEAT, for the task of optimising the structure of an illustrative dynamic predictive coding neural network to improve its performance over stimuli from a number of artificially gener...

  14. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, at the cost of only slightly increased encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
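The background-modeling idea underlying this abstract can be illustrated with a generic running-average background model and a background-difference residual (a simplified stand-in, not the BMAP method itself; frame size, noise level, and update rate are arbitrary):

```python
import numpy as np

# Running-average background model for a static scene, and prediction of a
# new frame in the background-difference domain.  The scene, noise level,
# and update rate alpha are arbitrary illustrative choices.

rng = np.random.default_rng(2)
scene = rng.integers(0, 256, size=(8, 8)).astype(float)   # static background

model = np.zeros_like(scene)
alpha = 0.2
for _ in range(50):                              # observe 50 noisy frames
    frame = scene + rng.normal(0.0, 1.0, size=scene.shape)
    model = (1 - alpha) * model + alpha * frame  # update background model

frame = scene + rng.normal(0.0, 1.0, size=scene.shape)
diff = frame - model          # background-difference residual, near zero
```

Because the modeled background tracks the static scene, the background-difference residual is close to zero and is far cheaper to code than the raw frame, which is the intuition behind both the BRP and BDP modes described above.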

  15. Real coded genetic algorithm for fuzzy time series prediction

    Science.gov (United States)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithms (GAs) form a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (AI). Some variants of GA are binary GA, real GA, messy GA, micro GA, sawtooth GA, and differential evolution GA. This research article presents a real coded GA for predicting enrollments of the University of Alabama, whose enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments, and a genetic algorithm optimizes the fuzzy intervals. Results are compared with the work of other eminent authors and found satisfactory, showing that real coded GAs are fast and accurate.
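A real-coded GA, in contrast to a binary-coded one, evolves chromosomes of real numbers directly. A minimal hedged sketch (a toy objective standing in for the fuzzy-interval optimization; population size, selection scheme, and mutation scale are arbitrary choices):

```python
import random

# Minimal real-coded GA: chromosomes are real vectors, crossover blends
# parent genes arithmetically, and mutation adds Gaussian noise.  The toy
# objective has its minimum at (2, -1).

random.seed(3)

def fitness(ind):
    x, y = ind
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                          # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        w = random.random()                     # arithmetic (blend) crossover
        child = [w * ga + (1 - w) * gb for ga, gb in zip(a, b)]
        child = [g + random.gauss(0, 0.05) for g in child]   # real mutation
        children.append(child)
    pop = parents + children                    # elitism keeps the best 10

best = min(pop, key=fitness)
```

The same machinery applies when each gene encodes a fuzzy-interval boundary and the fitness is a forecasting-error measure, which is the setting the abstract describes.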

  16. Decision-making in schizophrenia: A predictive-coding perspective.

    Science.gov (United States)

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Predictive coding of music: brain responses to rhythmic incongruity.

    Science.gov (United States)

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic predisposition in experts which allows for a more precise rhythmic prediction.

  18. DYMEL code for prediction of dynamic stability limits in boilers

    International Nuclear Information System (INIS)

    Deam, R.T.

    1980-01-01

    Theoretical and experimental studies of hydrodynamic instability in boilers were undertaken to resolve the uncertainties of the existing predictive methods at the time the first Advanced Gas Cooled Reactor (AGR) plant was commissioned. The experiments were conducted on a full-scale electrical simulation of an AGR boiler and revealed inadequacies in the existing methods. As a result, a new computer code called DYMEL was developed, based on linearisation and Fourier/Laplace transformation of the one-dimensional boiler equations in both time and space. Besides giving good agreement with local experimental data, the DYMEL code has since shown agreement with stability data from the plant, sodium-heated helical tubes, a gas-heated helical tube and an electrically heated U-tube. The code is now used widely within the U.K. (author)

  19. Void fraction prediction of NUPEC PSBT tests by CATHARE code

    International Nuclear Information System (INIS)

    Del Nevo, A.; Michelotti, L.; Moretti, F.; Rozzia, D.; D'Auria, F.

    2011-01-01

    The current generation of thermal-hydraulic system codes benefits from about sixty years of experiments and forty years of development, and these codes are considered mature tools able to provide a best-estimate description of phenomena and detailed reactor system representations. However, there are continuous needs for checking the code capabilities in representing nuclear systems, for drawing attention to their weak points, and for identifying models which need to be refined for best-estimate calculations. Prediction of void fraction and Departure from Nucleate Boiling (DNB) in system thermal-hydraulics is currently based on empirical approaches. The database compiled by the Nuclear Power Engineering Corporation (NUPEC), Japan, addresses these issues. It is suitable for supporting the development of new computational tools based on more mechanistic approaches (i.e. three-field codes, two-phase CFD, etc.) as well as for validating the current generation of thermal-hydraulic system codes. Selected experiments belonging to this database are used for the OECD/NRC PSBT benchmark. The paper reviews the activity carried out with the CATHARE2 code on the basis of the subchannel (four test sections) and rod bundle (different axial power profiles and test sections) experiments available in the database, in steady-state and transient conditions. The results demonstrate the accuracy of the code in predicting the void fraction in different thermal-hydraulic conditions. The tests are performed varying the pressure, coolant temperature, mass flow and power. Sensitivity analyses are carried out addressing the nodalization effect and the influence of the initial and boundary conditions of the tests. (author)

  20. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang

    2009-01-01

    We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts. It considers the encoder a mechanism that produces signal measurements from which the decoder estimates the original signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model. The encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. The design method is shown to provide considerable performance gains, when the transmitted quantized prediction errors are subject to loss, in terms of signal-to-noise ratio...
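The decoder-side principle described here, Kalman estimation of an autoregressive source from measurements that are sometimes lost, can be sketched in simplified scalar form (all parameters, the erasure rate, and the noisy-measurement stand-in for the quantized encoder output are assumptions for illustration, not the paper's design):

```python
import numpy as np

# Scalar AR(1) source observed through noisy measurements, 20% of which are
# erased by the channel; the decoder's Kalman filter coasts on its model
# prediction across erasures.  All parameters are illustrative.

rng = np.random.default_rng(4)
a, q, r = 0.95, 1.0, 0.1     # AR coefficient, process and measurement noise

x, xs, ys = 0.0, [], []
for _ in range(500):
    x = a * x + rng.normal(0.0, np.sqrt(q))
    xs.append(x)
    erased = rng.random() < 0.2
    ys.append(None if erased else x + rng.normal(0.0, np.sqrt(r)))

est, p, errs = 0.0, 1.0, []
for x_true, y in zip(xs, ys):
    est, p = a * est, a * a * p + q      # time update (prediction)
    if y is not None:                    # measurement update unless erased
        k = p / (p + r)
        est, p = est + k * (y - est), (1 - k) * p
    errs.append((est - x_true) ** 2)

mse = float(np.mean(errs))
signal_var = float(np.var(xs))           # estimation error << signal power
```

When a measurement is erased, the filter simply skips the update and its error covariance grows until the next received measurement, which is the graceful-degradation behavior the design above exploits.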

  1. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Full Text Available Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in images and motion trajectory in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than a full-search, quarter-pel block matching algorithm (BMA), without the need of transmitting any overhead.
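The backward-adaptive aspect of least-square prediction, fitting predictor weights on already-decoded data so that no side information needs to be transmitted, can be sketched in a 1-D analogue (synthetic data; the two-tap causal neighborhood is an illustrative simplification of the paper's spatiotemporal neighborhoods):

```python
import numpy as np

# Backward-adaptive least-square prediction, 1-D analogue: fit two causal
# predictor taps on an "already decoded" segment, then predict the next
# segment with them.  The synthetic signal and tap count are assumptions.

rng = np.random.default_rng(5)
t = np.arange(1000)
line = np.cos(0.05 * t) + 0.02 * rng.normal(size=t.size)  # slowly varying

past = line[:600]                               # causal, known to both sides
A = np.stack([past[1:-1], past[:-2]], axis=1)   # regressors s[t-1], s[t-2]
w, *_ = np.linalg.lstsq(A, past[2:], rcond=None)

future = line[600:]                             # data still to be coded
pred = w[0] * future[1:-1] + w[1] * future[:-2]
mse = float(np.mean((future[2:] - pred) ** 2))  # far below the signal power
```

Since the decoder holds the same causal past, it can repeat the identical least-squares fit and reproduce the predictor, which is why no overhead needs to be transmitted.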

  2. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  3. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

    Full Text Available The Joint Collaborative Team on Video Coding (JCT-VC) is developing the next-generation video coding standard, which is called High Efficiency Video Coding (HEVC). In the HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU performs recursive splitting into four blocks of equal size, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC technology to reduce its computational complexity. In the 2N×2N PU, the proposed method compares the rate-distortion (RD) cost and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed, based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time-saving factor of 44.84% in the random access (RA) at Main profile configuration with the HEVC test model (HM 10.0) reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is also observed without significant loss of image quality.

  4. Quantitative accuracy assessment of thermalhydraulic code predictions with SARBM

    International Nuclear Information System (INIS)

    Prosek, A.

    2001-01-01

    In recent years, the nuclear reactor industry has focused significant attention on nuclear reactor systems code accuracy and uncertainty issues. A few methods suitable to quantify the code accuracy of thermalhydraulic code calculations were proposed and applied in the past. In this study, a Stochastic Approximation Ratio Based Method (SARBM) was adapted and proposed for accuracy quantification. The objective of the study was to qualify the SARBM. The study compares the accuracy obtained by the SARBM with the results obtained by the widely used Fast Fourier Transform Based Method (FFTBM). The methods were applied to RELAP5/MOD3.2 code calculations of various BETHSY experiments. The obtained results showed that the SARBM was able to satisfactorily predict the accuracy of the calculated trends when visually comparing plots and comparing the results with the qualified FFTBM. The analysis also showed that the new figure-of-merit, called the accuracy factor (AF), is more convenient than the stochastic approximation ratio for combining single-variable accuracies into a total accuracy. The accuracy results obtained for the selected tests suggest that the acceptability factors for the SAR method were reasonably defined. The results also indicate that AF is a useful quantitative measure of accuracy. (author)

  5. Sonic boom predictions using a modified Euler code

    Science.gov (United States)

    Siclari, Michael J.

    1992-04-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  6. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
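The core idea of this record, that reference samples outside the current face should come from the geometric neighbour on the sphere rather than from the 2-D neighbour in the packed frame, can be caricatured with a face-adjacency lookup. The face layout, orientations, and sample values below are invented for illustration; the actual JEM derivation additionally rotates and resamples reference samples across face edges.

```python
# Hypothetical cubemap layout: which face borders each face across its top
# edge (orientations are assumed; a real codec also applies per-edge rotation).
TOP_NEIGHBOUR = {"front": "top", "right": "top", "back": "top",
                 "left": "top", "top": "back", "bottom": "front"}

def reference_row(frame, face, block_x, width):
    """Reference samples 'above' a block in the first row of `face`:
    read the geometrically adjacent cube face instead of the 2-D
    neighbour in the packed frame."""
    src = TOP_NEIGHBOUR[face]
    # the last row of the neighbouring face borders the first row of `face`
    return frame[src][-1][block_x:block_x + width]

# toy frame: each face is a list of sample rows
frame = {"front": [[1, 2, 3, 4]], "top": [[9, 9, 9, 9]]}
print(reference_row(frame, "front", 0, 4))  # samples come from the "top" face
```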

  7. Coding Scheme for Assessment of Students’ Explanations and Predictions

    Directory of Open Access Journals (Sweden)

    Mihael Gojkošek

    2017-04-01

In the process of analyzing students’ explanations and predictions for the interaction between a brightness enhancement film and a beam of white light, a need for an objective and reliable assessment instrument arose. Consequently, we developed a coding scheme that was mostly inspired by the rubrics for self-assessment of scientific abilities. In the paper we present the grading categories that were integrated into the coding scheme and describe the criteria used for the evaluation of students’ work. We report the results of a reliability analysis of the new assessment tool and present some examples of its application.
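The reliability analysis mentioned in this abstract is an inter-rater agreement study; a standard statistic for categorical coding schemes is Cohen's kappa. The record does not say which statistic the authors used, and the category names and ratings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical codes."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[c] * cb[c] for c in ca) / n ** 2   # agreement by chance
    return (p_obs - p_exp) / (1 - p_exp)

# two raters applying a three-level grading category to ten explanations
a = ["adequate", "partial", "adequate", "missing", "adequate",
     "partial", "partial", "adequate", "missing", "adequate"]
b = ["adequate", "partial", "partial", "missing", "adequate",
     "partial", "adequate", "adequate", "missing", "adequate"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # → kappa = 0.68
```

Raw percent agreement here is 0.8; kappa discounts the agreement expected by chance from the raters' marginal category frequencies.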

  8. Biocomputational prediction of small non-coding RNAs in Streptomyces

    Czech Academy of Sciences Publication Activity Database

    Pánek, Josef; Bobek, Jan; Mikulík, Karel; Basler, Marek; Vohradský, Jiří

    2008-01-01

    Roč. 9, č. 217 (2008), s. 1-14 ISSN 1471-2164 R&D Projects: GA ČR GP204/07/P361; GA ČR GA203/05/0106; GA ČR GA310/07/1009 Grant - others:XE(XE) EC Integrated Project ActinoGEN, LSHM-CT-2004-005224. Institutional research plan: CEZ:AV0Z50200510 Keywords : non-coding RNA * streptomyces * biocomputational prediction Subject RIV: IN - Informatics, Computer Science Impact factor: 3.926, year: 2008

  9. Transitioning from interpretive to predictive in thermal hydraulic codes

    International Nuclear Information System (INIS)

    Mousseau, V.A.

    2004-01-01

The current thermal hydraulic codes in use in the US, RELAP and TRAC, were originally written in the mid-to-late 1970s. At that time computers were slow, expensive, and had small memories. Because of these constraints, sacrifices had to be made, both in physics and in numerical methods, which resulted in limitations on the accuracy of the solutions. Significant changes have occurred that impose very different requirements on the thermal hydraulic codes to be used for the future GEN-IV nuclear reactors. First, computer speed and memory grow at an exponential rate while costs hold constant or decrease. Second, passive safety systems in modern designs stretch the length of relevant transients to many days. Finally, the cost of experiments has grown very rapidly. Because of these new constraints, modern thermal hydraulic codes will be relied on for a significantly larger portion of bringing a nuclear reactor on line. Simulation codes will have to define in which part of state space experiments should be run. They will then have to be able to extend the small number of experiments to cover the large state space in which the reactors will operate. This data extrapolation mode will be referred to as 'predictive'. One of the keys to analyzing the accuracy of a simulation is to consider the entire domain being simulated. For example, in a reactor design where the containment is coupled to the reactor cooling system through radiative heat transfer, the accuracy of a transient includes the containment, the radiative heat transfer, the fluid flow in the cooling system, the thermal conduction in the solid, and the neutron transport in the reactor. All of this physics is coupled together in one nonlinear system through material properties, cross sections, heat transfer coefficients, and other mechanisms that exchange mass, momentum, and energy. Traditionally, these different physical domains (containment, cooling system, nuclear fuel, etc.) have been solved in different

  10. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  11. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    International Nuclear Information System (INIS)

    Geng, S.M.; Tew, R.C.

    1994-01-01

Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement.

  12. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  13. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
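A numerical caricature of the scheme this abstract describes, under strong simplifying assumptions (one decoded variable, arbitrary toy constants, and a greedy spike rule standing in for the derived voltage dynamics): the estimate is read out linearly from leaky spike trains, and a neuron fires only when its spike reduces the population-level prediction error.

```python
import numpy as np

# Toy sketch, not the paper's derivation: N neurons track one dynamical
# variable x(t); a neuron spikes only if that reduces the squared error.
N, T, dt, lam = 20, 2000, 1e-3, 10.0
rng = np.random.default_rng(0)
D = rng.uniform(0.05, 0.1, N) * np.array([1, -1] * (N // 2))  # decoding weights
r = np.zeros(N)                       # filtered (leaky) spike trains
errs = []
for t in range(T):
    x = np.sin(2 * np.pi * t * dt)    # target dynamical variable
    r *= 1.0 - dt * lam               # leak on the spike-train filter
    e = x - D @ r                     # population-level prediction error
    gain = e * D - 0.5 * D ** 2       # error reduction if neuron i spiked
    i = int(np.argmax(gain))
    if gain[i] > 0:                   # spike only when it helps
        r[i] += 1.0
    errs.append(abs(x - D @ r))
print(f"mean |prediction error| = {np.mean(errs):.3f}")
```

The greedy rule is the discrete-time shadow of the paper's statement that membrane voltage equals a prediction error: a spike is worthwhile exactly when the error projected onto the neuron's decoding weight exceeds half that weight.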

  14. Improving performance of single-path code through a time-predictable memory hierarchy

    DEFF Research Database (Denmark)

    Cilku, Bekim; Puffitsch, Wolfgang; Prokesch, Daniel

    2017-01-01

    -predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution. The new memory hierarchy reduces both the cache-miss penalty time and the cache-miss rate on the instruction cache. The benefit of the approach is demonstrated through...

  15. A comparison of oxide thickness predictability from the perspective of codes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo-Young; Shin, Hye-In; Kim, Kyung-Tae; Han, Hee-Tak; Kim, Hong-Jin; Kim, Yong-Hwan [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

In Korea, the fuel rod oxide thickness for OPR1000 and Westinghouse-type nuclear power plant reactors has been evaluated with the imported code A. Because of this, there have been multiple constraints in the operation and maintenance of the fuel rod design system. For this reason, there has been a growing demand to establish an independent fuel rod design system. To meet this goal, KNF has recently developed its own code B for fuel rod design. The objective of this study is to compare the oxide thickness prediction performance of code A and code B and to check the validity of the corrosion behaviors predicted by the newly developed code B. This study is based on Pool Side Examination (PSE) data for performance confirmation. For the examination procedures, the oxide thickness measurement methods and equipment of the PSE are described in detail. In this study, the conservatism and validity of code B in evaluating cladding oxide thickness are confirmed through comparison with code A. Code predictions are higher than the data measured in the PSE. Throughout this study, the values given by code B are evaluated and proved to be valid from the viewpoint of oxide thickness evaluation. However, the code B input for prediction has been prepared by the designer's judgment with complex handwork, which might lead to excessively conservative results and an ineffective design process with some possibility of errors.

  16. ISODEP, A Fuel Depletion Analysis Code for Predicting Isotopic ...

    African Journals Online (AJOL)

The trend of the results was found to be consistent with that obtained by analytical and other numerical methods. Discovery and Innovation Vol. 13 no. 3/4 December (2001) pp. 184-195. KEY WORDS: depletion analysis, code, research reactor, simultaneous equations, decay of nuclides, radionuclides, isotope.

  17. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  18. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are

  19. The Barrier code for predicting long-term concrete performance

    International Nuclear Information System (INIS)

    Shuman, R.; Rogers, V.C.; Shaw, R.A.

    1989-01-01

There are numerous features incorporated into an LLW disposal facility that deal directly with critical safety objectives required by the NRC in 10 CFR 61. Engineered barriers or structures incorporating concrete are commonly being considered for waste disposal facilities. The Barrier computer code calculates the long-term degradation of concrete structures in LLW disposal facilities. It couples this degradation with water infiltration into the facility, nuclide leaching from the waste, contaminated water release from the facility, and associated doses to members of the critical population group. The concrete degradation methodology of Barrier is described.

  20. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
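The computational claim in this record, that prediction errors update stimulus familiarity while recognition depends on both stimulus and contextual familiarity, has the shape of a delta rule. The sketch below is not the authors' fitted model: the learning rate, the outcome coding, and the equal-weight combination of the two familiarity signals are all assumptions made for illustration.

```python
# Hypothetical delta-rule update: each exposure to a face generates a
# prediction error that moves facial familiarity F toward 1.
def update_familiarity(F, outcome=1.0, alpha=0.3):
    pe = outcome - F          # prediction error on this exposure
    return F + alpha * pe, pe

F, C = 0.0, 0.8               # facial familiarity, contextual familiarity
for trial in range(10):
    F, pe = update_familiarity(F)
recognition = 0.5 * (F + C)   # toy combination of the two familiarity signals
print(f"facial familiarity = {F:.2f}, recognition signal = {recognition:.2f}")
```

After ten exposures the facial familiarity has nearly saturated, while the recognition signal still reflects the context term, mirroring the paper's distinction between stimulus and contextual familiarity.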

  1. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception.
Taken together

  3. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work
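Both copies of this record describe the same mechanism: residual evidence for the suppressed percept produces a prediction error that accumulates until it overturns the dominant interpretation. The toy integrator below (all constants arbitrary, not the authors' fitted model) reproduces the resulting alternation between dominance phases.

```python
import numpy as np

rng = np.random.default_rng(1)
dominant = 0                  # index of the currently dominant interpretation
err, t_last = 0.0, 0
phases = []                   # lengths of successive dominance phases
for t in range(20000):
    # noisy residual evidence for the currently suppressed percept
    residual = 0.5 + 0.05 * rng.standard_normal()
    err += 0.01 * residual    # prediction error accumulates over time ...
    if err > 1.0:             # ... until it overturns the dominant percept
        dominant = 1 - dominant
        phases.append(t - t_last)
        t_last, err = t, 0.0
print(f"{len(phases)} perceptual switches, mean phase = {np.mean(phases):.0f} steps")
```

With a noisy accumulation rate, phase durations vary from cycle to cycle even though the stimulus never changes, which is the qualitative signature of bistable perception the model is meant to capture.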

  4. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    2009-08-01

We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  5. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present the material in a way that gets readers quickly up to speed by describing video compression algorithms succinctly, without going into the mathematical details and technical specifications. For accelerated learning, the hybrid codec structure and the inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research in fast encoder design for HEVC and H.264/AVC is also included.

  6. Dropped fuel damage prediction techniques and the DROPFU code

    International Nuclear Information System (INIS)

    Mottershead, K.J.; Beardsmore, D.W.; Money, G.

    1995-01-01

    During refuelling, and fuel handling, at UK Advanced Gas Cooled Reactor (AGR) stations it is recognised that the accidental dropping of fuel is a possibility. This can result in dropping individual fuel elements, a complete fuel stringer, or a whole assembly. The techniques for assessing potential damage have been developed over a number of years. This paper describes how damage prediction techniques have subsequently evolved to meet changing needs. These have been due to later fuel designs and the need to consider drops in facilities outside the reactor. The paper begins by briefly describing AGR fuel and possible dropped fuel scenarios. This is followed by a brief summary of the damage mechanisms and the assessment procedure as it was first developed. The paper then describes the additional test work carried out, followed by the detailed numerical modelling. Finally, the paper describes the extensions to the practical assessment methods. (author)

  7. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    Science.gov (United States)

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of 16×16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for the block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. As compared to the JSVM 9.11 software, our method saves the encoding time from 48% to 67% for a large variety of test videos over a wide range of coding bit-rates and has only a minor coding performance loss. © 2011 IEEE
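The two pruning ideas in this abstract, inheriting the 16×16 prediction type and thresholding uni-directional costs to skip the bi-directional check, can be sketched as below. The function names, threshold rule, and numeric costs are hypothetical, not the JSVM implementation.

```python
def choose_prediction(cost_fw, cost_bw, parent_best, eval_bi, thr=0.9):
    """Pick FW/BW/BI for a finer partition; eval_bi() is only called
    when the cheap tests cannot rule the bi-directional mode out."""
    uni_mode = "FW" if cost_fw <= cost_bw else "BW"
    uni_cost = min(cost_fw, cost_bw)
    # inheritance + adaptive threshold: if the 16x16 parent chose a
    # uni-directional type and the uni cost is already low, skip BI
    if parent_best != "BI" and uni_cost < thr * (cost_fw + cost_bw) / 2:
        return uni_mode, uni_cost
    bi_cost = eval_bi()
    return ("BI", bi_cost) if bi_cost < uni_cost else (uni_mode, uni_cost)

calls = []
def expensive_bi():           # stands in for the costly BI motion search
    calls.append(1)
    return 80.0

mode, cost = choose_prediction(90.0, 120.0, parent_best="FW", eval_bi=expensive_bi)
print(mode, cost, "BI evaluated:", bool(calls))
```

In this example the parent block preferred FW and the FW cost is already below the adaptive threshold, so the expensive BI evaluation is never invoked; that skipped work is the source of the reported encoding-time savings.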

  8. TRANSENERGY S: computer codes for coolant temperature prediction in LMFBR cores during transient events

    International Nuclear Information System (INIS)

    Glazer, S.; Todreas, N.; Rohsenow, W.; Sonin, A.

    1981-02-01

This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of the energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of the input deck preparation is presented, as well as code logic flowcharts and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61-pin bundle test case are presented.

  9. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    Science.gov (United States)

    Heilbron, Micha; Chait, Maria

    2017-08-04

Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human, and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  10. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has recently been updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely-used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)
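The Maxwellian averaging underlying such rate calculations can be written down compactly. The sketch below uses toy units (kT = μ = 1) and a 1/v cross section, for which ⟨σv⟩ has the analytic value √2, so the quadrature can be checked; it illustrates only the averaging integral, not TALYS itself.

```python
import math

def maxwellian_rate(sigma, kT, mu, n=4000, emax_factor=30.0):
    """<sigma*v> = sqrt(8/(pi*mu)) * kT**-1.5 * integral of
    sigma(E) * E * exp(-E/kT) dE, via the midpoint rule on
    [0, emax_factor*kT]. Toy units throughout."""
    dE = emax_factor * kT / n
    integral = 0.0
    for i in range(n):
        E = (i + 0.5) * dE               # midpoint of the i-th energy bin
        integral += sigma(E) * E * math.exp(-E / kT) * dE
    return math.sqrt(8.0 / (math.pi * mu)) * kT ** -1.5 * integral

# check against the analytic value for a 1/v cross section: <sigma*v> = sqrt(2/mu)
rate = maxwellian_rate(lambda E: 1.0 / math.sqrt(E), kT=1.0, mu=1.0)
print(f"<sigma v> = {rate:.4f}  (analytic: {math.sqrt(2.0):.4f})")
```

For a 1/v cross section σv is constant, so the Maxwellian average is temperature independent; any deviation of the computed rate from √2 measures the quadrature error alone.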

  11. Development of computer code for determining prediction parameters of radionuclide migration in soil layer

    International Nuclear Information System (INIS)

    Ogawa, Hiromichi; Ohnuki, Toshihiko

    1986-07-01

A computer code (MIGSTEM-FIT) has been developed to determine the prediction parameters of radionuclide migration in a soil layer, such as the retardation factor, water flow velocity, and dispersion coefficient, from the concentration distribution of a radionuclide in the soil layer or in the effluent. In this code, the solution of the predictive equation for radionuclide migration is compared with the measured concentration distribution, and the most adequate values of the parameters are determined by the flexible tolerance method. The validity of the finite difference method, one of the methods used to solve the predictive equation, was confirmed by comparison with the analytical solution, and the validity of the fitting method was confirmed by fitting concentration distributions calculated from known parameters. From an examination of the error, it was found that the error of the parameters obtained with this code was smaller than that of the measured concentration distribution. (author)
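The fitting idea can be illustrated with a toy stand-in: an analytic 1-D advection-dispersion pulse plays the role of the code's finite-difference model, and a coarse grid search plays the role of the flexible tolerance method. All values and grids are invented; a noise-free "measured" profile is generated from known parameters and then recovered, mirroring the validation described above.

```python
import math

def profile(x, t, v, D):
    # analytic solution for an instantaneous unit pulse undergoing
    # 1-D advection (velocity v) and dispersion (coefficient D)
    return math.exp(-(x - v * t) ** 2 / (4.0 * D * t)) / math.sqrt(4 * math.pi * D * t)

xs = [i * 0.5 for i in range(40)]                 # measurement positions
true_v, true_D, t_obs = 1.0, 0.5, 10.0
measured = [profile(x, t_obs, true_v, true_D) for x in xs]

# grid search over (v, D): pick the pair minimising the sum of squared misfits
best = min(
    ((v / 10, D / 10) for v in range(1, 30) for D in range(1, 30)),
    key=lambda p: sum((profile(x, t_obs, p[0], p[1]) - m) ** 2
                      for x, m in zip(xs, measured)),
)
print("fitted (v, D) =", best)  # recovers (1.0, 0.5) on this noise-free profile
```

A retardation factor R would enter the same framework by rescaling v and D to v/R and D/R, so it is identifiable from the same kind of misfit minimisation.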

  12. Establishment the code for prediction of waste volume on NPP decommissioning

    International Nuclear Information System (INIS)

    Cho, W. H.; Park, S. K.; Choi, Y. D.; Kim, I. S.; Moon, J. K.

    2013-01-01

In practice, decommissioning waste volume can be estimated appropriately by finding the differences between prediction and actual operation and by considering operational problems or supplementary matters. In nuclear developed countries such as the U.S. or Japan, the decommissioning waste volume is predicted on the basis of experience from their own decommissioning projects. Because of the contamination caused by radioactive material, decontamination activities and the management of radioactive waste should be considered in the decommissioning of a nuclear facility, unlike a conventional plant or facility. As decommissioning activities are performed repeatedly, data for similar activities are accumulated, and an optimal strategy can be achieved by comparison with the predicted strategy. Therefore, a variety of decommissioning experiences is most important. In Korea, there are no data on the decommissioning of commercial nuclear power plants yet. However, KAERI has accumulated basic decommissioning data for nuclear facilities through the decommissioning of a research reactor (KRR-2) and a uranium conversion plant (UCP), and DECOMMIS (DECOMMissioning Information Management System) was developed to provide and manage the whole data of the decommissioning project. Two codes, the FAC code and the WBS code, were established in this process. The FAC code is classified by the decommissioning target of the nuclear facility, and the WBS code is classified by each decommissioning activity. The reason why two codes were created is that the codes used in DEFACS (Decommissioning Facility Characterization management System) and DEWOCS (Decommissioning Work-unit productivity Calculation System) differ from each other, and they were classified for each purpose. DEFACS, which manages the facility, needs a code that categorizes facility characteristics, and DEWOCS, which calculates unit productivity, needs a code that categorizes decommissioning waste volume.
KAERI has accumulated decommissioning data of KRR

  13. Modification in the FUDA computer code to predict fuel performance at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Das, M; Arunakumar, B V; Prasad, P N [Nuclear Power Corp., Mumbai (India)]

    1997-08-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to the prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out, considering the hourglassing of the pellet, by a finite element technique. (author). 15 refs, 1 fig.

  14. Modification in the FUDA computer code to predict fuel performance at high burnup

    International Nuclear Information System (INIS)

    Das, M.; Arunakumar, B.V.; Prasad, P.N.

    1997-01-01

    The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to the prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out, considering the hourglassing of the pellet, by a finite element technique. (author). 15 refs, 1 fig

  15. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. It is based on a subband processing system where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, which shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency…

  16. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code
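Far-field codes of this kind treat the ventilation system as a lumped-parameter network. As a rough illustration of that idea only (not FIRAC's actual compressible, transient gas-dynamics model), a steady incompressible network with linearized branch resistances reduces to a linear system for the unknown node pressures; the room layout and resistance values below are invented:

```python
import numpy as np

# Illustrative lumped-parameter ventilation network (hypothetical layout and
# resistances; FIRAC's real models are compressible and transient).
# Branch flow Q = (P_i - P_j) / R; mass balance holds at each interior node.
branches = [(0, 1, 10.0), (1, 2, 20.0), (2, 3, 10.0), (1, 3, 40.0)]
fixed = {0: 50.0, 3: 0.0}        # fan plenum and ambient pressures (Pa)
unknown = [1, 2]                 # interior rooms with unknown pressure
idx = {node: k for k, node in enumerate(unknown)}

A = np.zeros((len(unknown), len(unknown)))
b = np.zeros(len(unknown))
for i, j, R in branches:
    g = 1.0 / R                  # branch conductance
    for a, c in ((i, j), (j, i)):
        if a in idx:             # assemble the node balance: sum of flows = 0
            A[idx[a], idx[a]] += g
            if c in idx:
                A[idx[a], idx[c]] -= g
            else:
                b[idx[a]] += g * fixed[c]

P = np.linalg.solve(A, b)        # interior room pressures

def pressure(node):
    return fixed[node] if node in fixed else P[idx[node]]

flows = {(i, j): (pressure(i) - pressure(j)) / R for i, j, R in branches}
```

Mass conservation can then be checked at each interior node: the flow into room 1 equals the sum of the flows out through its two branches.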

  17. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  18. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  19. Assessment of the GOTHIC code for prediction of hydrogen flame propagation in small scale experiments

    International Nuclear Information System (INIS)

    Lee, Jin-Yong, E-mail: jinyong1@fnctech.com; Lee, Jung-Jae; Park, Goon-Cherl, E-mail: parkgc@snu.ac.kr

    2006-01-01

    With rising concerns regarding time- and space-dependent hydrogen behavior in severe accidents, calculations of local hydrogen combustion in compartments have been attempted using CFD codes like GOTHIC. In particular, space-resolved hydrogen combustion analysis is essential to address certain safety issues, such as the survivability of safety components, and to determine proper positions for hydrogen control devices such as recombiners or igniters. The GOTHIC 6.1b code provides many advanced features associated with its hydrogen burn models to enhance its calculation capability. In this study, we performed premixed hydrogen/air combustion experiments in an upright, rectangular combustion chamber of dimensions 1 m x 0.024 m x 1 m. The GOTHIC 6.1b code was used to simulate the hydrogen/air combustion experiments, and its prediction capability was assessed by comparing the experimental results with the multidimensional calculation results. In particular, the prediction capability of the GOTHIC 6.1b code for local hydrogen flame propagation phenomena was examined. For some cases, comparisons are also presented for lumped modeling of hydrogen combustion. By evaluating the effects of parametric simulations, we present some guidance for local hydrogen combustion analysis using the GOTHIC 6.1b code. From the analysis results, it is concluded that the modeling parameters of the GOTHIC 6.1b code should be modified when applying the mechanistic burn model to hydrogen propagation analysis in small geometries

  20. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  1. Status of development of a code for predicting the migration of ground additions - MOGRA

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. MOGRA consists of computational codes that are applicable to various evaluation target systems, and can be used on personal computers. The computational code has the dynamic compartment analysis block at its core, along with a graphical user interface (GUI) for setting computation parameters and displaying results, databases, and so on. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. These codes are able to create or delete compartments and to set the migration of environmental-load substances between compartments by simple mouse operations. The system features universality and excellent expandability in the application of computations to various nuclides. (author)
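The dynamic compartment analysis at the core of a code like this amounts to a system of first-order transfer equations between compartments. A minimal sketch of that idea follows; the compartments, transfer rates, and decay constant are invented for illustration (MOGRA's actual parameters come from its databases and GUI configuration):

```python
import numpy as np

# Hypothetical three-compartment chain: atmosphere -> soil -> river,
# with first-order transfers and radioactive decay from every compartment.
compartments = ["atmosphere", "soil", "river"]
# k[i][j] = transfer rate (1/day) from compartment i to compartment j
k = np.array([[0.0, 0.5, 0.0],     # deposition: atmosphere -> soil
              [0.0, 0.0, 0.01],    # runoff: soil -> river
              [0.0, 0.0, 0.0]])
decay = 0.001                      # decay constant (1/day)

y = np.array([1.0, 0.0, 0.0])      # initial inventory (arbitrary units)
dt, days = 0.1, 100
for _ in range(int(days / dt)):    # forward-Euler time stepping
    outflow = k.sum(axis=1) * y    # losses from each compartment
    inflow = k.T @ y               # gains into each compartment
    y = y + dt * (inflow - outflow - decay * y)

print(dict(zip(compartments, y.round(4))))
```

Because the transfers only move material between compartments, the total inventory declines with the decay constant alone, which gives a simple consistency check on the integration.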

  2. Evaluation of the MMCLIFE 3.0 code in predicting crack growth in titanium aluminide composites

    International Nuclear Information System (INIS)

    Harmon, D.; Larsen, J.M.

    1999-01-01

    Crack growth and fatigue life predictions made with the MMCLIFE 3.0 code are compared to test data for unidirectional, continuously reinforced SCS-6/Ti-14Al-21Nb (wt pct) composite laminates. The MMCLIFE 3.0 analysis package is a design tool capable of predicting strength and fatigue performance in metal matrix composite (MMC) laminates. The code uses a combination of micromechanic lamina and macromechanic laminate analyses to predict stresses and uses linear elastic fracture mechanics to predict crack growth. The crack growth analysis includes a fiber bridging model to predict the growth of matrix flaws in 0 degree laminates and is capable of predicting the effects of interfacial shear stress and thermal residual stresses. The code has also been modified to include edge-notch flaws in addition to center-notch flaws. The model was correlated with constant amplitude, isothermal data from crack growth tests conducted on 0- and 90 degree SCS-6/Ti-14-21 laminates. Spectrum fatigue tests were conducted, which included dwell times and frequency effects. Strengths and areas for improvement for the analysis are discussed

  3. Comparison of LIFE-4 and TEMECH code predictions with TREAT transient test data

    International Nuclear Information System (INIS)

    Gneiting, B.C.; Bard, F.E.; Hunter, C.W.

    1984-09-01

    Transient tests in the TREAT reactor were performed on FFTF reference-design mixed-oxide fuel pins, most of which had received prior steady-state irradiation in the EBR-II reactor. These transient test results provide a data base for calibration and verification of fuel performance codes and for evaluation of processes that affect pin damage during transient events. This paper presents a comparison of the LIFE-4 and TEMECH fuel-pin thermal/mechanical analysis codes with the results from 20 HEDL TREAT experiments, ten of which resulted in pin failure. Both the LIFE-4 and TEMECH codes provided an adequate representation of the thermal and mechanical data from the TREAT experiments. Also, a criterion for 50% probability of pin failure was developed for each code using an average cumulative damage fraction value calculated for the pins that failed. Both codes employ the two major cladding loading mechanisms of differential thermal expansion and central cavity pressurization, which were demonstrated by the test results. However, a detailed evaluation of the code predictions shows that the two code systems weight the loading mechanisms differently to reach the same end points of the TREAT transient results

  4. A code MOGRA for predicting and assessing the migration of ground additions

    International Nuclear Information System (INIS)

    Amano, Hikaru; Atarashi-Andoh, Mariko; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2004-01-01

    The environment should be protected from the toxic effects not only of ionizing radiation but also of any other environmental load materials. MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. It consists of computational codes applicable to various evaluation target systems and runs on personal computers, serving not only migration analysis but also assessment of the effects of environmental load materials on living things. The functionality of MOGRA has been verified by applying it to analyses of the migration rates of radioactive substances from the atmosphere to soils and plants and of flow rates into rivers. Migration of radionuclides through combinations of hypothetical land utilization areas was also verified. The system can analyze the dynamic changes of the target radionuclides' concentrations in each compartment and the fluxes from one compartment to another. MOGRA draws on a variety of databases, which are included in an additional code, MOGRA-DB. This additional code consists of radionuclide decay charts, solid-liquid distribution coefficients, soil-to-plant transfer factors, feed-to-beef and feed-to-milk transfer coefficients, concentration factors, and age-dependent dose conversion factors for many radionuclides. Another additional code, MOGRA-MAP, can read in graphic maps such as JPEG, TIFF, BITMAP, and GIF files and calculate the area of the target land. (author)

  5. Prediction of surface cracks from thick-walled pressurized vessels with ASME code

    International Nuclear Information System (INIS)

    Thieme, W.

    1983-01-01

    The ASME Code, Section XI, Appendix A, 'Analysis of Flaw Indications', is still non-mandatory for the pressure components of nuclear power plants. It is certainly difficult to take realistic account of the many factors influencing crack propagation while making life predictions. The accuracy of the US guideline is analysed, and its possible applications are roughly outlined. (orig./IHOE) [de

  6. A Predictive Coding Account of Psychotic Symptoms in Autism Spectrum Disorder

    Science.gov (United States)

    van Schalkwyk, Gerrit I.; Volkmar, Fred R.; Corlett, Philip R.

    2017-01-01

    The co-occurrence of psychotic and autism spectrum disorder (ASD) symptoms represents an important clinical challenge. Here we consider this problem in the context of a computational psychiatry approach that has been applied to both conditions--predictive coding. Some symptoms of schizophrenia have been explained in terms of a failure of top-down…

  7. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  8. MASTR: multiple alignment and structure prediction of non-coding RNAs using simulated annealing

    DEFF Research Database (Denmark)

    Lindgreen, Stinus; Gardner, Paul P; Krogh, Anders

    2007-01-01

    MOTIVATION: As more non-coding RNAs are discovered, the importance of methods for RNA analysis increases. Since the structure of ncRNA is intimately tied to the function of the molecule, programs for RNA structure prediction are necessary tools in this growing field of research. Furthermore, it is known that RNA structure is often evolutionarily more conserved than sequence. However, few existing methods are capable of simultaneously considering multiple sequence alignment and structure prediction. RESULT: We present a novel solution to the problem of simultaneous structure prediction and multiple alignment of non-coding RNAs using simulated annealing, with a cost function that considers sequence conservation, covariation and basepairing probabilities. The results show that the method is very competitive to similar programs available today, both in terms of accuracy and computational efficiency. AVAILABILITY: Source code available from http://mastr.binf.ku.dk/

  9. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data to a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors. The first is composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
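The entropy sub-vector described above rests on Shannon entropy of a gene-count distribution over chromosome substructures. A minimal sketch of that computation, with an invented gene-to-arm mapping (real inputs would come from gene annotations):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a count distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# Hypothetical mapping of one disease's genes to chromosome arms
# (gene names and placements here are invented for illustration).
disease_genes = {"geneA": "6p", "geneB": "6p", "geneC": "6p", "geneD": "17q"}
h = entropy(list(Counter(disease_genes.values()).values()))
# Low entropy reflects the concentration on preferred substructures that the
# paper exploits; a uniform spread over four arms would give 2.0 bits.
```

A distribution skewed toward one arm, as above, yields a markedly lower entropy than a uniform one, which is the signal the 45-element sub-vector is built to capture.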

  10. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    Science.gov (United States)

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses are observed in the early visual area that correlate with the perception, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in the primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
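The predictive-estimator dynamics this abstract describes follow the general hierarchical predictive coding scheme, in which a higher level iteratively updates its state to reduce the bottom-up prediction error. A one-level toy version of that update loop (not the authors' three-level blind-spot network; the generative weights and dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(8, 3))       # generative weights: hidden causes -> input
r_true = np.array([1.0, -0.5, 0.3])
I = U @ r_true                    # noiseless "image" generated by the causes

r = np.zeros(3)                   # predictive-estimator state
lr = 0.02
for _ in range(5000):
    err = I - U @ r               # bottom-up prediction error
    r += lr * U.T @ err           # descend the error, refining the estimate
# r now reproduces the hidden causes, so the prediction error vanishes
```

In the full hierarchical model the same error-minimization runs at every level, and the elevated error signals at an unstimulated location are what the paper identifies with filling-in responses.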

  11. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter Vuust

    2014-10-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ('rhythm') and the brain's anticipatory structuring of music ('meter'). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  12. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU Fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1994-10-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. 38 refs., 4 figs., 5 tabs

  13. Modification V to the computer code, STRETCH, for predicting coated-particle behavior

    International Nuclear Information System (INIS)

    Valentine, K.H.

    1975-04-01

    Several modifications have been made to the stress analysis code, STRETCH, in an attempt to improve agreement between the calculated and observed behavior of pyrocarbon-coated fuel particles during irradiation in a reactor environment. Specific areas of the code that have been modified are the neutron-induced densification model and the neutron-induced creep calculation. Also, the capability for modeling surface temperature variations has been added. HFIR Target experiments HT-12 through HT-15 have been simulated with the modified code, and the neutron-fluence vs particle-failure predictions compare favorably with the experimental results. Listings of the modified FORTRAN IV main source program and additional FORTRAN IV functions are provided along with instructions for supplying the additional input data. (U.S.)

  14. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research into bit-rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit-rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real-time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit-rate reduction of 1:3 is achieved for better-than-toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
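The core of LPC, estimating an order-p predictor from the signal's autocorrelation via the Levinson-Durbin recursion, can be sketched in a few lines. This is a textbook floating-point reference, not the fixed-point real-time DSP32C port the paper describes, and the AR(2) test signal is synthetic:

```python
import numpy as np

def lpc(x, order):
    """Levinson-Durbin: coefficients a[1..order] of the forward predictor
    x_hat[n] = sum_k a[k] * x[n - k], from the autocorrelation of x."""
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] += k * a[i - 1:0:-1].copy()           # update inner coefficients
        a[i] = k
        err *= 1.0 - k * k                           # residual prediction error
    return -a[1:]

# Synthetic AR(2) signal: x[n] = 1.3 x[n-1] - 0.4 x[n-2] + white noise
rng = np.random.default_rng(1)
x = np.zeros(4096)
e = rng.normal(size=4096)
for t in range(2, 4096):
    x[t] = 1.3 * x[t - 1] - 0.4 * x[t - 2] + e[t]

coeffs = lpc(x, 2)    # close to the true AR coefficients [1.3, -0.4]
```

In a speech coder, only the quantized predictor coefficients and a residual description are transmitted per frame, which is where the bit-rate reduction comes from.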

  15. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1995-01-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. ((orig.))

  16. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    Science.gov (United States)

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors 0270-6474/14/3416046-12$15.00/0.
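A differential-equation normalization model of the general kind described can be sketched as a relaxation equation whose steady state is the divisively normalized value. The parameters below are illustrative, not those fitted to the LIP recordings:

```python
import numpy as np

def simulate(values, tau=0.1, sigma=1.0, dt=0.001, T=2.0):
    """Firing rates R_i(t) for constant value inputs: forward-Euler
    integration of tau * dR/dt = -R + V / (sigma + sum(R))."""
    R = np.zeros(len(values))
    trace = []
    for _ in range(int(T / dt)):
        R = R + dt / tau * (-R + np.asarray(values) / (sigma + R.sum()))
        trace.append(R.copy())
    return np.array(trace)

alone = simulate([2.0])           # target option presented alone
paired = simulate([2.0, 2.0])     # identical distractor option added
```

Adding the distractor leaves the target's value input unchanged but lowers its normalized steady-state response, the kind of context-dependent value modulation the recordings show.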

  17. Use of a commercial heat transfer code to predict horizontally oriented spent fuel rod temperatures

    International Nuclear Information System (INIS)

    Wix, S.D.; Koski, J.A.

    1992-01-01

    Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. The maximum rod temperature is a limiting factor in the amount of spent fuel that can be loaded in a transportation cask. The scope of this work is to demonstrate that reasonable and conservative spent fuel rod temperature predictions can be made using commercially available thermal analysis codes. The demonstration is accomplished by a comparison between numerical temperature predictions, with a commercially available thermal analysis code, and experimental temperature data for electrical rod heaters simulating a horizontally oriented spent fuel rod bundle

  18. Experiment predictions of LOFT reflood behavior using the RELAP4/MOD6 code

    International Nuclear Information System (INIS)

    Lin, J.C.; Kee, E.J.; Grush, W.H.; White, J.R.

    1978-01-01

    The RELAP4/MOD6 computer code was used to predict the thermal-hydraulic transient for Loss-of-Fluid Test (LOFT) Loss-of-Coolant Accident (LOCA) experiments L2-2, L2-3, and L2-4. This analysis will aid in the development and assessment of analytical models used to analyze the LOCA performance of commercial power reactors. Prior to performing experiments in the LOFT facility, the experiments are modeled in counterpart tests performed in the nonnuclear Semiscale MOD 1 facility. A comparison of the analytical results with Semiscale data will verify the analytical capability of the RELAP4 code to predict the thermal-hydraulic behavior of the Semiscale LOFT counterpart tests. The analytical model and the results of analyses for the reflood portion of the LOFT LOCA experiments are described. These results are compared with the data from Semiscale.

  19. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    Science.gov (United States)

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may further extend our knowledge on the functional and neural mechanisms of depressive disorder and development of novel antidepressant treatments strategies targeting the cerebellum.

  20. BOW: A computer code to predict lateral deflections of composite beams

    Energy Technology Data Exchange (ETDEWEB)

    Tayal, M.

    1987-08-15

    Arrays of tubes are used in many engineered structures, such as in nuclear fuel bundles and in steam generators. The tubes can bend (bow) due to in-service temperatures and loads. Assessments of bowing of nuclear fuel elements can help demonstrate the integrity of fuel and of surrounding components as a function of operating conditions such as channel power. The BOW code calculates the bending of composite beams such as fuel elements due to gradients of temperature and due to hydraulic forces. The deflections and rotations are calculated in both lateral directions for given temperature conditions. Wet and dry operation of the sheath can be simulated. BOW accounts for the following physical phenomena: circumferential and axial variations in the temperatures of the sheath and of the pellet; cracking of pellets; grip and slip between the pellets and the sheath; hydraulic drag; restraints from endplates, from neighbouring elements, and from the pressure tube; gravity; concentric or eccentric welds between endcap and endplate; neutron flux gradients; and variations of material properties with temperature. The code is based on fundamental principles of mechanics. The governing equations are solved numerically using the finite element method. Several comparisons with closed-form equations show that the solutions of BOW are accurate. BOW's predictions for initial in-reactor bow are also consistent with two post-irradiation measurements.
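
    Closed-form checks of the kind mentioned above typically rest on the classical thermal-bow result (a textbook beam relation, not BOW's governing equations): a diametral temperature difference ΔT_d across a tube of diameter d and thermal expansion coefficient α produces a uniform curvature κ, and a simply supported span of length L then deflects at midspan by δ:

```latex
\kappa = \frac{\alpha \, \Delta T_d}{d},
\qquad
\delta_{\mathrm{mid}} = \frac{\kappa L^2}{8}
```

    A finite-element solution for a uniformly heated, simply supported element should reproduce this midspan deflection, which makes the relation a convenient verification case.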

  1. Prediction of detonation and JWL eos parameters of energetic materials using EXPLO5 computer code

    CSIR Research Space (South Africa)

    Peter, Xolani

    2016-09-01

    Full Text Available (Ballistic Organization, Cape Town, South Africa, 27-29 September 2016). Nowadays many numerical methods and programs are being used for carrying out thermodynamic calculations of the detonation parameters of condensed explosives, for example BKW Fortran (Mader, 1967), Ruby (Cowperthwaite and Zwisler, 1974), TIGER...

  2. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

    Full Text Available The current era demands high quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications, which can generate bad smells in software that deteriorate its quality and reliability. The source code of open-source software is easily accessible by any developer and is thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, and different measures of entropy (Shannon, Rényi and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted based on the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and Root Mean Squared Prediction Error (RMSPE)). The values of the model performance statistics (R2, adjusted R2, Mean Square Error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
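
    The three entropies named above have simple closed forms over a probability distribution p: Shannon H = -Σ p_i log2 p_i, Rényi H_α = log2(Σ p_i^α)/(1-α), and Tsallis S_q = (1 - Σ p_i^q)/(q-1). A small sketch of computing them over a change distribution (the file names below are made up for illustration, not Apache Abdera data):

```python
import math
from collections import Counter

def shannon(p):
    # H = -sum p_i * log2(p_i)
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi(p, alpha):
    # H_alpha = log2(sum p_i^alpha) / (1 - alpha), alpha > 0, alpha != 1
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

def tsallis(p, q):
    # S_q = (1 - sum p_i^q) / (q - 1), q != 1
    return (1 - sum(x ** q for x in p)) / (q - 1)

def probabilities(events):
    """Empirical distribution over observed events (e.g. files changed)."""
    counts = Counter(events)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

# Hypothetical change log: each entry is a file touched by a commit.
p = probabilities(["parser.c", "parser.c", "lexer.c", "util.c"])
print(round(shannon(p), 3))     # -> 1.5
print(round(tsallis(p, 2), 3))  # -> 0.625
```

    Rényi and Tsallis entropies both reduce to Shannon entropy in the limit α→1 (q→1), so the three measures form a family of increasingly heavy- or light-tailed summaries of how changes spread across the code base.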

  3. Prediction of the HBS width and Xe concentration in grain matrix by the INFRA code

    International Nuclear Information System (INIS)

    Yang, Yong Sik; Lee, Chan Bok; Kim, Dae Ho; Kim, Young Min

    2004-01-01

    Formation of a HBS (High Burnup Structure) is an important phenomenon for high burnup fuel performance and safety. For the prediction of the HBS (the so-called 'rim microstructure'), a proposed rim microstructure formation model, which is a function of the fuel temperature, grain size and fission rate, was inserted into the high burnup fuel performance code INFRA. During the past decades, various examinations have been performed to find the HBS formation mechanism and define HBS characteristics. In the HBEP (High Burnup Effects Program), several rods were examined by EPMA analysis to measure the HBS width, and these results were re-measured by improved technology including XRF and detailed microstructure examination. Recently, very high burnup (∼100 MWd/kgU) fuel examination results were reported by Manzel et al., and EPMA analysis results have been released. Using the measured EPMA analysis data, the HBS formation prediction model of the INFRA code is verified. HBS width prediction results are compared with measured ones, and the Xe concentration profile is compared with measured EPMA data. The calculated HBS width shows good agreement with measured data within a reasonable error range. Though there are some differences in the transition region and the central region, due to model limitations and fission gas release prediction error respectively, the predicted Xe concentration in the fully developed HBS region shows good agreement with the measured data.

  4. IAMBUS, a computer code for the design and performance prediction of fast breeder fuel rods

    International Nuclear Information System (INIS)

    Toebbe, H.

    1990-05-01

    IAMBUS is a computer code for the thermal and mechanical design, in-pile performance prediction and post-irradiation analysis of fast breeder fuel rods. The code deals with steady, non-steady and transient operating conditions and enables prediction of the in-pile behavior of fuel rods in power reactors as well as in experimental rigs. Great effort went into the development of a realistic account of non-steady fuel rod operating conditions. The main emphasis is placed on characterizing the mechanical interaction taking place between the cladding tube and the fuel as a result of contact pressure and friction forces, with due consideration of the axial and radial crack configuration within the fuel as well as the gradual transition at the elastic/plastic interface with respect to fuel behavior. IAMBUS can be readily adapted to various fuel and cladding materials. The specific models and material correlations of the reference version deal with the actual in-pile behavior and physical properties of the KNK II and SNR 300 related fuel rod design, confirmed by comparison of the fuel performance model with post-irradiation data. The comparison comprises steady, non-steady and transient irradiation experiments within the German/Belgian fuel rod irradiation program. The code is further validated by comparison of model predictions with post-irradiation data of standard fuel and breeder rods of Phenix and PFR as well as selected LWR fuel rods in non-steady operating conditions.

  5. Information-Theoretic Evidence for Predictive Coding in the Face-Processing System.

    Science.gov (United States)

    Brodski-Guerniero, Alla; Paasch, Georg-Friedrich; Wollstadt, Patricia; Özdemir, Ipek; Lizier, Joseph T; Wibral, Michael

    2017-08-23

    Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity. SIGNIFICANCE STATEMENT What we perceive depends not only on the information our eyes/retina and other sensory organs receive from the outside world, but strongly depends also on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.

  6. Assessment of Prediction Capabilities of COCOSYS and CFX Code for Simplified Containment

    Directory of Open Access Journals (Sweden)

    Jia Zhu

    2016-01-01

    Full Text Available Acceptable accuracy in the simulation of severe accident scenarios in containments of nuclear power plants is required to investigate the consequences of severe accidents and the effectiveness of potential countermeasures. For this purpose, the actual capability of the CFX tool and the COCOSYS code is assessed in prototypical geometries for a simplified physical process: a plume due to a heat source, under adiabatic and convection boundary conditions, respectively. Results of the comparison under the adiabatic boundary condition show that good agreement is obtained among the analytical solution, the COCOSYS prediction, and the CFX prediction for zone temperature. The general trend of the temperature distribution along the vertical direction predicted by COCOSYS agrees with the CFX prediction except in the dome, where the behavior predicted well by CFX fails to be reproduced by COCOSYS. Both COCOSYS and CFX indicate that there is no temperature stratification inside the dome. The CFX prediction shows that a temperature stratification area occurs beneath the dome and away from the heat source. The temperature stratification area under the adiabatic boundary condition is bigger than that under the convection boundary condition. The results indicate that the average temperature inside the containment predicted with the COCOSYS model is overestimated under the adiabatic boundary condition, while it is underestimated under the convection boundary condition, compared to the CFX prediction.

  7. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  8. Improved predictions of nuclear reaction rates for astrophysics applications with the TALYS reaction code

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J.

    2008-01-01

    Nuclear reaction rates for astrophysics applications are traditionally determined on the basis of Hauser-Feshbach reaction codes, like MOST. These codes use simplified schemes to calculate the capture reaction cross section on a given target nucleus, not only in its ground state but also in the different thermally populated states of the stellar plasma at a given temperature. Such schemes include a number of approximations that have never been tested, such as an approximate width fluctuation correction, the neglect of delayed particle emission during the electromagnetic decay cascade or the absence of the pre-equilibrium contribution at increasing incident energies. New developments have been brought to the reaction code TALYS to estimate the Maxwellian-averaged reaction rates of astrophysics relevance. These new developments make it possible to calculate the reaction cross sections and the corresponding astrophysics rates with improved accuracy. The TALYS predictions for the thermonuclear rates of astrophysics relevance are presented and compared with those obtained with the MOST code on the basis of the same nuclear ingredients for nuclear structure properties, optical model potential, nuclear level densities and γ-ray strength. It is shown that, in particular, the pre-equilibrium process significantly influences the astrophysics rates of exotic neutron-rich nuclei. The reciprocity theorem traditionally used in astrophysics to determine photo-rates is also shown not to be valid for exotic nuclei. The predictions obtained with different nuclear inputs are also analyzed to provide an estimate of the theoretical uncertainties still affecting the reaction rate prediction far away from the experimentally known regions. (authors)
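
    For reference, the Maxwellian-averaged rate such codes evaluate has the standard textbook form (not a TALYS-specific expression), with μ the reduced mass of the entrance channel, T the plasma temperature, and σ(E) the reaction cross section at center-of-mass energy E:

```latex
N_A \langle \sigma v \rangle
= N_A \left( \frac{8}{\pi \mu} \right)^{1/2} (kT)^{-3/2}
\int_0^{\infty} \sigma(E)\, E \, e^{-E/kT} \, dE
```

    The integrand weights the cross section by the Maxwell-Boltzmann energy distribution, which is why thermally populated target states and the low-energy behavior of σ(E) matter so much for the resulting rate.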

  9. Thought insertion as a self-disturbance: An integration of predictive coding and phenomenological approaches

    Directory of Open Access Journals (Sweden)

    Philipp Sterzer

    2016-10-01

    Full Text Available Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from nowhere, as is already indicated by the early 20th century phenomenological accounts by the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: Ichstörungen) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a becoming sensory of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error related aberrant salience of external events that has been proposed previously, internal events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued with aberrant salience.

  10. Comparison of Heavy Water Reactor Thermalhydraulic Code Predictions with Small Break LOCA Experimental Data

    International Nuclear Information System (INIS)

    2012-08-01

    Activities within the frame of the IAEA's Technical Working Group on Advanced Technologies for HWRs (TWG-HWR) are conducted in a project within the IAEA's subprogramme on nuclear power reactor technology development. The objective of the activities on HWRs is to foster, within the frame of the TWG-HWR, information exchange and cooperative research on technology development for current and future HWRs, with an emphasis on safety, economics and fuel resource sustainability. One of the activities recommended by the TWG-HWR was an international standard problem exercise entitled Intercomparison and Validation of Computer Codes for Thermalhydraulics Safety Analyses. Intercomparison and validation of computer codes used in different countries for thermalhydraulics safety analyses will enhance the confidence in the predictions made by these codes. However, the intercomparison and validation exercise needs a set of reliable experimental data. Two RD-14M small break loss of coolant accident (SBLOCA) tests, simulating HWR LOCA behaviour, conducted by Atomic Energy of Canada Ltd (AECL), were selected for this validation project. This report provides a comparison of the results obtained from eight participating organizations from six countries (Argentina, Canada, China, India, Republic of Korea, and Romania), utilizing four different computer codes (ATMIKA, CATHENA, MARS-KS, and RELAP5). General conclusions are reached and recommendations made.

  11. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    Science.gov (United States)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To do a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.

  12. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  13. A study on the prediction capability of GOTHIC and HYCA3D code for local hydrogen concentrations

    International Nuclear Information System (INIS)

    Choi, Y. S.; Lee, W. J.; Lee, J. J.; Park, K. C.

    2002-01-01

    In this study, the prediction capability of the GOTHIC and HYCA3D codes for local hydrogen concentrations was verified against experimental results. Among the experiments conducted by SNU and other organizations inside and outside of the country, the fast transient and the obstacle cases were selected. In the case of a large subcompartment, both codes show good agreement with the experimental data. But in the case of small and complex geometry or fast transients, the results of the GOTHIC code differ considerably from the experimental ones, indicating that the GOTHIC code is unsuitable for these cases. On the contrary, the HYCA3D code agrees well with all the experimental data.

  14. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver, while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  15. ℓ2-optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also achieves higher PSNR for relatively high bit rates.
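
    The conventional near-lossless baseline the paper starts from, a uniform scalar quantizer of prediction residuals inside a DPCM loop, can be sketched as follows. This is a generic illustration with a trivial previous-sample predictor, not the paper's context-based ℓ2-optimized quantizers:

```python
import numpy as np

def dpcm_near_lossless(x, delta):
    """DPCM with a uniform residual quantizer of step 2*delta + 1.
    Guarantees the per-sample bound max |x - x_hat| <= delta."""
    step = 2 * delta + 1
    recon = np.empty(len(x), dtype=int)
    indices = np.empty(len(x), dtype=int)
    prev = 0  # decoder state; the encoder mirrors it to avoid drift
    for i, sample in enumerate(x):
        q = int(np.round((int(sample) - prev) / step))
        indices[i] = q          # these indices would be entropy-coded
        prev = prev + q * step  # reconstruction used for the next prediction
        recon[i] = prev
    return indices, recon

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=1000)
idx, recon = dpcm_near_lossless(x, delta=2)
print(np.max(np.abs(x - recon)))  # never exceeds delta = 2
```

    Because the encoder predicts from its own reconstruction rather than the original samples, quantization errors cannot accumulate, which is what makes the strict ℓ∞ bound possible.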

  16. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

    The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby chosen as either the S-MVP or the AMDST-MVP, the latter being the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP-resulting MV of the current block is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.

  17. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjust the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
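
    The core of any LPC-based data extension, fitting an order-p linear predictor and iterating it past the end of the record, can be sketched as below. This is a generic least-squares illustration under simplifying assumptions; the paper's noise-adaptive filtering and error-array correction are not reproduced:

```python
import numpy as np

def lpc_coefficients(x, order):
    """Least-squares fit of x[n] ~ a[0]*x[n-1] + ... + a[order-1]*x[n-order]."""
    A = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def lpc_extend(x, order, n_extra):
    """Append n_extra predicted samples to the end of the signal."""
    a = lpc_coefficients(x, order)
    out = list(x)
    for _ in range(n_extra):
        # feed the most recent `order` samples (newest first) to the predictor
        out.append(float(np.dot(a, out[:-order - 1:-1])))
    return np.array(out)

t = np.arange(200)
x = np.sin(2 * np.pi * 0.05 * t)
ext = lpc_extend(x, order=2, n_extra=50)
# A noiseless sinusoid obeys x[n] = 2*cos(w)*x[n-1] - x[n-2],
# so a 2-tap predictor extends it essentially exactly.
```

    In the noisy radar setting the fit is only approximate, which is exactly why the extension length matters: predicting too far amplifies model error, and predicting too little wastes resolution.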

  18. From structure prediction to genomic screens for novel non-coding RNAs

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.

    2011-01-01

    Abstract: Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction... This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early... upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.

  19. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    International Nuclear Information System (INIS)

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-01-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning the CCFL prediction are discussed. A method is developed which gives the user the possibility of controlling the flooding limit at a given location. In order to minimize the user effect, a methodology is proposed for calculations involving a counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.

  20. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities, thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. (orig.)
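
    The distinction between the two mixing hypotheses can be made concrete in a few lines. The parameterization below is purely illustrative (the exchange rate and densities are assumed numbers, and this is not either code's actual constitutive relation):

```python
def net_mass_flux_equal_mass(exchange, rho_hot, rho_cold):
    # Equal-mass hypothesis (COBRA-IV style): identical masses are swapped
    # in each direction, so the fluctuating exchange carries no net mass.
    return 0.0

def net_mass_flux_equal_volume(exchange, rho_hot, rho_cold):
    # Equal-volume hypothesis (ASSERT-4 style): equal volumes of different
    # density are swapped, leaving a net transverse mass flux [kg/(m*s)].
    return exchange * (rho_hot - rho_cold)

# Assumed illustrative numbers: a liquid-rich (dense) subchannel next to a
# vapour-rich (light) one, with a volumetric exchange rate of 1e-4 m^2/s.
flux_em = net_mass_flux_equal_mass(1e-4, 750.0, 300.0)
flux_ev = net_mass_flux_equal_volume(1e-4, 750.0, 300.0)
```

    The nonzero net flux under the equal-volume hypothesis is precisely the degree of freedom that lets ASSERT-4 redistribute mass between subchannels of different quality, which the abstract identifies as the reason it tracks the experimental trends better.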

  1. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    Energy Technology Data Exchange (ETDEWEB)

    Geffraye, G.; Bazin, P.; Pichon, P. [CEA/DRN/STR, Grenoble (France)

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning the CCFL prediction are discussed. A method is developed which gives the user the possibility of controlling the flooding limit at a given location. In order to minimize the user effect, a methodology is proposed for calculations involving a counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.

  2. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Science.gov (United States)

    Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose; Bullock, James S.; Joung, M. Ryan; Devriendt, Julien; Ceverino, Daniel; Kereš, Dušan; Hopkins, Philip F.; Faucher-Giguère, Claude-André

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ~4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  3. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kyle R. [Department of Mathematical Sciences, California Baptist University, 8432 Magnolia Ave., Riverside, CA 92504 (United States); Maller, Ariyeh H. [Department of Physics, New York City College of Technology, 300 Jay St., Brooklyn, NY 11201 (United States); Oñorbe, Jose [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Bullock, James S. [Center for Cosmology, Department of Physics and Astronomy, The University of California at Irvine, Irvine, CA 92697 (United States); Joung, M. Ryan [Department of Astronomy, Columbia University, New York, NY 10027 (United States); Devriendt, Julien [Department of Physics, University of Oxford, The Denys Wilkinson Building, Keble Rd., Oxford OX1 3RH (United Kingdom); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, D-69120 Heidelberg (Germany); Kereš, Dušan [Department of Physics, Center for Astrophysics and Space Sciences, University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Hopkins, Philip F. [California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Faucher-Giguère, Claude-André [Department of Physics and Astronomy and CIERA, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208 (United States)

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ∼4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  4. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction

    Directory of Open Access Journals (Sweden)

    Kiran Sree Pokkuluri

    2014-01-01

    Full Text Available Protein coding and promoter region predictions are very important challenges in bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods are being refined, to predict the two regions separately; still there is scope for improvement. We propose a classifier built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with the Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162, and with the MMCRI datasets for protein coding region prediction for DNA sequences of lengths 252 and 354. The proposed classifier is trained and tested with promoter sequences from the DBTSS dataset (Yamashita et al., 2006) and non-promoters from the EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of promoter and protein coding region predictions are 0.89 and 0.92, respectively.

  5. Evaluation of Design & Analysis Code, CACTUS, for Predicting Crossflow Hydrokinetic Turbine Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wosnik, Martin [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Bachant, Pete [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murphy, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical-axis and horizontal-axis wind turbines, and has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines had yet to be tested. The present study addresses this problem by comparing the performance curves predicted by CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived from experimental measurements of the same model turbine in a tow tank at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of the poor model prediction. A comparison of several NACA 0021 foil data sources, derived from both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are therefore advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil are the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
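
    The link between tip speed ratio and angle of attack that motivates this advice follows from blade kinematics alone. A sketch of the ideal geometric AoA of a crossflow turbine blade (induced velocities neglected, so this is an upper-bound estimate, not a CACTUS calculation):

```python
import math

def geometric_aoa(theta, tsr):
    # Ideal (induction-free) angle of attack of a crossflow turbine blade at
    # azimuth theta [rad] for tip speed ratio tsr: tan(a) = sin / (tsr + cos).
    return math.atan2(math.sin(theta), tsr + math.cos(theta))

def max_aoa(tsr, n=3600):
    # Peak |AoA| over one revolution; analytically asin(1/tsr) for tsr > 1.
    return max(abs(geometric_aoa(2 * math.pi * k / n, tsr)) for k in range(n))

# Higher tip speed ratio narrows the AoA excursion, keeping the blade in the
# range where static foil data are more trustworthy.
aoa_deg = {tsr: math.degrees(max_aoa(tsr)) for tsr in (1.5, 2.0, 3.0)}
```

    At a tip speed ratio of 2 the blade ideally sweeps through ±30° of AoA every revolution, well beyond static stall for a NACA 0021 section, which is why the foil-data gap at high AoA dominates the model error.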

  6. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
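
    The inference step shared by sparse coding models, finding a sparse code for a stimulus under a fixed dictionary, can be sketched with a generic l1-penalized solver (ISTA). This is a minimal illustration of the sparsity principle, not a reproduction of the paper's three models or its binocular rearing simulations:

```python
import numpy as np

def ista(D, x, lam=0.05, n_iter=1000):
    # Iterative shrinkage-thresholding for the sparse coding inference step:
    # minimize 0.5 * ||x - D a||^2 + lam * ||a||_1 over the code a.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# A 3-sparse signal in an overcomplete random dictionary is recovered as a
# sparse code, illustrating inference under the sparsity prior.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[5, 40, 99]] = [1.5, -2.0, 1.0]
x = D @ a_true
a_hat = ista(D, x)
```

    In the receptive-field models, a second (learning) step adapts the dictionary D itself to the rearing-condition image statistics; the learned atoms are what get compared against V1 receptive fields.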

  7. Plasma burn-through simulations using the DYON code and predictions for ITER

    International Nuclear Information System (INIS)

    Kim, Hyun-Tae; Sips, A C C; De Vries, P C

    2013-01-01

    This paper will discuss simulations of the full ionization process (i.e. plasma burn-through), fundamental to creating high temperature plasma. By means of an applied electric field, the gas is partially ionized by the electron avalanche process. In order for the electron temperature to increase, the remaining neutrals need to be fully ionized in the plasma burn-through phase, as radiation is the main contribution to the electron power loss. The radiated power loss can be significantly affected by impurities resulting from interaction with the plasma facing components. The DYON code is a plasma burn-through simulator developed at the Joint European Torus (JET) (Kim et al and EFDA-JET Contributors 2012 Nucl. Fusion 52 103016, Kim, Sips and EFDA-JET Contributors 2013 Nucl. Fusion 53 083024). The dynamic evolution of the plasma temperature and plasma densities including the impurity content is calculated in a self-consistent way using plasma wall interaction models. The recent installation of a beryllium wall at JET enabled validation of the plasma burn-through model in the presence of new, metallic plasma facing components. The simulation results of the plasma burn-through phase show a consistent good agreement against experiments at JET, and explain differences observed during plasma initiation with the old carbon plasma facing components. In the International Thermonuclear Experimental Reactor (ITER), the allowable toroidal electric field is restricted to 0.35 V m⁻¹, which is significantly lower than the typical value (∼1 V m⁻¹) used in present devices. The limitation on toroidal electric field also reduces the range of other operation parameters during plasma formation in ITER. Thus, predictive simulations of plasma burn-through in ITER using a validated model are of crucial importance. This paper provides an overview of the DYON code and the validation, together with new predictive simulations for ITER using the DYON code. (paper)
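
    The competition between ionization and losses that decides whether burn-through succeeds can be caricatured by a zero-dimensional avalanche balance. The rates below are assumed toy numbers for illustration; DYON itself solves coupled temperature, density, impurity and plasma-wall interaction equations rather than this single exponential:

```python
import math

def avalanche_density(n0, nu_ion, nu_loss, t):
    # Zero-dimensional electron avalanche: dn/dt = (nu_ion - nu_loss) * n,
    # so n(t) = n0 * exp((nu_ion - nu_loss) * t).
    return n0 * math.exp((nu_ion - nu_loss) * t)

# Assumed toy rates [1/s]: the avalanche grows only if ionization outpaces
# losses; a weaker electric field lowers nu_ion and can tip the balance.
n0 = 1e12
n_grow = avalanche_density(n0, nu_ion=5e3, nu_loss=1e3, t=5e-3)   # net growth
n_decay = avalanche_density(n0, nu_ion=5e2, nu_loss=1e3, t=5e-3)  # net decay
```

    The restricted 0.35 V m⁻¹ field in ITER acts on the growth side of this balance, which is why validated predictive modelling of the loss terms (radiation, impurities) matters so much there.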

  8. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids a HEMs chemist/engineer in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such a computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code viz. LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔH_e) without any experimental data for different HEMs, which are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameter, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with a linear equation y = 0.9262x + 101.45. The correlation coefficient value 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.
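
    The reported regression check (slope near one, high R²) is the standard computed-versus-experimental validation. A sketch of that check on synthetic stand-in data (the numbers below are invented for illustration, not the LOTUSES dataset):

```python
import numpy as np

def fit_line_r2(x, y):
    # Least-squares line y ~ m*x + c and coefficient of determination R^2.
    m, c = np.polyfit(x, y, 1)
    resid = y - (m * x + c)
    ss_res = float(np.sum(resid**2))
    ss_tot = float(np.sum((y - y.mean())**2))
    return m, c, 1.0 - ss_res / ss_tot

# Invented "computed" vs "experimental" heats of explosion [kJ/kg] with
# realistic scatter; a slope near 1 and R^2 near 1 indicate good agreement.
rng = np.random.default_rng(1)
x = rng.uniform(3000, 7000, 40)
y = 0.93 * x + 100.0 + rng.normal(0.0, 150.0, 40)
m, c, r2 = fit_line_r2(x, y)
```

    On the paper's actual data this procedure gives y = 0.9262x + 101.45 with R² = 0.9721, i.e. computed values slightly and systematically below experiment but tightly correlated with it.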

  9. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual part of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  10. The Interaction between Interoceptive and Action States within a Framework of Predictive Coding

    Science.gov (United States)

    Marshall, Amanda C.; Gentsch, Antje; Schütz-Bosbach, Simone

    2018-01-01

    The notion of predictive coding assumes that perception is an iterative process between prior knowledge and sensory feedback. To date, this perspective has been primarily applied to exteroceptive perception as well as action and its associated phenomenological experiences such as agency. More recently, this predictive, inferential framework has been theoretically extended to interoception. This idea postulates that subjective feeling states are generated by top–down inferences made about internal and external causes of interoceptive afferents. While the processing of motor signals for action control and the emergence of selfhood have been studied extensively, the contributions of interoceptive input and especially the potential interaction of motor and interoceptive signals remain largely unaddressed. Here, we argue for a specific functional relation between motor and interoceptive awareness. Specifically, we implicate interoceptive predictions in the generation of subjective motor-related feeling states. Furthermore, we propose a distinction between reflexive and pre-reflexive modes of agentic action control and suggest that interoceptive input may affect each differently. Finally, we advocate the necessity of continuous interoceptive input for conscious forms of agentic action control. We conclude by discussing further research contributions that would allow for a fuller understanding of the interaction between agency and interoceptive awareness. PMID:29515495

  11. Positive predictive value of peptic ulcer diagnosis codes in the Danish National Patient Registry.

    Science.gov (United States)

    Viborg, Søren; Søgaard, Kirstine Kobberøe; Jepsen, Peter

    2017-01-01

    Diagnoses of peptic ulcer are registered in the Danish National Patient Registry (DNPR) for administrative as well as research purposes, but it is unknown whether the coding validity depends on the location of the ulcer. Our aim was to validate the International Classification of Diseases, 10th revision diagnosis codes of peptic ulcer in the DNPR by estimating positive predictive values (PPVs) for gastric and duodenal ulcer diagnoses. We identified all patients registered with a hospital discharge diagnosis of peptic ulcer from Aarhus University Hospital, Denmark, in 1995-2006. Among them, we randomly selected 200 who had an outpatient gastroscopy at the time of ulcer diagnosis. We reviewed the findings from these gastroscopies to confirm the presence of peptic ulcer and its location. We calculated PPVs and corresponding 95% confidence intervals (CIs) of gastric and duodenal ulcer diagnoses, using descriptions from the gastroscopic examinations as the standard reference. In total, 182 records (91%) were available for review. The overall PPV of peptic ulcer diagnoses in the DNPR was 95.6% (95% CI 91.5-98.1), with PPVs of 90.3% (95% CI 82.4-95.5) for gastric ulcer diagnoses, and 94.4% (95% CI 87.4-98.2) for duodenal ulcer diagnoses. PPVs were constant over time. The PPV of uncomplicated peptic ulcer diagnoses in the DNPR is high, and the location of the ulcers is registered correctly in most cases, indicating that the diagnoses are useful for research purposes.
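
    The PPV arithmetic can be reproduced directly: 95.6% of the 182 reviewed records corresponds to about 174 confirmed diagnoses (an inferred count, not stated in the abstract), and a Wilson score interval gives a CI close to the reported one (the paper's exact method is not specified, so the interval below is an approximation):

```python
import math

def ppv_wilson(confirmed, total, z=1.96):
    # Positive predictive value confirmed/total with a Wilson score 95% CI.
    p = confirmed / total
    denom = 1.0 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return p, centre - half, centre + half

# 0.956 * 182 ~ 174 confirmed ulcer diagnoses (inferred from the abstract).
p, lo, hi = ppv_wilson(174, 182)
```

    The Wilson interval is preferable to the naive normal approximation here because the proportion sits close to 1, where the symmetric interval would spill past 100%.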

  12. Predicting holland occupational codes by means of paq job dimension scores

    Directory of Open Access Journals (Sweden)

    R. P. Van Der Merwe

    1990-06-01

    Full Text Available A study was conducted on how to obtain Holland's codes for South African occupations practically and economically by deducing them from information on the nature of the occupation (as derived by means of the Position Analysis Questionnaire). A discriminant analysis revealed that on the basis of the PAQ information the occupations could be distinguished clearly according to the main orientations of their American codes. Regression equations were also developed to predict the mean Self-Directed Search scores of the occupations on the basis of their PAQ information. Summary: A study was undertaken to obtain Holland's codes for South African occupations in a practical and economical way by deriving them from information on the nature of the occupation (as obtained by means of the Position Analysis Questionnaire). A discriminant analysis showed that, on the basis of the PAQ information, the occupations could be clearly distinguished according to the main occupational groups of their American codes. Furthermore, regression equations were developed to predict the mean Self-Directed Search scores of occupations on the basis of their PAQ information.

  13. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1985-01-01

    The Fusion Safety Program of EG&G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide the temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first-wall structure design. This structure was composed of composite layers of vanadium and stainless steel. Included in these calculations was the effect of contrasting material tritium solubility at the composite interface. In addition, TMAP was used to investigate the rate of tritium cleanup after an accidental release into the atmosphere of a reactor building. Tritium retention and release from surfaces and conversion to the oxide form were predicted.
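
    In its simplest limit, the one-dimensional transport that such a code solves reduces to a diffusion equation through a wall. A minimal explicit finite-difference sketch in dimensionless units (no trapping, no composite layers or solubility jumps; this is an illustration of the governing equation, not the TMAP implementation):

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, n_steps, c_left=1.0, c_right=0.0):
    # Explicit finite-difference solution of dc/dt = D * d2c/dx2 on a slab,
    # with fixed concentrations on the two faces (e.g. an implanted front
    # face and a pumped rear face).
    c = c0.astype(float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    for _ in range(n_steps):
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c_left, c_right
    return c

# Dimensionless toy slab: after long times the concentration relaxes to the
# linear steady-state permeation profile c(x) = 1 - x.
nx = 51
prof = diffuse_1d(np.zeros(nx), D=1.0, dx=1.0 / (nx - 1), dt=1e-4, n_steps=20000)
```

    Trapping on impurities or dislocations, as modeled in TMAP, adds a sink/source exchange term to this balance and can delay breakthrough by orders of magnitude relative to the trap-free profile above.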

  14. Predictive Coding and Multisensory Integration: An Attentional Account of the Multisensory Mind

    Directory of Open Access Journals (Sweden)

    Durk eTalsma

    2015-03-01

    Full Text Available Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.

  15. Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code

    International Nuclear Information System (INIS)

    Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat

    2015-01-01

    The behaviors of impurity are investigated using self-consistent modeling of 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both core and edge region. In these simulations, a combination of NCLASS neoclassical transport and Multi-mode (MMM95) anomalous transport model is used to compute a core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of plasma current, temperature and density profiles is carried out for ITER and DEMOs plasmas. As a result, the impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)

  16. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  17. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

    Small insertions and deletions (indels) have a significant influence in human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models for assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.

  18. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    International Nuclear Information System (INIS)

    Aounallah, Y.; Coddington, P.; Gantner, U.

    2000-01-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-3D user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)
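
    The drift-flux correlations compared here all share the Zuber-Findlay form, in which the void fraction follows from the superficial velocities, a distribution parameter C0 and a drift velocity. A sketch with illustrative constant coefficients (Chexal-Lellouche and Inoue compute C0 and the drift velocity from the flow conditions rather than fixing them):

```python
def drift_flux_void(jg, jl, C0=1.13, vgj=0.23):
    # Zuber-Findlay drift-flux relation for the area-averaged void fraction:
    #   alpha = jg / (C0 * (jg + jl) + vgj)
    # jg, jl: gas and liquid superficial velocities [m/s]
    # C0: distribution parameter; vgj: drift velocity [m/s] (assumed values)
    return jg / (C0 * (jg + jl) + vgj)

# The drift-flux void fraction lies below the homogeneous (no-slip) value
# jg / (jg + jl) because the vapour travels faster than the mixture.
alpha = drift_flux_void(jg=0.4, jl=1.0)
alpha_homogeneous = 0.4 / (0.4 + 1.0)
```

    The cited RETRAN-3D limitation at very low fluid velocity is visible in this form: as jg + jl approaches zero, alpha is controlled almost entirely by the drift velocity term, so errors in vgj dominate the prediction.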

  19. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    Energy Technology Data Exchange (ETDEWEB)

    Aounallah, Y.; Coddington, P.; Gantner, U

    2000-07-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-3D user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)

  20. From structure prediction to genomic screens for novel non-coding RNAs.

    Science.gov (United States)

    Gorodkin, Jan; Hofacker, Ivo L

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.
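As a minimal illustration of structure prediction by base-pair maximization (a drastic simplification of the energy-directed folding mentioned above; genomic screens use richer energy models plus comparative information), a Nussinov-style dynamic program counting Watson-Crick and GU wobble pairs might look like this:

```python
# Minimal Nussinov-style dynamic program: maximize the number of
# Watson-Crick / GU-wobble base pairs in an RNA sequence.  Only a
# sketch of the idea; real folding tools minimize free energy.

PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def max_base_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs, hairpin loops >= min_loop."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # j left unpaired
            for k in range(i, j - min_loop):     # pair k with j
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]
```

For the hairpin-forming sequence "GGGAAAUCCC" the maximum is three pairs (the three G-C stem pairs), since the loop constraint forbids pairing inside the AAAU loop.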

  1. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    International Nuclear Information System (INIS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-01-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by the 1D seismic wave propagation modelling used as a standard in engineering applications. These features may have a significant impact on the structural response, especially in the nonlinear range, and are hard to predict and to cast in a design format owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motion in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE, based on the spectral element method, is adopted. The numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion

  2. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.

    1988-01-01

    The NUFREQ-NP code was modified and set up at Westinghouse, USA, for mixed fuel type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low flow stability tests were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. Moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs

  3. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    International Nuclear Information System (INIS)

    Chen, K.F.; Olson, C.A.

    1983-01-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction
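The kind of calculation being verified can be sketched as follows: equilibrium quality from an energy balance with a cosine heat-flux shape, homogeneous mixture density, and integration of the friction, gravity and acceleration pressure-drop terms. All properties and geometry below are invented round numbers, and this is neither the CONDOR model nor the paper's analytic solution:

```python
import math

# Homogeneous-equilibrium pressure drop in a vertical heated channel
# with a cosine axial heat-flux shape -- an illustrative sketch only.
# Fluid properties and geometry are invented round numbers.

G = 1500.0                  # mass flux, kg/m^2 s
L = 3.0                     # heated length, m
De = 0.012                  # hydraulic diameter, m
rho_f, rho_g = 740.0, 37.0  # saturated liquid/vapour density, kg/m^3
h_fg = 1.5e6                # latent heat, J/kg
q_avg = 8.0e5               # average wall heat flux, W/m^2
P_h, A = 0.04, 1.0e-4       # heated perimeter (m), flow area (m^2)
f = 0.02                    # constant friction factor (homogeneous model)

def quality(z):
    """Equilibrium quality from an energy balance, q''(z) ~ sin(pi z/L),
    saturated liquid at the inlet (x(0) = 0)."""
    heat = q_avg * P_h * (L / 2.0) * (1.0 - math.cos(math.pi * z / L))
    return heat / (G * A * h_fg)

def mixture_density(x):
    """Homogeneous (no-slip) two-phase mixture density."""
    return 1.0 / (x / rho_g + (1.0 - x) / rho_f)

def pressure_drop(n=2000):
    """Friction + gravity integrated over the channel, plus acceleration."""
    dz = L / n
    dp = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        rho = mixture_density(quality(z))
        dp += (f / De * G**2 / (2.0 * rho) + rho * 9.81) * dz
    dp += G**2 * (1.0 / mixture_density(quality(L)) - 1.0 / rho_f)
    return dp
```

With these numbers the exit quality is about 0.43 and the total pressure drop is positive, dominated by the friction and gravity terms.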

  4. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  5. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    Science.gov (United States)

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813
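The error-minimization idea at the heart of PC can be caricatured in a few lines: an internal estimate is repeatedly nudged by the prediction error between the input and the current expectation. This toy update illustrates the principle only; it is not the authors' model of rhythm perception:

```python
# Toy prediction-error minimization: the internal estimate mu is nudged
# by the error between each input and the current prediction.  A
# cartoon of the PC idea, not the authors' model.

def update_estimate(mu, inputs, lr=0.1):
    """Gradient-style update of mu toward the inputs; lr plays the role
    of a (fixed) precision weight on the prediction error."""
    trace = []
    for s in inputs:
        error = s - mu      # prediction error
        mu += lr * error    # minimize the error
        trace.append(mu)
    return mu, trace

# a steady "rhythm" of identical inputs drives the error toward zero
mu, _ = update_estimate(0.0, [1.0] * 50)
```

After 50 identical inputs the estimate has converged close to the input value, i.e. the prediction error has been minimized away; a syncopation would correspond to an input that suddenly violates the converged expectation.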

  6. Simulation of transport in the ignited ITER with 1.5-D predictive code

    International Nuclear Information System (INIS)

    Becker, G.

    1995-01-01

    The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (C_v = 1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation corrected energy confinement times of the EDA and CDA are found to be close to 4 s. In the reference cases, the steady state helium fraction is 7%. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral. But neon is found to cause high fuel dilution. The combined dilution effect by helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. The bootstrap current has only a small impact on the current profile. The sawtooth dominated region is found to cover 35% of the plasma cross-section. Local stability analysis of ideal ballooning modes shows that the plasma is everywhere well below the stability limit. 23 refs, 34 figs, 3 tabs

  7. Heterogeneous fuels for minor actinides transmutation: Fuel performance codes predictions in the EFIT case study

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, R., E-mail: rolando.calabrese@enea.i [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Vettraino, F.; Artioli, C. [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Sobolev, V. [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, B-2400 Mol (Belgium); Thetford, R. [Serco Technical and Assurance Services, 150 Harwell Business Centre, Didcot OX11 0QB (United Kingdom)

    2010-06-15

    Presented results were used for testing newly-developed models installed in the TRANSURANUS code to deal with such innovative fuels and T91 steel cladding. Agreement among code predictions was satisfactory for fuel and cladding temperatures, pellet-cladding gap and mechanical stresses.

  8. Does the Holland Code Predict Job Satisfaction and Productivity in Clothing Factory Workers?

    Science.gov (United States)

    Heesacker, Martin; And Others

    1988-01-01

    Administered Self-Directed Search to sewing machine operators to determine Holland code, and assessed work productivity, job satisfaction, absenteeism, and insurance claims. Most workers were of the Social code. Social subjects were the most satisfied, Conventional and Realistic subjects next, and subjects of other codes less so. Productivity of…

  9. From structure prediction to genomic screens for novel non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Jan Gorodkin

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.

  10. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    A fusion hybrid, i.e. a low-power tokamak reactor producing a small fusion power output, is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small fusion power output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical-based Mixed Bohm/gyro-Bohm (B/gB) transport model, whereas the pedestal temperature model is based on magnetic and flow shear stabilization pedestal width scaling (Δ ∝ ρs²). The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization is carried out by varying parameters such as the plasma current and the auxiliary heating power, which results in some improvement of plasma performance

  11. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

    The French PWR reactors are currently operating under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (CAP program by French partners only). (author)

  12. RELAP5/MOD2 code modifications to obtain better predictions for the once-through steam generator

    International Nuclear Information System (INIS)

    Blanchat, T.; Hassan, Y.

    1989-01-01

    The steam generator is a major component in pressurized water reactors. Predicting the response of a steam generator during both steady-state and transient conditions is essential in studying the thermal-hydraulic behavior of a nuclear reactor coolant system. Therefore, many analytical and experimental efforts have been performed to investigate the thermal-hydraulic behavior of the steam generators during operational and accident transients. The objective of this study is to predict the behavior of the secondary side of the once-through steam generator (OTSG) using the RELAP5/MOD2 computer code. Steady-state conditions were predicted with the current version of the RELAP5/MOD2 code and compared with experimental plant data. The code predictions consistently underpredict the degree of superheat. A new interface friction model has been implemented in a modified version of RELAP5/MOD2. This modification, along with changes to the flow regime transition criteria and the heat transfer correlations, correctly predicts the degree of superheat and matches plant data

  13. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), presented here, introduces some novelties: it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.

  14. Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392

    International Nuclear Information System (INIS)

    Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep

    2015-01-01

    Recently, an approach combining multiple codes and multi-scale analysis has been widely applied to study core thermal-hydraulic behavior such as void fraction prediction. Better results are achieved by using multiple or coupled codes such as PARCS and RELAP5. The advantage of multi-scale analysis is zooming in on the part of interest in the simulated domain for detailed investigation. Therefore, in this study, the multiple codes MCNP5, RELAP5 and CTF, and also multi-scale analysis based on RELAP5 and CTF, are applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behavior in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γ_w in RELAP5, based on the Lahey mechanistic method, may not predict the void fraction as accurately as the smaller-scale code CTF. (author)

  15. Assessment of predictive capability of REFLA/TRAC code for large break LOCA transient in PWR using LOFT L2-5 test data

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1994-03-01

    The REFLA/TRAC code is a best estimate code developed at Japan Atomic Energy Research Institute (JAERI) to provide advanced predictions of thermal hydraulic transient in light water reactors (LWRs). The REFLA/TRAC code uses the TRAC-PF1/MOD1 code as the framework of the code. The REFLA/TRAC code is expected to be used for the calibration of licensing codes, accident analysis, accident simulation of LWRs, and design of advanced LWRs. Several models have been implemented to the TRAC-PF1/MOD1 code at JAERI including reflood model, condensation model, interfacial and wall friction models, etc. These models have been verified using data from various separate effect tests. This report describes an assessment result of the REFLA/TRAC code, which was performed to assess the predictive capability for integral system behavior under large break loss of coolant accident (LBLOCA) using data from the LOFT L2-5 test. The assessment calculation confirmed that the REFLA/TRAC code can predict break mass flow rate, emergency core cooling water bypass and clad temperature excellently in the LOFT L2-5 test. The CPU time of the REFLA/TRAC code was about 1/3 of the TRAC-PF1/MOD1 code. The REFLA/TRAC code can perform stable and fast simulation of thermal hydraulic behavior in PWR LBLOCA with enough accuracy for practical use. (author)

  16. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-01-01

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in (1,2) wherein Verification and Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI (3,4) is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models
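The proposed decomposition can be illustrated with a pure-stdlib sketch: extract the leading principal component of a (hypothetical) experiment design matrix by power iteration, then order the experiments by the magnitude of their score on it. The design matrix and the "utility" notion used here are invented for illustration and are not those of the LIFEIV study:

```python
import math

# Sketch: sequence experiments by their relevance to the leading
# principal component of the input design space.  Power iteration on
# the covariance; the data and the utility notion (absolute PC1 score)
# are invented, not the study's actual method.

def leading_pc(X, iters=200):
    """Leading principal direction of row-centered data X (list of rows)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(r[j] * v[j] for j in range(d)) for r in Xc]   # Xc v
        w = [sum(scores[i] * Xc[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v, Xc

def sequence_by_pc1(X):
    """Indices of experiments, most 'informative' (largest |PC1 score|) first."""
    v, Xc = leading_pc(X)
    scores = [abs(sum(r[j] * v[j] for j in range(len(v)))) for r in Xc]
    return sorted(range(len(X)), key=lambda i: -scores[i])

design = [[1.0, 0.1], [2.0, 0.0], [10.0, 0.2], [1.5, 0.1]]
order = sequence_by_pc1(design)
```

In this toy design the third experiment sits farthest out along the dominant direction of the input space, so it is scheduled first.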

  17. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.
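Self-similarity compensated prediction is, at its core, block matching restricted to the already-coded area of the same picture. A toy sum-of-absolute-differences search over the region above the current block (assumed already decoded) might look like the following; real encoders work on actual frames with rate-distortion-optimized search, so this is only a sketch of the idea:

```python
# Toy self-similarity search: find, inside the previously coded region
# of the same frame, the block that best predicts the current block
# under a sum-of-absolute-differences (SAD) cost.

def sad(frame, y0, x0, y1, x1, bs):
    """SAD between the bs x bs blocks at (y0, x0) and (y1, x1)."""
    return sum(abs(frame[y0 + i][x0 + j] - frame[y1 + i][x1 + j])
               for i in range(bs) for j in range(bs))

def best_match(frame, by, bx, bs):
    """Search all candidate blocks lying fully above the current block
    (a crude stand-in for the 'already decoded' causal area)."""
    best = None
    for y in range(0, by - bs + 1):
        for x in range(0, len(frame[0]) - bs + 1):
            cost = sad(frame, by, bx, y, x, bs)
            if best is None or cost < best[0]:
                best = (cost, y, x)
    return best

# synthetic frame in which the block at (4, 3) repeats at (0, 3)
frame = [[0] * 8 for _ in range(8)]
for r, c, v in [(0, 3, 5), (0, 4, 6), (1, 3, 7), (1, 4, 8),
                (4, 3, 5), (4, 4, 6), (5, 3, 7), (5, 4, 8)]:
    frame[r][c] = v
match = best_match(frame, by=4, bx=3, bs=2)
```

The repeated micro-structure of holoscopic images (neighbouring micro-lens views are near-copies of each other) is exactly what makes this kind of intra-frame search pay off.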

  18. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    Science.gov (United States)

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. A numerical study of the influence of the void drift model on the predictions of the assert subchannel code

    International Nuclear Information System (INIS)

    Tye, P.; Teyssedou, A.; Troche, N.; Kiteley, J.

    1996-01-01

    One of the factors which is important in order to ensure the continued safe operation of nuclear reactors is the ability to accurately predict the 'Critical Heat Flux' (CHF) throughout the rod bundles in the fuel channel. One method currently used by the Canadian nuclear industry to predict the CHF in the fuel bundles of CANDU reactors is to use the ASSERT subchannel code to predict the local thermal-hydraulic conditions prevailing at each axial location in each subchannel in conjunction with appropriate correlations or the CHF look-up table. The successful application of the above methods depends greatly on the ability of ASSERT to accurately predict the local flow conditions throughout the fuel channel. In this paper, full-range qualitative verification tests using the ASSERT subchannel code are presented which show the influence of the void drift model on the predictions of the local subchannel quality. For typical cases using a 7-rod subset of a full 37-element rod bundle taken from the ASSERT validation database, it will be shown that the void drift term can significantly influence the calculated distribution of the quality in the rod bundle. In order to isolate, as much as possible, the influence of the void drift term, this first numerical study is carried out with the rod bundle oriented both vertically and horizontally. Subsequently, additional numerical experiments will be presented which show the influence that the void drift model has on the predicted CHF locations. (author)

  20. Benchmarking and qualification of the nufreq-npw code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1988-01-01

    The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA for best estimate prediction of multi-channel core stability margins in US BWRs. Various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, for which the mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy

  1. Comparison of Computational Electromagnetic Codes for Prediction of Low-Frequency Radar Cross Section

    National Research Council Canada - National Science Library

    Lash, Paul C

    2006-01-01

    .... The goal of this research is to compare the capabilities of three computational electromagnetic codes for use in production of RCS signature assessments at low frequencies in terms of performance...

  2. Using clinical data to predict high-cost performance coding issues associated with pressure ulcers: a multilevel cohort model.

    Science.gov (United States)

    Padula, William V; Gibbons, Robert D; Pronovost, Peter J; Hedeker, Donald; Mishra, Manish K; Makic, Mary Beth F; Bridges, John Fp; Wald, Heidi L; Valuck, Robert J; Ginensky, Adam J; Ursitti, Anthony; Venable, Laura Ruth; Epstein, Ziv; Meltzer, David O

    2017-04-01

    Hospital-acquired pressure ulcers (HAPUs) have a mortality rate of 11.6%, are costly to treat, and result in Medicare reimbursement penalties. Medicare codes HAPUs according to Agency for Healthcare Research and Quality Patient-Safety Indicator 3 (PSI-03), but they are sometimes inappropriately coded. The objective is to use electronic health records to predict pressure ulcers and to identify coding issues leading to penalties. We evaluated all hospitalized patient electronic medical records at an academic medical center data repository between 2011 and 2014. These data contained patient encounter level demographic variables, diagnoses, prescription drugs, and provider orders. HAPUs were defined by PSI-03: stages III, IV, or unstageable pressure ulcers not present on admission as a secondary diagnosis, excluding cases of paralysis. Random forests reduced data dimensionality. Multilevel logistic regression of patient encounters evaluated associations between covariates and HAPU incidence. The approach produced a sample population of 21 153 patients with 1549 PSI-03 cases. The greatest odds ratio (OR) of HAPU incidence was among patients diagnosed with spinal cord injury (ICD-9 907.2: OR = 14.3; P  coded for paralysis, leading to a PSI-03 flag. Other high ORs included bed confinement (ICD-9 V49.84: OR = 3.1, P  coded without paralysis, leading to PSI-03 flags. The resulting statistical model can be tested to predict HAPUs during hospitalization. Inappropriate coding of conditions leads to poor hospital performance measures and Medicare reimbursement penalties. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
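The odds ratios quoted above come from multilevel logistic regression; the basic quantity can be illustrated with a 2x2 exposure/outcome table and a Wald 95% confidence interval. The counts below are invented, not data from the study:

```python
import math

# Odds ratio from a 2x2 exposure/outcome table with a Wald 95% CI --
# the basic effect measure reported in the abstract.  Counts invented.

def odds_ratio(a, b, c, d):
    """a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

or_, ci = odds_ratio(a=40, b=60, c=100, d=900)
```

With these invented counts the odds ratio is 6.0, and the Wald interval brackets it; the multilevel model in the study additionally adjusts for covariates and for clustering of encounters within patients.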

  3. Prediction Capability of SPACE Code about the Loop Seal Clearing on ATLAS SBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Jong Hyuk; Chung, Bub Dong; Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

The most probable break size for loop seal reforming was determined to be 4 inches by pre-calculations with RELAP5 and MARS. Many organizations have participated with various system analysis codes, for example RELAP5, MARS, and TRACE; KAERI participated with the SPACE code. The SPACE code has been developed for use in the design and safety analysis of nuclear thermal-hydraulic systems. KHNP and other organizations have collaborated on it over the last 10 years, and it is currently under certification procedures. SPACE has the capability to analyze the droplet field with a full governing equation set: continuity, momentum, and energy. The SPACE code has participated in the PKL-3 benchmark program as an international activity, and the DSP-04 benchmark problem is a domestic application of SPACE. The cold leg top slot break accident of the APR1400 reactor has been modeled and analyzed with the SPACE code. The benchmark experiment of the DSP-04 program was performed with the ATLAS facility. The break size was selected as 4 inches in the APR1400, and the corresponding scaled-down break size was modeled in the SPACE code. Loop seal reforming occurred in all four loops, but the peak cladding temperature (PCT) showed no significant excursion.

  4. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  5. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  6. Analytical prediction of CHF by FIDAS code based on three-fluid and film-dryout model

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1990-01-01

An analytical prediction model of critical heat flux (CHF) has been developed on the basis of a film dryout criterion due to droplet deposition and entrainment in annular mist flow. Critical heat fluxes in round tubes were analyzed by the Film Dryout Analysis Code in Subchannels (FIDAS), which is based on the three-fluid, three-field and newly developed film dryout model. Predictions by FIDAS were compared with the worldwide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and with Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and show better agreement than the other film dryout models and the empirical correlation of Katto. (author)
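As a toy illustration of the film-dryout criterion this abstract describes (emphatically not the FIDAS three-fluid model), one can integrate a liquid-film mass balance along a uniformly heated tube and bisect on heat flux until the film just dries out at the exit. Every constant below is a hypothetical placeholder.

```python
# Toy film-dryout CHF estimate: the film flow rate loses mass to evaporation
# and entrainment and gains it from droplet deposition; "critical" heat flux
# is the value at which the film reaches zero right at the tube exit.
H_FG = 2.0e6          # latent heat of vaporization, J/kg (representative)
PERIM = 0.03          # heated perimeter, m (hypothetical)
DEPOSITION = 0.010    # droplet deposition rate, kg/(m*s) (hypothetical)
ENTRAINMENT = 0.004   # entrainment rate, kg/(m*s) (hypothetical)

def dryout_length(q, w_film0=0.05, length=3.0, dz=0.001):
    """Axial position where the film dries out, or None if it survives."""
    w, z = w_film0, 0.0
    while z < length:
        evap = q * PERIM / H_FG                    # film loss by evaporation
        w += (DEPOSITION - ENTRAINMENT - evap) * dz
        z += dz
        if w <= 0.0:
            return z
    return None

def critical_heat_flux(length=3.0, lo=1e4, hi=1e7):
    """Bisect on q until dryout occurs just at the tube exit."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if dryout_length(mid, length=length) else (mid, hi)
    return hi

chf = critical_heat_flux()
```

With these constants the bisection converges to roughly 1.5 MW/m²; heat fluxes above the returned value dry the film out before the exit, and lower fluxes leave a wetted wall throughout.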

  7. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    International Nuclear Information System (INIS)

    Peterson, P.F.

    1992-03-01

Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System, which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. With the graphics capabilities removed, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety-related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.
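The numerical-diffusion problem that motivates MODFLOW can be demonstrated in a few lines: first-order upwind advection of a sharp hot/cold interface smears the front over many cells, even though exact advection would merely translate it. This is a generic illustration of the phenomenon, not code taken from MODFLOW or the other codes named above.

```python
# First-order upwind advection of a step profile on a uniform 1-D grid.
# Exact advection at Courant number c over 40 steps shifts the step by
# 20 cells; the upwind scheme also smears it, mimicking the loss of a
# stratification front described in the abstract.
n, c = 100, 0.5                       # grid cells, Courant number
T = [1.0] * 20 + [0.0] * (n - 20)     # sharp hot/cold interface at cell 20

for _ in range(40):                   # upwind update: T_i -= c*(T_i - T_{i-1})
    T = [T[0]] + [T[i] - c * (T[i] - T[i - 1]) for i in range(1, n)]

# The front is now centered near cell 40 but spread over many cells:
smear_width = sum(1 for t in T if 0.01 < t < 0.99)
```

A scheme with no numerical diffusion would leave `smear_width` at zero; the upwind result spreads the interface over a dozen or so cells after only 40 steps.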

  8. Erosion corrosion in power plant piping systems - Calculation code for predicting wall thinning

    International Nuclear Information System (INIS)

    Kastner, W.; Erve, M.; Henzel, N.; Stellwag, B.

    1990-01-01

Extensive experimental and theoretical investigations have been performed to develop a calculation code for wall thinning due to erosion corrosion in power plant piping systems. The so-called WATHEC code can be applied to single-phase water flow as well as to two-phase water/steam flow. Only input data which are available to the operator of the plant are taken into consideration. Together with a continuously updated erosion corrosion database, the calculation code forms one element of a weak-point analysis for power plant piping systems, which can be applied to minimize material loss due to erosion corrosion, reduce non-destructive testing, curtail monitoring programs for piping systems, and recommend life-extending measures. (author). 12 refs, 17 figs

  9. A computer code PACTOLE to predict activation and transport of corrosion products in a PWR

    International Nuclear Information System (INIS)

    Beslu, P.; Frejaville, G.; Lalet, A.

    1978-01-01

Theoretical studies on activation and transport of corrosion products in a PWR primary circuit have been concentrated, at CEA, on the development of a computer code: PACTOLE. This code takes into account the major phenomena which govern corrosion product transport: 1. Ion solubility is obtained from the usual thermodynamic laws as a function of water chemistry; the pH at operating temperature is calculated by the code. 2. Release rates of base metals, dissolution rates of deposits, and precipitation rates of soluble products are derived from solubility variations. 3. Deposition of solid particles is treated by a model taking into account particle size, Brownian and turbulent diffusion, and inertial effects. Erosion of deposits is accounted for by a semi-empirical model. After a review of the calculational models, an application of PACTOLE is presented in view of analyzing the distribution of corrosion products in core. (author)

  10. A fast and compact Fuel Rod Performance Simulator code for predictive, interpretive and educational purpose

    International Nuclear Information System (INIS)

    Lorenzen, J.

    1990-01-01

A new Fuel Rod Performance Simulator code, FRPS, has been developed, tested and benchmarked, and is now available in different versions. The user may choose between the batch version INTERPIN, which produces results in the form of listings or predefined plots, and the interactive simulator code SIMSIM, which steps through a power history under the control of the user. Both versions are presently running on minicomputers and PCs using EGA graphics. A third version is the implementation in a Studsvik Compact Simulator, with FRPS being one of its various modules receiving the dynamic inputs from the simulator.

  11. Computer code TRANS-ACE predicting for fire and explosion accidents in nuclear fuel reprocessing plants

    International Nuclear Information System (INIS)

    Abe, Hitoshi; Nishio, Gunji; Naito, Yoshitaka

    1993-11-01

    The accident analysis code TRANS-ACE was developed to evaluate the safety of a ventilation system in a reprocessing plant in the event of fire and explosion accidents. TRANS-ACE can evaluate not only the integrity of a ventilation system containing HEPA filters but also the source term of radioactive materials released from the plant. It calculates the temperature, pressure, flow rate, transport of combustion materials and confinement of radioactive materials in the network of a ventilation system that might experience a fire or explosion accident. TRANS-ACE is based on the one-dimensional compressible thermo-fluid analysis code EVENT developed by Los Alamos National Laboratory (LANL). Calculational functions are added for the radioactive source term, heat transfer and radiation to cell and duct walls, and HEPA filter integrity. For this second edition of the report, TRANS-ACE has been improved by incorporating functions for an initial steady-state calculation to determine the flow rates, pressure drops and temperatures in the network before an accident-mode analysis. It has also been improved to include flow resistance calculations for the filters and blowers in the network and made easier to use by simplifying the input formats. This report explains the mathematical model of the TRANS-ACE code and serves as its user's manual. (author)

  12. Validation of the ASSERT subchannel code for prediction of CHF in standard and non-standard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Kiteley, J.C.; Carver, M.B.; Zhou, Q.N.

    1993-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting critical heat flux (CHF) at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is the only tool available to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries. 28 refs., 12 figs

  13. Validation of the ASSERT subchannel code: Prediction of critical heat flux in standard and nonstandard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1995-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of Canada deuterium uranium (CANDU) pressurized heavy water reactor fuel channels and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental database. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. The numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology are discussed. The evolutionary validation plan is also discussed and early validation exercises are summarized. More recent validation exercises in standard and nonstandard geometries are emphasized

  14. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to the outputs of more complex methods. They require only elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved with several molecular fragments in the second model. For 57 benzoic compounds whose computed quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
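Because the abstract's first model uses only the number of carbon and hydrogen atoms, a "desk calculation" reduces to evaluating a short correlation. The linear form and the coefficients below are hypothetical placeholders for illustration; the published model's actual form and coefficients are not given here.

```python
# Hypothetical element-count correlation in the spirit of the abstract's
# first model: oral LD50 (mg/kg) from carbon and hydrogen counts alone.
# Coefficients a, b, c are invented placeholders, not the published values.
def predicted_ld50(n_carbon, n_hydrogen, a=250.0, b=120.0, c=-40.0):
    """Toy linear correlation: LD50 in mg/kg from C and H atom counts."""
    return a + b * n_carbon + c * n_hydrogen

# Benzoic acid itself is C7H6O2, so the sketch evaluates to:
ld50_benzoic = predicted_ld50(7, 6)   # 250 + 120*7 - 40*6 = 850 mg/kg
```

The point is the workflow, not the numbers: counting atoms from a molecular formula is all the input such a desk method needs.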

  15. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Karahan, Aydin, E-mail: karahan@mit.ed [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States); Buongiorno, Jacopo [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States)

    2010-01-31

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO{sub 2}-PuO{sub 2} mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. 
Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors.

  16. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    International Nuclear Information System (INIS)

    Karahan, Aydin; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. 
Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors

  17. Development of a code MOGRA for predicting the migration of ground additions and its application to various land utilization areas

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. It consists of computational codes that are applicable to various evaluation target systems and can be used on personal computers. The computational code has a dynamic compartment analysis block at its core, and a graphical user interface (GUI) for model formation, computation parameter settings, and results display. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. The functionality of MOGRA is being verified by applying it in analyses of the migration rates of radioactive substances from the atmosphere to soils and plants and of flow rates into rivers. In this report, a hypothetical combination of land uses was assumed to check the function of MOGRA. The land uses consisted of cultivated lands, forests, uncultivated lands, an urban area, a river, and a lake. Each land use has its own internal model as a basic module. Also assumed was homogeneous contamination of the surface land from atmospheric deposition of 137Cs (1.0 Bq/m2). The system analyzed the dynamic changes of 137Cs concentrations in each compartment and the fluxes from one compartment to another. (author)
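The dynamic compartment analysis at MOGRA's core amounts to first-order transfers between land-use compartments. A minimal sketch of that idea for the 137Cs deposition scenario follows; the compartment names and transfer coefficients are invented for illustration and are not MOGRA's parameters.

```python
# Toy dynamic compartment model: 137Cs deposited on cultivated land migrates
# to a river and then a lake via first-order transfers, stepped with explicit
# Euler. Coefficients (1/day) are hypothetical placeholders.
K = {("cultivated", "river"): 0.002, ("river", "lake"): 0.05}
inventory = {"cultivated": 1.0, "river": 0.0, "lake": 0.0}   # Bq/m2 of 137Cs

dt = 1.0                                      # one-day time step
for _ in range(365):
    # Fluxes are evaluated from the start-of-step inventories, then applied.
    flux = {pair: K[pair] * inventory[pair[0]] for pair in K}
    for (src, dst), f in flux.items():
        inventory[src] -= f * dt
        inventory[dst] += f * dt

total = sum(inventory.values())               # transfers conserve activity
```

After a simulated year some activity has drained from the soil compartment into the river and accumulated in the lake, while the total inventory stays constant (no decay or removal terms in this sketch).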

  18. Predictions of Critical Heat Flux Using the ASSERT-PV Subchannel Code for a CANFLEX Variant Bundle

    International Nuclear Information System (INIS)

    Onder, Ebru Nihan; Leung, Laurence; Kim, Hung; Rao, Yanfei

    2009-01-01

    The ASSERT-PV subchannel code developed by AECL has been applied as a design-assist tool to the advanced CANDU reactor fuel bundle. Based primarily on the CANFLEX fuel bundle, several geometry changes (such as element sizes and pitch-circle diameters of various element rings) were examined to optimize the dryout power and pressure-drop performance of the new fuel bundle. An experiment was performed to obtain dryout power measurements for verification of the ASSERT-PV code predictions. It was carried out using an electrically heated, Refrigerant-134a-cooled fuel bundle string simulator. The axial power profile of the simulator was uniform, while the radial power profile of the element rings was varied, simulating profiles in bundles with various fuel compositions and burn-ups. Dryout power measurements are predicted closely by the ASSERT-PV code, particularly at low flows and low pressures, but are overpredicted at high flows and high pressures. The majority of the data shows that dryout powers are underpredicted at low inlet-fluid temperatures but overpredicted at high inlet-fluid temperatures

  19. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
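A 1-D sketch can illustrate the hybrid predict step the abstract describes: the right view is predicted from the left by a disparity shift plus a luminance gain (both constant here, as hypothetical stand-ins for the paper's block-based disparity compensation and luminance correction), and only the residual is stored. Adding the prediction back recovers the right view, which is what makes lifting-style prediction invertible for lossless coding.

```python
# Lifting-style joint coding of a stereo row: keep the left view, replace the
# right view by its prediction residual. Disparity and gain are fixed toy
# values; a real coder estimates them per block.
def predict_right(left, disparity=2, gain=0.9):
    """Predict the right view: shift by a fixed disparity, scale luminance."""
    return [gain * left[max(i - disparity, 0)] for i in range(len(left))]

def forward(left, right, disparity=2, gain=0.9):
    """Predict step: the residual is all that must be coded for the right view."""
    detail = [r - p for r, p in zip(right, predict_right(left, disparity, gain))]
    return left, detail

def inverse(left, detail, disparity=2, gain=0.9):
    """Reconstruct the right view by adding the prediction back."""
    return [d + p for d, p in zip(detail, predict_right(left, disparity, gain))]

left = [10.0, 12.0, 14.0, 16.0, 18.0]
right = [0.0, 0.0, 9.0, 10.8, 12.6]      # shifted, dimmed copy of the left row
_, detail = forward(left, right)
recovered = inverse(left, detail)
```

When the prediction is good, as here, the residual is mostly zeros and compresses far better than the raw right view.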

  20. Evaluation of CRUDTRAN code to predict transport of corrosion products and radioactivity in the PWR primary coolant system

    International Nuclear Information System (INIS)

    Lee, C.B.

    2002-01-01

    The CRUDTRAN code predicts the transport of corrosion products and their radio-activated nuclides, such as cobalt-58 and cobalt-60, in the PWR primary coolant system. In the CRUDTRAN code the PWR primary circuit is divided into three principal sections: the core, the coolant and the steam generator. The main driving force for corrosion product transport in the PWR primary coolant comes from the coolant temperature change throughout the system and the consequent change in corrosion product solubility. As the coolant temperature changes around the PWR primary circuit, the saturation status of the corrosion products in the coolant also changes, giving under-saturation in the steam generator and super-saturation in the core. The CRUDTRAN code was evaluated by comparison with the results of in-reactor loop tests simulating the PWR primary coolant system and with PWR plant data. The comparison showed that CRUDTRAN could predict variations of cobalt-58 and cobalt-60 radioactivity with time, plant cycle and coolant chemistry in a PWR plant. (author)
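The driving force the abstract describes can be shown with a three-region toy calculation: if corrosion-product solubility falls with temperature, coolant equilibrated at the bulk temperature is super-saturated in the hotter core and under-saturated in the cooler steam generator. The linear solubility law and the temperatures below are assumed for illustration, not CRUDTRAN's correlations.

```python
# Toy saturation check around a PWR primary loop. A positive value means the
# coolant is super-saturated in that region (corrosion products tend to
# deposit there); negative means under-saturated (deposits dissolve).
def solubility(temp_c):
    """Assumed linear corrosion-product solubility in ppb; falls with temperature."""
    return 5.0 - 0.01 * temp_c

regions = {"steam_generator": 290.0, "coolant": 310.0, "core": 330.0}
conc = solubility(regions["coolant"])        # coolant equilibrated at bulk T

saturation = {name: conc - solubility(t) for name, t in regions.items()}
```

The sign pattern reproduces the abstract's statement: deposition (and activation) in the core, dissolution in the steam generator, with the bulk coolant exactly saturated.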

  1. The representatives of the various intersubchannel transfer mechanisms and their effects on the predictions of the ASSERT-4 subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Tye, P [Ecole Polytechnique, Montreal, PQ (Canada)

    1994-12-31

    In this paper, the effects that the constitutive relations used to represent some of the intersubchannel transfer mechanisms have on the predictions of the ASSERT-4 subchannel code for horizontal flows are examined. In particular, the choices made in the representation of the gravity-driven phase separation phenomenon, which is unique to the horizontal fuel channel arrangement seen in CANDU reactors, are analyzed. This is done by comparing the predictions of the ASSERT-4 subchannel code with experimental data on void fraction, mass flow rate, and pressure drop obtained for two horizontal interconnected subchannels. ASSERT-4, the subchannel code used by the Canadian nuclear industry, uses an advanced drift flux model which permits departure from both thermal and mechanical equilibrium between the phases to be accurately modeled. In particular, ASSERT-4 contains models for the buoyancy effects which cause phase separation between adjacent subchannels in horizontal flows. This feature, which is of great importance in the subchannel analysis of CANDU reactors, is implemented in the constitutive relationship for the relative velocity required by the conservation equations. In order to isolate, as much as is physically possible, the different inter-subchannel transfer mechanisms, three different subchannel orientations are analyzed: the two subchannels at the same elevation, the high-void subchannel below the low-void subchannel, and the high-void subchannel above the low-void subchannel. It is observed that for all three subchannel orientations ASSERT-4 does a reasonably good job of predicting the experimental trends. However, certain modifications to the representation of the gravitational phase separation effects which seem to improve the overall predictions are suggested. (author). 12 refs., 12 figs.

  2. Perceptual instability in schizophrenia: Probing predictive coding accounts of delusions with ambiguous stimuli

    Directory of Open Access Journals (Sweden)

    Katharina Schmack

    2015-06-01

    Conclusion: Our results indicate an association between a weakened effect of sensory predictions in perceptual inference and delusions in schizophrenia. We suggest that attenuated predictive signaling during perceptual inference in schizophrenia may yield the experience of aberrant salience, thereby providing the starting point for the formation of delusions.

  3. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error) and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources, even though the predictions originate from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Predictive Coding Accelerates Word Recognition and Learning in the Early Stages of Language Development

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-01-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard…

  5. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

    Science.gov (United States)

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-04-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
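    The pipeline described above (dictionary learning and sparse coding for dimension reduction, then Adaboost for group classification) can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data, not the paper's configuration: the feature size, `n_components=10`, and classifier settings are arbitrary choices.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)
    # Synthetic stand-in: 40 subjects x 200 surface features
    # (real mTBM feature vectors are far larger than the subject count).
    X = rng.normal(size=(40, 200))
    y = rng.integers(0, 2, size=40)  # hypothetical binary group label

    # Learn a dictionary and re-express each subject as a sparse code,
    # reducing 200 raw features to 10 coefficients per subject.
    dico = DictionaryLearning(n_components=10, transform_algorithm="lasso_lars",
                              transform_alpha=0.1, random_state=0)
    codes = dico.fit_transform(X)

    # The low-dimensional sparse codes feed a boosted classifier.
    clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(codes, y)
    print(codes.shape)  # (40, 10)
    ```

    The point of the sketch is the shape change: classification operates on the 10 sparse coefficients per subject rather than the raw surface features.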

  6. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1986-01-01

    The Fusion Safety Program of EG&G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first wall structure design.

  7. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multichannel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.P.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1994-01-01

    The NUFREQ-NP (G.C. Park et al., NUREG/CR-3375, 1983; S.J. Peng et al., NUREG/CR-4116, 1984; S.J. Peng et al., Nucl. Sci. Eng. 88 (1988) 404-411) code was modified and set up at Westinghouse, USA, for mixed fuel type multichannel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF (R. Taleyarkhan et al., J. Heat Transfer 107 (February 1985) 175-181; NUREG/CR-2972, 1983), for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All 13 Peach Bottom-2 EOC-2/3 low flow stability tests (L.A. Carmichael and R.O. Niemi, EPRI NP-564, Project 1020-1, 1978; F.B. Woffinden and R.O. Niemi, EPRI NP-0972, Project 1020-2, 1981) were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. ((orig.))

  8. Validation of the ASSERT subchannel code: Prediction of CHF in standard and non-standard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1993-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries.

  9. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the man-machine interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for fast, accurate, user-friendly, reliable, detailed and comprehensive online core surveillance and prediction.

  10. The Linear Predictive Coding (LPC) Method with Hidden Markov Model (HMM) Classification for Arabic Words Spoken by Indonesian Speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification, using a Hidden Markov Model, the voice signal is analyzed and searched for the maximum-likelihood value that can be recognized. The parameters obtained from the modeling are used for comparison with the speech of Arabic speakers. From the classification test results, Hidden Markov Models with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data at a sampling frequency of 8,000 Hz, 80.2% for test data at 22,050 Hz, and 79% for test data at 44,100 Hz.
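    As a sketch of the LPC feature-extraction step, the coefficients can be estimated with the standard autocorrelation method and Levinson-Durbin recursion. The order and the synthetic test signal below are illustrative, not the settings used in the paper:

    ```python
    import numpy as np

    def lpc(x, order):
        """LPC coefficients a[0..order] (a[0] = 1) via the Levinson-Durbin
        recursion on the signal's autocorrelation sequence."""
        n = len(x)
        r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)])
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + a[1:i] @ r[i - 1:0:-1]
            k = -acc / err
            prev = a.copy()
            a[1:i] = prev[1:i] + k * prev[i - 1:0:-1]
            a[i] = k
            err *= 1.0 - k * k
        return a

    # Demo: recover the coefficients of a synthetic 2nd-order AR process
    # x[t] = 0.5*x[t-1] - 0.3*x[t-2] + noise, so ideally a ≈ [1, -0.5, 0.3].
    rng = np.random.default_rng(0)
    e = rng.normal(size=20000)
    x = np.zeros_like(e)
    for t in range(2, len(x)):
        x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
    print(lpc(x, 2))
    ```

    In a recognition pipeline, such coefficient vectors computed per frame would form the observation sequence fed to the HMM classifier.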

  11. Incorporation of the variation of conductivity with burnup into the stability predictions of the LAPUR code

    International Nuclear Information System (INIS)

    Escriba, A.; Munoz-cobo, J. L.; Merino, R.; Melara, J.; Albendea, M.

    2013-01-01

    In the field of nuclear safety, the analysis of the stability of boiling water reactors is one of the biggest challenges for researchers. The LAPUR code, one of the programs used by IBERDROLA, can be used for these calculations to obtain the stability parameters of the plant (decay ratio and frequency). With the collaboration of the TIN research group of the Polytechnic University of Valencia, a model of the loss of conductivity of uranium with burnup has been incorporated into LAPUR. This update allows the phenomenon to be reproduced in a more realistic way. This improvement has been validated and verified by comparing its results with reference values.

  12. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants.

    Science.gov (United States)

    Vieira, Lucas Maciel; Grativol, Clicia; Thiebaut, Flavia; Carvalho, Thais G; Hardoim, Pablo R; Hemerly, Adriana; Lifschitz, Sergio; Ferreira, Paulo Cavalcanti Gomes; Walter, Maria Emilia M T

    2017-03-04

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them, there is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge and, currently, the few computational methods available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in recent literature. As far as we know, there are no methods or workflows specially designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially expressed lincRNAs in sugarcane and maize plants exposed to pathogenic and beneficial microorganisms.
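    The SVM step of such a workflow can be illustrated in miniature. The actual PlantRNA_Sniffer workflow combines several bioinformatics tools and features; here, purely as a stand-in, dinucleotide frequencies serve as the sequence feature, and the sequences and labels are invented toy data:

    ```python
    import numpy as np
    from itertools import product
    from sklearn.svm import SVC

    KMERS = ["".join(p) for p in product("ACGT", repeat=2)]  # 16 dinucleotides

    def kmer_freqs(seq):
        """Overlapping dinucleotide frequency vector of a DNA sequence."""
        counts = np.array([sum(seq[i:i + 2] == k for i in range(len(seq) - 1))
                           for k in KMERS], dtype=float)
        total = counts.sum()
        return counts / total if total else counts

    # Toy, hypothetical training data (1 = lincRNA-like, 0 = coding-like).
    seqs = ["ATATATTTATAT", "TTTATATATTTA", "GCGCCGGCGCGC", "CGGCGCGCCGCG"]
    labels = [1, 1, 0, 0]
    X = np.array([kmer_freqs(s) for s in seqs])

    clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
    print(clf.score(X, labels))
    ```

    A real classifier would be trained on curated positive and negative transcript sets and evaluated on held-out data rather than on its own training sequences.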

  13. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants

    Directory of Open Access Journals (Sweden)

    Lucas Maciel Vieira

    2017-03-01

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them, there is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge and, currently, the few computational methods available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in recent literature. As far as we know, there are no methods or workflows specially designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies that allowed us to identify novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially expressed lincRNAs in sugarcane and maize plants exposed to pathogenic and beneficial microorganisms.

  14. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements

    KAUST Repository

    Guturu, H.

    2013-11-11

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.

  15. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements

    KAUST Repository

    Guturu, H.; Doxey, A. C.; Wenger, A. M.; Bejerano, G.

    2013-01-01

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.

  16. A parametric study of MELCOR Accident Consequence Code System 2 (MACCS2) Input Values for the Predicted Health Effect

    International Nuclear Information System (INIS)

    Kim, So Ra; Min, Byung Il; Park, Ki Hyun; Yang, Byung Mo; Suh, Kyung Suk

    2016-01-01

    The MELCOR Accident Consequence Code System 2 (MACCS2) has been the most widely used of the off-site consequence analysis codes worldwide. The MACCS2 code is used to estimate the radionuclide concentrations, radiological doses, health effects, and economic consequences that could result from hypothetical nuclear accidents. Most of the MACCS model parameter values are defined by the user, and those input parameters can make a significant impact on the output. A limited parametric study was performed to identify the relative importance of each input parameter in determining the predicted early and latent health effects in MACCS2. These results are not applicable to every nuclear accident scenario, because only a limited calculation was performed with Kori-specific data. The endpoints of the assessment were early and latent cancer risk in the exposed population; parametric studies for other endpoints, such as contamination level, absorbed dose, and economic cost, might therefore produce different results. Accident consequence assessment is important for decision making to minimize the health effects of radiation exposure; accordingly, sufficient parametric studies are required for the various endpoints and input parameters in further research.

  17. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    Science.gov (United States)

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, neural networks predicted correctly enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
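    The Box-Cox step mentioned above (normalizing poorly distributed catalytic parameters before network training) can be illustrated with SciPy. The sample values are invented stand-ins for skewed kcat-like data, not measurements from the study:

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical, strongly right-skewed kcat-like values (must be positive).
    kcat = np.array([0.2, 0.5, 1.1, 2.3, 8.7, 15.0, 40.0, 120.0])

    # Box-Cox searches for the power-transform parameter (lambda) that makes
    # the data as close to normally distributed as possible.
    transformed, lam = stats.boxcox(kcat)
    print("lambda:", round(lam, 3))
    print("skewness before/after:", round(stats.skew(kcat), 3),
          round(stats.skew(transformed), 3))
    ```

    Targets transformed this way are mapped back through the inverse transform after prediction, so the network trains on an approximately normal output distribution.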

  18. Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines

    Science.gov (United States)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by time history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.

  19. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    Science.gov (United States)

    Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua

    2015-12-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of the function-known genes, the mean accuracy of 99.99% and mean Matthews correlation coefficient value of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and the Funding from the State Key Laboratory of Bioelectronics of Southeast University.
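    Of the two methods named above, the Z curve has a compact definition: each successive base moves a cumulative three-dimensional walk along purine/pyrimidine, amino/keto, and weak/strong hydrogen-bonding axes, and coding and non-coding sequences trace characteristically different walks. A minimal sketch of the transform alone (the TN-curve component and the discrimination step are omitted):

    ```python
    def z_curve(seq):
        """Cumulative Z-curve coordinates of a DNA sequence.
        x: purines (A,G) vs pyrimidines (C,T)
        y: amino (A,C) vs keto (G,T)
        z: weak H-bonding (A,T) vs strong (G,C)"""
        x = y = z = 0
        points = []
        for base in seq:
            x += 1 if base in "AG" else -1
            y += 1 if base in "AC" else -1
            z += 1 if base in "AT" else -1
            points.append((x, y, z))
        return points

    print(z_curve("ATGC"))  # → [(1, 1, 1), (0, 0, 2), (1, -1, 1), (0, 0, 0)]
    ```

    Note that a sequence with equal counts of all four bases returns the walk to the origin, as in the example.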

  20. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    Science.gov (United States)

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been accumulated by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA study.
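    Among the evaluation criteria listed, the Matthews correlation coefficient is worth spelling out, since unlike plain accuracy it stays informative when the two classes are imbalanced:

    ```python
    import math

    def mcc(tp, tn, fp, fn):
        """Matthews correlation coefficient from a binary confusion matrix:
        +1 = perfect agreement, 0 = chance level, -1 = total disagreement."""
        num = tp * tn - fp * fn
        den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return num / den if den else 0.0

    print(mcc(50, 50, 0, 0))    # → 1.0
    print(mcc(25, 25, 25, 25))  # → 0.0
    ```

    The zero-denominator guard covers degenerate confusion matrices where a whole row or column is empty, for which MCC is conventionally reported as 0.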

  1. Predictions of the thermomechanical code ''RESTA'' compared with fuel element examinations after irradiation in the BR3 reactor

    International Nuclear Information System (INIS)

    Petitgrand, S.

    1980-01-01

    A large number of fuel rods have been irradiated in the small power plant BR3. Many of them have been examined in hot cells after irradiation, thus giving valuable experimental information. On the other hand, a thermomechanical code, named RESTA, has been developed by the C.E.A. to describe and predict the behaviour of a fuel pin in a PWR environment and in stationary conditions. The models used in that code derive chiefly from the C.E.A.'s own experience and are briefly reviewed in this paper. The comparison between prediction and experience has been performed for four power history classes: (1) moderate (average linear rating ≈ 20 kW m⁻¹) and short (≈ 300 days) rating, (2) moderate (≈ 20 kW m⁻¹) and long (≈ 600 days) rating, (3) high (25-30 kW m⁻¹) and long (≈ 600 days) rating and (4) very high (30-40 kW m⁻¹) and long (≈ 600 days) rating. Satisfactory agreement has been found between experimental and calculated results in all cases, concerning fuel structural change, fission gas release, pellet-clad interaction as well as clad permanent strain. (author)

  2. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    International Nuclear Information System (INIS)

    Yu Jia-Feng; Sui Tian-Xiang; Wang Ji-Hua; Wang Hong-Mei; Wang Chun-Ling; Jing Li

    2015-01-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of the function-known genes, the mean accuracy of 99.99% and mean Matthews correlation coefficient value of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. (special topic)

  3. Computational Account of Spontaneous Activity as a Signature of Predictive Coding.

    Directory of Open Access Journals (Sweden)

    Veronika Koren

    2017-01-01

    Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggests that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent state. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in the firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network

  4. Use of a commercial heat transfer code to predict horizontally oriented spent fuel rod surface temperatures

    International Nuclear Information System (INIS)

    Wix, S.D.; Koski, J.A.

    1993-03-01

    Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. A comparison between numerical calculations using commercial thermal analysis software packages and experimental data simulating a horizontally oriented spent fuel rod array was performed. Twelve cases were analyzed using air and helium for the fill gas, with three different heat dissipation levels. The numerically predicted temperatures are higher than the experimental data for all levels of heat dissipation with air as the fill gas. The temperature differences are 4 °C and 23 °C for the low heat dissipation and high heat dissipation, respectively. The temperature predictions using helium as a fill gas are lower for the low and medium heat dissipation levels, but higher at the high heat dissipation. The temperature differences are 1 °C and 6 °C for the low and medium heat dissipation, respectively. For the high heat dissipation level, the temperature predictions are 16 °C higher than the experimental data. Differences between the predicted and experimental temperatures can be attributed to several factors. These factors include experimental uncertainty in the temperature and heat dissipation measurements, actual convection effects not included in the model, and axial heat flow in the experimental data. This work demonstrates that horizontally oriented spent fuel rod surface temperature predictions can be made using existing commercial software packages. This work also shows that end effects will be increasingly important as the amount of dissipated heat increases.

  5. Pupil dilation indicates the coding of past prediction errors: Evidence for attentional learning theory.

    Science.gov (United States)

    Koenig, Stephan; Uengoer, Metin; Lachnit, Harald

    2018-04-01

    The attentional learning theory of Pearce and Hall () predicts more attention to uncertain cues that have caused a high prediction error in the past. We examined how the cue-elicited pupil dilation during associative learning was linked to such error-driven attentional processes. In three experiments, participants were trained to acquire associations between different cues and their appetitive (Experiment 1), motor (Experiment 2), or aversive (Experiment 3) outcomes. All experiments were designed to examine differences in the processing of continuously reinforced cues (consistently followed by the outcome) versus partially reinforced, uncertain cues (randomly followed by the outcome). We measured the pupil dilation elicited by the cues in anticipation of the outcome and analyzed how this conditioned pupil response changed over the course of learning. In all experiments, changes in pupil size complied with the same basic pattern: During early learning, consistently reinforced cues elicited greater pupil dilation than uncertain, randomly reinforced cues, but this effect gradually reversed to yield a greater pupil dilation for uncertain cues toward the end of learning. The pattern of data accords with the changes in prediction error and error-driven attention formalized by the Pearce-Hall theory. © 2017 The Authors. Psychophysiology published by Wiley Periodicals, Inc. on behalf of Society for Psychophysiological Research.
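    The Pearce-Hall dynamics described here can be sketched with a commonly used hybrid formalization, in which associability tracks the recent absolute prediction error and gates the update of associative strength. The learning-rate and decay constants below are arbitrary illustrative choices, not fitted values from the study:

    ```python
    import random

    def pearce_hall_step(alpha, v, outcome, kappa=0.1, eta=0.3):
        """One trial: delta is the prediction error, alpha (associability,
        i.e. attention to the cue) drifts toward |delta|, and the associative
        strength v is updated in proportion to alpha."""
        delta = outcome - v
        alpha = eta * abs(delta) + (1 - eta) * alpha
        v = v + kappa * alpha * delta
        return alpha, v

    random.seed(1)
    alpha_c, v_c = 1.0, 0.0  # continuously reinforced cue
    alpha_p, v_p = 1.0, 0.0  # partially (50%) reinforced cue
    for _ in range(200):
        alpha_c, v_c = pearce_hall_step(alpha_c, v_c, 1.0)
        alpha_p, v_p = pearce_hall_step(alpha_p, v_p, float(random.random() < 0.5))

    # After training, the uncertain cue retains high associability (attention),
    # while the consistently reinforced cue's associability decays toward zero.
    print(round(alpha_c, 3), round(alpha_p, 3))
    ```

    The simulation reproduces the qualitative pattern the abstract reports late in learning: sustained attention (here, alpha) to the partially reinforced cue and declining attention to the continuously reinforced one.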

  6. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc; Impact de modeles de melange implantes dans le code de thermohydraulique Thyc sur les predictions de flux critique

    Energy Technology Data Exchange (ETDEWEB)

    Banner, D

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 database has been used for this study. In a first step, three thermal-hydraulic mixing models have been implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented by growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt was made to substitute the correlations with another statistical approach (a pseudo-cubic thin-plate spline method). It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs.

  7. Bit-depth scalable video coding with new inter-layer prediction

    Directory of Open Access Journals (Sweden)

    Chiang Jui-Chiu

    2011-01-01

    The rapid advances in the capture and display of high-dynamic range (HDR) image/video content make it imperative to develop efficient compression techniques to deal with the huge amounts of HDR data. Since HDR display devices are not yet widespread, compatibility problems should be considered when rendering HDR content on conventional display devices. To this end, in this study, we propose three H.264/AVC-based bit-depth scalable video-coding schemes, called the LH scheme (low bit-depth to high bit-depth), the HL scheme (high bit-depth to low bit-depth), and the combined LH-HL scheme. The schemes efficiently exploit the high correlation between the high and the low bit-depth layers on the macroblock (MB) level. Experimental results demonstrate that the HL scheme outperforms the other two schemes in some scenarios. Moreover, it achieves up to 7 dB improvement over the simulcast approach when the high and low bit-depth representations are 12 bits and 8 bits, respectively.
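    The core of inter-layer bit-depth prediction can be illustrated with a toy example. The left-shift scaling rule below is a common simplification for mapping an 8-bit base layer to a 12-bit enhancement layer, not the exact prediction tool set of the schemes in this record; sample values are invented.

```python
# Minimal sketch of inter-layer bit-depth prediction (LH direction):
# an 8-bit base-layer block predicts the 12-bit enhancement block by a
# left shift, and only the (small) residual needs to be coded.

def predict_high_from_low(base_8bit, shift=4):   # 12 - 8 = 4 bits
    return [p << shift for p in base_8bit]

def residual(hdr_12bit, base_8bit):
    pred = predict_high_from_low(base_8bit)
    return [h - p for h, p in zip(hdr_12bit, pred)]

base = [10, 200, 255]          # 8-bit samples (toy data)
hdr = [165, 3210, 4095]        # co-located 12-bit samples (toy data)
res = residual(hdr, base)
print(res)  # residuals are far smaller than the raw 12-bit samples
```

The decoder reverses this: it shifts the decoded base layer up and adds the transmitted residual to reconstruct the high bit-depth samples.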

  8. Prediction of palladium-103 production using the Monte Carlo code MCNPX

    International Nuclear Information System (INIS)

    Mahmodi, Mahbobeh; Sadeghi, Mahdi; Tenreiro, Claudio

    2013-01-01

    Highlights: ► The production of ¹⁰³Pd activity in 15 h of irradiation at 200 μA was calculated to be 685 mCi. ► MCNPX was used to calculate the energy distribution of the proton flux on the Rh target. ► The activity based on the MCNPX was calculated to be 674.58 mCi. - Abstract: The radionuclide ¹⁰³Pd (T₁/₂ = 16.991 d; decays almost exclusively by EC to ¹⁰³ᵐRh, T₁/₂ = 56.114 min) has been of great interest in prostate and eye cancer therapy due to its suitable half-life and decay characteristics. ¹⁰³Pd has been produced by proton irradiation of a ¹⁰³Rh target through the ¹⁰³Rh(p,n)¹⁰³Pd reaction. In this paper, the Monte Carlo simulation code (MCNPX) was used to calculate the energy distribution of the proton flux on the Rh target. The activity based on the MCNPX was calculated to be 674.58 mCi. Good agreement between the theoretical and the experimental data of the ¹⁰³Pd activity and the activity estimation based on MCNPX calculation was observed. This study demonstrated that MCNPX provides a suitable tool for the simulation of radionuclide production using proton irradiation.

  9. Results and code prediction comparisons of lithium-air reaction and aerosol behavior tests

    International Nuclear Information System (INIS)

    Jeppson, D.W.

    1986-03-01

    The Hanford Engineering Development Laboratory (HEDL) Fusion Safety Support Studies include evaluation of potential safety and environmental concerns associated with the use of liquid lithium as a breeder and coolant for fusion reactors. Potential mechanisms for volatilization and transport of radioactive metallic species associated with breeder materials are of particular interest. Liquid lithium pool-air reaction and aerosol behavior tests were conducted with lithium masses up to 100 kg within the 850-m³ containment vessel in the Containment Systems Test Facility. Lithium-air reaction rates, aerosol generation rates, aerosol behavior and characterization, as well as containment atmosphere temperature and pressure responses were determined. Pool-air reaction and aerosol behavior test results were compared with computer code calculations for reaction rates, containment atmosphere response, and aerosol behavior. The volatility of potentially radioactive metallic species from a lithium pool-air reaction was measured. The response of various aerosol detectors to the aerosol generated was determined. Liquid lithium spray tests in air and in nitrogen atmospheres were conducted with lithium temperatures of about 427 and 650 °C. Lithium reaction rates, containment atmosphere response, and aerosol generation and characterization were determined for these spray tests.

  10. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    International Nuclear Information System (INIS)

    Banner, D.

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 database has been used for this study. In a first step, three thermal-hydraulic mixing models have been implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented by growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt was made to substitute the correlations with another statistical approach (a pseudo-cubic thin-plate spline method). It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs.

  11. Development and verification test of integral reactor major components - Development of MCP impeller design, performance prediction code and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung Kyoon; Oh, Woo Hyoung; Song, Jae Wook [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-03-01

    The present study is aimed at developing a computational code for the design and performance prediction of an axial-flow pump. The proposed performance prediction method, based on the streamline curvature method, is tested against measurements on a model axial-flow pump. The preliminary design is made by using the ideal velocity triangles at inlet and exit, and the three-dimensional blade shape is calculated by employing the free vortex design method. Then the detailed blading design is carried out by using an experimental database of double circular arc cambered hydrofoils. To computationally determine the design incidence, deviation, blade camber, solidity and stagger angle, a number of correlation equations are developed from the experimental database and a theoretical formula for the lift coefficient is adopted. A total of 8 equations are solved iteratively using an under-relaxation factor. An experimental measurement is conducted under a non-cavitating condition to obtain the off-design performance curve, and a cavitation test is also carried out by reducing the suction pressure. The experimental results compare very satisfactorily with the predictions by the streamline curvature method. 28 refs., 26 figs., 11 tabs. (Author)
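    The "solved iteratively using an under-relaxation factor" step has a standard shape. The record does not give the 8 coupled correlation equations, so a two-equation toy system stands in for them below; the solver skeleton is the generic fixed-point scheme, not the authors' code.

```python
# Hedged sketch of fixed-point iteration with under-relaxation:
# x <- (1 - omega) * x + omega * update(x). The toy system is invented.
import math

def solve_under_relaxed(update, x0, omega=0.5, tol=1e-8, max_iter=500):
    x = list(x0)
    for _ in range(max_iter):
        x_new = update(x)
        # blend old and new iterates to damp oscillations
        x = [(1 - omega) * xi + omega * ni for xi, ni in zip(x, x_new)]
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            return x
    raise RuntimeError("did not converge")

# toy coupled pair standing in for the 8 correlation equations:
# x = cos(y)/2, y = sin(x)/2
sol = solve_under_relaxed(lambda v: [math.cos(v[1]) / 2, math.sin(v[0]) / 2],
                          [0.0, 0.0])
print([round(s, 4) for s in sol])
```

A smaller omega slows convergence but stabilizes strongly coupled equations, which is why such factors are common in turbomachinery correlation solvers.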

  12. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-01-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  13. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    Energy Technology Data Exchange (ETDEWEB)

    Banas, A.O.; Carver, M.B. [Chalk River Laboratories (Canada); Unrau, D. [Univ. of Toronto (Canada)

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  14. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate representation such as digit, code, signal, vector, tree, graph, network, and so on. In this paper, we further implement an ontology of DNA As Signal, by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping, and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only achieves equivalent or superior performance, but also enriches the tools and the knowledge library of computational biology by extending the domain from character/string to diverse areas. The evaluation results validate the success of the character-analysis-free technique for improved performances in comparative gene structure prediction.
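    The signal comparison step can be sketched concretely. The nucleotide-to-number encoding and the plain dynamic time warping (DTW) recurrence below are illustrative stand-ins; the record states Signalign uses a modified DTW method whose details are not given here.

```python
# Hedged sketch: compare two numeric "DNA signals" with classic DTW.
# The A/C/G/T -> number mapping is an assumption of this sketch.

def dtw_distance(s, t):
    n, m = len(s), len(t)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # best of insertion, deletion, and match moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

encode = {"A": 1, "C": 2, "G": 3, "T": 4}     # assumed numeric mapping
sig = lambda seq: [encode[b] for b in seq]
print(dtw_distance(sig("ACGTT"), sig("ACGT")))  # warping absorbs the extra T
```

Unlike plain character alignment, the warping path tolerates local stretching, which is the property that makes signal-domain comparison attractive for gene structures of unequal length.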

  15. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For the study, the behavior mechanism of corrosion products in the primary system of Kori unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed from domestic plant data with the CRUDTRAN code. The predicted radioactive nuclide inventory in the primary system is expected to serve as baseline data for estimating the volume of radioactive waste when decommissioning nuclear power plants, an important criterion in classifying the level of radioactive waste and computing its quantity. The results are also expected to help reduce the radiation exposure of workers performing maintenance and repairs in high-radiation areas and to guide the selection of decontamination and decommissioning processes for the primary system. In future research, it is planned to extend the source-term assessment to other plant types such as CANDU and OPR-1000, in addition to Westinghouse-type nuclear plants.

  16. Annotating Diseases Using Human Phenotype Ontology Improves Prediction of Disease-Associated Long Non-coding RNAs.

    Science.gov (United States)

    Le, Duc-Hau; Dao, Lan T M

    2018-05-23

    Recently, many long non-coding RNAs (lncRNAs) have been identified and their biological functions have been characterized; however, our understanding of their underlying molecular mechanisms related to disease is still limited. To overcome the limitation in experimentally identifying disease-lncRNA associations, computational methods have been proposed as a powerful tool to predict such associations. These methods are usually based on the similarities between diseases or lncRNAs, since it was reported that similar diseases are associated with functionally similar lncRNAs. Therefore, prediction performance is highly dependent on how well the similarities can be captured. Previous studies have calculated the similarity between two diseases by mapping each disease exactly to a single Disease Ontology (DO) term and then using a semantic similarity measure to calculate the similarity between them. However, the problem with this approach is that a disease can be described by more than one DO term. Until now, there has been no annotation database of DO terms for diseases, only for genes. In contrast, Human Phenotype Ontology (HPO) is designed to fully annotate human disease phenotypes. Therefore, in this study, we constructed disease similarity networks/matrices using HPO instead of DO. Then, we used these networks/matrices as inputs to two representative machine learning-based and network-based ranking algorithms, that is, regularized least square and heterogeneous graph-based inference, respectively. The results showed that the prediction performance of the two algorithms on HPO-based networks/matrices is better than that on DO-based ones. In addition, our method can predict 11 novel cancer-associated lncRNAs, which are supported by literature evidence. Copyright © 2018 Elsevier Ltd. All rights reserved.
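    The regularized least square (RLS) ranking step named above has a compact standard form: given a similarity (kernel) matrix K and a 0/1 label vector y marking known disease associations, score every disease as f = K(K + λI)⁻¹y. The toy similarity values below are invented; this is the textbook RLS ranker, not necessarily the exact variant of the paper.

```python
# Hedged sketch of RLS ranking on a disease similarity matrix.
# solve() is a small Gauss-Jordan routine so the sketch has no dependencies.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def rls_scores(K, y, lam=1.0):
    n = len(K)
    A = [[K[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = solve(A, y)                       # (K + lam I)^-1 y
    return [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]

K = [[1.0, 0.8, 0.1],     # invented disease-disease similarities
     [0.8, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
y = [1.0, 0.0, 0.0]       # disease 0 is a known association for some lncRNA
scores = rls_scores(K, y)
print(scores)             # disease 1 (similar to 0) outranks dissimilar disease 2
```

This makes concrete why the quality of the similarity matrix dominates performance: the ranking is nothing but similarity-weighted propagation of the known labels.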

  17. Positive Predictive Values of International Classification of Diseases, 10th Revision Coding Algorithms to Identify Patients With Autosomal Dominant Polycystic Kidney Disease

    Directory of Open Access Journals (Sweden)

    Vinusha Kalatharan

    2016-12-01

    Background: International Classification of Diseases, 10th Revision (ICD-10) codes for autosomal dominant polycystic kidney disease (ADPKD) are used within several administrative health care databases. It is unknown whether these codes identify patients who meet strict clinical criteria for ADPKD. Objective: The objectives of this study are (1) to determine whether different ICD-10 coding algorithms identify adult patients who meet strict clinical criteria for ADPKD as assessed through medical chart review and (2) to assess the number of patients identified with different ADPKD coding algorithms in Ontario. Design: Validation study of health care database codes, and prevalence. Setting: Ontario, Canada. Patients: For the chart review, 201 adult patients with hospital encounters between April 1, 2002, and March 31, 2014, assigned either ICD-10 code Q61.2 or Q61.3. Measurements: This study measured the positive predictive value of the ICD-10 coding algorithms and the number of Ontarians identified with different coding algorithms. Methods: We manually reviewed a random sample of medical charts in London, Ontario, Canada, and determined whether or not ADPKD was present according to strict clinical criteria. Results: The presence of either ICD-10 code Q61.2 or Q61.3 in a hospital encounter had a positive predictive value of 85% (95% confidence interval [CI], 79%-89%) and identified 2981 Ontarians (0.02% of the Ontario adult population). The presence of ICD-10 code Q61.2 in a hospital encounter had a positive predictive value of 97% (95% CI, 86%-100%) and identified 394 adults in Ontario (0.003% of the Ontario adult population). Limitations: (1) We could not calculate other measures of validity; (2) the coding algorithms do not identify patients without hospital encounters; and (3) coding practices may differ between hospitals. Conclusions: Most patients with ICD-10 code Q61.2 or Q61.3 assigned during their hospital encounters have ADPKD according to strict clinical criteria.
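    The central statistic of a validation study like this one is easy to reproduce: PPV is the fraction of code-positive charts that truly have the disease, with a confidence interval around that proportion. The counts below are illustrative (chosen to give roughly 85% on 201 charts), not the study's actual tabulation, and the Wilson score interval is one common CI choice, not necessarily the one the authors used.

```python
# Hedged sketch: PPV = true positives / all code-positive charts,
# with a 95% Wilson score interval. Counts are invented.
import math

def ppv_wilson(tp, n, z=1.96):
    p = tp / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

p, lo, hi = ppv_wilson(tp=171, n=201)   # assumed counts, ~85% PPV
print(f"PPV = {p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note what PPV alone cannot tell you, echoing the study's first limitation: without chart-reviewing code-negative patients there is no sensitivity or specificity, only the reliability of a positive code.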

  18. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA–interacting proteins by making full use of the known lncRNA–protein interactions. Leave-one-out cross validation (LOOCV) tests show that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA–interacting proteins.
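    The general idea behind bipartite network inference methods of this family is resource propagation across the two node sets: resource starts at a target lncRNA, spreads to its proteins, back to lncRNAs, and back to proteins, each time divided by node degree. This sketch follows that generic scheme; the exact propagation weights of LPBNI may differ, and the tiny interaction matrix is invented.

```python
# Hedged sketch of degree-normalized resource propagation on a bipartite
# lncRNA-protein graph, the general mechanism behind methods like LPBNI.

def bipartite_scores(A, target_lnc):
    """A[i][j] = 1 if lncRNA i interacts with protein j; score proteins."""
    n_lnc, n_prot = len(A), len(A[0])
    lnc_deg = [sum(row) for row in A]
    prot_deg = [sum(A[i][j] for i in range(n_lnc)) for j in range(n_prot)]
    # phase 1: target lncRNA spreads unit resource to its proteins
    res_prot = [A[target_lnc][j] / lnc_deg[target_lnc] for j in range(n_prot)]
    # phase 2: proteins spread back to all lncRNAs
    res_lnc = [sum(A[i][j] * res_prot[j] / prot_deg[j] for j in range(n_prot))
               for i in range(n_lnc)]
    # phase 3: lncRNAs return resource to proteins -> candidate ranking
    return [sum(A[i][j] * res_lnc[i] / lnc_deg[i] for i in range(n_lnc))
            for j in range(n_prot)]

A = [[1, 1, 0],     # invented interaction matrix: 3 lncRNAs x 3 proteins
     [1, 0, 1],
     [0, 1, 0]]
scores = bipartite_scores(A, 0)
print(scores)       # proteins reachable via shared neighbours score highest
```

Because each spreading step is degree-normalized, the total resource is conserved, and hub proteins do not automatically dominate the ranking.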

  19. Comparison of the Aerospace Systems Test Reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1983-01-01

    This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10⁴ s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code.

  20. Comparison of the aerospace systems test reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1984-01-01

    This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10⁴ s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code. (author)

  1. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491

  2. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants.

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders-attention to detail and systemizing-may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  3. Conservation of σ28-Dependent Non-Coding RNA Paralogs and Predicted σ54-Dependent Targets in Thermophilic Campylobacter Species.

    Directory of Open Access Journals (Sweden)

    My Thanh Le

    Assembly of flagella requires strict hierarchical and temporal control via flagellar sigma and anti-sigma factors, regulatory proteins and the assembly complex itself, but to date non-coding RNAs (ncRNAs) have not been described to regulate genes directly involved in flagellar assembly. In this study we have investigated the possible role of two ncRNA paralogs (CjNC1, CjNC4) in flagellar assembly and gene regulation of the diarrhoeal pathogen Campylobacter jejuni. CjNC1 and CjNC4 are 37/44 nt identical and predicted to target the 5' untranslated region (5' UTR) of genes transcribed from the flagellar sigma factor σ54. Orthologs of the σ54-dependent 5' UTRs and ncRNAs are present in the genomes of other thermophilic Campylobacter species, and transcription of CjNC1 and CjNC4 is dependent on the flagellar sigma factor σ28. Surprisingly, inactivation and overexpression of CjNC1 and CjNC4 did not affect growth, motility or flagella-associated phenotypes such as autoagglutination. However, CjNC1 and CjNC4 were able to mediate sequence-dependent, but Hfq-independent, partial repression of fluorescence of predicted target 5' UTRs in an Escherichia coli-based GFP reporter gene system. This hints towards a subtle role for the CjNC1 and CjNC4 ncRNAs in post-transcriptional gene regulation in thermophilic Campylobacter species, and suggests that the currently used phenotypic methodologies are insufficiently sensitive to detect such subtle phenotypes. The lack of a role of Hfq in the E. coli GFP-based system indicates that the CjNC1 and CjNC4 ncRNAs may mediate post-transcriptional gene regulation in ways that do not conform to the paradigms obtained from the Enterobacteriaceae.

  4. Conservation of σ28-Dependent Non-Coding RNA Paralogs and Predicted σ54-Dependent Targets in Thermophilic Campylobacter Species

    Science.gov (United States)

    Le, My Thanh; van Veldhuizen, Mart; Porcelli, Ida; Bongaerts, Roy J.; Gaskin, Duncan J. H.; Pearson, Bruce M.; van Vliet, Arnoud H. M.

    2015-01-01

    Assembly of flagella requires strict hierarchical and temporal control via flagellar sigma and anti-sigma factors, regulatory proteins and the assembly complex itself, but to date non-coding RNAs (ncRNAs) have not been described to regulate genes directly involved in flagellar assembly. In this study we have investigated the possible role of two ncRNA paralogs (CjNC1, CjNC4) in flagellar assembly and gene regulation of the diarrhoeal pathogen Campylobacter jejuni. CjNC1 and CjNC4 are 37/44 nt identical and predicted to target the 5' untranslated region (5' UTR) of genes transcribed from the flagellar sigma factor σ54. Orthologs of the σ54-dependent 5' UTRs and ncRNAs are present in the genomes of other thermophilic Campylobacter species, and transcription of CjNC1 and CjNC4 is dependent on the flagellar sigma factor σ28. Surprisingly, inactivation and overexpression of CjNC1 and CjNC4 did not affect growth, motility or flagella-associated phenotypes such as autoagglutination. However, CjNC1 and CjNC4 were able to mediate sequence-dependent, but Hfq-independent, partial repression of fluorescence of predicted target 5' UTRs in an Escherichia coli-based GFP reporter gene system. This hints towards a subtle role for the CjNC1 and CjNC4 ncRNAs in post-transcriptional gene regulation in thermophilic Campylobacter species, and suggests that the currently used phenotypic methodologies are insufficiently sensitive to detect such subtle phenotypes. The lack of a role of Hfq in the E. coli GFP-based system indicates that the CjNC1 and CjNC4 ncRNAs may mediate post-transcriptional gene regulation in ways that do not conform to the paradigms obtained from the Enterobacteriaceae. PMID:26512728

  5. Accuracy of Single Frequency GPS Observations Processing In Near Real-time With Use of Code Predicted Products

    Science.gov (United States)

    Wielgosz, P. A.

    This year, a system of active geodetic GPS permanent stations is going to be established in Poland. This system should provide GPS observations for a wide spectrum of users; in particular, it will be a great opportunity for surveyors. Many surveyors still use cheaper, single frequency receivers. This paper focuses on the processing of single frequency GPS observations only. The ionosphere plays an important role in processing such observations, so we concentrated on its influence on the positional coordinates. Twenty consecutive days of GPS data from the year 2001 were processed to analyze the accuracy of the derived three-dimensional relative vector position between GPS stations. Observations from two Polish EPN/IGS stations, BOGO and JOZE, were used. In addition, a new test station, IGIK, was created. In this paper, the results of single frequency GPS observation processing in near real-time are presented. Baselines of 15, 27 and 42 kilometers and sessions of 1, 2, 3, 4, and 6 hours were processed. For the processing we used CODE (Centre for Orbit Determination in Europe, Bern, Switzerland) predicted products: orbits and ionosphere information. These products are available in real time and enable near real-time processing. The Bernese v. 4.2 software for Linux in BPE (Bernese Processing Engine) mode was used. The results are shown with reference to the dual frequency weekly solution (the best solution). The obtained accuracy is presented as a function of observation time and baseline length for single frequency GPS observations.

  6. Probable mode prediction for H.264 advanced video coding P slices using removable SKIP mode distortion estimation

    Science.gov (United States)

    You, Jongmin; Jeong, Jechang

    2010-02-01

    The H.264/AVC (advanced video coding) standard is used in a wide variety of applications, including digital broadcasting and mobile applications, because of its high compression efficiency. The variable block mode scheme in H.264/AVC contributes much to this high compression efficiency but causes a mode selection problem. In general, rate-distortion optimization (RDO) is the optimal mode selection strategy, but it is computationally intensive. For this reason, the H.264/AVC encoder requires a fast mode selection algorithm for use in applications that require low-power and real-time processing. A probable mode prediction algorithm for the H.264/AVC encoder is proposed. To reduce the computational complexity of RDO, the proposed method selects probable modes among all allowed block modes using removable SKIP mode distortion estimation. The removable SKIP mode distortion is used to estimate whether or not a further divided block mode is appropriate for a macroblock; it is calculated using a no-motion reference block with only a few computations. The proposed method then reduces complexity by performing the RDO process only for the probable modes. Experimental results show that the proposed algorithm can reduce encoding time by an average of 55.22% without significant visual quality degradation or bit rate increase.
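    The core idea above — using the distortion of a no-motion (SKIP-like) reference block to decide whether finer partitions deserve a full RDO pass — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm; the `threshold` value and the returned mode lists are hypothetical stand-ins.

```python
import numpy as np

def probable_modes(mb, ref_mb, threshold=512):
    """Illustrative sketch of SKIP-distortion-based mode pruning.

    NOTE: `threshold` and the mode lists are hypothetical, not values
    from the paper.
    """
    # Distortion of the no-motion (SKIP-like) prediction: SAD between
    # the current 16x16 macroblock and the co-located reference block.
    skip_distortion = int(np.abs(mb.astype(int) - ref_mb.astype(int)).sum())
    if skip_distortion <= threshold:
        # Low distortion: the area is nearly static, so large-partition
        # modes are "probable" and RDO is run only on them.
        return ["SKIP", "16x16"], skip_distortion
    # High distortion: further divided partitions may pay off, so the
    # RDO search must also cover the sub-partitioned modes.
    return ["16x16", "16x8", "8x16", "8x8"], skip_distortion
```

    An encoder would then run the expensive RDO evaluation only over the returned mode list instead of every allowed block mode.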

  7. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2018-03-19

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  8. Systemizers are better code-breakers: Self-reported systemizing predicts code-breaking performance in expert hackers and naïve participants

    Directory of Open Access Journals (Sweden)

    India Harvey

    2016-05-01

    Full Text Available Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders - attention to detail and systemizing - may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e. crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001; Baron-Cohen et al., 2003). They were also tested with behavioural tasks involving code-breaking and a control task involving security x-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the x-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  9. A comparison of the radiological doses to man predicted by the MILDOS, UDAD, and UMMAC assessment codes

    International Nuclear Information System (INIS)

    Polehn, J.L.; Coleman, J.H.

    1984-01-01

    The outputs of three computer codes used to estimate the dose to man from uranium mining and milling operations are compared. UMMAC was developed by the Tennessee Valley Authority. UDAD (version 9) was developed at the Argonne National Laboratory. Version 4 of UDAD was modified by the Nuclear Regulatory Commission and renamed MILDOS. Dose estimates vary widely between the three codes. However, it appears that any of the codes can be used if care is taken in the analysis of the output

  10. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  11. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the particular geometry considered, as modelled in the RELAP5 code, is described. For this purpose a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code were compared with those obtained from our computer program, which is based on three types of prediction methods: the local-conditions, F-factor and boiling-length-average approaches.

  12. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model: the governing equations, the formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual describing simulation procedures, input data preparation, output, and example test cases

  13. Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change

    Directory of Open Access Journals (Sweden)

    Uzilov Andrew V

    2006-03-01

    Full Text Available Abstract Background Non-coding RNAs (ncRNAs) have a multitude of roles in the cell, many of which remain to be discovered. However, it is difficult to detect novel ncRNAs in biochemical screens. To advance biological knowledge, computational methods that can accurately detect ncRNAs in sequenced genomes are therefore desirable. The increasing number of genomic sequences provides a rich dataset for computational comparative sequence analysis and detection of novel ncRNAs. Results Here, Dynalign, a program for predicting secondary structures common to two RNA sequences on the basis of minimizing folding free energy change, is utilized as a computational ncRNA detection tool. The Dynalign-computed optimal total free energy change, which scores the structural alignment and the free energy change of folding into a common structure for two RNA sequences, is shown to be an effective measure for distinguishing ncRNA from randomized sequences. To classify an input sequence pair as ncRNA, its total free energy change can either be compared with the total free energy changes of a set of control sequence pairs, or be used in combination with sequence length and nucleotide frequencies as input to a classification support vector machine. The latter method is much faster, but slightly less sensitive at a given specificity. Additionally, the classification support vector machine method is shown to be sensitive and specific on genomic ncRNA screens of two different Escherichia coli and Salmonella typhi genome alignments, in which many ncRNAs are known. The Dynalign computational experiments are also compared with two other ncRNA detection programs, RNAz and QRNA. Conclusion The Dynalign-based support vector machine method is more sensitive for known ncRNAs in the test genomic screens than RNAz and QRNA. Additionally, both Dynalign-based methods are more sensitive than RNAz and QRNA at low sequence pair identities. Dynalign can be used as a
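    The first classification scheme described above — comparing the total free energy change of an input pair against a set of control sequence pairs — can be sketched as a simple empirical-p-value test. This is an illustration, not Dynalign's implementation; the `alpha` cutoff is a hypothetical parameter.

```python
def classify_ncrna(total_dG, control_dGs, alpha=0.05):
    """Sketch of control-set comparison for ncRNA classification.

    total_dG: Dynalign-style optimal total free energy change of the
    input sequence pair (more negative = more favourable).
    control_dGs: total free energy changes of control (e.g. shuffled)
    sequence pairs. NOTE: `alpha` is a hypothetical cutoff.
    """
    # Empirical p-value: fraction of control pairs scoring at least as
    # favourably (a total free energy change at least as low).
    n_as_low = sum(1 for c in control_dGs if c <= total_dG)
    p_value = (n_as_low + 1) / (len(control_dGs) + 1)
    return p_value <= alpha, p_value
```

    The second, faster scheme in the abstract replaces this comparison with a support vector machine trained on the total free energy change plus sequence length and nucleotide frequencies.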

  14. Assessment of the prediction capability of the TRANSURANUS fuel performance code on the basis of power ramp tested LWR fuel rods

    International Nuclear Information System (INIS)

    Pastore, G.; Botazzoli, P.; Di Marcello, V.; Luzzi, L.

    2009-01-01

    The present work is aimed at assessing the prediction capability of the TRANSURANUS code for the performance analysis of LWR fuel rods under power ramp conditions. The analysis refers to all the power ramp tested fuel rods belonging to the Studsvik PWR Super-Ramp and BWR Inter-Ramp Irradiation Projects, and is focused on some integral quantities (i.e., burn-up, fission gas release, cladding creep-down and failure due to pellet cladding interaction) through a systematic comparison between the code predictions and the experimental data. To this end, a suitable setup of the code is established on the basis of previous works. In addition, following indications from the literature, a sensitivity study is carried out, which considers the 'ITU model' for fission gas burst release and modifications in the treatment of fuel solid swelling and cladding stress corrosion cracking. The performed analyses allow the identification of some issues that could be useful for the future development of the code. Keywords: Light Water Reactors, Fuel Rod Performance, Power Ramps, Fission Gas Burst Release, Fuel Swelling, Pellet Cladding Interaction, Stress Corrosion Cracking

  15. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  16. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
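    The abstract does not reproduce the derived efficiency formula, which also accounts for the CTSS job mix. As a generic illustration of how a serial fraction and a multitasking overhead enter such an estimate, an Amdahl's-law-style sketch might look like the following; all parameter names are hypothetical and this is not the paper's formula.

```python
def multitasking_efficiency(serial_fraction, n_cpus, overhead=0.0):
    """Generic Amdahl's-law-style illustration with an overhead term.

    NOT the formula derived in the paper (which also models the
    time-sharing job mix); parameter names are hypothetical.
    """
    # Parallel speedup limited by the serial fraction plus a fixed
    # relative multitasking overhead.
    speedup = 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus + overhead)
    # Efficiency is speedup per processor.
    return speedup / n_cpus
```

    With no serial work and no overhead the efficiency is 1.0; any overhead or serial fraction pulls it below that, qualitatively matching the dependence on task length and job mix described above.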

  17. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori nuclear power plant (NPP) Unit 1 and the increasing number of deteriorated NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor-type NPP. In addition, to check the possibility of applying the CRUDTRAN code to a Canadian Deuterium Uranium (CANDU)-type NPP, its applicability was assessed using collected domestic on-site data. With the assessment results, it was possible to predict trends according to operating cycles. Values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposure of operators in high-radiation areas and to predict decommissioning processes in the primary system.

  18. Positive predictive values of the International Classification of Disease, 10th edition diagnoses codes for diverticular disease in the Danish National Registry of Patients

    Directory of Open Access Journals (Sweden)

    Rune Erichsen

    2010-10-01

    Full Text Available Rune Erichsen1, Lisa Strate2, Henrik Toft Sørensen1, John A Baron3. 1Department of Clinical Epidemiology, Aarhus University Hospital, Denmark; 2Division of Gastroenterology, University of Washington, Seattle, WA, USA; 3Departments of Medicine and of Community and Family Medicine, Dartmouth Medical School, NH, USA. Objective: To investigate the accuracy of diagnostic coding for diverticular disease in the Danish National Registry of Patients (NRP). Study design and setting: At Aalborg Hospital, Denmark, with a catchment area of 640,000 inhabitants, we identified 100 patients recorded in the NRP with a diagnosis of diverticular disease (International Classification of Disease codes, 10th revision [ICD-10] K572–K579) during the 1999–2008 period. We assessed the positive predictive value (PPV) as a measure of the accuracy of discharge codes for diverticular disease, using information from discharge abstracts and outpatient notes as the reference standard. Results: Of the 100 patients coded with diverticular disease, 49 had complicated diverticular disease, whereas 51 had uncomplicated diverticulosis. For the overall diagnosis of diverticular disease (K57), the PPV was 0.98 (95% confidence intervals [CIs]: 0.93, 0.99). For the more detailed subgroups of diagnosis indicating the presence or absence of complications (K573–K579), the PPVs ranged from 0.67 (95% CI: 0.09, 0.99) to 0.92 (95% CI: 0.52, 1.00). The diagnosis codes did not allow accurate identification of uncomplicated disease or any specific complication. However, the combined ICD-10 codes K572, K574, and K578 had a PPV of 0.91 (95% CI: 0.71, 0.99) for any complication. Conclusion: The diagnosis codes in the NRP can be used to identify patients with diverticular disease in general; however, they do not accurately discern patients with uncomplicated diverticulosis or with specific diverticular complications. Keywords: diverticulum, colon, diverticulitis, validation studies
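    The headline figure above (PPV 0.98 with 95% CI 0.93–0.99, consistent with 98 confirmed diagnoses out of 100 coded) can be reproduced approximately with a Wilson score interval. The paper does not state which interval method it used, so this is a sketch under that assumption.

```python
from math import sqrt

def ppv(confirmed, coded):
    """Positive predictive value: confirmed diagnoses among coded ones."""
    return confirmed / coded

def wilson_ci(confirmed, coded, z=1.96):
    """95% Wilson score interval for a proportion.

    NOTE: one common choice, assumed here; the paper may have used a
    different (e.g. exact binomial) interval, so agreement with the
    published CI is only approximate.
    """
    p = confirmed / coded
    denom = 1.0 + z * z / coded
    centre = (p + z * z / (2 * coded)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / coded + z * z / (4 * coded * coded))
    return centre - half, centre + half
```

    For 98/100 this gives a PPV of 0.98 with an interval close to the published (0.93, 0.99).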

  19. Predictive value of vertebral artery extracranial color-coded duplex sonography for ischemic stroke-related vertigo.

    Science.gov (United States)

    Liou, Li-Min; Lin, Hsiu-Fen; Huang, I-Fang; Chang, Yang-Pei; Lin, Ruey-Tay; Lai, Chiou-Lian

    2013-12-01

    Vertigo can be a major presentation of posterior circulation stroke and can be easily misdiagnosed because of its complicated presentation. We thus prospectively assessed the predictive value of vertebral artery extracranial color-coded duplex sonography (ECCS) for the prediction of ischemic stroke-related vertigo. The inclusion criteria were: (1) a sensation of whirling (vertigo); (2) intractable vertigo for more than 1 hour despite appropriate treatment; and (3) the ability to complete cranial magnetic resonance imaging (MRI) and vertebral artery (V2 segment) ECCS studies. Eventually, 76 consecutive participants with vertigo were enrolled from Kaohsiung Municipal Hsiao-Kang Hospital, Kaohsiung, Taiwan between August 2010 and August 2011. Demographic data, neurological symptoms, neurologic examinations, and V2 ECCS were assessed. We chose the parameters of peak systolic velocity (PSV), end diastolic velocity (EDV), PSV/EDV, mean velocity (MV), resistance index (RI), and pulsatility index (PI) to represent the hemodynamics. Values from both sides of the V2 segments were averaged. We then calculated the average RI (aRI), average PI (aPI), average PSV/EDV ratio (aPSV/EDV), and average MV (aMV). Axial and coronal diffusion-weighted MRI findings determined the existence of acute ischemic stroke. We grouped and analyzed participants in two ways (way I and way II analyses) based on the diffusion-weighted MRI findings (to determine whether there was acute stroke) and neurological examinations. Using way I analysis, the "MRI (+)" group had significantly higher impedance (aRI, aPI, and aPSV/EDV ratio) and lower velocity (aPSV, aEDV, and aMV, where MV = (PSV + EDV)/2), compared to the "MRI (-)" group. The cutoff value/sensitivity/specificity of aPSV, aEDV, aMV, aPI, aRI, and aPSV/EDV between the MRI (+) and MRI (-) groups were 41.15/61.5/66.0 (p = 0.0101), 14.55/69.2/72.0 (p = 0.0003), 29.10/92.1/38.0 (p = 0.0013), 1.07/76.9/64.0 (p = 0.0066), 0.62/76.9/64.0 (p = 0.0076), and 2.69/80.8/66.0 (p = 0
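    The side-averaged indices described above follow directly from the measured PSV and EDV of each V2 segment. The sketch below uses the standard Doppler definitions of RI and PI and takes MV as (PSV + EDV)/2 as stated in the abstract; the function name is hypothetical.

```python
def hemodynamic_indices(psv_left, edv_left, psv_right, edv_right):
    """Averaged V2-segment indices; left and right sides are averaged.

    RI and PI use the standard Doppler definitions; MV is taken as
    (PSV + EDV)/2 as in the text. Function name is hypothetical.
    """
    def one_side(psv, edv):
        mv = (psv + edv) / 2.0      # mean velocity
        ri = (psv - edv) / psv      # resistance index
        pi = (psv - edv) / mv       # pulsatility index
        return psv, edv, mv, ri, pi, psv / edv
    left = one_side(psv_left, edv_left)
    right = one_side(psv_right, edv_right)
    keys = ("aPSV", "aEDV", "aMV", "aRI", "aPI", "aPSV/EDV")
    return {k: (l + r) / 2.0 for k, l, r in zip(keys, left, right)}
```

    Each averaged index can then be compared against the cutoff values reported above to flag a participant as likely MRI (+).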

  20. Effects of using coding potential, sequence conservation and mRNA structure conservation for predicting pyrrolysine containing genes

    DEFF Research Database (Denmark)

    Have, Christian Theil; Zambach, Sine; Christiansen, Henning

    2013-01-01

    for prediction of pyrrolysine incorporating genes in genomes of bacteria and archaea, leading to insights about the factors driving pyrrolysine translation and to the identification of new gene candidates. The method predicts known conserved genes with high recall and predicts several other promising candidates for experimental verification. The method is implemented as a computational pipeline which is available on request.

  1. SIEX: a correlated code for the prediction of liquid metal fast breeder reactor (LMFBR) fuel thermal performance

    International Nuclear Information System (INIS)

    Dutt, D.S.; Baker, R.B.

    1975-06-01

    The SIEX computer program is a steady state heat transfer code developed to provide thermal performance calculations for a mixed-oxide fuel element in a fast neutron environment. Fuel restructuring, fuel-cladding heat conduction and fission gas release are modeled to provide assessment of the temperatures. Modeling emphasis has been placed on correlations to measurable quantities from EBR-II irradiation tests and on the inclusion of these correlations in a physically based computational scheme. SIEX is completely modular in construction, allowing the user options for material properties and correlated models. Required code input is limited to geometric and environmental parameters, with a "consistent" set of material properties and correlated models provided by the code. 24 references. (U.S.)

  2. Evaluation of the WIMS (KAERI) - VENTURE code system for peak power prediction of KMRR core using MCNP

    International Nuclear Information System (INIS)

    Park, W.S.; Lee, K.M.; Lee, C.S.; Lee, J.T.; Oh, S.K.

    1992-01-01

    In this work, the validity and quantitative uncertainty of the WIMS (KAERI) - VENTURE code system for the design and analysis of the KMRR core were assessed using a well known benchmark code, MCNP. WIMS (KAERI) showed excellent agreement with the MCNP code. For three different control rod positions in a simulated core with quarter symmetry, total peaking factors and three sub-factors (radial, axial, and local) obtained from VENTURE were compared with those of MCNP. The comparison proved the validity of VENTURE and showed better agreement in the order of radial, axial, and local factors. The uncertainty of the WIMS (KAERI) - VENTURE system was inferred using the 2σ band of total peaking obtained by MCNP, and was found to be 18.5% for the operating condition. (author)
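    As a small illustration of the comparison described above, total peaking is conventionally taken as the product of the radial, axial and local sub-factors, and a deterministic prediction can be checked against the Monte Carlo 2σ statistical band. The abstract does not spell out the exact combination used, so this is a sketch under the conventional decomposition; the function names are hypothetical and not from either code.

```python
def total_peaking(radial, axial, local_factor):
    """Total power peaking as the product of the radial, axial and
    local sub-factors (conventional decomposition; assumed here)."""
    return radial * axial * local_factor

def within_band(deterministic_value, mcnp_value, mcnp_sigma, n_sigma=2.0):
    """True if a deterministic (e.g. VENTURE) prediction lies inside
    the MCNP n-sigma statistical band, mirroring the 2-sigma
    comparison described above. Hypothetical helper, not code output."""
    return abs(deterministic_value - mcnp_value) <= n_sigma * mcnp_sigma
```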

  3. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  4. Physical models and codes for prediction of activity release from defective fuel rods under operation conditions and in leakage tests during refuelling

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Khoruzhii, O.; Sorokin, A.; Novikov, V.

    2003-01-01

    It is appropriate to use dependences based on physical models in design-analytical codes to improve the reliability of defective fuel rod detection and to determine defect characteristics from activity measurements in the primary coolant. This paper presents results on the development of physical models and integral mechanistic codes for predicting defective fuel rod behaviour. Analysis of mass transfer and mass exchange between a fuel rod and the coolant showed that the rates of these processes depend on many factors, such as turbulent coolant flow, pressure, the effective hydraulic diameter of the defect, and fuel rod geometric parameters. Models describing these dependences have been created. The models of thermomechanical fuel behaviour and stable gaseous FP release were modified, and a new computer code, RTOP-CA, was created thereupon to describe defective fuel rod behaviour and activity release into the primary coolant. The model of fuel oxidation under in-pile conditions, which includes radiolysis, and the RTOP-LT code are planned, after validation of the physical models, to be used for prediction of defective fuel rod behaviour

  5. A computer code for the prediction of mill gases and hot air distribution between burners sections as input parameters for 3D CFD furnace calculation

    International Nuclear Information System (INIS)

    Tucakovic, Dragan; Zivanovic, Titoslav; Beloshevic, Srdjan

    2006-01-01

    Current computer technology enables the application of powerful software packages that can provide reliable insight into the real operating conditions of a steam boiler in a thermal power plant. Namely, application of a CFD code to the 3D analysis of combustion and heat transfer in a furnace provides temperature, velocity and concentration fields in both cross-sectional and longitudinal planes of the observed furnace. In order to obtain reliable analytical results that correspond to real furnace conditions, it is necessary to accurately predict the distribution of mill gases and hot air between the burners' sections, because these parameters are input values for the 3D furnace calculation. Regarding these tasks, a computer code for the prediction of mill gas and hot air distribution has been developed at the Department for Steam Boilers of the Faculty of Mechanical Engineering in Belgrade. The code is based on simultaneous calculations of material and heat balances for the fan mill and air tracts. The aim of this paper is to present the methodology of the performed calculations and the results obtained for the steam boiler furnace of a 350 MWe thermal power plant equipped with eight fan mills. Key words: mill gases, hot air, aerodynamic calculation, air tract, mill tract.

  6. Histone modification profiles are predictive for tissue/cell-type specific expression of both protein-coding and microRNA genes

    Directory of Open Access Journals (Sweden)

    Zhang Michael Q

    2011-05-01

    Full Text Available Abstract Background Gene expression is regulated at both the DNA sequence level and through modification of chromatin. However, the effect of chromatin on tissue/cell-type specific gene regulation (TCSR) is largely unknown. In this paper, we present a method to elucidate the relationship between histone modification/variation (HMV) and TCSR. Results A classifier for differentiating CD4+ T cell-specific genes from housekeeping genes using HMV data was built. We found HMV in both promoter and gene body regions to be predictive of genes which are targets of TCSR. For example, the histone modification types H3K4me3 and H3K27ac were identified as the most predictive for CpG-related promoters, whereas H3K4me3 and H3K79me3 were the most predictive for nonCpG-related promoters. However, genes targeted by TCSR can be predicted using other types of HMV as well. Such redundancy implies that multiple types of underlying regulatory elements, such as enhancers or intragenic alternative promoters, which can regulate gene expression in a tissue/cell-type specific fashion, may be marked by the HMVs. Finally, we show that the predictive power of HMV for TCSR is not limited to protein-coding genes in CD4+ T cells, as we successfully predicted TCSR targeted genes in muscle cells, as well as microRNA genes with expression specific to CD4+ T cells, by the same classifier which was trained on HMV data of protein-coding genes in CD4+ T cells. Conclusion We have begun to understand the HMV patterns that guide gene expression in both tissue/cell-type specific and ubiquitous manner.

  7. Predictive modelling of the impact of argon injection on H-mode plasmas in JET with the RITM code

    International Nuclear Information System (INIS)

    Unterberg, B; Kalupin, D; Tokar', M Z; Corrigan, G; Dumortier, P; Huber, A; Jachmich, S; Kempenaars, M; Kreter, A; Messiaen, A M; Monier-Garbet, P; Ongena, J; Puiatti, M E; Valisa, M; Hellermann, M von

    2004-01-01

    Self-consistent modelling of energy and particle transport of the plasma background and impurities has been performed with the code RITM for argon seeded high density H-mode plasmas in JET. The code can reproduce both the profiles in the plasma core and the structure of the edge pedestal. The impact of argon on core transport is found to be small; in particular, no significant change in confinement is observed in both experimental and modelling results. The same transport model, which has been used to reproduce density peaking in the radiative improved mode in TEXTOR, reveals a flat density profile in Ar seeded JET H-mode plasmas in agreement with the experimental observations. This behaviour is attributed to the rather flat profile of the safety factor in the bulk of H-mode discharges

  8. Predicting the aerodynamic characteristics of 2D airfoil and the performance of 3D wind turbine using a CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bum Suk; Kim, Mann Eung [Korean Register of Shipping, Daejeon (Korea, Republic of); Lee, Young Ho [Korea Maritime Univ., Busan (Korea, Republic of)

    2008-07-15

    Although a laminar-turbulent transition region coexists with the fully turbulent region around the leading edge of an airfoil, many researchers still apply fully turbulent models to predict aerodynamic characteristics. It is well known that a fully turbulent model such as the standard k-{epsilon} model cannot accurately predict the complex stall and separation behavior on an airfoil; it usually leads to over-prediction of aerodynamic characteristics such as lift and drag forces. We therefore apply a correlation-based transition model to predict the aerodynamic performance of the NREL (National Renewable Energy Laboratory) Phase IV wind turbine, and compare the results computed with the transition model against experimental measurements and fully turbulent results. Results are presented for a range of wind speeds for the NREL Phase IV wind turbine rotor. Low speed shaft torque, power, root bending moment, aerodynamic coefficients of the 2D airfoil, and several flow field figures are included in this study. As a result, the low speed shaft torque predicted by the transitional turbulence model agrees very well with the experimental measurements over the whole range of operating conditions, whereas the fully turbulent model (k-{epsilon}) over-predicts the shaft torque above 7 m/s. The root bending moment also shows good agreement between prediction and experiment for most operating conditions, especially with the transition model.

  9. Predicting the aerodynamic characteristics of 2D airfoil and the performance of 3D wind turbine using a CFD code

    International Nuclear Information System (INIS)

    Kim, Bum Suk; Kim, Mann Eung; Lee, Young Ho

    2008-01-01

    Although a laminar-turbulent transition region coexists with the fully turbulent region around the leading edge of an airfoil, many researchers still apply fully turbulent models to predict aerodynamic characteristics. It is well known that a fully turbulent model such as the standard k-ε model cannot accurately predict the complex stall and separation behavior on an airfoil; it usually leads to over-prediction of aerodynamic characteristics such as lift and drag forces. We therefore apply a correlation-based transition model to predict the aerodynamic performance of the NREL (National Renewable Energy Laboratory) Phase IV wind turbine, and compare the results computed with the transition model against experimental measurements and fully turbulent results. Results are presented for a range of wind speeds for the NREL Phase IV wind turbine rotor. Low speed shaft torque, power, root bending moment, aerodynamic coefficients of the 2D airfoil, and several flow field figures are included in this study. As a result, the low speed shaft torque predicted by the transitional turbulence model agrees very well with the experimental measurements over the whole range of operating conditions, whereas the fully turbulent model (k-ε) over-predicts the shaft torque above 7 m/s. The root bending moment also shows good agreement between prediction and experiment for most operating conditions, especially with the transition model

  10. Two-dimensional steady-state thermal and hydraulic analysis code for prediction of detailed temperature fields around distorted fuel pin in LMFBR assembly: SPOTBOW

    International Nuclear Information System (INIS)

    Shimizu, T.

    1983-01-01

    The SPOTBOW computer program has been developed to predict detailed temperature and turbulent flow velocity fields around distorted fuel pins in LMFBR fuel assemblies, in which pin-to-pin and pin-to-wrapper-tube contacts may occur. The study grew out of reactor core designers' need to evaluate local hot-spot temperatures due to the wire contact effect and the effect of pin bowing on cladding temperature distribution. The code treats both unbaffled and wire-wrapped pin bundles. The Galerkin method and an iterative procedure are used to solve the basic equations governing local heat and momentum transfer in turbulent flow around the distorted pins. Comparisons have been made with cladding temperatures measured in normal and distorted pin-bundle mockups to check the validity of the code. Predicted peak temperatures in the vicinity of the wire contact point were somewhat higher than the measured values, and the shape of the peaks agreed well with measurement. The changes in cladding temperature due to the decrease of the gap width between a bowing pin and an adjacent pin were predicted well

  11. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: 1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; 2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The code computes the flow distribution among parallel channels, coupled or not by conduction across the plates, for imposed pressure-drop or flowrate conditions that may vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the user. The code, which contains a schematic representation of safety-rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, treats a single channel in two dimensions. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples ou non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)
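The flow-distribution condition described above, parallel channels sharing a common pressure drop, can be sketched with a minimal split calculation. The quadratic loss model and closed-form split below are illustrative assumptions of ours, not the Cactus algorithm:

```python
import math

def split_flow(total_flow, k_coeffs):
    """Distribute a total mass flow among parallel channels so that the
    quadratic pressure drop dp_i = k_i * m_i**2 is equal in every channel.
    Equal dp implies m_i proportional to 1/sqrt(k_i)."""
    weights = [1.0 / math.sqrt(k) for k in k_coeffs]
    s = sum(weights)
    return [total_flow * w / s for w in weights]

# Example: two channels, one with four times the resistance of the other.
flows = split_flow(3.0, [1.0, 4.0])
```

With loss coefficients 1 and 4, the less resistive channel carries twice the flow, and both channels see the same pressure drop.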

  12. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO
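The prediction/prediction-error principle at the heart of the PRO framework can be sketched, in drastically simplified scalar form, as a delta-rule update; the actual PRO model predicts response-outcome conjunctions rather than a single value, and the learning rate below is an arbitrary choice of ours:

```python
def update_prediction(v, outcome, lr=0.1):
    """One delta-rule step: the prediction error is outcome - prediction,
    and the prediction moves a fraction lr toward the observed outcome."""
    delta = outcome - v
    return v + lr * delta, delta

# Repeated pairing of a cue with an effortful outcome of value 1.0:
v = 0.0
for _ in range(50):
    v, delta = update_prediction(v, 1.0)
# The prediction converges toward 1.0 and the error shrinks toward 0.
```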

  13. SIEX3: A correlated computer code for prediction of fast reactor mixed oxide fuel and blanket pin performance

    International Nuclear Information System (INIS)

    Baker, R.B.; Wilson, D.R.

    1986-04-01

    The SIEX3 computer program was developed to calculate the fuel and cladding performance of oxide fuel and oxide blanket pins irradiated in the fast neutron environment of a liquid metal cooled reactor. The code is uniquely designed to be accurate yet quick running and use a minimum of computer core storage. This was accomplished through the correlation of physically based models to very large data bases of irradiation test results. Data from over 200 fuel pins and over 800 transverse fuel microscopy samples were used in the calibrations

  14. Improved atomic data for electron-transport predictions by the codes TIGER and TIGERP: II. Electron stopping and range data

    International Nuclear Information System (INIS)

    Peek, J.M.; Halbleib, J.A.

    1983-04-01

    The electron stopping and range data now used in the TIGER and TIGERP electron-transport codes are extracted and compared with other data for these processes. At the smallest collision energies treated by these codes, E approx. 1 keV, the stopping power is estimated to be accurate for small-Z targets, to be about 25 percent too small for Z near 36, and to be a factor of three too small for Z > 79. These errors decrease with increasing E, and the largest error for any target is roughly 20 percent at E = 10 keV. The closely related continuous-slowing-down range is estimated, at 1 keV, to be about 25 percent too small for small-Z targets and a factor of 2 too large for large-Z targets. The electron-transport problem of reflection from planar surfaces is re-investigated with the improved stopping-power data. For the examples considered, the effects of this change were about the size of the statistical uncertainties in the calculation, 1 to 2 percent
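The continuous-slowing-down range referred to above is the integral of the reciprocal stopping power over energy. A minimal numerical sketch, with a hypothetical power-law stopping power standing in for real tabulated data:

```python
def csda_range(stopping_power, e_min, e_max, steps=10000):
    """Continuous-slowing-down range R = integral over E of dE / S(E),
    evaluated with the trapezoidal rule between e_min and e_max."""
    h = (e_max - e_min) / steps
    total = 0.0
    for i in range(steps):
        e0, e1 = e_min + i * h, e_min + (i + 1) * h
        total += 0.5 * h * (1.0 / stopping_power(e0) + 1.0 / stopping_power(e1))
    return total

# Hypothetical power-law stopping power S(E) = 2 * E**0.8 (arbitrary units):
r = csda_range(lambda e: 2.0 * e ** 0.8, 1.0, 10.0)
```

For this power law the integral has the closed form 2.5 * (10**0.2 - 1), so the numerical result can be checked directly.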

  15. BIODOSE: a code for predicting the dose to man from radionuclides released from underground nuclear waste repositories

    International Nuclear Information System (INIS)

    Bonner, N.A.; Ng, Y.C.

    1980-03-01

    The BIODOSE computer program simulates the environmental transport of radionuclides released to surface water and predicts the resulting dosage to humans. This report describes the program and discusses its use in the evaluation of nuclear waste repositories. The methods used to estimate dose are examined critically, and the most important parameters in each stage of the calculations are identified as an aid in planning for measurements in the field. Dose predictions from releases of nuclear waste to a large northwestern river (the baseline river) are presented to point out the nuclides, compartments and pathways that contribute most to the hazard as a function of waste storage time. Predictions for five other water systems are presented to identify the most important system parameters that determine the concentrations of individual nuclides in compartments and the resultant dose. The uncertainties in the biological parameters for dose prediction are identified, and changes in current values are suggested. Various ways of reporting dose estimates for radiological safety assessments are discussed. Additional work needed to improve the dose predictions from BIODOSE and specific areas and steps to improve our capabilities to assess the environmental transport of nuclides released from nuclear waste repositories and the resultant dose to man are suggested
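For a single ingestion pathway, the dose bookkeeping described above reduces to summing concentration x intake rate x dose conversion factor over nuclides. A minimal sketch with illustrative numbers, not BIODOSE's actual pathway models or parameter values:

```python
def ingestion_dose(concentrations, intake_rate, dose_factors):
    """Committed dose (Sv/yr) from a drinking-water pathway: the sum over
    nuclides of C_i (Bq/L) * U (L/yr) * DCF_i (Sv/Bq)."""
    return sum(concentrations[n] * intake_rate * dose_factors[n]
               for n in concentrations)

# Illustrative concentrations and dose conversion factors (assumed values):
dose = ingestion_dose({"Tc-99": 5.0, "I-129": 0.2}, 730.0,
                      {"Tc-99": 6.4e-10, "I-129": 1.1e-7})
```

Even with a 25-fold lower concentration, I-129 dominates this hypothetical sum because its dose conversion factor is over two orders of magnitude larger.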

  16. A statistical framework to predict functional non-coding regions in the human genome through integrated analysis of annotation data.

    Science.gov (United States)

    Lu, Qiongshi; Hu, Yiming; Sun, Jiehuan; Cheng, Yuwei; Cheung, Kei-Hoi; Zhao, Hongyu

    2015-05-27

    Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome, either through computational predictions, such as genomic conservation, or through high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning on 22 computational and experimental annotations, thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. Its ability to predict functional regions, together with its generalizable statistical framework, makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu.
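The kind of annotation integration described above can be illustrated, in heavily simplified form, by a two-class mixture in which a posterior functional probability is computed from binary annotation tracks. The emission rates below are invented for illustration; GenoCanyon's actual model is richer and its parameters are learned without supervision:

```python
def functional_posterior(annotations, p_func, theta_func, theta_bg):
    """Posterior P(functional | binary annotations) under a two-class
    naive Bernoulli mixture with known per-annotation emission rates."""
    like_f, like_b = p_func, 1.0 - p_func
    for a, tf, tb in zip(annotations, theta_func, theta_bg):
        like_f *= tf if a else (1.0 - tf)
        like_b *= tb if a else (1.0 - tb)
    return like_f / (like_f + like_b)

# Three hypothetical annotation tracks; conservation-like tracks fire more
# often at functional positions, so two hits push the posterior up:
score = functional_posterior([1, 1, 0], 0.1, [0.8, 0.7, 0.3], [0.2, 0.1, 0.4])
```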

  17. Predictive values of diagnostic codes for identifying serious hypocalcemia and dermatologic adverse events among women with postmenopausal osteoporosis in a commercial health plan database.

    Science.gov (United States)

    Wang, Florence T; Xue, Fei; Ding, Yan; Ng, Eva; Critchlow, Cathy W; Dore, David D

    2018-04-10

    Post-marketing safety studies of medicines often rely on administrative claims databases to identify adverse outcomes following drug exposure. Valid ascertainment of outcomes is essential for accurate results. We aim to quantify the validity of diagnostic codes for serious hypocalcemia and dermatologic adverse events from insurance claims data among women with postmenopausal osteoporosis (PMO). We identified potential cases of serious hypocalcemia and dermatologic events through ICD-9 diagnosis codes among women with PMO within claims from a large US healthcare insurer (June 2005-May 2010). A physician adjudicated potential hypocalcemic and dermatologic events identified from the primary position on emergency department (ED) or inpatient claims through medical record review. Positive predictive values (PPVs) and 95% confidence intervals (CIs) quantified the fraction of potential cases that were confirmed. Among 165,729 patients with PMO, medical charts were obtained for 40 of 55 (73%) potential hypocalcemia cases; 16 were confirmed (PPV 40%, 95% CI 25-57%). The PPV was higher for ED than inpatient claims (82 vs. 24%). Among 265 potential dermatologic events (primarily urticaria or rash), we obtained 184 (69%) charts and confirmed 128 (PPV 70%, 95% CI 62-76%). The PPV was higher for ED than inpatient claims (77 vs. 39%). Diagnostic codes for hypocalcemia and dermatologic events may be sufficient to identify events giving rise to emergency care, but are less accurate for identifying events within hospitalizations.
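The PPV point estimates and 95% CIs reported above are proportions of confirmed cases among abstracted charts. A sketch of the calculation, assuming a Clopper-Pearson exact interval (the abstract does not state which interval method the authors used):

```python
import math

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact confidence interval for a proportion k/n,
    found by bisection on the binomial tail probabilities."""
    lo, hi = 0.0, 1.0
    if k > 0:
        a, b = 0.0, k / n
        for _ in range(60):          # lower limit: P(X >= k | p) = alpha/2
            mid = 0.5 * (a + b)
            if binom_tail_ge(k, n, mid) < alpha / 2:
                a = mid
            else:
                b = mid
        lo = 0.5 * (a + b)
    if k < n:
        a, b = k / n, 1.0
        for _ in range(60):          # upper limit: P(X <= k | p) = alpha/2
            mid = 0.5 * (a + b)
            if 1 - binom_tail_ge(k + 1, n, mid) < alpha / 2:
                b = mid
            else:
                a = mid
        hi = 0.5 * (a + b)
    return lo, hi

# 16 confirmed out of 40 abstracted hypocalcemia charts:
ppv = 16 / 40
lo, hi = exact_ci(16, 40)
```

For 16/40 this yields a PPV of 40% with an exact interval of roughly 25% to 57%, matching the figures quoted in the abstract.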

  18. Development of a computer code to predict a ventilation requirement for an underground radioactive waste storage tank

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.J.; Dalpiaz, E.L. [ICF Kaiser Hanford Co., Richland, WA (United States)

    1997-08-01

    The computer code WTVFE (Waste Tank Ventilation Flow Evaluation) has been developed to evaluate the ventilation requirement for an underground storage tank for radioactive waste. Heat generated by the radioactive waste and by mixing pumps in the tank is removed mainly through the ventilation system. The heat removal process includes the evaporation of water from the waste and natural-convection heat transfer from the waste surface. A portion of the heat is also removed through the soil and by the air circulating through the gap between the primary and secondary tanks. The heat loss caused by evaporation is modeled on recent evaporation test results obtained by the Westinghouse Hanford Company with a simulated small-scale waste tank. The other heat transfer phenomena are evaluated with well-established conduction and convection heat transfer relationships. 10 refs., 3 tabs.

  19. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    Energy Technology Data Exchange (ETDEWEB)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Hirohiko

    1995-09-01

    A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on the Japanese population due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random-walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over a domain of about 2,000 km in Europe. (author).

  20. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    International Nuclear Information System (INIS)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru; Ishikawa, Hirohiko.

    1995-09-01

    A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on the Japanese population due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random-walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over a domain of about 2,000 km in Europe. (author)
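The particle random-walk idea behind a dispersion model like GEARN can be sketched in one dimension, with invented parameters, as advection by the wind plus a Gaussian diffusive step; the real model is three-dimensional and includes deposition:

```python
import random

def disperse(n_particles, u_wind, k_diff, dt, steps, seed=1):
    """Advect-diffuse particles in 1D: each step adds wind transport u*dt
    plus a Gaussian random displacement with variance 2*K*dt."""
    rng = random.Random(seed)
    sigma = (2.0 * k_diff * dt) ** 0.5
    xs = [0.0] * n_particles
    for _ in range(steps):
        xs = [x + u_wind * dt + rng.gauss(0.0, sigma) for x in xs]
    return xs

# Assumed values: 5 m/s wind, K = 10 m^2/s, ten 60 s steps.
plume = disperse(2000, 5.0, 10.0, 60.0, 10)
centre = sum(plume) / len(plume)   # plume centre near u * t = 3000 m
```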

  1. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    International Nuclear Information System (INIS)

    Raimondi, L.; Svetina, C.; Mahne, N.; Cocco, D.; Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M.; De Ninno, G.; Zeitoun, P.; Dovillaire, G.; Lambert, G.; Boutu, W.; Merdji, H.; Gonzalez, A.I.; Gauthier, D.

    2013-01-01

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness at wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology, it is difficult to predict the focal spot shape using only geometrical-optics methods such as ray tracing. Within the geometrical-optics approach one can account neither for the diffraction effect from the optics edges, i.e. the aperture diffraction, nor for the impact of the different surface spatial wavelengths on spot size degradation. Both effects depend strongly on the photon beam energy and the mirror incidence angles. We therefore employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection (the basis of the WISE code). In this work we report the first measurements of the focal spot in the DiProI beamline end-station and compare them to predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization

  2. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, L., E-mail: lorenzo.raimondi@elettra.trieste.it [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Svetina, C.; Mahne, N. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Cocco, D. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS-19 Menlo Park, CA 94025 (United States); Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); De Ninno, G. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); Zeitoun, P. [Laboratoire d' Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Humiére, 91761 Palaiseau (France); Dovillaire, G. [Imagine Optic, 18 Rue Charles de Gaulle, 91400 Orsay (France); Lambert, G. [Laboratoire d' Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Humiére, 91761 Palaiseau (France); Boutu, W.; Merdji, H.; Gonzalez, A.I. [Service des Photons, Atomes et Molécules, IRAMIS, CEA-Saclay, Bâtiment 522, 91191 Gif-sur-Yvette (France); Gauthier, D. [University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); and others

    2013-05-11

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness at wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology, it is difficult to predict the focal spot shape using only geometrical-optics methods such as ray tracing. Within the geometrical-optics approach one can account neither for the diffraction effect from the optics edges, i.e. the aperture diffraction, nor for the impact of the different surface spatial wavelengths on spot size degradation. Both effects depend strongly on the photon beam energy and the mirror incidence angles. We therefore employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection (the basis of the WISE code). In this work we report the first measurements of the focal spot in the DiProI beamline end-station and compare them to predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization.

  3. Comparative prediction of irradiation test of CNFT and Cise prototypes of CIRENE fuel pins, a prediction by transuranus M1V1J12 code

    International Nuclear Information System (INIS)

    Suwardi

    2014-01-01

    A prototype of the CIRENE HWR fuel pin design has been fabricated by the Center for Nuclear Fuel Technology (CNFT-BATAN). The prototype will be irradiated in the Power Ramp Test Facility (PRTF), which has been installed inside the RSG-GA Siwabessy reactor at Serpong. The present paper reports the preparation of the experiment and a prediction of the irradiation test. One previous PCI test report, written by Lysell G and Valli G in 1973, is available for comparison. The CNFT fuel irradiation test parameters are adapted to both the PRTF and the power loop design for the RSG-GAS reactor at Serpong, mainly the maxima of rod length, neutron flux, total rod power, and power ramp rate. The CNFT CIRENE prototype design has been reported by Futichah et al. in 2007 and 2010. The AEC-India HWR fuel pin of the 19/22 fuel bundle design has also been evaluated for comparison. The first PCI test prediction is compared with experiment for the Cise pin. The second prediction will be used to optimize the design of the ramp test for the CNFT CIRENE fuel pin prototype. (author)

  4. Performance Prediction of Centrifugal Compressor for Drop-In Testing Using Low Global Warming Potential Alternative Refrigerants and Performance Test Codes

    Directory of Open Access Journals (Sweden)

    Joo Hoon Park

    2017-12-01

    As environmental regulations to curb global warming are strengthened around the world, studies using newly developed low-global-warming-potential (GWP) alternative refrigerants are increasing. In this study, the substitute refrigerants R-1234ze(E) and R-1233zd(E) were used in the centrifugal compressor of an R-134a two-stage centrifugal chiller with a fixed rotational speed. Performance predictions and thermodynamic analyses of the centrifugal compressor for drop-in testing were performed. A performance prediction method based on the existing ASME PTC-10 performance test code was proposed. The proposed method yielded the expected operating area and operating point of the centrifugal compressor with the alternative refrigerants. The thermodynamic performance of the first and second stages of the centrifugal compressor was calculated as a polytropic process. To verify the suitability of the proposed method, the drop-in test results of the two alternative refrigerants were compared. The operating range predicted from the permissible deviation of ASME PTC-10 confirmed that the temperature difference was very small at the same efficiency. Because the drop-in test of R-1234ze(E) was performed within the expected operating range, the centrifugal compressor using R-1234ze(E) is considered well predicted. However, the predicted operating point and operating range of R-1233zd(E) were lower than those of the drop-in test. The proposed performance prediction method will assist in understanding thermodynamic performance at the expected operating point and operating area of a centrifugal compressor using alternative gases when only limited design and structural information is available.
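The polytropic analysis mentioned above can be sketched for an ideal gas: the polytropic exponent follows from two measured states, and the small-stage (polytropic) efficiency compares it with the isentropic exponent. The numbers below are illustrative, not PTC-10 test data, and real refrigerant calculations would need real-gas properties:

```python
import math

def polytropic_exponent(p1, p2, rho1, rho2):
    """Exponent n from two measured states, assuming p / rho**n is constant."""
    return math.log(p2 / p1) / math.log(rho2 / rho1)

def polytropic_efficiency(n, gamma):
    """Ideal-gas polytropic efficiency for compression:
    eta_p = ((gamma - 1) / gamma) / ((n - 1) / n)."""
    return ((gamma - 1.0) / gamma) * (n / (n - 1.0))

# Illustrative refrigerant-like numbers (assumed, not measured):
n = polytropic_exponent(100e3, 300e3, 1.0, 2.537)
eta = polytropic_efficiency(n, 1.12)
```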

  5. Long non-coding RNA PVT1 as a novel potential biomarker for predicting the prognosis of colorectal cancer.

    Science.gov (United States)

    Fan, Heng; Zhu, Jian-Hua; Yao, Xue-Qing

    2018-05-01

    Long non-coding RNA (lncRNA) plays a very important role in the occurrence and development of various tumors, and is a potential biomarker for cancer diagnosis and prognosis. The purpose of this study was to investigate the relationship between the expression of the lncRNA plasmacytoma variant translocation 1 (PVT1) and prognosis in patients with colorectal cancer. The expression of PVT1 was measured by real-time quantitative reverse transcription-polymerase chain reaction (qRT-PCR) in cancerous and adjacent tissues of 210 colorectal cancer patients. Disease-free survival and overall survival of colorectal cancer patients were evaluated by Kaplan-Meier analysis, and univariate and multivariate analyses were performed with a Cox proportional-hazards model. Our results revealed that PVT1 expression in colorectal cancer tissues was significantly higher than in adjacent tissues (P < 0.05), and that high PVT1 expression was associated with poorer survival in colorectal cancer patients, whether at TNM I/II stage or at TNM III/IV stage. A multivariate Cox regression analysis demonstrated that high PVT1 expression was an independent predictor of poor prognosis in colorectal cancer patients. Our results suggest that high PVT1 expression might be a potential biomarker for assessing tumor recurrence and prognosis in colorectal cancer patients.

  6. The PRC2-binding long non-coding RNAs in human and mouse genomes are associated with predictive sequence features

    Science.gov (United States)

    Tu, Shiqi; Yuan, Guo-Cheng; Shao, Zhen

    2017-01-01

    Recently, long non-coding RNAs (lncRNAs) have emerged as an important class of molecules involved in many cellular processes. One of their primary functions is to shape epigenetic landscape through interactions with chromatin modifying proteins. However, mechanisms contributing to the specificity of such interactions remain poorly understood. Here we took the human and mouse lncRNAs that were experimentally determined to have physical interactions with Polycomb repressive complex 2 (PRC2), and systematically investigated the sequence features of these lncRNAs by developing a new computational pipeline for sequences composition analysis, in which each sequence is considered as a series of transitions between adjacent nucleotides. Through that, PRC2-binding lncRNAs were found to be associated with a set of distinctive and evolutionarily conserved sequence features, which can be utilized to distinguish them from the others with considerable accuracy. We further identified fragments of PRC2-binding lncRNAs that are enriched with these sequence features, and found they show strong PRC2-binding signals and are more highly conserved across species than the other parts, implying their functional importance.
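The "series of transitions between adjacent nucleotides" representation described above amounts to a 16-dimensional dinucleotide frequency vector; a minimal sketch (the paper's pipeline builds on this representation but is considerably more elaborate):

```python
from collections import Counter

def transition_features(seq):
    """Frequency vector of the 16 adjacent-nucleotide transitions (AA..TT),
    normalised so the frequencies sum to 1 for a non-empty sequence."""
    pairs = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
    total = max(sum(pairs.values()), 1)
    bases = "ACGT"
    return {a + b: pairs[a + b] / total for a in bases for b in bases}

feats = transition_features("ACGTACGTAA")
```

Vectors like this, computed per transcript, can then feed any standard classifier separating PRC2-binding from non-binding lncRNAs.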

  7. Prediction of corium debris characteristics in lower plenum of a nordic BWR in different accident scenarios using MELCOR code - 15367

    International Nuclear Information System (INIS)

    Phung, V.A.; Galushin, S.; Raub, S.; Goronovski, A.; Villanueva, W.; Koeoep, K; Grishchenko, D.; Kudinov, P.

    2015-01-01

    Severe accident management strategy in Nordic boiling water reactors (BWRs) relies on ex-vessel core debris coolability. The mode of corium melt release from the vessel determines the conditions for ex-vessel accident progression and the threats to containment integrity, e.g., formation of a non-coolable debris bed and the possibility of an energetic steam explosion. In-vessel core degradation and relocation is an important stage which determines the characteristics of the corium debris in the vessel lower plenum, such as mass, composition, thermal properties, timing of relocation, and decay heat. These properties affect debris reheating and remelting, melt interactions with the vessel structures, and possibly the vessel failure and melt ejection mode. Core degradation and relocation are contingent upon accident scenario parameters such as the recovery time and capacity of safety systems. The goal of this work is to obtain a better understanding of the impact of accident scenarios and the timing of events on core relocation phenomena and the resulting properties of the debris bed in the vessel lower plenum of Nordic BWRs. In this study, severe accidents in a Nordic BWR reference plant are initiated by a station blackout event, the main contributor to the core damage frequency of the reactor. The work focuses on identifying the ranges of debris bed characteristics in the lower plenum as functions of the accident scenario with different recovery timing and capacity of safety systems. The severe accident analysis code MELCOR coupled with GA-IDPSA is used in this work. GA-IDPSA is a Genetic Algorithm-based Integrated Deterministic Probabilistic Safety Analysis tool, which has been developed to search the uncertain input parameter space. The search is guided by different target functions. A scenario grouping and clustering approach is applied in order to estimate the ranges of debris characteristics and identify scenario regions of core relocation that can lead to significantly different debris bed

  8. High abundance of Serine/Threonine-rich regions predicted to be hyper-O-glycosylated in the secretory proteins coded by eight fungal genomes

    Directory of Open Access Journals (Sweden)

    González Mario

    2012-09-01

    Background: O-glycosylation of secretory proteins has been found to be an important factor in fungal biology and virulence. It consists of the addition of short glycosidic chains to Ser or Thr residues in the protein backbone via O-glycosidic bonds. Secretory proteins in fungi frequently display Ser/Thr-rich regions that could be sites of extensive O-glycosylation. We have analyzed in silico the complete sets of putatively secretory proteins coded by eight fungal genomes (Botrytis cinerea, Magnaporthe grisea, Sclerotinia sclerotiorum, Ustilago maydis, Aspergillus nidulans, Neurospora crassa, Trichoderma reesei, and Saccharomyces cerevisiae) in search of Ser/Thr-rich regions as well as regions predicted to be highly O-glycosylated by NetOGlyc (http://www.cbs.dtu.dk). Results: By comparison with experimental data, NetOGlyc was found to overestimate the number of O-glycosylation sites in fungi by a factor of 1.5, but to be quite reliable in the prediction of highly O-glycosylated regions. About half of the secretory proteins have at least one Ser/Thr-rich region, with a Ser/Thr content of at least 40% over an average length of 40 amino acids. Most secretory proteins in filamentous fungi were predicted to be O-glycosylated, sometimes in dozens or even hundreds of sites. Residues predicted to be O-glycosylated tend to be grouped together, forming hyper-O-glycosylated regions of varying length. Conclusions: About one fourth of the secretory fungal proteins were predicted to have at least one hyper-O-glycosylated region, which consists of 45 amino acids on average and displays at least one O-glycosylated Ser or Thr every four residues. These putative highly O-glycosylated regions can be found anywhere along the proteins but have a slight tendency to occur at either one of the two ends.
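The Ser/Thr-rich region criterion quoted above (at least 40% Ser/Thr over a window of about 40 residues) maps directly onto a sliding-window scan. A minimal sketch; the merging of overlapping windows into one region is an assumption of ours, not necessarily the paper's exact procedure:

```python
def ser_thr_rich_regions(protein, window=40, threshold=0.40):
    """Return (start, end) spans whose Ser/Thr fraction meets the threshold
    over a sliding window, merging overlapping qualifying windows."""
    hits = [i for i in range(len(protein) - window + 1)
            if sum(c in "ST" for c in protein[i:i + window]) / window >= threshold]
    regions = []
    for i in hits:
        if regions and i <= regions[-1][1]:
            regions[-1] = (regions[-1][0], i + window)
        else:
            regions.append((i, i + window))
    return regions

# A 40-residue Ser/Thr stretch flanked by 60 alanines on each side:
seq = "A" * 60 + "ST" * 20 + "A" * 60
spans = ser_thr_rich_regions(seq)
```

Every window overlapping the central stretch by at least 16 residues qualifies, so the merged span extends somewhat beyond the Ser/Thr block itself.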

  9. Development and verification of the LIFE-GCFR computer code for predicting gas-cooled fast-reactor fuel-rod performance

    International Nuclear Information System (INIS)

    Hsieh, T.C.; Billone, M.C.; Rest, J.

    1982-03-01

    The fuel-pin modeling code LIFE-GCFR has been developed to predict the thermal, mechanical, and fission-gas behavior of a Gas-Cooled Fast Reactor (GCFR) fuel rod under normal operating conditions. It consists of three major components: thermal, mechanical, and fission-gas analysis. The thermal analysis includes calculations of coolant, cladding, and fuel temperature; fuel densification; pore migration; fuel grain growth; and plenum pressure. Fuel mechanical analysis includes thermal expansion, elasticity, creep, fission-product swelling, hot pressing, cracking, and crack healing of fuel; and thermal expansion, elasticity, creep, and irradiation-induced swelling of cladding. Fission-gas analysis simultaneously treats all major mechanisms thought to influence fission-gas behavior, which include bubble nucleation, re-solution, diffusion, migration, and coalescence; temperature and temperature gradients; and fission-gas interaction with structural defects.

  10. Positive predictive value between medical-chart body-mass-index category and obesity versus codes in a claims-data warehouse.

    Science.gov (United States)

    Caplan, Eleanor O; Kamble, Pravin S; Harvey, Raymond A; Smolarz, B Gabriel; Renda, Andrew; Bouchard, Jonathan R; Huang, Joanna C

    2018-01-01

    To evaluate the positive predictive value of claims-based V85 codes for identifying individuals with varying degrees of BMI relative to their measured BMI obtained from medical record abstraction. This was a retrospective validation study utilizing administrative claims and medical chart data from 1 January 2009 to 31 August 2015. Randomly selected samples of patients enrolled in a Medicare Advantage Prescription Drug (MAPD) or commercial health plan and with a V85 claim were identified. The claims-based BMI category (underweight, normal weight, overweight, obese class I-III) was determined via the corresponding V85 codes and compared to the BMI category derived from chart-abstracted height, weight and/or BMI. The positive predictive values (PPVs) of the claims-based BMI categories were calculated with the corresponding 95% confidence intervals (CIs). The overall PPVs (95% CIs) in the MAPD and commercial samples were 90.3% (86.3%-94.4%) and 91.1% (87.3%-94.9%), respectively. In each BMI category, the PPVs (95% CIs) for the MAPD and commercial samples, respectively, were: underweight, 71.0% (55.0%-87.0%) and 75.9% (60.3%-91.4%); normal, 93.8% (85.4%-100%) and 87.8% (77.8%-97.8%); overweight, 97.4% (92.5%-100%) and 93.5% (84.9%-100%); obese class I, 96.9% (90.9%-100%) and 97.2% (91.9%-100%); obese class II, 97.0% (91.1%-100%) and 93.0% (85.4%-100%); and obese class III, 85.0% (73.3%-96.1%) and 97.1% (91.4%-100%). BMI categories derived from administrative claims, when available, can be used successfully, particularly in the context of obesity research.
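
    The PPV figures above are proportions of chart-confirmed cases among claims-flagged cases, each with a 95% confidence interval. A minimal sketch of the calculation, assuming a Wald normal-approximation interval (the abstract does not state which interval the authors used) and hypothetical counts:

    ```python
    import math

    def ppv_with_ci(true_positives, total_flagged, z=1.96):
        """Positive predictive value with a Wald-style 95% CI.

        PPV = TP / (TP + FP). The interval type is an assumption;
        the study itself does not report its CI method here.
        """
        p = true_positives / total_flagged
        half = z * math.sqrt(p * (1 - p) / total_flagged)
        return p, max(0.0, p - half), min(1.0, p + half)

    # Hypothetical sample: 90 of 100 claims-based BMI categories confirmed by chart review.
    ppv, lo, hi = ppv_with_ci(90, 100)
    print(f"PPV {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")
    ```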

  11. Developing a Diagnostic Brain Imaging Protocol for Recent Onset Psychosis which Embodies Principles of Predictive Coding and Free Energy

    Directory of Open Access Journals (Sweden)

    Michael Breakspear

    2011-05-01

    Full Text Available Although schizophrenia is usually conceptualized as a disorder of well-formed hallucinations and delusions, its onset is more typically characterized by heightened arousal, a vague but pervasive feeling of unease and poorly formed misperceptions. This unstable constellation of symptoms challenges clinical diagnosis at a time when properly guided interventions are most likely to have the greatest long-term benefits. It is possible to frame this clinical picture as a disturbance in the capacity of the cortex to optimally form predictive models of the environment, to appropriately sample visual scenes in order to estimate the likelihood of such models, and hence to minimize surprise in a dynamic social landscape. This approach suggests that manipulating the relationship between visual search strategies and natural scene statistics might be a sensitive means of quantifying core neurobiological deficits early in the course of emerging psychotic disorders. We are developing a diagnostic tool for the prodromal phase of these disorders which involves experimentally disturbing the relationship between visual saccades and the stream of natural images in well-directed films. I will outline the conceptual and computational bases and the early experimental results obtained thus far, including a canonical example of paranoia and theory of mind in a spaghetti western.

  12. SPEEDI: a computer code system for the real-time prediction of radiation dose to the public due to an accidental release

    International Nuclear Information System (INIS)

    Imai, Kazuhiko; Chino, Masamichi; Ishikawa, Hirohiko

    1985-10-01

    SPEEDI, a computer code system for predicting environmental doses from radioactive materials accidentally released from a nuclear plant, has been developed to assist the organizations responsible for emergency planning. For realistic simulation, two models have been developed: one statistically predicts the basic wind data and then calculates the three-dimensional mass-consistent wind field by interpolating these predicted data; the other calculates the diffusion of released materials using a combined random-walk and PICK model. These calculations are carried out in conversational mode with a computer, so that the system can be used with ease in an emergency. SPEEDI also has versatile files, which make it easy to control the complicated flows of calculation. To attain a short computation time, a large-scale computer with a performance of 25 MIPS and a vector processor of up to 250 MFLOPS are used to run the models, giving quick responses. Simplified models are also prepared for calculation on the minicomputers widely used by local governments and research institutes, although the same precision as the above models cannot be expected. The present report outlines the structure and functions of SPEEDI, the methods for predicting the wind field, and the models for calculating the concentration of released materials in air and on the ground and the doses to the public. Some of the diffusion models have been compared with field experiments carried out as part of the SPEEDI development program. The report also discusses the reliability of the diffusion models on the basis of these comparisons, and shows that they can reasonably simulate diffusion in the internal boundary layer which commonly occurs near coastal regions. (J.P.N.)
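
    The random-walk part of such a dispersion calculation can be illustrated in a few lines: particles are advected by the wind and take Gaussian random steps whose size follows from an assumed eddy diffusivity. This is a generic textbook sketch, not SPEEDI's actual model; all parameter values are illustrative:

    ```python
    import random

    def random_walk_plume(n_particles=2000, n_steps=100, u=2.0, dt=10.0, k=5.0, seed=1):
        """Minimal 2-D random-walk dispersion sketch (not SPEEDI's model).

        Each particle is advected by a uniform wind u (m/s) along x and takes
        Gaussian random steps with eddy diffusivity k (m^2/s) in x and y.
        """
        rng = random.Random(seed)
        sigma = (2.0 * k * dt) ** 0.5          # step size from K-theory diffusion
        xs, ys = [0.0] * n_particles, [0.0] * n_particles
        for _ in range(n_steps):
            for i in range(n_particles):
                xs[i] += u * dt + rng.gauss(0.0, sigma)
                ys[i] += rng.gauss(0.0, sigma)
        return xs, ys

    xs, ys = random_walk_plume()
    mean_x = sum(xs) / len(xs)
    print(f"plume centre after travel: x = {mean_x:.0f} m")
    ```

    Binning the final particle positions onto a grid would give the air concentration field from which doses are then computed.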

  13. Rhythmic complexity and predictive coding

    DEFF Research Database (Denmark)

    Vuust, Peter; Witek, Maria A G

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity...

  14. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we also consider some relationships between coding partitions and varieties of codes.
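
    The UD property that coding partitions generalize can be decided for finite codes with the classical Sardinas-Patterson procedure. The sketch below is a textbook implementation of that test, not the paper's partition algorithm:

    ```python
    def is_uniquely_decipherable(code):
        """Sardinas-Patterson test for unique decipherability (UD).

        Iteratively builds sets of "dangling suffixes"; the code is not UD
        iff some dangling suffix is itself a codeword.
        """
        code = set(code)

        def residuals(a, b):
            # Suffixes left over when a word of `a` is a proper prefix of a word of `b`.
            return {y[len(x):] for x in a for y in b
                    if y.startswith(x) and len(y) > len(x)}

        dangling = residuals(code, code)
        seen = set()
        while dangling:
            if dangling & code:          # a dangling suffix is a codeword: ambiguous
                return False
            seen |= dangling
            dangling = (residuals(dangling, code) | residuals(code, dangling)) - seen
        return True

    print(is_uniquely_decipherable({"0", "01", "11"}))   # UD
    print(is_uniquely_decipherable({"0", "01", "10"}))   # ambiguous: "010" decodes two ways
    ```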

  15. Double blind post-test prediction for LOBI-MOD2 small break experiment A2-81 using RELAP5/MOD1/19 computer code as contribution to international CSNI-standardproblem no. 18

    International Nuclear Information System (INIS)

    Jacobs, G.; Mansoor, S.H.

    1986-06-01

    The first small break experiment A2-81 performed in the LOBI-MOD2 test facility formed the basis of the 18th international CSNI standard problem (ISP 18). As part of this exercise, a blind post-test prediction was performed using the light water reactor transient analysis code RELAP5/MOD1. This paper describes the preparation of the input model and summarizes the findings of the blind calculation, comparing the calculational results with the experimental data. The results show good agreement between prediction and experiment in the initial stage (up to 250 sec) of the transient and an adequate prediction of the global behaviour (thermal response of the core), which is important for safety-related considerations. However, the prediction confirmed some deficiencies of the models in the code concerning vertical and horizontal stratification, resulting in a high break mass flow and an erroneous distribution of mass over the primary loops. (orig.) [de]

  16. Long non-coding RNA XIST predicts worse prognosis in digestive system tumors: a systemic review and meta-analysis.

    Science.gov (United States)

    Liu, Xuefang; Ming, Xinliang; Jing, Wei; Luo, Ping; Li, Nandi; Zhu, Man; Yu, Mingxia; Liang, Chunzi; Tu, Jiancheng

    2018-05-11

    Aims: A growing number of studies indicate that the long non-coding RNA X-inactive specific transcript (XIST) is associated with the prognosis of cancer patients. However, the results have been disputed. Therefore, we aimed to further explore the prognostic value and clinical significance of XIST in various types of cancers. We then focused our research on comparing the predictive value of XIST between digestive system tumors and non-digestive system tumors. Methods: We performed a systematic search of PubMed, Embase, Cochrane Library, Web of Science, and Medline (up to January 3, 2018). Fifteen studies that matched our inclusion criteria, with a total of 920 patients for overall survival and 867 patients for clinicopathological characteristics, were included in this meta-analysis. Pooled hazard ratios (HRs) and odds ratios (ORs) with their corresponding 95% confidence intervals (95% CIs) were calculated to summarize the effects. Results: Our results suggested that high expression levels of XIST were associated with unfavorable overall survival in cancer patients (pooled HR=1.81, 95% CI: 1.45-2.26). Additionally, we found that XIST was more valuable in digestive system tumors (pooled HR=2.24, 95% CI: 1.73-2.92) than in non-digestive system tumors (pooled HR=1.22, 95% CI: 0.60-2.45). Moreover, elevated expression levels of XIST were associated with distant metastasis and tumor stage. Conclusion: XIST was correlated with poor prognosis, suggesting that XIST might serve as a novel predictive biomarker for cancer patients, especially patients with digestive system tumors. ©2018 The Author(s).
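
    Pooled hazard ratios like those reported above are conventionally obtained by inverse-variance weighting of per-study log HRs, with each standard error recovered from the published CI. The sketch below shows the standard fixed-effect calculation with hypothetical per-study inputs (the meta-analysis's actual study-level data are not reproduced here, and it may have used a random-effects model):

    ```python
    import math

    def pool_hazard_ratios(studies, z=1.96):
        """Fixed-effect inverse-variance pooling of hazard ratios.

        Each study is (HR, ci_low, ci_high); SE(log HR) is recovered from
        the CI width as (ln(hi) - ln(lo)) / (2 * z).
        """
        num = den = 0.0
        for hr, lo, hi in studies:
            se = (math.log(hi) - math.log(lo)) / (2 * z)
            w = 1.0 / se ** 2                      # inverse-variance weight
            num += w * math.log(hr)
            den += w
        mean = num / den
        half = z / math.sqrt(den)
        return math.exp(mean), math.exp(mean - half), math.exp(mean + half)

    # Hypothetical per-study HRs (95% CIs), for illustration only.
    hr, lo, hi = pool_hazard_ratios([(2.1, 1.4, 3.2), (1.6, 1.1, 2.4), (2.5, 1.5, 4.1)])
    print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```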

  17. Numerical prediction of pressure loss in tight-lattice rod bundle by use of 3-dimensional two-fluid model simulation code ACE-3D

    International Nuclear Information System (INIS)

    Yoshida, Hiroyuki; Takase, Kazuyuki; Suzuki, Takayuki

    2009-01-01

    The two-fluid model can simulate two-phase flow at a lower computational cost than detailed two-phase flow simulation methods such as the interface tracking method or the particle interaction method. The two-fluid model is therefore useful for thermal-hydraulic analysis in large-scale domains such as a rod bundle. The Japan Atomic Energy Agency (JAEA) has developed the three-dimensional two-fluid model analysis code ACE-3D, which adopts a boundary-fitted coordinate system in order to simulate flow channels of complex shape. In this paper, a boiling two-phase flow analysis in a tight-lattice rod bundle was performed with ACE-3D. In the results, the void fraction in the outermost region of the rod bundle is lower than that in the center region. The tendency of the void fraction distribution agreed qualitatively with measurement results obtained by neutron radiography. To evaluate the effects of the two-phase flow models used in ACE-3D, a numerical simulation of boiling two-phase flow in the tight-lattice rod bundle without the lift force model was also performed. The results show that the lift force model has a direct effect on the void fraction concentration in the gap region, and that the pressure distribution in the horizontal plane induced by the void fraction distribution causes bubble movement from the gap region to the subchannel region. The predicted pressure loss in the section that includes no spacer agreed with the experimental results to within about 10%. The predicted friction pressure loss was underestimated by around 20% of the measured values, and the turbulence model is considered one of the causes of this underestimation. (author)

  18. Improved atomic data for electron-transport predictions by the codes TIGER and TIGERP. I. Inner-shell ionization by electron collision

    International Nuclear Information System (INIS)

    Peek, J.M.; Halbleib, J.A.

    1983-01-01

    The inner-shell ionization data for electron-target collisions now in use in the TIGER and TIGERP electron-transport codes are extracted and compared with other data for these processes. The TIGER cross sections for K-shell ionization by electron collisions are found to be seriously in error for large-Z targets and incident electron energies greater than 1 MeV. A series of TIGER and TIGERP runs were carried out with and without improved K-shell electron ionization cross section data replacing that now in use. The relative importance of electron-impact and photon ionization of the various subshells was also extracted from these runs. In general, photon ionization dominated in the examples studied so the sensitivity of many predicted properties to errors in the electron-impact subshell ionization data was not large. However, some differences were found and, as all possible applications were not covered in this study, it is recommended that these electron-impact data now in TIGER and TIGERP be replaced. Cross section data for the processes under study are reviewed and those that are most suitable for this application are identified. 19 references, 9 figures, 2 tables

  19. Evaluation of proposed shallow-land burial sites using the PRESTO-II [Prediction of Radiation Effects from Shallow Trench Operations] methodology and code

    International Nuclear Information System (INIS)

    Fields, D.E.; Uslu, I.; Yalcintas, M.G.

    1987-01-01

    PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed to evaluate possible doses and risks (health effects) from shallow-land burial sites. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transport from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport and deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. The proposed waste disposal area in Koteyli, Balikesir, Turkey, has been evaluated using the PRESTO-II methodology. The results have been compared to those obtained for the Barnwell, South Carolina, site. Dose estimates for both sites are below regulatory limits, for the release and exposure scenarios considered. The doses for the sites are comparable, with slightly higher estimates obtained for the Turkish site. 7 refs., 1 tab

  20. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
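
    One simple quantitative figure of merit of the kind the article argues for is the root-mean-square fractional deviation between each code's predictions and the measurements, which immediately yields a ranking. This is an illustrative statistic with made-up data, not the article's own method:

    ```python
    def rms_fractional_error(predicted, measured):
        """RMS fractional deviation between code predictions and experiment;
        a simple figure of merit for ranking codes (illustrative choice).
        """
        terms = [((p - m) / m) ** 2 for p, m in zip(predicted, measured)]
        return (sum(terms) / len(terms)) ** 0.5

    measured = [100.0, 250.0, 400.0, 520.0]       # hypothetical experimental data
    codes = {
        "code_A": [98.0, 260.0, 385.0, 540.0],    # hypothetical predictions
        "code_B": [110.0, 230.0, 460.0, 470.0],
    }
    ranking = sorted(codes, key=lambda c: rms_fractional_error(codes[c], measured))
    for name in ranking:
        print(name, round(rms_fractional_error(codes[name], measured), 3))
    ```

    Unlike a bare "within 10%" statement, the same statistic computed for every competing code places them on a common scale.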

  1. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  2. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, for the reactor operating in a stationary state. To demonstrate that the model is also applicable during a transient, an event that occurred at a nuclear power plant with a BWR is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to predict the behavior of the reactor adequately. (Author)

  3. Dual Coding in Children.

    Science.gov (United States)

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  4. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  5. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  6. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. The original analog methods of telephony had the disadvantage of the speech signal being corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. The end-to-end performance of a digital link therefore becomes essentially independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech also became extremely important from a service-provision point of view. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
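
    A basic building block of waveform speech coding is companding: compressing the amplitude scale before uniform quantization so that quiet speech gets finer resolution. The sketch below implements the continuous mu-law curve in the style of ITU-T G.711; note that real G.711 codecs use a segmented 8-bit approximation rather than this exact formula:

    ```python
    import math

    def mu_law_encode(x, mu=255):
        """Continuous mu-law companding of a sample x in [-1, 1]."""
        return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

    def mu_law_decode(y, mu=255):
        """Inverse of mu_law_encode."""
        return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

    x = 0.1
    y = mu_law_encode(x)          # small amplitudes are expanded toward full scale
    roundtrip = mu_law_decode(y)
    print(round(y, 4), round(roundtrip, 6))
    ```

    Companding by itself is lossless; the coding gain appears once the companded value is quantized to a small number of bits.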

  7. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  8. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  9. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  10. Use of a finite difference code for the prediction of the ability of sub-floor ventilation strategies to reduce indoor radon concentrations

    International Nuclear Information System (INIS)

    Cohilis, P.; Wouters, P.; L'Heureux, D.

    1992-01-01

    This paper concerns the use of a numerical code, based on the finite difference method, for the evaluation of 222Rn mitigation strategies in dwellings. It is assumed that 222Rn transport from soil into a dwelling occurs mainly by pressure-driven air flow. The program calculates the pressure fields under the buildings, assuming laminar air flow in the soil and adopting the steady-state condition. The simple data structure of the code allows one to describe even complex configurations in an easy way. Clear alphanumerical and graphical outputs are delivered. The calculations presented in the paper illustrate the possibilities of the program. An interesting consequence of the linear assumption implicit in the equations of the model is considered, and a comparison with laboratory measurements is presented. (author)
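
    For steady laminar (Darcy) flow in homogeneous soil, the pressure field described above satisfies the Laplace equation, which a finite-difference code can solve by relaxation. The sketch below is a minimal Gauss-Seidel solver on a uniform grid; the grid size, boundary pressures and crack location are illustrative assumptions, not the paper's input format:

    ```python
    def solve_pressure_field(nx=20, ny=20, p_crack=-5.0, iters=2000):
        """Gauss-Seidel solution of the steady-state soil pressure field
        (Laplace equation for Darcy flow in homogeneous soil).

        Boundaries are held at atmospheric pressure (0 Pa gauge) except for
        a depressurized floor crack at the centre of the soil surface (row 0).
        """
        p = [[0.0] * nx for _ in range(ny)]
        p[0][nx // 2] = p_crack                     # fixed crack depressurization
        for _ in range(iters):
            for j in range(1, ny - 1):
                for i in range(1, nx - 1):
                    # 5-point Laplacian stencil: interior node = mean of neighbours
                    p[j][i] = 0.25 * (p[j - 1][i] + p[j + 1][i] +
                                      p[j][i - 1] + p[j][i + 1])
        return p

    field = solve_pressure_field()
    print(round(field[1][10], 3))   # pressure just below the crack
    ```

    Because the governing equations are linear, scaling the crack depressurization scales the whole pressure field proportionally, which is the consequence of the linear assumption the paper examines.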

  11. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy at Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed, evaluated and documented the Coding Class project in the period November 2016 to May 2017...

  12. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  13. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined.

  14. Network Coding

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 7. Network Coding. K V Rashmi Nihar B Shah P Vijay Kumar. General Article Volume 15 Issue 7 July 2010 pp 604-621. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621 ...

  15. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
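
    At the heart of any such transport code is free-flight sampling: drawing path lengths from an exponential distribution and tallying outcomes. The sketch below is a textbook analog Monte Carlo estimate of uncollided transmission through a slab, not MCNP's actual tracking algorithm; the cross section and thickness are illustrative:

    ```python
    import math, random

    def slab_transmission(mu=0.5, thickness=3.0, n=200_000, seed=7):
        """Analog Monte Carlo estimate of uncollided transmission through a slab.

        Path lengths are sampled from the exponential distribution
        s = -ln(xi) / mu; a particle is transmitted if s exceeds the
        slab thickness. The analytic answer is exp(-mu * thickness).
        """
        rng = random.Random(seed)
        hits = sum(1 for _ in range(n)
                   if -math.log(rng.random()) / mu > thickness)
        return hits / n

    estimate = slab_transmission()
    exact = math.exp(-0.5 * 3.0)
    print(f"MC {estimate:.4f} vs analytic {exact:.4f}")
    ```

    With 200,000 histories the statistical uncertainty is below 0.1%, which is why variance reduction, one of the capabilities listed above, matters so much for deeper penetration problems.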

  16. Expander Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 10; Issue 1. Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article Volume 10 ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science Bangalore 560 012, India.

  17. Development of a numerical code for the prediction of the long-term behavior of the underground facilities for the high-level radioactive waste disposal

    International Nuclear Information System (INIS)

    Sawada, Masataka; Okada, Tetsuji; Hasegawa, Takuma

    2006-01-01

    Complicated phenomena originated by thermo-hydro-mechanical coupling behavior will occur in the near-field of geological disposal of nuclear waste. Development of a numerical evaluation method for such phenomena is important in order to make a reasonable repository design and a safety assessment. In order to achieve the objective above, a numerical model using the equations which can evaluate the swelling characteristics of buffer materials based on the diffusive double layer theory is proposed, and a numerical scheme for the thermo-hydro-mechanical coupled analysis including the swelling model is constructed. The proposed swelling model can reproduce the behavior observed during both swelling pressure tests and swelling deformation tests. When the developed numerical code is applied to the laboratory heater test using a bentonite specimen, it can reproduce the thermal gradient, the distribution of saturation rate and the variation of porosity. The developed numerical code will be applied to well-controlled laboratory tests and full-scale in-situ tests in future work. In order to apply it to the various geochemical conditions around the engineered barrier, chemical components will be coupled to the present numerical code. (author)

  18. Genome-wide identification and functional prediction of nitrogen-responsive intergenic and intronic long non-coding RNAs in maize (Zea mays L.).

    Science.gov (United States)

    Lv, Yuanda; Liang, Zhikai; Ge, Min; Qi, Weicong; Zhang, Tifu; Lin, Feng; Peng, Zhaohua; Zhao, Han

    2016-05-11

    Nitrogen (N) is an essential and often limiting nutrient to plant growth and development. Previous studies have shown that the mRNA expressions of numerous genes are regulated by nitrogen supplies; however, little is known about the expressed non-coding elements, for example long non-coding RNAs (lncRNAs) that control the response of maize (Zea mays L.) to nitrogen. LncRNAs are a class of non-coding RNAs larger than 200 bp, which have emerged as key regulators in gene expression. In this study, we surveyed the intergenic/intronic lncRNAs in maize B73 leaves at the V7 stage under conditions of N-deficiency and N-sufficiency using ribosomal RNA depletion and ultra-deep total RNA sequencing approaches. By integration with mRNA expression profiles and physiological evaluations, 7245 lncRNAs and 637 nitrogen-responsive lncRNAs were identified that exhibited unique expression patterns. Co-expression network analysis showed that the nitrogen-responsive lncRNAs were enriched mainly in one of the three co-expressed modules. The genes in the enriched module are mainly involved in NADH dehydrogenase activity, oxidative phosphorylation and the nitrogen compounds metabolic process. We identified a large number of lncRNAs in maize and illustrated their potential regulatory roles in response to N stress. The results lay the foundation for further in-depth understanding of the molecular mechanisms of lncRNAs' role in response to nitrogen stresses.

  19. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
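
    A far simpler relative of the two-group diffusion problem PANDA solves is the one-group, one-dimensional bare slab, which can be sketched with finite differences and power iteration (cross sections and dimensions below are illustrative, not taken from PANDA):

```python
import math
import numpy as np

# One-group, 1-D slab diffusion eigenvalue problem: A*phi = (1/k) * F*phi,
# with A the loss operator -D d2/dx2 + sig_a and F = nu_sig_f (fission source).
D, sig_a, nu_sig_f = 1.0, 0.1, 0.12     # diffusion coeff., absorption, production
a, n = 100.0, 200                        # slab width (cm), interior mesh points
h = a / (n + 1)

A = (np.diag([2 * D / h**2 + sig_a] * n)
     + np.diag([-D / h**2] * (n - 1), 1)
     + np.diag([-D / h**2] * (n - 1), -1))

phi, k = np.ones(n), 1.0
for _ in range(300):                     # power iteration on A^-1 * F
    phi_new = np.linalg.solve(A, nu_sig_f * phi)
    k = phi_new.sum() / phi.sum()        # eigenvalue estimate
    phi = phi_new / phi_new.max()        # renormalize the flux shape

# Analytic bare-slab result with buckling B = pi/a, for comparison.
k_exact = nu_sig_f / (sig_a + D * (math.pi / a) ** 2)
```

    The iterated k converges to the fundamental eigenvalue and phi to the fundamental cosine-shaped mode; a production code layers energy groups, feedback, depletion, and search logic over this same eigenvalue kernel.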

  20. Comparison of a fission-gas effects in a transient overpower test (HUT 5-7A) to FRAS3 code predictions

    International Nuclear Information System (INIS)

    Gruber, E.E.; Randklev, E.H.

    1979-01-01

    Fission gas has an important bearing on fuel dynamics during reactor transients. Fission-gas bubble sizes and densities, both within grains and on grain boundaries, are characterized as functions of radial location at the axial midplane in studies of PNL-9 fuel microstructures before and after the HUT 5-7A (PNL 9-25) TREAT test. The FRAS3 code, being developed to model fission-gas effects in reactor transients, is applied to the analysis of the HUT 5-7A test; the results are presented to illustrate the observed phenomena and the validity of the modeling approach

  1. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  2. Data analysis and analytical predictions of a steam generator tube bundle flow field for verification of 2-D T/H computer code

    International Nuclear Information System (INIS)

    Hwang, J.Y.; Reid, H.C.; Berringer, R.

    1981-01-01

    Analytical predictions of the flow field within a 60 deg segment flow model of a proposed sodium heated steam generator are compared to experimental results obtained from several axial levels between baffling. The axial/crossflow field is developed by use of alternating multi-ported baffling, accomplished by radial perforation distribution. Radial and axial porous model predictions from an axisymmetric computational analysis are compared to intra-pitch experimental data at the mid baffle span location for various levels. The analytical model utilizes a cylindrical, axisymmetric, finite difference formulation, solving the conservation of mass and momentum equations. 6 refs

  3. Overexpression of long non-coding RNA TUG1 predicts poor prognosis and promotes cancer cell proliferation and migration in high-grade muscle-invasive bladder cancer.

    Science.gov (United States)

    Iliev, Robert; Kleinova, Renata; Juracek, Jaroslav; Dolezel, Jan; Ozanova, Zuzana; Fedorko, Michal; Pacik, Dalibor; Svoboda, Marek; Stanik, Michal; Slaby, Ondrej

    2016-10-01

    Long non-coding RNA TUG1 is involved in the development and progression of a variety of tumors. Little is known about TUG1 function in high-grade muscle-invasive bladder cancer (MIBC). The aims of our study were to determine expression levels of long non-coding RNA TUG1 in tumor tissue, to evaluate its relationship with clinico-pathological features of high-grade MIBC, and to describe its function in MIBC cells in vitro. TUG1 expression levels were determined in paired tumor and adjacent non-tumor bladder tissues of 47 patients with high-grade MIBC using real-time PCR. Cell line T-24 and siRNA silencing were used to study the TUG1 function in vitro. We observed significantly increased levels of TUG1 in tumor tissue in comparison to adjacent non-tumor bladder tissue. TUG1 levels were significantly increased in metastatic tumors (P = 0.0147) and were associated with shorter overall survival of MIBC patients (P = 0.0241). TUG1 silencing in vitro led to a 34 % decrease in cancer cell proliferation (P = 0.0004) and a 23 % reduction in the migration capacity of cancer cells. No effect of TUG1 silencing on cell cycle distribution or the number of apoptotic cells was observed. Our study confirmed overexpression of TUG1 in MIBC tumor tissue and described its association with worse overall survival in high-grade MIBC patients. Together with the in vitro observations, these data suggest an oncogenic role of TUG1 and its potential usage as a biomarker or therapeutic target in MIBC.

  4. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  5. Prediction of the effects of soil-based countermeasures on soil solution chemistry of soils contaminated with radiocesium using the hydrogeochemical code PHREEQC.

    Science.gov (United States)

    Hormann, Volker; Kirchner, Gerald

    2002-04-22

    For agriculturally used areas, which are contaminated by the debris from a nuclear accident, the use of chemical amendments (e.g. potassium chloride and lime) is among the most common soil-based countermeasures. These countermeasures are intended to reduce the plant uptake of radionuclides (mainly 137Cs and 90Sr) by competitive inhibition by chemically similar ions. So far, the impacts of countermeasures on soil solution composition - and thus, their effectiveness - have almost exclusively been established experimentally, since they depend on the mineral composition and chemical characteristics of the soil affected. In this study, which focuses on caesium contamination, the well-established code PHREEQC was used as a geochemical model to calculate the changes in the ionic compositions of soil solutions which result from the application of potassium or ammonium in batch equilibrium experiments. The simple ion exchange model used by PHREEQC was improved by taking into account selective sorption of Cs+, NH4+ and K+ by clay minerals. Calculations were performed with three different initial soil solution compositions, corresponding to particular soil types (loam, sand, peat). For loamy and sandy soils, our calculational results agree well with experimental data reported by Nisbet (Effectiveness of soil-based countermeasures six months and one year after contamination of five diverse soil types with caesium-134 and strontium-90. Contract Report NRPB-M546, National Radiation Protection Board, Chilton, 1995). For peat, discrepancies were found, indicating that for organic soils a reliable set of exchange constants of the relevant cations still has to be determined experimentally. For cesium, however, these discrepancies almost disappeared if selective sites were assumed to be inaccessible. Additionally, results of sensitivity analyses are presented by which the influence of the main soil parameters on Cs+ concentrations in solution after soil treatment has been systematically
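
    The mass-action competition that such countermeasure calculations rest on can be illustrated with a toy binary Cs+/K+ exchange (a sketch only; PHREEQC solves the full multicomponent problem, and the selectivity coefficient and concentrations here are invented for illustration):

```python
def cs_fraction_on_exchanger(k_sel, cs_conc, k_conc):
    """Equivalent fraction of Cs+ on the exchanger for a binary,
    homovalent Cs+/K+ exchange, from the mass-action relation
    E_Cs / E_K = k_sel * [Cs+] / [K+]."""
    r = k_sel * cs_conc / k_conc
    return r / (1.0 + r)

# Raising the potassium concentration tenfold (mimicking a KCl amendment)
# displaces caesium from the exchange sites into solution.
before = cs_fraction_on_exchanger(k_sel=1000.0, cs_conc=1e-8, k_conc=1e-4)
after = cs_fraction_on_exchanger(k_sel=1000.0, cs_conc=1e-8, k_conc=1e-3)
```

    Even this two-ion caricature reproduces the qualitative effect studied above: more competing K+ in solution means a smaller sorbed caesium fraction, i.e. a shift of Cs+ between exchanger and soil solution.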

  6. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new representations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
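
    The basic sparse-approximation step shared by supervised and semi-supervised variants can be sketched with plain ISTA for the lasso (this is not the authors' unified objective; the dictionary and parameters are invented):

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Toy dictionary: x equals the first codeword, so the learned sparse code
# should concentrate on the first coefficient.
D = np.array([[1.0, 0.0, 0.7],
              [0.0, 1.0, 0.7]])
x = np.array([1.0, 0.0])
z = ista_sparse_code(D, x, lam=0.05)
```

    The resulting z is a sparse "new representation" of x; the semi-supervised scheme described above additionally couples such codes to label propagation and a linear classifier in one objective.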

  8. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  9. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including the simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is suitable to start to run simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for Space Weather Program) is discussed.

  10. Comparison report of the OECD/CSNI international standard problem 21 (Piper-one experiment PO-SB-7). Volume 1 comparison report. Volume 2 evaluation of code accuracy in the prediction of ISP 21

    International Nuclear Information System (INIS)

    1989-11-01

    The present report deals with the comparison of 6 blind predictions, submitted by 5 participants, and the experimental results measured during the test PO-SB-7 performed in the PIPER-ONE facility. The PIPER-ONE apparatus is an experimental simulator of a General Electric BWR. The test PO-SB-7 simulates a SB-LOCA originated by a break in one recirculation line of the reference BWR-6 plant, without intervention of high pressure ECCS. The overall activity constitutes the CSNI ISP-21. The main parts of the report are: a) outline of the test facility and of the PO-SB-7 experiment; b) overview of input models used by participants; c) evaluation of participant predictions on the basis of one-by-one comparison with selected experimental trends; d) evaluation of present code capabilities and accuracy, on the basis of the overall comparison between measured data and participants' double-blind predictions. Finally, a judgement is given in relation to the overall value of the activity

  11. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion of organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology
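
    The two-stage lookup described above can be sketched as follows (the dictionaries are hypothetical miniatures invented for illustration, not actual ACR tables):

```python
# Hypothetical mini-dictionaries illustrating the two-stage ACR lookup:
# the organ code is resolved first, and its first digit selects the
# pathology table, mirroring the file organization described above.
ORGAN_CODES = {
    "skull": "1",
    "lung": "6",
}
PATHOLOGY_CODES = {
    "1": {"fracture": "41"},
    "6": {"pneumonia": "211"},
}

def acr_code(organ: str, pathology: str) -> str:
    organ_code = ORGAN_CODES[organ]
    path_table = PATHOLOGY_CODES[organ_code[0]]
    return f"{organ_code}.{path_table[pathology]}"

print(acr_code("lung", "pneumonia"))  # -> 6.211
```

    Decoding is the same mapping run in reverse, which is why a single 'User's Defined Function'-style routine can serve both directions.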

  12. Coupled thermohydromechanical analysis of a heater test in unsaturated clay and fractured rock at Kamaishi Mine. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rutqvist, J.; Noorishad, J.; Tsang, C.F. [Lawrence Berkeley National Lab., Berkeley, CA (United States). Earth Sciences Division

    1999-08-01

    The recent interest in coupled thermohydromechanical (THM) processes associated with geological disposal of spent nuclear fuel, and in particular the issue of resaturation of a clay buffer around a waste canister, has encouraged major development of the finite element computer program ROCMAS in the past three years. The main objective is to develop a tool for analysis of THM processes in practical field scale, including fractured rock masses and detailed behavior of the near-field, nonisothermal and unsaturated system composed of rock fractures and clay buffer. In this report, the ROCMAS code is presented and applied for modeling of coupled THM processes in small laboratory samples of bentonite clay as well as a large in situ THM experiment in fractured rocks, at Kamaishi Mine, Japan. The fundamental responses of a bentonite clay material were investigated in a number of laboratory tests, including suction tests, infiltration tests, thermal gradient tests, and swelling pressure tests. These laboratory tests are modeled with ROCMAS for determination of material properties and for validation of the newly implemented algorithms. The ROCMAS code is also applied for modeling of a 3-year in situ heater experiment conducted in fractured hard rock, which consists of a heater-clay buffer system and simulates a nuclear waste repository. The temperature of the heater was set to 100 deg C during 8.5 months followed by a 6-month cooling period. The bentonite and the rock surrounding the heater were extensively instrumented for monitoring of temperature, moisture content, fluid pressure, stress, strain, and displacements. An overall good agreement between the modeling and measured results, both against the laboratory experiments and the in situ heater test, indicates that the THM responses in fractured rock and bentonite are well represented by the coupled numerical model, ROCMAS. In addition, robustness and applicability of ROCMAS to practical scale problems is demonstrated

  13. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
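
    The concatenation idea can be sketched with deliberately tiny stand-ins for both layers (a single-parity outer code and a 3-fold repetition inner code, rather than the interleaved RS(255,223) system studied in the task):

```python
def inner_encode(byte):
    # Inner code: 3-fold repetition of each of the 8 bits.
    bits = [(byte >> i) & 1 for i in range(8)]
    return [b for bit in bits for b in (bit, bit, bit)]

def inner_decode(bits):
    # Majority vote over each triple corrects any single flipped bit.
    byte = 0
    for i in range(8):
        if sum(bits[3 * i: 3 * i + 3]) >= 2:
            byte |= 1 << i
    return byte

def outer_encode(msg):
    # Outer code: one XOR parity symbol appended to the message.
    parity = 0
    for b in msg:
        parity ^= b
    return msg + [parity]

msg = [0x4F, 0x12]
codeword = [inner_encode(b) for b in outer_encode(msg)]
codeword[0][5] ^= 1                               # channel flips one bit
received = [inner_decode(b) for b in codeword]
```

    The inner code cleans up channel bit errors before the outer code ever sees them; the study summarized above plays the same game with modulation block codes inside and a Reed-Solomon code outside.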

  14. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
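
    The static Shannon code underlying the dynamic variant assigns symbol i a codeword of length ⌈-log2 p_i⌉, taken from the binary expansion of the cumulative probability; a minimal sketch:

```python
import math

def shannon_code(probs):
    """Static Shannon code: sort symbols by decreasing probability and give
    symbol i the first ceil(-log2 p_i) bits of the binary expansion of the
    cumulative probability F_i."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in items:
        length = math.ceil(-math.log2(p))
        f, bits = cum, []
        for _ in range(length):          # extract `length` bits of F_i
            f *= 2
            bit = int(f)
            bits.append(str(bit))
            f -= bit
        code[sym] = "".join(bits)
        cum += p
    return code

code = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
```

    The resulting codewords are prefix-free with lengths within one bit of the self-information of each symbol; the dynamic algorithm summarized above maintains such a code as the probability estimates evolve.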

  15. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  16. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
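
    For comparison, the generator-matrix/parity-check-matrix characterization over a classical finite field looks as follows (the binary [7,4] Hamming code over GF(2); the hyperfield construction itself is not reproduced here):

```python
import numpy as np

# Systematic generator and parity-check matrices of the binary [7,4]
# Hamming code: G = [I | P], H = [P^T | I]; all arithmetic is mod 2.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(msg):
    return (msg @ G) % 2

def syndrome(word):
    return (word @ H.T) % 2

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
assert not syndrome(cw).any()   # valid codewords have zero syndrome
cw[2] ^= 1                      # flip one bit
assert syndrome(cw).any()       # the parity-check matrix detects the error
```

    The defining identity G H^T = 0 (mod 2) is exactly the relation the paper generalizes to codes over Krasner hyperfields.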

  17. Expression profiles analysis of long non-coding RNAs identified novel lncRNA biomarkers with predictive value in outcome of cutaneous melanoma.

    Science.gov (United States)

    Ma, Xu; He, Zhijuan; Li, Ling; Yang, Daping; Liu, Guofeng

    2017-09-29

    Recent advancements in cancer biology have identified a large number of lncRNAs that show dysregulated expression in the development and tumorigenesis of cancers, highlighting the importance of lncRNAs as key players for human cancers. However, the prognostic value of lncRNAs still remains unclear and needs to be further investigated. In the present study, we aim to assess the prognostic value of lncRNAs in cutaneous melanoma by integrating lncRNA expression profiles from the TCGA database and matched clinical information from a large cohort of patients with cutaneous melanoma. We finally identified a set of six lncRNAs that are significantly associated with survival of patients with cutaneous melanoma. A linear combination of six lncRNAs (LINC01260, HCP5, PIGBOS1, RP11-247L20.4, CTA-292E10.6 and CTB-113P19.5) was constructed as a six-lncRNA signature which classified patients of the training cohort into a high-risk group and a low-risk group with significantly different survival times. The prognostic value of the six-lncRNA signature was validated in both the validation cohort and the entire TCGA cohort. Moreover, the six-lncRNA signature is independent of known clinicopathological factors by multivariate Cox regression analysis and demonstrated good performance for predicting three- and five-year overall survival by time-dependent receiver operating characteristic (ROC) analysis. Our study provides novel insights into the molecular heterogeneity of cutaneous melanoma and also shows potentially important implications of lncRNAs for prognosis and therapy for cutaneous melanoma.

  18. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
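
    A toy round of random linear network coding over GF(2) (scalar bit coefficients rather than the L x L vector case discussed above; the receiver recovers the source packets by inverting the collected coefficient matrix with Gaussian elimination):

```python
import numpy as np

rng = np.random.default_rng(1)

def gf2_solve(A, B):
    """Solve A X = B over GF(2) by Gauss-Jordan elimination (A invertible)."""
    A, B = A.copy() % 2, B.copy() % 2
    n = A.shape[0]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r, col])
        A[[col, pivot]] = A[[pivot, col]]    # bring a pivot row into place
        B[[col, pivot]] = B[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:       # eliminate the column elsewhere
                A[r] ^= A[col]
                B[r] ^= B[col]
    return B

# Three source packets of 8 bits each.
packets = rng.integers(0, 2, size=(3, 8))
# Draw random coefficient vectors until the matrix is invertible over GF(2)
# (an odd integer determinant means invertible mod 2).
while True:
    coeffs = rng.integers(0, 2, size=(3, 3))
    if round(np.linalg.det(coeffs)) % 2:
        break
coded = (coeffs @ packets) % 2               # coded packets sent on the network
decoded = gf2_solve(coeffs, coded)
```

    In a real multicast, each coded packet carries its coefficient vector in a header; the algorithms summarized above choose such coefficients deterministically while also minimizing the field size.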

  19. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  20. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  1. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  2. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  3. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  4. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  5. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  6. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
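
    The peeling decoder that LT and Raptor codes rely on can be sketched with handcrafted degree-1/degree-2 coded symbols (real LT codes draw degrees from an optimized distribution such as the robust soliton; the payloads below are invented):

```python
def lt_decode(coded, k):
    """Peeling decoder for LT-style codes: repeatedly take a degree-1 coded
    symbol, recover its source block, and XOR it out of every other coded
    symbol that contains it."""
    eqs = [(set(idxs), val) for idxs, val in coded]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for idxs, val in eqs:
            if len(idxs) == 1:                 # degree-1: block recovered
                known[next(iter(idxs))] = val
                progress = True
        reduced = []
        for idxs, val in eqs:                  # substitute recovered blocks
            for i in [i for i in idxs if i in known]:
                idxs.discard(i)
                val ^= known[i]
            if idxs:
                reduced.append((idxs, val))
        eqs = reduced
    return known

blocks = [5, 9, 12, 7]                         # four source blocks (small ints)
coded = [
    ({0}, blocks[0]),                          # degree-1 symbol seeds the ripple
    ({0, 1}, blocks[0] ^ blocks[1]),
    ({1, 2}, blocks[1] ^ blocks[2]),
    ({2, 3}, blocks[2] ^ blocks[3]),
]
decoded = lt_decode(coded, k=4)
```

    Each recovered block turns some degree-2 symbol into a new degree-1 symbol, so decoding "ripples" through the chain; the feedback schemes summarized above reshape the degree distribution to keep this ripple alive with fewer coded symbols.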

  7. Predicting VQ Performance Bound for LSF Coding

    OpenAIRE

    Chatterjee, Saikat; Sreenivas, TV

    2008-01-01

    For vector quantization (VQ) of speech line spectrum frequency (LSF) parameters, we experimentally determine a mapping function between the mean square error (MSE) measure and the perceptually motivated average spectral distortion (SD) measure. Using the mapping function, we estimate the minimum bits/vector required for transparent quantization of telephone-band and wide-band speech LSF parameters, respectively, as 22 bits/vector and 36 bits/vector, where the distribution of LSF vector is mod...
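
    The MSE-trained vector quantizer that such bits/vector estimates presuppose can be sketched with Lloyd (k-means) codebook training (synthetic 2-D data and a small codebook for illustration, not actual LSF vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_vq(data, n_codewords, n_iter=20):
    """Lloyd (k-means) training of a VQ codebook under the MSE measure."""
    codebook = data[rng.choice(len(data), n_codewords, replace=False)]
    for _ in range(n_iter):
        # Nearest-codeword assignment (squared Euclidean distance).
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(n_codewords):
            if (labels == j).any():
                codebook[j] = data[labels == j].mean(0)
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.argmin(1)

data = rng.normal(size=(500, 2))
codebook, labels = train_vq(data, 8)
mse = ((data - codebook[labels]) ** 2).sum(1).mean()   # MSE distortion
```

    The study summarized above goes one step further: it maps this MSE distortion to the perceptual spectral-distortion measure, which is what allows the transparent-quantization bit rates to be read off without exhaustive listening tests.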

  8. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  9. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills

  10. Do circulating long non-coding RNAs (lncRNAs) (LincRNA-p21, GAS 5, HOTAIR) predict the treatment response in patients with head and neck cancer treated with chemoradiotherapy?

    Science.gov (United States)

    Fayda, Merdan; Isin, Mustafa; Tambas, Makbule; Guveli, Murat; Meral, Rasim; Altun, Musa; Sahin, Dilek; Ozkan, Gozde; Sanli, Yasemin; Isin, Husniye; Ozgur, Emre; Gezer, Ugur

    2016-03-01

    Long non-coding RNAs (lncRNAs) have been shown to be aberrantly expressed in head and neck cancer (HNC). The aim of the present study was to evaluate plasma levels of three lncRNA molecules (lincRNA-p21, GAS5, and HOTAIR) in the treatment response in HNC patients treated with radical chemoradiotherapy (CRT). Forty-one patients with HNC were enrolled in the study. Most of the patients had nasopharyngeal carcinoma (n = 27, 65.9 %) and locally advanced disease. Blood was drawn at baseline and treatment evaluation 4.5 months after therapy. lncRNAs in plasma were measured by semiquantitative PCR. Treatment response was evaluated according to clinical examination, RECIST and PERCIST criteria based on magnetic resonance imaging (MRI), and positron emission tomography with computed tomography (PET/CT) findings. Complete response (CR) rates were 73.2, 36.6, and 50 % for clinical investigation, PET/CT-, or MRI-based response evaluation, respectively. Predictive value of lncRNAs was investigated in patients with CR vs. those with partial response (PR)/progressive disease (PD). We found that post-treatment GAS5 levels in patients with PR/PD were significantly higher compared with patients with CR based on clinical investigation (p = 0.01). Receiver operator characteristic (ROC) analysis showed that at a cutoff value of 0.3 of GAS5, sensitivity and specificity for clinical tumor response were 82 and 77 %, respectively. Interestingly, pretreatment GAS5 levels were significantly increased in patients with PR/PD compared to those with CR upon MRI-based response evaluation (p = 0.042). In contrast to GAS5, pretreatment or post-treatment lincRNA-p21 and HOTAIR levels were not informative for treatment response. Our results suggest that circulating GAS5 could be a biomarker in predicting treatment response in HNC patients.

  11. Comparison between MAAP and ECART predictions of radionuclide transport throughout a French standard PWR reactor coolant system; Transport des radionucleides dans le circuit primaire d`un REP: comparaison des codes MAAP et ECART

    Energy Technology Data Exchange (ETDEWEB)

    Hervouet, C.; Ranval, W. [Electricite de France (EDF), 92 - Clamart (France); Parozzi, F.; Eusebi, M. [Ente Nazionale per l`Energia Elettrica, Rome (Italy)

    1996-04-01

    In the framework of a collaboration agreement between EDF and ENEL, the MAAP (Modular Accident Analysis Program) and ECART (ENEL Code for Analysis of radionuclide Transport) predictions about the fission product retention inside the reactor cooling system of a French PWR 1300 MW during a small Loss of Coolant Accident were compared. The volatile fission products CsI, CsOH, TeO{sub 2} and the structural materials, all of them released early by the core, are more retained in MAAP than in ECART. On the other hand, the non-volatile fission products, released later, are more retained in ECART than in MAAP, because MAAP does not take into account diffusion-phoresis: in fact, this deposition phenomenon is very significant when the molten core vaporizes the water of the vessel lower plenum. Centrifugal deposition in bends, that can be modeled only with ECART, slightly increases the whole retention in the circuit if it is accounted for. (authors). 18 refs., figs., tabs.

  12. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress that the workgroup on Low-Density Parity-Check (LDPC) for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  13. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  14. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
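
    The role of the parity check matrix can be illustrated with a toy example. The matrix and codeword below are illustrative, not drawn from any standard LDPC construction: a word is a codeword exactly when every check row sums to zero over GF(2), and a nonzero syndrome flags (and helps locate) errors.

```python
# A tiny sparse parity-check matrix H: each row is one check equation,
# each column one code bit. Real LDPC matrices are much larger but
# equally sparse, which is what makes iterative decoding cheap.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, word):
    """Each syndrome bit is the GF(2) sum of the word bits checked by a row."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]            # satisfies all three checks
flipped = [1 - codeword[0]] + codeword[1:]  # single bit error in position 0

s_good = syndrome(H, codeword)           # all-zero syndrome
s_bad = syndrome(H, flipped)             # failing checks point at the error
```

    Here the error in bit 0 violates exactly the checks (rows 1 and 3) that involve bit 0, which is the information a message-passing decoder propagates on the code's graph.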

  15. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  16. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  17. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  18. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, aiming to improve the side information of a single...

  19. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
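
    The difference-of-Gaussian bandpass behaviour identified in the small-signal model can be sketched in one dimension. The kernel widths and the test signal below are illustrative choices, not parameters from the paper: a narrow "center" Gaussian minus a wide "surround" Gaussian suppresses slowly varying irradiance while responding strongly at edges.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized, truncated 1-D Gaussian kernel."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct convolution with replicated (clamped) edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += k * signal[idx]
        out.append(acc)
    return out

def dog_filter(signal, sigma_c=1.0, sigma_s=3.0, radius=9):
    """Difference-of-Gaussian bandpass: center response minus surround response."""
    c = convolve(signal, gaussian_kernel(sigma_c, radius))
    s = convolve(signal, gaussian_kernel(sigma_s, radius))
    return [a - b for a, b in zip(c, s)]

step = [0.0] * 12 + [1.0] * 12     # an "edge" in irradiance
resp = dog_filter(step)
```

    The response is zero in the flat regions and forms an antisymmetric positive/negative lobe pair at the edge, which is the edge-and-contrast information the Wiener restoration stage then works from.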

  1. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  2. International assessment of PCA codes

    International Nuclear Information System (INIS)

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE

  3. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC computer codes and their auxiliaries from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of the MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) translation procedures from the CDC version to the FACOM M-200 are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)
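
    The particle-in-cell concentration estimate used by ADPIC can be sketched in a one-dimensional toy. The grid, wind speed, and particle mass below are illustrative values; the real code works in three dimensions with the mass-consistent wind field supplied by MATHEW.

```python
# A schematic particle-in-cell step: particles are advected by a wind
# field, then binned into grid cells to estimate concentration.

def pic_concentration(positions, wind, dt, dx, n_cells, mass=1.0):
    """Advect particles one step, then accumulate mass per unit length per cell."""
    moved = [x + wind * dt for x in positions]
    conc = [0.0] * n_cells
    for x in moved:
        cell = int(x // dx)
        if 0 <= cell < n_cells:            # particles leaving the grid are dropped
            conc[cell] += mass / dx
    return moved, conc

pos = [0.5, 1.5, 2.5]                      # initial particle positions (m)
pos, conc = pic_concentration(pos, wind=2.0, dt=1.0, dx=1.0, n_cells=10)
```

    Because concentration is carried by particles rather than by the grid, the method avoids the numerical diffusion of grid-based advection schemes, which is its main attraction for dispersion modelling.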

  4. Comparison of sodium aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.

    1984-01-01

    Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)

  5. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  6. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  7. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)
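
    A minimal sketch of the statistics-of-extremes idea behind such a code (not SEVERO's actual method, which the abstract does not document) is a Gumbel fit to annual maxima followed by a return-level estimate. The wind data below are made up for illustration.

```python
import math
import statistics

def gumbel_fit(maxima):
    """Method-of-moments Gumbel fit for a sample of annual maxima."""
    scale = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    loc = statistics.mean(maxima) - 0.5772 * scale   # Euler-Mascheroni constant
    return loc, scale

def return_level(loc, scale, T):
    """Value exceeded on average once every T years under the Gumbel model."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))

winds = [24.1, 27.3, 22.8, 30.2, 25.6, 28.9, 23.4, 26.7]  # illustrative annual maxima (m/s)
loc, scale = gumbel_fit(winds)
x50 = return_level(loc, scale, 50.0)                       # 50-year design wind
```

    The 50-year return level extrapolates beyond the observed maxima, which is precisely what hazard risk analysis for extreme winds or precipitation requires.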

  8. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  9. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
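
    The least-squares combination of correlated data that FERRET generalizes can be sketched for the simplest case: two correlated measurements of one quantity. The numbers are illustrative; FERRET itself handles much larger, structured evaluation problems, but the same inverse-covariance weighting produces both the estimate and its quantitative uncertainty.

```python
# Best linear unbiased estimate of a common mean from two correlated
# measurements, via the inverse of the 2x2 covariance matrix.

def combine_two(y1, y2, var1, var2, cov):
    """Return (estimate, variance of estimate) for a common mean."""
    det = var1 * var2 - cov * cov
    w1 = (var2 - cov) / det        # row sums of the inverse covariance
    w2 = (var1 - cov) / det
    wsum = w1 + w2
    est = (w1 * y1 + w2 * y2) / wsum
    var_est = 1.0 / wsum           # quantitative uncertainty of the result
    return est, var_est

# Two measurements of the same quantity with equal variance and
# positive correlation (illustrative values):
est, var_est = combine_two(10.0, 12.0, 1.0, 1.0, 0.5)
```

    Note that the positive correlation raises the combined variance to 0.75, above the 0.5 an uncorrelated average would give, illustrating why the "proper treatment of uncertainties and correlations" the abstract emphasizes matters.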

  10. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  11. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...

  12. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this packages provides...

  13. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  14. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  15. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    textabstractTwo key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  16. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by the WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by the WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls
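
    The one-dimensional advective transport both codes approximate can be sketched with an explicit upwind step; the grid spacing, velocity, and time step below are illustrative, and neither STREAM's algebraic solution nor WASP5's actual scheme is reproduced here. Upwind differencing of this kind remains non-oscillatory for Courant numbers at or below one, in contrast to the spurious oscillations the original STREAM approximation produced for long releases.

```python
# One explicit upwind update for dC/dt + u dC/dx = 0 on a uniform grid,
# with a fixed concentration held at the upstream boundary cell.

def upwind_step(c, u, dt, dx):
    """Advance concentrations one time step; stable for 0 <= u*dt/dx <= 1."""
    cr = u * dt / dx                                   # Courant number
    return [c[i] - cr * (c[i] - c[i - 1]) if i > 0 else c[0]
            for i in range(len(c))]

c = [1.0, 0.0, 0.0, 0.0]     # pollutant held at the upstream cell
for _ in range(2):
    c = upwind_step(c, u=1.0, dt=0.5, dx=1.0)
```

    The pollutant front moves downstream monotonically (no over- or undershoot), at the cost of some numerical smearing of the front.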

  17. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  18. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  19. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth of the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of the delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.

  20. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. with the codes listed alphabetically, and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and the standardisation of terminology. The thesaurus-like nature of this 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring binder which can be updated by an organised (updating) service. (author)

  1. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  2. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  3. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.

  4. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue-initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed.

  5. Analytical validation of the CACECO containment analysis code

    International Nuclear Information System (INIS)

    Peak, R.D.

    1979-08-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. This report covers the verification of the CACECO code by problems that can be solved by hand calculations or by reference to textbook and literature examples. The verification concentrates on the accuracy of the material and energy balances maintained by the code and on the independence of the four cells analyzed by the code so that the user can be assured that the code analyses are numerically correct and independent of the organization of the input data submitted to the code

  6. Users' guide to CACECO containment analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Peak, R.D.

    1979-06-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. The code is included in the National Energy Software Center Library at Argonne National Laboratory as Program No. 762. This users' guide describes the CACECO code and its data input requirements. The code description covers the many mathematical models used and the approximations used in their solution. The descriptions are detailed to the extent that the user can modify the code to suit his unique needs, and, indeed, the reader is urged to consider code modification acceptable.

  7. APR1400 Containment Simulation with CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Chung, Bub Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large-break loss-of-coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR-1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied for the coupled code calculation of MARS-KS/CONTAIN.

  8. APR1400 Containment Simulation with CONTAIN code

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Chung, Bub Dong

    2010-01-01

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large-break loss-of-coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR-1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied for the coupled code calculation of MARS-KS/CONTAIN.

  9. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M H; Hur, C; Kim, D K; Cho, H J [Pohang Univ. of Science and Technology, Pohang (Korea, Republic of)

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of the MELCOR code regarding the progression of severe accidents in light water reactors, and also to evaluate the appropriateness of the thermal-hydraulic models used in the MELCOR code. Comparison of experimental results with MELCOR calculations is carried out to achieve the above objective. Specifically, the comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  10. Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code

    International Nuclear Information System (INIS)

    Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.

    1978-12-01

    A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A

  11. NASA space radiation transport code development consortium

    International Nuclear Information System (INIS)

    Townsend, L. W.

    2005-01-01

    Recently, NASA established a consortium involving the Univ. of Tennessee (lead institution), the Univ. of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking. (authors)

  12. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
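    The predictive coding mentioned above is easy to illustrate: predict each pel from an already-decoded neighbor and keep only the residual, which an entropy coder can then compress losslessly. A minimal sketch in Python/NumPy, assuming a simple left-neighbor (DPCM-style) predictor; the function names and predictor choice are illustrative, not taken from any of the surveyed schemes:

```python
import numpy as np

def predictive_residuals(img: np.ndarray) -> np.ndarray:
    """Left-neighbor prediction: encode each pel as the difference
    from the previous pel on the same row."""
    img = img.astype(np.int16)           # widen so differences don't wrap
    res = img.copy()
    res[:, 1:] = img[:, 1:] - img[:, :-1]
    return res

def reconstruct(res: np.ndarray) -> np.ndarray:
    """Invert the predictor: cumulative sum along each row."""
    return np.cumsum(res, axis=1).astype(np.uint8)

img = np.array([[10, 12, 11, 11],
                [200, 201, 203, 202]], dtype=np.uint8)
res = predictive_residuals(img)
assert (reconstruct(res) == img).all()   # lossless: exact reconstruction
```

    The residuals cluster near zero for correlated pels, which is precisely why the entropy coder achieves the 1.6:1 to 3:1 ratios the survey reports.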

  13. Fast neutron analysis code SAD1

    International Nuclear Information System (INIS)

    Jung, M.; Ott, C.

    1985-01-01

    A listing and an example of the outputs of the M.C. code SAD1 are given here. This code has been used many times to predict the responses of fast neutrons in hydrogenic materials (in our case emulsions or plastics) via elastic n,p scattering. It can be easily extended to other kinds of such materials and to any kind of incident fast neutron spectrum.

  14. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor, and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  15. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  16. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
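    The new rule of thumb can be stated in a few lines of code (a sketch; the function name is mine, and the formula is exactly the 2.5 m/(1 - r) expression quoted in the abstract):

```python
def truncation_depth(m: int, r: float) -> float:
    """Truncation depth rule of thumb for a convolutional code with
    memory length m and rate r: depth = 2.5 * m / (1 - r)."""
    if not 0 < r < 1:
        raise ValueError("code rate must be in (0, 1)")
    return 2.5 * m / (1.0 - r)

# For a rate-1/2 code the new rule reduces to the classic "5 x memory"
# rule, confirming why that rule of thumb works only at r = 1/2:
assert truncation_depth(6, 0.5) == 5 * 6
# For a higher-rate code the required depth grows sharply:
assert truncation_depth(6, 0.75) == 60.0
```
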

  17. RELAP5/MOD2 code assessment

    International Nuclear Information System (INIS)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-01-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system test simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated: (1) RELAP5/MOD2/Cycle 22 - the first released version; (2) YELAP5/Cycle 32 - an EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36 - the frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G

  18. RELAP5/MOD2 code assessment

    Energy Technology Data Exchange (ETDEWEB)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-11-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system test simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated: (1) RELAP5/MOD2/Cycle 22 - the first released version; (2) YELAP5/Cycle 32 - an EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36 - the frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G.

  19. Future trends in image coding

    Science.gov (United States)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion of the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and development, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the future stage for image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems certain today is the growing need for bandwidth compression. Television is using a technology which is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activity in the development of theory, software, special-purpose chips, and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  20. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making – its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  1. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  2. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  3. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  4. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  5. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  6. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with a degrading probability of predicting the source of the qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, are equally exceptional qualities presented by the code, in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  7. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can be increased by 0.27 dB on average in comparison with the performance of the conventional H.264 coding model.
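    The paper's recursive and BMA schemes are not reproduced in the abstract, but the baseline multidirectional spatial prediction they build on can be sketched. A toy Python/NumPy version of three H.264-style 4x4 intra modes (vertical, horizontal, DC); the function name and simplified neighbor handling are mine, not the standard's exact specification:

```python
import numpy as np

def intra_predict_4x4(left: np.ndarray, top: np.ndarray, mode: str) -> np.ndarray:
    """Toy spatial intra predictor in the spirit of H.264 modes 0-2.
    `left` and `top` are the 4 reconstructed neighbor pels on each edge."""
    if mode == "vertical":       # copy the row above down the block
        return np.tile(top, (4, 1))
    if mode == "horizontal":     # copy the left column across the block
        return np.tile(left.reshape(4, 1), (1, 4))
    if mode == "dc":             # flat block at the mean of all 8 neighbors
        return np.full((4, 4), int(round((left.sum() + top.sum()) / 8)))
    raise ValueError(mode)

top = np.array([100, 102, 104, 106])
pred = intra_predict_4x4(np.array([100, 100, 100, 100]), top, "vertical")
assert (pred[3] == top).all()    # every row repeats the top neighbors
```

    The encoder would try each mode, subtract the prediction from the actual block, and transform-code only the residual, which is where the redundancy reduction comes from.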

  8. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  9. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes. QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  10. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  11. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...
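    The Huffman idea behind this truncated excerpt is simple to demonstrate: assign short codewords to frequent symbols so the average code length approaches the source entropy, while keeping the code prefix-free so the stream can be decoded unambiguously. A minimal sketch in Python (the function name is mine):

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Return a prefix-free binary code mapping each symbol to a bit string."""
    freqs = Counter(text)
    if len(freqs) == 1:                  # degenerate single-symbol source
        return {next(iter(freqs)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: partial codeword}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # merge the two rarest subtrees,
        f2, _, c2 = heapq.heappop(heap)  # prefixing their codewords with 0/1
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

code = huffman_code("aaaabbc")
# The most frequent symbol gets the shortest codeword:
assert len(code["a"]) < len(code["b"]) and len(code["a"]) < len(code["c"])
```
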

  12. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  13. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability with the available memory size of one processor element kept at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author)
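    The communication drawback of the 1-D decomposition that the abstract concedes can be made concrete with a back-of-the-envelope halo count (a sketch under simplifying assumptions: one-cell halos, interior subdomains only, corner cells ignored; the function names are mine):

```python
def halo_cells_1d(n: int, p: int) -> int:
    """1-D strip decomposition of an n x n grid over p processors:
    each interior strip exchanges two full rows of n cells,
    independent of p."""
    return 2 * n

def halo_cells_2d(n: int, p: int) -> int:
    """2-D block decomposition (p a perfect square): each interior
    block exchanges four edges of n/sqrt(p) cells."""
    q = int(p ** 0.5)
    assert q * q == p, "assumes a square processor grid"
    return 4 * (n // q)

# For the 800x800 grid on 100 processors cited in the abstract, a 2-D
# block exchanges far fewer boundary cells per processor than a 1-D strip:
assert halo_cells_1d(800, 100) == 1600
assert halo_cells_2d(800, 100) == 320
```

    The 1-D cost stays fixed as processors are added while the 2-D cost shrinks, which is why the 1-D layout pays in communication but was still chosen here to keep long vectorizable inner loops.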

  14. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for behavioral response--that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.

  15. Parallelization of 2-D lattice Boltzmann codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two-dimensional fluid flow are developed on the vector parallel computer Fujitsu VPP500 and the scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code so that it can be vectorized along the axis perpendicular to the direction of the decomposition. High parallel efficiencies of 95.1% by the vector parallel calculation on 16 processors with a 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with an 800x800 grid are obtained. Performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors, up to 100 processors. We also analyze the scalability with the available memory size of one processor element kept at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for large-scale and/or high-resolution simulations. (author).

  16. CBP Phase I Code Integration

    International Nuclear Information System (INIS)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-01-01

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown and Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown and Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface

  17. CBP PHASE I CODE INTEGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface was

  18. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and major findings obtained by the calculations were as follows: (1)As for single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2)As for two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3)As for two-phase flow mixing experiments among multi-channels, the calculated mass velocities at channel exit under steady-state conditions agreed with experimental values within about 10%. However, the predictive errors of exit qualities were as high as 30%. (4)As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5)As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving correlations. On the other hand, a remarkable discrepancy between codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases.
(J.P.N.)

  19. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach that generates, simultaneously from a high-level specification, both the code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
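
    The annotations in question are artifacts like loop invariants and postconditions. Purely as an illustrative analogy (this is not AUTOBAYES output), the sketch below states a loop invariant and postcondition as runtime assertions; a certification tool would instead turn such annotations into proof obligations and discharge them with a theorem prover:

    ```python
    def sum_upto(n):
        """Sum of 0..n-1, with its certification annotations made explicit
        as runtime checks (hypothetical example, not a generated program)."""
        total, i = 0, 0
        while i < n:
            # Loop invariant: total holds the sum of 0..i-1.
            assert total == i * (i - 1) // 2
            total += i
            i += 1
        # Postcondition: closed-form value of the sum.
        assert total == n * (n - 1) // 2
        return total
    ```

    The point of certification is that these checks are proved once, statically, rather than executed on every run.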

  20. Code of Ethics

    Science.gov (United States)

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  1. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and of the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.
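
    As a sketch of the general mechanism (a plain row/column block interleaver, not the specific interleavers proposed in the paper), writing symbols row-wise and reading them column-wise spreads adjacent symbols apart by a fixed stride:

    ```python
    def block_interleave(seq, rows, cols):
        """Write seq into a rows x cols array row-wise, read it column-wise."""
        assert len(seq) == rows * cols
        return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

    def block_deinterleave(seq, rows, cols):
        # Inverting a row-in/column-out interleaver is the same operation
        # with the dimensions swapped.
        return block_interleave(seq, cols, rows)
    ```

    For example, interleaving [0, 1, 2, 3, 4, 5] with 2 rows and 3 columns yields [0, 3, 1, 4, 2, 5]: symbols that were adjacent are now separated, so a burst of errors lands in different rows/columns of the product code.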

  2. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  3. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  4. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  5. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  7. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  8. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully certified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of the geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  9. Code package to analyse behavior of the WWER fuel rods in normal operation: TOPRA's code

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.

    2001-01-01

    This paper briefly describes a code package intended for analysis of WWER fuel rod characteristics. The package includes two computer codes, TOPRA-1 and TOPRA-2, for full-scale fuel rod analyses, and the MRZ and MKK codes for analyzing separate sections of fuel rods in r-z and r-j geometry. The TOPRA codes are developed on the basis of the PIN-mod2 version and verified against experimental results obtained in the MR, MIR and Halden research reactors (in the framework of the SOFIT, FGR-2 and FUMEX experimental programs). A comparative analysis of calculation results against post-reactor examinations of WWER-440 and WWER-1000 fuel rods is also made as additional verification of these codes. To avoid enlarging the uncertainties in fuel behavior prediction that result from simplifying the fuel geometry, the MKK and MRZ codes are developed on the basis of the finite element method using three nodal finite elements. Results obtained in the course of the code verification indicate the applicability of the method and the TOPRA codes for simplified engineering calculations of the thermal-physical parameters of WWER fuel rods. An analysis of the maximum relative errors in predicting the fuel rod characteristics over the range of accepted parameter values is also presented in the paper.

  10. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relations among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  11. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures for the Advanced Audio Coding (AAC) Huffman noiseless encoder, their optimisations, and a working implementation. Much attention has been paid to optimising the demand on hardware resources, especially memory size. The aim of the design was to produce as short a binary stream as possible within this standard. The Huffman encoder, together with the whole audio-video system, has been implemented in FPGA devices.
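
    Independent of the hardware architectures discussed, the underlying Huffman construction can be sketched in a few lines (a generic textbook builder, not the fixed AAC codebook tables):

    ```python
    import heapq
    from collections import Counter

    def huffman_code(symbols):
        """Build a prefix code (symbol -> bitstring) from symbol frequencies."""
        freq = Counter(symbols)
        if len(freq) == 1:                      # degenerate single-symbol input
            return {next(iter(freq)): "0"}
        # Heap entries are [weight, tiebreaker, partial codebook].
        heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
        heapq.heapify(heap)
        i = len(heap)
        while len(heap) > 1:
            w1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
            w2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + b for s, b in c1.items()}
            merged.update({s: "1" + b for s, b in c2.items()})
            heapq.heappush(heap, [w1 + w2, i, merged])
            i += 1
        return heap[0][2]

    def encode(symbols, code):
        return "".join(code[s] for s in symbols)
    ```

    For the input "aaaabbc", the frequent symbol "a" gets a 1-bit code and "b"/"c" get 2-bit codes, so the 7 symbols encode in 10 bits instead of 14 at a fixed 2 bits each.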

  12. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  13. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  14. SWAAM code development, verification and application to steam generator design

    International Nuclear Information System (INIS)

    Shin, Y.W.; Valentin, R.A.

    1990-01-01

    This paper describes the family of SWAAM codes developed by Argonne National Laboratory to analyze the effects of sodium/water reactions on LMR steam generators. The SWAAM codes were developed as design tools for analyzing various phenomena related to steam generator leaks and to predict the resulting thermal and hydraulic effects on the steam generator and the intermediate heat transport system (IHTS). The theoretical foundations and numerical treatments on which the codes are based are discussed, followed by a description of code capabilities and limitations, verification of the codes by comparison with experiment, and applications to steam generator and IHTS design. (author). 25 refs, 14 figs

  15. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in the hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract the 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. The coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. The coding validity of a condition is closely related to its clinical importance and to the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
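
    The two validity measures used here are straightforward to compute once each record has a chart-review (reference) flag and a coded (administrative) flag for the condition; a minimal sketch with made-up toy data:

    ```python
    def coding_validity(chart, coded):
        """Sensitivity and positive predictive value of coded flags,
        taking chart review as the reference standard."""
        tp = sum(c and d for c, d in zip(chart, coded))      # coded and truly present
        fn = sum(c and not d for c, d in zip(chart, coded))  # present but not coded
        fp = sum(not c and d for c, d in zip(chart, coded))  # coded but not present
        return tp / (tp + fn), tp / (tp + fp)

    # Toy example: 6 records; the condition is truly present in the first 4.
    chart = [True, True, True, True, False, False]
    coded = [True, True, False, False, True, False]
    sens, ppv = coding_validity(chart, coded)   # sensitivity 0.5, PPV 2/3
    ```

    Under-coding of the kind the study reports shows up as the false negatives that depress sensitivity while leaving PPV comparatively high.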

  16. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  17. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  18. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on
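
    The outer iteration that codes like ANISN perform for eigenvalue problems is, at its core, a power iteration on the fission source. A generic sketch of that idea, using a small dense matrix as a stand-in for the actual discrete-ordinates operator:

    ```python
    import numpy as np

    def power_iteration(A, tol=1e-10, max_iter=1000):
        """Outer-iteration sketch: dominant eigenvalue of an operator,
        analogous to the k-effective search in a transport eigenvalue problem."""
        x = np.ones(A.shape[0])
        k = 0.0
        for _ in range(max_iter):
            y = A @ x                                    # "inner" solve / sweep
            k_new = np.linalg.norm(y) / np.linalg.norm(x)
            x = y / np.linalg.norm(y)                    # renormalize the source
            if abs(k_new - k) < tol:                     # outer convergence test
                break
            k = k_new
        return k_new, x
    ```

    In a real transport code the matrix-vector product is replaced by inner iterations (flux sweeps with source updates), but the outer-loop structure is the same.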

  19. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
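
    "Stabilized linear inverse theory" here refers to damped (regularized) least squares. A minimal sketch of the idea with generic Tikhonov regularization (an illustration of the technique, not the INVERT code itself):

    ```python
    import numpy as np

    def stabilized_inverse(G, d, alpha):
        """Tikhonov-regularized solution of G m = d:
        minimizes ||G m - d||^2 + alpha ||m||^2, which stabilizes the
        inversion against noise in the data d."""
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

    # Synthetic demo: a random forward operator (gravity kernel stand-in),
    # a known model, and noisy observations.
    rng = np.random.default_rng(1)
    G = rng.standard_normal((20, 5))
    m_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    d = G @ m_true + 0.01 * rng.standard_normal(20)
    m_est = stabilized_inverse(G, d, alpha=1e-3)
    ```

    The damping parameter alpha trades data fit against model smoothness; iterating this kind of solve against an updated trend surface mirrors the TREND/INVERT loop described above.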

  20. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    The fuel depletion is described by more than one hundred fuel isotopes in the advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in the advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error if high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones at the stage of lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict adequately the real depletion effects in the core. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is smaller (Authors)

  1. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance

  2. MVP utilization for PWR design code

    International Nuclear Information System (INIS)

    Matsumoto, Hideki; Tahara, Yoshihisa

    2001-01-01

    MHI is studying methods for spatially dependent resonance cross sections so as to predict the power distribution in a fuel pellet accurately. For this purpose, the multiband method and the Stoker/Weiss method were implemented in the 2-dimensional transport code PHOENIX-P, and the methods were validated by comparing them with the MVP code. Although an appropriate reference could not be obtained from deterministic codes for the resonance cross-section study, the Monte Carlo code MVP result is now available and useful as a reference. It is shown here how MVP is used to develop the multiband method and the Stoker/Weiss method, and how effective the MVP results are in the study of resonance cross sections. (author)

  3. GAPCON-THERMAL-3 code description

    International Nuclear Information System (INIS)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady-state temperature calculation and is combined with a finite difference approximation of the time derivative for transient conditions. The stress-strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady-state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual in-pile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes

  4. GAPCON-THERMAL-3 code description

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady-state temperature calculation and is combined with a finite difference approximation of the time derivative for transient conditions. The stress-strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady-state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual in-pile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes.

  5. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  6. Papa, a Particle Tracing Code in Pascal

    NARCIS (Netherlands)

    Haselhoff, E.H.; Haselhoff, Eltjo H.; Ernst, G.J.

    1990-01-01

    During the design of a 10 μm high-gain FEL oscillator (TEUFEL Project) we developed a new particle-tracing code to perform simulations of thermionic- and photo-cathode electron injectors/accelerators. The program allows predictions of current, energy and beam emittance in a user-specified linac.

  7. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
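The low-field operation that Fulcrum permits at intermediate nodes can be sketched as ordinary random linear network coding over GF(2): coding and recoding are XORs, and decoding is Gaussian elimination. The high field size expansion and the outer/inner mapping of actual Fulcrum codes are omitted; this is only the GF(2) layer.

```python
import random

# Hedged toy of random linear network coding over GF(2), the low-field-size
# operation Fulcrum allows at intermediate nodes. Packet contents are invented.

def encode(packets, n_coded, rng):
    """Produce n_coded random GF(2) combinations of the source packets."""
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p                      # GF(2) addition is XOR
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the k source packets or None."""
    basis = {}                                    # pivot index -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for i in range(k):
            if not coeffs[i]:
                continue
            if i not in basis:                    # new pivot: keep the row
                basis[i] = (coeffs, payload)
                break
            bc, bp = basis[i]                     # else eliminate and continue
            coeffs = [a ^ b for a, b in zip(coeffs, bc)]
            payload ^= bp
    if len(basis) < k:
        return None                               # not enough innovative packets
    for i in range(k - 1, -1, -1):                # back-substitution
        ci, pi = basis[i]
        for j in range(i):
            cj, pj = basis[j]
            if cj[i]:
                basis[j] = ([a ^ b for a, b in zip(cj, ci)], pj ^ pi)
    return [basis[i][1] for i in range(k)]

source = [0x41, 0x42, 0x43, 0x44]                 # four one-byte source packets
coded = [([int(j == i) for j in range(4)], source[i]) for i in range(4)]  # systematic
coded += encode(source, 4, random.Random(7))      # plus coded redundancy
recovered = decode(coded, len(source))
```

Any four linearly independent coded packets suffice, which is what makes the extra combinations useful redundancy rather than repeats.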

  8. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  9. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time.

  10. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  11. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  12. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
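The levelization condition (the "uses" relationships form a directed acyclic graph) can be checked mechanically. A sketch using Python's standard graphlib, with invented package names; the actual EAP package structure is not described in the abstract:

```python
from graphlib import TopologicalSorter, CycleError

# Hedged sketch: verify that a package set is levelized, i.e. its "uses"
# relationships form a DAG, and group the packages into levels.

def levelize(uses):
    """Return packages grouped by level (each level uses only lower levels);
    raises CycleError if the dependency graph has a cycle."""
    ts = TopologicalSorter(uses)   # mapping: package -> set of packages it uses
    ts.prepare()                   # raises CycleError on a dependency cycle
    levels = []
    while ts.is_active():
        ready = tuple(ts.get_ready())   # all packages whose uses are satisfied
        levels.append(sorted(ready))
        ts.done(*ready)
    return levels

uses = {
    "app":     {"physics", "io"},
    "physics": {"mesh", "utils"},
    "io":      {"utils"},
    "mesh":    {"utils"},
    "utils":   set(),
}
print(levelize(uses))
# level 0: utils; level 1: io and mesh; level 2: physics; level 3: app
```

Each returned level is a set of packages that could be built and tested in parallel, which is exactly the payoff the plan describes.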

  13. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
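The outer loop the abstract describes (evaluate total system cost over the two free design choices, then pick an optimum) can be sketched generically. The cost model below is a made-up convex placeholder, not the code's machined-parts and materials model, and the design-variable ranges are invented.

```python
# Hedged sketch of the design-space sweep: total cost evaluated on a grid of
# the two free choices (injector rise time, core aspect ratio) and minimized.

def system_cost(rise_time_ns, aspect_ratio):
    # invented stand-in for machined parts + raw materials + components
    return (rise_time_ns - 30) ** 2 + 5 * (aspect_ratio - 2.0) ** 2 + 100

grid = [(t, a) for t in range(10, 61, 5) for a in [1.0, 1.5, 2.0, 2.5, 3.0]]
best = min(grid, key=lambda p: system_cost(*p))
# with this placeholder surface the minimum lies at (30, 2.0)
```

In the real code each grid point implies a full injector/accelerator design satisfying all constraints; only the selection step is shown here.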

  14. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  15. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code in its various versions is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  17. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  18. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
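The setting can be illustrated in its simplest binary (XOR) form rather than the paper's lattice construction: every receiver demands all K messages but already holds one as side information, so K-1 coded broadcasts suffice instead of K uncoded ones. Message values and receiver layout below are invented.

```python
# Hedged toy of index coding with side information, binary XOR version only
# (the paper works over a Gaussian broadcast channel with lattice codes).

K = 3
messages = [0b1010, 0b0110, 0b1111]          # x0, x1, x2
broadcast = [messages[0] ^ messages[1],      # x0 + x1 over GF(2)
             messages[1] ^ messages[2]]      # x1 + x2: 2 sends instead of 3

def recover(known_idx, known_val, broadcast):
    """Recover all K messages from one known message and the XOR equations."""
    x = {known_idx: known_val}
    x0x1, x1x2 = broadcast
    for _ in range(K):                       # propagate knowns through equations
        if 0 in x and 1 not in x: x[1] = x[0] ^ x0x1
        if 1 in x and 0 not in x: x[0] = x[1] ^ x0x1
        if 1 in x and 2 not in x: x[2] = x[1] ^ x1x2
        if 2 in x and 1 not in x: x[1] = x[2] ^ x1x2
    return [x[i] for i in range(K)]

decoded = [recover(i, messages[i], broadcast) for i in range(K)]
```

A receiver knowing any single message can chase the remaining two through the chain of XOR equations, which is why the side information saves a transmission.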

  19. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  20. Cracking the Gender Codes

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  1. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  2. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.

  3. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like any other technique, needs standards. These standards are widely used and the methods for applying them are well established, so radiography testing is carried out only in accordance with documented regulations. Such regulations and guidelines are documented in codes, standards and specifications. In Malaysia, level-one and basic radiographers perform radiography work according to instructions given by a level-two or level-three radiographer. These instructions are produced following the guidelines mentioned in the documents, and the level-two radiographer must follow the specifications given in the standard when writing them. Radiography is therefore a type of work in which everything must follow the rules. As for codes, radiography follows the code of the American Society of Mechanical Engineers (ASME); the only code available in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography work must automatically follow the regulated rules and standards.

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  5. Incorporation of the variation of conductivity with burnup in the LAPUR predictive stability code; Incorporacion de la variacion de la conductividad con el quemado en el codigo de estabilidad predictivo LAPUR

    Energy Technology Data Exchange (ETDEWEB)

    Escriba, A.; Munoz-cobo, J. L.; Merino, R.; Melara, J.; Albendea, M.

    2013-07-01

    In the field of nuclear safety, the analysis of the stability of boiling water reactors is one of the biggest challenges for researchers. These calculations can be performed with the LAPUR code, which provides the stability parameters of the plant (decay ratio and frequency) and is one of the programs used by IBERDROLA. In collaboration with the TIN research group of the Polytechnic University of Valencia, a model of the loss of uranium conductivity with burnup has been incorporated into LAPUR. This update allows the code to reproduce the phenomenon in a more realistic way. The improvement has been validated and verified by contrasting its results with reference values.
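The physical effect being modeled, fuel thermal conductivity falling as burnup accumulates, can be sketched as a simple correlation. The 1/(A + B*T) base form resembles common UO2 conductivity correlations, but the constants and the burnup penalty below are invented placeholders, not the validated model added to LAPUR.

```python
# Hedged sketch of a burnup-dependent fuel thermal conductivity model of the
# kind the abstract describes. All constants are illustrative placeholders.

def fuel_conductivity(temp_k, burnup_gwd_per_t):
    base = 1.0 / (0.0452 + 2.46e-4 * temp_k)             # W/(m K), fresh fuel
    degradation = 1.0 / (1.0 + 0.02 * burnup_gwd_per_t)  # falls with burnup
    return base * degradation

fresh = fuel_conductivity(900.0, 0.0)
burned = fuel_conductivity(900.0, 40.0)
# burned < fresh: degraded conductivity raises fuel temperatures, which shifts
# the thermal feedback that the stability parameters depend on
```
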

  6. Fast H.264/AVC FRExt intra coding using belief propagation.

    Science.gov (United States)

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity-reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
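The mode-selection step can be sketched independently of H.264 details: rank candidate intra prediction modes by estimated probability (the paper derives these via belief propagation; here they are simply given) and keep only the smallest set reaching a cumulative-probability threshold before running rate-distortion optimization. Mode names and probabilities below are illustrative.

```python
# Hedged sketch of probability-based pruning of intra prediction modes.
# The probabilities are assumed inputs, not a belief-propagation implementation.

def prune_modes(mode_probs, threshold=0.9):
    """Smallest set of modes whose cumulative probability reaches threshold."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        total += p
        if total >= threshold:
            break
    return kept

probs = {"DC": 0.35, "vertical": 0.30, "horizontal": 0.20,
         "diag_down_left": 0.10, "diag_down_right": 0.05}
print(prune_modes(probs))   # RD optimization then runs on 4 modes instead of 5
```

Raising or lowering the threshold is what gives the accurate complexity control the abstract mentions: the number of modes evaluated is set directly by the target probability mass.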

  7. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. The EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes; both theoretical analysis and simulation confirm this improvement.
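The "ideal cross-correlation" property the abstract relies on can be made concrete: in spectral amplitude coding, what matters is the in-phase cross-correlation between users' 0/1 spectral code words. The code words below are toy weight-2 examples of ours, not actual EDW codes.

```python
# Hedged illustration of in-phase cross-correlation between binary code words,
# the figure of merit for SAC-OCDMA code families. Example words are invented.

def cross_correlation(a, b):
    """Number of spectral chips two 0/1 code words share."""
    return sum(x & y for x, y in zip(a, b))

users = [
    (1, 1, 0, 0, 0, 0),
    (0, 1, 1, 0, 0, 0),   # overlaps the first user in exactly one chip
    (0, 0, 0, 1, 1, 0),
]
pairs = [(i, j, cross_correlation(users[i], users[j]))
         for i in range(len(users)) for j in range(i + 1, len(users))]
lambda_max = max(c for _, _, c in pairs)   # worst-case cross-correlation
```

Keeping this maximum small (here 1) is what limits multiple-access interference as users are added.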

  8. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  9. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
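The quantity being optimized can be computed directly for small parameters: enumerate all nonzero messages, form the codewords from the generator matrix over GF(3), and take the minimum Hamming weight. The example generator matrix is ours (the classical tetracode), not one of the paper's 22 new codes.

```python
from itertools import product

# Hedged sketch: brute-force minimum Hamming distance of a small ternary
# linear code from its generator matrix. Feasible only for small k.

def min_distance(G, q=3):
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue                          # skip the zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % q
              for col in zip(*G)]             # codeword = msg . G over GF(q)
        best = min(best, sum(c != 0 for c in cw))
    return best

# generator matrix of the ternary tetracode, a [4,2,3]_3 code
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(min_distance(G))   # 3
```

For a linear code the minimum distance equals the minimum nonzero codeword weight, which is why scanning weights over all q^k - 1 messages suffices.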

  10. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1983-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  11. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1982-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  12. RADTRAN II: revised computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1982-10-01

    A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables

  13. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  14. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
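As a concrete instance of the linear codes the book introduces, here is a minimal binary Hamming (7,4) encoder and single-error corrector. The systematic generator and parity-check matrices are a standard choice of ours, not taken from the book.

```python
# Hedged sketch: Hamming (7,4) encoding and syndrome-based correction of a
# single bit error. G = [I | P] and H = [P^T | I] in systematic form.

G_rows = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
H_rows = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def encode(data):
    """Four data bits -> seven-bit codeword (data bits first)."""
    return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G_rows)]

def correct(received):
    """Fix at most one flipped bit via the syndrome; return the data bits."""
    syndrome = [sum(h * r for h, r in zip(row, received)) % 2 for row in H_rows]
    if any(syndrome):
        # a nonzero syndrome equals the column of H at the error position
        for pos, col in enumerate(zip(*H_rows)):
            if list(col) == syndrome:
                received[pos] ^= 1
                break
    return received[:4]

word = encode([1, 0, 1, 1])
word[5] ^= 1                 # channel flips one bit
data = correct(word)         # original data bits recovered
```
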

  15. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory - until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  16. Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code

    International Nuclear Information System (INIS)

    Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.

    2004-01-01

    Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) package. The code was developed by the Pacific Northwest Laboratory to assist the interpretation of bioassay data, provide bioassay projections and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides capabilities to calculate organ dose and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code runs under the DOS operating system, so its operation requires a relatively long procedure with a great deal of manual typing, which can lead to human error. A new code has been developed at the NRCN, the CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (on the order of 5-10 times) and in the risk of human error. The code uses a database of tables, constructed with CINDY, containing the bioassay values predicted by the ICRP 30 model after an intake of a unit of activity of each isotope. Using this database, the code then calculates the appropriate intake and, from it, the committed effective dose and organ doses. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and the CINDY codes (for class Y uranium).
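The calculation CINEX automates reduces to a table lookup and two multiplications: divide the bioassay result by the fraction of a unit intake the biokinetic model predicts would remain (or be excreted) at that time, then multiply the inferred intake by a dose coefficient. All numbers below are invented placeholders, not ICRP 30 values.

```python
# Hedged sketch of the intake/dose evaluation pattern the abstract describes.
# The table and coefficient are hypothetical, for illustration only.

def committed_dose(measured_bq, days_since_intake, irf_table, dose_coeff_sv_per_bq):
    irf = irf_table[days_since_intake]    # predicted fraction of a unit intake
    intake_bq = measured_bq / irf         # scale measurement up to the intake
    return intake_bq, intake_bq * dose_coeff_sv_per_bq

irf_table = {1: 0.20, 7: 0.05, 30: 0.01}  # placeholder unit-intake fractions
intake, dose = committed_dose(measured_bq=2.0, days_since_intake=7,
                              irf_table=irf_table, dose_coeff_sv_per_bq=5e-7)
# with these placeholders: intake 40 Bq, committed dose 2e-5 Sv
```
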

  17. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  18. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
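The core mechanism, transmitting syndrome bits instead of the source itself, can be shown in a toy form without the paper's sum-product decoder, doping bits, or hidden dependence variables: the encoder sends only the syndrome of x, and the decoder exhaustively picks the sequence in that coset closest to its side information y. The parity-check matrix and sequences below are ours.

```python
from itertools import product

# Hedged toy of syndrome-based Slepian-Wolf coding: 3 syndrome bits stand in
# for 6 source bits, and correlated side information resolves the coset.

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(x):
    """H x over GF(2): the 3 bits the encoder actually transmits."""
    return tuple(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

def decode(s, y):
    """Exhaustive decoder: the member of the syndrome-s coset nearest to y."""
    candidates = [x for x in product((0, 1), repeat=6) if syndrome(x) == s]
    return min(candidates, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1, 1, 0, 0)           # source sequence at the encoder
y = (1, 0, 1, 0, 0, 0)           # side information: differs in one position
x_hat = decode(syndrome(x), y)   # 3 bits sent instead of 6
```

This H defines a distance-3 code, so any single disagreement between x and y is resolved uniquely; practical schemes replace the exhaustive search with iterative sum-product decoding.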

  19. A CFD code comparison of wind turbine wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Storey, R. C.; Sørensen, Niels N.

    2014-01-01

    A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds-stresses fo...

  20. Practical Design of Delta-Sigma Multiple Description Audio Coding

    DEFF Research Database (Denmark)

    Leegaard, Jack Højholt; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in order to trade off side distortion for central distortion in MD coding. It was shown that ...

  1. Fire-accident analysis code (FIRAC) verification

    International Nuclear Information System (INIS)

    Nichols, B.D.; Gregory, W.S.; Fenton, D.L.; Smith, P.R.

    1986-01-01

    The FIRAC computer code predicts fire-induced transients in nuclear fuel cycle facility ventilation systems. FIRAC calculates simultaneously the gas-dynamic, material transport, and heat transport transients that occur in any arbitrarily connected network system subjected to a fire. The network system may include ventilation components such as filters, dampers, ducts, and blowers. These components are connected to rooms and corridors to complete the network for moving air through the facility. An experimental ventilation system has been constructed to verify FIRAC and other accident analysis codes. The design emphasizes network system characteristics and includes multiple chambers, ducts, blowers, dampers, and filters. A large industrial heater and a commercial dust feeder are used to inject thermal energy and aerosol mass. The facility is instrumented to measure volumetric flow rate, temperature, pressure, and aerosol concentration throughout the system. Aerosol release rates and mass accumulation on filters also are measured. We have performed a series of experiments in which a known rate of thermal energy is injected into the system. We then simulated this experiment with the FIRAC code. This paper compares and discusses the gas-dynamic and heat transport data obtained from the ventilation system experiments with those predicted by the FIRAC code. The numerically predicted data generally are within 10% of the experimental data.

  2. Hydrogen burn assessment with the CONTAIN code

    International Nuclear Information System (INIS)

    van Rij, H.M.

    1986-01-01

    The CONTAIN computer code was developed at Sandia National Laboratories, under contract to the US Nuclear Regulatory Commission (NRC). The code is intended for calculations of containment loads during severe accidents and for prediction of the radioactive source term in the event that the containment leaks or fails. A strong point of the CONTAIN code is the continuous interaction of the thermal-hydraulics phenomena, aerosol behavior and fission product behavior. The CONTAIN code can be used for Light Water Reactors as well as Liquid Metal Reactors. In order to evaluate the CONTAIN code on its merits, comparisons between the code and experiments must be made. In this paper, CONTAIN calculations for the hydrogen burn experiments, carried out at the Nevada Test Site (NTS), are presented and compared with the experimental data. In the Large-Scale Hydrogen Combustion Facility at the NTS, 21 tests have been carried out. These tests were sponsored by the NRC and the Electric Power Research Institute (EPRI). The tests, carried out by EG and G, were performed in a spherical vessel 16 m in diameter with a design pressure of 700 kPa, substantially higher than that of most commercial nuclear containment buildings

  3. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography

  4. The Coding Question.

    Science.gov (United States)

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  6. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  7. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points E ∈ PC with multiplicity γ(w), where w is the weight of
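As a small illustration of the "2-weight code" notion in the record above, the following sketch enumerates the nonzero codeword weights of a toy binary [5,2] linear code. The generator matrix is made up for illustration and does not come from the paper.

```python
from itertools import product

# Hypothetical toy example: a binary [5,2] linear code whose nonzero
# codewords take exactly two distinct weights.
G = [
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
]

def codewords(G):
    """Enumerate all codewords spanned by generator matrix G over GF(2)."""
    n = len(G[0])
    words = []
    for coeffs in product([0, 1], repeat=len(G)):
        w = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                w = [(a + b) % 2 for a, b in zip(w, row)]
        words.append(tuple(w))
    return words

# Collect the Hamming weights of the nonzero codewords.
weights = sorted({sum(w) for w in codewords(G) if any(w)})
print(weights)  # → [3, 4]
```

Since exactly two nonzero weights occur, this toy code is a 2-weight code in the sense used by the record.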

  8. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  9. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  10. Code of Medical Ethics

    Directory of Open Access Journals (Sweden)

SZD-SZZ

    2017-03-01

The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  11. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
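A schematic objective for supervised dictionary learning of this kind can be written as an unsupervised CSC reconstruction term plus a supervised regularizer. The notation and the exact form of the coupling below are assumptions for illustration, not the paper's formulation:

```latex
\min_{D,\,Z,\,w}\ \sum_i \Big\lVert x_i - \sum_k d_k * z_{i,k} \Big\rVert_2^2
\;+\; \lambda \sum_{i,k} \lVert z_{i,k} \rVert_1
\;+\; \gamma \sum_i \mathcal{L}\big(y_i,\, f_w(Z_i)\big)
```

Here the $d_k$ are convolutional filters of the dictionary $D$, the $z_{i,k}$ are sparse feature maps for image $x_i$, and the last term is a classification loss on labels $y_i$ that pushes the learned filters toward being discriminative rather than purely reconstructive.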

  12. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated

  13. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  14. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  15. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author) [pt

  16. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have under-estimated the peak cladding temperature (PCT) with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both space grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled by using the original PSI model of the code. The flow transition between the DFFB and IAFB is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the Flecht-Seaset test and the RBHT test. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is found to be marginal. A space grid effect, however, is clearly seen with the modified version of the MARS code. (author)

  17. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT Code, developed for analysis of long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data for the steam generator design with the cover-gas feature and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code prediction and the test data

  18. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso logistic regression model using predictive accuracy and model generalizability (measured by the area under the curve (AUC) of the receiver operating characteristic). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, both L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes; however, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
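The record compares models by the area under the ROC curve. As a minimal, stdlib-only sketch, AUC can be computed via the standard Mann-Whitney rank formulation: the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count as one half). The labels and scores below are made up for illustration.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs where the positive example receives the higher score."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count as half a win
    return wins / (len(pos) * len(neg))

# Made-up session-level labels and model scores, for illustration only.
y_true = [1, 1, 0, 1, 0, 0]
y_score = [0.9, 0.7, 0.6, 0.4, 0.3, 0.2]
print(auc(y_true, y_score))
```

A perfect ranking yields 1.0 and a random one about 0.5, which is the scale on which the reported 0.79 versus 0.70 comparison is made.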

  19. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  20. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...

  1. Radioactive action code

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers for alerting emergency services personnel to the presence of radioactive materials has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted, alpha, beta or gamma. The response cards indicate appropriate measures to take, eg for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)
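A marking scheme like the one described, three symbols for radioactive risk, biological risk, and emission type, can be sketched as a simple lookup. The field syntax below (e.g. a 'Rad' prefix) is hypothetical; the record itself only gives 'Bio3' as a concrete example.

```python
# Toy sketch of a Hazrad-style three-symbol marking. Field names and the
# response text are illustrative assumptions, not the actual UK scheme.
LEVELS = {"1": "low", "2": "medium", "3": "high"}
EMISSION = {"A": "alpha", "B": "beta", "G": "gamma"}

def interpret(marking):
    """Decode a three-symbol marking such as 'Rad2 Bio3 G'."""
    rad, bio, emit = marking.split()
    return {
        "radioactive risk": LEVELS[rad[-1]],
        "biological risk": LEVELS[bio[-1]],
        "emission": EMISSION[emit],
    }

print(interpret("Rad2 Bio3 G"))
```

For the 'Bio3' case the chart carried by emergency crews would then point at the high-biological-risk response (gas-tight suit), as the record describes.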

  2. Building Codes and Regulations.

    Science.gov (United States)

    Fisher, John L.

    The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)

  3. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  4. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  5. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  6. CERN Code of Conduct

    CERN Document Server

    Department, HR

    2010-01-01

    The Code is intended as a guide in helping us, as CERN contributors, to understand how to conduct ourselves, treat others and expect to be treated. It is based around the five core values of the Organization. We should all become familiar with it and try to incorporate it into our daily life at CERN.

  7. Nuclear safety code study

    Energy Technology Data Exchange (ETDEWEB)

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for switching to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)

  8. Revised C++ coding conventions

    CERN Document Server

    Callot, O

    2001-01-01

    This document replaces the note LHCb 98-049 by Pavel Binko. After a few years of practice, some simplification and clarification of the rules was needed. As many more people have now some experience in writing C++ code, their opinion was also taken into account to get a commonly agreed set of conventions

  9. Corporate governance through codes

    NARCIS (Netherlands)

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  10. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training ...
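The self-correcting capability Hamming sought can be illustrated with the classic Hamming(7,4) code, a standard textbook construction (not taken from this record): three parity bits at positions 1, 2, and 4 let the receiver locate and flip any single-bit error.

```python
# Classic Hamming(7,4) single-error-correcting code (standard construction).

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the three parity checks; the syndrome is the 1-based
    position of a single flipped bit (0 means no error)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                           # inject a single-bit fault
print(hamming74_correct(corrupted) == word)  # → True
```

This is exactly the "detect and correct their own faults" behaviour the record alludes to, at the cost of 3 parity bits per 4 data bits.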

  11. Broadcast Coded Slotted ALOHA

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives ...
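The peeling decoder underlying coded slotted ALOHA can be sketched with a toy simulation. The setup below (two replicas per user, fixed frame size, collision-channel model) is an assumed simplification for illustration, not the B-CSA protocol of the paper: a slot holding exactly one remaining replica is decoded, and cancelling that user's other replica may free further slots.

```python
import random

def csa_round(n_users, n_slots, rng):
    """One frame of 2-replica coded slotted ALOHA with iterative
    interference cancellation; returns how many users were resolved."""
    slots = [set() for _ in range(n_slots)]
    for u in range(n_users):
        for s in rng.sample(range(n_slots), 2):  # two replicas per user
            slots[s].add(u)
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:          # singleton slot: decodable
                (u,) = s
                decoded.add(u)
                for t in slots:
                    t.discard(u)     # cancel both replicas of user u
                progress = True
    return len(decoded)

rng = random.Random(0)
rates = [csa_round(50, 100, rng) for _ in range(20)]
print(sum(rates) / (20 * 50))  # fraction of users resolved per frame
```

At this load most users resolve, while a residual fraction can be stuck in collision cycles that peeling cannot break; this trade-off is what the throughput analyses of such protocols quantify.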

  12. Software Defined Coded Networking

    DEFF Research Database (Denmark)

    Di Paola, Carla; Roetter, Daniel Enrique Lucani; Palazzo, Sergio

    2017-01-01

    the quality of each link and even across neighbouring links and using simulations to show that an additional reduction of packet transmission in the order of 40% is possible. Second, to advocate for the use of network coding (NC) jointly with software defined networking (SDN) providing an implementation...

  13. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  14. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  15. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

having a probability Pi of being equal to a 1. Let us assume ... equal to a 0/1 has no bearing on the probability of the. It is often ... bits (call this set S) whose individual bits add up to zero ... In the context of binary error-correcting codes, specifi-.

  16. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  17. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12% to 77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
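The Bayesian decoding step described above can be sketched with a minimal maximum-likelihood classifier. The Poisson firing model, the three odor names, and the mean rates below are made-up assumptions for illustration; the paper's actual model and data differ.

```python
import math

# Hypothetical mean spike counts per OSN class for three odors.
RATES = {
    "ethyl acetate": [8.0, 1.0, 3.0],
    "benzaldehyde":  [1.0, 6.0, 2.0],
    "1-octanol":     [2.0, 2.0, 7.0],
}

def log_poisson(k, lam):
    """Log-probability of observing k spikes under a Poisson rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def decode(counts):
    """Return the odor maximizing the spike-count likelihood (flat prior),
    i.e. the maximum a posteriori odor under Bayes' rule."""
    return max(RATES, key=lambda odor: sum(
        log_poisson(k, lam) for k, lam in zip(counts, RATES[odor])))

print(decode([7, 2, 3]))  # → ethyl acetate
```

With a flat prior the posterior is proportional to the likelihood, so picking the most likely odor is the Bayesian decoder; accuracy then depends on how separated the odors' response profiles are, which is what the 12% to 77% range in the record reflects.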

  18. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages, R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  19. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any simultaneous cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes, giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and their duals, and the relation...
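
    The defining shift-invariance property can be checked computationally on a toy example. The subset lengths and generator vector below are arbitrary choices for illustration; the code is built as the additive span of all simultaneous double shifts of the generator, which makes it Z2-double cyclic by construction:

    ```python
    from itertools import product

    r, s = 3, 4  # lengths of the two coordinate subsets

    def dshift(word):
        """Simultaneously cyclically shift the first r and last s coordinates."""
        a, b = word[:r], word[r:]
        return (a[-1],) + a[:-1] + (b[-1],) + b[:-1]

    def span(generators):
        """Close a set of generator words under the double shift and mod-2 addition."""
        gens = set()
        for g in generators:
            for _ in range(r * s):      # r*s >= lcm(r, s) covers the full shift orbit
                gens.add(g)
                g = dshift(g)
        words = {(0,) * (r + s)}
        changed = True
        while changed:
            changed = False
            for w, g in product(list(words), gens):
                new = tuple((x + y) % 2 for x, y in zip(w, g))
                if new not in words:
                    words.add(new)
                    changed = True
        return words

    C = span([(1, 1, 0, 1, 0, 1, 0)])
    print(all(dshift(w) in C for w in C))   # the double shift leaves C invariant
    ```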

  20. Coding for urologic office procedures.

    Science.gov (United States)

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  2. Sandia National Laboratories analysis code data base

    Science.gov (United States)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  3. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. We also use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it has length of the form $n=2^k-1$ and is generated by an essential idempotent.
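
    The simplex characterization can be illustrated numerically: the binary simplex code of length n = 2^k − 1, with a generator matrix whose columns are all nonzero k-vectors, has every nonzero codeword of weight 2^(k−1). A quick sketch for k = 3 (this checks the constant-weight property of simplex codes, not the idempotent construction itself):

    ```python
    import itertools
    import numpy as np

    k = 3
    n = 2**k - 1                    # length 7
    # Generator matrix: columns are all nonzero binary k-vectors.
    G = np.array([list(c) for c in itertools.product([0, 1], repeat=k) if any(c)]).T

    weights = set()
    for m in itertools.product([0, 1], repeat=k):
        if any(m):
            codeword = np.mod(np.array(m) @ G, 2)
            weights.add(int(codeword.sum()))

    print(weights)  # every nonzero codeword has weight 2^(k-1) = 4
    ```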

  4. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  5. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    This paper describes the application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed. (Auth.)

  6. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed

  7. Development of safety analysis codes for light water reactor

    International Nuclear Information System (INIS)

    Akimoto, Masayuki

    1985-01-01

    An overview is presented of the major codes currently used for the prediction of thermohydraulic transients in nuclear power plants. The overview centers on the two-phase fluid dynamics of the coolant system and the assessment of the codes. Some two-phase phenomena, such as phase separation, are still not predicted with engineering accuracy. The MINCS-PIPE code is briefly introduced; it is intended to assess constitutive relations and to aid the development of experimental correlations for models ranging from 1V1T to 2V2T. (author)

  8. Contributions of vitreous natural analogs to the investigation of long-term nuclear glass behavior; Apports des analogues naturels vitreux a la validation des codes de prediction du comportement a long terme des verres nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Techer, I

    1999-07-01

    This study assesses the extent of the analogy between the alteration behavior, in water and in a moist clay environment, of aluminosilicate volcanic glass and alumino-borosilicate nuclear containment glass. Basaltic glass alteration in water initially occurs by hydrolysis processes with an activation energy on the order of 73 kJ·mol⁻¹. As the reaction progresses, the alteration rate drops by over four orders of magnitude from the initial rate r₀. The alteration kinetics are not governed by the alteration solution chemistry alone; the glass alteration film appears to have a major role as a diffusion barrier limiting the transfer of reactive species and products. All these aspects highlight the behavioral analogy between basaltic glass and nuclear borosilicate glass in aqueous media. Conversely, the alteration reaction of obsidian-type volcanic glass involves other mechanisms than those governing the dissolution of borosilicate glass. Basaltic glass alteration is also examined in the presence of a clay environmental material, in a study of the natural basaltic glass and argillaceous pelites system of the Salagou basin in southern France, in an approach combining mineralogical, chemical and isotopic data to assess the interactions between a basaltic glass and the argillaceous pelites. Laboratory leach test results with basaltic glass and measured data for the Salagou glass in its natural environment are modeled using a code implementing a kinetic law that couples diffusive transfer of dissolved silica with a reaction affinity law. (author)
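
    The drop of the alteration rate as the solution approaches saturation can be caricatured with a first-order affinity law, r = r₀(1 − C/Csat), integrated explicitly. All parameter values below are invented for illustration and are not the study's data; the real model additionally couples diffusion through the alteration film:

    ```python
    # Minimal sketch of a first-order affinity law for glass dissolution.
    # Illustrative parameters only (not values from the study).
    r0 = 1.0e-3     # initial dissolution rate, g / (m^2 * day)
    Csat = 50.0     # assumed silica saturation concentration, mg / L
    SV = 10.0       # glass-surface-to-solution-volume ratio, m^2 / L
    dt = 1.0        # time step, days

    C = 0.0         # dissolved silica concentration, mg / L
    rates = []
    for _ in range(1000):
        r = r0 * (1.0 - C / Csat)      # affinity term slows the reaction
        C += SV * r * dt * 1000.0      # g -> mg conversion for concentration
        rates.append(r)

    print(rates[0] / rates[-1])  # rate drop as the solution approaches saturation
    ```

    Even this toy model reproduces the qualitative observation: the rate falls by many orders of magnitude from r₀ as dissolved silica accumulates.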

  9. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of equation of state (EOS), ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the impact they may have on improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  10. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  11. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
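
    The key idea, that the main linear system becomes diagonal in the frequency domain, can be sketched for the simplest case of a single dictionary filter (M = 1), where circular convolution diagonalizes under the DFT. This is only the core linear solve, not the full ADMM algorithm, and the filter and right-hand side are random illustrative data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    d = rng.standard_normal(N)        # single dictionary filter (M = 1)
    b = rng.standard_normal(N)        # right-hand side of the ADMM linear step
    rho = 1.0

    # Solve (D^T D + rho I) x = b, where D applies circular convolution with d.
    # Under the DFT the system is diagonal: (|dhat|^2 + rho) * xhat = bhat.
    dhat = np.fft.fft(d)
    xhat = np.fft.fft(b) / (np.abs(dhat)**2 + rho)
    x = np.real(np.fft.ifft(xhat))    # O(N log N) instead of a dense O(N^3) solve

    # Verify against the explicit dense circulant system.
    D = np.stack([np.roll(d, i) for i in range(N)], axis=1)
    x_dense = np.linalg.solve(D.T @ D + rho * np.eye(N), b)
    print(np.allclose(x, x_dense))    # True
    ```

    For M > 1 filters the frequency-domain system is rank-one per frequency bin and can be solved in O(M) per bin via the Sherman-Morrison formula, which is where the O(MN log N) total cost comes from.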

  12. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  13. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  14. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gaussian or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de]
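
    The fitting task FIT performs (deducing line position, width and intensity) can be sketched with a least-squares fit of a Gaussian profile to a synthetic spectrum. This illustrates the general technique only; it is not FIT's own algorithm, and the line parameters are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, pos, width, intensity):
        """Gaussian line profile parameterized by position, width and peak intensity."""
        return intensity * np.exp(-0.5 * ((x - pos) / width) ** 2)

    # Synthetic single-line spectrum with additive noise (illustrative values).
    channels = np.arange(256, dtype=float)
    rng = np.random.default_rng(1)
    spectrum = gauss(channels, 100.0, 5.0, 200.0) + rng.normal(0.0, 2.0, channels.size)

    popt, _ = curve_fit(gauss, channels, spectrum, p0=[90.0, 3.0, 150.0])
    pos, width, intensity = popt
    print(round(pos, 1), round(abs(width), 1))
    ```

    A Voigt profile (the other shape FIT supports) would follow the same pattern with a convolution of Gaussian and Lorentzian components as the model function.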

  15. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) we use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) we use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) we apply the algorithm to the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
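
    The generative model described, sparse latents drawn from a finite set of values with learned prior probabilities, can be sketched as follows. The dimensions, value set, and prior are illustrative choices; sparsity arises because the zero value carries most of the prior mass:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    H, D = 8, 16                              # number of latents, observed dimensions
    values = np.array([0.0, 1.0, 2.0])        # finite set of latent values (0 => inactive)
    pi = np.array([0.90, 0.07, 0.03])         # prior over values; sparse since P(0) is large
    W = rng.standard_normal((D, H))           # generative weights (dictionary)
    sigma = 0.1                               # observation noise level

    s = values[rng.choice(len(values), size=H, p=pi)]   # discrete sparse latent vector
    x = W @ s + sigma * rng.standard_normal(D)          # generated observation
    print(np.count_nonzero(s), "of", H, "latents active")
    ```

    Learning then amounts to estimating W and the prior probabilities pi from many such x, which the paper does with a truncated EM scheme (expectation truncation) adapted to the discrete prior.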

  16. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
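
    The dose calculations summarised in such a code typically start from the inverse-square law for a sealed point source; a minimal sketch, with an invented source strength and shielding and air attenuation ignored:

    ```python
    def dose_rate(rate_at_1m, distance_m):
        """Inverse-square scaling of dose rate for an unshielded point source."""
        return rate_at_1m / distance_m**2

    # Illustrative value only: a sealed source giving 8.0 uSv/h at 1 m.
    for d in (0.5, 1.0, 2.0, 4.0):
        print(f"{d} m: {dose_rate(8.0, d)} uSv/h")
    ```

    Doubling the distance quarters the dose rate, which is why handling tools and distance are emphasised alongside shielding in school radiation-safety procedures.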

  17. Tokamak simulation code manual

    International Nuclear Information System (INIS)

    Chung, Moon Kyoo; Oh, Byung Hoon; Hong, Bong Keun; Lee, Kwang Won

    1995-01-01

    The use of TSC (Tokamak Simulation Code), developed by the Princeton Plasma Physics Laboratory, is described. For the KT-2 tokamak, time-dependent simulation of the axisymmetric toroidal plasma and of vertical stability must be taken into account in the design phase using TSC. In this report the physical modelling of TSC is described, and examples of its application at JAERI and SERI are illustrated, which will be useful when TSC is installed on the KAERI computer system. (Author) 15 refs., 6 figs., 3 tabs

  18. Status of MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components, in the energy range from a fraction of an electronvolt up to 100 TeV, are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and a graphical user interface.

  19. Codes of Good Governance

    DEFF Research Database (Denmark)

    Beck Jørgensen, Torben; Sørensen, Ditte-Lene

    2013-01-01

    Good governance is a broad concept used by many international organizations to spell out how states or countries should be governed. Definitions vary, but there is a clear core of common public values, such as transparency, accountability, effectiveness, and the rule of law. It is quite likely..., transparency, neutrality, impartiality, effectiveness, accountability, and legality. The normative context of public administration, as expressed in codes, seems to ignore the New Public Management and Reinventing Government reform movements.

  20. Status of the ASTEC integral code

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

    The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration by IPSN and GRS to predict an entire LWR severe accident sequence, from the initiating event up to fission product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. The version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermal hydraulics and aerosol behaviour. The latest version, V0.2, includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, the PHEBUS.FP FPT1 integrated experiment), some modelling improvements (e.g. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way on French and German PWRs, and on VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is today about 20 person-years per year. The main evolution of the next version, V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the in-vessel late-phase degradation modelling. (author)