WorldWideScience

Sample records for rocmas code prediction

  1. Coupled thermohydromechanical analysis of a heater test in unsaturated clay and fractured rock at Kamaishi Mine. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Rutqvist, J.; Noorishad, J.; Tsang, C.F. [Lawrence Berkeley National Lab., Berkeley, CA (United States). Earth Sciences Division]

    1999-08-01

    The recent interest in coupled thermohydromechanical (THM) processes associated with geological disposal of spent nuclear fuel, and in particular the issue of resaturation of a clay buffer around a waste canister, has encouraged major development of the finite element computer program ROCMAS over the past three years. The main objective is to develop a tool for analysis of THM processes at the practical field scale, including fractured rock masses and the detailed behavior of the near-field, nonisothermal and unsaturated system composed of rock fractures and clay buffer. In this report, the ROCMAS code is presented and applied to modeling of coupled THM processes in small laboratory samples of bentonite clay as well as a large in situ THM experiment in fractured rock at Kamaishi Mine, Japan. The fundamental responses of a bentonite clay material were investigated in a number of laboratory tests, including suction tests, infiltration tests, thermal gradient tests, and swelling pressure tests. These laboratory tests are modeled with ROCMAS to determine material properties and to validate the newly implemented algorithms. The ROCMAS code is also applied to modeling of a 3-year in situ heater experiment conducted in fractured hard rock, which consists of a heater-clay buffer system and simulates a nuclear waste repository. The heater temperature was held at 100 deg C for 8.5 months, followed by a 6-month cooling period. The bentonite and the rock surrounding the heater were extensively instrumented for monitoring of temperature, moisture content, fluid pressure, stress, strain, and displacements. Overall good agreement between modeled and measured results, for both the laboratory experiments and the in situ heater test, indicates that the THM responses in fractured rock and bentonite are well represented by the coupled numerical model, ROCMAS. In addition, the robustness and applicability of ROCMAS to practical-scale problems are demonstrated.

  2. Program ROCMAS: introduction and user's guide

    International Nuclear Information System (INIS)

    Noorishad, J.; Ayatollahi, M.S.

    1980-09-01

    This model is for the study of coupled fluid flow and stress in deformable fractured rock masses. The effective mass theory of Biot is used to relate pressure changes to displacements of the rock matrix. The deformation of the fracture surfaces in turn affects the fracture flow through the sensitive dependence of permeability on aperture. The code combines techniques of fluid flow modeling and stress-strain analysis. The model is based on a general theory that is of fundamental interest and practical importance. The code can handle a range of complex problems in fluid flow, induced rock mass deformation, and soil consolidation. Further developments to couple the fluid flow with heat transfer, or to incorporate dynamic stress analysis, can increase the range of applicability. More extensive application of the code is called for.
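
    The "sensitive dependence of permeability on aperture" noted above is commonly idealized by the parallel-plate (cubic) law, under which fracture flow capacity scales with the cube of the hydraulic aperture. The sketch below illustrates that generic scaling only; it is not ROCMAS's actual constitutive model, and the viscosity value is an assumed constant.

```python
# Parallel-plate (cubic) law: a generic illustration of why fracture flow
# is so sensitive to aperture. NOT taken from the ROCMAS code itself.

MU = 1.0e-3  # dynamic viscosity of water, Pa.s (assumed)

def fracture_transmissivity(aperture_m: float) -> float:
    """Flow per unit width per unit pressure gradient, m^3/(Pa.s)."""
    return aperture_m ** 3 / (12.0 * MU)

# Halving the aperture (e.g. by fracture closure under stress)
# cuts the flow capacity by a factor of eight.
ratio = fracture_transmissivity(1e-4) / fracture_transmissivity(5e-5)
print(ratio)   # 8.0
```

    This cubic sensitivity is what makes the stress-flow coupling in fractured rock so strong: modest mechanical closure of a fracture produces a large drop in its hydraulic conductivity.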

  3. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
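
    The error-driven learning described above can be sketched with a simple Rescorla-Wagner-style update, in which the prediction error is the received minus the predicted reward. The learning rate and reward values below are illustrative choices, not figures from the article.

```python
# Minimal reward-prediction-error learning sketch (Rescorla-Wagner form):
# delta = received - predicted; the value estimate moves a fraction alpha
# of the error on each trial. alpha = 0.1 is an illustrative choice.

def update_value(predicted: float, received: float, alpha: float = 0.1):
    """Return (prediction error, updated value estimate)."""
    delta = received - predicted      # positive error: better than expected
    return delta, predicted + alpha * delta

value = 0.0
delta = 1.0
for _ in range(100):                  # the same reward of 1.0, every trial
    delta, value = update_value(value, 1.0)

# Once the reward is fully predicted, the prediction error approaches zero,
# mirroring dopamine neurons' return to baseline for expected rewards.
print(round(value, 3), round(delta, 6))
```

    The vanishing error for a fully predicted reward is exactly the baseline response the abstract describes; a surprise omission of the reward at this point would yield a negative delta, the analogue of the depressed dopamine response.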

  4. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly promising new framework for understanding perception and action, may solve pending theoretical inconsistencies in agency detection research, account for the puzzling experimental findings mentioned above, and provide hypotheses for future experimental testing. Predictive coding explains how the brain, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high...

  5. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  6. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. 
Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural

  7. Program ROCMAS: introduction and user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Noorishad, J.; Ayatollahi, M.S.

    1980-09-01

    This model is for the study of coupled fluid flow and stress in deformable fractured rock masses. The effective mass theory of Biot is used to relate pressure changes to displacements of the rock matrix. The deformation of the fracture surfaces in turn affects the fracture flow through the sensitive dependence of permeability on aperture. The code combines techniques of fluid flow modeling and stress-strain analysis. The model is based on a general theory that is of fundamental interest and practical importance. The code can handle a range of complex problems in fluid flow, induced rock mass deformation, and soil consolidation. Further developments to couple the fluid flow with heat transfer, or to incorporate dynamic stress analysis, can increase the range of applicability. More extensive application of the code is called for.

  8. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., a relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP), which uses the background modeled from the original input frames as the long-term reference, and the background difference prediction (BDP), which predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher-quality background as the reference, whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of the AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
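
    The block classification step described above can be sketched as follows: maintain a background model (here a simple exponential running average, a stand-in for the paper's modeling method) and label each block by how much of it departs from that background. The thresholds and update rate are illustrative assumptions, not values from the paper.

```python
# Sketch of BMAP-style block classification. The running-average background
# and the thresholds below are illustrative stand-ins, not the paper's model.
import numpy as np

def update_background(bg, frame, rate=0.05):
    """Exponential running average of past frames."""
    return (1 - rate) * bg + rate * frame

def classify_block(block, bg_block, thresh=10.0):
    """Label a block 'background', 'foreground', or 'hybrid'."""
    diff = np.abs(block.astype(float) - bg_block.astype(float))
    frac_moving = np.mean(diff > thresh)   # fraction of pixels off-background
    if frac_moving < 0.1:
        return "background"   # predict from the modeled background (BRP)
    if frac_moving > 0.9:
        return "foreground"   # conventional inter/intra prediction
    return "hybrid"           # predict in the background-difference domain (BDP)

bg = np.zeros((16, 16))
frame = np.zeros((16, 16))
frame[:8, :] = 50.0                        # an object covers half the block
print(classify_block(frame, bg))           # hybrid
```

    In the paper's scheme the hybrid case is where BDP pays off: subtracting the known background pixels leaves only the foreground part to be predicted.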

  9. A comparison of oxide thickness predictability from the perspective of codes

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo-Young; Shin, Hye-In; Kim, Kyung-Tae; Han, Hee-Tak; Kim, Hong-Jin; Kim, Yong-Hwan [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2016-10-15

    In Korea, the oxide thickness of fuel rods in OPR1000 and Westinghouse-type nuclear power plant reactors has been evaluated with the imported code A. This has imposed multiple constraints on the operation and maintenance of the fuel rod design system, and there has consequently been a growing demand to establish an independent fuel rod design system. To meet this goal, KNF has recently developed its own code B for fuel rod design. The objective of this study is to compare the oxide thickness prediction performance of code A and code B and to check the validity of the corrosion-behavior predictions of the newly developed code B. This study is based on Pool Side Examination (PSE) data for performance confirmation. The oxide thickness measurement methods and equipment of PSE are described in detail. In this study, the conservatism and validity of code B in evaluating cladding oxide thickness are confirmed through comparison with code A. The code predictions are higher than the measured PSE data. Throughout this study, the values given by code B are evaluated and shown to be valid from the viewpoint of oxide thickness evaluation. However, the code B input for prediction has been prepared by designer's judgment with complex handwork, which might lead to excessively conservative results and an inefficient design process with some possibility of errors.

  10. Decision-making in schizophrenia: A predictive-coding perspective.

    Science.gov (United States)

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.
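
    The hierarchical inference loop described above, a prior belief generating a prediction, the mismatch with sensory input producing a prediction error, and the error (weighted by its precision) updating the belief, can be rendered as a toy computation. The Gaussian precision-weighting scheme below is a standard textbook form, not a model from this article.

```python
# Toy precision-weighted belief update, the core arithmetic of hierarchical
# predictive coding. Precisions here are illustrative constants.

def infer(belief, sensory_input, prior_precision=1.0, sensory_precision=4.0):
    error = sensory_input - belief                      # prediction error
    gain = sensory_precision / (sensory_precision + prior_precision)
    return belief + gain * error                        # precision-weighted step

belief = 0.0
for _ in range(20):                                     # repeated evidence of 1.0
    belief = infer(belief, 1.0)
print(round(belief, 3))                                 # converges to the evidence
```

    On the predictive-coding account of schizophrenia sketched in the abstract, the pathology lies in the gain term: mis-weighting sensory precision relative to prior precision makes beliefs either too rigid or too driven by noisy input.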

  11. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    2009-08-01

    Full Text Available We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously-published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  12. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights:
    • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution.
    • Open literature and in-house experimental data to quantify ASSERT-PV predictions.
    • Model changes assessed against vertical and horizontal flow experiments.
    • Improvement of flow-distribution predictions under CANDU-relevant conditions.

    Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow distribution in horizontal fuel channels containing CANDU bundles.

  13. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights:
    • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution.
    • Open literature and in-house experimental data to quantify ASSERT-PV predictions.
    • Model changes assessed against vertical and horizontal flow experiments.
    • Improvement of flow-distribution predictions under CANDU-relevant conditions.

    Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow distribution in horizontal fuel channels containing CANDU bundles.

  14. Evolving a Dynamic Predictive Coding Mechanism for Novelty Detection

    OpenAIRE

    Haggett, Simon J.; Chu, Dominique; Marshall, Ian W.

    2007-01-01

    Novelty detection is a machine learning technique which identifies new or unknown information in data sets. We present our current work on the construction of a new novelty detector based on a dynamical version of predictive coding. We compare three evolutionary algorithms, a simple genetic algorithm, NEAT and FS-NEAT, for the task of optimising the structure of an illustrative dynamic predictive coding neural network to improve its performance over stimuli from a number of artificially gener...

  15. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    Science.gov (United States)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  16. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    International Nuclear Information System (INIS)

    Geng, S.M.; Tew, R.C.

    1994-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement

  17. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN, were examined to determine whether the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared with the data that were used to develop the model. In addition, a brief literature search was performed to determine whether more recent data suitable for model comparison have become available since the original model development.

  18. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
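
    The forward-adaptive linear prediction underlying CELP-style schemes like the one above can be sketched in a few lines: fit AR coefficients to a signal block by least squares, predict each sample from its predecessors, and keep the residual, which is what the excitation codebook would then have to encode. The model order and the test signal below are illustrative, not parameters from the paper.

```python
# Forward-adaptive AR prediction sketch: the residual after prediction is
# the small signal a CELP-style coder must encode. Order/data are illustrative.
import numpy as np

def ar_predict_residual(x, order=2):
    """Fit AR(order) by least squares; return the prediction residual."""
    # Column k holds x[n-1-k] for n = order .. len(x)-1.
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ a

n = np.arange(64)
x = np.sin(0.3 * n)                  # smooth, highly predictable test signal
res = ar_predict_residual(x)
print(res.std() < 0.01 * x.std())    # residual energy far below signal energy
```

    The compression gain comes from this energy gap: quantizing the low-energy residual costs far fewer bits, at a given quality, than quantizing the raw samples.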

  19. Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?

    Science.gov (United States)

    Heilbron, Micha; Chait, Maria

    2017-08-04

    Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  20. Real coded genetic algorithm for fuzzy time series prediction

    Science.gov (United States)

    Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.

    2017-10-01

    Genetic Algorithm (GA) forms a subset of evolutionary computing, a rapidly growing area of Artificial Intelligence (A.I.). Some variants of GA are binary GA, real GA, messy GA, micro GA, saw-tooth GA, and differential evolution GA. This research article presents a real-coded GA for predicting enrollments of the University of Alabama. The University of Alabama enrollment data form a fuzzy time series. Here, fuzzy logic is used to predict enrollments and a genetic algorithm optimizes the fuzzy intervals. Results are compared with the works of other eminent authors and found satisfactory, indicating that real-coded GAs are fast and accurate.
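
    A real-coded GA of the kind mentioned above represents each chromosome as a vector of real numbers rather than a bit string, typically with arithmetic (blend) crossover and Gaussian mutation. The sketch below optimizes a toy objective, not the paper's fuzzy-interval problem, and all operator parameters are illustrative assumptions.

```python
# Minimal real-coded GA sketch: real-valued chromosomes, truncation selection,
# arithmetic (blend) crossover, Gaussian mutation. Toy objective; parameters
# are illustrative, not from the paper.
import random

random.seed(0)  # deterministic run for illustration

def real_coded_ga(fitness, dim=2, pop_size=30, gens=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            w = random.random()                     # arithmetic (blend) crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            child = [min(hi, max(lo, g + random.gauss(0.0, 0.1)))  # mutation
                     for g in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = real_coded_ga(lambda v: sum(g * g for g in v))  # minimize sum of squares
print(sum(g * g for g in best))                        # near 0 after 100 gens
```

    For the fuzzy time-series application, the chromosome would instead encode the interval boundaries of the universe of discourse, with fitness measured by the resulting forecast error.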

  1. A predictive coding account of bistable perception - a model-based fMRI study.

    Science.gov (United States)

    Weilnhammer, Veith; Stuke, Heiner; Hesselmann, Guido; Sterzer, Philipp; Schmack, Katharina

    2017-05-01

    In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together, our current work
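
    The transition mechanism described above (residual evidence for the suppressed percept accumulating as a prediction error until it forces a perceptual switch) can be caricatured in a few lines. The accumulation rate and threshold below are illustrative, not fitted parameters from the study.

```python
# Toy accumulator for bistable perception: prediction error from residual
# evidence for the suppressed percept builds up until it triggers a switch.
# Rate and threshold are illustrative choices.

def simulate_transitions(steps=2000, residual=0.03125, threshold=1.0):
    percept, error, switches = 0, 0.0, []
    for t in range(steps):
        error += residual              # residual evidence for the other percept
        if error >= threshold:         # prediction error forces a transition
            percept = 1 - percept
            error = 0.0
            switches.append(t)
    return switches

switches = simulate_transitions()
print(len(switches))
```

    A deterministic accumulator like this produces perfectly regular alternation; the paper's full model adds noise, which is what yields the characteristic skewed distribution of dominance durations seen in real bistable perception.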

  2. A predictive coding account of bistable perception - a model-based fMRI study.

    Directory of Open Access Journals (Sweden)

    Veith Weilnhammer

    2017-05-01

    Full Text Available In bistable vision, subjective perception wavers between two interpretations of a constant ambiguous stimulus. This dissociation between conscious perception and sensory stimulation has motivated various empirical studies on the neural correlates of bistable perception, but the neurocomputational mechanism behind endogenous perceptual transitions has remained elusive. Here, we recurred to a generic Bayesian framework of predictive coding and devised a model that casts endogenous perceptual transitions as a consequence of prediction errors emerging from residual evidence for the suppressed percept. Data simulations revealed close similarities between the model's predictions and key temporal characteristics of perceptual bistability, indicating that the model was able to reproduce bistable perception. Fitting the predictive coding model to behavioural data from an fMRI-experiment on bistable perception, we found a correlation across participants between the model parameter encoding perceptual stabilization and the behaviourally measured frequency of perceptual transitions, corroborating that the model successfully accounted for participants' perception. Formal model comparison with established models of bistable perception based on mutual inhibition and adaptation, noise or a combination of adaptation and noise was used for the validation of the predictive coding model against the established models. Most importantly, model-based analyses of the fMRI data revealed that prediction error time-courses derived from the predictive coding model correlated with neural signal time-courses in bilateral inferior frontal gyri and anterior insulae. Voxel-wise model selection indicated a superiority of the predictive coding model over conventional analysis approaches in explaining neural activity in these frontal areas, suggesting that frontal cortex encodes prediction errors that mediate endogenous perceptual transitions in bistable perception. 
Taken together

  3. Modification in the FUDA computer code to predict fuel performance at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Das, M; Arunakumar, B V; Prasad, P N [Nuclear Power Corp., Mumbai (India)

    1997-08-01

The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to the prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by a finite element technique. (author). 15 refs, 1 fig.

  4. Modification in the FUDA computer code to predict fuel performance at high burnup

    International Nuclear Information System (INIS)

    Das, M.; Arunakumar, B.V.; Prasad, P.N.

    1997-01-01

The computer code FUDA (FUel Design Analysis) participated in the blind exercises organized by the IAEA CRP (Co-ordinated Research Programme) on FUMEX (Fuel Modelling at Extended Burnup). While the code predictions compared well with the experiments at Halden under various parametric and operating conditions, the fission gas release and fission gas pressure were found to be slightly over-predicted, particularly at high burnups. In view of the results of the 6 FUMEX cases, the main models and submodels of the code were reviewed and necessary improvements were made. The new version of the code, FUDA MOD 2, is now able to predict fuel performance parameters for burnups up to 50000 MWD/TeU. The validation field of the code has been extended to the prediction of thorium oxide fuel performance. An analysis of local deformations at pellet interfaces and near the end caps is carried out considering the hourglassing of the pellet by a finite element technique. (author). 15 refs, 1 fig.

  5. Improving performance of single-path code through a time-predictable memory hierarchy

    DEFF Research Database (Denmark)

    Cilku, Bekim; Puffitsch, Wolfgang; Prokesch, Daniel

    2017-01-01

...a time-predictable memory hierarchy with a prefetcher that exploits the predictability of execution traces in single-path code to speed up code execution. The new memory hierarchy reduces both the cache-miss penalty time and the cache-miss rate on the instruction cache. The benefit of the approach is demonstrated through...

  6. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to de- velop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...

  7. Predictive coding of music--brain responses to rhythmic incongruity.

    Science.gov (United States)

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG-components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic pre-disposition in experts which allows for a more precise rhythmic prediction.

  8. Predictive Coding Strategies for Developmental Neurorobotics

    Science.gov (United States)

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction-error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move on to more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function. PMID:22586416

  9. Predictive Coding Strategies for Developmental Neurorobotics

    Directory of Open Access Journals (Sweden)

    Jun-Cheol ePark

    2012-05-01

Full Text Available In recent years, predictive coding strategies have been proposed as a possible way in which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper we present a potpourri of developmental neurorobotics experiments in which minimalist prediction-error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move on to more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function.

  10. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
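The prediction-error update of stimulus familiarity described above can be illustrated with a minimal delta rule. The learning rate and the unit-valued observation are illustrative assumptions, not the fitted model from the paper.

```python
# Delta-rule sketch of familiarity learning: the familiarity estimate moves
# toward each observation by a fraction of the prediction error. The learning
# rate (0.3) and the constant observation (1.0) are illustrative assumptions.

def update_familiarity(familiarity, observation, lr=0.3):
    prediction_error = observation - familiarity   # error drives learning
    return familiarity + lr * prediction_error

familiarity = 0.0
trace = []
for _ in range(20):                  # repeated exposures to the same face
    familiarity = update_familiarity(familiarity, 1.0)
    trace.append(familiarity)
```

As exposures accumulate, the prediction error (the model's learning signal) decays toward zero, mirroring the transition from unfamiliar to familiar that the fMRI analysis tracks in the fusiform face area.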

  11. Establishment of the code for prediction of waste volume in NPP decommissioning

    International Nuclear Information System (INIS)

    Cho, W. H.; Park, S. K.; Choi, Y. D.; Kim, I. S.; Moon, J. K.

    2013-01-01

In practice, decommissioning waste volume can be estimated appropriately by finding the differences between prediction and actual operation and considering operational problems or supplementary matters. Thus, in countries with decommissioning experience such as the U.S. or Japan, the decommissioning waste volume is predicted on the basis of experience from their own decommissioning projects. Because of the contamination caused by radioactive material, decontamination activities and the management of radioactive waste must be considered in the decommissioning of a nuclear facility, unlike an ordinary plant or facility. As decommissioning activities are performed repeatedly, data for similar activities accumulate, and an optimal strategy can be achieved by comparison with the predicted strategy. Therefore, a variety of decommissioning experience is most important. In Korea, there are as yet no data on the decommissioning of commercial nuclear power plants. However, KAERI has accumulated basic decommissioning data for nuclear facilities through the decommissioning of a research reactor (KRR-2) and a uranium conversion plant (UCP), and DECOMMIS (DECOMMissioning Information Management System) was developed to provide and manage the whole data of the decommissioning projects. Two codes, the FAC code and the WBS code, were established in this process. The FAC code classifies the decommissioning targets of a nuclear facility, while the WBS code classifies each decommissioning activity. Two codes were created because the codes used in DEFACS (Decommissioning Facility Characterization management System) and DEWOCS (Decommissioning Work-unit productivity Calculation System) differ from each other, each classified for its own purpose: DEFACS, which manages the facility, needs a code that categorizes facility characteristics, while DEWOCS, which calculates unit productivity, needs a code that categorizes decommissioning waste volume.

  12. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

Full Text Available Almost all existing approaches to video coding exploit temporal redundancy by block-matching-based motion estimation and compensation. Despite its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward-adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves a smaller MSE than a full-search, quarter-pel block matching algorithm (BMA), without the need to transmit any overhead.
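The core idea, predicting each pixel from its causal past with locally re-estimated least-square coefficients so that no coefficients need to be transmitted, can be sketched as below. The three-neighbour set, the training-window size and the synthetic frame are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Backward-adaptive least-square prediction (LSP) sketch: predict a pixel as
# a linear combination of causal neighbours, with weights re-estimated from a
# causal training window, so encoder and decoder can derive them identically.

rng = np.random.default_rng(0)
# synthetic "frame" with strong horizontal correlation
frame = np.cumsum(rng.normal(size=(32, 32)), axis=1)

def lsp_predict(img, r, c, win=6):
    """Predict img[r, c] from causal neighbours (left, top, top-left)."""
    rows, cols = [], []
    for i in range(max(1, r - win), r + 1):
        for j in range(max(1, c - win), c + 1):
            if i == r and j >= c:      # keep the training window strictly causal
                break
            rows.append([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
            cols.append(img[i, j])
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(cols), rcond=None)
    neigh = np.array([img[r, c - 1], img[r - 1, c], img[r - 1, c - 1]])
    return float(neigh @ a)

pred = lsp_predict(frame, 16, 16)
err = abs(pred - frame[16, 16])
```

Because the training window lies entirely in the causal past, the decoder can re-run the same least-squares fit and needs no transmitted coefficients, which is the backward-adaptive property the abstract describes.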

  13. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  14. Sonic boom predictions using a modified Euler code

    Science.gov (United States)

    Siclari, Michael J.

    1992-04-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  15. Void fraction prediction of NUPEC PSBT tests by CATHARE code

    International Nuclear Information System (INIS)

    Del Nevo, A.; Michelotti, L.; Moretti, F.; Rozzia, D.; D'Auria, F.

    2011-01-01

The current generation of thermal-hydraulic system codes benefits from about sixty years of experiments and forty years of development and is considered a mature tool for providing best-estimate descriptions of phenomena and detailed reactor system representations. However, there are continuous needs for checking the code capabilities in representing nuclear systems, for drawing attention to their weak points, and for identifying models which need to be refined for best-estimate calculations. Prediction of void fraction and Departure from Nucleate Boiling (DNB) in system thermal-hydraulics is currently based on empirical approaches. The database compiled by the Nuclear Power Engineering Corporation (NUPEC), Japan, addresses these issues. It is suitable for supporting the development of new computational tools based on more mechanistic approaches (i.e. three-field codes, two-phase CFD, etc.) as well as for validating the current generation of thermal-hydraulic system codes. Selected experiments belonging to this database are used for the OECD/NRC PSBT benchmark. The paper reviews the activity carried out with the CATHARE2 code on the basis of the subchannel (four test sections) and rod bundle (different axial power profiles and test sections) experiments available in the database in steady-state and transient conditions. The results demonstrate the accuracy of the code in predicting the void fraction in different thermal-hydraulic conditions. The tests are performed varying the pressure, coolant temperature, mass flow and power. Sensitivity analyses are carried out addressing the nodalization effect and the influence of the initial and boundary conditions of the tests. (author)

  16. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    Science.gov (United States)

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses are observed in the early visual area that correlate with the perception, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
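The hierarchical predictive-coding scheme this model builds on can be reduced to a toy loop in which a higher-level cause vector is refined by bottom-up prediction errors. The matrix sizes, the orthonormal generative weights and the step size below are illustrative assumptions, not the three-level network of the study.

```python
import numpy as np

# Toy predictive-coding estimator in the spirit of Rao & Ballard: a higher
# level holds a cause vector r, predicts the input through a fixed generative
# matrix U, and r is refined by gradient steps on the squared prediction error.

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(8, 3)))   # orthonormal generative weights
r_true = np.array([1.0, -0.5, 2.0])            # hidden cause behind the input
x = U @ r_true                                 # noiseless sensory input

r = np.zeros(3)                                # initial estimate of the cause
for _ in range(50):
    error = x - U @ r                          # bottom-up prediction error
    r = r + 0.5 * U.T @ error                  # error-driven belief update

residual = np.linalg.norm(x - U @ r)           # shrinks toward zero
```

In this scheme the sustained activity of the "error units" is exactly the residual evidence the level above has not yet explained, which is the quantity the filling-in account tracks at the blind-spot representation.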

  17. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. This is based on a subband processing system, where, in each subband, the signal is modeled as an amplitude modulated sum of sinusoids. The envelopes are estimated using frequency......-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  18. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang

    2009-01-01

    We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts. It considers the encoder a mechanism that produces signal measurements from which the decoder estimates the original...... signal. The method is based on linear predictive coding and Kalman estimation at the decoder. We employ a novel encoder state-space representation with a linear quantization noise model. The encoder is represented by the Kalman measurement at the decoder. The presented method designs the encoder...... and decoder offline through an iterative algorithm based on closed-form minimization of the trace of the decoder state error covariance. The design method is shown to provide considerable performance gains, when the transmitted quantized prediction errors are subject to loss, in terms of signal-to-noise ratio...
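The decoder-side idea, treating received quantized samples as Kalman measurements and predicting through erasures, can be miniaturized as below. The AR(1) coefficient, the additive quantization-noise model, the noise variances and the erasure pattern are all illustrative assumptions, not the paper's design procedure.

```python
import random

# Sketch: an AR(1) source is "quantized" (modelled as additive noise) and sent
# over an erasure channel; the decoder runs a scalar Kalman filter that
# predicts through erasures and corrects whenever a sample arrives.

random.seed(0)
a, q = 0.95, 1.0          # AR(1) coefficient, process-noise variance
r_noise = 0.1             # variance of the quantization-noise model

# simulate the source and its measurements, erasing every 4th sample
x, xs, ys = 0.0, [], []
for t in range(200):
    x = a * x + random.gauss(0.0, q ** 0.5)
    xs.append(x)
    ys.append(None if t % 4 == 3 else x + random.gauss(0.0, r_noise ** 0.5))

# decoder-side Kalman filter
xhat, p, est = 0.0, 1.0, []
for y in ys:
    xhat, p = a * xhat, a * a * p + q          # time update (prediction)
    if y is not None:                          # measurement update if received
        k = p / (p + r_noise)
        xhat, p = xhat + k * (y - xhat), (1 - k) * p
    est.append(xhat)

mse = sum((e - s) ** 2 for e, s in zip(est, xs)) / len(xs)
naive_mse = sum(s ** 2 for s in xs) / len(xs)   # "decode as zero" baseline
```

On erased samples the filter coasts on the source model instead of outputting nothing, which is the robustness property the abstract claims for loss-afflicted prediction errors.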

  19. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    Science.gov (United States)

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. Copyright © 2014 the authors.
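A differential-equation model of divisive normalization of the general form tau * dR/dt = -R + V / (sigma + sum(V)) can be integrated with forward Euler to show the contextual suppression the abstract describes; all parameter values below are illustrative, not the fitted values from the paper.

```python
# Forward-Euler integration of a dynamic divisive-normalization equation:
# tau * dR/dt = -R + V_i / (sigma + sum(V)). Parameters are illustrative.

def simulate(values, tau=0.05, sigma=1.0, dt=0.001, t_end=1.0):
    rates = [0.0] * len(values)
    history = []
    for _ in range(int(t_end / dt)):
        denom = sigma + sum(values)             # normalization pool
        rates = [r + dt / tau * (-r + v / denom)
                 for r, v in zip(rates, values)]
        history.append(list(rates))
    return history

alone = simulate([10.0, 0.0])        # target value presented alone
paired = simulate([10.0, 10.0])      # target plus a competing value
```

At steady state the same target value yields a lower rate when a second valued option joins the normalization pool (10/11 alone versus 10/21 paired), a simple instance of the context-dependent value code discussed above.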

  20. Evaluation of the MMCLIFE 3.0 code in predicting crack growth in titanium aluminide composites

    International Nuclear Information System (INIS)

    Harmon, D.; Larsen, J.M.

    1999-01-01

    Crack growth and fatigue life predictions made with the MMCLIFE 3.0 code are compared to test data for unidirectional, continuously reinforced SCS-6/Ti-14Al-21Nb (wt pct) composite laminates. The MMCLIFE 3.0 analysis package is a design tool capable of predicting strength and fatigue performance in metal matrix composite (MMC) laminates. The code uses a combination of micromechanic lamina and macromechanic laminate analyses to predict stresses and uses linear elastic fracture mechanics to predict crack growth. The crack growth analysis includes a fiber bridging model to predict the growth of matrix flaws in 0 degree laminates and is capable of predicting the effects of interfacial shear stress and thermal residual stresses. The code has also been modified to include edge-notch flaws in addition to center-notch flaws. The model was correlated with constant amplitude, isothermal data from crack growth tests conducted on 0- and 90 degree SCS-6/Ti-14-21 laminates. Spectrum fatigue tests were conducted, which included dwell times and frequency effects. Strengths and areas for improvement for the analysis are discussed

  1. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2 s, 6 s and 18 s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies: multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5 s, the geometric mean of 2 s and 6 s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  2. Quantitative accuracy assessment of thermalhydraulic code predictions with SARBM

    International Nuclear Information System (INIS)

    Prosek, A.

    2001-01-01

In recent years, the nuclear reactor industry has focused significant attention on nuclear reactor systems code accuracy and uncertainty issues. A few methods suitable for quantifying the accuracy of thermalhydraulic code calculations were proposed and applied in the past. In this study, a Stochastic Approximation Ratio Based Method (SARBM) was adapted and proposed for accuracy quantification. The objective of the study was to qualify the SARBM. The study compares the accuracy obtained by the SARBM with the results obtained by the widely used Fast Fourier Transform Based Method (FFTBM). The methods were applied to RELAP5/MOD3.2 code calculations of various BETHSY experiments. The obtained results showed that the SARBM was able to satisfactorily predict the accuracy of the calculated trends, as judged by visually comparing plots and by comparing the results with the qualified FFTBM. The analysis also showed that the new figure-of-merit, called the accuracy factor (AF), is more convenient than the stochastic approximation ratio for combining single-variable accuracies into a total accuracy. The accuracy results obtained for the selected tests suggest that the acceptability factors for the SAR method were reasonably defined. The results also indicate that AF is a useful quantitative measure of accuracy. (author)
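The FFT-based accuracy quantification against which SARBM is compared can be sketched with the commonly published average-amplitude (AA) figure of merit: the ratio of the summed FFT magnitudes of the code-experiment error to those of the experimental signal, where AA = 0 means perfect agreement. The signals below are synthetic stand-ins for a calculated and a measured trend.

```python
import numpy as np

# Sketch of the FFTBM average-amplitude metric: AA = sum|FFT(calc - exp)| /
# sum|FFT(exp)|. Smaller AA means the calculation tracks the experiment better.

def average_amplitude(calc, exp):
    err = np.fft.rfft(np.asarray(calc) - np.asarray(exp))
    ref = np.fft.rfft(np.asarray(exp))
    return np.abs(err).sum() / np.abs(ref).sum()

t = np.linspace(0.0, 10.0, 512)
exp_sig = np.exp(-0.3 * t) * np.cos(2.0 * t)    # "measured" trend
good_calc = exp_sig + 0.02 * np.sin(5.0 * t)    # close calculation
poor_calc = 0.5 * exp_sig                       # badly scaled calculation

aa_good = average_amplitude(good_calc, exp_sig)
aa_poor = average_amplitude(poor_calc, exp_sig)  # exactly 0.5 by linearity
```

An acceptability threshold on AA (per variable, then combined into a total accuracy) is how such methods turn the spectral comparison into a pass/fail accuracy statement.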

  3. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    Science.gov (United States)

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction-type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions (16×8, 8×16 and 8×8) by referring to the best temporal prediction type of the 16×16 partition. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove unnecessary BI calculations. Moreover, for block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden of calculating the BI prediction. Compared to the JSVM 9.11 software, our method reduces the encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss. © 2011 IEEE

  4. DYMEL code for prediction of dynamic stability limits in boilers

    International Nuclear Information System (INIS)

    Deam, R.T.

    1980-01-01

Theoretical and experimental studies of hydrodynamic instability in boilers were undertaken to resolve the uncertainties of the predictive methods that existed at the time the first Advanced Gas Cooled Reactor (AGR) plant was commissioned. The experiments were conducted on a full-scale electrical simulation of an AGR boiler and revealed inadequacies in the existing methods. As a result, a new computer code called DYMEL was developed, based on linearisation and Fourier/Laplace transformation of the one-dimensional boiler equations in both time and space. Besides giving good agreement with local experimental data, the DYMEL code has since shown agreement with stability data from the plant, sodium-heated helical tubes, a gas-heated helical tube and an electrically heated U-tube. The code is now used widely within the U.K. (author)

  5. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  6. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  7. Development of computer code for determining prediction parameters of radionuclide migration in soil layer

    International Nuclear Information System (INIS)

    Ogawa, Hiromichi; Ohnuki, Toshihiko

    1986-07-01

A computer code (MIGSTEM-FIT) has been developed to determine the prediction parameters (retardation factor, water flow velocity, dispersion coefficient, etc.) of radionuclide migration in a soil layer from the concentration distribution of the radionuclide in the soil layer or in the effluent. In this code, the solution of the predicting equation for radionuclide migration is compared with the measured concentration distribution, and the most adequate parameter values are determined by the flexible tolerance method. The validity of the finite-difference method, one of the methods used to solve the predicting equation, was confirmed by comparison with the analytical solution, and the validity of the fitting method was confirmed by fitting a concentration distribution calculated from known parameters. An examination of the errors showed that the error in a parameter obtained using this code was smaller than that of the measured concentration distribution. (author)
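The fitting task such a code solves can be miniaturized: generate a concentration profile from the 1-D advection-dispersion solution for continuous injection, then recover the transport parameters by minimizing the squared misfit. The analytical form, the coarse grid search (standing in for the flexible tolerance method) and all numerical values below are illustrative assumptions, not MIGSTEM-FIT itself.

```python
from math import erfc, sqrt

# Toy parameter-fitting sketch: recover effective velocity v and dispersion
# coefficient d from a synthetic concentration profile by grid search.

def conc(x, t, v, d):
    """Relative concentration C/C0 for continuous injection at x = 0."""
    return 0.5 * erfc((x - v * t) / (2.0 * sqrt(d * t)))

T = 50.0                               # observation time
xs = [i * 2.0 for i in range(1, 40)]   # sampling positions along the column
observed = [conc(x, T, v=0.8, d=1.5) for x in xs]   # "measured" profile

best, best_err = None, float("inf")
for iv in range(1, 40):                # grid search over (v, d)
    for idd in range(1, 40):
        v, d = iv * 0.05, idd * 0.1
        err = sum((conc(x, T, v, d) - o) ** 2 for x, o in zip(xs, observed))
        if err < best_err:
            best, best_err = (v, d), err

v_fit, d_fit = best                    # recovers (0.8, 1.5)
```

Fitting against a profile generated from known parameters, as done here, is exactly the self-consistency check the abstract describes for validating the fitting method.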

  8. RELAP5/MOD2 code modifications to obtain better predictions for the once-through steam generator

    International Nuclear Information System (INIS)

    Blanchat, T.; Hassan, Y.

    1989-01-01

The steam generator is a major component in pressurized water reactors. Predicting the response of a steam generator during both steady-state and transient conditions is essential in studying the thermal-hydraulic behavior of a nuclear reactor coolant system. Therefore, many analytical and experimental efforts have been performed to investigate the thermal-hydraulic behavior of steam generators during operational and accident transients. The objective of this study is to predict the behavior of the secondary side of the once-through steam generator (OTSG) using the RELAP5/MOD2 computer code. Steady-state conditions were predicted with the current version of the RELAP5/MOD2 code and compared with experimental plant data. The code consistently underpredicts the degree of superheat. A new interface friction model has been implemented in a modified version of RELAP5/MOD2. This modification, along with changes to the flow regime transition criteria and the heat transfer correlations, correctly predicts the degree of superheat and matches plant data.

  9. MASTR: multiple alignment and structure prediction of non-coding RNAs using simulated annealing

    DEFF Research Database (Denmark)

    Lindgreen, Stinus; Gardner, Paul P; Krogh, Anders

    2007-01-01

MOTIVATION: As more non-coding RNAs are discovered, the importance of methods for RNA analysis increases. Since the structure of ncRNA is intimately tied to the function of the molecule, programs for RNA structure prediction are necessary tools in this growing field of research. Furthermore, it is known that RNA structure is often evolutionarily more conserved than sequence. However, few existing methods are capable of simultaneously considering multiple sequence alignment and structure prediction. RESULT: We present a novel solution to the problem of simultaneous structure prediction … function that considers sequence conservation, covariation and basepairing probabilities. The results show that the method is very competitive to similar programs available today, both in terms of accuracy and computational efficiency. AVAILABILITY: Source code available from http://mastr.binf.ku.dk/
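
The simulated-annealing core of such a method can be sketched generically. The toy objective below merely stands in for MASTR's combined conservation/covariation/base-pairing score, and the neighborhood move, cooling schedule and all constants are assumptions for illustration only.

```python
import math
import random

def anneal(cost, state, neighbor, t0=1.0, t_min=1e-3, alpha=0.95, seed=1):
    # Generic simulated-annealing loop of the kind MASTR applies to
    # alignment/structure states; cost() is a stand-in objective here.
    rng = random.Random(seed)
    best, best_c = state, cost(state)
    cur, cur_c, t = state, best_c, t0
    while t > t_min:
        cand = neighbor(cur, rng)
        c = cost(cand)
        # Always accept improvements; accept worse moves with a
        # Boltzmann probability that shrinks as the temperature drops.
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha
    return best, best_c

# Toy objective: distance to 42 stands in for the paper's scoring function.
cost = lambda s: abs(s - 42)
step = lambda s, rng: s + rng.choice([-3, -1, 1, 3])
state, value = anneal(cost, 0, step)
```

For a real alignment/structure search, `state` would be the alignment plus base-pairing assignment and `neighbor` would perturb gaps or pairs.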

  10. TRANSENERGY S: computer codes for coolant temperature prediction in LMFBR cores during transient events

    International Nuclear Information System (INIS)

    Glazer, S.; Todreas, N.; Rohsenow, W.; Sonin, A.

    1981-02-01

    This document is intended as a user/programmer manual for the TRANSENERGY-S computer code. The code represents an extension of the steady state ENERGY model, originally developed by E. Khan, to predict coolant and fuel pin temperatures in a single LMFBR core assembly during transient events. Effects which may be modelled in the analysis include temporal variation in gamma heating in the coolant and duct wall, rod power production, coolant inlet temperature, coolant flow rate, and thermal boundary conditions around the single assembly. Numerical formulations of energy equations in the fuel and coolant are presented, and the solution schemes and stability criteria are discussed. A detailed description of the input deck preparation is presented, as well as code logic flowcharts, and a complete program listing. TRANSENERGY-S code predictions are compared with those of two different versions of COBRA, and partial results of a 61 pin bundle test case are presented

  11. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction

    Directory of Open Access Journals (Sweden)

    Kiran Sree Pokkuluri

    2014-01-01

Protein coding and promoter region predictions are very important challenges of bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods are being refined, to predict the two regions separately; still, there is scope for improvement. We propose a classifier built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162, and with MMCRI datasets for DNA sequences of lengths 252 and 354. The proposed classifier is trained and tested with promoter sequences from the DBTSS dataset (Yamashita et al., 2006) and nonpromoters from the EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of promoter and protein coding region predictions are 0.89 and 0.92, respectively.

  12. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has been recently updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely-used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)
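
The Maxwellian-averaged rate such codes tabulate is the integral ⟨σv⟩ = sqrt(8/(πμ)) (kT)^(-3/2) ∫ σ(E) E exp(-E/kT) dE. A minimal numerical sketch follows, with a toy constant cross section and dimensionless units; this is an illustration of the average itself, not of TALYS.

```python
import math

def maxwellian_rate(sigma, kT, mu, emax_factor=30.0, n=20000):
    # <sigma*v> = sqrt(8/(pi*mu)) * (kT)**(-3/2) * Int sigma(E)*E*exp(-E/kT) dE,
    # the Maxwellian average a statistical-model code evaluates per reaction.
    # sigma(E) is user-supplied; units are left dimensionless here.
    emax = emax_factor * kT                      # truncate the exponential tail
    de = emax / n
    integral = 0.0
    for i in range(n + 1):
        e = i * de
        w = 0.5 if i in (0, n) else 1.0          # trapezoidal-rule weights
        integral += w * sigma(e) * e * math.exp(-e / kT) * de
    return math.sqrt(8.0 / (math.pi * mu)) * kT ** -1.5 * integral

# Sanity check: for a constant cross section the average collapses to
# sigma times the mean Maxwell-Boltzmann speed sqrt(8*kT/(pi*mu)).
rate = maxwellian_rate(lambda e: 1.0, kT=1.0, mu=1.0)
print(rate)  # ≈ sqrt(8/pi) ≈ 1.5958
```

A realistic σ(E) would come from the statistical-model calculation itself; only the averaging step is shown here.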

  13. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    Science.gov (United States)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together will calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To do a calculation V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows for a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts, the first part discusses the technical documentation of the program as improved by Pratt & Whitney. The second part is a user's manual which describes how input files are created and how the code is run.

  14. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    Science.gov (United States)

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  15. Information-Theoretic Evidence for Predictive Coding in the Face-Processing System.

    Science.gov (United States)

    Brodski-Guerniero, Alla; Paasch, Georg-Friedrich; Wollstadt, Patricia; Özdemir, Ipek; Lizier, Joseph T; Wibral, Michael

    2017-08-23

Predictive coding suggests that the brain infers the causes of its sensations by combining sensory evidence with internal predictions based on available prior knowledge. However, the neurophysiological correlates of (pre)activated prior knowledge serving these predictions are still unknown. Based on the idea that such preactivated prior knowledge must be maintained until needed, we measured the amount of maintained information in neural signals via the active information storage (AIS) measure. AIS was calculated on whole-brain beamformer-reconstructed source time courses from MEG recordings of 52 human subjects during the baseline of a Mooney face/house detection task. Preactivation of prior knowledge for faces showed as α-band-related and β-band-related AIS increases in content-specific areas; these AIS increases were behaviorally relevant in the brain's fusiform face area. Further, AIS allowed decoding of the cued category on a trial-by-trial basis. Our results support accounts indicating that activated prior knowledge and the corresponding predictions are signaled in low-frequency activity. Our perception depends not only on the information our eyes/retina and other sensory organs receive from the outside world, but also on information already present in our brains, such as prior knowledge about specific situations or objects. A currently popular theory in neuroscience, predictive coding theory, suggests that this prior knowledge is used by the brain to form internal predictions about upcoming sensory information. However, neurophysiological evidence for this hypothesis is rare, mostly because this kind of evidence requires strong a priori assumptions about the specific predictions the brain makes and the brain areas involved. Using a novel, assumption-free approach, we find that face-related prior knowledge and the derived predictions are represented in low-frequency brain activity.
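
The AIS idea can be sketched on a discrete toy series with a plug-in estimator of the mutual information between a signal's immediate past and its present sample; the study itself uses continuous-valued estimators on MEG source signals, so everything below is a simplified illustration.

```python
import math
from collections import Counter

def active_info_storage(seq, k=1):
    # Plug-in estimate of AIS = I(past_k ; present) for a discrete series,
    # a simplified analogue of the quantity computed on MEG source signals.
    pairs = [(tuple(seq[i - k:i]), seq[i]) for i in range(k, len(seq))]
    n = len(pairs)
    pj = Counter(pairs)                       # joint (past, present) counts
    pp = Counter(p for p, _ in pairs)         # past-state counts
    pc = Counter(c for _, c in pairs)         # present-state counts
    return sum(cnt / n * math.log2(cnt * n / (pp[p] * pc[c]))
               for (p, c), cnt in pj.items())

# A perfectly alternating signal stores a full bit about its next sample.
print(active_info_storage([0, 1] * 50))  # ≈ 1.0 bit
```

A constant signal, by contrast, has zero AIS: its past adds no information beyond the marginal distribution.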

  16. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. aerospace industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinate (BFC) systems, high-order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, a numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  17. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    International Nuclear Information System (INIS)

    Chen, K.F.; Olson, C.A.

    1983-01-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction
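
The verification strategy above, checking a numerical integration against a closed-form result, can be illustrated on the friction pressure-drop term alone. For a homogeneous mixture heated with a chopped-sine axial flux, the quality grows as x(z) = C(1 - cos(πz/L)) and the cosine integrates to zero over the channel, giving a closed-form friction drop. All property values and numbers below are illustrative assumptions, not CONDOR input.

```python
import math

# Channel and homogeneous-mixture parameters (illustrative only).
L, D, f, G = 3.66, 0.012, 0.02, 1200.0     # length m, diameter m, -, kg/m^2/s
vf, vfg, C = 0.0011, 0.05, 0.15            # m^3/kg; C scales the exit quality

def quality(z):
    # Energy balance with q''(z) ~ sin(pi*z/L): x(z) = C*(1 - cos(pi*z/L)).
    return C * (1.0 - math.cos(math.pi * z / L))

def dp_friction_numeric(n=10000):
    # Midpoint-rule integration of f*G^2/(2*D) * v(z) over the channel,
    # with homogeneous specific volume v(z) = vf + x(z)*vfg.
    dz = L / n
    s = sum(vf + quality((i + 0.5) * dz) * vfg for i in range(n)) * dz
    return f * G ** 2 / (2.0 * D) * s

def dp_friction_analytic():
    # Int_0^L cos(pi*z/L) dz = 0, so the integral collapses to L*(vf + C*vfg).
    return f * G ** 2 / (2.0 * D) * L * (vf + C * vfg)

print(dp_friction_numeric(), dp_friction_analytic())
```

Agreement to many significant figures is the same kind of check the paper performs, only there for the full momentum equation against CONDOR.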

  18. Biocomputational prediction of small non-coding RNAs in Streptomyces

    Czech Academy of Sciences Publication Activity Database

    Pánek, Josef; Bobek, Jan; Mikulík, Karel; Basler, Marek; Vohradský, Jiří

    2008-01-01

    Roč. 9, č. 217 (2008), s. 1-14 ISSN 1471-2164 R&D Projects: GA ČR GP204/07/P361; GA ČR GA203/05/0106; GA ČR GA310/07/1009 Grant - others:XE(XE) EC Integrated Project ActinoGEN, LSHM-CT-2004-005224. Institutional research plan: CEZ:AV0Z50200510 Keywords : non-coding RNA * streptomyces * biocomputational prediction Subject RIV: IN - Informatics, Computer Science Impact factor: 3.926, year: 2008

  19. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    Science.gov (United States)

    Schutter, Dennis J L G

    2016-02-01

Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may extend our knowledge of the functional and neural mechanisms of depressive disorder and the development of novel antidepressant treatment strategies targeting the cerebellum.

  20. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter eVuust

    2014-10-01

Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain’s anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
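
The Bayesian error-minimization step at the heart of predictive coding can be sketched as a precision-weighted update of an expected inter-beat interval. This is a deliberately simplified, illustrative model (Gaussian prior and likelihood, Kalman-style gain), not the authors' own formulation, and all numbers are hypothetical.

```python
def update_expectation(prior_mean, prior_var, obs, obs_var):
    # One precision-weighted prediction-error step: the Bayesian
    # "error minimization" a predictive-coding account posits, applied
    # here to an expected inter-beat interval in milliseconds.
    k = prior_var / (prior_var + obs_var)   # gain: relative prior precision
    err = obs - prior_mean                  # prediction error
    return prior_mean + k * err, (1.0 - k) * prior_var

# A listener expecting 500 ms inter-beat intervals hears a 520 ms interval:
mean, var = update_expectation(500.0, 100.0, 520.0, 100.0)
print(mean, var)  # -> 510.0 50.0
```

Syncopation, on this toy picture, corresponds to onsets that generate large prediction errors against the metrical expectation.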

  1. PCCE: A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), here presented, introduces some novelties. In fact, it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is here described in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.
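
A minimal lumped-parameter sketch of the kind of problem PCCE addresses: one thermal node with a concentrated heat capacity receiving a rectangular power pulse and rejecting heat to coolant. Explicit time stepping is used here in place of PCCE's analytical formulation, and every parameter value is illustrative.

```python
# One thermal node: Cth*dT/dt = P(t) - hA*(T - T_cool).
# Values below are illustrative, not SPIDER or PCCE data.
Cth, hA, T_cool = 5000.0, 50.0, 30.0        # J/K, W/K, degC

def simulate(pulse_power, pulse_s, total_s, dt=0.01):
    # Explicit Euler march through a pulse followed by a cooldown phase;
    # returns the final temperature and the peak reached during the pulse.
    T, peak = T_cool, T_cool
    for i in range(int(total_s / dt)):
        P = pulse_power if i * dt < pulse_s else 0.0
        T += dt * (P - hA * (T - T_cool)) / Cth
        peak = max(peak, T)
    return T, peak

# 2 kW pulse for 10 s, then 290 s of cooldown (time constant Cth/hA = 100 s):
end, peak = simulate(2000.0, 10.0, 300.0)
print(round(peak, 2))  # ≈ 33.81
```

The short pulse relative to the 100 s thermal time constant is exactly the regime where calorimetric reconstruction of the deposited energy is delicate.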

  2. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code

  3. Novel Intermode Prediction Algorithm for High Efficiency Video Coding Encoder

    Directory of Open Access Journals (Sweden)

    Chan-seob Park

    2014-01-01

The joint collaborative team on video coding (JCT-VC) is developing the next-generation video coding standard, called high efficiency video coding (HEVC). In HEVC, there are three units in the block structure: coding unit (CU), prediction unit (PU), and transform unit (TU). The CU is the basic unit of region splitting, like the macroblock (MB). Each CU is recursively split into four equally sized blocks, starting from the tree block. In this paper, we propose a fast CU depth decision algorithm for HEVC to reduce its computational complexity. For 2N×2N PUs, the proposed method compares the rate-distortion (RD) costs and determines the depth using the compared information. Moreover, in order to speed up the encoding time, an efficient merge SKIP detection method is additionally developed based on the contextual mode information of neighboring CUs. Experimental results show that the proposed algorithm achieves an average time saving of 44.84% in the random access (RA) Main profile configuration with the HEVC test model (HM 10.0) reference software. Compared to the HM 10.0 encoder, a small BD-bitrate loss of 0.17% is observed without significant loss of image quality.
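
The two early decisions described in the abstract can be caricatured in a few lines: terminate a CU as SKIP when its neighbors are all SKIP-coded, otherwise split only when doing so lowers the total RD cost. The cost values, mode labels, and decision rule below are illustrative stand-ins, not the HM 10.0 logic.

```python
def cu_split_decision(rd_2Nx2N, rd_children_sum, neighbor_modes):
    # Toy version of the paper's two shortcuts: early merge-SKIP
    # termination from neighboring CU modes, then an RD-cost comparison
    # between coding the CU whole and splitting it into four sub-CUs.
    if neighbor_modes and all(m == "SKIP" for m in neighbor_modes):
        return "skip"                      # early termination, no recursion
    return "split" if rd_children_sum < rd_2Nx2N else "no_split"

print(cu_split_decision(100.0, 88.5, ["SKIP", "INTER"]))  # -> split
```

In a real encoder this decision is applied recursively at every depth of the coding tree, which is where the reported ~45% time saving comes from.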

  4. Modification V to the computer code, STRETCH, for predicting coated-particle behavior

    International Nuclear Information System (INIS)

    Valentine, K.H.

    1975-04-01

    Several modifications have been made to the stress analysis code, STRETCH, in an attempt to improve agreement between the calculated and observed behavior of pyrocarbon-coated fuel particles during irradiation in a reactor environment. Specific areas of the code that have been modified are the neutron-induced densification model and the neutron-induced creep calculation. Also, the capability for modeling surface temperature variations has been added. HFIR Target experiments HT-12 through HT-15 have been simulated with the modified code, and the neutron-fluence vs particle-failure predictions compare favorably with the experimental results. Listings of the modified FORTRAN IV main source program and additional FORTRAN IV functions are provided along with instructions for supplying the additional input data. (U.S.)

  5. A Predictive Coding Account of Psychotic Symptoms in Autism Spectrum Disorder

    Science.gov (United States)

    van Schalkwyk, Gerrit I.; Volkmar, Fred R.; Corlett, Philip R.

    2017-01-01

    The co-occurrence of psychotic and autism spectrum disorder (ASD) symptoms represents an important clinical challenge. Here we consider this problem in the context of a computational psychiatry approach that has been applied to both conditions--predictive coding. Some symptoms of schizophrenia have been explained in terms of a failure of top-down…

  6. Software Code Smell Prediction Model Using Shannon, Rényi and Tsallis Entropies

    Directory of Open Access Journals (Sweden)

    Aakanshi Gupta

    2018-05-01

The current era demands high-quality software in a limited time period to achieve new goals and heights. To meet user requirements, source code undergoes frequent modifications which can introduce bad smells that deteriorate the quality and reliability of the software. Source code of open source software is easily accessible by any developer and thus frequently modified. In this paper, we propose a mathematical model to predict bad smells using the concept of entropy as defined by information theory. The open-source software Apache Abdera is taken into consideration for calculating the bad smells. Bad smells are collected using a detection tool from subcomponents of the Apache Abdera project, and different entropy measures (Shannon, Rényi and Tsallis) are computed. By applying non-linear regression techniques, the bad smells that can arise in future versions of the software are predicted from the observed bad smells and entropy measures. The proposed model has been validated using goodness-of-fit parameters (prediction error, bias, variation, and root mean squared prediction error (RMSPE)). The values of the model performance statistics (R², adjusted R², mean square error (MSE) and standard error) also justify the proposed model. We have compared the results of the prediction model with the observed results on real data. The results of the model might be helpful for software development industries and future researchers.
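
The three entropy measures named above are straightforward to compute for a distribution of changes across subcomponents. The probabilities below are hypothetical, standing in for the paper's measured change distribution; note that Rényi and Tsallis both reduce to Shannon entropy in the limit of their parameter going to 1.

```python
import math

def shannon(p):
    # Shannon entropy H = -sum p_i * log2(p_i), in bits.
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi(p, a):
    # Rényi entropy H_a = log2(sum p_i^a) / (1 - a), for a != 1.
    return math.log2(sum(x ** a for x in p)) / (1.0 - a)

def tsallis(p, q):
    # Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), for q != 1.
    return (1.0 - sum(x ** q for x in p)) / (q - 1.0)

# Hypothetical share of code changes each subcomponent received in a release:
p = [0.5, 0.25, 0.25]
print(shannon(p), renyi(p, 2.0), tsallis(p, 2.0))
```

These scalar entropies per release are then what the paper regresses (non-linearly) against the observed bad-smell counts.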

  7. An adaptive mode-driven spatiotemporal motion vector prediction for wavelet video coding

    Science.gov (United States)

    Zhao, Fan; Liu, Guizhong; Qi, Yong

    2010-07-01

The three-dimensional subband/wavelet codecs use 5/3 filters rather than Haar filters for the motion-compensated temporal filtering (MCTF) to improve the coding gain. In order to curb the increased motion vector rate, an adaptive motion-mode-driven spatiotemporal motion vector prediction (AMDST-MVP) scheme is proposed. First, by making use of the direction histograms of the four motion vector fields resulting from the initial spatial motion vector prediction (S-MVP), the motion mode of the current GOP is determined according to whether fast or complex motion exists in the current GOP. The GOP-level MVP scheme is thereby chosen as either S-MVP or AMDST-MVP, where AMDST-MVP is the combination of S-MVP and temporal MVP (T-MVP). If the latter is adopted, the motion vector difference (MVD) between the neighboring MV fields and the S-MVP-resulting MV of the current block is employed to decide whether or not the MV of the co-located block in the previous frame is used for predicting the current block. Experimental results show that AMDST-MVP not only improves the coding efficiency but also reduces the computational complexity.
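
The block-level choice between the spatial and the temporal (co-located) predictor, driven by the motion-vector difference, can be sketched as below. The L1 distance, threshold value, and example vectors are illustrative assumptions, not the paper's exact criterion.

```python
def choose_predictor(spatial_mv, colocated_mv, mvd_threshold=4):
    # Sketch of the AMDST-MVP block-level test: use the co-located
    # temporal MV only when it agrees with the spatial prediction
    # (small motion-vector difference); otherwise keep the spatial MV.
    mvd = abs(spatial_mv[0] - colocated_mv[0]) + abs(spatial_mv[1] - colocated_mv[1])
    return colocated_mv if mvd <= mvd_threshold else spatial_mv

print(choose_predictor((2, 1), (3, 2)))   # small MVD -> temporal predictor (3, 2)
print(choose_predictor((2, 1), (9, -7)))  # large MVD -> spatial predictor (2, 1)
```

Gating the temporal predictor this way is what keeps the MV rate down when motion is fast or incoherent between frames.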

  8. Comparison of LIFE-4 and TEMECH code predictions with TREAT transient test data

    International Nuclear Information System (INIS)

    Gneiting, B.C.; Bard, F.E.; Hunter, C.W.

    1984-09-01

Transient tests in the TREAT reactor were performed on FFTF Reference design mixed-oxide fuel pins, most of which had received prior steady-state irradiation in the EBR-II reactor. These transient test results provide a data base for calibration and verification of fuel performance codes and for evaluation of processes that affect pin damage during transient events. This paper presents a comparison of the LIFE-4 and TEMECH fuel pin thermal/mechanical analysis codes with the results from 20 HEDL TREAT experiments, ten of which resulted in pin failure. Both the LIFE-4 and TEMECH codes provided an adequate representation of the thermal and mechanical data from the TREAT experiments. Also, a criterion for 50% probability of pin failure was developed for each code using an average cumulative damage fraction value calculated for the pins that failed. Both codes employ the two major cladding loading mechanisms of differential thermal expansion and central cavity pressurization, which were demonstrated by the test results. However, a detailed evaluation of the code predictions shows that the two code systems weight the loading mechanisms differently to reach the same end points of the TREAT transient results.
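
A cumulative damage fraction of the sort both codes evaluate is a life-fraction sum over the transient history. The rupture-life correlation and the history below are toy assumptions for illustration, not the LIFE-4 or TEMECH models.

```python
def cumulative_damage(history, time_to_failure):
    # Life-fraction rule behind a cumulative damage fraction (CDF):
    # sum over history segments of (time at condition) / (rupture life
    # at that condition). time_to_failure is a user-supplied correlation.
    return sum(dt / time_to_failure(T, stress) for dt, T, stress in history)

# Toy rupture-life correlation and transient history (illustrative only):
ttf = lambda T, s: 1e4 / (s * max(T - 600.0, 1.0))        # seconds
hist = [(5.0, 700.0, 2.0),    # 5 s at 700 degC, stress index 2
        (2.0, 800.0, 5.0)]    # 2 s at 800 degC, stress index 5
cdf = cumulative_damage(hist, ttf)
print(round(cdf, 2))  # -> 0.3
```

In the paper's usage, the CDF value averaged over the failed pins defines each code's 50%-failure-probability criterion.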

  9. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display-scalable 3D holoscopic coding approach is required. Hence, this paper presents a display-scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  10. Thought insertion as a self-disturbance: An integration of predictive coding and phenomenological approaches

    Directory of Open Access Journals (Sweden)

    Philipp Sterzer

    2016-10-01

Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from nowhere, as is already indicated by the early 20th-century phenomenological accounts of the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: Ichstörungen) of schizophrenia and used mescaline as a model psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a becoming-sensory of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error-related aberrant salience of external events that has been proposed previously, internal events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued with

  11. Coding Scheme for Assessment of Students’ Explanations and Predictions

    Directory of Open Access Journals (Sweden)

    Mihael Gojkošek

    2017-04-01

    Full Text Available In the process of analyzing students’ explanations and predictions for the interaction between a brightness enhancement film and a beam of white light, a need for an objective and reliable assessment instrument arose. Consequently, we developed a coding scheme that was mostly inspired by the rubrics for self-assessment of scientific abilities. In the paper we present the grading categories that were integrated in the coding scheme, and descriptions of the criteria used for evaluation of students’ work. We report the results of a reliability analysis of the new assessment tool and present some examples of its application.

  12. Improved predictions of nuclear reaction rates for astrophysics applications with the TALYS reaction code

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J.

    2008-01-01

    Nuclear reaction rates for astrophysics applications are traditionally determined on the basis of Hauser-Feshbach reaction codes, like MOST. These codes use simplified schemes to calculate the capture reaction cross section on a given target nucleus, not only in its ground state but also in the different thermally populated states of the stellar plasma at a given temperature. Such schemes include a number of approximations that have never been tested, such as an approximate width fluctuation correction, the neglect of delayed particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. New developments have been brought to the reaction code TALYS to estimate the Maxwellian-averaged reaction rates of astrophysics relevance. These new developments make it possible to calculate the reaction cross sections and the corresponding astrophysics rates with improved accuracy. The TALYS predictions for the thermonuclear rates of astrophysics relevance are presented and compared with those obtained with the MOST code on the basis of the same nuclear ingredients for nuclear structure properties, optical model potential, nuclear level densities and γ-ray strength. It is shown that, in particular, the pre-equilibrium process significantly influences the astrophysics rates of exotic neutron-rich nuclei. The reciprocity theorem traditionally used in astrophysics to determine photo-rates is also shown not to be valid for exotic nuclei. The predictions obtained with different nuclear inputs are also analyzed to provide an estimate of the theoretical uncertainties still affecting the reaction rate prediction far away from the experimentally known regions. (authors)

  13. Chromosome preference of disease genes and vectorization for the prediction of non-coding disease genes.

    Science.gov (United States)

    Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan

    2017-10-03

    Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases, and applies the newly vectorized data to a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors. The first is composed of 45 elements, characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the chromosome 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first one to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
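
    The entropy elements of the first sub-vector can be sketched as follows (a hypothetical illustration: the substructure labels and gene lists are invented, not taken from the paper):

    ```python
    from collections import Counter
    from math import log2

    def entropy_over_substructures(gene_locations):
        """Shannon entropy (bits) of a disease's gene distribution over
        chromosome substructures; one such entropy per substructure grouping
        would form an element of the disease vector."""
        counts = Counter(gene_locations)
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical disease A: genes concentrated on the chromosome 6 p-arm.
    h_concentrated = entropy_over_substructures(["6p"] * 8 + ["1q", "2p"])
    # Hypothetical disease B: genes spread evenly over five substructures.
    h_spread = entropy_over_substructures(["1q", "2p", "6p", "9q", "17p"] * 2)
    # A strong substructure preference yields a markedly lower entropy,
    # which is what makes the entropy vector discriminative.
    ```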

  14. Status of development of a code for predicting the migration of ground additions - MOGRA

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. MOGRA consists of computational codes that are applicable to various evaluation target systems, and can be used on personal computers. The computational code has a dynamic compartment analysis block at its core, a graphical user interface (GUI) for computation parameter settings and result displays, databases and so on. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. Compartments can be created or deleted, and the migration of environmental-load substances between compartments configured, by simple mouse operations. The system features universality and excellent expandability in the application of computations to various nuclides. (author)

  15. Experiment predictions of LOFT reflood behavior using the RELAP4/MOD6 code

    International Nuclear Information System (INIS)

    Lin, J.C.; Kee, E.J.; Grush, W.H.; White, J.R.

    1978-01-01

    The RELAP4/MOD6 computer code was used to predict the thermal-hydraulic transients for Loss-of-Fluid Test (LOFT) Loss-of-Coolant Accident (LOCA) experiments L2-2, L2-3, and L2-4. This analysis will aid in the development and assessment of analytical models used to analyze the LOCA performance of commercial power reactors. Prior to performing experiments in the LOFT facility, the experiments are modeled in counterpart tests performed in the nonnuclear Semiscale MOD 1 facility. A comparison of the analytical results with Semiscale data will verify the analytical capability of the RELAP4 code to predict the thermal-hydraulic behavior of the Semiscale LOFT counterpart tests. The analytical model and the results of analyses for the reflood portion of the LOFT LOCA experiments are described. These results are compared with the data from Semiscale.

  16. A code MOGRA for predicting and assessing the migration of ground additions

    International Nuclear Information System (INIS)

    Amano, Hikaru; Atarashi-Andoh, Mariko; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2004-01-01

    The environment should be protected from the toxic effects of not only ionizing radiation but also any other environmental load materials. The code MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. It consists of computational codes that are applicable to various evaluation target systems, and can be used on personal computers, not only for migration analysis but also for assessment of the effects of environmental load materials on living organisms. The functionality of MOGRA has been verified by applying it in analyses of the migration rates of radioactive substances from the atmosphere to soils and plants, and of flow rates into rivers. Migration of radionuclides in combinations of hypothetical land utilization areas was also verified. The system can analyze the dynamic changes of the target radionuclide's concentration in each compartment, and the fluxes from one compartment to another. The code MOGRA is complemented by a variety of databases, provided in an additional code, MOGRA-DB, which includes radionuclide decay charts, distribution coefficients between solid and liquid, transfer factors from soil to plant, transfer coefficients from feed to beef and milk, concentration factors, and age-dependent dose conversion factors for many radionuclides. Another additional code, MOGRA-MAP, can take in graphic maps such as JPEG, TIFF, BITMAP, and GIF files, and calculate the area of the target land. (author)
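
    The dynamic compartment analysis at the core of MOGRA reduces to first-order transfer between compartments. A minimal sketch (the compartments, rate constants, and function are hypothetical stand-ins, not MOGRA's actual model or interface):

    ```python
    from math import exp

    def simulate(inventory, transfer, decay, dt=0.01, t_end=10.0):
        """Explicit-Euler integration of a first-order compartment model:
        transfer[i][j] is the rate constant (1/y) from compartment i to j,
        and every compartment loses activity by radioactive decay."""
        inv = list(inventory)
        n = len(inv)
        for _ in range(int(t_end / dt)):
            flux = [[transfer[i][j] * inv[i] for j in range(n)] for i in range(n)]
            inv = [inv[i] + dt * (sum(flux[j][i] for j in range(n))
                                  - sum(flux[i][j] for j in range(n))
                                  - decay * inv[i])
                   for i in range(n)]
        return inv

    # Hypothetical two-compartment system: soil -> river at 0.2/y,
    # radioactive decay constant 0.05/y, unit initial inventory in soil.
    soil, river = simulate([1.0, 0.0], [[0.0, 0.2], [0.0, 0.0]], decay=0.05)
    # Total remaining activity should follow pure decay: exp(-0.05 * 10).
    ```

    The same structure scales to any number of compartments; a production code like MOGRA adds nuclide-specific databases and decay chains on top of this bookkeeping.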

  17. Assessment of the GOTHIC code for prediction of hydrogen flame propagation in small scale experiments

    International Nuclear Information System (INIS)

    Lee, Jin-Yong (E-mail: jinyong1@fnctech.com); Lee, Jung-Jae; Park, Goon-Cherl (E-mail: parkgc@snu.ac.kr)

    2006-01-01

    With the rising concerns regarding the time- and space-dependent hydrogen behavior in severe accidents, calculations of local hydrogen combustion in compartments have been attempted using CFD codes like GOTHIC. In particular, space-resolved hydrogen combustion analysis is essential to address certain safety issues, such as the survivability of safety components, and to determine proper positions for hydrogen control devices, e.g., recombiners or igniters. In the GOTHIC 6.1b code, there are many advanced features associated with the hydrogen burn models to enhance its calculation capability. In this study, we performed premixed hydrogen/air combustion experiments with an upright, rectangular combustion chamber of dimensions 1 m x 0.024 m x 1 m. The GOTHIC 6.1b code was used to simulate the hydrogen/air combustion experiments, and its prediction capability was assessed by comparing the experimental results with the multidimensional calculation results. In particular, the prediction capability of the GOTHIC 6.1b code for local hydrogen flame propagation phenomena was examined. For some cases, comparisons are also presented for lumped modeling of hydrogen combustion. By evaluating the effect of parametric simulations, we present some instructions for local hydrogen combustion analysis using the GOTHIC 6.1b code. From the analysis results, it is concluded that the modeling parameters of the GOTHIC 6.1b code should be modified when applying the mechanistic burn model to hydrogen propagation analysis in small geometries.

  18. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.

  19. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.

    1988-01-01

    The NUFREQ-NP code was modified and set up at Westinghouse, USA for mixed-fuel-type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, whose mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low flow stability tests were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. Moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs

  20. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month-olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  1. Magnified Neural Envelope Coding Predicts Deficits in Speech Perception in Noise.

    Science.gov (United States)

    Millman, Rebecca E; Mattys, Sven L; Gouws, André D; Prendergast, Garreth

    2017-08-09

    Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction in fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations in intensity in amplitude-modulated sounds. SNHL-induced changes in the coding of amplitude-modulated sounds may have a detrimental effect on the ability of SNHL listeners to understand speech in the presence of modulated background noise. To date, direct evidence for a link between magnified envelope coding and deficits in speech identification in modulated noise has been absent. Here, magnetoencephalography was used to quantify the effects of SNHL on phase locking to the temporal envelope of modulated noise (envelope coding) in human auditory cortex. Our results show that SNHL enhances the amplitude of envelope coding in posteromedial auditory cortex, whereas it enhances the fidelity of envelope coding in posteromedial and posterolateral auditory cortex. This dissociation was more evident in the right hemisphere, demonstrating functional lateralization in enhanced envelope coding in SNHL listeners. However, enhanced envelope coding was not perceptually beneficial. Our results also show that both hearing thresholds and, to a lesser extent, magnified cortical envelope coding in left posteromedial auditory cortex predict speech identification in modulated background noise. We propose a framework in which magnified envelope coding in posteromedial auditory cortex disrupts the segregation of speech from background noise, leading to deficits in speech perception in modulated background noise. SIGNIFICANCE STATEMENT People with hearing loss struggle to follow conversations in noisy environments. Background noise that fluctuates in intensity over time poses a particular challenge. Using magnetoencephalography, we demonstrate

  2. Assessment of predictive capability of REFLA/TRAC code for large break LOCA transient in PWR using LOFT L2-5 test data

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1994-03-01

    The REFLA/TRAC code is a best-estimate code developed at the Japan Atomic Energy Research Institute (JAERI) to provide advanced predictions of thermal-hydraulic transients in light water reactors (LWRs). The REFLA/TRAC code uses the TRAC-PF1/MOD1 code as its framework. The REFLA/TRAC code is expected to be used for the calibration of licensing codes, accident analysis, accident simulation of LWRs, and design of advanced LWRs. Several models have been implemented in the TRAC-PF1/MOD1 code at JAERI, including a reflood model, a condensation model, interfacial and wall friction models, etc. These models have been verified using data from various separate effect tests. This report describes an assessment of the REFLA/TRAC code, which was performed to assess its predictive capability for integral system behavior under a large break loss of coolant accident (LBLOCA) using data from the LOFT L2-5 test. The assessment calculation confirmed that the REFLA/TRAC code can predict the break mass flow rate, emergency core cooling water bypass and clad temperature excellently in the LOFT L2-5 test. The CPU time of the REFLA/TRAC code was about 1/3 of that of the TRAC-PF1/MOD1 code. The REFLA/TRAC code can perform stable and fast simulation of thermal-hydraulic behavior in a PWR LBLOCA with enough accuracy for practical use. (author)

  3. Modified linear predictive coding approach for moving target tracking by Doppler radar

    Science.gov (United States)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which helps improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
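
    The core LPC idea, predicting each sample as a weighted sum of previous samples and then iterating the predictor to extend the data, can be sketched as follows (an order-2 least-squares fit on a synthetic sinusoid; this illustrates the principle only, not the authors' modified algorithm):

    ```python
    from math import cos, pi

    def lpc2(x):
        """Order-2 linear predictor x[n] ~ a1*x[n-1] + a2*x[n-2], fitted by
        least squares (covariance method) via the 2x2 normal equations."""
        idx = range(2, len(x))
        g11 = sum(x[n - 1] * x[n - 1] for n in idx)
        g12 = sum(x[n - 1] * x[n - 2] for n in idx)
        g22 = sum(x[n - 2] * x[n - 2] for n in idx)
        b1 = sum(x[n] * x[n - 1] for n in idx)
        b2 = sum(x[n] * x[n - 2] for n in idx)
        det = g11 * g22 - g12 * g12
        return [(b1 * g22 - b2 * g12) / det, (b2 * g11 - b1 * g12) / det]

    def extend(x, coeffs, n_extra):
        """Extend the data by running the fitted predictor forward."""
        y = list(x)
        for _ in range(n_extra):
            y.append(sum(a * y[-1 - k] for k, a in enumerate(coeffs)))
        return y

    # A sinusoid satisfies an exact order-2 recursion, so the fitted
    # predictor extends it essentially perfectly.
    sig = [cos(2 * pi * 0.05 * n) for n in range(64)]
    ext = extend(sig, lpc2(sig), 16)
    ```

    The modified approach in the paper additionally chooses how far such an extension can be trusted, by tracking a prediction-error array instead of fixing the extension length empirically.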

  4. A study on the prediction capability of GOTHIC and HYCA3D code for local hydrogen concentrations

    International Nuclear Information System (INIS)

    Choi, Y. S.; Lee, W. J.; Lee, J. J.; Park, K. C.

    2002-01-01

    In this study the prediction capability of the GOTHIC and HYCA3D codes for local hydrogen concentrations was verified against experimental results. Among the experiments, executed by SNU and other organizations inside and outside of the country, the fast transient and the obstacle cases were selected. In the case of a large subcompartment, both codes show good agreement with the experimental data. But in cases of small and complex geometries or fast transients, the results of the GOTHIC code differ substantially from the experimental ones. This indicates that the GOTHIC code is unsuitable for these cases. On the contrary, the HYCA3D code agrees well with all the experimental data.

  5. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    Science.gov (United States)

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
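
    The anticipatory compensation described above can be reduced to a toy model (entirely illustrative numbers; this is not the authors' analysis): with a transmission delay, a purely reactive representation lags a moving object, while extrapolation along a predictable trajectory recovers the present position.

    ```python
    def represented_position(trajectory, t, delay, extrapolate):
        """Sensory input reports the position from `delay` steps ago; an
        anticipatory mechanism extrapolates along the recent velocity."""
        lagged = trajectory[t - delay]
        if not extrapolate:
            return lagged
        velocity = lagged - trajectory[t - delay - 1]
        return lagged + velocity * delay

    # Predictable linear motion: position 2*t in arbitrary units.
    track = [2 * t for t in range(20)]
    reactive = represented_position(track, 10, delay=4, extrapolate=False)
    anticipatory = represented_position(track, 10, delay=4, extrapolate=True)
    # The reactive estimate trails the true position; the anticipatory
    # one lands on target, mirroring the pre-activation reported above.
    ```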

  6. Prediction of surface cracks from thick-walled pressurized vessels with ASME code

    International Nuclear Information System (INIS)

    Thieme, W.

    1983-01-01

    The ASME Code, Section XI, Appendix A, 'Analysis of Flaw Indications', is still non-mandatory for the pressure components of nuclear power plants. It is certainly difficult to take realistic account of the many factors influencing crack propagation while making life predictions. The accuracy of the US guideline is analysed, and its possible applications are roughly outlined. (orig./IHOE) [de

  7. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1995-01-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. (orig.)

  8. BOW. A computer code to predict lateral deflections of composite beams

    Energy Technology Data Exchange (ETDEWEB)

    Tayal, M.

    1987-08-15

    Arrays of tubes are used in many engineered structures, such as in nuclear fuel bundles and in steam generators. The tubes can bend (bow) due to in-service temperatures and loads. Assessments of the bowing of nuclear fuel elements can help demonstrate the integrity of fuel and of surrounding components, as a function of operating conditions such as channel power. The BOW code calculates the bending of composite beams such as fuel elements, due to gradients of temperature and due to hydraulic forces. The deflections and rotations are calculated in both lateral directions, for given temperature conditions. Wet and dry operation of the sheath can be simulated. BOW accounts for the following physical phenomena: circumferential and axial variations in the temperatures of the sheath and of the pellet; cracking of pellets; grip and slip between the pellets and the sheath; hydraulic drag; restraints from endplates, from neighbouring elements, and from the pressure tube; gravity; concentric or eccentric welds between endcap and endplate; neutron flux gradients; and variations of material properties with temperature. The code is based on fundamental principles of mechanics. The governing equations are solved numerically using the finite element method. Several comparisons with closed-form equations show that the solutions of BOW are accurate. BOW's predictions for initial in-reactor bow are also consistent with two post-irradiation measurements.
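
    The kinematics driving such bow can be illustrated with the textbook relation between a side-to-side temperature difference and beam curvature (a hand sketch with hypothetical numbers, far simpler than BOW's finite element model):

    ```python
    def thermal_curvature(alpha, delta_t, diameter):
        """Curvature (1/m) induced by a side-to-side temperature difference
        delta_t (K) across a rod of the given diameter (m), for a linear
        thermal expansion coefficient alpha (1/K): kappa = alpha*dT/d."""
        return alpha * delta_t / diameter

    def midspan_bow(curvature, length):
        """Midspan lateral deflection of a simply supported span bent to a
        uniform curvature: delta = kappa * L^2 / 8."""
        return curvature * length ** 2 / 8.0

    # Hypothetical, illustrative numbers only: alpha ~ 6e-6 /K for the
    # sheath, 10 K across a 13 mm rod, 0.5 m span between supports.
    kappa = thermal_curvature(6e-6, 10.0, 0.013)
    deflection = midspan_bow(kappa, 0.5)  # on the order of 0.1 mm
    ```

    BOW generalizes this idea: the finite element model integrates such curvature contributions along the rod together with pellet cracking, slip, restraints, and the other phenomena listed above.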

  9. Multi codes and multi-scale analysis for void fraction prediction in hot channel for VVER-1000/V392

    International Nuclear Information System (INIS)

    Hoang Minh Giang; Hoang Tan Hung; Nguyen Huu Tiep

    2015-01-01

    Recently, an approach of multi-code and multi-scale analysis has been widely applied to study core thermal-hydraulic behavior, such as void fraction prediction. Better results are achieved by using multiple or coupled codes such as PARCS and RELAP5. The advantage of multi-scale analysis is zooming in on the part of the simulated domain of interest for detailed investigation. Therefore, in this study, the multiple codes MCNP5, RELAP5 and CTF, as well as multi-scale analysis based on RELAP5 and CTF, are applied to investigate the void fraction in the hot channel of the VVER-1000/V392 reactor. Since the VVER-1000/V392 is a typical advanced reactor that can be considered the basis for the later VVER-1200, understanding core behavior in transient conditions is necessary in order to investigate VVER technology. It is shown that the near-wall boiling term Γw in RELAP5, based on Lahey's mechanistic method, may not predict the void fraction as accurately as the smaller-scale code CTF. (author)

  10. Assessment of Prediction Capabilities of COCOSYS and CFX Code for Simplified Containment

    Directory of Open Access Journals (Sweden)

    Jia Zhu

    2016-01-01

    Full Text Available Acceptable accuracy in the simulation of severe accident scenarios in the containments of nuclear power plants is required to investigate the consequences of severe accidents and the effectiveness of potential countermeasures. For this purpose, the actual capabilities of the CFX tool and the COCOSYS code are assessed in prototypical geometries for a simplified physical process, a plume due to a heat source, under adiabatic and convection boundary conditions, respectively. Results of the comparison under the adiabatic boundary condition show that good agreement is obtained among the analytical solution, the COCOSYS prediction, and the CFX prediction for zone temperature. The general trend of the temperature distribution along the vertical direction predicted by COCOSYS agrees with the CFX prediction except in the dome; this phenomenon is predicted well by CFX but is not reproduced by COCOSYS. Both COCOSYS and CFX indicate that there is no temperature stratification inside the dome. The CFX prediction shows that a temperature stratification area occurs beneath the dome and away from the heat source, and that this area is bigger under the adiabatic boundary condition than under the convection boundary condition. The results indicate that the average temperature inside the containment predicted with the COCOSYS model is overestimated under the adiabatic boundary condition, while it is underestimated under the convection boundary condition, compared to the CFX prediction.

  11. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU Fuel

    International Nuclear Information System (INIS)

    Tait, J.C.; Gauld, I.; Kerr, A.H.

    1994-10-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated using the ORIGEN-S code coupled with the WIMS-AECL lattice code, using data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed using the most up-to-date nuclear data available. The results indicate that the code is reliably predicting actinide and the majority of fission-product inventories to within the analytical uncertainty. 38 refs., 4 figs., 5 tabs

  12. Transitioning from interpretive to predictive in thermal hydraulic codes

    International Nuclear Information System (INIS)

    Mousseau, V.A.

    2004-01-01

    The current thermal hydraulic codes in use in the US, RELAP and TRAC, were originally written in the mid to late 1970's. At that time computers were slow, expensive, and had small memories. Because of these constraints, sacrifices had to be made, both in physics and in numerical methods, which resulted in limitations on the accuracy of the solutions. Significant changes have occurred that impose very different requirements on the thermal hydraulic codes to be used for the future GEN-IV nuclear reactors. First, computer speed and memory grow at an exponential rate while costs hold constant or decrease. Second, passive safety systems in modern designs stretch the length of relevant transients to many days. Finally, the costs of experiments have grown very rapidly. Because of these new constraints, modern thermal hydraulic codes will be relied on for a significantly larger portion of bringing a nuclear reactor on line. Simulation codes will have to define in which part of state space experiments will be run. They will then have to be able to extend the small number of experiments to cover the large state space in which the reactors will operate. This data extrapolation mode will be referred to as 'predictive'. One of the keys to analyzing the accuracy of a simulation is to consider the entire domain being simulated. For example, in a reactor design where the containment is coupled to the reactor cooling system through radiative heat transfer, the accuracy of a transient includes the containment, the radiation heat transfer, the fluid flow in the cooling system, the thermal conduction in the solid, and the neutron transport in the reactor. All of this physics is coupled together in one nonlinear system through material properties, cross sections, heat transfer coefficients, and other mechanisms that exchange mass, momentum, and energy. Traditionally, these different physical domains, (containment, cooling system, nuclear fuel, etc.) have been solved in different
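
    The monolithic coupling argued for here, treating all physics domains as one nonlinear system rather than passing data between separately solved codes, can be sketched with a two-equation Newton solve (the equations are illustrative stand-ins, not actual reactor models):

    ```python
    from math import cos, sin

    def newton_coupled(f, jac, x0, tol=1e-12, max_iter=50):
        """Solve a coupled 2x2 nonlinear system f(x, y) = (0, 0) with Newton's
        method, treating both 'physics domains' as one monolithic system."""
        x, y = x0
        for _ in range(max_iter):
            f1, f2 = f(x, y)
            if abs(f1) + abs(f2) < tol:
                break
            a, b, c, d = jac(x, y)  # Jacobian entries, row-major
            det = a * d - b * c
            x -= (d * f1 - b * f2) / det
            y -= (a * f2 - c * f1) / det
        return x, y

    # Stand-in coupled pair (e.g., one 'fluid' and one 'conduction' unknown
    # exchanging energy): x = cos(y) and y = 0.5 * x, solved simultaneously.
    f = lambda x, y: (x - cos(y), y - 0.5 * x)
    jac = lambda x, y: (1.0, sin(y), -0.5, 1.0)
    x_sol, y_sol = newton_coupled(f, jac, (1.0, 0.0))
    ```

    Solving both residuals in one Newton iteration converges the exchange terms together, whereas iterating each domain separately can lose accuracy exactly at the coupling the abstract emphasizes.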

  13. Prediction of the HBS width and Xe concentration in grain matrix by the INFRA code

    International Nuclear Information System (INIS)

    Yang, Yong Sik; Lee, Chan Bok; Kim, Dae Ho; Kim, Young Min

    2004-01-01

Formation of a HBS (High Burnup Structure) is an important phenomenon for high burnup fuel performance and safety. For prediction of the HBS (the so-called 'rim microstructure'), a proposed rim microstructure formation model, which is a function of the fuel temperature, grain size and fission rate, was inserted into the high burnup fuel performance code INFRA. During the past decades, various examinations have been performed to find the HBS formation mechanism and define HBS characteristics. In the HBEP (High Burnup Effects Program), several rods were examined by EPMA analysis to measure HBS width, and these results were re-measured using improved technology including XRF and detailed microstructure examination. Recently, very high burnup (∼100 MWd/kgU) fuel examination results were reported by Manzel et al., and EPMA analysis results have been released. Using the measured EPMA data, the HBS formation prediction model of the INFRA code is verified. HBS width prediction results are compared with measured ones, and the Xe concentration profile is compared with the measured EPMA data. The calculated HBS width shows good agreement with the measured data within a reasonable error range. Although there are some differences in the transition region and the central region, due to model limitations and fission gas release prediction error respectively, the predicted Xe concentration in the fully developed HBS region shows good agreement with the measured data. (Author)

  14. IAMBUS, a computer code for the design and performance prediction of fast breeder fuel rods

    International Nuclear Information System (INIS)

    Toebbe, H.

    1990-05-01

IAMBUS is a computer code for the thermal and mechanical design, in-pile performance prediction and post-irradiation analysis of fast breeder fuel rods. The code deals with steady, non-steady and transient operating conditions and enables prediction of the in-pile behavior of fuel rods in power reactors as well as in experimental rigs. Great effort went into the development of a realistic account of non-steady fuel rod operating conditions. The main emphasis is placed on characterizing the mechanical interaction taking place between the cladding tube and the fuel as a result of contact pressure and friction forces, with due consideration of the axial and radial crack configuration within the fuel as well as the gradual transition at the elastic/plastic interface with respect to fuel behavior. IAMBUS can be readily adapted to various fuel and cladding materials. The specific models and material correlations of the reference version deal with the actual in-pile behavior and physical properties of the KNK II and SNR 300 related fuel rod designs, confirmed by comparison of the fuel performance model with post-irradiation data. The comparison comprises steady, non-steady and transient irradiation experiments within the German/Belgian fuel rod irradiation program. The code is further validated by comparison of model predictions with post-irradiation data of standard fuel and breeder rods of Phenix and PFR as well as selected LWR fuel rods in non-steady operating conditions

  15. Predictions of Critical Heat Flux Using the ASSERT-PV Subchannel Code for a CANFLEX Variant Bundle

    International Nuclear Information System (INIS)

    Onder, Ebru Nihan; Leung, Laurence; Kim, Hung; Rao, Yanfei

    2009-01-01

The ASSERT-PV subchannel code developed by AECL has been applied as a design-assist tool to the advanced CANDU 1 reactor fuel bundle. Based primarily on the CANFLEX 2 fuel bundle, several geometry changes (such as element sizes and pitch-circle diameters of various element rings) were examined to optimize the dryout power and pressure-drop performance of the new fuel bundle. An experiment was performed to obtain dryout power measurements for verification of the ASSERT-PV code predictions. It was carried out using an electrically heated, Refrigerant-134a cooled, fuel bundle string simulator. The axial power profile of the simulator was uniform, while the radial power profile of the element rings was varied to simulate profiles in bundles with various fuel compositions and burnups. Dryout power measurements are predicted closely by the ASSERT-PV code, particularly at low flows and low pressures, but are overpredicted at high flows and high pressures. The majority of the data show that dryout powers are underpredicted at low inlet-fluid temperatures but overpredicted at high inlet-fluid temperatures

  16. Rod behaviour under base load, load follow and frequency control operation: CYRANO 2 code predictions versus experimental results

    International Nuclear Information System (INIS)

    Gautier, B.; Raybaud, A.

    1984-01-01

The French PWR reactors now operate routinely under load follow and frequency control. In order to demonstrate that these operating conditions do not increase the fuel failure rate, fuel rod behaviour calculations have been performed by E.D.F. with the CYRANO 2 code. In parallel with these theoretical calculations, code predictions have been compared to experimental results. The paper presents some of the comparisons performed on 17x17 fuel irradiated in FESSENHEIM 2 up to 30 GWd/tU under base load operation and in the CAP reactor under load follow and frequency control conditions. It is shown that the experimental results can be predicted with reasonable accuracy by the CYRANO 2 code. The experimental work was carried out under joint R and D programs by EDF, FRAGEMA, CEA, and WESTINGHOUSE (CAP program by the French partners only). (author)

  17. Use of a commercial heat transfer code to predict horizontally oriented spent fuel rod temperatures

    International Nuclear Information System (INIS)

    Wix, S.D.; Koski, J.A.

    1992-01-01

Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. The maximum rod temperature is a limiting factor in the amount of spent fuel that can be loaded in a transportation cask. The scope of this work is to demonstrate that reasonable and conservative spent fuel rod temperature predictions can be made using commercially available thermal analysis codes. This is demonstrated by a comparison between numerical temperature predictions, made with a commercially available thermal analysis code, and experimental temperature data for electrical rod heaters simulating a horizontally oriented spent fuel rod bundle

  18. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

Small insertions and deletions (indels) have a significant influence on human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state-of-the-art models for assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact of, and prioritise, small indels throughout the whole non-coding genome.

  19. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...
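The error signal described in this record can be sketched as a tiny Rescorla-Wagner-style value update, the standard textbook formulation of a reward prediction error; this is an illustrative toy, not code from the paper, and the learning rate and reward size are arbitrary:

```python
# Toy reward prediction error (RPE): delta = received - predicted,
# driving a Rescorla-Wagner value update. Learning rate and reward
# magnitude are hypothetical.

def rpe_update(value, reward, alpha=0.1):
    """Return (prediction_error, updated_value)."""
    delta = reward - value          # positive: more reward than predicted
    return delta, value + alpha * delta

value = 0.0
for _ in range(100):                # repeated, fully signalled reward of 1.0
    delta, value = rpe_update(value, reward=1.0)

# Once the reward is fully predicted, the prediction error approaches zero,
# mirroring the return-to-baseline firing described in the abstract.
print(value, delta)
```

The three regimes in the abstract map directly onto the sign of `delta`: positive for surprising reward, near zero for fully predicted reward, negative when reward is omitted.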

  20. Positive Predictive Values of International Classification of Diseases, 10th Revision Coding Algorithms to Identify Patients With Autosomal Dominant Polycystic Kidney Disease

    Directory of Open Access Journals (Sweden)

    Vinusha Kalatharan

    2016-12-01

Full Text Available Background: International Classification of Diseases, 10th Revision (ICD-10) codes for autosomal dominant polycystic kidney disease (ADPKD) are used within several administrative health care databases. It is unknown whether these codes identify patients who meet strict clinical criteria for ADPKD. Objective: The objectives of this study are (1) to determine whether different ICD-10 coding algorithms identify adult patients who meet strict clinical criteria for ADPKD as assessed through medical chart review and (2) to assess the number of patients identified with different ADPKD coding algorithms in Ontario. Design: Validation study of health care database codes, and prevalence. Setting: Ontario, Canada. Patients: For the chart review, 201 adult patients with hospital encounters between April 1, 2002, and March 31, 2014, assigned either ICD-10 code Q61.2 or Q61.3. Measurements: This study measured the positive predictive value of the ICD-10 coding algorithms and the number of Ontarians identified with different coding algorithms. Methods: We manually reviewed a random sample of medical charts in London, Ontario, Canada, and determined whether or not ADPKD was present according to strict clinical criteria. Results: The presence of either ICD-10 code Q61.2 or Q61.3 in a hospital encounter had a positive predictive value of 85% (95% confidence interval [CI], 79%-89%) and identified 2981 Ontarians (0.02% of the Ontario adult population). The presence of ICD-10 code Q61.2 in a hospital encounter had a positive predictive value of 97% (95% CI, 86%-100%) and identified 394 adults in Ontario (0.003% of the Ontario adult population). Limitations: (1) We could not calculate other measures of validity; (2) the coding algorithms do not identify patients without hospital encounters; and (3) coding practices may differ between hospitals. Conclusions: Most patients with ICD-10 code Q61.2 or Q61.3 assigned during their hospital encounters have ADPKD according to the clinical
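The positive predictive value reported in this record is a simple proportion over the chart-reviewed sample. A sketch of the calculation, with hypothetical true-positive counts chosen to land near the 85% figure and a normal-approximation confidence interval rather than whatever exact method the study actually used:

```python
import math

# Sketch of a code-validation PPV: of n charts flagged by the ICD-10
# algorithm, tp truly met clinical criteria. Counts are hypothetical;
# the CI is a simple normal approximation.

def ppv_with_ci(tp, n, z=1.96):
    p = tp / n
    half = z * math.sqrt(p * (1 - p) / n)   # normal-approximation half-width
    return p, (p - half, p + half)

ppv, (lo, hi) = ppv_with_ci(tp=171, n=201)
print(f"PPV = {ppv:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

With these invented counts the interval comes out close to the 79%-89% range quoted in the abstract; an exact binomial interval would differ slightly at the edges.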

  1. From structure prediction to genomic screens for novel non-coding RNAs

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.

    2011-01-01

    Abstract: Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction....... This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early...... upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other....

  2. Predicting holland occupational codes by means of paq job dimension scores

    Directory of Open Access Journals (Sweden)

    R. P. Van Der Merwe

    1990-06-01

Full Text Available A study was conducted on how to obtain Holland's codes for South African occupations practically and economically by deducing them from information on the nature of the occupation (as derived by means of the Position Analysis Questionnaire). A discriminant analysis revealed that on the basis of the PAQ information the occupations could be distinguished clearly according to the main orientations of their American codes. Regression equations were also developed to predict the mean Self-Directed Search scores of the occupations on the basis of their PAQ information. Summary: An investigation was undertaken to obtain Holland's codes for South African occupations in a practical and economical way by deriving them from information about the nature of the occupation (as obtained by means of the Position Analysis Questionnaire). A discriminant analysis showed that, on the basis of the PAQ information, the occupations could be clearly distinguished according to the main occupational groups of their American codes. Regression equations were further developed to predict the occupations' mean Self-Directed Search scores on the basis of their PAQ information.

  3. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred at a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  4. Evaluation of CRUDTRAN code to predict transport of corrosion products and radioactivity in the PWR primary coolant system

    International Nuclear Information System (INIS)

    Lee, C.B.

    2002-01-01

The CRUDTRAN code predicts the transport of corrosion products and their radio-activated nuclides, such as cobalt-58 and cobalt-60, in the PWR primary coolant system. In the CRUDTRAN code the PWR primary circuit is divided into three principal sections: the core, the coolant and the steam generator. The main driving force for corrosion product transport in the PWR primary coolant comes from the coolant temperature change throughout the system and the subsequent change in corrosion product solubility. As the coolant temperature changes around the PWR primary circuit, the saturation status of the corrosion products in the coolant also changes, giving under-saturation in the steam generator and super-saturation in the core. The CRUDTRAN code was evaluated by comparison with the results of in-reactor loop tests simulating the PWR primary coolant system and with PWR plant data. It showed that CRUDTRAN could predict variations of cobalt-58 and cobalt-60 radioactivity with time, plant cycle and coolant chemistry in the PWR plant. (author)

  5. Speech coding: code-excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource/Information: What information is available and how can it be useful? Resource/Platform: What kind of platforms are we working with and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed and how can they be used to reach the stated goals and ...
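The "linear prediction" at the heart of code-excited linear prediction (CELP) can be illustrated with a first-order toy predictor. Real CELP codecs use roughly a 10th-order LPC filter plus a coded excitation signal; this sketch only shows the core idea that the prediction residual carries far less energy than the signal, which is what makes the scheme worth coding:

```python
import math

# First-order linear prediction: model x[n] ~ a * x[n-1] and keep the
# residual. The optimal a minimizes the squared prediction error.

def first_order_lpc(x):
    """Return the optimal coefficient a and the residual signal."""
    num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
    a = num / den
    residual = [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]
    return a, residual

signal = [math.sin(0.2 * n) for n in range(200)]   # synthetic tone-like input
a, res = first_order_lpc(signal)

energy = lambda s: sum(v * v for v in s)
# The residual energy is a small fraction of the signal energy, so coding
# the residual (plus one coefficient) is much cheaper than coding x itself.
print(a, energy(res) / energy(signal))
```

For a slowly varying signal like this one, the optimal coefficient is close to 1 and the residual retains only a few percent of the original energy.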

  6. Comparison of the Aerospace Systems Test Reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1983-01-01

This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10^4 s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code

  7. Comparison of the Aerospace Systems Test Reactor loss-of-coolant test data with predictions of the 3D-AIRLOCA code

    International Nuclear Information System (INIS)

    Warinner, D.K.

    1984-01-01

This paper compares the predictions of the revised 3D-AIRLOCA computer code to those data available from the Aerospace Systems Test Reactor's (ASTR's) loss-of-coolant-accident (LOCA) tests run in 1964. The theoretical and experimental hot-spot temperature responses compare remarkably well. In the thirteen cases studied, the irradiation powers varied from 0.4 to 8.87 MW; the irradiation times were 300, 1540, 1800, and 10^4 s. The degrees of agreement between the data and predictions provide an experimental validation of the 3D-AIRLOCA code. (author)

  8. Validation of the ASSERT subchannel code for prediction of CHF in standard and non-standard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Kiteley, J.C.; Carver, M.B.; Zhou, Q.N.

    1993-01-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting critical heat flux (CHF) at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is the only tool available to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries. 28 refs., 12 figs

  9. Validation of the ASSERT subchannel code: Prediction of critical heat flux in standard and nonstandard CANDU bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1995-01-01

The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of Canada deuterium uranium (CANDU) pressurized heavy water reactor fuel channels and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental database. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. The numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology are discussed. The evolutionary validation plan is also discussed and early validation exercises are summarized. More recent validation exercises in standard and nonstandard geometries are emphasized

  10. Plasma burn-through simulations using the DYON code and predictions for ITER

    International Nuclear Information System (INIS)

    Kim, Hyun-Tae; Sips, A C C; De Vries, P C

    2013-01-01

This paper will discuss simulations of the full ionization process (i.e. plasma burn-through), fundamental to creating high temperature plasma. By means of an applied electric field, the gas is partially ionized by the electron avalanche process. In order for the electron temperature to increase, the remaining neutrals need to be fully ionized in the plasma burn-through phase, as radiation is the main contribution to the electron power loss. The radiated power loss can be significantly affected by impurities resulting from interaction with the plasma facing components. The DYON code is a plasma burn-through simulator developed at the Joint European Torus (JET) (Kim et al and EFDA-JET Contributors 2012 Nucl. Fusion 52 103016; Kim, Sips and EFDA-JET Contributors 2013 Nucl. Fusion 53 083024). The dynamic evolution of the plasma temperature and plasma densities, including the impurity content, is calculated in a self-consistent way using plasma wall interaction models. The recent installation of a beryllium wall at JET enabled validation of the plasma burn-through model in the presence of new, metallic plasma facing components. The simulation results of the plasma burn-through phase show consistently good agreement with experiments at JET, and explain differences observed during plasma initiation with the old carbon plasma facing components. In the International Thermonuclear Experimental Reactor (ITER), the allowable toroidal electric field is restricted to 0.35 V m⁻¹, which is significantly lower than the typical value (∼1 V m⁻¹) used in present devices. The limitation on the toroidal electric field also reduces the range of other operation parameters during plasma formation in ITER. Thus, predictive simulations of plasma burn-through in ITER using a validated model are of crucial importance. This paper provides an overview of the DYON code and its validation, together with new predictive simulations for ITER using the DYON code. (paper)

  11. Comparison of Heavy Water Reactor Thermalhydraulic Code Predictions with Small Break LOCA Experimental Data

    International Nuclear Information System (INIS)

    2012-08-01

    Activities within the frame of the IAEA's Technical Working Group on Advanced Technologies for HWRs (TWG-HWR) are conducted in a project within the IAEA's subprogramme on nuclear power reactor technology development. The objective of the activities on HWRs is to foster, within the frame of the TWG-HWR, information exchange and cooperative research on technology development for current and future HWRs, with an emphasis on safety, economics and fuel resource sustainability. One of the activities recommended by the TWG-HWR was an international standard problem exercise entitled Intercomparison and Validation of Computer Codes for Thermalhydraulics Safety Analyses. Intercomparison and validation of computer codes used in different countries for thermalhydraulics safety analyses will enhance the confidence in the predictions made by these codes. However, the intercomparison and validation exercise needs a set of reliable experimental data. Two RD-14M small break loss of coolant accident (SBLOCA) tests, simulating HWR LOCA behaviour, conducted by Atomic Energy of Canada Ltd (AECL), were selected for this validation project. This report provides a comparison of the results obtained from eight participating organizations from six countries (Argentina, Canada, China, India, Republic of Korea, and Romania), utilizing four different computer codes (ATMIKA, CATHENA, MARS-KS, and RELAP5). General conclusions are reached and recommendations made.

  12. Validation of the assert subchannel code: Prediction of CHF in standard and non-standard Candu bundle geometries

    International Nuclear Information System (INIS)

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1993-01-01

The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of CANDU PHWR fuel channels, and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental data base. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. This paper discusses the numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology. The evolutionary validation plan is discussed, and early validation exercises are summarized. The paper concentrates, however, on more recent validation exercises in standard and non-standard geometries

  13. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    Science.gov (United States)

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological study; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been preserved by previous studies, it is appealing to develop novel methods to identify lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under the curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting lncRNAs, and it is quite useful for current lncRNA study.

  14. Lifting scheme-based method for joint coding of 3D stereo digital cinema with luminance correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
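The classic lifting scheme this record builds on can be sketched with plain Haar lifting: split the signal into even/odd samples, predict the odd samples from the even ones, then update the even samples with the residual. This is the textbook predict/update pair, not the paper's disparity-compensated variant, but it shows why the transform is perfectly invertible for lossless coding:

```python
# Haar lifting: one predict step and one update step. The paper replaces
# the predict step with disparity compensation plus luminance correction.

def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]   # undo update
    odd = [d + e for d, e in zip(detail, even)]          # undo predict
    return [v for pair in zip(even, odd) for v in pair]  # re-interleave

x = [4, 6, 10, 12, 8, 6, 5, 9]
approx, detail = lift_forward(x)
assert lift_inverse(approx, detail) == x   # lifting is exactly invertible
```

Because each step is undone by simply reversing its sign and order, any predictor, including a disparity-compensated one, can be dropped into the predict step without losing invertibility.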

  15. Using clinical data to predict high-cost performance coding issues associated with pressure ulcers: a multilevel cohort model.

    Science.gov (United States)

    Padula, William V; Gibbons, Robert D; Pronovost, Peter J; Hedeker, Donald; Mishra, Manish K; Makic, Mary Beth F; Bridges, John Fp; Wald, Heidi L; Valuck, Robert J; Ginensky, Adam J; Ursitti, Anthony; Venable, Laura Ruth; Epstein, Ziv; Meltzer, David O

    2017-04-01

    Hospital-acquired pressure ulcers (HAPUs) have a mortality rate of 11.6%, are costly to treat, and result in Medicare reimbursement penalties. Medicare codes HAPUs according to Agency for Healthcare Research and Quality Patient-Safety Indicator 3 (PSI-03), but they are sometimes inappropriately coded. The objective is to use electronic health records to predict pressure ulcers and to identify coding issues leading to penalties. We evaluated all hospitalized patient electronic medical records at an academic medical center data repository between 2011 and 2014. These data contained patient encounter level demographic variables, diagnoses, prescription drugs, and provider orders. HAPUs were defined by PSI-03: stages III, IV, or unstageable pressure ulcers not present on admission as a secondary diagnosis, excluding cases of paralysis. Random forests reduced data dimensionality. Multilevel logistic regression of patient encounters evaluated associations between covariates and HAPU incidence. The approach produced a sample population of 21 153 patients with 1549 PSI-03 cases. The greatest odds ratio (OR) of HAPU incidence was among patients diagnosed with spinal cord injury (ICD-9 907.2: OR = 14.3; P  coded for paralysis, leading to a PSI-03 flag. Other high ORs included bed confinement (ICD-9 V49.84: OR = 3.1, P  coded without paralysis, leading to PSI-03 flags. The resulting statistical model can be tested to predict HAPUs during hospitalization. Inappropriate coding of conditions leads to poor hospital performance measures and Medicare reimbursement penalties. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Analytical prediction of CHF by FIDAS code based on three-fluid and film-dryout model

    International Nuclear Information System (INIS)

    Sugawara, Satoru

    1990-01-01

An analytical prediction model for critical heat flux (CHF) has been developed on the basis of a film dryout criterion due to droplet deposition and entrainment in annular mist flow. Critical heat flux in round tubes was analyzed by the Film Dryout Analysis Code in Subchannels (FIDAS), which is based on a three-fluid, three-field and newly developed film dryout model. Predictions by FIDAS were compared with worldwide experimental data on CHF obtained in water and Freon for uniformly and non-uniformly heated tubes under vertical upward flow conditions. Furthermore, the CHF prediction capability of FIDAS was compared with those of other film dryout models for annular flow and with Katto's CHF correlation. The predictions of FIDAS are in sufficient agreement with the experimental CHF data, and show better agreement than the other film dryout models and the empirical correlation of Katto. (author)
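The film-dryout criterion described here can be caricatured as a one-dimensional film mass balance marched along the tube: the liquid film gains mass by droplet deposition and loses it to entrainment and evaporation, and dryout (CHF) is declared where the film flow reaches zero. All rates and the inlet film flow below are invented illustrative numbers, not FIDAS models:

```python
# Toy film-dryout march: integrate dW_f/dz = deposition - entrainment -
# evaporation and flag dryout where the film flow W_f is exhausted.
# Constant rates are a gross simplification; real codes evaluate them
# locally from the three-field flow solution.

def dryout_location(w_inlet, deposition, entrainment, q_evap,
                    length=2.0, dz=0.001):
    w, z = w_inlet, 0.0
    while z < length:
        dwdz = deposition - entrainment - q_evap   # net film mass balance
        w += dwdz * dz
        z += dz
        if w <= 0.0:
            return z        # film exhausted: dryout (CHF) occurs here
    return None             # no dryout within the heated length

z_dry = dryout_location(w_inlet=0.05, deposition=0.02,
                        entrainment=0.01, q_evap=0.06)
print(z_dry)
```

With these numbers the net film depletion rate is constant, so dryout lands where the inlet film inventory divided by the depletion rate predicts it; raising deposition or lowering the evaporation term pushes the dryout point downstream or eliminates it.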

  17. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  18. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  19. Predicting multiprocessing efficiency on the Cray multiprocessors in a (CTSS) time-sharing environment/application to a 3-D magnetohydrodynamics code

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    A formula is derived for predicting multiprocessing efficiency on Cray supercomputers equipped with the Cray Time-Sharing System (CTSS). The model is applicable to an intensive time-sharing environment. The actual efficiency estimate depends on three factors: the code size, task length, and job mix. The implementation of multitasking in a three-dimensional plasma magnetohydrodynamics (MHD) code, TEMCO, is discussed. TEMCO solves the primitive one-fluid compressible MHD equations and includes resistive and Hall effects in Ohm's law. Virtually all segments of the main time-integration loop are multitasked. The multiprocessing efficiency model is applied to TEMCO. Excellent agreement is obtained between the actual multiprocessing efficiency and the theoretical prediction
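
Mirin's actual formula depends on the code size, task length, and job mix under CTSS; as a simpler, hedged stand-in, an Amdahl-style estimate already shows how a small serial fraction caps multitasking efficiency on a few processors (the numbers below are illustrative):

```python
def multitasking_efficiency(serial_frac, nproc):
    """Amdahl-style estimate: parallel speedup divided by processor count."""
    speedup = 1.0 / (serial_frac + (1.0 - serial_frac) / nproc)
    return speedup / nproc

# A loop that is 95% multitasked on 4 CPUs runs at ~87% efficiency at best;
# time-sharing contention (the CTSS job mix) can only lower this further.
eff = multitasking_efficiency(0.05, 4)
```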

  20. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    The computational approach to the thermochemical changes involved in the process of explosion of high energy materials (HEMs) vis-a-vis their molecular structure aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion of the HEMs. Such computer-aided design will be useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules that have significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products including balanced explosion reactions, density of HEMs, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔH_e) without any experimental data for different HEMs, and the results are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. The linear regression analysis of all data points yields the correlation coefficient R^2 = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient value of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials
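
The reported fit can be applied directly: the abstract gives the regression line y = 0.9262x + 101.45 (R^2 = 0.9721) relating LOTUSES-computed (x) to experimental (y) heats of explosion. The sketch below simply evaluates that published line; the input value and the units are placeholders, not data from the paper.

```python
def calibrated_heat_of_explosion(computed_value):
    """Map a computed heat of explosion onto the experimental scale using the
    regression line reported in the abstract (y = 0.9262*x + 101.45)."""
    return 0.9262 * computed_value + 101.45

# Placeholder input value (units as in the paper; not a real LOTUSES output):
y = calibrated_heat_of_explosion(1000.0)
```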

  1. The effect of turbulent mixing models on the predictions of subchannel codes

    International Nuclear Information System (INIS)

    Tapucu, A.; Teyssedou, A.; Tye, P.; Troche, N.

    1994-01-01

    In this paper, the predictions of the COBRA-IV and ASSERT-4 subchannel codes have been compared with experimental data on void fraction, mass flow rate, and pressure drop obtained for two interconnected subchannels. COBRA-IV is based on a one-dimensional separated flow model with the turbulent intersubchannel mixing formulated as an extension of the single-phase mixing model, i.e. fluctuating equal mass exchange. ASSERT-4 is based on a drift flux model with the turbulent mixing modelled by assuming an exchange of equal volumes with different densities thus allowing a net fluctuating transverse mass flux from one subchannel to the other. This feature is implemented in the constitutive relationship for the relative velocity required by the conservation equations. It is observed that the predictions of ASSERT-4 follow the experimental trends better than COBRA-IV; therefore the approach of equal volume exchange constitutes an improvement over that of the equal mass exchange. ((orig.))
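
The distinction between the two mixing closures can be made concrete with a schematic sketch (illustrative only, not the codes' actual constitutive relations): exchanging equal masses produces no net transverse mass flux, while exchanging equal volumes of phases with different densities does.

```python
# Schematic contrast of the two closures. Equal-MASS exchange (COBRA-IV-like)
# moves the same mass each way, so the net transverse mass flux is zero;
# equal-VOLUME exchange (ASSERT-4-like) swaps equal volumes of different
# densities, leaving a net fluctuating mass flux between subchannels.
def net_mass_flux_equal_mass(w_prime):
    return w_prime - w_prime           # fluctuating equal-mass exchange: net = 0

def net_mass_flux_equal_volume(q_prime, rho_1, rho_2):
    return q_prime * (rho_1 - rho_2)   # volume exchange rate times density difference

f_mass = net_mass_flux_equal_mass(0.5)
f_vol = net_mass_flux_equal_volume(0.001, 800.0, 20.0)  # liquid-rich vs vapor-rich
```

The nonzero `f_vol` is the net transverse mass transfer that the equal-mass model cannot represent, which is why ASSERT-4 tracks the experimental trends more closely.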

  2. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
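
A minimal sketch of sparse-coding inference conveys the principle the paper builds on: find a code a that reconstructs a signal from a dictionary D while penalizing ||a||_1. The two-atom dictionary and the ISTA-style iteration below are illustrative, not the three models used in the study.

```python
# Minimal sparse-coding inference sketch (ISTA soft-thresholding) on a toy
# 2-atom dictionary; illustrative of the sparsity principle only.
def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return (abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0

def ista(signal, dictionary, lam=0.1, lr=0.1, steps=200):
    """Minimize 0.5*||s - D a||^2 + lam*||a||_1 over codes a."""
    n_atoms = len(dictionary)
    a = [0.0] * n_atoms
    for _ in range(steps):
        recon = [sum(dictionary[k][i] * a[k] for k in range(n_atoms))
                 for i in range(len(signal))]
        for k in range(n_atoms):
            grad = sum((recon[i] - signal[i]) * dictionary[k][i]
                       for i in range(len(signal)))
            a[k] = soft(a[k] - lr * grad, lr * lam)
    return a

D = [[1.0, 0.0], [0.707, 0.707]]    # two unit-norm "receptive fields"
codes = ista([0.707, 0.707], D)     # signal matches atom 2 -> sparse code
```

The input matches the second atom, so inference drives the first coefficient to zero: the code is sparse, the property the paper uses to explain receptive-field development.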

  3. Evaluation of Design & Analysis Code, CACTUS, for Predicting Crossflow Hydrokinetic Turbine Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wosnik, Martin [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Bachant, Pete [Univ. of New Hampshire, Durham, NH (United States). Center for Ocean Renewable Energy; Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murphy, Andrew W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-09-01

    CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines, and it has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy’s 1:6 scale reference model crossflow turbine to those derived from experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil are the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
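
The advice about tip speed ratio has a simple kinematic basis that can be sketched (this is textbook crossflow-turbine geometry, not CACTUS itself): neglecting induced flow, the blade's geometric angle of attack obeys tan(α) = sinθ/(λ + cosθ) over azimuth θ, so its maximum over a revolution is arcsin(1/λ), and raising the tip speed ratio λ shrinks the AoA range the foil data must cover.

```python
# Kinematic sketch (textbook crossflow-turbine geometry, not CACTUS itself) of
# why a higher tip speed ratio (TSR) lowers the blade angle of attack:
# neglecting induced flow, tan(alpha) = sin(theta) / (TSR + cos(theta)), whose
# maximum over a revolution is arcsin(1/TSR).
import math

def max_geometric_aoa_deg(tsr, n=3600):
    return max(
        math.degrees(math.atan2(math.sin(t), tsr + math.cos(t)))
        for t in (2.0 * math.pi * i / n for i in range(n))
    )

low_tsr_aoa = max_geometric_aoa_deg(1.5)   # large AoA: foil data least reliable
high_tsr_aoa = max_geometric_aoa_deg(4.0)  # smaller AoA: safer operating range
```

At λ = 1.5 the blade sees AoA up to about 42°, deep in the post-stall region where NACA 0021 data sources disagree; at λ = 4 the excursion stays below about 15°.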

  4. Assessment of the prediction capability of the TRANSURANUS fuel performance code on the basis of power ramp tested LWR fuel rods

    International Nuclear Information System (INIS)

    Pastore, G.; Botazzoli, P.; Di Marcello, V.; Luzzi, L.

    2009-01-01

    The present work is aimed at assessing the prediction capability of the TRANSURANUS code for the performance analysis of LWR fuel rods under power ramp conditions. The analysis covers all the power-ramp-tested fuel rods belonging to the Studsvik PWR Super-Ramp and BWR Inter-Ramp Irradiation Projects, and is focused on several integral quantities (i.e., burn-up, fission gas release, cladding creep-down and failure due to pellet cladding interaction) through a systematic comparison between the code predictions and the experimental data. To this end, a suitable setup of the code is established on the basis of previous works. In addition, following indications from the literature, a sensitivity study is carried out, which considers the 'ITU model' for fission gas burst release and modifications in the treatment of the fuel solid swelling and the cladding stress corrosion cracking. The analyses performed highlight some issues that could be useful for the future development of the code. Keywords: Light Water Reactors, Fuel Rod Performance, Power Ramps, Fission Gas Burst Release, Fuel Swelling, Pellet Cladding Interaction, Stress Corrosion Cracking

  5. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks, R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem

  6. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    International Nuclear Information System (INIS)

    Yu Jia-Feng; Sui Tian-Xiang; Wang Ji-Hua; Wang Hong-Mei; Wang Chun-Ling; Jing Li

    2015-01-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of genes with known function, a mean accuracy of 99.99% and a mean Matthews correlation coefficient of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. (special topic)
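
The Z-curve component of the method is a standard, easily sketched transform: a DNA sequence maps to a cumulative 3-D walk whose components track the purine/pyrimidine, amino/keto, and weak/strong hydrogen-bond disparities. A minimal sketch (the TN-curve component and the actual coding/non-coding classifier are not reproduced here):

```python
# Sketch of the Z-curve mapping used for coding/non-coding discrimination.
def z_curve(seq):
    """Cumulative 3-D Z-curve coordinates of a DNA sequence."""
    x = y = z = 0
    coords = []
    for base in seq.upper():
        a, c, g, t = (base == "A"), (base == "C"), (base == "G"), (base == "T")
        x += (a + g) - (c + t)   # purine vs pyrimidine disparity
        y += (a + c) - (g + t)   # amino vs keto disparity
        z += (a + t) - (c + g)   # weak vs strong hydrogen bonds
        coords.append((x, y, z))
    return coords

pts = z_curve("ATGC")   # a balanced 4-mer returns the walk to the origin
```

Coding and non-coding sequences trace characteristically different walks, which is what features derived from these coordinates exploit.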

  7. Impact testing and analysis for structural code benchmarking

    International Nuclear Information System (INIS)

    Glass, R.E.

    1989-01-01

    Sandia National Laboratories, in cooperation with industry and other national laboratories, has been benchmarking computer codes (''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Cask,'' R.E. Glass, Sandia National Laboratories, 1985; ''Sample Problem Manual for Benchmarking of Cask Analysis Codes,'' R.E. Glass, Sandia National Laboratories, 1988; ''Standard Thermal Problem Set for the Evaluation of Heat Transfer Codes Used in the Assessment of Transportation Packages,'' R.E. Glass, et al., Sandia National Laboratories, 1988) used to predict the structural, thermal, criticality, and shielding behavior of radioactive materials packages. The first step in the benchmarking of the codes was to develop standard problem sets and to compare the results from several codes and users. This step for structural analysis codes has been completed as described in ''Structural Code Benchmarking for the Analysis of Impact Response of Nuclear Material Shipping Casks,'' R.E. Glass, Sandia National Laboratories, 1985. The problem set is shown in Fig. 1. This problem set exercised the ability of the codes to predict the response to end (axisymmetric) and side (plane strain) impacts with both elastic and elastic/plastic materials. The results from these problems showed that there is good agreement in predicting elastic response. Significant differences occurred in predicting strains for the elastic/plastic models. An example of the variation in predicting plastic behavior is given, which shows the hoop strain as a function of time at the impacting end of Model B. These differences in predicting plastic strains demonstrated a need for benchmark data for a cask-like problem. 6 refs., 5 figs

  8. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Science.gov (United States)

    Stewart, Kyle R.; Maller, Ariyeh H.; Oñorbe, Jose; Bullock, James S.; Joung, M. Ryan; Devriendt, Julien; Ceverino, Daniel; Kereš, Dušan; Hopkins, Philip F.; Faucher-Giguère, Claude-André

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ~4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.
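
The λ comparison in the abstract uses a spin parameter; the sketch below evaluates the Bullock et al. (2001) definition λ = j/(√2 V_vir R_vir), which is a common convention (an assumption here, as the abstract does not state which definition it uses). All numerical values are illustrative placeholders, not simulation data.

```python
# Sketch of the spin-parameter comparison using the Bullock et al. (2001)
# definition lambda = j / (sqrt(2) * V_vir * R_vir). All values below are
# illustrative placeholders, not taken from the simulations.
import math

def spin_parameter(j_specific, v_vir, r_vir):
    return j_specific / (math.sqrt(2.0) * v_vir * r_vir)

v_vir, r_vir = 150.0, 250.0   # km/s and kpc: a Milky-Way-scale halo (assumed)
j_dm = 1856.0                 # km/s * kpc, chosen to give lambda_dm ~ 0.035
lam_dm = spin_parameter(j_dm, v_vir, r_vir)
lam_cold = spin_parameter(4.0 * j_dm, v_vir, r_vir)  # ~4x the specific ang. mom.
# With 4x the specific angular momentum, lambda_cold ~ 0.14 > 0.1, as reported.
```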

  9. High Angular Momentum Halo Gas: A Feedback and Code-independent Prediction of LCDM

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Kyle R. [Department of Mathematical Sciences, California Baptist University, 8432 Magnolia Ave., Riverside, CA 92504 (United States); Maller, Ariyeh H. [Department of Physics, New York City College of Technology, 300 Jay St., Brooklyn, NY 11201 (United States); Oñorbe, Jose [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Bullock, James S. [Center for Cosmology, Department of Physics and Astronomy, The University of California at Irvine, Irvine, CA 92697 (United States); Joung, M. Ryan [Department of Astronomy, Columbia University, New York, NY 10027 (United States); Devriendt, Julien [Department of Physics, University of Oxford, The Denys Wilkinson Building, Keble Rd., Oxford OX1 3RH (United Kingdom); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Albert-Ueberle-Str. 2, D-69120 Heidelberg (Germany); Kereš, Dušan [Department of Physics, Center for Astrophysics and Space Sciences, University of California at San Diego, 9500 Gilman Dr., La Jolla, CA 92093 (United States); Hopkins, Philip F. [California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125 (United States); Faucher-Giguère, Claude-André [Department of Physics and Astronomy and CIERA, Northwestern University, 2145 Sheridan Rd., Evanston, IL 60208 (United States)

    2017-07-01

    We investigate angular momentum acquisition in Milky Way-sized galaxies by comparing five high resolution zoom-in simulations, each implementing identical cosmological initial conditions but utilizing different hydrodynamic codes: Enzo, Art, Ramses, Arepo, and Gizmo-PSPH. Each code implements a distinct set of feedback and star formation prescriptions. We find that while many galaxy and halo properties vary between the different codes (and feedback prescriptions), there is qualitative agreement on the process of angular momentum acquisition in the galaxy’s halo. In all simulations, cold filamentary gas accretion to the halo results in ~4 times more specific angular momentum in cold halo gas (λ_cold ≳ 0.1) than in the dark matter halo. At z > 1, this inflow takes the form of inspiraling cold streams that are co-directional in the halo of the galaxy and are fueled, aligned, and kinematically connected to filamentary gas infall along the cosmic web. Due to the qualitative agreement among disparate simulations, we conclude that the buildup of high angular momentum halo gas and the presence of these inspiraling cold streams are robust predictions of Lambda Cold Dark Matter galaxy formation, though the detailed morphology of these streams is significantly less certain. A growing body of observational evidence suggests that this process is borne out in the real universe.

  10. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD-quality broadcasting systems, etc. These limitations have motivated research into bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful, though complex, techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it is possible to implement the complex LPC algorithm in real time. In this paper we present a real time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll-quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
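
The core of an LPC coder is short-term linear prediction: model each sample as a weighted sum of its predecessors and transmit only the model plus a residual. A hedged, stdlib-only sketch of the autocorrelation method with the Levinson-Durbin recursion (toy sinusoid input; the DSP32C fixed-point details and the full vocoder pipeline are not modeled):

```python
# Hedged sketch of LPC analysis: autocorrelation method + Levinson-Durbin.
import math

def autocorr(x, order):
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]

def levinson_durbin(r, order):
    """LPC coefficients a[1..order] for predictor x[n] ~ sum_j a[j] * x[n-j]."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / e
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)   # remaining prediction-error energy
    return a[1:], e

# A pure sinusoid is (nearly) perfectly predictable by an order-2 model:
# x[n] = 2*cos(w)*x[n-1] - x[n-2].
signal = [math.sin(2 * math.pi * 0.05 * n) for n in range(256)]
coeffs, err = levinson_durbin(autocorr(signal, 2), 2)
```

Because the residual energy `err` is far smaller than the signal energy, the residual can be coarsely re-encoded (or replaced by an excitation model), which is where the bit rate reduction comes from.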

  11. ℓ2 Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but it also achieves higher PSNR for relatively high bit rates.
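
The baseline the paper modifies, a DPCM loop with a uniform scalar quantizer of residuals, can be sketched in a few lines: with quantizer step 2δ+1 and reconstruction inside the loop, every decoded sample lies within δ of the original, which is exactly the ℓ∞ guarantee of near-lossless coding. The 1-D "image" and previous-sample predictor below are illustrative simplifications.

```python
# Sketch of the near-lossless baseline: a DPCM loop with a uniform scalar
# quantizer of prediction residuals, guaranteeing per-sample error <= delta.
def dpcm_near_lossless(samples, delta):
    step = 2 * delta + 1
    prev = 0                      # predictor: previous *reconstructed* sample
    recon = []
    for s in samples:
        residual = s - prev
        # Round residual to the nearest multiple of step (integer arithmetic):
        q = (residual + delta) // step if residual >= 0 else -((-residual + delta) // step)
        r = prev + q * step       # reconstructed value, |s - r| <= delta
        recon.append(r)
        prev = r
    return recon

data = [10, 12, 15, 100, 101, 99]
out = dpcm_near_lossless(data, delta=2)
max_err = max(abs(a - b) for a, b in zip(data, out))   # guaranteed <= 2
```

The paper's contribution replaces this single uniform quantizer with context-based ℓ2-optimized quantizers while keeping the same hard bound.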

  12. A numerical study of the influence of the void drift model on the predictions of the assert subchannel code

    International Nuclear Information System (INIS)

    Tye, P.; Teyssedou, A.; Troche, N.; Kiteley, J.

    1996-01-01

    One of the factors which is important in order to ensure the continued safe operation of nuclear reactors is the ability to accurately predict the 'Critical Heat Flux' (CHF) throughout the rod bundles in the fuel channel. One method currently used by the Canadian nuclear industry to predict the CHF in the fuel bundles of CANDU reactors is to use the ASSERT subchannel code to predict the local thermal-hydraulic conditions prevailing at each axial location in each subchannel, in conjunction with appropriate correlations or the CHF look-up table. The successful application of the above methods depends greatly on the ability of ASSERT to accurately predict the local flow conditions throughout the fuel channel. In this paper, full-range qualitative verification tests using the ASSERT subchannel code are presented which show the influence of the void drift model on the predictions of the local subchannel quality. For typical cases using a 7-rod subset of a full 37-element rod bundle taken from the ASSERT validation database, it is shown that the void drift term can significantly influence the calculated distribution of the quality in the rod bundle. In order to isolate, as much as possible, the influence of the void drift term, this first numerical study is carried out with the rod bundle oriented both vertically and horizontally. Subsequently, additional numerical experiments are presented which show the influence that the void drift model has on the predicted CHF locations. (author)

  13. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    Energy Technology Data Exchange (ETDEWEB)

    Karahan, Aydin, E-mail: karahan@mit.ed [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States); Buongiorno, Jacopo [Center for Advanced Nuclear Energy Systems, Nuclear Science and Engineering Department, Massachusetts Institute of Technology (United States)

    2010-01-31

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO_2-PuO_2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE, is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors.

  14. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    International Nuclear Information System (INIS)

    Karahan, Aydin; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO_2-PuO_2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE, is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors.

  15. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1988-01-01

    The work described in this paper is focused on the development, verification and benchmarking of the NUFREQ-NPW code at Westinghouse, USA, for best-estimate prediction of multi-channel core stability margins in US BWRs. Various models incorporated into NUFREQ-NPW are systematically compared against the Westinghouse channel stability analysis code MAZDA, whose mathematical model was developed in an entirely different manner. The NUFREQ-NPW code is extensively benchmarked against experimental stability data with and without nuclear reactivity feedback. Detailed comparisons are next performed against nuclear-coupled core stability data. A physically based algorithm is developed to correct for the effect of flow development on subcooled boiling. Use of this algorithm (to be described in the full paper) captures the peak magnitude as well as the resonance frequency with good accuracy.
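Frequency-domain stability codes such as NUFREQ-NPW express core stability margins through quantities like the decay ratio. As a minimal illustration of that figure of merit (not NUFREQ-NPW's actual frequency-domain algorithm), the decay ratio can be estimated from successive peaks of a damped oscillation:

```python
import numpy as np

def decay_ratio(signal):
    """Ratio of the second oscillation peak to the first: a common
    stability-margin measure for BWR cores (decay ratio < 1 => stable)."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
    if len(peaks) < 2:
        raise ValueError("need at least two oscillation peaks")
    return signal[peaks[1]] / signal[peaks[0]]

# synthetic damped oscillation x(t) = exp(-0.5 t) cos(2 pi t)
t = np.arange(0.0, 3.0, 0.001)
x = np.exp(-0.5 * t) * np.cos(2 * np.pi * t)
dr = decay_ratio(x)
print(round(dr, 3))  # ~exp(-0.5) ~= 0.607 for a 1 s oscillation period
```

For a pure damped cosine the peak ratio equals exp(-zeta*T), so the estimator can be checked against the known damping.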

  16. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV minus A+V) of the auditory evoked potentials were found. An audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  17. Accurate prediction of the toxicity of benzoic acid compounds in mice via oral without using any computer codes

    International Nuclear Information System (INIS)

    Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad

    2012-01-01

    Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic, which may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the oral LD50 toxicity of benzoic acid compounds in mice, with as much reliance on their answers as one could attach to the outputs of more complex methods. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based on only the number of carbon and hydrogen atoms; it can be improved through several molecular fragments in the second model. For 57 benzoic compounds for which quantitative structure-toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
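The paper's fitted coefficients are not reproduced here, but the structure of the first model, a desk calculation from elemental composition alone, can be sketched. All coefficient values below are placeholders, not the published ones:

```python
def predict_log_ld50(n_carbon, n_hydrogen, a=2.0, b=0.05, c=0.02):
    """Elemental-composition toxicity estimate in the spirit of the
    paper's first model: log10(LD50) as a linear function of the number
    of carbon and hydrogen atoms. The coefficients a, b, c here are
    ILLUSTRATIVE placeholders, not the published fitted values."""
    return a + b * n_carbon + c * n_hydrogen

# benzoic acid is C7H6O2
print(round(predict_log_ld50(7, 6), 2))  # 2.47 with the placeholder coefficients
```

The second model would add fragment-correction terms to the same linear form, one per molecular fragment present.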

  18. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    Science.gov (United States)

    Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua

    2015-12-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of the function-known genes, the mean accuracy of 99.99% and mean Matthews correlation coefficient value of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and the Funding from the State Key Laboratory of Bioelectronics of Southeast University.
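The TN-curve details are specific to the paper, but the Z-curve component of the re-prediction is a standard transform: a DNA sequence is mapped to three cumulative coordinates separating purine/pyrimidine, amino/keto, and weak/strong hydrogen-bonding content. A minimal sketch:

```python
def z_curve(seq):
    """Cumulative Z-curve coordinates of a DNA sequence:
    x: purine (A,G) vs pyrimidine (C,T)
    y: amino (A,C) vs keto (G,T)
    z: weak (A,T) vs strong (G,C) hydrogen bonding"""
    x = y = z = 0
    xs, ys, zs = [], [], []
    for base in seq.upper():
        a, c, g, t = (base == "A"), (base == "C"), (base == "G"), (base == "T")
        x += (a + g) - (c + t)
        y += (a + c) - (g + t)
        z += (a + t) - (g + c)
        xs.append(x); ys.append(y); zs.append(z)
    return xs, ys, zs

xs, ys, zs = z_curve("ATGC")
print(xs, ys, zs)  # [1, 0, 1, 0] [1, 0, -1, 0] [1, 2, 1, 0]
```

Coding and non-coding sequences trace characteristically different Z curves, which is what the discrimination step exploits.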

  19. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in steady state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  20. Improved Intra-coding Methods for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Li Song

    2009-01-01

    The H.264/AVC design adopts a multidirectional spatial prediction model to reduce spatial redundancy, where neighboring pixels are used as a prediction for the samples in a data block to be encoded. In this paper, a recursive prediction scheme and an enhanced block-matching algorithm (BMA) prediction scheme are designed and integrated into the state-of-the-art H.264/AVC framework to provide a new intra coding model. Extensive experiments demonstrate that the coding efficiency can on average be increased by 0.27 dB in comparison with the performance of the conventional H.264 coding model.
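The integrated H.264/AVC scheme is more involved, but the core of a BMA-style predictor, finding the best-matching block in the already-coded region by sum of absolute differences (SAD), can be sketched as follows (a toy illustration, not the paper's implementation):

```python
import numpy as np

def best_match(frame, block_tl, bsize, search_tl, search_br):
    """Exhaustive block matching: return the top-left corner of the
    candidate block minimizing the sum of absolute differences (SAD)
    against the block at block_tl."""
    r0, c0 = block_tl
    cur = frame[r0:r0 + bsize, c0:c0 + bsize].astype(int)
    best, best_sad = None, None
    for r in range(search_tl[0], search_br[0] - bsize + 1):
        for c in range(search_tl[1], search_br[1] - bsize + 1):
            cand = frame[r:r + bsize, c:c + bsize].astype(int)
            sad = int(np.abs(cur - cand).sum())
            if best_sad is None or sad < best_sad:
                best, best_sad = (r, c), sad
    return best, best_sad

# toy frame in which the 4x4 block at (0, 4) repeats the block at (0, 0)
frame = np.tile(np.arange(16, dtype=np.uint8).reshape(4, 4), (1, 2))
pos, sad = best_match(frame, (0, 4), 4, (0, 0), (4, 4))
print(pos, sad)  # (0, 0) 0
```

In intra BMA prediction the search window is restricted to pixels already reconstructed by the decoder, so the match position can be signaled and reproduced losslessly.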

  1. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
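The article's analysis applies to a class of single-cell learning rules rather than to one specific rule. Oja's rule is a standard member of that class (not necessarily the rule analyzed) and illustrates the ingredients: random, independent weight initialization (zero prior information) and data-driven convergence toward a structured receptive field, here the principal component of the inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def oja(X, eta=0.005, epochs=50):
    """Oja's single-cell Hebbian learning rule with random, independent
    weight initialization. The weight vector converges toward the first
    principal component of the training inputs."""
    w = rng.normal(size=X.shape[1])    # random init: zero prior information
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                  # cell response
            w += eta * y * (x - y * w) # Hebbian growth + implicit normalization
    return w / np.linalg.norm(w)

# inputs whose dominant variance lies along (1, 1)/sqrt(2)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.3])
X = X @ (np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2))
w = oja(X)
print(np.round(np.abs(w), 2))  # close to [0.71, 0.71]
```

Training the same rule on natural image patches instead of this 2-D toy input is what yields oriented, localized receptive fields.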

  2. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    International Nuclear Information System (INIS)

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-01-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed to the user in the case of a calculation with counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.
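The CCFL option described above lets the user impose a flooding correlation at a chosen location. Flooding limits of this kind are commonly written in the Wallis form sqrt(jg*) + m*sqrt(jf*) = C, where jg* and jf* are dimensionless gas and liquid superficial velocities; the constants below are illustrative, not CATHARE's:

```python
from math import sqrt

def wallis_flooding_jg(jf_star, c=0.725, m=1.0):
    """Wallis-type flooding line sqrt(jg*) + m*sqrt(jf*) = C, solved for
    the dimensionless gas superficial velocity jg* at which CCFL sets in.
    Here jg* = jg*sqrt(rho_g/(g*D*(rho_f - rho_g))), similarly for jf*.
    C and m depend on geometry and scale; 0.725 and 1.0 are illustrative."""
    if m * sqrt(jf_star) >= c:
        return 0.0  # liquid downflow alone already sits on the flooding line
    return (c - m * sqrt(jf_star)) ** 2

print(round(wallis_flooding_jg(0.0), 4))   # 0.5256 (= C**2, zero liquid flow)
print(round(wallis_flooding_jg(0.09), 4))  # 0.1806
```

The geometry and scale effects discussed in the paper show up precisely as changes in C and m, which is why the correlation and its location must be chosen by the user.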

  3. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    Energy Technology Data Exchange (ETDEWEB)

    Geffraye, G.; Bazin, P.; Pichon, P. [CEA/DRN/STR, Grenoble (France)

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed to the user in the case of a calculation with counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.

  4. Prediction of detonation and JWL eos parameters of energetic materials using EXPLO5 computer code

    CSIR Research Space (South Africa)

    Peter, Xolani

    2016-09-01

    Ballistic Organization, Cape Town, South Africa, 27-29 September 2016. Nowadays many numerical methods and programs are used for carrying out thermodynamic calculations of the detonation parameters of condensed explosives, for example BKW Fortran (Mader, 1967), Ruby (Cowperthwaite and Zwisler, 1974), TIGER...
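The JWL equation of state named in the title has a standard closed form for the pressure of the detonation products as a function of relative volume. A sketch with commonly quoted TNT parameters (values differ between sources and are not taken from this paper):

```python
from math import exp

def jwl_pressure(v, a=371.2, b=3.231, r1=4.15, r2=0.95, omega=0.30, e0=7.0):
    """JWL equation of state for detonation products:
    P(V) = A(1 - w/(R1 V))exp(-R1 V) + B(1 - w/(R2 V))exp(-R2 V) + w E0/V,
    with V the relative volume and pressures in GPa. The defaults are
    commonly quoted TNT parameters; published values vary."""
    return (a * (1 - omega / (r1 * v)) * exp(-r1 * v)
            + b * (1 - omega / (r2 * v)) * exp(-r2 * v)
            + omega * e0 / v)

print(round(jwl_pressure(1.0), 2))  # 8.38 (GPa) at V = 1 for these parameters
```

Codes such as EXPLO5 fit the parameters A, B, R1, R2 and omega so that this isentrope reproduces the computed detonation state and expansion work.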

  5. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic-thermalhydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the man-machine interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, has proved that SIMTRAN and the man-machine graphic user interface have the qualities for a fast, accurate, user-friendly, reliable, detailed and comprehensive online core surveillance and prediction system.

  6. Bayesian decision support for coding occupational injury data.

    Science.gov (United States)

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes in SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data are useful for surveillance as well as for prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
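The SOII implementation is not reproduced here, but the mechanics of a single-word Naïve Bayes autocoder with a prediction-strength threshold for routing cases to manual review can be sketched (category names and narratives below are invented for illustration):

```python
from collections import Counter, defaultdict
import math

class NaiveBayesCoder:
    """Single-word Naive Bayes autocoder sketch: narratives whose top
    prediction falls below a strength threshold are flagged for review."""
    def fit(self, texts, codes):
        self.priors = Counter(codes)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, code in zip(texts, codes):
            for w in text.lower().split():
                self.word_counts[code][w] += 1
                self.vocab.add(w)
        self.total = len(codes)
        return self

    def predict(self, text, threshold=0.8):
        scores = {}
        for code, prior in self.priors.items():
            logp = math.log(prior / self.total)
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in text.lower().split():
                logp += math.log((self.word_counts[code][w] + 1) / denom)
            scores[code] = logp
        m = max(scores.values())
        probs = {c: math.exp(s - m) for c, s in scores.items()}
        z = sum(probs.values())
        best = max(probs, key=probs.get)
        strength = probs[best] / z  # prediction strength
        return (best, strength) if strength >= threshold else ("REVIEW", strength)

coder = NaiveBayesCoder().fit(
    ["fell from ladder", "slipped on wet floor", "fell off roof ladder"],
    ["fall_height", "fall_same_level", "fall_height"])
print(coder.predict("fell from the ladder"))  # confident, so not flagged for review
```

The two-word-sequence (TW) variant would count adjacent word pairs instead of single words; agreement between the two models then serves as an additional autocoding criterion.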

  7. The representation of the various intersubchannel transfer mechanisms and their effects on the predictions of the ASSERT-4 subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Tye, P [Ecole Polytechnique, Montreal, PQ (Canada)

    1994-12-31

    In this paper, the effects that the constitutive relations used to represent some of the intersubchannel transfer mechanisms have on the predictions of the ASSERT-4 subchannel code for horizontal flows are examined. In particular, the choices made in the representation of the gravity-driven phase separation phenomenon, which is unique to the horizontal fuel channel arrangement seen in CANDU reactors, are analyzed. This is done by comparing the predictions of the ASSERT-4 subchannel code with experimental data on void fraction, mass flow rate, and pressure drop obtained for two horizontal interconnected subchannels. ASSERT-4, the subchannel code used by the Canadian nuclear industry, uses an advanced drift-flux model which permits departure from both thermal and mechanical equilibrium between the phases to be accurately modeled. In particular, ASSERT-4 contains models for the buoyancy effects which cause phase separation between adjacent subchannels in horizontal flows. This feature, which is of great importance in the subchannel analysis of CANDU reactors, is implemented in the constitutive relationship for the relative velocity required by the conservation equations. In order to isolate, as much as is physically possible, the different inter-subchannel transfer mechanisms, three different subchannel orientations are analyzed: the two subchannels at the same elevation, the high-void subchannel below the low-void subchannel, and the high-void subchannel above the low-void subchannel. It is observed that for all three subchannel orientations ASSERT-4 does a reasonably good job of predicting the experimental trends. However, certain modifications to the representation of the gravitational phase separation effects which seem to improve the overall predictions are suggested. (author). 12 refs., 12 figs.

  8. Assessment of 12 CHF prediction methods, for an axially non-uniform heat flux distribution, with the RELAP5 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Ferrouk, M. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria)], E-mail: m_ferrouk@yahoo.fr; Aissani, S. [Laboratoire du Genie Physique des Hydrocarbures, University of Boumerdes, Boumerdes 35000 (Algeria); D' Auria, F.; DelNevo, A.; Salah, A. Bousbia [Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione, Universita di Pisa (Italy)

    2008-10-15

    The present article covers the evaluation of the performance of twelve critical heat flux methods/correlations published in the open literature. The study concerns the simulation of an axially non-uniform heat flux distribution with the RELAP5 computer code in a single boiling water reactor channel benchmark problem. The nodalization scheme employed for the considered geometry, as modelled in the RELAP5 code, is described. For this purpose a review of critical heat flux models/correlations applicable to non-uniform axial heat profiles is provided. Simulation results using the RELAP5 code were compared with those obtained from our computer program, which is based on three types of prediction methods: the local-conditions, F-factor, and boiling-length-average approaches.

  9. The Interaction between Interoceptive and Action States within a Framework of Predictive Coding

    Science.gov (United States)

    Marshall, Amanda C.; Gentsch, Antje; Schütz-Bosbach, Simone

    2018-01-01

    The notion of predictive coding assumes that perception is an iterative process between prior knowledge and sensory feedback. To date, this perspective has been primarily applied to exteroceptive perception as well as action and its associated phenomenological experiences such as agency. More recently, this predictive, inferential framework has been theoretically extended to interoception. This idea postulates that subjective feeling states are generated by top–down inferences made about internal and external causes of interoceptive afferents. While the processing of motor signals for action control and the emergence of selfhood have been studied extensively, the contributions of interoceptive input and especially the potential interaction of motor and interoceptive signals remain largely unaddressed. Here, we argue for a specific functional relation between motor and interoceptive awareness. Specifically, we implicate interoceptive predictions in the generation of subjective motor-related feeling states. Furthermore, we propose a distinction between reflexive and pre-reflexive modes of agentic action control and suggest that interoceptive input may affect each differently. Finally, we advocate the necessity of continuous interoceptive input for conscious forms of agentic action control. We conclude by discussing further research contributions that would allow for a fuller understanding of the interaction between agency and interoceptive awareness. PMID:29515495

  10. REVISED STREAM CODE AND WASP5 BENCHMARK

    International Nuclear Information System (INIS)

    Chen, K

    2005-01-01

    STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
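WASP5's transport solver is not reproduced here, but the oscillation issue motivating the replacement is classic: non-monotone discretizations of the 1-D advection equation produce spurious over- and undershoots, while a first-order upwind scheme (sketched below purely as an illustration) is monotone at the cost of numerical diffusion:

```python
import numpy as np

def advect_upwind(c0, u, dx, dt, steps):
    """First-order upwind scheme for dc/dt + u*dc/dx = 0 (u > 0).
    Monotone -- no spurious oscillations -- whenever the Courant
    number u*dt/dx <= 1, at the price of numerical diffusion."""
    cr = u * dt / dx
    assert 0.0 < cr <= 1.0, "Courant stability condition violated"
    c = c0.copy()
    for _ in range(steps):
        c[1:] -= cr * (c[1:] - c[:-1])  # inflow boundary c[0] held fixed
    return c

c0 = np.zeros(100)
c0[10:20] = 1.0  # square pollutant pulse
c = advect_upwind(c0, u=1.0, dx=1.0, dt=0.5, steps=40)
print(round(float(c.min()), 6), round(float(c.max()), 3))  # min stays at 0: no undershoot
```

The pulse smears as it advects, but concentrations never leave the physical range of the initial data, which is the property the algebraic STREAM approximation lacked for long releases.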

  11. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
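The unified objective described above jointly learns the codebook, sparse codes, labels, and classifier; the inner sparse-coding step alone can be sketched with ISTA (iterative soft thresholding) on a fixed dictionary, a generic building block rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_code(D, x, lam=0.1, iters=200):
    """ISTA for min_s 0.5*||x - D s||^2 + lam*||s||_1 with a fixed
    dictionary D: a gradient step followed by soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        g = s + D.T @ (x - D @ s) / L      # gradient step on the data term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return s

# dictionary of 5 unit-norm atoms; signal built from atoms 0 and 3
D = rng.normal(size=(20, 5))
D /= np.linalg.norm(D, axis=0)
x = 1.0 * D[:, 0] + 0.5 * D[:, 3]
s = sparse_code(D, x)
print(np.argsort(-np.abs(s))[:2])  # the two truly active atoms rank first
```

In the semi-supervised setting, the resulting codes s would additionally be constrained to predict the (known or inferred) class labels through a linear classifier.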

  12. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  13. Effect of deformability on fluid flow through a fractured-porous medium

    International Nuclear Information System (INIS)

    Tsang, C.F.; Noorishad, J.; Witherspoon, P.A.

    1985-01-01

    A permeable geologic medium containing interstitial fluids generally undergoes deformation as the fluid pressure changes. Depending on the nature of the medium, the strain ranges from infinitesimal to finite quantities. This response is the result of a coupled hydraulic-mechanical phenomenon which can basically be formulated in the generalized three-dimensional theory of consolidation. Dealing mainly with media of little deformability, traditional hydrogeology accounts for medium deformability only as far as it affects the volume of pore spaces, through the introduction of a coefficient of specific storage in the fluid flow equation. This treatment can be justified on the basis of a one-dimensional effective stress law and the assumption of homogeneity of the total stress field throughout the medium. The present paper uses a numerical model called ROCMAS (Noorishad et al., 1982; Noorishad et al., 1984) which was developed to calculate fluid flow through a deformable fractured-porous medium. The code employs the finite element method based on a variational approach. It has been verified against a number of simple analytic solutions. In this work, the code is used to address the role of medium deformability in continuous and pulse testing techniques. The errors that may result from the application of traditional fluid flow methods are discussed. It is found that low-pressure continuous well testing or pulse testing procedures can reduce such errors. 16 references, 9 figures, 1 table
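The full ROCMAS formulation is a 3-D finite element consolidation model; its one-dimensional limit reduces to a diffusion equation for excess pore pressure, which already exhibits the coupled hydraulic-mechanical response relevant to well testing. A minimal explicit finite-difference sketch with illustrative parameters:

```python
import numpy as np

def consolidate(p0, cv, dz, dt, steps):
    """Explicit finite-difference solution of the 1-D consolidation
    equation dp/dt = cv * d2p/dz2 for excess pore pressure p, with
    drained (p = 0) boundaries -- the one-dimensional limit of a
    coupled hydro-mechanical formulation."""
    r = cv * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability limit"
    p = p0.copy()
    for _ in range(steps):
        p[1:-1] += r * (p[2:] - 2.0 * p[1:-1] + p[:-2])
        p[0] = p[-1] = 0.0
    return p

p0 = np.full(51, 100.0)   # uniform initial excess pore pressure, kPa
p0[0] = p0[-1] = 0.0      # drains at both faces of a 1 m layer
p = consolidate(p0, cv=1e-6, dz=0.02, dt=100.0, steps=500)
print(round(float(p.max()), 1))  # mid-layer pressure partially dissipated
```

The coefficient cv lumps the permeability and the drained compressibility of the skeleton, which is exactly the deformability information that a specific-storage treatment compresses into a single constant.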

  14. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
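One simple quantitative comparison of the kind advocated (an illustrative choice, not necessarily the paper's metric) is the RMS of relative errors between code predictions and measurements, which yields a single figure of merit per code and hence a ranking:

```python
import math

def figure_of_merit(predicted, measured):
    """RMS of relative errors between code predictions and measurements;
    smaller is better, so competing codes can be ranked by this number."""
    n = len(predicted)
    return math.sqrt(sum(((p - m) / m) ** 2
                         for p, m in zip(predicted, measured)) / n)

measured = [100.0, 50.0, 12.0]        # hypothetical measured quantities
codes = {
    "code_A": [102.0, 48.5, 11.0],    # hypothetical code predictions
    "code_B": [110.0, 44.0, 12.5],
}
ranking = sorted(codes, key=lambda k: figure_of_merit(codes[k], measured))
print(ranking)  # ['code_A', 'code_B']
```

Unlike "agreement is within 10%", such a metric degrades gracefully: a code that is slightly worse on every point scores measurably, not anecdotally, worse.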

  15. Benchmark calculation of subchannel analysis codes

    International Nuclear Information System (INIS)

    1996-02-01

    In order to evaluate the analysis capabilities of various subchannel codes used in the thermal-hydraulic design of light water reactors, benchmark calculations were performed. The selected benchmark problems and the major findings obtained by the calculations were as follows: (1) As for single-phase flow mixing experiments between two channels, the calculated water temperature distributions along the flow direction agreed with the experimental results when the turbulent mixing coefficients were tuned properly. However, the effect of gap width observed in the experiments could not be predicted by the subchannel codes. (2) As for two-phase flow mixing experiments between two channels, in high water flow rate cases the calculated distributions of air and water flows in each channel agreed well with the experimental results. In low water flow cases, on the other hand, the air mixing rates were underestimated. (3) As for two-phase flow mixing experiments among multiple channels, the calculated mass velocities at the channel exit under steady-state conditions agreed with the experimental values within about 10%. However, the predictive errors of the exit qualities were as high as 30%. (4) As for critical heat flux (CHF) experiments, two different results were obtained. One code indicated that the CHFs calculated using the KfK or EPRI correlations agreed well with the experimental results, while another code suggested that the CHFs were well predicted by using the WSC-2 correlation or the Weisman-Pei mechanistic model. (5) As for droplet entrainment and deposition experiments, it was indicated that the predictive capability was significantly increased by improving the correlations. On the other hand, a remarkable discrepancy between the codes was observed: one code underestimated the droplet flow rate and overestimated the liquid film flow rate in high quality cases, while another code overestimated the droplet flow rate and underestimated the liquid film flow rate in low quality cases.
(J.P.N.)

  16. Code package to analyse behavior of the WWER fuel rods in normal operation: TOPRA's code

    International Nuclear Information System (INIS)

    Scheglov, A.; Proselkov, V.

    2001-01-01

    This paper briefly describes a code package intended for the analysis of WWER fuel rod characteristics. The package includes two computer codes, TOPRA-1 and TOPRA-2, for full-scale fuel rod analyses, and the MRZ and MKK codes for analyzing separate sections of fuel rods in r-z and r-j geometry. The TOPRA codes are developed on the basis of the PIN-mod2 version and verified against experimental results obtained in the MR, MIR and Halden research reactors (in the framework of the SOFIT, FGR-2 and FUMEX experimental programs). A comparative analysis of calculation results against results from post-reactor examination of WWER-440 and WWER-1000 fuel rods is also made as additional verification of these codes. To avoid enlarging the uncertainties in fuel behavior prediction as a result of simplifying the fuel geometry, the MKK and MRZ codes are developed on the basis of the finite element method with use of three nodal finite elements. Results obtained in the course of the code verification indicate the applicability of the method and the TOPRA codes for simplified engineering calculations of the thermal-physical parameters of WWER fuel rods. An analysis of the maximum relative errors in predicting the fuel rod characteristics over the range of accepted parameter values is also presented in the paper.

  17. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA-protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA-interacting proteins by making full use of the known lncRNA-protein interactions. A leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA-interacting proteins.
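LPBNI propagates known interactions over the bipartite lncRNA-protein graph. A generic two-step resource-allocation sketch in the same spirit (not the published implementation) looks like this:

```python
import numpy as np

def bipartite_inference(A):
    """Two-step resource propagation on a bipartite interaction matrix
    A (rows: lncRNAs, columns: proteins). Known interactions are spread
    from proteins to lncRNAs and back, producing a score matrix that
    ranks candidate (lncRNA, protein) pairs."""
    k_prot = A.sum(axis=0)  # protein degrees
    k_lnc = A.sum(axis=1)   # lncRNA degrees
    # transfer matrix between lncRNAs via shared protein neighbors
    W = (A / np.where(k_prot, k_prot, 1)) @ (A.T / np.where(k_lnc, k_lnc, 1))
    return W @ A            # propagate the known interaction profile

# 3 lncRNAs x 3 proteins; lncRNA 0 shares protein 1 with lncRNA 1
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
scores = bipartite_inference(A)
print(scores.round(2))  # unobserved pair (lncRNA 0, protein 2) gets score 0.25
```

Pairs with no path through shared neighbors keep a zero score, while unobserved pairs connected through the bipartite structure receive nonzero scores and can be ranked as candidates, which is how LOOCV-style evaluations of such methods are set up.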

  18. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as ''calibrated code,'' ''validated code,'' and a ''validation experiment'' is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  19. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way that the quality of the predictions has a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, aiming to improve on the side information of a single...

  20. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1985-01-01

    The Fusion Safety Program of EG and G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide the temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first wall structure design. This structure was composed of composite layers of vanadium and stainless steel. Included in these calculations was the effect of contrasting material tritium solubility at the composite interface. In addition, TMAP was used to investigate the rate of tritium cleanup after an accidental release into the atmosphere of a reactor building. Tritium retention and release from surfaces and conversion to the oxide form were predicted
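    The one-dimensional tritium movement that such codes compute can be illustrated with a minimal explicit finite-difference diffusion solve through a single wall layer (illustrative property values; trapping, composite layers, and thermal coupling are omitted):

```python
import numpy as np

# Explicit 1-D diffusion of tritium concentration through a wall slab
# (toy numbers, not TMAP's models or data).
D = 1e-9                        # diffusivity, m^2/s
L = 1e-3                        # wall thickness, m
n = 51
dx = L / (n - 1)
dt = 0.4 * dx * dx / D          # stable explicit step (<= dx^2 / (2 D))
c = np.zeros(n)
c[0] = 1.0                      # fixed plasma-side surface concentration

for _ in range(20000):          # march well past the diffusion time L^2/D
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0      # Dirichlet boundaries (coolant side at zero)
```

    At steady state the concentration profile approaches a straight line between the two boundary values, the expected result for constant-property diffusion with fixed surface concentrations.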

  1. Hybrid Video Coding Based on Bidimensional Matching Pursuit

    Directory of Open Access Journals (Sweden)

    Lorenzo Granai

    2004-12-01

    Full Text Available Hybrid video coding combines two stages: first, motion estimation and compensation predict each frame from the neighboring frames; then the prediction error is coded, reducing the correlation in the spatial domain. In this work, we focus on the latter stage, presenting a scheme that profits from some of the features introduced by the standard H.264/AVC for motion estimation and replaces the transform in the spatial domain. The prediction error is instead coded using the matching pursuit algorithm, which decomposes the signal over a specially designed bidimensional, anisotropic, redundant dictionary. Comparisons are made among the proposed technique, H.264, and a DCT-based coding scheme. Moreover, we introduce fast techniques for atom selection, which exploit the spatial localization of the atoms. An adaptive coding scheme aimed at optimizing the resource allocation is also presented, together with a rate-distortion study for the matching pursuit algorithm. Results show that the proposed scheme outperforms the standard DCT, especially at very low bit rates.
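    The core of the coding stage is the matching pursuit decomposition. A minimal sketch over a generic unit-norm dictionary (not the paper's anisotropic bidimensional dictionary) is:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the unit-norm dictionary
    atom (column) with the largest inner product against the residual,
    accumulate its coefficient, and subtract its contribution."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        inner = dictionary.T @ residual
        k = int(np.argmax(np.abs(inner)))
        coeffs[k] += inner[k]
        residual -= inner[k] * dictionary[:, k]
    return coeffs, residual
```

    The residual energy is non-increasing at every step, which is what makes the greedy decomposition controllable in a rate-distortion sense.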

  2. APR1400 Containment Simulation with CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Chung, Bub Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The more realistic containment pressure variation predicted by the CONTAIN code through a coupled analysis of a large-break loss-of-coolant accident is expected to give a more accurate prediction of plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR1400 input for the CONTEMPT code. As in the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment. The developed CONTAIN input is eventually to be applied in the coupled MARS-KS/CONTAIN code calculation
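    The two-cell idea can be illustrated with a toy ideal-gas estimate of the containment-cell pressure after steam addition (all numbers hypothetical; CONTAIN itself solves full mass and energy balances with condensation and heat sinks):

```python
# Toy estimate: pressure in the containment cell after a blowdown adds
# steam, treating air and steam as ideal gases at an assumed common
# temperature. All numbers are hypothetical.
R = 8.314                              # J/(mol K)
V = 70000.0                            # containment free volume, m^3
T = 400.0                              # assumed post-blowdown temperature, K
n_air = 101325.0 * V / (R * 300.0)     # moles of air initially at 1 atm, 300 K
m_steam = 2.0e5                        # kg of steam released to the cell
n_steam = m_steam / 0.018              # moles of steam (18 g/mol)
p = (n_air + n_steam) * R * T / V      # total cell pressure, Pa
```

    Even this crude balance shows the qualitative trend: the added steam moles and the temperature rise both push the containment pressure well above its initial value.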

  3. APR1400 Containment Simulation with CONTAIN code

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Chung, Bub Dong

    2010-01-01

    The more realistic containment pressure variation predicted by the CONTAIN code through a coupled analysis of a large-break loss-of-coolant accident is expected to give a more accurate prediction of plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR1400 input for the CONTEMPT code. As in the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment. The developed CONTAIN input is eventually to be applied in the coupled MARS-KS/CONTAIN code calculation

  4. Physical models and codes for prediction of activity release from defective fuel rods under operation conditions and in leakage tests during refuelling

    International Nuclear Information System (INIS)

    Likhanskii, V.; Evdokimov, I.; Khoruzhii, O.; Sorokin, A.; Novikov, V.

    2003-01-01

    To improve the reliability of defective fuel rod detection, and to determine defect characteristics from activity measurements in the primary coolant, it is appropriate to base the dependences used in design-analytical codes on physical models. This paper presents results on the development of such physical models and of integral mechanistic codes intended for predicting defective fuel rod behaviour. Analysis of mass transfer and mass exchange between a fuel rod and the coolant showed that the rates of these processes depend on many factors, such as the turbulent coolant flow, pressure, the effective hydraulic diameter of the defect, and fuel rod geometric parameters. Models describing these dependences have been created. The models of thermomechanical fuel behaviour and of stable gaseous FP release were modified, and on this basis a new computer code, RTOP-CA, was created to describe defective fuel rod behaviour and activity release into the primary coolant. A model of fuel oxidation under in-pile conditions, including radiolysis, was also developed. After validation of the physical models, the RTOP-CA and RTOP-LT codes are planned to be used for prediction of defective fuel rod behaviour

  5. Histone modification profiles are predictive for tissue/cell-type specific expression of both protein-coding and microRNA genes

    Directory of Open Access Journals (Sweden)

    Zhang Michael Q

    2011-05-01

    Full Text Available Abstract Background Gene expression is regulated at both the DNA sequence level and through modification of chromatin. However, the effect of chromatin on tissue/cell-type specific gene regulation (TCSR) is largely unknown. In this paper, we present a method to elucidate the relationship between histone modification/variation (HMV) and TCSR. Results A classifier for differentiating CD4+ T cell-specific genes from housekeeping genes using HMV data was built. We found HMV in both promoter and gene body regions to be predictive of genes which are targets of TCSR. For example, the histone modification types H3K4me3 and H3K27ac were identified as the most predictive for CpG-related promoters, whereas H3K4me3 and H3K79me3 were the most predictive for nonCpG-related promoters. However, genes targeted by TCSR can be predicted using other types of HMV as well. Such redundancy implies that multiple types of underlying regulatory elements, such as enhancers or intragenic alternative promoters, which can regulate gene expression in a tissue/cell-type specific fashion, may be marked by the HMVs. Finally, we show that the predictive power of HMV for TCSR is not limited to protein-coding genes in CD4+ T cells, as we successfully predicted TCSR-targeted genes in muscle cells, as well as microRNA genes with expression specific to CD4+ T cells, by the same classifier, which was trained on HMV data of protein-coding genes in CD4+ T cells. Conclusion We have begun to understand the HMV patterns that guide gene expression in both a tissue/cell-type specific and a ubiquitous manner.

  6. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an optical code division multiplexing (OCDMA) based QKD network. A unique address assigned to each user, coupled with the low probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on the OCDMA-based QKD network. Flexibility in design, as well as ease of modifying the number of users, are equally exceptional qualities of the code in contrast to the optical orthogonal codes (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.
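    A one-weight address assignment can be sketched as follows, assuming the simplest construction in which each user's codeword carries a single '1' chip at a unique position (the paper's actual code design may differ):

```python
def one_weight_codes(n_users, code_length):
    """Assign each user a codeword of Hamming weight one: a single '1'
    chip at a chip position unique to that user. Distinct codewords then
    have zero cross-correlation, so users do not interfere."""
    if n_users > code_length:
        raise ValueError("need at least as many chip positions as users")
    return [[1 if j == i else 0 for j in range(code_length)]
            for i in range(n_users)]
```

    Zero cross-correlation between any two distinct codewords is what makes adding or removing a user a purely local change, one of the flexibility advantages claimed over OOC designs.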

  7. Code Development and Analysis Program: developmental checkout of the BEACON/MOD2A code

    International Nuclear Information System (INIS)

    Ramsthaler, J.A.; Lime, J.F.; Sahota, M.S.

    1978-12-01

    A best-estimate transient containment code, BEACON, is being developed by EG and G Idaho, Inc. for the Nuclear Regulatory Commission's reactor safety research program. This is an advanced, two-dimensional fluid flow code designed to predict temperatures and pressures in a dry PWR containment during a hypothetical loss-of-coolant accident. The most recent version of the code, MOD2A, is presently in the final stages of production prior to being released to the National Energy Software Center. As part of the final code checkout, seven sample problems were selected to be run with BEACON/MOD2A

  8. From structure prediction to genomic screens for novel non-coding RNAs.

    Science.gov (United States)

    Gorodkin, Jan; Hofacker, Ivo L

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.
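    The energy-directed folding of single sequences mentioned above can be illustrated with the classic Nussinov base-pair maximization, a dynamic-programming simplification of the thermodynamic folding used by real tools:

```python
def nussinov(seq, min_loop=3):
    """Base-pair maximization: N[i][j] holds the maximum number of nested
    Watson-Crick/GU-wobble pairs in seq[i..j], with at least `min_loop`
    unpaired bases inside any hairpin loop."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]                      # case 1: i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:       # case 2: pair i with k
                    right = N[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + N[i + 1][k - 1] + right)
            N[i][j] = best
    return N[0][n - 1] if n else 0
```

    Production methods replace the pair count with a full free-energy model and, as the abstract notes, gain most of their accuracy from comparative analysis across genomes rather than from single-sequence folding alone.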

  9. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    Energy Technology Data Exchange (ETDEWEB)

    2018-03-19

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  10. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
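    The AUC figure of merit used to compare the two models can be computed directly from the rank statistic, as in this short sketch:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive is scored above a
    randomly chosen negative, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 corresponds to chance-level ranking, which is why the 0.79 versus 0.70 gap between L-LDA and the lasso baseline is meaningful.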

  11. RELAP5/MOD2 code assessment

    International Nuclear Information System (INIS)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-01-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system test simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated: (1) RELAP5/MOD2/Cycle 22, the first released version; (2) YELAP5/Cycle 32, an EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36, the frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G

  12. RELAP5/MOD2 code assessment

    Energy Technology Data Exchange (ETDEWEB)

    Nithianandan, C.K.; Shah, N.H.; Schomaker, R.J.; Miller, F.R.

    1985-11-01

    Babcock and Wilcox (B and W) has been working with the code developers at EG and G and the US Nuclear Regulatory Commission in assessing the RELAP5/MOD2 computer code for the past year by simulating selected separate-effects tests. The purpose of this assessment has been to evaluate the code for use in MIST (Ref. 2) and OTIS integral system test simulations and in the prediction of pressurized water reactor transients. B and W evaluated various versions of the code and made recommendations to improve code performance. As a result, the currently released version (cycle 36.1) has been improved considerably over earlier versions. However, further refinements to some of the constitutive models may still be needed to further improve the predictive capability of RELAP5/MOD2. The following versions of the code were evaluated: (1) RELAP5/MOD2/Cycle 22, the first released version; (2) YELAP5/Cycle 32, an EG and G test version of RELAP5/MOD2/Cycle 32; (3) RELAP5/MOD2/Cycle 36, the frozen cycle for international code assessment; (4) updates to cycle 36 based on recommendations developed by B and W during the simulation of a Massachusetts Institute of Technology (MIT) pressurizer test; and (5) cycle 36.1 updates received from EG and G.

  13. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they can have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  14. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30–50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  15. Use of AERIN code for determining internal doses of transuranic isotopes

    International Nuclear Information System (INIS)

    King, W.C.

    1980-01-01

    The AERIN computer code is a mathematical expression of the ICRP Lung Model. The code was developed at the Lawrence Livermore National Laboratory to compute the body organ burdens and absorbed radiation doses resulting from the inhalation of transuranic isotopes and to predict the amount of activity excreted in the urine and feces as a function of time. Over forty cases of internal exposure have been studied using the AERIN code. The code, as modified, has proven to be extremely versatile. The case studies presented demonstrate the excellent correlation that can be obtained between code predictions and observed bioassay data. In one case study a discrepancy was observed between an in vivo count of the whole body and the application of the code using urine and fecal data as input. The discrepancy was resolved by in vivo skull counts that showed the code had predicted the correct skeletal burden
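    The compartmental retention behind such lung-model calculations can be illustrated with a toy two-component exponential model (hypothetical half-times and fractions; the ICRP model and AERIN track many more compartments and the urinary/fecal excretion pathways):

```python
import math

def lung_burden(intake, t_days, f_fast=0.6, t_fast=1.0, t_slow=500.0):
    """Toy two-component exponential retention: a fraction of the inhaled
    activity clears quickly (mechanical clearance), the remainder slowly
    (absorption to blood). Half-times in days, purely illustrative."""
    lam_f = math.log(2.0) / t_fast
    lam_s = math.log(2.0) / t_slow
    return intake * (f_fast * math.exp(-lam_f * t_days)
                     + (1.0 - f_fast) * math.exp(-lam_s * t_days))
```

    Fitting the component fractions and half-times to serial urine and fecal bioassay data is, in caricature, how a code of this kind reconstructs organ burdens from excretion measurements.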

  16. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in the hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. The coding validity of a condition is closely related to its clinical importance and to the complexity of the patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
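    The validity measures used in the study can be computed from chart review (the reference standard) and coded data as follows (a generic sketch, not the study's analysis code):

```python
def coding_validity(chart, coded):
    """Sensitivity and positive predictive value of administrative coding
    against chart review. Both inputs are parallel lists of 0/1 flags
    (condition present/absent) for the same patients."""
    tp = sum(1 for c, d in zip(chart, coded) if c == 1 and d == 1)
    fn = sum(1 for c, d in zip(chart, coded) if c == 1 and d == 0)
    fp = sum(1 for c, d in zip(chart, coded) if c == 0 and d == 1)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sens, ppv
```

    Under-coding shows up as a low sensitivity with a comparatively high PPV: conditions that are coded are usually real, but many real conditions never get coded.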

  17. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Typically, a subroutine to calculate the critical heat flux (CHF) is included in codes for the safety analysis of nuclear power plants. A CHF subroutine evaluates the CHF for arbitrary conditions (temperature, pressure, flow rate, power, etc.), and CHF is one of the most important parameters in nuclear power plant safety analysis. However, the subroutines used in most codes, such as the Biasi method, predict values that differ somewhat from experimental data, and most CHF subroutines are reliable only within their specified application range of pressure, mass flow, void fraction, etc. Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, it is not assured that values predicted outside that application range are acceptable. To overcome this limitation, various approaches to estimating the CHF were examined during the development of the SPACE code, and the Six Sigma technique was adopted for the examination described in this study. The objective of this study is to improve the CHF prediction accuracy of the nuclear power plant safety analysis code SPACE using a CHF database and the Six Sigma technique. Through the study, it was concluded that the Six Sigma technique is useful for quantifying the deviation of predicted values from experimental data, and that the CHF prediction method implemented in the SPACE code predicts well compared with other methods
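    Quantifying the deviation of a CHF correlation from an experimental database is commonly done with predicted-to-measured ratio statistics; a minimal sketch:

```python
import math

def pm_ratio_stats(predicted, measured):
    """Mean and sample standard deviation of the predicted-to-measured
    (P/M) CHF ratio, the usual figures of merit when assessing a CHF
    correlation against an experimental database."""
    ratios = [p / m for p, m in zip(predicted, measured)]
    mean = sum(ratios) / len(ratios)
    var = sum((r - mean) ** 2 for r in ratios) / (len(ratios) - 1)
    return mean, math.sqrt(var)
```

    A mean P/M near one with a small standard deviation indicates an unbiased, tight correlation; a Six Sigma style assessment is essentially a systematic study of this spread across the database.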

  18. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants.

    Science.gov (United States)

    Vieira, Lucas Maciel; Grativol, Clicia; Thiebaut, Flavia; Carvalho, Thais G; Hardoim, Pablo R; Hemerly, Adriana; Lifschitz, Sergio; Ferreira, Paulo Cavalcanti Gomes; Walter, Maria Emilia M T

    2017-03-04

    Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge, and the few computational methods currently available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specially designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies in which we identified novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants exposed to pathogenic and beneficial microorganisms.
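    The SVM stage of such a workflow can be sketched with a tiny Pegasos-style sub-gradient trainer for a linear SVM (the actual workflow uses existing SVM software on sequence-derived features; the data and parameters below are purely illustrative):

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic sub-gradient training of a linear SVM
    (hinge loss + L2 regularization). X: list of feature vectors,
    y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for _ in range(len(X)):
            t += 1
            i = rng.randrange(len(X))
            eta = 1.0 / (lam * t)                     # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrink
            if margin < 1.0:                          # hinge sub-gradient step
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0.0 else -1
```

    In a lincRNA workflow the feature vectors would hold sequence-derived quantities (e.g. ORF length, GC content) and the labels would distinguish coding from non-coding training transcripts.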

  19. PlantRNA_Sniffer: A SVM-Based Workflow to Predict Long Intergenic Non-Coding RNAs in Plants

    Directory of Open Access Journals (Sweden)

    Lucas Maciel Vieira

    2017-03-01

    Full Text Available Non-coding RNAs (ncRNAs) constitute an important set of transcripts produced in the cells of organisms. Among them is a large amount of a particular class of long ncRNAs that are difficult to predict, the so-called long intergenic ncRNAs (lincRNAs), which might play essential roles in gene regulation and other cellular processes. Despite the importance of these lincRNAs, there is still a lack of biological knowledge, and the few computational methods currently available are so specific that they cannot be successfully applied to species other than those for which they were originally designed. Prediction of lncRNAs has been performed with machine learning techniques; in particular, supervised learning methods have been explored for lincRNA prediction in the recent literature. As far as we know, there are no methods or workflows specially designed to predict lincRNAs in plants. In this context, this work proposes a workflow to predict lincRNAs in plants that combines known bioinformatics tools with machine learning techniques, here a support vector machine (SVM). We discuss two case studies in which we identified novel lincRNAs, in sugarcane (Saccharum spp.) and in maize (Zea mays). From the results, we could also identify differentially-expressed lincRNAs in sugarcane and maize plants exposed to pathogenic and beneficial microorganisms.

  20. From structure prediction to genomic screens for novel non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Jan Gorodkin

    2011-08-01

    Full Text Available Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.

  1. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output, and example test cases
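    The release-transport-dose chain that such performance assessment codes evaluate can be caricatured with a single-nuclide model: constant fractional release from the vault, decay in transit, and a dose conversion at the receptor (all parameter values hypothetical, not NSURE's models):

```python
import math

def dose_pathway(inventory0, half_life_y, release_rate, travel_time_y,
                 dose_factor):
    """Single-nuclide caricature: the vault inventory depletes by decay
    and by a constant fractional release rate; released activity decays
    again over the groundwater travel time; a dose factor converts the
    arriving activity to a dose rate at the receptor."""
    lam = math.log(2.0) / half_life_y
    def dose_rate(t_years):
        inventory = inventory0 * math.exp(-(lam + release_rate) * t_years)
        released = release_rate * inventory     # activity released per year
        arriving = released * math.exp(-lam * travel_time_y)
        return arriving * dose_factor
    return dose_rate
```

    Real assessments solve coupled transport equations for many nuclides and pathways, but the same structure, source term, transit attenuation, and dose conversion, underlies the calculation.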

  2. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori nuclear power plant (NPP) Unit 1 and the increase of deteriorated NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor-type NPP. In addition, to check the possibility of applying the CRUDTRAN code to a Canadian Deuterium Uranium Reactor (CANDU)-type NPP, the code was assessed using collected domestic onsite data. With the assessment results, it was possible to predict trends according to operating cycles. Values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposures of operators in high-radiation areas and to predict decommissioning processes in the primary system.

  3. Fire-accident analysis code (FIRAC) verification

    International Nuclear Information System (INIS)

    Nichols, B.D.; Gregory, W.S.; Fenton, D.L.; Smith, P.R.

    1986-01-01

    The FIRAC computer code predicts fire-induced transients in nuclear fuel cycle facility ventilation systems. FIRAC calculates simultaneously the gas-dynamic, material transport, and heat transport transients that occur in any arbitrarily connected network system subjected to a fire. The network system may include ventilation components such as filters, dampers, ducts, and blowers. These components are connected to rooms and corridors to complete the network for moving air through the facility. An experimental ventilation system has been constructed to verify FIRAC and other accident analysis codes. The design emphasizes network system characteristics and includes multiple chambers, ducts, blowers, dampers, and filters. A large industrial heater and a commercial dust feeder are used to inject thermal energy and aerosol mass. The facility is instrumented to measure volumetric flow rate, temperature, pressure, and aerosol concentration throughout the system. Aerosol release rates and mass accumulation on filters also are measured. We have performed a series of experiments in which a known rate of thermal energy is injected into the system. We then simulated this experiment with the FIRAC code. This paper compares and discusses the gas-dynamic and heat transport data obtained from the ventilation system experiments with those predicted by the FIRAC code. The numerically predicted data generally are within 10% of the experimental data

  4. Predictions of the thermomechanical code ''RESTA'' compared with fuel element examinations after irradiation in the BR3 reactor

    International Nuclear Information System (INIS)

    Petitgrand, S.

    1980-01-01

    A large number of fuel rods have been irradiated in the small power plant BR3. Many of them have been examined in hot cells after irradiation, thus giving valuable experimental information. On the other hand, a thermomechanical code named RESTA has been developed by the C.E.A. to describe and predict the behaviour of a fuel pin in a PWR environment and in stationary conditions. The models used in that code derive chiefly from the C.E.A.'s own experience and are briefly reviewed in this paper. The comparison between prediction and experience has been performed for four power history classes: (1) moderate (average linear rating approximately 20 kW m⁻¹) and short (approximately 300 days) rating, (2) moderate (approximately 20 kW m⁻¹) and long (approximately 600 days) rating, (3) high (25-30 kW m⁻¹) and long (approximately 600 days) rating, and (4) very high (30-40 kW m⁻¹) and long (approximately 600 days) rating. Satisfactory agreement has been found between experimental and calculated results in all cases, concerning fuel structural change, fission gas release, pellet-clad interaction as well as clad permanent strain. (author)

  5. The Linear Predictive Coding (LPC) method with Hidden Markov Model (HMM) classification for Arabic words spoken by Indonesian speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification using Hidden Markov Models, the voice signal is analyzed and the maximum-likelihood value that can be recognized is sought. The parameters obtained from the modeling are used for comparison with the speech of Arabic speakers. From the test results, classification with Hidden Markov Models using Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data at a sampling frequency of 8,000 Hz, 80.2% at a sampling frequency of 22,050 Hz, and 79% at a sampling frequency of 44,100 Hz.
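The LPC feature-extraction step paired with the HMM classifier above can be sketched as follows (illustrative only; the study's exact front end, frame sizes, and model orders are not given in the record). The predictor coefficients are obtained from the autocorrelation sequence via the Levinson-Durbin recursion:

```python
def lpc(signal, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns (predictor coefficients a[1..order], residual error energy)."""
    n = len(signal)
    r = [sum(signal[t] * signal[t + k] for t in range(n - k))
         for k in range(order + 1)]              # autocorrelation lags
    a = [0.0] * (order + 1)                      # a[0] = 1 is implicit
    e = r[0]                                     # prediction error energy
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                              # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)
    return a[1:], e

# A noise-free AR(1) signal x[t] = 0.9 * x[t-1]; the order-1 LPC
# coefficient should recover roughly 0.9.
x = [1.0]
for _ in range(200):
    x.append(0.9 * x[-1])
coeffs, err = lpc(x, 1)
```

In a speech pipeline these coefficients (or derived cepstra) would be computed per windowed frame and fed to the HMMs as observation vectors.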

  6. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are

  7. Predicting Motivation: Computational Models of PFC Can Explain Neural Coding of Motivation and Effort-based Decision-making in Health and Disease.

    Science.gov (United States)

    Vassena, Eliana; Deraeve, James; Alexander, William H

    2017-10-01

    Human behavior is strongly driven by the pursuit of rewards. In daily life, however, benefits mostly come at a cost, often requiring that effort be exerted to obtain potential benefits. Medial PFC (MPFC) and dorsolateral PFC (DLPFC) are frequently implicated in the expectation of effortful control, showing increased activity as a function of predicted task difficulty. Such activity partially overlaps with expectation of reward and has been observed both during decision-making and during task preparation. Recently, novel computational frameworks have been developed to explain activity in these regions during cognitive control, based on the principle of prediction and prediction error (predicted response-outcome [PRO] model [Alexander, W. H., & Brown, J. W. Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience, 14, 1338-1344, 2011], hierarchical error representation [HER] model [Alexander, W. H., & Brown, J. W. Hierarchical error representation: A computational model of anterior cingulate and dorsolateral prefrontal cortex. Neural Computation, 27, 2354-2410, 2015]). Despite the broad explanatory power of these models, it is not clear whether they can also accommodate effects related to the expectation of effort observed in MPFC and DLPFC. Here, we propose a translation of these computational frameworks to the domain of effort-based behavior. First, we discuss how the PRO model, based on prediction error, can explain effort-related activity in MPFC, by reframing effort-based behavior in a predictive context. We propose that MPFC activity reflects monitoring of motivationally relevant variables (such as effort and reward), by coding expectations and discrepancies from such expectations. Moreover, we derive behavioral and neural model-based predictions for healthy controls and clinical populations with impairments of motivation. Second, we illustrate the possible translation to effort-based behavior of the HER model, an extended version of PRO
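The prediction and prediction-error principle underlying the PRO model can be illustrated very schematically (this is a bare delta rule with an invented learning rate and trial structure, not the published model):

```python
# A bare delta-rule sketch of the prediction/prediction-error principle
# the PRO model builds on: a unit learns the expected outcome and emits
# a surprise signal (prediction error) on every trial. Learning rate
# and trial structure are invented for illustration.
def simulate(outcomes, alpha=0.2):
    v = 0.0                  # learned outcome prediction
    errors = []
    for o in outcomes:
        delta = o - v        # prediction error ("surprise")
        errors.append(delta)
        v += alpha * delta   # move the prediction toward the outcome
    return v, errors

v, errs = simulate([1.0] * 20)  # outcome delivered on every trial
# the prediction approaches 1 and the error signal decays toward 0
```

Reframing effort-based behavior in this context amounts to letting the predicted quantity include effort cost as well as reward, so that discrepancies in either dimension generate an error signal.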

  8. Predictive Coding and Multisensory Integration: An Attentional Account of the Multisensory Mind

    Directory of Open Access Journals (Sweden)

    Durk Talsma

    2015-03-01

    Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.

  9. Field-based tests of geochemical modeling codes: New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions

  10. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
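The core spike rule described in the abstract — fire only when a spike improves the representation — can be sketched in one dimension (a toy with invented parameters, far simpler than the published network): the readout decays leakily, a neuron's "voltage" is the projected prediction error, and the spike threshold is exactly the point where spiking reduces the readout error.

```python
# Toy 1-D version of the idea: the readout xhat decays leakily, the
# voltage is the projected prediction error w*(x - xhat), and a spike
# (which kicks xhat by w) fires only when it reduces the squared
# readout error. Parameters are invented for illustration.
def track(signal, w=0.1, dt=0.001, leak=10.0):
    xhat, readout = 0.0, []
    for x in signal:
        # spike iff error reduction: (x-xhat)^2 - (x-xhat-w)^2 > 0,
        # i.e. voltage w*(x - xhat) exceeds the threshold w^2/2
        while w * (x - xhat) > 0.5 * w * w:
            xhat += w
        xhat *= 1.0 - dt * leak        # leaky decay between time steps
        readout.append(xhat)
    return readout

r = track([1.0] * 2000)  # readout hovers around x = 1 within ~w
```

Even this toy shows the signature property: spike timing is dictated by the error signal, so the readout stays within one spike kick of the target rather than averaging over noisy rates.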

  11. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multichannel core stability margins

    International Nuclear Information System (INIS)

    Taleyarkhan, R.P.; McFarlane, A.F.; Lahey, R.T. Jr.; Podowski, M.Z.

    1994-01-01

    The NUFREQ-NP code (G.C. Park et al., NUREG/CR-3375, 1983; S.J. Peng et al., NUREG/CR-4116, 1984; S.J. Peng et al., Nucl. Sci. Eng. 88 (1988) 404-411) was modified and set up at Westinghouse, USA, for mixed fuel type multichannel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF (R. Taleyarkhan et al., J. Heat Transfer 107 (February 1985) 175-181; NUREG/CR-2972, 1983), for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All 13 Peach Bottom-2 EOC-2/3 low flow stability tests (L.A. Carmichael and R.O. Neimi, EPRI NP-564, Project 1020-1, 1978; F.B. Woffinden and R.O. Neimi, EPRI NP-0972, Project 1020-2, 1981) were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. ((orig.))

  12. Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code

    International Nuclear Information System (INIS)

    Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat

    2015-01-01

    The behaviors of impurities are investigated using self-consistent modeling with the 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both the core and edge regions. In these simulations, a combination of the NCLASS neoclassical transport model and the Multi-Mode (MMM95) anomalous transport model is used to compute the core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of a magnetic and flow shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of plasma current, temperature and density profiles is carried out for ITER and DEMO plasmas. As a result, impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)

  13. Two-dimensional steady-state thermal and hydraulic analysis code for prediction of detailed temperature fields around distorted fuel pin in LMFBR assembly: SPOTBOW

    International Nuclear Information System (INIS)

    Shimizu, T.

    1983-01-01

    The SPOTBOW computer program has been developed for predicting detailed temperature and turbulent flow velocity fields around distorted fuel pins in LMFBR fuel assemblies, in which pin-to-pin and pin-to-wrapper-tube contacts may occur. The present study started from the requirement of reactor core designers to evaluate local hot spot temperatures due to the wire contact effect and the pin bowing effect on cladding temperature distribution. The code handles both unbaffled and wire-wrapped pin bundles. The Galerkin method and an iterative procedure were used to solve the basic equations which govern the local heat and momentum transfer in turbulent fluid flow around the distorted pins. Comparisons have been made with cladding temperatures measured in normal and distorted pin bundle mockups to check the validity of the code. Predicted peak temperatures in the vicinity of the wire contact point were somewhat higher than the measured values, and the shape of the peaks agreed well with measurement. The changes of cladding temperature due to the decrease of gap width between a bowing pin and an adjacent pin were predicted well

  14. Improved lossless intra coding for H.264/MPEG-4 AVC.

    Science.gov (United States)

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
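The samplewise DPCM idea can be sketched as follows (a toy horizontal-prediction mode applied to one row of samples; the real standard also uses other prediction directions and block-structured residual entropy coding, and this is not the H.264/AVC syntax):

```python
# Toy samplewise DPCM on one row: each sample is predicted from its
# already-reconstructed left neighbor, and only the residual would be
# entropy-coded. Sketches the idea, not the H.264/AVC bitstream.
def dpcm_encode_row(row, left_ref=128):
    residuals, pred = [], left_ref
    for s in row:
        residuals.append(s - pred)   # residual to entropy-code
        pred = s                     # next prediction = current sample
    return residuals

def dpcm_decode_row(residuals, left_ref=128):
    row, pred = [], left_ref
    for res in residuals:
        pred = pred + res            # reconstruct each sample exactly
        row.append(pred)
    return row

row = [100, 101, 103, 103, 104, 110]
assert dpcm_decode_row(dpcm_encode_row(row)) == row  # lossless round trip
```

The gain comes from the residuals: for smooth image rows they cluster near zero, so the entropy coder spends far fewer bits than on the raw samples, while the round trip remains exactly lossless.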

  15. GAPCON-THERMAL-3 code description

    International Nuclear Information System (INIS)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes

  16. GAPCON-THERMAL-3 code description

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

    GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes.

  17. Fuel behavior modeling using the MARS computer code

    International Nuclear Information System (INIS)

    Faya, S.C.S.; Faya, A.J.G.

    1983-01-01

    The fuel behaviour modeling code MARS was evaluated against experimental data. Two cases were selected: an early commercial PWR rod (Maine Yankee rod) and an experimental rod from the Canadian BWR program (Canadian rod). The MARS predictions are compared with experimental data and with predictions made by other fuel modeling codes. Improvements are suggested for some fuel behaviour models. MARS results are satisfactory based on the data available. (Author) [pt

  18. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    International Nuclear Information System (INIS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-01-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion

  19. Modeling peripheral olfactory coding in Drosophila larvae.

    Directory of Open Access Journals (Sweden)

    Derek J Hoare

    The Drosophila larva possesses just 21 unique and identifiable pairs of olfactory sensory neurons (OSNs), enabling investigation of the contribution of individual OSN classes to the peripheral olfactory code. We combined electrophysiological and computational modeling to explore the nature of the peripheral olfactory code in situ. We recorded firing responses of 19/21 OSNs to a panel of 19 odors. This was achieved by creating larvae expressing just one functioning class of odorant receptor, and hence OSN. Odor response profiles of each OSN class were highly specific and unique. However, many OSN-odor pairs yielded variable responses, some of which were statistically indistinguishable from background activity. We used these electrophysiological data, incorporating both responses and spontaneous firing activity, to develop a Bayesian decoding model of olfactory processing. The model was able to accurately predict odor identity from raw OSN responses; prediction accuracy ranged from 12% to 77% (mean for all odors 45.2%) but was always significantly above chance (5.6%). However, there was no correlation between prediction accuracy for a given odor and the strength of responses of wild-type larvae to the same odor in a behavioral assay. We also used the model to predict the ability of the code to discriminate between pairs of odors. Some of these predictions were supported in a behavioral discrimination (masking) assay but others were not. We conclude that our model of the peripheral code represents basic features of odor detection and discrimination, yielding insights into the information available to higher processing structures in the brain.
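A Bayesian decoder of the kind described can be sketched with a Poisson naive-Bayes classifier (entirely hypothetical: the odor labels and mean rates below are invented for illustration, and the paper's actual model and data are not reproduced):

```python
# Hypothetical sketch of Bayesian odor decoding from OSN responses:
# each OSN class's spike count is modeled as Poisson with an
# odor-specific mean, and the decoder picks the odor with the highest
# posterior under a flat prior. Odor names and rates are invented.
import math

RATES = {                       # mean spike count per OSN class (invented)
    "odor A": [12.0, 2.0, 5.0],
    "odor B": [1.0, 9.0, 3.0],
    "odor C": [4.0, 4.0, 11.0],
}

def decode(counts):
    def log_post(odor):         # Poisson log-likelihood summed over OSNs
        return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
                   for k, lam in zip(counts, RATES[odor]))
    return max(RATES, key=log_post)

best = decode([11, 3, 6])       # strong first-OSN response -> "odor A"
```

Spontaneous firing enters such a model naturally: baseline activity raises the Poisson means for all odors, which is one way variable, near-background responses can still carry decodable information.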

  20. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    International Nuclear Information System (INIS)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions

  1. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  2. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT code, developed for analysis of the long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Also, some typical results of the code predictions for available large scale tests are presented. Test data are available and analyzed for steam generator designs both with and without the cover-gas feature. The capabilities and limitations of the code are then discussed in light of the comparison between the code predictions and the test data

  3. Development of safety analysis codes for light water reactor

    International Nuclear Information System (INIS)

    Akimoto, Masayuki

    1985-01-01

    An overview is presented of the currently used major codes for the prediction of thermohydraulic transients in nuclear power plants. The overview centers on the two-phase fluid dynamics of the coolant system and the assessment of the codes. Some two-phase phenomena, such as phase separation, are still not predicted with engineering accuracy. The MINCS-PIPE code is briefly introduced; it is intended to assess constitutive relations and to aid the development of various experimental correlations for the 1V1T to 2V2T models. (author)

  4. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code THYC

    Energy Technology Data Exchange (ETDEWEB)

    Banner, D

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models were implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented in order of growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model the generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt has been made to substitute another statistical approach (a pseudo-cubic thin-plate spline method) for the correlations. It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs.

  5. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  6. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    International Nuclear Information System (INIS)

    Aounallah, Y.; Coddington, P.; Gantner, U.

    2000-01-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-03 user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)

  7. Assessment of void fraction prediction using the RETRAN-3d and CORETRAN-01/VIPRE-02 codes

    Energy Technology Data Exchange (ETDEWEB)

    Aounallah, Y.; Coddington, P.; Gantner, U

    2000-07-01

    A review of wide-range void fraction correlations against an extensive database has been undertaken to identify the correlations best suited for nuclear safety applications. Only those based on the drift-flux model have been considered. The survey confirmed the application range of the Chexal-Lellouche correlation, and the database was also used to obtain new parameters for the Inoue drift-flux correlation, which was also found suitable. A void fraction validation study has also been undertaken for the codes RETRAN-3D and CORETRAN-01/VIPRE-02 at the assembly and sub-assembly levels. The study showed the impact of the RETRAN-03 user options on the predicted void fraction, and the RETRAN-3D limitation at very low fluid velocity. At the sub-assembly level, CORETRAN-01/VIPRE-02 substantially underestimates the void in regions with low power-to-flow ratios. Otherwise, a generally good predictive performance was obtained with both RETRAN-3D and CORETRAN-01/VIPRE-02. (authors)
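
    Both records above rest on the drift-flux formulation; a minimal sketch of the Zuber-Findlay relation underlying such correlations (the C0 and Vgj values below are illustrative, not the fitted Inoue or Chexal-Lellouche parameters):

```python
def drift_flux_void_fraction(jg, jf, c0, vgj):
    """Void fraction from the Zuber-Findlay drift-flux relation:
    alpha = jg / (C0 * (jg + jf) + Vgj),
    where jg and jf are gas and liquid superficial velocities (m/s),
    C0 is the distribution parameter and Vgj the drift velocity (m/s)."""
    return jg / (c0 * (jg + jf) + vgj)

# Illustrative conditions only; real correlations make C0 and Vgj
# functions of pressure, flow regime, and geometry.
alpha = drift_flux_void_fraction(jg=0.5, jf=1.0, c0=1.13, vgj=0.25)
```

    Wide-range correlations differ mainly in how C0 and Vgj vary with pressure and flow conditions.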

  8. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  9. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.

  10. Field-based tests of geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1994-06-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will determine how the codes can be used to predict the chemical and mineralogical response of the environment to nuclear waste emplacement. Field-based exercises allow us to test the models on time scales unattainable in the laboratory. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei and Kawerau geothermal fields suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions
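
    The affinity-temperature diagrams mentioned above plot, for each mineral, the thermodynamic affinity of its dissolution reaction against temperature. A hedged sketch of that quantity (the log Q and log K values are hypothetical, and the sign convention may differ from the one used by EQ3/6):

```python
R = 8.314  # gas constant, J/(mol*K)

def chemical_affinity(log_q, log_k, temp_k):
    """Affinity of a dissolution reaction, A = 2.303*R*T*(log K - log Q), J/mol.
    Under this convention: A > 0, fluid undersaturated (mineral dissolves);
    A < 0, supersaturated (mineral tends to precipitate); A = 0, equilibrium.
    Conventions vary between geochemical codes; check before comparing."""
    return 2.303 * R * temp_k * (log_k - log_q)

# Hypothetical ion-activity product Q and equilibrium constant K at ~250 C.
affinity = chemical_affinity(log_q=-10.5, log_k=-10.0, temp_k=523.0)
```

    Plotting A against T for each candidate mineral shows which assemblages are plausible despite uncertainty in the underlying log K data.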

  11. Assessment of one dimensional reflood model in REFLA/TRAC code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-12-01

    Post-test calculations for twelve selected SSRTF, SCTF and CCTF tests were performed to assess the predictive capability of the one-dimensional reflood model in the REFLA/TRAC code for core thermal behavior during reflood in a PWR LOCA. Both the core void fraction profile and the clad temperature transients were predicted excellently by the REFLA/TRAC code, including the parametric effects of core inlet subcooling, core flooding rate, core configuration, core power, system pressure, initial clad temperature and so on. The peak clad temperature was predicted within an error of 50 K. Based on these assessment results, it is verified that core thermal-hydraulic behavior during reflood can be predicted excellently with the REFLA/TRAC code under the various conditions where reflood may occur in a PWR LOCA. (author)

  12. A predictive transport modeling code for ICRF-heated tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Hwang, D.Q.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, is presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5

  13. Analytical validation of the CACECO containment analysis code

    International Nuclear Information System (INIS)

    Peak, R.D.

    1979-08-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. This report covers the verification of the CACECO code by problems that can be solved by hand calculations or by reference to textbook and literature examples. The verification concentrates on the accuracy of the material and energy balances maintained by the code and on the independence of the four cells analyzed by the code so that the user can be assured that the code analyses are numerically correct and independent of the organization of the input data submitted to the code

  14. Development of a code MOGRA for predicting the migration of ground additions and its application to various land utilization areas

    International Nuclear Information System (INIS)

    Amano, Hikaru; Uchida, Shigeo; Matsuoka, Syungo; Ikeda, Hiroshi; Hayashi, Hiroko; Kurosawa, Naohiro

    2003-01-01

    A Code MOGRA (Migration Of GRound Additions) is a migration prediction code for toxic ground additions, including radioactive materials, in a terrestrial environment. It consists of computational codes that are applicable to various evaluation target systems and can be used on personal computers. The computational code has a dynamic compartment analysis block at its core and a graphical user interface (GUI) for model formation, computation parameter settings, and results display. The compartments are obtained by classifying various natural environments into groups that exhibit similar properties. The functionality of MOGRA is being verified by applying it in analyses of the migration rates of radioactive substances from the atmosphere to soils and plants and of flow rates into rivers. In this report, a hypothetical combination of land usage was assumed to check the function of MOGRA. The land usage consisted of cultivated lands, forests, uncultivated lands, an urban area, a river, and a lake. Each land usage has its own inside model, which is a basic module. Also assumed was homogeneous contamination of the surface land from atmospheric deposition of ¹³⁷Cs (1.0 Bq/m²). The system analyzed the dynamic changes of ¹³⁷Cs concentrations in each compartment and the fluxes from one compartment to another. (author)
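
    A dynamic compartment block of the kind at MOGRA's core can be sketched as a set of first-order transfer equations; the transfer rates and the simple forward-Euler solver below are illustrative assumptions, not MOGRA's parameters or numerics:

```python
def simulate_compartments(k, init, dt, steps):
    """Dynamic compartment model: dC_i/dt = sum_j k[j][i]*C_j - sum_j k[i][j]*C_i,
    where k[i][j] is the first-order transfer rate (1/day) from compartment i
    to compartment j. Solved with forward Euler for simplicity."""
    n = len(init)
    c = list(init)
    for _ in range(steps):
        flux = [[k[i][j] * c[i] for j in range(n)] for i in range(n)]
        c = [c[i] + dt * (sum(flux[j][i] for j in range(n))
                          - sum(flux[i][j] for j in range(n)))
             for i in range(n)]
    return c

# Invented rates: Cs-137 deposited on soil (0) transfers slowly to
# plants (1) and a river (2); plants return a little to soil.
rates = [[0.0, 0.002, 0.001],
         [0.001, 0.0, 0.0],
         [0.0, 0.0, 0.0]]
final = simulate_compartments(rates, [1.0, 0.0, 0.0], dt=1.0, steps=365)
```

    Because every outflow from one compartment is an inflow to another, the total inventory is conserved, which is a useful sanity check on any compartment solver.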

  15. Quantitative software-reliability analysis of computer codes relevant to nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.

    1981-12-01

    This report presents the results of the first year of an ongoing research program to determine the probability of failure characteristics of computer codes relevant to nuclear safety. An introduction to both qualitative and quantitative aspects of nuclear software is given. A mathematical framework is presented which will enable the a priori prediction of the probability of failure characteristics of a code given the proper specification of its properties. The framework consists of four parts: (1) a classification system for software errors and code failures; (2) probabilistic modeling for selected reliability characteristics; (3) multivariate regression analyses to establish predictive relationships among reliability characteristics and generic code property and development parameters; and (4) the associated information base. Preliminary data of the type needed to support the modeling and the predictions of this program are described. Illustrations of the use of the modeling are given but the results so obtained, as well as all results of code failure probabilities presented herein, are based on data which at this point are preliminary, incomplete, and possibly non-representative of codes relevant to nuclear safety
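
    As a toy illustration of bounding a code's failure probability from test outcomes (a generic binomial sketch, not the report's regression-based framework):

```python
import math

def failure_prob_upper_bound(n_runs, n_failures, confidence=0.95):
    """Upper confidence bound on a per-run failure probability estimated
    from n_runs independent test cases. With zero observed failures this
    reduces to the 'rule of three': p <= -ln(1 - confidence) / n_runs.
    Otherwise a one-sided normal approximation is used."""
    if n_failures == 0:
        return -math.log(1.0 - confidence) / n_runs
    p_hat = n_failures / n_runs
    z = 1.6449  # one-sided 95% normal quantile
    return p_hat + z * math.sqrt(p_hat * (1.0 - p_hat) / n_runs)
```

    For example, 1000 failure-free runs bound the per-run failure probability below about 0.3% at 95% confidence; this shows why very low failure probabilities require very large test bases.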

  16. Does the Holland Code Predict Job Satisfaction and Productivity in Clothing Factory Workers?

    Science.gov (United States)

    Heesacker, Martin; And Others

    1988-01-01

    Administered Self-Directed Search to sewing machine operators to determine Holland code, and assessed work productivity, job satisfaction, absenteeism, and insurance claims. Most workers were of the Social code. Social subjects were the most satisfied, Conventional and Realistic subjects next, and subjects of other codes less so. Productivity of…

  17. A parametric study of MELCOR Accident Consequence Code System 2 (MACCS2) Input Values for the Predicted Health Effect

    International Nuclear Information System (INIS)

    Kim, So Ra; Min, Byung Il; Park, Ki Hyun; Yang, Byung Mo; Suh, Kyung Suk

    2016-01-01

    The MELCOR Accident Consequence Code System 2, MACCS2, is the most widely used off-site consequence analysis code in the world. The MACCS2 code is used to estimate the radionuclide concentrations, radiological doses, health effects, and economic consequences that could result from hypothetical nuclear accidents. Most of the MACCS model parameter values are defined by the user, and those input parameters can make a significant impact on the output. A limited parametric study was performed to identify the relative importance of the values of each input parameter in determining the predicted early and latent health effects in MACCS2. These results are not applicable to every nuclear accident case, because only a limited calculation was performed, with Kori-specific data. The endpoints of the assessment were early and latent cancer risk in the exposed population; parametric studies for other endpoints, such as contamination level, absorbed dose, and economic cost, might therefore produce different results. Accident consequence assessment is important for decision making to minimize the health effect from radiation exposure; accordingly, sufficient parametric studies are required for the various endpoints and input parameters in further research

  18. Simulation of the KAERI PASCAL Test with MARS-KS and TRACE Codes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Won; Cheong, Aeju; Shin, Andong; Cho, Min Ki [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2016-10-15

    In order to validate the operational performance of the PAFS, KAERI has performed an experimental investigation using the PASCAL (PAFS Condensing heat removal Assessment Loop) facility. In this study, we simulated the KAERI PASCAL SS-540-P1 test with the MARS-KS V1.4 and TRACE V5.0 p4 codes to assess the code predictability for the condensation heat transfer inside the passive auxiliary feedwater system. The calculated results of heat flux, inner wall surface temperature of the condensing tube, fluid temperature, and steam mass flow rate are compared with the experimental data. The result shows that MARS-KS generally under-predicts the heat fluxes. TRACE over-predicts the heat flux at the tube inlet region and under-predicts it at the tube outlet region. The TRACE prediction shows a larger amount of steam condensation, by about 3%, than the MARS-KS prediction.

  19. Numerical analysis of reflood simulation based on a mechanistic, best-estimate, approach by KREWET code

    International Nuclear Information System (INIS)

    Chun, Moon-Hyun; Jeong, Eun-Soo

    1983-01-01

    A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of the existing reflood heat transfer simulation computer code. Sample calculations for temperature histories and heat transfer coefficients are made using the KREWET code and the results are compared with the predictions of REFLUX, QUEN1D, and the PWR-FLECHT data for various conditions. These show favourable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/sec) and high pressure (∼413 kPa), reflood behavior is reasonably well predicted by the KREWET code as well as by the other codes. For low flooding rates (less than ∼4 cm/sec) and low pressure (∼138 kPa), predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work

  20. Numerical analysis for reflood simulation based on a mechanistic, best-estimate, approach by KREWET code

    International Nuclear Information System (INIS)

    Chun, M.-H.; Jeong, E.-S.

    1983-01-01

    A new computer code entitled KREWET has been developed in an effort to improve the accuracy and applicability of the existing reflood heat transfer simulation computer code. Sample calculations for temperature histories and heat transfer coefficients are made using the KREWET code and the results are compared with the predictions of REFLUX, QUEN1D, and the PWR-FLECHT data for various conditions. These show favorable agreement in terms of clad temperature versus time. For high flooding rates (5-15 cm/sec) and high pressure (approx. 413 kPa), reflood behavior is reasonably well predicted by the KREWET code as well as by the other codes. For low flooding rates (less than approx. 4 cm/sec) and low pressure (approx. 138 kPa), predictions show considerable error in evaluating the rewet position versus time. This observation is common to all the codes examined in the present work

  1. Tokamak plasma power balance calculation code (TPC code) outline and operation manual

    International Nuclear Information System (INIS)

    Fujieda, Hirobumi; Murakami, Yoshiki; Sugihara, Masayoshi.

    1992-11-01

    This report is a detailed description of the TPC code, which calculates the power balance of a tokamak plasma according to the ITER guidelines. The TPC code works on a personal computer (Macintosh or J-3100/IBM-PC). Using input data such as the plasma shape, toroidal magnetic field, plasma current, electron temperature, electron density, impurities and heating power, the TPC code can determine the operation point of the fusion reactor (the ion temperature is assumed to be equal to the electron temperature). Supplied flux (Volt · sec) and burn time are also estimated from coil design parameters. The calculated energy confinement time is compared with various L-mode scaling laws and the confinement enhancement factor (H-factor) is evaluated. Divertor heat load is predicted by using simple scaling models (constant-χ, Bohm-type-χ and JT-60U empirical scaling models). Frequently used data can be stored in a 'device file' and used as the default values. The TPC code can generate 2-D mesh data, and the POPCON plot is drawn by a contour line plotting program (CONPLT). The operation manual for the CONPLT code is also described. (author)
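
    The H-factor evaluation described above divides the calculated confinement time by an L-mode scaling law; a sketch against the ITER89-P form (the coefficient and exponents are quoted from memory and should be checked against the ITER guidelines before use):

```python
def tau_iter89p(ip_ma, r_m, a_m, kappa, n20, b_t, mass_amu, p_mw):
    """ITER89-P L-mode energy confinement scaling (seconds), as commonly
    quoted: tau = 0.048 * Ip^0.85 * R^1.2 * a^0.3 * kappa^0.5
                  * n20^0.1 * B^0.2 * M^0.5 / sqrt(P).
    Ip in MA, R and a in m, n20 in 1e20 m^-3, B in T, P in MW."""
    return (0.048 * ip_ma**0.85 * r_m**1.2 * a_m**0.3 * kappa**0.5
            * n20**0.1 * b_t**0.2 * mass_amu**0.5 / p_mw**0.5)

def h_factor(tau_measured, tau_scaling):
    """Confinement enhancement factor H = tau_E / tau_E(scaling)."""
    return tau_measured / tau_scaling

# Illustrative ITER-like parameters, not an actual TPC operating point.
tau = tau_iter89p(ip_ma=15.0, r_m=6.2, a_m=2.0, kappa=1.7,
                  n20=1.0, b_t=5.3, mass_amu=2.5, p_mw=100.0)
```
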

  2. Positive predictive value of peptic ulcer diagnosis codes in the Danish National Patient Registry.

    Science.gov (United States)

    Viborg, Søren; Søgaard, Kirstine Kobberøe; Jepsen, Peter

    2017-01-01

    Diagnoses of peptic ulcer are registered in the Danish National Patient Registry (DNPR) for administrative as well as research purposes, but it is unknown whether the coding validity depends on the location of the ulcer. To validate the International Classification of Diseases, 10th revision diagnosis codes of peptic ulcer in the DNPR by estimating positive predictive values (PPVs) for gastric and duodenal ulcer diagnoses. We identified all patients registered with a hospital discharge diagnosis of peptic ulcer from Aarhus University Hospital, Denmark, in 1995-2006. Among them, we randomly selected 200 who had an outpatient gastroscopy at the time of ulcer diagnosis. We reviewed the findings from these gastroscopies to confirm the presence of peptic ulcer and its location. We calculated PPVs and corresponding 95% confidence intervals (CIs) of gastric and duodenal ulcer diagnoses, using descriptions from the gastroscopic examinations as standard reference. In total, 182 records (91%) were available for review. The overall PPV of peptic ulcer diagnoses in DNPR was 95.6% (95% CI 91.5-98.1), with PPVs of 90.3% (95% CI 82.4-95.5) for gastric ulcer diagnoses, and 94.4% (95% CI 87.4-98.2) for duodenal ulcer diagnoses. PPVs were constant over time. The PPV of uncomplicated peptic ulcer diagnoses in the DNPR is high, and the location of the ulcers is registered correctly in most cases, indicating that the diagnoses are useful for research purposes.
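
    The headline figure can be reproduced from the counts in the abstract: 174 confirmed of the 182 reviewed records gives the reported overall PPV of 95.6%. The sketch below uses a normal-approximation CI, which is not necessarily the interval method used in the paper (the reported 91.5-98.1 interval suggests an exact method):

```python
import math

def ppv_with_ci(true_pos, total_pos, z=1.96):
    """Positive predictive value TP / (TP + FP) with a normal-approximation
    95% confidence interval, clipped to [0, 1]."""
    ppv = true_pos / total_pos
    se = math.sqrt(ppv * (1.0 - ppv) / total_pos)
    return ppv, max(0.0, ppv - z * se), min(1.0, ppv + z * se)

# 174 of 182 reviewed records confirmed -> the reported overall PPV of 95.6%.
ppv, lo, hi = ppv_with_ci(174, 182)
```
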

  3. Sensitivity analysis of the RESRAD, a dose assessment code

    International Nuclear Information System (INIS)

    Yu, C.; Cheng, J.J.; Zielen, A.J.

    1991-01-01

    The RESRAD code is a pathway analysis code that is designed to calculate radiation doses and derive soil cleanup criteria for the US Department of Energy's environmental restoration and waste management program. The RESRAD code uses various pathway and consumption-rate parameters such as soil properties and food ingestion rates in performing such calculations and derivations. As with any predictive model, the accuracy of the predictions depends on the accuracy of the input parameters. This paper summarizes the results of a sensitivity analysis of RESRAD input parameters. Three methods were used to perform the sensitivity analysis: (1) the Gradient Enhanced Software System (GRESS) sensitivity analysis software package developed at Oak Ridge National Laboratory; (2) direct perturbation of input parameters; and (3) the built-in graphic package that shows parameter sensitivities while the RESRAD code is operational
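
    Method (2), direct perturbation, can be sketched in a few lines; the toy dose model and parameter names below are invented for illustration and are not RESRAD's:

```python
def perturbation_sensitivity(model, params, name, rel_step=0.01):
    """Normalized sensitivity d(ln output)/d(ln param) estimated by directly
    perturbing one input parameter by a small relative step, as in the
    direct-perturbation approach listed above (a generic sketch)."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] *= (1.0 + rel_step)
    return (model(perturbed) - base) / (base * rel_step)

# Toy dose model: dose proportional to soil concentration times ingestion rate.
toy_dose = lambda p: 0.5 * p["soil_conc"] * p["ingestion_rate"]
s = perturbation_sensitivity(
    toy_dose, {"soil_conc": 10.0, "ingestion_rate": 2.0}, "soil_conc")
```

    For a model linear in the parameter, the normalized sensitivity is 1.0, so the toy case doubles as a check on the estimator.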

  4. Fast neutron analysis code SAD1

    International Nuclear Information System (INIS)

    Jung, M.; Ott, C.

    1985-01-01

    A listing and an example of outputs of the M.C. code SAD1 are given here. This code has been used many times to predict fast-neutron responses in hydrogenic materials (in our case, emulsions or plastics) via elastic n,p scattering. It can easily be extended to other such materials and to any kind of incident fast-neutron spectrum

  5. Annotating Diseases Using Human Phenotype Ontology Improves Prediction of Disease-Associated Long Non-coding RNAs.

    Science.gov (United States)

    Le, Duc-Hau; Dao, Lan T M

    2018-05-23

    Recently, many long non-coding RNAs (lncRNAs) have been identified and their biological function has been characterized; however, our understanding of their underlying molecular mechanisms related to disease is still limited. To overcome the limitation in experimentally identifying disease-lncRNA associations, computational methods have been proposed as a powerful tool to predict such associations. These methods are usually based on the similarities between diseases or lncRNAs, since it has been reported that similar diseases are associated with functionally similar lncRNAs. Therefore, prediction performance is highly dependent on how well the similarities can be captured. Previous studies have calculated the similarity between two diseases by mapping each disease exactly to a single Disease Ontology (DO) term and then using a semantic similarity measure to calculate the similarity between them. However, the problem of this approach is that a disease can be described by more than one DO term. Until now, there has been no annotation database of DO terms for diseases, only for genes. In contrast, Human Phenotype Ontology (HPO) is designed to fully annotate human disease phenotypes. Therefore, in this study, we constructed disease similarity networks/matrices using HPO instead of DO. Then, we used these networks/matrices as inputs of two representative machine learning-based and network-based ranking algorithms, that is, regularized least square and heterogeneous graph-based inference, respectively. The results showed that the prediction performance of the two algorithms on HPO-based networks/matrices is better than that on DO-based ones. In addition, our method can predict 11 novel cancer-associated lncRNAs, which are supported by literature evidence. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Evaluation of Yonggwang unit 4 cycle 5 using SPNOVA code

    International Nuclear Information System (INIS)

    Choi, Y. S.; Cha, K. H.; Lee, E. K.; Park, M. K.

    2004-01-01

    A core follow calculation of Yonggwang (YGN) unit 4 cycle 5 is performed to evaluate whether the SPNOVA code is applicable to the Korean standard nuclear power plant (KSNP). The SPNOVA code consists of the BEPREPN and ANC codes, which represent the incore detector and neutronics models, respectively. The SPNOVA core depletion model is compared and verified against ANC depletion results in terms of critical boron concentration (CBC), peaking factor (Fq) and radial power distribution. In YGN4, SPNOVA predicts a CBC about 30 ppm lower than that of ROCS. The Fq and radial power distribution from the SPNOVA calculation are conservatively higher than the ROCS predictions. The SPNOVA results are also compared with measurement data from snapshots and the CECOR core calculation. It is reasonable to accept SPNOVA for analysis of the KSNP. The SPNOVA model for the KSNP will be used to develop the brand-new incore detector of platinum and vanadium

  7. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    Science.gov (United States)

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, the neural networks correctly predicted enantioselectivity and catalytic parameters comparable to those measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224
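
    The Box-Cox step used above to normalize the skewed catalytic parameters has a simple closed form; the Km values below are hypothetical:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform for positive x: (x^lam - 1)/lam, with the
    limiting case ln(x) as lam -> 0. Used to pull skewed parameter
    distributions (e.g. kcat, Km) toward normality before training."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# Hypothetical right-skewed Km values (uM); lam = 0 gives the log transform.
km_values = [2.0, 5.0, 40.0, 300.0]
transformed = [box_cox(v, 0.0) for v in km_values]
```

    In practice the exponent lambda is chosen by maximizing a log-likelihood over the training data rather than fixed in advance.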

  8. Visual communication with retinex coding.

    Science.gov (United States)

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  9. Visual Communication with Retinex Coding

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
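
    The difference-of-Gaussian bandpass filter that approximates retinex coding in the small-signal model above can be sketched in one dimension (the center and surround widths are illustrative):

```python
import math

def dog_kernel(sigma_c, sigma_s, radius):
    """1-D difference-of-Gaussian (center minus surround) kernel. Each
    Gaussian is normalized to unit area, so the DoG sums to ~0: zero DC
    response, which is what suppresses slowly varying irradiance
    (e.g. shadows) while passing edge detail."""
    def gauss(sigma):
        g = [math.exp(-0.5 * (i / sigma) ** 2)
             for i in range(-radius, radius + 1)]
        s = sum(g)
        return [v / s for v in g]
    c, s = gauss(sigma_c), gauss(sigma_s)
    return [ci - si for ci, si in zip(c, s)]

kernel = dog_kernel(sigma_c=1.0, sigma_s=3.0, radius=9)
```

    Convolving a scene with this kernel yields the edge-and-contrast signal from which the Wiener stage then restores the image.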

  10. Fast H.264/AVC FRExt intra coding using belief propagation.

    Science.gov (United States)

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the coding performance of Intra coding significantly exceeds that of previous still image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by adopting a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
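
    The mode-pruning idea can be sketched independently of how the probabilities are estimated (the paper obtains them via belief propagation; the mode probabilities below are invented):

```python
def select_modes(mode_probs, coverage=0.9):
    """Keep the most probable intra-prediction modes until their cumulative
    probability reaches `coverage`; only these are tried in the
    rate-distortion search, trading a small RD loss for fewer evaluations."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        total += p
        if total >= coverage:
            break
    return kept

# Hypothetical probabilities for four H.264 4x4 intra modes.
probs = {"vertical": 0.45, "horizontal": 0.30, "DC": 0.20, "diagonal": 0.05}
reduced = select_modes(probs, coverage=0.9)
```

    Raising `coverage` trades complexity back for compression performance, which is how the complexity control mentioned above can be exposed as a single knob.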

  11. The analysis of thermal-hydraulic models in MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M H; Hur, C; Kim, D K; Cho, H J [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    1996-07-15

    The objective of the present work is to verify the prediction and analysis capability of the MELCOR code for the progression of severe accidents in light water reactors, and also to evaluate the appropriateness of the thermal-hydraulic models used in the MELCOR code. A comparison of experimental results with MELCOR calculations is carried out to achieve the above objective. In particular, the comparison between the CORA-13 experiment and the MELCOR code calculation was performed.

  12. The intercomparison of aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of aerosols onto surfaces within the containment, from which they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way.

  13. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence, their higher pel correlation leads to a greater removal of image redundancy.
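    The predictive-coding scheme mentioned above can be illustrated with the simplest previous-pixel (DPCM) predictor: residuals of correlated data cluster near zero and so entropy-code compactly, while decoding reconstructs the input exactly. This is a generic sketch, not code from the survey.

```python
def predictive_encode(pixels):
    # Previous-pixel (DPCM) predictor: transmit only residuals, which are
    # typically small for correlated image data and therefore compress well
    # under a subsequent entropy coder.
    prev, residuals = 0, []
    for p in pixels:
        residuals.append(p - prev)
        prev = p
    return residuals

def predictive_decode(residuals):
    # Exact inverse: accumulating the residuals reproduces the original
    # pixel values, making the scheme lossless.
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```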

  14. Users' guide to CACECO containment analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Peak, R.D.

    1979-06-01

    The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. The code is included in the National Energy Software Center Library at Argonne National Laboratory as Program No. 762. This users' guide describes the CACECO code and its data input requirements. The code description covers the many mathematical models used and the approximations used in their solution. The descriptions are detailed to the extent that the user can modify the code to suit his unique needs, and, indeed, the reader is urged to consider code modification acceptable.

  15. ISODEP, A Fuel Depletion Analysis Code for Predicting Isotopic ...

    African Journals Online (AJOL)

    The trend of results was found to be consistent with those obtained by analytical and other numerical methods. Discovery and Innovation Vol. 13 no. 3/4 December (2001) pp. 184-195. KEY WORDS: depletion analysis, code, research reactor, simultaneous equations, decay of nuclides, radionuclides, isotope. Résumé

  16. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OpenMP. The code is written in Fortran 90 and has some user-friendly features such as a graphical user interface. MATRA was approved by the Korean regulatory body for design calculations of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it can potentially be used for best-estimate calculation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided within the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement at a 5x5 PWR test bundle within the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach yielded similar uncertainties but did not account for the nonlinear effects on the
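    The Monte-Carlo approach mentioned above can be sketched generically: perturb the uncertain model inputs, rerun the model, and report the sample standard deviation of the predicted quantity (here, a void fraction) as the validation standard uncertainty. Everything below is an illustrative stand-in, not the MATRA implementation.

```python
import random
import statistics

def validation_std_uncertainty(simulate, nominal_inputs, input_sd, n=2000, seed=1):
    # Monte-Carlo propagation: sample Gaussian perturbations of the model
    # inputs, rerun the model for each sample, and take the sample standard
    # deviation of the predictions as the validation standard uncertainty.
    rng = random.Random(seed)
    samples = [
        simulate([x + rng.gauss(0.0, s) for x, s in zip(nominal_inputs, input_sd)])
        for _ in range(n)
    ]
    return statistics.stdev(samples)
```

    With a linear model, the propagated uncertainty reduces to the usual root-sum-square of the input contributions, which is a quick sanity check for the sampler.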

  17. Radiation transport phenomena and modeling - part A: Codes

    International Nuclear Information System (INIS)

    Lorence, L.J.

    1997-01-01

    The need to understand how particle radiation (high-energy photons and electrons) from a variety of sources affects materials and electronics has motivated the development of sophisticated computer codes that describe how radiation with energies from 1.0 keV to 100.0 GeV propagates through matter. Predicting radiation transport is the necessary first step in predicting radiation effects. The radiation transport codes that are described here are general-purpose codes capable of analyzing a variety of radiation environments including those produced by nuclear weapons (x-rays, gamma rays, and neutrons), by sources in space (electrons and ions) and by accelerators (x-rays, gamma rays, and electrons). Applications of these codes include the study of radiation effects on electronics, nuclear medicine (imaging and cancer treatment), and industrial processes (food disinfestation, waste sterilization, manufacturing.) The primary focus will be on coupled electron-photon transport codes, with some brief discussion of proton transport. These codes model a radiation cascade in which electrons produce photons and vice versa. This coupling between particles of different types is important for radiation effects. For instance, in an x-ray environment, electrons are produced that drive the response in electronics. In an electron environment, dose due to bremsstrahlung photons can be significant once the source electrons have been stopped

  18. PopCORN: Hunting down the differences between binary population synthesis codes

    Science.gov (United States)

    Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.

    2014-02-01

    Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. 
Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough

  19. Prediction Capability of SPACE Code about the Loop Seal Clearing on ATLAS SBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Sung Won; Lee, Jong Hyuk; Chung, Bub Dong; Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The most probable break size for loop seal reforming was determined to be 4 inches by pre-calculations conducted with RELAP5 and MARS. Many organizations have participated with various system analysis codes, for example RELAP5, MARS, and TRACE; KAERI participated with the SPACE code. The SPACE code has been developed for use in the design and safety analysis of nuclear thermal-hydraulic systems. KHNP and other organizations have collaborated on it during the last 10 years, and it is currently under certification procedures. SPACE has the capability to analyze the droplet field with a full governing equation set: continuity, momentum, and energy. The SPACE code has participated in the PKL-3 benchmark program as an international activity, and the DSP-04 benchmark problem is an application of SPACE among the domestic activities. The cold leg top slot break accident of the APR1400 reactor has been modeled and surveyed with the SPACE code. The benchmark experiment of the DSP-04 program has been performed with the ATLAS facility. The break size has been selected as 4 inches in APR1400, and the corresponding scaled-down break size has been modeled in the SPACE code. Loop seal reforming occurred at all 4 loops, but the peak cladding temperature (PCT) showed no significant behavior.

  20. MVP utilization for PWR design code

    International Nuclear Information System (INIS)

    Matsumoto, Hideki; Tahara, Yoshihisa

    2001-01-01

    MHI is studying a method for spatially dependent resonance cross sections so as to predict the power distribution in a fuel pellet accurately. For this purpose, the multiband method and the Stoker/Weiss method were implemented in the 2-dimensional transport code PHOENIX-P, and the methods were validated by comparison with the MVP code. Although no appropriate reference could be obtained from deterministic codes for the resonance cross-section study, results from the Monte Carlo code MVP are now available and useful as a reference. It is shown here how MVP is used to develop the multiband method and the Stoker/Weiss method, and how effective the MVP results are in the study of resonance cross sections. (author)

  1. FILM-30: A Heat Transfer Properties Code for Water Coolant

    International Nuclear Information System (INIS)

    MARSHALL, THERON D.

    2001-01-01

    A FORTRAN computer code has been written to calculate the heat transfer properties at the wetted perimeter of a coolant channel when provided with the bulk water conditions. This computer code, titled FILM-30, calculates heat transfer properties using the following correlations: (1) Sieder-Tate: forced convection, (2) Bergles-Rohsenow: onset of nucleate boiling, (3) Bergles-Rohsenow: partially developed nucleate boiling, (4) Araki: fully developed nucleate boiling, (5) Tong-75: critical heat flux (CHF), and (6) Marshall-98: transition boiling. FILM-30 produces output files that provide the heat flux and heat transfer coefficient at the wetted perimeter as a function of temperature. To validate FILM-30, the calculated heat transfer properties were used in finite element analyses to predict internal temperatures for a water-cooled copper mockup under one-sided heating from a rastered electron beam. These predicted temperatures were compared with the measured temperatures from the author's 1994 and 1998 heat transfer experiments. There was excellent agreement between the predicted and experimentally measured temperatures, which confirmed the accuracy of FILM-30 within the experimental range of the tests. FILM-30 can accurately predict the CHF and transition boiling regimes, which is an important advantage over current heat transfer codes. Consequently, FILM-30 is ideal for predicting heat transfer properties for applications that feature high heat fluxes produced by one-sided heating.
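    Correlation (1), Sieder-Tate, has the standard textbook form Nu = 0.027 Re^0.8 Pr^(1/3) (mu_b/mu_w)^0.14; the sketch below shows how such a correlation yields a heat transfer coefficient. The coefficients are the textbook values and may not match FILM-30's exact implementation.

```python
def sieder_tate_nu(re, pr, mu_bulk, mu_wall):
    # Sieder-Tate forced-convection correlation (textbook form):
    #   Nu = 0.027 * Re^0.8 * Pr^(1/3) * (mu_b / mu_w)^0.14
    # The viscosity ratio corrects for property variation at the heated wall.
    return 0.027 * re**0.8 * pr**(1.0 / 3.0) * (mu_bulk / mu_wall)**0.14

def htc_from_nu(nu, k_fluid, d_hydraulic):
    # Heat transfer coefficient from the Nusselt number: h = Nu * k / D_h.
    return nu * k_fluid / d_hydraulic
```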

  2. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    International Nuclear Information System (INIS)

    Banner, D.

    1993-10-01

    The objective of this paper is to point out how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models have been implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented by growing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model generation and decay of turbulence in the DNB test section. Model C is obtained by representing oriented transverse flows due to mixing vanes in addition to the k-L equation. A parametric study carried out with the three mixing models exhibits the most significant parameters. The occurrence of departure from nucleate boiling is then predicted by using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt to substitute correlations by another statistical approach (pseudo-cubic thin-plate type Spline method) has been done. It is then shown that standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs

  3. Reaction path of energetic materials using THOR code

    Science.gov (United States)

    Durães, L.; Campos, J.; Portugal, A.

    1998-07-01

    The method of predicting the reaction path using the THOR code allows, for isobaric and isochoric adiabatic combustion and CJ detonation regimes, the calculation of the composition and thermodynamic properties of the reaction products of energetic materials. The THOR code assumes thermodynamic equilibrium of all possible products, at the minimum Gibbs free energy, using the HL EoS. The code allows various sets of reaction products to be estimated, obtained successively by the decomposition of the original reacting compound, as a function of the released energy. Two case studies of the thermal decomposition procedure were selected, calculated, and discussed: pure ammonium nitrate and the explosive ANFO based on it, and nitromethane, chosen because their equivalence ratios are, respectively, lower than, near, and greater than stoichiometric. Predictions of the reaction path are in good correlation with experimental values, supporting the validity of the proposed method.

  4. CBP Phase I Code Integration

    International Nuclear Information System (INIS)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-01-01

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown and Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown and Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface

  5. CBP PHASE I CODE INTEGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface was

  6. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-01-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  7. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    Energy Technology Data Exchange (ETDEWEB)

    Banas, A.O.; Carver, M.B. [Chalk River Laboratories (Canada); Unrau, D. [Univ. of Toronto (Canada)

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the "standard" κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered as the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  8. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this packages provides...

  9. Development of a tracer transport option for the NAPSAC fracture network computer code

    International Nuclear Information System (INIS)

    Herbert, A.W.

    1990-06-01

    The NAPSAC computer code predicts groundwater flow through fractured rock using a direct fracture network approach. This paper describes the development of a tracer transport algorithm for the NAPSAC code. A very efficient particle-following approach is used, enabling tracer transport to be predicted through large fracture networks. The new algorithm is tested against three test examples. These demonstrations confirm the accuracy of the code for simple networks, where there is an analytical solution to the transport problem, and illustrate the use of the computer code on a more realistic problem. (author)

  10. A CFD code comparison of wind turbine wakes

    DEFF Research Database (Denmark)

    van der Laan, Paul Maarten; Storey, R. C.; Sørensen, Niels N.

    2014-01-01

    A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds-stresses fo...

  11. User effects on the transient system code calculations. Final report

    International Nuclear Information System (INIS)

    Aksan, S.N.; D'Auria, F.

    1995-01-01

    Large thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and to optimize operational procedures and the plant design itself. Evaluation of the capabilities of these codes is accomplished by comparing the code predictions with measured experimental data obtained from various types of separate-effects and integral test facilities. In recent years, some attempts have been made to establish methodologies to evaluate the accuracy and the uncertainty of code predictions, and consequently to judge the acceptability of the codes. In none of these methodologies has the influence of the code user on the calculated results been directly addressed. In this paper, the results of investigations on user effects for thermal-hydraulic transient system codes are presented and discussed on the basis of some case studies. The general findings show that, in addition to user effects, there are other reasons that affect the results of the calculations and that are hidden under user effects. Both the hidden factors and the direct user effects are discussed in detail, and general recommendations and conclusions are presented to control and limit them.

  12. Qualification of ARROTTA code for LWR accident analysis

    International Nuclear Information System (INIS)

    Huang, P.-H.; Peng, K.Y.; Lin, W.-C.; Wu, J.-Y.

    2004-01-01

    This paper presents the qualification efforts performed by TPC and INER for the 3-D spatial kinetics code ARROTTA for LWR core transient analysis. TPC and INER started a joint 5 year project in 1989 to establish independent capabilities to perform reload design and transient analysis utilizing state-of-the-art computer programs. As part of the effort, the ARROTTA code was chosen to perform multi-dimensional kinetics calculations such as rod ejection for PWR and rod drop for BWR. To qualify ARROTTA for analysis of FSAR licensing basis core transients, ARROTTA has been benchmarked for the static core analysis against plant measured data and SIMULATE-3 predictions, and for the kinetic analysis against available benchmark problems. The static calculations compared include critical boron concentration, core power distribution, and control rod worth. The results indicated that ARROTTA predictions match very well with plant measured data and SIMULATE-3 predictions. The kinetic benchmark problems validated include NEACRP rod ejection problem, 3-D LMW LWR rod withdrawal/insertion problem, and 3-D LRA BWR transient benchmark problem. The results indicate that ARROTTA's accuracy and stability are excellent as compared to other space-time kinetics codes. It is therefore concluded that ARROTTA provides accurate predictions for multi-dimensional core transient for LWRs. (author)

  13. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    This paper describes the application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed. (Auth.)

  14. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed

  15. Benchmark Simulation for the Development of the Regulatory Audit Subchannel Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, G. H.; Song, C.; Woo, S. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-05-15

    For the safe and reliable operation of a reactor, it is important to accurately predict the flow and temperature distributions in the thermal-hydraulic design of a reactor core. A subchannel approach can give reasonable flow and temperature distributions with a short computing time. The Korea Institute of Nuclear Safety (KINS) is presently reviewing a new subchannel code, THALES, which will substitute for both the THINC-IV and TORC codes. To assess the prediction performance of THALES, KINS is developing a subchannel analysis code for independent audit calculations. The code is based on the workstation version of COBRA-IV-I. The main objective of the present study is to assess the performance of the COBRA-IV-I code by comparing its simulation results with experimental ones for the sample problems.

  16. Truncation Depth Rule-of-Thumb for Convolutional Codes

    Science.gov (United States)

    Moision, Bruce

    2009-01-01

    In this innovation, it is shown that a commonly used rule of thumb (that the truncation depth of a convolutional code should be five times the memory length, m, of the code) is accurate only for rate 1/2 codes. In fact, the truncation depth should be 2.5 m/(1 - r), where r is the code rate. The accuracy of this new rule is demonstrated by tabulating the distance properties of a large set of known codes. This new rule was derived by bounding the losses due to truncation as a function of the code rate. With regard to particular codes, a good indicator of the required truncation depth is the path length at which all paths that diverge from a particular path have accumulated the minimum distance of the code. It is shown that the new rule of thumb provides an accurate prediction of this depth for codes of varying rates.
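    The revised rule is simple enough to state directly in code; a one-function sketch (the function name is illustrative):

```python
def truncation_depth(memory_length, rate):
    # Revised rule of thumb from the abstract: depth = 2.5 * m / (1 - r).
    # For a rate-1/2 code this reduces to the classic 5 * m.
    return 2.5 * memory_length / (1.0 - rate)
```

    For rate 1/2 the rule recovers the familiar five-memory-lengths figure, while for higher-rate codes the required depth grows rapidly as the rate approaches 1.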

  17. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-15

    Deformations of the fuel element or fuel channel can be a main cause of fuel failure. Therefore, accurate prediction of the deformation and the related analysis capabilities are closely tied to increasing the safety margin of the reactor. In this report, among the performance analysis and transient behavior prediction computer codes, the deformation analysis codes ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced, and each code's objectives, applicability, and relations are explained. In particular, the user manual for the ELOCA code, which analyzes fuel deformation and the release of fission products during the transient period after postulated accidents, is provided so that it can guide potential users of the code and save time and economic loss by reducing trial and error.

  18. A Deformation Analysis Code of CANDU Fuel under the Postulated Accident: ELOCA

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jung, Jong Yeob

    2006-11-01

    Deformations of the fuel element or fuel channel can be a main cause of fuel failure. Therefore, accurate prediction of the deformation and the related analysis capabilities are closely tied to increasing the safety margin of the reactor. In this report, among the performance analysis and transient behavior prediction computer codes, the deformation analysis codes ELOCA, HOTSPOT, CONTACT-1, and PTDFORM are briefly introduced, and each code's objectives, applicability, and relations are explained. In particular, the user manual for the ELOCA code, which analyzes fuel deformation and the release of fission products during the transient period after postulated accidents, is provided so that it can guide potential users of the code and save time and economic loss by reducing trial and error.

  19. A computer code for the prediction of mill gases and hot air distribution between burners sections as input parameters for 3D CFD furnace calculation

    International Nuclear Information System (INIS)

    Tucakovic, Dragan; Zivanovic, Titoslav; Beloshevic, Srdjan

    2006-01-01

    Current computer technology development enables the application of powerful software packages that can provide a reliable insight into the real operating conditions of a steam boiler in a thermal power plant. Namely, the application of a CFD code to the 3D analysis of combustion and heat transfer in a furnace provides temperature, velocity and concentration fields in both cross-sectional and longitudinal planes of the observed furnace. In order to obtain reliable analytical results which correspond to real furnace conditions, it is necessary to accurately predict the distribution of mill gases and hot air between the burners' sections, because these parameters are input values for the 3D furnace calculation. Regarding these tasks, a computer code for the prediction of the mill gases and hot air distribution has been developed at the Department for Steam Boilers of the Faculty of Mechanical Engineering in Belgrade. The code is based on simultaneous calculations of material and heat balances for the fan mill and air tracts. The aim of this paper is to present the methodology of the performed calculations and the results obtained for the steam boiler furnace of a 350 MWe thermal power plant equipped with eight fan mills. Key words: mill gases, hot air, aerodynamic calculation, air tract, mill tract.
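    The abstract does not give the balance equations themselves; as a generic illustration of how a flow is apportioned among parallel burner sections in an aerodynamic calculation, one can assume a quadratic pressure-drop law dp = R*Q^2 shared by all parallel paths (the resistances and total flow below are made-up numbers, not plant data):

```python
def distribute_flows(total_flow, section_resistances):
    """Split a total flow among parallel burner sections assuming each path
    sees the same pressure drop with dp = R * Q**2, so Q ~ 1/sqrt(R)."""
    weights = [r ** -0.5 for r in section_resistances]
    total_w = sum(weights)
    return [total_flow * w / total_w for w in weights]

# Hypothetical: four sections, two with four times the aerodynamic resistance.
flows = distribute_flows(100.0, [1.0, 1.0, 4.0, 4.0])
assert abs(sum(flows) - 100.0) < 1e-9         # material balance closes
assert abs(flows[0] - 2.0 * flows[2]) < 1e-9  # 4x resistance -> half the flow
```

A real mill/air-tract calculation would iterate such a split together with heat balances until pressures and temperatures converge.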

  20. MARS-KS code validation activity through the atlas domestic standard problem

    International Nuclear Information System (INIS)

    Choi, K. Y.; Kim, Y. S.; Kang, K. H.; Park, H. S.; Cho, S.

    2012-01-01

    The 2nd Domestic Standard Problem (DSP-02) exercise using the ATLAS integral effect test data was executed to transfer the integral effect test data to domestic nuclear industries and to contribute to improving the safety analysis methodology for PWRs. A small break loss of coolant accident with a 6-inch break at the cold leg was selected as the target scenario, considering its technical importance and the interests of the participants. Ten calculation results using the MARS-KS code were collected; the major prediction results were described qualitatively, and the code prediction accuracy was assessed quantitatively using the FFTBM. In addition, special code assessment activities were carried out to identify areas where model improvement is required in the MARS-KS code. The lessons from DSP-02 and recommendations to the code developers are described in this paper. (authors)
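    The FFTBM mentioned above quantifies prediction accuracy through an average amplitude, conventionally defined as the ratio of the summed FFT magnitudes of the prediction error to those of the experimental signal. A minimal sketch (the exponential "measurement" is a stand-in, not ATLAS data):

```python
import numpy as np

def fftbm_average_amplitude(exp, calc):
    """Average amplitude AA = sum|FFT(calc - exp)| / sum|FFT(exp)|.
    AA = 0 for a perfect prediction; smaller is better."""
    err = np.abs(np.fft.rfft(calc - exp)).sum()
    ref = np.abs(np.fft.rfft(exp)).sum()
    return float(err / ref)

t = np.linspace(0.0, 10.0, 512)
measured = np.exp(-0.30 * t)       # stand-in "experimental" pressure decay
predicted = np.exp(-0.35 * t)      # a slightly too-fast predicted decay

assert fftbm_average_amplitude(measured, measured) == 0.0
assert 0.0 < fftbm_average_amplitude(measured, predicted) < 1.0
```

Because the measure works in the frequency domain, it is insensitive to where in time the discrepancy occurs, which is why it is used for overall accuracy ranking across participants.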

  1. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates the probability of that block, and the decoded image block can accurately recover the encoded image according to the code-book information. The adopted adaptive arithmetic coding algorithm greatly improves the image compression rate, and the results show that it is an effective compression technology.
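    The adaptive probability model described above can be illustrated with simple per-symbol counts. The sketch below computes the ideal (entropy) code length an adaptive arithmetic coder would approach; it is a generic illustration, not the paper's actual implementation:

```python
import math

class AdaptiveModel:
    """Per-symbol frequency counts, updated as symbols are coded, so the
    probability estimate adapts to each image block's statistics."""
    def __init__(self, nsymbols: int):
        self.counts = [1] * nsymbols  # add-one (Laplace) initialization

    def prob(self, s: int) -> float:
        return self.counts[s] / sum(self.counts)

    def update(self, s: int) -> None:
        self.counts[s] += 1

def code_length_bits(symbols, nsymbols: int = 256) -> float:
    """Ideal adaptive code length: -sum(log2 p(s)), updating the model
    after each symbol, as an arithmetic coder would approach."""
    model = AdaptiveModel(nsymbols)
    bits = 0.0
    for s in symbols:
        bits -= math.log2(model.prob(s))
        model.update(s)
    return bits

# A highly redundant block codes in far fewer bits than the raw 8 per symbol:
assert code_length_bits([0] * 100) < 800
```

The decoder maintains the identical model and updates it with each decoded symbol, which is why no probability table needs to be transmitted.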

  2. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  3. User Effect on Code Application and Qualification Needs

    International Nuclear Information System (INIS)

    D'Auria, F.; Salah, A.B.

    2008-01-01

    Experience with some code assessment case studies and additional ISPs has shown the dominant effect of the code user on the predicted system behavior. The general findings of the user-effect investigations on some of the case studies indicate that, in addition to user effects, other causes influence the results of the calculations and are hidden under the general title of user effects: the specific characteristics of experimental facilities, i.e. their limitations as far as code assessment is concerned; limitations of the thermal-hydraulic codes used to simulate certain system behavior or phenomena; and limitations arising from the code user's interpretation of the experimental data base. On the basis of the discussions in this paper, the following conclusions and recommendations can be made: More dialogue appears to be necessary with the experimenters in the planning of code assessment calculations, e.g. ISPs; user guidelines for the codes are not complete, and a lack of sufficiently detailed user guidelines is observed in some of the case studies; more extensive user instruction and training, improved user guidelines, or quality assurance procedures may partially reduce the subjective user influence on the calculated results; the discrepancies between experimental data and code predictions are due both to intrinsic code limits and to the so-called user effects. There is a worthwhile need to quantify the percentage of disagreement due to poor utilization of the code and that due to the code itself. This need arises especially for uncertainty evaluation studies (e.g. [18]), which do not take the mentioned user effects into account. A more focused investigation, based on the results of comparison calculations, e.g. ISPs, analyzing the experimental data and the results of the specific code in order to evaluate the user effects and the related experimental aspects, should be an integral part of the

  4. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • State-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function of linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP.

  5. Development of CAP code for nuclear power plant containment: Lumped model

    International Nuclear Information System (INIS)

    Hong, Soon Joon; Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul; Ha, Sang Jun

    2015-01-01

    Highlights: • State-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function of linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed in the Korean nuclear society for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identifications. The assessments showed appropriate prediction capabilities of CAP

  6. Development and Application of a Code for Internal Exposure (CINEX) based on the CINDY code

    International Nuclear Information System (INIS)

    Kravchik, T.; Duchan, N.; Sarah, R.; Gabay, Y.; Kol, R.

    2004-01-01

    Internal exposure to radioactive materials at the NRCN is evaluated using the CINDY (Code for Internal Dosimetry) package. The code was developed by the Pacific Northwest Laboratory to assist in the interpretation of bioassay data, provide bioassay projections, and evaluate committed and calendar-year doses from intake or bioassay measurement data. It provides capabilities to calculate organ doses and effective dose equivalents using the International Commission on Radiological Protection (ICRP) 30 approach. The CINDY code operates under the DOS operating system, so its operation requires a relatively long procedure involving a lot of manual typing, which can lead to human error. A new code has been developed at the NRCN, CINEX (Code for Internal Exposure), which is an Excel application and leads to a significant reduction in calculation time (on the order of 5-10 times) and in the risk of human error. The code uses a database of tables, constructed with CINDY, that contain the bioassay values predicted by the ICRP 30 model after an intake of a unit activity of each isotope. Using the database, the code then calculates the appropriate intake and consequently the committed effective dose and organ doses. Calculations with the CINEX code were compared to similar calculations with the CINDY code. The discrepancies were less than 5%, which is the rounding error of the CINDY code. Attached is a table which compares parameters calculated with the CINEX and CINDY codes (for a class Y uranium). CINEX is now used at the NRCN to calculate occupational intakes and doses to workers with radioactive materials
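    The intake-and-dose arithmetic that both codes automate reduces to two steps: divide the measured bioassay value by the model-predicted value per unit intake, then multiply the intake by a dose coefficient. The numbers below are made up for illustration, not NRCN measurements or ICRP coefficients:

```python
def estimate_intake(measured_bq, predicted_bq_per_unit_intake):
    """Intake (Bq) = measured bioassay value divided by the model-predicted
    bioassay value per unit intake at the same time after intake."""
    return measured_bq / predicted_bq_per_unit_intake

def committed_effective_dose(intake_bq, dose_coeff_sv_per_bq):
    """Committed effective dose (Sv) = intake times the dose coefficient."""
    return intake_bq * dose_coeff_sv_per_bq

# Hypothetical numbers:
intake = estimate_intake(5.0, 2.5e-3)            # 2000.0 Bq
dose = committed_effective_dose(intake, 1.0e-6)  # 2.0e-3 Sv (2 mSv)
assert abs(intake - 2000.0) < 1e-9
assert abs(dose - 2.0e-3) < 1e-12
```

Storing the predicted per-unit-intake values in lookup tables, as CINEX does, turns the whole evaluation into a single table lookup and two arithmetic operations.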

  7. Practical Design of Delta-Sigma Multiple Description Audio Coding

    DEFF Research Database (Denmark)

    Leegaard, Jack Højholt; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    It was recently shown that delta-sigma quantization (DSQ) can be used for optimal multiple description (MD) coding of Gaussian sources. The DSQ scheme combined oversampling, prediction, and noise-shaping in order to trade off side distortion for central distortion in MD coding. It was shown that ...

  8. Dual Coding in Children.

    Science.gov (United States)

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  9. Validating the BISON fuel performance code to integral LWR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gamble, K.A., E-mail: Kyle.Gamble@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Pastore, G., E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gardner, R.J., E-mail: Russell.Gardner@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Liu, W., E-mail: Wenfeng.Liu@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States); Mai, A., E-mail: Anh.Mai@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States)

    2016-05-15

    Highlights: • The BISON multidimensional fuel performance code is being validated to integral LWR experiments. • Code and solution verification are necessary prerequisites to validation. • Fuel centerline temperature comparisons through all phases of fuel life are very reasonable. • Accuracy in predicting fission gas release is consistent with state-of-the-art modeling and the involved uncertainties. • Rod diameter comparisons are not satisfactory and further investigation is underway. - Abstract: BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Results demonstrate that (1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments, (2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties and (3) comparison

  10. TRAC-PF1 code verification with data from the OTIS test facility

    International Nuclear Information System (INIS)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small break LOCA data set for TRAC verification. The major phases of a small break LOCA observed in the OTIS tests included pressurizer draining and loop saturation, intermittent reactor coolant system circulation, boiler-condenser mode, and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer

  11. TRAC-PF1 code verification with data from the OTIS test facility

    International Nuclear Information System (INIS)

    Childerson, M.T.; Fujita, R.K.

    1985-01-01

    A computer code (TRAC-PF1/MOD1; denoted as TRAC) developed for predicting the transient thermal and hydraulic response of an integral nuclear steam supply system (NSSS) was benchmarked. Post-small-break loss-of-coolant accident (LOCA) data from a scaled experimental facility, designated the Once-Through Integral System (OTIS), were obtained for the Babcock and Wilcox NSSS and compared to TRAC predictions. The OTIS tests provided a challenging small break LOCA data set for TRAC verification. The major phases of a small break LOCA observed in the OTIS tests included pressurizer draining and saturation, intermittent reactor coolant system circulation, boiler-condenser mode and the initial stages of refill. The TRAC code was successful in predicting OTIS loop conditions (system pressures and temperatures) after modification of the steam generator model. In particular, the code predicted both pool- and auxiliary-feedwater-initiated boiler-condenser mode heat transfer

  12. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Full Text Available Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
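    The information-theoretic comparison of phase and power coding can be sketched with a plug-in mutual-information estimate on discretized signals. The synthetic "EEG" below is simulated for illustration, not the observers' data, and the perfect phase-stimulus coupling is an extreme assumption:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in mutual information (bits) between two discrete arrays,
    estimated from their joint histogram."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px = joint.sum(axis=1, keepdims=True)   # marginal of x
    py = joint.sum(axis=0, keepdims=True)   # marginal of y
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
stim = rng.integers(0, 2, 4000)          # binary stimulus feature per trial
phase_bin = stim.copy()                  # binned phase that tracks the stimulus
power_bin = rng.integers(0, 2, 4000)     # binned power unrelated to it

# Phase carries ~1 bit about the stimulus; power carries ~0 bits:
assert mutual_information(phase_bin, stim) > 0.9
assert mutual_information(power_bin, stim) < 0.05
```

The same estimator, applied per frequency band, is the kind of quantity that lets one say phase codes some multiple of the information that power does.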

  13. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    Energy Technology Data Exchange (ETDEWEB)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Ishikawa, Hirohiko

    1995-09-01

    A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information) aims at predicting the radiological impact on Japanese due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model WSYNOP for large-scale wind fields and a particle random walk model GEARN for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with a system control software, worldwide geographic database, meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over about 2,000 km area in Europe. (author).
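    A particle random-walk dispersion model of the GEARN type advects each particle with the mean wind and adds a Gaussian diffusion kick every time step. A toy 2D sketch under an assumed constant wind and diffusivity (all numbers hypothetical, and real GEARN also handles deposition and 3D meteorology):

```python
import random

def advect_diffuse(particles, u, v, dt, k, rng):
    """One Lagrangian step: mean-wind advection plus a Gaussian diffusion
    kick with standard deviation sqrt(2*K*dt) in each horizontal direction."""
    sigma = (2.0 * k * dt) ** 0.5
    return [(x + u * dt + rng.gauss(0.0, sigma),
             y + v * dt + rng.gauss(0.0, sigma)) for x, y in particles]

rng = random.Random(42)
cloud = [(0.0, 0.0)] * 1000              # all particles released at the source
for _ in range(10):                      # ten 60-second steps
    cloud = advect_diffuse(cloud, u=5.0, v=0.0, dt=60.0, k=100.0, rng=rng)

mean_x = sum(x for x, _ in cloud) / len(cloud)
# After 10 min in a 5 m/s wind the cloud centre is roughly 3 km downwind:
assert 2500.0 < mean_x < 3500.0
```

Concentration fields are then obtained by counting particles per grid cell, which is what makes the method attractive for long-range, inhomogeneous wind fields.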

  14. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    International Nuclear Information System (INIS)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru; Ishikawa, Hirohiko.

    1995-09-01

    A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information) aims at predicting the radiological impact on Japanese due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model WSYNOP for large-scale wind fields and a particle random walk model GEARN for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with a system control software, worldwide geographic database, meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over about 2,000 km area in Europe. (author)

  15. Development of a subchannel analysis code MATRA (Ver. α)

    International Nuclear Information System (INIS)

    Yoo, Y. J.; Hwang, D. H.

    1998-04-01

    A subchannel analysis code, MATRA-α, an interim version of MATRA, has been developed to run on an IBM PC or HP workstation based on the existing CDC CYBER mainframe version of COBRA-IV-I. MATRA is a thermal-hydraulic analysis code based on the subchannel approach for calculating the enthalpy and flow distribution in fuel assemblies and reactor cores for both steady-state and transient conditions. MATRA-α has been provided with an improved structure and various functions and models to give a more convenient user environment and to increase the code accuracy. Among them, the pressure drop model has been improved to apply to non-square-lattice rod arrays, and the lateral transport models between adjacent subchannels have been improved to increase the accuracy in predicting two-phase flow phenomena. Also included in this report are detailed instructions for input data preparation and for auxiliary pre-processors, to serve as a guide to those who want to use MATRA-α. In addition, we compared the predictions of MATRA-α with experimental data on the flow and enthalpy distribution in three sample rod-bundle cases to evaluate its performance. All the results revealed that the predictions of MATRA-α were better than those of COBRA-IV-I. (author). 16 refs., 1 tab., 13 figs

  16. Quantifying reactor safety margins: Application of CSAU [Code Scalability, Applicability and Uncertainty] methodology to LBLOCA: Part 3, Assessment and ranging of parameters for the uncertainty analysis of LBLOCA codes

    International Nuclear Information System (INIS)

    Wulff, W.; Boyack, B.E.; Duffey, R.B.

    1988-01-01

    Comparisons of results from TRAC-PF1/MOD1 code calculations with measurements from Separate Effects Tests, and published experimental data for modeling parameters have been used to determine the uncertainty ranges of code input and modeling parameters which dominate the uncertainty in predicting the Peak Clad Temperature for a postulated Large Break Loss of Coolant Accident (LBLOCA) in a four-loop Westinghouse Pressurized Water Reactor. The uncertainty ranges are used for a detailed statistical analysis to calculate the probability distribution function for the TRAC code-predicted Peak Clad Temperature, as is described in an attendant paper. Measurements from Separate Effects Tests and Integral Effects Tests have been compared with results from corresponding TRAC-PF1/MOD1 code calculations to determine globally the total uncertainty in predicting the Peak Clad Temperature for LBLOCAs. This determination is in support of the detailed statistical analysis mentioned above. The analyses presented here account for uncertainties in input parameters, in modeling and scaling, in computing and in measurements. The analyses are an important part of the work needed to implement the Code Scalability, Applicability and Uncertainty (CSAU) methodology. CSAU is needed to determine the suitability of a computer code for reactor safety analyses and the uncertainty in computer predictions. The results presented here are used to estimate the safety margin of a particular nuclear reactor power plant for a postulated accident. 25 refs., 10 figs., 11 tabs
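    The statistical step described above, propagating ranged input and modeling parameters through the code to obtain a distribution of Peak Clad Temperature, can be sketched as a Monte Carlo loop. Here a made-up linear response surface stands in for the actual TRAC-PF1/MOD1 calculation, and all parameter ranges are hypothetical:

```python
import random
import statistics

def pct_model(gap_cond, peaking, chf_mult):
    """Made-up linear response surface standing in for a TRAC run:
    PCT in kelvin as a function of three ranged input multipliers."""
    return 1000.0 + 150.0 * peaking - 50.0 * gap_cond - 80.0 * chf_mult

rng = random.Random(1)
samples = []
for _ in range(5000):
    pct = pct_model(rng.uniform(0.8, 1.2),   # gap conductance multiplier
                    rng.uniform(0.9, 1.1),   # power peaking factor
                    rng.uniform(0.8, 1.2))   # CHF model multiplier
    samples.append(pct)

samples.sort()
pct95 = samples[int(0.95 * len(samples))]    # 95th-percentile PCT
assert statistics.mean(samples) < pct95 <= max(samples)
```

In a CSAU application the upper percentile of this distribution, not a single best-estimate run, is compared against the regulatory clad temperature limit.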

  17. nocoRNAc: Characterization of non-coding RNAs in prokaryotes

    Directory of Open Access Journals (Sweden)

    Nieselt Kay

    2011-01-01

    Full Text Available Abstract Background The interest in non-coding RNAs (ncRNAs) has constantly risen during the past few years because of the wide spectrum of biological processes in which they are involved. This has led to the discovery of numerous ncRNA genes across many species. However, for most organisms the non-coding transcriptome still remains unexplored to a great extent. Various experimental techniques for the identification of ncRNA transcripts are available, but as these methods are costly and time-consuming, there is a need for computational methods that allow the detection of functional RNAs in complete genomes in order to suggest elements for further experiments. Several programs for the genome-wide prediction of functional RNAs have been developed, but most of them predict a genomic locus with no indication of whether the element is transcribed or not. Results We present nocoRNAc, a program for the genome-wide prediction of ncRNA transcripts in bacteria. nocoRNAc incorporates various procedures for the detection of transcriptional features, which are then integrated with functional ncRNA loci to determine the transcript coordinates. We applied RNAz and nocoRNAc to the genome of Streptomyces coelicolor and detected more than 800 putative ncRNA transcripts, most of them located antisense to protein-coding regions. Using a custom-designed microarray we profiled the expression of about 400 of these elements and found more than 300 to be transcribed; 38 of them are predicted novel ncRNA genes in intergenic regions. The expression patterns of many ncRNAs are as complex as those of the protein-coding genes; in particular, many antisense ncRNAs show a high expression correlation with their protein-coding partner. Conclusions We have developed nocoRNAc, a framework that facilitates the automated characterization of functional ncRNAs. nocoRNAc increases the confidence of predicted ncRNA loci, especially if they contain transcribed ncRNAs. nocoRNAc is not restricted to

  18. Double blind post-test prediction for LOBI-MOD2 small break experiment A2-81 using the RELAP5/MOD1/19 computer code as a contribution to international CSNI standard problem no. 18

    International Nuclear Information System (INIS)

    Jacobs, G.; Mansoor, S.H.

    1986-06-01

    The first small break experiment A2-81 performed in the LOBI-MOD2 test facility was the basis of the 18th international CSNI standard problem (ISP 18). Taking part in this exercise, a blind post-test prediction was performed using the light water reactor transient analysis code RELAP5/MOD1. This paper describes the input model preparation and summarizes the findings of the pre-calculation, comparing the calculational results with the experimental data. The results show good agreement between prediction and experiment in the initial stage (up to 250 s) of the transient and an adequate prediction of the global behaviour (thermal response of the core), which is important for safety-related considerations. However, the prediction confirmed some deficiencies of the code's models concerning vertical and horizontal stratification, resulting in a high break mass flow and an erroneous distribution of mass over the primary loops. (orig.) [de

  19. Improvement of a combustion model in MELCOR code

    International Nuclear Information System (INIS)

    Ogino, Masao; Hashimoto, Takashi

    1999-01-01

    NUPEC has been improving the hydrogen combustion model in the MELCOR code for severe accident analysis. In the proposed combustion model, the flame velocity in a node is predicted using five different flame front shapes: fireball, prism, bubble, spherical jet, and plane jet. For validation of the proposed model, the results of the Battelle multi-compartment hydrogen combustion tests were used. The test cases selected for the study were Hx-6, 13, 14, 20 and Ix-2, which had two, three or four compartments under homogeneous hydrogen concentrations of 5 to 10 vol%. On the whole, the proposed model predicted the combustion behavior in multi-compartment containment geometry well. The MELCOR code, incorporating the present combustion model, can simulate combustion behavior during a severe accident with acceptable computing time and some degree of accuracy. The applicability study of the improved MELCOR code to actual reactor plants will be continued. (author)

  20. The WINCON programme - validation of fast reactor primary containment codes

    International Nuclear Information System (INIS)

    Sidoli, J.E.A.; Kendall, K.C.

    1988-01-01

    In the United Kingdom safety studies for the Commercial Demonstration Fast Reactor (CDFR) include an assessment of the capability of the primary containment in providing an adequate containment for defence against the hazards resulting from a hypothetical Whole Core Accident (WCA). The assessment is based on calculational estimates using computer codes supported by measured evidence from small-scale experiments. The hydrodynamic containment code SEURBNUK-EURDYN is capable of representing a prescribed energy release, the sodium coolant and cover gas, and the main containment and safety related internal structures. Containment loadings estimated using SEURBNUK-EURDYN are used in the structural dynamic code EURDYN-03 for the prediction of the containment response. The experiments serve two purposes, they demonstrate the response of the CDFR containment to accident loadings and provide data for the validation of the codes. This paper summarises the recently completed WINfrith CONtainment (WINCON) experiments that studied the response of specific features of current CDFR design options to WCA loadings. The codes have been applied to some of the experiments and a satisfactory prediction of the global response of the model containment is obtained. This provides confidence in the use of the codes in reactor assessments. (author)

  1. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    Science.gov (United States)

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate such as digit, code, signal, vector, tree, graph, or network. In this paper, we further implement an ontology of DNA As Signal by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping, and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only achieves equivalent or superior performance but also enriches the tools and knowledge library of computational biology by extending the domain from characters and strings to diverse areas. The evaluation results validate the success of the character-analysis-free technique in improving performance in comparative gene structure prediction.
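
The dynamic time warping step described in the abstract above can be sketched in a few lines. This is an illustrative classical DTW with an absolute-difference local cost, not Signalign's actual modified algorithm, and the base-to-number encoding is a hypothetical choice:

```python
# Classical dynamic time warping (DTW) between two numeric series, as an
# illustration of the "DNA as signal" idea. The base encoding and the plain
# DTW recurrence are assumptions for this sketch, not Signalign's method.

ENCODE = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}  # hypothetical encoding

def to_signal(seq):
    """Convert a DNA string into a numeric signal series."""
    return [ENCODE[b] for b in seq]

def dtw_distance(a, b):
    """Accumulated-cost DTW with |x - y| local cost (no windowing)."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a step in a
                                 cost[i][j - 1],      # skip a step in b
                                 cost[i - 1][j - 1])  # diagonal match
    return cost[n][m]
```

Time-stretched copies of a series get a distance of zero (e.g. `to_signal("ACGT")` against `to_signal("AACCGGTT")`), which is the property that makes DTW attractive for comparing gene structures of unequal length.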

  2. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
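
The calibration loop described above, a genetic algorithm searching uncertain input parameters to minimize the discrepancy between simulated and experimental SRQs, can be sketched as follows. The two-parameter decay model standing in for the thermal-hydraulic code, the choice of SRQs (peak value and time integral), and all GA settings are illustrative assumptions, not the paper's RELAP5 setup:

```python
import math
import random

# Toy "simulation": a two-parameter exponential decay standing in for the
# system code. The model, the two SRQs, and the GA settings are assumptions.
TIMES = [0.25 * i for i in range(17)]          # 0 .. 4 s
BOUNDS = [(0.1, 5.0), (0.05, 2.0)]             # search box for (p0, p1)

def simulate(p):
    return [p[0] * math.exp(-p[1] * t) for t in TIMES]

def srqs(y):
    """Two system response quantities: peak value and trapezoidal integral."""
    peak = max(y)
    integral = sum(0.25 * 0.5 * (y[i] + y[i + 1]) for i in range(len(y) - 1))
    return peak, integral

EXPERIMENT = srqs(simulate((2.0, 0.5)))        # synthetic "measured" SRQs
WEIGHTS = (1.0, 1.0)

def fitness(p):
    """Weighted sum of squared normalized SRQ discrepancies (lower is better)."""
    return sum(w * ((s - e) / e) ** 2
               for w, s, e in zip(WEIGHTS, srqs(simulate(p)), EXPERIMENT))

def calibrate(pop_size=40, generations=60, seed=1):
    random.seed(seed)
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        new = [best[:]]                                      # elitism
        while len(new) < pop_size:
            pa = min(random.sample(pop, 3), key=fitness)     # tournament select
            pb = min(random.sample(pop, 3), key=fitness)
            child = [random.uniform(min(a, b), max(a, b))    # blend crossover
                     for a, b in zip(pa, pb)]
            for k, (lo, hi) in enumerate(BOUNDS):            # clipped mutation
                if random.random() < 0.3:
                    child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1)))
            new.append(child)
        pop = new
        best = min(pop, key=fitness)
    return best, fitness(best)
```

The relative normalization and the `WEIGHTS` vector encode exactly the choice the abstract flags as critical: with different weights, the GA can converge to parameter sets that match one SRQ while missing another.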

  3. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  4. User's manual for DSTAR MOD1: A comprehensive tokamak disruption code

    International Nuclear Information System (INIS)

    Merrill, B.J.; Jardin, S.J.

    1986-01-01

    A computer code, DSTAR, has recently been developed to quantify the surface erosion and induced forces that can occur during major tokamak plasma disruptions. The DSTAR code development effort has been accomplished by coupling a recently developed free boundary tokamak plasma transport computational model with other models developed to predict impurity transport and radiation, and the electromagnetic and thermal dynamic response of vacuum vessel components. The combined model, DSTAR, is a unique tool for predicting the consequences of tokamak disruptions. This informal report discusses the sequence of events of a resistive disruption, models developed to predict plasma transport and electromagnetic field evolution, the growth of the stochastic region of the plasma, the transport and nonequilibrium ionization/emitted radiation of the ablated vacuum vessel material, the vacuum vessel thermal and magnetic response, and user input and code output

  5. Analysis of CSNI benchmark test on containment using the code CONTRAN

    International Nuclear Information System (INIS)

    Haware, S.K.; Ghosh, A.K.; Raj, V.V.; Kakodkar, A.

    1994-01-01

    A programme of experimental as well as analytical studies on the behaviour of nuclear reactor containment is being actively pursued. A large number of experiments on pressure and temperature transients have been carried out on a one-tenth scale model vapour suppression pool containment experimental facility, simulating the 220 MWe Indian Pressurised Heavy Water Reactors. A programme of development of computer codes is underway to enable prediction of containment behaviour under accident conditions. This includes codes for pressure and temperature transients, hydrogen behaviour, aerosol behaviour, etc. As part of this ongoing work, the code CONTRAN (CONtainment TRansient ANalysis) has been developed for predicting the thermal hydraulic transients in a multicompartment containment. For the assessment of hydrogen behaviour, models for hydrogen transport in a multicompartment configuration and hydrogen combustion have been incorporated in the code CONTRAN. The code also has models for heat and mass transfer due to condensation and for convective heat transfer. The structural heat transfer is modelled using the one-dimensional transient heat conduction equation. Extensive validation exercises have been carried out with the code CONTRAN. The code has been successfully used for the analysis of the benchmark test devised by the Committee on the Safety of Nuclear Installations (CSNI) of the Organisation for Economic Cooperation and Development (OECD) to test the numerical accuracy and convergence errors in the computation of mass and energy conservation for the fluid and in the computation of heat conduction in structural walls. The salient features of the code CONTRAN, a description of the CSNI benchmark test, and a comparison of the CONTRAN predictions with the benchmark test results are presented and discussed in the paper. (author)
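
The structural heat transfer component mentioned above, one-dimensional transient heat conduction in a wall, is easy to sketch with an explicit finite-difference scheme. The grid size, diffusivity, and boundary temperatures below are arbitrary illustrative values, not CONTRAN's actual numerics:

```python
# Explicit finite-difference solution of the 1-D transient heat conduction
# equation dT/dt = alpha * d2T/dx2 in a wall of unit thickness with fixed
# surface temperatures. All parameter values are illustrative.

def heat_slab(n=21, alpha=1.0, t_left=100.0, t_right=0.0,
              t_init=0.0, steps=2000):
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx * dx / alpha     # satisfies stability: alpha*dt/dx^2 <= 0.5
    r = alpha * dt / (dx * dx)
    T = [t_init] * n
    T[0], T[-1] = t_left, t_right  # fixed-temperature boundary conditions
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2.0 * T[i] + T[i - 1])
        T = Tn
    return T
```

After enough steps the profile relaxes to the linear steady state, so the mid-wall node approaches the mean of the two surface temperatures; benchmarking against such analytic limits is the kind of convergence check the CSNI exercise targets.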

  6. Preliminary Analysis of Rapid Condensation Experiment with MARS-KS Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Jae Ho; Jun, Hwang Yong; Jeong, Hae Yong [Sejong University, Seoul (Korea, Republic of)

    2016-05-15

    In the present study, the rapid condensation experiment performed in the MANOTEA facility is analyzed with the MARS-KS code. It is known that a system code has some limitations in predicting this kind of very active condensation, which is driven by direct mixing of the cold injection flow and steam. Through the analysis we investigated the applicability of the MARS-KS code to the design of various passive safety systems in the future. The configuration of the experimental facility MANOTEA, which has been constructed at the University of Maryland - United States Naval Academy, is described, and the modeling approach using the MARS-KS code is also provided. The preliminary result shows that MARS-KS predicts the general trend of pressure and temperature in the condensing part correctly. However, it is also found that there exist some limitations in the simulation, such as an unexpected pressure peak or a sudden temperature change.

  7. RADTRAN II: revised computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1982-10-01

    A revised and updated version of the RADTRAN computer code is presented. This code has the capability to predict the radiological impacts associated with specific schemes of radioactive material shipments and mode specific transport variables

  8. Cavitation Modeling in Euler and Navier-Stokes Codes

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model, to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, taking into account the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper centers on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between velocity potential, Euler and Navier-Stokes implementations indicate they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggest that these models are appropriate for incorporation in current generation turbomachinery codes.

  9. Aeroelastic code development activities in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.D. [National Renewable Energy Lab., Golden, Colorado (United States)

    1996-09-01

    Designing wind turbines to be fatigue resistant and to have long lifetimes at minimal cost is a major goal of the federal wind program and the wind industry in the United States. To achieve this goal, we must be able to predict critical loads for a wide variety of different wind turbines operating under extreme conditions. The codes used for wind turbine dynamic analysis must be able to analyze a wide range of different wind turbine configurations as well as rapidly predict the loads due to turbulent wind inflow with a minimal set of degrees of freedom. Code development activities in the US have taken a two-pronged approach in order to satisfy both of these criteria: (1) development of a multi-purpose code which can be used to analyze a wide variety of wind turbine configurations without having to develop new equations of motion with each configuration change, and (2) development of specialized codes with minimal sets of specific degrees of freedom for analysis of two- and three-bladed horizontal axis wind turbines and calculation of machine loads due to turbulent inflow. In the first method we have adapted a commercial multi-body dynamics simulation package for wind turbine analysis. In the second approach we are developing specialized codes with limited degrees of freedom, usually specified in the modal domain. This paper will summarize progress to date in the development, validation, and application of these codes. (au) 13 refs.

  10. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    International Nuclear Information System (INIS)

    Peterson, P.F.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors
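
The numerical diffusion problem described above, in which standard schemes smear a sharp hot/cold or ink/moderator interface, is easy to demonstrate with a first-order upwind advection scheme. The grid, Courant number, and step profile below are illustrative choices, not taken from MODFLOW or the SRS analyses:

```python
# First-order upwind advection of a sharp front, phi_t + u*phi_x = 0 with
# u > 0. The exact solution just translates the step; the upwind scheme
# smears it over many cells, illustrating the numerical diffusion that
# motivates interface-tracking codes such as MODFLOW.

def upwind_advect(phi, courant, steps):
    """Advance phi with the first-order upwind scheme; inflow value fixed."""
    phi = phi[:]
    for _ in range(steps):
        prev = phi[:]
        for i in range(1, len(phi)):
            phi[i] = prev[i] - courant * (prev[i] - prev[i - 1])
    return phi

# Step profile: "hot" fluid (1.0) pushing into a domain of "cold" fluid (0.0).
initial = [1.0 if i < 50 else 0.0 for i in range(200)]
smeared = upwind_advect(initial, courant=0.5, steps=100)
```

The exact solution keeps the front sharp, whereas the upwind result spreads it over a band of cells whose width grows with the number of steps; a stratified-interface or plume-tracking calculation built on such a scheme would lose exactly the information MODFLOW is designed to preserve.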

  11. Development of input data to energy code for analysis of reactor fuel bundles

    International Nuclear Information System (INIS)

    Carre, F.O.; Todreas, N.E.

    1975-05-01

    The ENERGY 1 code is a semi-empirical method for predicting temperature distributions in wire-wrapped rod bundles of an LMFBR. A comparison of ENERGY 1 and MISTRAL 2 is presented. The predictions of ENERGY 1 for special sets of data taken under geometric conditions at the limits of the code are analyzed. 14 references

  12. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    International Nuclear Information System (INIS)

    Raimondi, L.; Svetina, C.; Mahne, N.; Cocco, D.; Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M.; De Ninno, G.; Zeitoun, P.; Dovillaire, G.; Lambert, G.; Boutu, W.; Merdji, H.; Gonzalez, A.I.; Gauthier, D.

    2013-01-01

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness and wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics in order to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology characterization, it is difficult to predict the focal spot shape by applying only methods based on geometrical optics, such as ray tracing. Within the geometrical optics approach one cannot take into account the diffraction effect from the optics edges, i.e. aperture diffraction, or the impact of different surface spatial wavelengths on spot size degradation. Both of these effects depend strongly on the photon beam energy and the mirror incidence angles. We employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection (and on which the WISE code is based). In this work we report the results of the first measurements of the focal spot in the DiProI beamline end-station and compare them to the predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization

  13. Microfocusing of the FERMI@Elettra FEL beam with a K–B active optics system: Spot size predictions by application of the WISE code

    Energy Technology Data Exchange (ETDEWEB)

    Raimondi, L., E-mail: lorenzo.raimondi@elettra.trieste.it [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Svetina, C.; Mahne, N. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); Cocco, D. [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS-19 Menlo Park, CA 94025 (United States); Abrami, A.; De Marco, M.; Fava, C.; Gerusina, S.; Gobessi, R.; Capotondi, F.; Pedersoli, E.; Kiskinova, M. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); De Ninno, G. [Sincrotrone Trieste ScpA, S.S. 14 km 163.5 in Area Science Park, 34149 Trieste (Italy); University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); Zeitoun, P. [Laboratoire d'Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Hunière, 91761 Palaiseau (France); Dovillaire, G. [Imagine Optic, 18 Rue Charles de Gaulle, 91400 Orsay (France); Lambert, G. [Laboratoire d'Optique Appliquée, CNRS-ENSTA-École Polytechnique, Chemin de la Hunière, 91761 Palaiseau (France); Boutu, W.; Merdji, H.; Gonzalez, A.I. [Service des Photons, Atomes et Molécules, IRAMIS, CEA-Saclay, Bâtiment 522, 91191 Gif-sur-Yvette (France); Gauthier, D. [University of Nova Gorica, Vipavska 13, Rozna Dolina, SI-5000 Nova Gorica (Slovenia); and others

    2013-05-11

    FERMI@Elettra, the first seeded EUV-SXR free electron laser (FEL) facility, located at Elettra Sincrotrone Trieste, has been conceived to provide very short (10–100 fs) pulses with ultrahigh peak brightness and wavelengths from 100 nm to 4 nm. A section fully dedicated to photon transport and analysis diagnostics, named PADReS, has already been installed and commissioned. Three of the beamlines, EIS-TIMEX, DiProI and LDM, installed after the PADReS section, are in an advanced commissioning state and will accept the first users in December 2012. These beamlines employ active X-ray optics in order to focus the FEL beam as well as to perform controlled beam-shaping at focus. Starting from mirror surface metrology characterization, it is difficult to predict the focal spot shape by applying only methods based on geometrical optics, such as ray tracing. Within the geometrical optics approach one cannot take into account the diffraction effect from the optics edges, i.e. aperture diffraction, or the impact of different surface spatial wavelengths on spot size degradation. Both of these effects depend strongly on the photon beam energy and the mirror incidence angles. We employed a method based on physical optics, which applies the Huygens–Fresnel principle to reflection (and on which the WISE code is based). In this work we report the results of the first measurements of the focal spot in the DiProI beamline end-station and compare them to the predictions computed with the Shadow code and the WISE code, starting from the mirror surface profile characterization.

  14. Development and verification test of integral reactor major components - Development of MCP impeller design, performance prediction code and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung Kyoon; Oh, Woo Hyoung; Song, Jae Wook [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-03-01

    The present study is aimed at developing a computational code for the design and performance prediction of an axial-flow pump. The proposed performance prediction method, based on the streamline curvature method, is tested against a model axial-flow pump. The preliminary design is made by using the ideal velocity triangles at inlet and exit, and the three-dimensional blade shape is calculated by employing the free-vortex design method. The detailed blading design is then carried out using an experimental database of double-circular-arc cambered hydrofoils. To computationally determine the design incidence, deviation, blade camber, solidity and stagger angle, a number of correlation equations are developed from the experimental database, and a theoretical formula for the lift coefficient is adopted. A total of 8 equations are solved iteratively using an under-relaxation factor. An experimental measurement is conducted under a non-cavitating condition to obtain the off-design performance curve, and a cavitation test is also carried out by reducing the suction pressure. The experimental results compare very satisfactorily with the predictions of the streamline curvature method. 28 refs., 26 figs., 11 tabs. (Author)
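
The preliminary-design step above (ideal velocity triangles plus free-vortex blading) rests on the Euler turbomachine relation. A minimal sketch follows, with illustrative numbers rather than the paper's pump data:

```python
import math

# Euler turbomachine relation and free-vortex whirl distribution, as used in
# the ideal velocity-triangle stage of axial-pump design. All numbers used
# with these functions are illustrative, not the pump of the study.

G = 9.81  # gravitational acceleration, m/s^2

def euler_head(u_blade, cu_in, cu_out, g=G):
    """Ideal head rise H = U*(Cu2 - Cu1)/g from inlet/exit velocity triangles."""
    return u_blade * (cu_out - cu_in) / g

def free_vortex_whirl(k, r):
    """Free-vortex design: Cu * r = k (constant), so Cu = k / r."""
    return k / r

def relative_flow_angle(u_blade, cu, cx):
    """Angle of the relative velocity from the axial direction, in radians."""
    return math.atan2(u_blade - cu, cx)
```

With Cu·r held constant and U = ω·r, the ideal head U·Cu/g comes out the same at every radius, which is precisely why the free-vortex method yields radially uniform work input across the blade span.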

  15. The MELTSPREAD Code for Modeling of Ex-Vessel Core Debris Spreading Behavior, Code Manual – Version3-beta

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M. T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-01

    MELTSPREAD3 is a transient one-dimensional computer code that has been developed to predict the gravity-driven flow and freezing behavior of molten reactor core materials (corium) in containment geometries. Predictions can be made for corium flowing across surfaces under either dry or wet cavity conditions. The spreading surfaces that can be selected are steel, concrete, a user-specified material (e.g., a ceramic), or an arbitrary combination thereof. The corium can have a wide range of compositions of reactor core materials that includes distinct oxide phases (predominantly Zr and steel oxides) plus metallic phases (predominantly Zr and steel). The code requires input that describes the containment geometry, melt “pour” conditions, and cavity atmospheric conditions (i.e., pressure, temperature, and cavity flooding information). For cases in which the cavity contains a preexisting water layer at the time of RPV failure, melt jet breakup and particle bed formation can be calculated mechanistically given the time-dependent melt pour conditions (input data) as well as the heatup and boiloff of water in the melt impingement zone (calculated). For core debris impacting either the containment floor or previously spread material, the code calculates the transient hydrodynamics and heat transfer which determine the spreading and freezing behavior of the melt. The code predicts conditions at the end of the spreading stage, including melt relocation distance, depth and material composition profiles, substrate ablation profile, and wall heatup. Code output can be used as input to other models, such as CORQUENCH, that evaluate long-term core-concrete interaction behavior following the transient spreading stage. MELTSPREAD3 was originally developed to investigate BWR Mark I liner vulnerability, but has been substantially upgraded and applied to other reactor designs (e.g., the EPR), and more recently to the plant accidents at Fukushima Daiichi. The most recent round of

  16. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically increases aggregate social welfare by cost-effectively reducing energy expenditures. Using a Cox proportional hazards model, I test whether relative state funding (a new, objective, multivariate-regression-derived measure of government capacity), as well as a vector of control variables commonly used in comparative state research, predicts commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence that high-government-capacity states are 60 percent more likely than low-capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low-cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Models the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High-capacity states are over 60 percent more likely than low-capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
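
The Cox proportional hazards model used above estimates covariate effects by maximizing a partial likelihood. A minimal single-covariate sketch follows (Breslow form, assuming no tied event times); the synthetic data are invented for illustration, not the paper's state panel:

```python
import math

# Cox partial log-likelihood for a single covariate (Breslow form, assuming
# no tied event times). A unit's "hazard" of adopting a code scales with
# exp(beta * x); beta is chosen to maximize this function. Data below are
# synthetic and purely illustrative.

def cox_partial_loglik(beta, times, events, x):
    order = sorted(range(len(times)), key=lambda i: times[i])
    ll = 0.0
    for pos, i in enumerate(order):
        if events[i]:               # only (non-censored) adoption events count
            risk_set = order[pos:]  # units still "at risk" at time t_i
            ll += beta * x[i] - math.log(sum(math.exp(beta * x[j])
                                             for j in risk_set))
    return ll

# Synthetic data: higher capacity x -> earlier adoption time.
times = [1.0, 2.0, 3.0, 4.0]
events = [1, 1, 1, 1]            # all four "states" adopt (no censoring)
capacity = [2.0, 1.5, 1.0, 0.5]
```

Because the high-capacity units adopt first in this toy data, the partial log-likelihood is higher at beta > 0 than at beta = 0, which is the direction of the paper's government-capacity finding.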

  17. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    Science.gov (United States)

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that the pleasure of, and desire for, sensorimotor synchronization with musical rhythm may result from such mechanisms. PMID:25324813
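
The Bayesian error-minimization idea at the core of PC can be sketched as a single-level update in which an internal estimate settles by gradient descent on precision-weighted prediction errors. The precisions, learning rate, and values below are illustrative assumptions, not parameters from the paper:

```python
# One-level predictive coding sketch: the estimate mu descends the
# free-energy-like objective
#     F = pi_s/2 * (x - mu)^2 + pi_p/2 * (mu - prior)^2,
# balancing a sensory prediction error against a prior prediction error.
# All numbers used with this function are illustrative.

def settle(x, prior, pi_sensory, pi_prior, lr=0.05, steps=500):
    mu = prior
    for _ in range(steps):
        eps_s = x - mu        # sensory prediction error ("rhythm" input)
        eps_p = mu - prior    # deviation from the prior expectation ("meter")
        mu += lr * (pi_sensory * eps_s - pi_prior * eps_p)
    return mu
```

The fixed point is the precision-weighted average (pi_s·x + pi_p·prior)/(pi_s + pi_p): when sensory precision dominates, the estimate follows the input; when prior precision dominates, it stays near the expectation, a toy analogue of hearing a syncopated rhythm against an entrenched metrical frame.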

  18. Simulation of transport in the ignited ITER with 1.5-D predictive code

    International Nuclear Information System (INIS)

    Becker, G.

    1995-01-01

    The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (C_v = 1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation corrected energy confinement times of the EDA and CDA are found to be close to 4 s. In the reference cases, the steady state helium fraction is 7%. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral, but neon is found to cause high fuel dilution. The combined dilution effect by helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. The bootstrap current has only a small impact on the current profile. The sawtooth dominated region is found to cover 35% of the plasma cross-section. Local stability analysis of ideal ballooning modes shows that the plasma is everywhere well below the stability limit. 23 refs, 34 figs, 3 tabs

  19. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

    Full text: A fusion hybrid or a small fusion power output with low power tokamak reactor is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small fusion power output design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical-based Mixed Bohm/gyro-Bohm (B/gB) model, whereas the pedestal temperature model is based on a pedestal width scaling with magnetic and flow shear stabilization (Δ ∝ ρs²). The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization approach is carried out by varying some parameters, such as plasma current and auxiliary heating power, which results in some improvement of plasma performance

  20. Mode-dependent templates and scan order for H.264/AVC-based intra lossless coding.

    Science.gov (United States)

    Gu, Zhouye; Lin, Weisi; Lee, Bu-Sung; Lau, Chiew Tong; Sun, Ming-Ting

    2012-09-01

    In H.264/advanced video coding (AVC), lossless coding and lossy coding share the same entropy coding module. However, the entropy coders in the H.264/AVC standard were originally designed for lossy video coding and do not yield adequate performance for lossless video coding. In this paper, we analyze the problem with the current lossless coding scheme and propose a mode-dependent template (MD-template) based method for intra lossless coding. By exploring the statistical redundancy of the prediction residual in the H.264/AVC intra prediction modes, more zero coefficients are generated. By designing a new scan order for each MD-template, the scanned coefficient sequence fits the H.264/AVC entropy coders better. A fast implementation algorithm is also designed. With little increase in computation, experimental results confirm that the proposed fast algorithm achieves about 7.2% bit saving compared with the current H.264/AVC fidelity range extensions high profile.
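The idea of matching a scan order to where residual energy concentrates can be shown in miniature. The block and the diagonal scan below are illustrative (a simplification, not the standard's exact zig-zag or the paper's MD-templates): scanning likely-nonzero positions first pushes zeros into one trailing run, which entropy coders handle cheaply.

```python
# Hypothetical 4x4 residual for a well-predicted block (zeros cluster bottom-right).
residual = [
    [3, 2, 1, 0],
    [2, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

def diagonal_scan_order(n=4):
    """Positions sorted by anti-diagonal (a simplified zig-zag)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1], rc[0]))

def scan(block, order):
    return [block[r][c] for r, c in order]

seq = scan(residual, diagonal_scan_order())
# All nonzero coefficients precede a single run of trailing zeros.
trailing_zeros = len(seq) - max(i for i, v in enumerate(seq) if v) - 1
assert seq[:6] == [3, 2, 2, 1, 1, 1]
assert trailing_zeros == 10
```

A mode-dependent scheme generalizes this: each intra-prediction mode gets its own template and scan order, tuned to that mode's residual statistics.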

  1. Data processing codes for fatigue and tensile tests

    International Nuclear Information System (INIS)

    Sanchez Sarmiento, Gustavo; Iorio, A.F.; Crespi, J.C.

    1981-01-01

    The processing of fatigue and tensile test data in order to obtain several parameters of engineering interest requires a considerable amount of numerical calculation. In order to reduce the time spent on this work and to establish standard data processing for sets of similar tests, it is very advantageous to have calculation codes that run on a computer. Two codes have been developed in FORTRAN; one predicts cyclic properties of materials from monotonic and incremental or multiple cyclic step tests (ENSPRED code), and the other reduces data coming from strain-controlled low cycle fatigue tests (ENSDET code). Two examples are included using Zircaloy-4 material from different manufacturers. (author)
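A standard relation that such codes evaluate when reducing cyclic test data is the cyclic Ramberg-Osgood curve, ε = σ/E + (σ/K')^(1/n'). The constants below are illustrative round numbers, not the paper's Zircaloy-4 fits.

```python
def cyclic_strain(stress_amplitude, E=96_000.0, K_prime=700.0, n_prime=0.1):
    """Cyclic Ramberg-Osgood: total strain amplitude = elastic + plastic part.
    Units: MPa. Constants are illustrative, not fitted Zircaloy-4 values."""
    elastic = stress_amplitude / E
    plastic = (stress_amplitude / K_prime) ** (1.0 / n_prime)
    return elastic + plastic

# Strain grows monotonically with stress amplitude, and the plastic term
# dominates once stress approaches the cyclic strength coefficient K'.
assert cyclic_strain(300.0) < cyclic_strain(500.0)
assert cyclic_strain(690.0) > 0.5   # (690/700)^10 is already ~0.87
```

Fitting K' and n' to stabilized hysteresis loops from the step tests is exactly the kind of repetitive numerical work the abstract describes automating.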

  2. Analysis of ATLAS FLB-EC6 Experiment using SPACE Code

    International Nuclear Information System (INIS)

    Lee, Donghyuk; Kim, Yohan; Kim, Seyun

    2013-01-01

    The new code is named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). As part of the code validation effort, a simulation of the ATLAS FLB (Feedwater Line Break) experiment was performed using the SPACE code. The FLB-EC6 experiment represents an economizer break of a main feedwater line. The calculated results using the SPACE code were compared with those from the experiment. The comparisons of break flow rate and steam generator water level show good agreement with the experiment, indicating that the SPACE code is capable of predicting the physical phenomena occurring during the ATLAS FLB-EC6 experiment

  3. Efficient depth intraprediction method for H.264/AVC-based three-dimensional video coding

    Science.gov (United States)

    Oh, Kwan-Jung; Oh, Byung Tae

    2015-04-01

    We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the estimated direction at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into biregions, and applies a different prediction scheme for each segmented region. This method avoids incorrect estimations across different regions, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and has the ability to improve the subjective rendering quality.
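The plane-segmentation idea, predict each side of a depth edge separately instead of using one value for the whole block, can be shown in one dimension. The block values and the mean-threshold split are illustrative stand-ins for the paper's segmentation rule.

```python
def sse(block, pred):
    """Sum of squared prediction errors."""
    return sum((v - pred[i]) ** 2 for i, v in enumerate(block))

# Hypothetical depth samples containing a sharp object/background edge.
block = [10, 10, 10, 80, 80, 80, 80, 80]

# Whole-block (DC) prediction: one mean value for everything.
dc = sum(block) / len(block)
single_pred = [dc] * len(block)

# Bi-region prediction: split at the mean, predict each region by its own mean.
threshold = dc
lo = [v for v in block if v <= threshold]
hi = [v for v in block if v > threshold]
two_region_pred = [sum(lo) / len(lo) if v <= threshold else sum(hi) / len(hi)
                   for v in block]

# Segmenting along the depth edge always does at least as well as one DC value,
# because each region's mean minimizes that region's squared error.
assert sse(block, two_region_pred) <= sse(block, single_pred)
assert sse(block, two_region_pred) == 0.0  # this toy block is piecewise constant
```

Depth maps are largely piecewise smooth, which is why avoiding prediction across region boundaries pays off so directly.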

  4. TESLA: Large Signal Simulation Code for Klystrons

    International Nuclear Information System (INIS)

    Vlasov, Alexander N.; Cooke, Simon J.; Chernin, David P.; Antonsen, Thomas M. Jr.; Nguyen, Khanh T.; Levush, Baruch

    2003-01-01

    TESLA (Telegraphist's Equations Solution for Linear Beam Amplifiers) is a new code designed to simulate linear beam vacuum electronic devices with cavities, such as klystrons, extended interaction klystrons, twistrons, and coupled cavity amplifiers. The model includes a self-consistent, nonlinear solution of the three-dimensional electron equations of motion and the solution of time-dependent field equations. The model differs from the conventional Particle in Cell approach in that the field spectrum is assumed to consist of a carrier frequency and its harmonics with slowly varying envelopes. Also, fields in the external cavities are modeled with circuit-like equations and couple to fields in the beam region through boundary conditions on the beam tunnel wall. The model in TESLA is an extension of the model used in the gyrotron code MAGY. The TESLA formulation has been extended to treat the multiple beam case, in which each beam is transported inside its own tunnel. The beams interact with each other as they pass through the gaps in their common cavities. The interaction is treated by modification of the boundary conditions on the wall of each tunnel to include the effect of adjacent beams as well as the fields excited in each cavity. The extended version of TESLA for the multiple beam case, TESLA-MB, has been developed for single processor machines, and can run on UNIX machines and on PC computers with a large memory (above 2GB). The TESLA-MB algorithm is currently being modified to simulate multiple beam klystrons on multiprocessor machines using the MPI (Message Passing Interface) environment. The code TESLA has been verified by comparison with MAGIC for single and multiple beam cases. The TESLA code and the MAGIC code predict the same power within 1% for a simple two cavity klystron design while the computational time for TESLA is orders of magnitude less than for MAGIC 2D. In addition, recently TESLA was used to model the L-6048 klystron, code

  5. Probabilistic evaluations for CANTUP computer code analysis improvement

    International Nuclear Information System (INIS)

    Florea, S.; Pavelescu, M.

    2004-01-01

    Structural analysis with the finite element method is today a common way to evaluate and predict the behavior of structural assemblies subject to hard conditions in order to ensure their safety and reliability during operation. A CANDU 600 fuel channel is an example of an assembly working in hard conditions, in which, besides corrosive and thermal aggression, long-term irradiation interferes, with implicit consequences for the evolution of material properties. That leads inevitably to scatter in the time-dependent material properties, whose dynamic evolution is subject to a great degree of uncertainty. These are the reasons for developing, in association with deterministic evaluations with computer codes, probabilistic and statistical methods in order to predict the structural component response. This work explores the possibility of extending the deterministic thermomechanical evaluation of fuel channel components to a probabilistic structural mechanics approach, starting with the deterministic analysis performed with the CANTUP computer code, a code developed to predict the long term mechanical behavior of the pressure tube - calandria tube assembly. To this purpose the structure of the deterministic CANTUP computer code has been reviewed. The code has been adapted from the LAHEY 77 platform to the Microsoft Developer Studio - Fortran Power Station platform. In order to perform probabilistic evaluations, a module was added to the deterministic code which, using a subroutine from the IMSL library of the Microsoft Developer Studio - Fortran Power Station platform, generates pseudo-random values around a specified value. A normal distribution around the deterministic value, with a 5% standard deviation, was simulated for the Young's modulus material property in order to verify the statistical calculus of the creep behavior. The tube deflection and effective stresses were the properties subject to probabilistic evaluation.
All the values of these properties obtained for all the values for
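The sampling scheme the abstract describes, a normal distribution around the deterministic Young's modulus with 5% standard deviation, propagated through the mechanical model, can be sketched as follows. The response function here is a hypothetical stand-in (the real code evaluates creep deflection, not a simple 1/E law), and the modulus value is illustrative.

```python
import random
import statistics

random.seed(42)

E0 = 90_000.0      # illustrative deterministic Young's modulus, MPa
sigma = 0.05 * E0  # 5% standard deviation, as in the abstract

def deflection(E, load=1.0):
    """Hypothetical stand-in response: deflection inversely proportional to E."""
    return load / E

samples = [random.gauss(E0, sigma) for _ in range(20_000)]
deflections = [deflection(E) for E in samples]

# The sampled modulus reproduces the prescribed 5% scatter...
assert abs(statistics.stdev(samples) / E0 - 0.05) < 0.005
# ...and propagating it yields a distribution of the response, not a single value.
assert statistics.stdev(deflections) > 0.0
```

The deterministic code is then run (or approximated) once per sample, and the spread of the outputs gives the probabilistic statement about deflection and stress.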

  6. Future trends in image coding

    Science.gov (United States)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in the advancement of technology, and the success of upcoming commercial products in the marketplace, which will be the main factors in setting the stage for the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in the development of theory, software, and hardware, coupled with the future needs for the use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, we predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. Television uses technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in the development of theory, software, special purpose chips and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  7. Erosion corrosion in power plant piping systems - Calculation code for predicting wall thinning

    International Nuclear Information System (INIS)

    Kastner, W.; Erve, M.; Henzel, N.; Stellwag, B.

    1990-01-01

    Extensive experimental and theoretical investigations have been performed to develop a calculation code for wall thinning due to erosion corrosion in power plant piping systems. The so-called WATHEC code can be applied to single-phase water flow as well as to two-phase water/steam flow. Only input data which are available to the operator of the plant are taken into consideration. Together with a continuously updated erosion corrosion data base, the calculation code forms one element of a weak-point analysis for power plant piping systems which can be applied to minimize material loss due to erosion corrosion, to reduce non-destructive testing and curtail monitoring programs for piping systems, and to recommend life-extending measures. (author). 12 refs, 17 figs
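WATHEC's actual correlations are not given in this record, but the bookkeeping such a code performs downstream of them can be sketched: given a predicted thinning rate, extrapolate the remaining life until the wall reaches its minimum allowable thickness. All numbers and the constant-rate assumption are illustrative.

```python
def remaining_life_hours(t_current_mm, t_min_mm, rate_mm_per_khr):
    """Linear extrapolation of remaining service life, assuming a constant
    erosion-corrosion thinning rate (a simplification; real rates vary with
    flow velocity, geometry, water chemistry and temperature)."""
    if t_current_mm <= t_min_mm:
        return 0.0
    return 1000.0 * (t_current_mm - t_min_mm) / rate_mm_per_khr

# Illustrative numbers (not from the paper): 7.1 mm measured wall, 4.5 mm
# minimum allowable, 0.05 mm per 1000 h predicted thinning rate.
life = remaining_life_hours(7.1, 4.5, 0.05)
assert abs(life - 52_000.0) < 1e-3
```

Inverting the same relation, a plant operator can schedule the next inspection well before the extrapolated life is consumed, which is the "weak-point analysis" use the abstract describes.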

  8. Heterogeneous fuels for minor actinides transmutation: Fuel performance codes predictions in the EFIT case study

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, R., E-mail: rolando.calabrese@enea.i [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Vettraino, F.; Artioli, C. [ENEA, Innovative Nuclear Reactors and Fuel Cycle Closure Division, via Martiri di Monte Sole 4, 40129 Bologna (Italy); Sobolev, V. [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, B-2400 Mol (Belgium); Thetford, R. [Serco Technical and Assurance Services, 150 Harwell Business Centre, Didcot OX11 0QB (United Kingdom)

    2010-06-15

    . Presented results were used for testing newly-developed models installed in the TRANSURANUS code to deal with such innovative fuels and T91 steel cladding. Agreement among codes predictions was satisfactory for fuel and cladding temperatures, pellet-cladding gap and mechanical stresses.

  9. NASA space radiation transport code development consortium

    International Nuclear Information System (INIS)

    Townsend, L. W.

    2005-01-01

    Recently, NASA established a consortium involving the Univ. of Tennessee (lead institution), the Univ. of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking. (authors)

  10. International assessment of PCA codes

    International Nuclear Information System (INIS)

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE

  11. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the MATHEW and ADPIC codes and their auxiliary computer codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of the MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) the translation procedures from the CDC version to the FACOM M-200 are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)
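The particle-in-cell bookkeeping used by ADPIC-style codes can be shown in one dimension: concentration in a grid cell is the number of marker particles in it times the mass each carries, divided by the cell volume. The numbers are illustrative, not from the codes.

```python
def concentrations(positions, n_cells, cell_width, mass_per_particle):
    """1-D particle-in-cell concentration: bin marker particles into cells,
    then convert counts to mass per unit volume."""
    counts = [0] * n_cells
    for xpos in positions:
        cell = min(int(xpos // cell_width), n_cells - 1)
        counts[cell] += 1
    volume = cell_width  # unit cross-section, so volume equals width in 1-D
    return [c * mass_per_particle / volume for c in counts]

# Ten particles spread over a 4-cell grid of width 1.0 each.
positions = [0.1, 0.2, 0.9, 1.5, 1.6, 1.7, 2.2, 2.4, 3.3, 3.9]
conc = concentrations(positions, n_cells=4, cell_width=1.0, mass_per_particle=2.0)

assert conc == [6.0, 6.0, 4.0, 4.0]
# Mass is conserved: total (concentration x volume) equals particles x mass each.
assert sum(c * 1.0 for c in conc) == 10 * 2.0
```

In the full 3-D code the particles are first advected by the mass-consistent wind field from MATHEW, and this binning step is repeated each output time to produce concentration fields.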

  12. Comparison of sodium aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.

    1984-01-01

    Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)

  13. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance
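The thermionic fuel element performance module's sensitivity to the thermal solution follows from the Richardson-Dushman law, J = A0 T² exp(-φ/kT). The sketch below uses the theoretical Richardson constant and an illustrative work function; real emitters use reduced, material-specific values, and this is not the consortium code's model.

```python
import math

def richardson_current_density(T_kelvin, work_function_eV):
    """Richardson-Dushman thermionic emission current density, A/m^2."""
    A0 = 1.2017e6        # theoretical Richardson constant, A m^-2 K^-2
    k_eV = 8.617333e-5   # Boltzmann constant, eV/K
    return A0 * T_kelvin ** 2 * math.exp(-work_function_eV / (k_eV * T_kelvin))

# Emission rises steeply with emitter temperature, which is why the
# thermionic performance module must be coupled to the core thermal
# hydraulics module rather than run in isolation.
j_1800 = richardson_current_density(1800.0, 2.6)
j_2000 = richardson_current_density(2000.0, 2.6)
assert j_2000 > j_1800 > 0.0
```

This coupling is a typical reason for the modular, iterative architecture the abstract describes: the neutronics, thermal, and emitter modules exchange boundary conditions until the system converges.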

  14. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    The fuel depletion is described by more than one hundred fuel isotopes in the advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in the advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error if high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones at the stage of lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict adequately the real depletion effects in the core. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is smaller (Authors)
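Sm-149, singled out in the abstract, illustrates why number densities are not functions of burnup alone: it is fed by Pm-149 decay and burned by neutron absorption, so its approach to equilibrium depends on the actual flux history. The sketch below integrates the two-equation chain with illustrative constants (only the Pm-149 decay constant is a physical value).

```python
# Chain: fission -> Pm-149 -(beta decay)-> Sm-149 -(n absorption)-> removed.
LAMBDA_PM = 3.63e-6  # Pm-149 decay constant, 1/s (53.1 h half-life)

def sm149_history(flux, yield_rate, sigma_sm, t_end, dt=1.0e4):
    """Forward-Euler integration of
    dPm/dt = Y*phi - lam*Pm,  dSm/dt = lam*Pm - sigma*phi*Sm."""
    pm = sm = 0.0
    t = 0.0
    while t < t_end:
        dpm = yield_rate * flux - LAMBDA_PM * pm
        dsm = LAMBDA_PM * pm - sigma_sm * flux * sm
        pm += dpm * dt
        sm += dsm * dt
        t += dt
    return sm

flux = 3.0e13        # n/cm^2/s (illustrative)
yield_rate = 1.0e-11 # Pm-149 production per unit flux (illustrative)
sigma_sm = 4.0e-20   # Sm-149 absorption cross-section, cm^2 (~40 kb)

# At equilibrium production balances absorption: Sm_eq = Y / sigma.
sm_eq_analytic = yield_rate / sigma_sm
sm_late = sm149_history(flux, yield_rate, sigma_sm, t_end=5.0e7)
assert abs(sm_late - sm_eq_analytic) / sm_eq_analytic < 0.05
```

The equilibrium level itself is flux-independent, but the time to reach it scales as 1/(σφ), so two cores at the same burnup but different flux histories carry different Sm-149 inventories, exactly the effect a burnup-only lookup misses.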

  15. Positive predictive value between medical-chart body-mass-index category and obesity versus codes in a claims-data warehouse.

    Science.gov (United States)

    Caplan, Eleanor O; Kamble, Pravin S; Harvey, Raymond A; Smolarz, B Gabriel; Renda, Andrew; Bouchard, Jonathan R; Huang, Joanna C

    2018-01-01

    To evaluate the positive predictive value of claims-based V85 codes for identifying individuals with varying degrees of BMI relative to their measured BMI obtained from medical record abstraction. This was a retrospective validation study utilizing administrative claims and medical chart data from 1 January 2009 to 31 August 2015. Randomly selected samples of patients enrolled in a Medicare Advantage Prescription Drug (MAPD) or commercial health plan and with a V85 claim were identified. The claims-based BMI category (underweight, normal weight, overweight, obese class I-III) was determined via corresponding V85 codes and compared to the BMI category derived from chart-abstracted height, weight and/or BMI. The positive predictive values (PPVs) of the claims-based BMI categories were calculated with the corresponding 95% confidence intervals (CIs). The overall PPVs (95% CIs) in the MAPD and commercial samples were 90.3% (86.3%-94.4%) and 91.1% (87.3%-94.9%), respectively. In each BMI category, the PPVs (95% CIs) for the MAPD and commercial samples, respectively, were: underweight, 71.0% (55.0%-87.0%) and 75.9% (60.3%-91.4%); normal, 93.8% (85.4%-100%) and 87.8% (77.8%-97.8%); overweight, 97.4% (92.5%-100%) and 93.5% (84.9%-100%); obese class I, 96.9% (90.9%-100%) and 97.2% (91.9%-100%); obese class II, 97.0% (91.1%-100%) and 93.0% (85.4%-100%); and obese class III, 85.0% (73.3%-96.1%) and 97.1% (91.4%-100%). BMI categories derived from administrative claims, when available, can be used successfully, particularly in the context of obesity research.
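The statistic reported throughout this record is straightforward to compute: PPV is the fraction of code-flagged cases confirmed by chart review, with a Wald 95% interval. The counts below are illustrative, since the per-category denominators are not given in the abstract.

```python
import math

def ppv_with_ci(true_positives, total_flagged, z=1.96):
    """Positive predictive value with a Wald 95% confidence interval,
    clipped to [0, 1]."""
    p = true_positives / total_flagged
    half_width = z * math.sqrt(p * (1.0 - p) / total_flagged)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Illustrative counts: 181 of 200 sampled V85 claims matched the
# chart-derived BMI category.
p, lo, hi = ppv_with_ci(181, 200)
assert abs(p - 0.905) < 1e-9
assert lo < p < hi
assert hi - lo < 0.1  # roughly +/- 4 percentage points at this sample size
```

The narrow intervals in the abstract follow the same pattern: PPVs near 1 and samples of a few hundred give half-widths of only a few percentage points.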

  16. SWAAM code development, verification and application to steam generator design

    International Nuclear Information System (INIS)

    Shin, Y.W.; Valentin, R.A.

    1990-01-01

    This paper describes the family of SWAAM codes developed by Argonne National Laboratory to analyze the effects of sodium/water reactions on LMR steam generators. The SWAAM codes were developed as design tools for analyzing various phenomena related to steam generator leaks and to predict the resulting thermal and hydraulic effects on the steam generator and the intermediate heat transport system (IHTS). The theoretical foundations and numerical treatments on which the codes are based are discussed, followed by a description of code capabilities and limitations, verification of the codes by comparison with experiment, and applications to steam generator and IHTS design. (author). 25 refs, 14 figs

  17. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
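The channel model the bounds apply to, pulse-position modulation over a Poisson channel, is easy to simulate, and simulation is exactly the expensive alternative the derived equations replace. This sketch estimates the symbol error rate of uncoded PPM under ML detection (the coded, iteratively decoded case in the abstract is far more elaborate); all parameter values are illustrative.

```python
import math
import random

random.seed(7)

def ppm_symbol_error_rate(m, ks, kb, trials=4000):
    """Monte Carlo SER for uncoded M-ary PPM on a Poisson channel: the signal
    slot draws Poisson(ks + kb) photons, the other slots Poisson(kb); the
    detector picks the slot with the most counts (ties broken at random)."""
    def poisson(mean):
        # Knuth's multiplication method; adequate for the small means here.
        limit, k, p = math.exp(-mean), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1
    errors = 0
    for _ in range(trials):
        counts = [poisson(ks + kb)] + [poisson(kb) for _ in range(m - 1)]
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if random.choice(winners) != 0:
            errors += 1
    return errors / trials

# More signal photons per pulse -> fewer symbol errors.
assert ppm_symbol_error_rate(4, ks=5.0, kb=0.2) < ppm_symbol_error_rate(4, ks=1.0, kb=0.2)
```

At the high SNRs the bounds target, error rates become so small that trials-based estimates like this need enormous sample sizes, which is precisely why closed-form bounds matter.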

  18. Studies of fast reactor disassembly using a Bethe-Tait computer code

    International Nuclear Information System (INIS)

    Ludwig, J.C.

    1978-10-01

    The advantages of the fast reactor are given and the general design outlined. Loss of Flow and Transient Overpower faults are possible; the potential consequences of such incidents are analysed using a deterministic approach. The course of an incident is split into several stages; of these only predisassembly and disassembly are considered. Predisassembly computer codes are described in general, and several particular codes are examined in more detail, based on a literature survey. The results and implications of disassembly calculations using the code EXTRA are presented. Here, the effects of several factors, such as the presence of retained fission gases and possible restraints on fuel motion, are investigated. Some comparisons are made with published results from the VENUS-II disassembly code. A general conclusion is that under some circumstances, the yield predicted during disassembly is relatively insensitive to modelling assumptions, and a simple code such as EXTRA may prove adequate if explicit core displacements are not required. A major factor in determining the yield of the disassembly phase is confirmed as being the rate of reactivity insertion during disassembly, as predicted by a predisassembly code. (U.K.)

  19. Evaluation of Advanced Models for PAFS Condensation Heat Transfer in SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Byoung-Uhn; Kim, Seok; Park, Yu-Sun; Kang, Kyung Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ahn, Tae-Hwan; Yun, Byong-Jo [Pusan National University, Busan (Korea, Republic of)

    2015-10-15

    The PAFS (Passive Auxiliary Feedwater System) is operated by natural circulation to remove the core decay heat through the PCHX (Passive Condensation Heat Exchanger), which is composed of nearly horizontal tubes. For validation of the cooling and operational performance of the PAFS, the PASCAL (PAFS Condensing Heat Removal Assessment Loop) facility was constructed, and the condensation heat transfer and natural convection phenomena in the PAFS were experimentally investigated at KAERI (Korea Atomic Energy Research Institute). From the PASCAL experimental result, it was found that conventional system analysis codes underestimated the condensation heat transfer. In this study, advanced condensation heat transfer models which can treat the heat transfer mechanisms with the different flow regimes in the nearly horizontal heat exchanger tube were analyzed. The models were implemented in a thermal hydraulic safety analysis code, SPACE (Safety and Performance Analysis Code for Nuclear Power Plant), and evaluated with the PASCAL experimental data. With the aim of enhancing the prediction capability for the condensation phenomenon inside the PCHX tube of the PAFS, advanced models for the condensation heat transfer were implemented into the wall condensation model of the SPACE code, and the PASCAL experimental result was utilized to validate them. Calculation results showed that the improved model for the condensation heat transfer coefficient enhanced the prediction capability of the SPACE code. This result confirms that the mechanistic modeling of film condensation in the steam phase and convection in the condensate liquid contributed to enhancing the prediction capability of the wall condensation model of the SPACE code and to reducing conservatism in the prediction of condensation heat transfer.
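The textbook baseline that the advanced SPACE models improve upon is Nusselt's film-condensation coefficient for a horizontal tube. The sketch below evaluates that classical correlation with rounded steam properties; it is not the SPACE wall-condensation model itself.

```python
def nusselt_horizontal_tube_h(rho_l, rho_v, k_l, mu_l, h_fg, d_tube, delta_t, g=9.81):
    """Classical Nusselt film condensation on a horizontal tube:
    h = 0.729 * [g * rho_l * (rho_l - rho_v) * h_fg * k_l^3
                 / (mu_l * d * dT)]^0.25   (SI units, W/m^2/K)."""
    num = g * rho_l * (rho_l - rho_v) * h_fg * k_l ** 3
    den = mu_l * d_tube * delta_t
    return 0.729 * (num / den) ** 0.25

# Saturated steam near 1 atm condensing on a 50 mm tube (rounded properties).
args = dict(rho_l=958.0, rho_v=0.6, k_l=0.68, mu_l=2.8e-4,
            h_fg=2.257e6, d_tube=0.05)
h10 = nusselt_horizontal_tube_h(delta_t=10.0, **args)
h20 = nusselt_horizontal_tube_h(delta_t=20.0, **args)

# A thicker film at larger wall subcooling lowers the coefficient (dT^-1/4
# dependence), and the magnitude lands in the usual kW/m^2/K range for steam.
assert h10 > h20 > 1000.0
```

Mechanistic models of the PASCAL kind go beyond this by tracking the flow regime along the nearly horizontal tube and adding the convection in the condensate layer, which is where the simple film theory underpredicts.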

  20. Development of code SFINEL (Spent fuel integrity evaluator)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Soo; Min, Chin Young; Ohk, Young Kil; Yang, Yong Sik; Kim, Dong Ju; Kim, Nam Ku [Hanyang University, Seoul (Korea)

    1999-01-01

The SFINEL code, an integrated computer program for predicting spent fuel rod integrity from burn-up history and the major degradation mechanisms, has been developed through this project. The code can simulate the power history of a fuel rod during reactor operation and estimate the degree of deterioration of the spent fuel cladding using recently developed models of the degradation mechanisms. SFINEL has been thoroughly benchmarked against collected in-pile data and operating experience: cladding deformation and rupture, cladding oxidation, rod internal pressure and creep, and finally the comprehensive whole degradation process. (author). 75 refs., 51 figs., 5 tabs.

  1. HETFIS: High-Energy Nucleon-Meson Transport Code with Fission

    International Nuclear Information System (INIS)

    Barish, J.; Gabriel, T.A.; Alsmiller, F.S.; Alsmiller, R.G. Jr.

    1981-07-01

A model that includes fission for predicting particle production spectra from medium-energy nucleon and pion collisions with nuclei (Z greater than or equal to 91) has been incorporated into the nucleon-meson transport code HETC. This report is primarily concerned with the programming aspects of HETFIS (High-Energy Nucleon-Meson Transport Code with Fission). A description of the program data and instructions for operating the code are given. HETFIS is written in FORTRAN IV for IBM computers and is readily adaptable to other systems.

  2. The CFEST-INV stochastic hydrology code: Mathematical formulation, application, and user's manual

    International Nuclear Information System (INIS)

    Devary, J.L.

    1987-06-01

Performance assessments of a nuclear waste repository must consider the hydrologic, thermal, mechanical, and geochemical environments of a candidate site. Prediction of radionuclide transport requires estimating water movement as a function of pressure, temperature, and solute concentration. CFEST (Coupled Fluid, Energy, and Solute Transport) is a finite-element groundwater code that can simultaneously solve the partial differential equations for pressure head, temperature, and solute concentration. The CFEST code has been designed to support site, repository, and waste package subsystem assessments. CFEST-INV is a stochastic hydrology code that was developed to augment the CFEST code in data processing, model calibration, performance prediction, error propagation, and data collection guidance. The CFEST-INV code uses kriging, finite-element modeling, adjoint-sensitivity, statistical-inverse, first-order variance, and Monte Carlo techniques to develop performance-measure-driven data collection schemes and to determine the waste isolation capabilities (including uncertainties) of candidate repository sites. This report contains the basic physical and numerical principles of the CFEST-INV code, its input parameters, verification exercises, a user's manual, and the code's application history. 18 refs., 16 figs., 6 tabs.

  3. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    Science.gov (United States)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling of transition to turbulence needs refinement, though preliminary results are promising.

  4. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....
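
The Bjøntegaard difference cited in this record is a standard way to average the gap between two rate-distortion curves. A minimal sketch of the BD-PSNR computation (cubic fit in log-rate, then the mean of the integrated difference over the overlapping rate range); the function name and data are illustrative, not taken from the paper:

```python
import numpy as np

def bd_psnr(rates_a, psnr_a, rates_b, psnr_b):
    """Bjontegaard delta-PSNR: average vertical gap (in dB) between two
    rate-distortion curves, each fitted with a cubic polynomial in
    log10(rate) and integrated over the overlapping rate interval."""
    la, lb = np.log10(rates_a), np.log10(rates_b)
    fit_a = np.polyfit(la, psnr_a, 3)
    fit_b = np.polyfit(lb, psnr_b, 3)
    lo, hi = max(la.min(), lb.min()), min(la.max(), lb.max())
    int_a = np.polyval(np.polyint(fit_a), [lo, hi])
    int_b = np.polyval(np.polyint(fit_b), [lo, hi])
    # positive result means codec B sits above codec A on average
    return ((int_b[1] - int_b[0]) - (int_a[1] - int_a[0])) / (hi - lo)
```

If codec B's curve sits uniformly 0.9 dB above codec A's at the same rates, the function returns 0.9, matching the kind of gain the abstract reports.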

  5. Performance Prediction of Centrifugal Compressor for Drop-In Testing Using Low Global Warming Potential Alternative Refrigerants and Performance Test Codes

    Directory of Open Access Journals (Sweden)

    Joo Hoon Park

    2017-12-01

As environmental regulations to curb global warming are strengthened around the world, studies using newly developed low global warming potential (GWP) alternative refrigerants are increasing. In this study, the substitute refrigerants R-1234ze(E) and R-1233zd(E) were used in the centrifugal compressor of an R-134a two-stage centrifugal chiller with a fixed rotational speed. Performance predictions and thermodynamic analyses of the centrifugal compressor for drop-in testing were performed. A performance prediction method based on the existing ASME PTC-10 performance test code was proposed. The proposed method yields the expected operating area and operating point of the centrifugal compressor with alternative refrigerants. The thermodynamic performance of the first and second stages of the centrifugal compressor was calculated as a polytropic state change. To verify the suitability of the proposed method, the drop-in test results of the two alternative refrigerants were compared. The predicted operating range based on the permissible deviation of ASME PTC-10 confirmed that the temperature difference was very small at the same efficiency. Because the drop-in test of R-1234ze(E) was performed within the expected operating range, the performance of the centrifugal compressor using R-1234ze(E) is considered well predicted. However, the predictions of the operating point and operating range of R-1233zd(E) were lower than the drop-in test results. The proposed performance prediction method will assist in understanding the thermodynamic performance at the expected operating point and operating area of a centrifugal compressor using alternative gases, based on limited design and structure information.
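
The polytropic evaluation this record refers to can be sketched for an ideal gas as follows. This is a schematic, textbook-style reduction, not the full ASME PTC-10 procedure, and the gas properties R and cp are supplied by the caller, not taken from the paper:

```python
import math

def polytropic_analysis(p1, T1, p2, T2, R, cp):
    """Schematic ideal-gas polytropic analysis of one compressor stage.
    p [Pa] and T [K] are the suction/discharge states; R and cp
    [J/(kg K)] are the caller's gas properties."""
    v1, v2 = R * T1 / p1, R * T2 / p2           # ideal-gas specific volumes
    n = math.log(p2 / p1) / math.log(v1 / v2)   # polytropic exponent, p*v^n = const
    head = n / (n - 1.0) * (p2 * v2 - p1 * v1)  # polytropic head, integral of v dp [J/kg]
    eta = head / (cp * (T2 - T1))               # head divided by actual enthalpy rise
    return n, head, eta
```

For a real refrigerant the specific volumes and enthalpies would come from an equation of state rather than the ideal-gas law, but the exponent, head, and efficiency are formed in the same way.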

  6. Subchannel analysis of a boiloff experiment by a system thermalhydraulic code

    International Nuclear Information System (INIS)

    Bousbia-Salah, A.; D'Auria, F.

    2001-01-01

This paper presents the results of a system thermalhydraulic code using the subchannel analysis approach to predict the Neptun boiloff experiments. This approach will be suitable for further work on coupling the system code with a 3-D neutron kinetics code. The boiloff tests were conducted to simulate the consequences of a loss of coolant inventory leading to uncovery and heat-up of the fuel elements of a nuclear reactor core. In this framework, the Neptun low-pressure test No. 5002, which is a good-repeat experiment, is considered. The calculations were carried out using the system transient analysis code RELAP5/Mod3.2. A detailed nodalization of the Neptun test section was developed. A reference case was run, and the overall data comparison shows good agreement between calculated and experimental thermalhydraulic parameters. A series of sensitivity analyses was also performed to assess the code's prediction capabilities. The results obtained were largely satisfactory, which also demonstrates the reasonable success of the subchannel analysis approach adopted in the present context for a system thermalhydraulic code. (author)

  7. Development and validation of computer codes for analysis of PHWR containment behaviour

    International Nuclear Information System (INIS)

    Markandeya, S.G.; Haware, S.K.; Ghosh, A.K.; Venkat Raj, V.

    1997-01-01

    In order to ensure that the design intent of the containment of Indian Pressurised Heavy Water Reactors (IPHWRs) is met, both analytical and experimental studies are being pursued at BARC. As a part of analytical studies, computer codes for predicting the behaviour of containment under various accident scenarios are developed/adapted. These include codes for predicting 1) pressure, temperature transients in the containment following either Loss of Coolant Accident (LOCA) or Main Steam Line Break (MSLB), 2) hydrogen behaviour in respect of its distribution, combustion and the performance of proposed mitigation systems, and 3) behaviour of fission product aerosols in the piping circuits of the primary heat transport system and in the containment. All these codes have undergone thorough validation using data obtained from in-house test facilities or from international sources. Participation in the International Standard Problem (ISP) exercises has also helped in validation of the codes. The present paper briefly describes some of these codes and the various exercises performed for their validation. (author)

  8. Assessment of TRAC-PF1/MOD1 code for large break LOCA in PWR

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio; Abe, Yutaka.

    1993-03-01

As the first step of the REFLA/TRAC code development, the TRAC-PF1/MOD1 code has been assessed against various experiments that simulate a postulated large-break loss-of-coolant accident (LBLOCA) in a PWR, to understand the predictive capability and to identify the problem areas of the code. The assessment calculations were performed for separate effect tests for critical flow, countercurrent flow, condensation at the cold leg, and reflood, as well as integral tests, to understand the predictability for individual phenomena. This report summarizes results from the assessment calculations of the TRAC-PF1/MOD1 code for LBLOCA in PWR. The assessment calculations made clear the predictive capability and problem areas of the TRAC-PF1/MOD1 code for LBLOCA in PWR. The areas listed below should be improved for more realistic and effective simulation of LBLOCA in PWR: (1) core heat transfer model during blowdown, (2) ECC bypass model at downcomer during refill, (3) condensation model during accumulator injection, and (4) core thermal hydraulic model during reflood. (author) 57 refs

  9. Application of advanced validation concepts to oxide fuel performance codes: LIFE-4 fast-reactor and FRAPCON thermal-reactor fuel performance codes

    Energy Technology Data Exchange (ETDEWEB)

    Unal, C., E-mail: cu@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Williams, B.J. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Yacout, A. [Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States); Higdon, D.M. [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States)

    2013-10-15

    Highlights: ► The application of advanced validation techniques (sensitivity, calibration and prediction) to nuclear performance codes FRAPCON and LIFE-4 is the focus of the paper. ► A sensitivity ranking methodology narrows down the number of selected modeling parameters from 61 to 24 for FRAPCON and from 69 to 35 for LIFE-4. ► Fuel creep, fuel thermal conductivity, fission gas transport/release, crack/boundary, and fuel gap conductivity models of LIFE-4 are identified for improvements. ► FRAPCON sensitivity results indicated the importance of the fuel thermal conduction and the fission gas release models. -- Abstract: Evolving nuclear energy programs expect to use enhanced modeling and simulation (M and S) capabilities, using multiscale, multiphysics modeling approaches, to reduce both cost and time from the design through the licensing phases. Interest in the development of the multiscale, multiphysics approach has increased in the last decade because of the need for predictive tools for complex interacting processes as a means of eliminating the limited use of empirically based model development. Complex interacting processes cannot be predicted by analyzing each individual component in isolation. In most cases, the mathematical models of complex processes and their boundary conditions are nonlinear. As a result, the solutions of these mathematical models often require high-performance computing capabilities and resources. The use of multiscale, multiphysics (MS/MP) models in conjunction with high-performance computational software and hardware introduces challenges in validating these predictive tools—traditional methodologies will have to be modified to address these challenges. The advanced MS/MP codes for nuclear fuels and reactors are being developed within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program of the US Department of Energy (DOE) – Nuclear Energy (NE). This paper does not directly address challenges in calibration

  10. Development of REFLA/TRAC code for engineering work station

    International Nuclear Information System (INIS)

    Ohnuki, Akira; Akimoto, Hajime; Murao, Yoshio

    1994-03-01

The REFLA/TRAC code is a best-estimate code which is expected to check reactor safety analysis codes for light water reactors (LWRs) and to perform accident analyses for LWRs and for an advanced LWR. A high predictive capability is therefore required, and the assessment of each physical model becomes important because the models govern the predictive capability. For the assessment of the three-dimensional models in the REFLA/TRAC code, a conventional large computer has been used, and it is difficult to perform the assessment efficiently because the turnaround time for the calculation and the analysis is long. Therefore, a version of the REFLA/TRAC code that can run on an engineering workstation (EWS) was developed. The calculational speed of a current EWS is of the same order as that of large computers, and the EWS has excellent functions for multidimensional graphical drawing. In addition, plotting processors for X-Y drawing and for two-dimensional graphical drawing were developed in order to perform efficient analyses of three-dimensional calculations. In the future, the assessment of three-dimensional models can be expected to become more efficient with the introduction of EWSs with higher calculational speed and improved graphical drawing. In this report, outlines of the following three programs are described: (1) the EWS version of the REFLA/TRAC code, (2) the plot processor for X-Y drawing, and (3) the plot processor for two-dimensional graphical drawing. (author)

  11. Application of the MELCOR code to design basis PWR large dry containment analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Jesse; Notafrancesco, Allen (USNRC, Office of Nuclear Regulatory Research, Rockville, MD); Tills, Jack Lee (Jack Tills & Associates, Inc., Sandia Park, NM)

    2009-05-01

The MELCOR computer code has been developed by Sandia National Laboratories under USNRC sponsorship to provide the capability for independently auditing analyses submitted by reactor manufacturers and utilities. MELCOR is a fully integrated code (encompassing the reactor coolant system and the containment building) that models the progression of postulated accidents in light water reactor power plants. To assess the adequacy of the containment thermal-hydraulic modeling incorporated in the MELCOR code for application to PWR large dry containments, several selected demonstration designs were analyzed. This report documents MELCOR code demonstration calculations performed for postulated design basis accident (DBA) analysis (LOCA and MSLB) inside containment, which are compared with other code results. The key processes when analyzing the containment loads inside PWR large dry containments are (1) expansion and transport of high mass/energy releases, (2) heat and mass transfer to structural passive heat sinks, and (3) containment pressure reduction due to engineered safety features. A code-to-code benchmarking for DBA events showed that MELCOR predictions of maximum containment loads were equivalent to similar predictions using a qualified containment code known as CONTAIN. This equivalency was found to apply to both single- and multi-cell containment models.

  12. Status of the CONTAIN computer code for LWR containment analysis

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Murata, K.K.; Rexroth, P.E.; Clauser, M.J.; Senglaub, M.E.; Sciacca, F.W.; Trebilcock, W.

    1983-01-01

    The current status of the CONTAIN code for LWR safety analysis is reviewed. Three example calculations are discussed as illustrations of the code's capabilities: (1) a demonstration of the spray model in a realistic PWR problem, and a comparison with CONTEMPT results; (2) a comparison of CONTAIN results for a major aerosol experiment against experimental results and predictions of the HAARM aerosol code; and (3) an LWR sample problem, involving a TMLB' sequence for the Zion reactor containment

  14. OECD International Standard Problem number 34. Falcon code comparison report

    International Nuclear Information System (INIS)

    Williams, D.A.

    1994-12-01

ISP-34 is the first ISP to address fission product transport issues and has been strongly supported by a large number of countries and organisations. The ISP is based on two experiments, FAL-ISP-1 and FAL-ISP-2, which were conducted in AEA's Falcon facility. Specific features of the experiments include quantification of chemical effects and aerosol behaviour; in particular, multi-component aerosol effects and vapour-aerosol interactions can all be investigated in the Falcon facility. Important parameters for participants to predict were the deposition profiles and composition, key chemical species and reactions, the evolution of suspended material concentrations, and the effects of steam condensation onto aerosols and particle hygroscopicity. The results of the Falcon ISP support the belief that aerosol physics is generally well modelled in primary circuit codes, but the chemistry models in many of the codes need to be improved, since chemical speciation is one of the main factors controlling transport and deposition behaviour. The importance of chemical speciation, aerosol nucleation, and the role of multi-component aerosols in determining transport and deposition behaviour is evident. The role of re-vaporization in these Falcon experiments is not clear; it is not possible to compare the codes which predicted re-vaporization with quantitative data. The evidence from this ISP exercise indicates that the containment codes can predict thermal-hydraulic conditions satisfactorily. However, the differences in the predicted aerosol locations in the Falcon tests showed that aerosol behaviour was very sensitive to parameters such as the particle size distribution.

  15. Optimising Boltzmann codes for the PLANCK era

    International Nuclear Information System (INIS)

    Hamann, Jan; Lesgourgues, Julien; Balbi, Amedeo; Quercellini, Claudia

    2009-01-01

    High precision measurements of the Cosmic Microwave Background (CMB) anisotropies, as can be expected from the PLANCK satellite, will require high-accuracy theoretical predictions as well. One possible source of theoretical uncertainty is the numerical error in the output of the Boltzmann codes used to calculate angular power spectra. In this work, we carry out an extensive study of the numerical accuracy of the public Boltzmann code CAMB, and identify a set of parameters which determine the error of its output. We show that at the current default settings, the cosmological parameters extracted from data of future experiments like Planck can be biased by several tenths of a standard deviation for the six parameters of the standard ΛCDM model, and potentially more seriously for extended models. We perform an optimisation procedure that leads the code to achieve sufficient precision while at the same time keeping the computation time within reasonable limits. Our conclusion is that the contribution of numerical errors to the theoretical uncertainty of model predictions is well under control—the main challenges for more accurate calculations of CMB spectra will be of an astrophysical nature instead

  16. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  17. Implementation of JAERI's reflood model into TRAC-PF1/MOD1 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-02-01

Selected physical models of the REFLA code, a reflood analysis code developed at JAERI, were implemented into the TRAC-PF1/MOD1 code in order to improve its predictive capability for the core thermal-hydraulic behavior during the reflood phase of a PWR LOCA. Through comparisons of the physical models of both codes, (1) the Murao-Iguchi void fraction correlation, (2) the correlation for the drag coefficient acting on drops, (3) the correlation for the wall heat transfer coefficient in the film boiling regime, (4) the quench velocity correlation, and (5) the heat transfer correlations for the dispersed flow regime were selected from the REFLA code for implementation into the TRAC-PF1/MOD1 code. A method for transforming the void fraction correlation into an equivalent interfacial friction model was developed, and the effect of the transformation method on the stability of the solution was discussed. Through assessment calculations using data from the CCTF (Cylindrical Core Test Facility) flat power test, it was confirmed that the predictive capability of the TRAC code for the core thermal-hydraulic behavior during reflood can be improved by implementing the selected physical models of the REFLA code. Several user guidelines for the modified TRAC code were proposed based on sensitivity studies on the number of fluid cells in the hydraulic calculation, and on the node number and the effect of axial heat conduction in the heat conduction calculation of the fuel rod. (author)

  18. Analyses and computer code developments for accident-induced thermohydraulic transients in water-cooled nuclear reactor systems

    International Nuclear Information System (INIS)

    Wulff, W.

    1977-01-01

    A review is presented on the development of analyses and computer codes for the prediction of thermohydraulic transients in nuclear reactor systems. Models for the dynamics of two-phase mixtures are summarized. Principles of process, reactor component and reactor system modeling are presented, as well as the verification of these models by comparing predicted results with experimental data. Codes of major importance are described, which have recently been developed or are presently under development. The characteristics of these codes are presented in terms of governing equations, solution techniques and code structure. Current efforts and problems of code verification are discussed. A summary is presented of advances which are necessary for reducing the conservatism currently implied in reactor hydraulics codes for safety assessment

  19. The Barrier code for predicting long-term concrete performance

    International Nuclear Information System (INIS)

    Shuman, R.; Rogers, V.C.; Shaw, R.A.

    1989-01-01

    There are numerous features incorporated into a LLW disposal facility that deal directly with critical safety objectives required by the NRC in 10 CFR 61. Engineered barriers or structures incorporating concrete are commonly being considered for waste disposal facilities. The Barrier computer code calculates the long-term degradation of concrete structures in LLW disposal facilities. It couples this degradation with water infiltration into the facility, nuclide leaching from the waste, contaminated water release from the facility, and associated doses to members of the critical population group. The concrete degradation methodology of Barrier is described

  20. Thermal-hydraulic analysis of water cooled breeding blanket of K-DEMO using MARS-KS code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jeong-Hun; Park, Il Woong; Kim, Geon-Woo; Park, Goon-Cherl [Seoul National University, Seoul (Korea, Republic of); Cho, Hyoung-Kyu, E-mail: chohk@snu.ac.kr [Seoul National University, Seoul (Korea, Republic of); Im, Kihak [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Highlights: • The thermal design of breeding blanket for the K-DEMO is evaluated using MARS-KS. • To confirm the prediction capability of MARS, the results were compared with the CFD. • The results of MARS-KS calculation and CFD prediction are in good agreement. • A transient simulation was carried out so as to show the applicability of MARS-KS. • A methodology to simulate the entire blanket system is proposed. - Abstract: The thermal design of a breeding blanket for the Korean Fusion DEMOnstration reactor (K-DEMO) is evaluated using the Multidimensional Analysis of Reactor Safety (MARS-KS) code in this study. The MARS-KS code has advantages in simulating transient two-phase flow over computational fluid dynamics (CFD) codes. In order to confirm the prediction capability of the code for the present blanket system, the calculation results were compared with the CFD prediction. The results of MARS-KS calculation and CFD prediction are in good agreement. Afterwards, a transient simulation for a conceptual problem was carried out so as to show the applicability of MARS-KS for a transient or accident condition. Finally, a methodology to simulate the multiple blanket modules is proposed.

  1. Hydrogen burn assessment with the CONTAIN code

    International Nuclear Information System (INIS)

    van Rij, H.M.

    1986-01-01

The CONTAIN computer code was developed at Sandia National Laboratories, under contract to the US Nuclear Regulatory Commission (NRC). The code is intended for calculations of containment loads during severe accidents and for prediction of the radioactive source term in the event that the containment leaks or fails. A strong point of the CONTAIN code is the continuous interaction of the thermal-hydraulic phenomena, aerosol behavior, and fission product behavior. The CONTAIN code can be used for light water reactors as well as liquid metal reactors. In order to evaluate the CONTAIN code on its merits, comparisons between the code and experiments must be made. In this paper, CONTAIN calculations for the hydrogen burn experiments carried out at the Nevada Test Site (NTS) are presented and compared with the experimental data. In the Large-Scale Hydrogen Combustion Facility at the NTS, 21 tests have been carried out. These tests were sponsored by the NRC and the Electric Power Research Institute (EPRI). The tests, carried out by EG&G, were performed in a spherical vessel 16 m in diameter with a design pressure of 700 kPa, substantially higher than that of most commercial nuclear containment buildings.
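
For context, a common first estimate of the hydrogen-burn loads that codes like CONTAIN compute in detail is the adiabatic isochoric complete combustion (AICC) pressure. The sketch below is the textbook bound with a constant heat capacity, not CONTAIN's burn model, and the property values are illustrative assumptions:

```python
def aicc_pressure(p1, T1, x_h2, du_h2=242e3, cv=21.0):
    """AICC pressure for a lean H2-air mixture: an upper-bound estimate
    assuming complete combustion at constant volume with no heat loss.
    du_h2 [J/mol]: heat release per mol H2 (~lower heating value);
    cv [J/(mol K)]: mixture molar heat capacity, held constant here."""
    n2 = 1.0 - 0.5 * x_h2               # H2 + 0.5 O2 -> H2O shrinks the mole count
    T2 = T1 + x_h2 * du_h2 / (n2 * cv)  # constant-volume energy balance
    return p1 * (n2 * T2) / T1          # ideal gas at fixed volume
```

For roughly 10% hydrogen at ambient conditions this bound gives a pressure ratio of around five, which illustrates why a 700 kPa test vessel can accommodate burns that would challenge a typical containment design pressure.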

  2. Parallelization of 2-D lattice Boltzmann codes

    International Nuclear Information System (INIS)

    Suzuki, Soichiro; Kaburaki, Hideo; Yokokawa, Mitsuo.

    1996-03-01

    Lattice Boltzmann (LB) codes to simulate two dimensional fluid flow are developed on vector parallel computer Fujitsu VPP500 and scalar parallel computer Intel Paragon XP/S. While a 2-D domain decomposition method is used for the scalar parallel LB code, a 1-D domain decomposition method is used for the vector parallel LB code to be vectorized along with the axis perpendicular to the direction of the decomposition. High parallel efficiency of 95.1% by the vector parallel calculation on 16 processors with 1152x1152 grid and 88.6% by the scalar parallel calculation on 100 processors with 800x800 grid are obtained. The performance models are developed to analyze the performance of the LB codes. It is shown by our performance models that the execution speed of the vector parallel code is about one hundred times faster than that of the scalar parallel code with the same number of processors up to 100 processors. We also analyze the scalability in keeping the available memory size of one processor element at maximum. Our performance model predicts that the execution time of the vector parallel code increases about 3% on 500 processors. Although the 1-D domain decomposition method has in general a drawback in the interprocessor communication, the vector parallel LB code is still suitable for the large scale and/or high resolution simulations. (author)
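
The scaling behaviour described in this record can be captured in a simple performance model of the kind the authors mention. The sketch below uses illustrative cost coefficients, not the measured VPP500/Paragon timings:

```python
import math

def efficiency_1d(N, P, t_cell=1.0, t_halo=0.1):
    """Parallel efficiency of a 1-D (strip) decomposition of an N x N
    lattice: each of the P ranks updates N*N/P cells and exchanges two
    N-cell halo rows per step. Coefficients are illustrative."""
    t_serial = N * N * t_cell
    t_parallel = t_serial / P + 2 * N * t_halo
    return t_serial / (P * t_parallel)

def efficiency_2d(N, P, t_cell=1.0, t_halo=0.1):
    """Same model for a 2-D (block) decomposition on a sqrt(P) x sqrt(P)
    grid of ranks: four halos of N/sqrt(P) cells each."""
    t_serial = N * N * t_cell
    t_parallel = t_serial / P + 4 * (N / math.sqrt(P)) * t_halo
    return t_serial / (P * t_parallel)
```

The halo term shrinks with P only in the 2-D case, which is the usual drawback of 1-D decomposition; at the modest processor counts and large grids reported here, the 1-D strip layout nevertheless stays efficient, consistent with the abstract's conclusion.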

  4. Identifying Psoriasis and Psoriatic Arthritis Patients in Retrospective Databases When Diagnosis Codes Are Not Available: A Validation Study Comparing Medication/Prescriber Visit-Based Algorithms with Diagnosis Codes.

    Science.gov (United States)

    Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M

    2018-01-01

    Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. To develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis code as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
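The four validation metrics reported above follow directly from a 2x2 confusion matrix of algorithm result versus reference standard. A minimal sketch; the counts below are illustrative numbers chosen to mimic the study's pattern (high PPV, low sensitivity), not its actual data:

```python
# Validation metrics from confusion-matrix counts: true/false positives (tp,
# fp) and false/true negatives (fn, tn) against the reference standard.
def diagnostic_metrics(tp, fp, fn, tn):
    """PPV, NPV, sensitivity, specificity for a binary classifier."""
    return {
        "PPV": tp / (tp + fp),          # of algorithm-positives, truly positive
        "NPV": tn / (tn + fn),          # of algorithm-negatives, truly negative
        "sensitivity": tp / (tp + fn),  # of true positives, flagged by algorithm
        "specificity": tn / (tn + fp),  # of true negatives, cleared by algorithm
    }

# Hypothetical counts giving PPV = 0.85 with low sensitivity, as in the study:
m = diagnostic_metrics(tp=170, fp=30, fn=430, tn=9370)
print({k: round(v, 2) for k, v in m.items()})
```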

  5. Assessment on the characteristics of the analysis code for KALIMER PSDRS

    Energy Technology Data Exchange (ETDEWEB)

    Eoh, Jae Hyuk; Sim, Yoon Sub; Kim, Seong O.; Kim, Yeon Sik; Kim, Eui Kwang; Wi, Myung Hwan [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-04-01

    The PARS2 code was developed to analyze the RHR (Residual Heat Removal) system of KALIMER, especially the PSDRS (Passive Safety Decay Heat Removal System). In this report, preliminary verification and sensitivity analyses for the PARS2 code were performed. The results show that PARS2 agrees well with the CRIEPI experimental data in the range of turbulent air-side flow, and that the radiation heat transfer mode is also well predicted. In this verification work it was found that the calculation stopped at very low air flow rates, and the numerical scheme related to the convergence of PARS2 was adjusted to solve this problem. Through a sensitivity analysis of the PARS2 results with respect to changes in the input parameters, the pool-mixing coefficient, which is related to the heat capacity of the structures in the system, was improved so that the physical phenomena are well predicted. The initial conditions for the calculation, such as the hot and cold pool temperatures at the time PSDRS starts operating, were set up using a transient analysis with the COMMIX code, and the surface emissivity of PSDRS was investigated and its permitted variation range established. From this study, the overall sensitivity characteristics of the PARS2 code were investigated, and the results of the sensitivity analyses can be used in the design of the RHR system of KALIMER. 14 refs., 28 figs., 2 tabs. (Author)

  6. Construction of Protograph LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
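The degree-2 bound stated above can be checked mechanically on a protograph base matrix. The sketch below encodes one reading of that condition (degree-2 variable nodes at most one fewer than the checks that remain after excluding checks touching degree-1 nodes); the example matrix and helper are illustrative assumptions, not the paper's construction:

```python
# B is a protograph base matrix: rows = check nodes, columns = variable
# nodes, entries = edge multiplicities between them.
def satisfies_degree2_bound(B):
    """True if #degree-2 variable nodes <= #checks - 1, where checks
    connected to degree-1 variable nodes are excluded from the count."""
    n_checks = len(B)
    col_deg = [sum(row[j] for row in B) for j in range(len(B[0]))]
    # Checks connected to degree-1 variable nodes are excluded.
    excluded = {i for i, row in enumerate(B)
                for j, d in enumerate(col_deg) if d == 1 and row[j] > 0}
    n_deg2 = sum(1 for d in col_deg if d == 2)
    return n_deg2 <= (n_checks - len(excluded)) - 1

# Small illustrative base graph with column (variable-node) degrees 2,2,3,3:
B = [[1, 1, 1, 1],
     [1, 0, 1, 1],
     [0, 1, 1, 1]]
print(satisfies_degree2_bound(B))
```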

  7. Improvement of MARS code reflood model

    International Nuclear Information System (INIS)

    Hwang, Moonkyu; Chung, Bub-Dong

    2011-01-01

    A specifically designed heat transfer model for the reflood process, which normally occurs at low flow and low pressure, was originally incorporated in the MARS code. The model is essentially identical to that of the RELAP5/MOD3.3 code. The model, however, is known to have underestimated the peak cladding temperature (PCT), with an earlier turn-over. In this study, the original MARS code reflood model is improved. Based on extensive sensitivity studies for both the hydraulic and wall heat transfer models, it is found that the dispersed flow film boiling (DFFB) wall heat transfer is the most influential process determining the PCT, whereas the interfacial drag model most affects the quenching time through the liquid carryover phenomenon. The model proposed by Bajorek and Young is incorporated for the DFFB wall heat transfer. Both spacer grid and droplet enhancement models are incorporated. Inverted annular film boiling (IAFB) is modeled using the original PSI model of the code. The flow transition between the DFFB and IAFB regimes is modeled using the TRACE code interpolation. A gas velocity threshold is also added to limit the top-down quenching effect. Assessment calculations are performed with the original and modified MARS codes for the Flecht-Seaset and RBHT tests. Improvements are observed in terms of the PCT and quenching time predictions in the Flecht-Seaset assessment. In the case of the RBHT assessment, the improvement over the original MARS code is marginal. A spacer grid effect, however, is clearly seen with the modified version of the MARS code. (author)
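The abstract mentions interpolating between the IAFB and DFFB regimes in the manner of TRACE. A minimal sketch of such a regime transition is a linear blend of the two wall heat-transfer coefficients over a void-fraction window; the window bounds and coefficient values here are illustrative assumptions, not the MARS or TRACE correlations:

```python
# Blend the inverted annular (IAFB) and dispersed flow (DFFB) film boiling
# heat-transfer coefficients by void fraction alpha; pure IAFB below a_lo,
# pure DFFB above a_hi, linear in between.
def film_boiling_htc(alpha, h_iafb, h_dffb, a_lo=0.6, a_hi=0.9):
    """Regime-interpolated wall heat-transfer coefficient."""
    if alpha <= a_lo:
        return h_iafb
    if alpha >= a_hi:
        return h_dffb
    w = (alpha - a_lo) / (a_hi - a_lo)   # 0 at a_lo, 1 at a_hi
    return (1.0 - w) * h_iafb + w * h_dffb

print(film_boiling_htc(0.75, h_iafb=250.0, h_dffb=120.0))  # midway -> 185.0
```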

  8. RELAP5/MOD2 code assessment using a LOFT L2-3 loss of coolant experiment

    International Nuclear Information System (INIS)

    Bang, Young Seok; Chung, Bub Dong; Kim, Hho Jung

    1990-01-01

    The LOFT LOCE L2-3 was simulated using the RELAP5/MOD2 Cycle 36.04 code to assess its capability of predicting the thermal-hydraulic phenomena in a LBLOCA of a PWR. The reactor vessel was simulated with two core channels and split-downcomer modeling for a base-case calculation using the frozen code. The result of the base calculation showed that the code predicted the hydraulic behavior and the blowdown thermal response in the high-power region of the core within a reasonable range, and that the code had deficiencies in the critical flow model during the subcooled/two-phase transition period, in the CHF correlation at high mass flux, and in the blowdown rewet criteria. An overprediction of coolant inventory due to these deficiencies yielded a poor prediction of the reflood thermal response. A sensitivity calculation with an updated version of RELAP5/MOD2 Cycle 36.04 improved the prediction of the rewet phenomena

  9. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    Science.gov (United States)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest High Efficiency Video Coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, the complexity target is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, the optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very large complexity control range (as low as 10%) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR are observed for 18 sequences when the target complexity is around 40%.
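The budget-driven mode selection described above can be sketched as a greedy fill: given a target fraction of full encoding time, keep adding prediction modes, in an offline-determined order, while their estimated cost still fits. The mode names, costs, and ordering below are hypothetical illustrations, not the paper's statistics:

```python
# Greedy complexity-budget mode selection. ranked_modes is ordered by offline
# statistics from most to least essential; costs are fractions of the full
# (all-modes) encoding time.
def select_modes(budget, ranked_modes):
    """Return the chosen modes and the complexity fraction they consume."""
    chosen, spent = [], 0.0
    for mode, cost in ranked_modes:
        if spent + cost <= budget:
            chosen.append(mode)
            spent += cost
    return chosen, spent

# Hypothetical HEVC inter/intra mode costs (binary fractions for exactness):
modes = [("merge", 0.125), ("2Nx2N", 0.125), ("Nx2N", 0.25),
         ("2NxN", 0.25), ("intra", 0.25)]
print(select_modes(0.40, modes))   # -> (['merge', '2Nx2N'], 0.25)
```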

  10. Capability of the RELAP5 code to simulate natural circulation behaviour in test facilities

    International Nuclear Information System (INIS)

    Mangal, Amit; Jain, Vikas; Nayak, A.K.

    2011-01-01

    In the present study, the extensively used best-estimate code RELAP5 has been used for simulation of the steady-state, transient and stability behavior of natural-circulation-based experimental facilities, such as the High-Pressure Natural Circulation Loop (HPNCL) and the Parallel Channel Loop (PCL), installed and operating at BARC. Test data have been generated for a range of pressure, power and subcooling conditions. The computer code RELAP5/MOD3.2 was applied to predict the transient natural circulation characteristics under single-phase and two-phase conditions, the thresholds of flow instability, and the amplitude and frequency of flow oscillations for different operating conditions of the loops. This paper presents the effect of nodalization on the prediction of natural circulation behavior in the test facilities and a comparison of the experimental data with the code predictions. The errors associated with the predictions are also characterized

  11. Code-B-1 for stress/strain calculation for TRISO fuel particle (Contract research)

    International Nuclear Information System (INIS)

    Aihara, Jun; Ueta, Shohei; Shibata, Taiju; Sawa, Kazuhiro

    2011-12-01

    We have developed Code-B-1 for predicting the failure probabilities of the coated fuel particles of high temperature gas-cooled reactors (HTGRs) under operation, by modification of an existing code. A finite element method (FEM) is employed for the stress calculation part, and Code-B-1 can treat the plastic deformation of the coating layers of the coated fuel particles, which the existing code cannot. (author)
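As a hedged illustration of the kind of calculation such codes automate, the sketch below evaluates the hoop stress in a thin spherical coating layer under internal gas pressure and a Weibull estimate of its failure probability. All numbers (pressure, geometry, Weibull parameters) are illustrative assumptions, not Code-B-1 inputs, and Code-B-1 itself uses FEM rather than this thin-shell formula:

```python
import math

# Thin spherical pressure-vessel shell: tangential stress sigma_t = p*r/(2t).
def hoop_stress(p_int, r_mean, thickness):
    """Hoop stress (same units as p_int) in a thin spherical layer."""
    return p_int * r_mean / (2.0 * thickness)

# Two-parameter Weibull failure probability for a brittle layer in tension.
def weibull_failure_prob(sigma, sigma_0, m):
    """P_f = 1 - exp(-(sigma/sigma_0)^m) for tensile stress sigma > 0."""
    if sigma <= 0.0:
        return 0.0           # compressive/zero stress: no tensile failure
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))

# Illustrative SiC layer: 25 MPa internal pressure, 425 um mean radius,
# 35 um thickness (SI units below), Weibull sigma_0 = 400 MPa, m = 6.
sigma = hoop_stress(p_int=25e6, r_mean=425e-6, thickness=35e-6)
print(round(sigma / 1e6, 1), "MPa")
print(weibull_failure_prob(sigma, sigma_0=400e6, m=6.0))
```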

  12. Linking the plasma code EDGE2D to the neutral code NIMBUS for a self consistent transport model of the boundary

    International Nuclear Information System (INIS)

    De Matteis, A.

    1987-01-01

    This report describes the fully automatic linkage between the finite-difference, two-dimensional code EDGE2D, based on the classical Braginskii partial differential equations of ion transport, and the Monte Carlo code NIMBUS, which solves the integral form of the stationary, linear Boltzmann equation for neutral transport in a plasma. The coupling has been performed for the real poloidal geometry of JET with two belt limiters and real magnetic configurations with or without a single null point. The new integrated system starts from the magnetic geometry computed by predictive or interpretative equilibrium codes and yields the plasma and neutral characteristics in the edge

  13. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For the study, the behavior mechanism of corrosion products in the primary system of Kori unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed from domestic plant data using the CRUDTRAN code. The radionuclide inventory of the primary system predicted in this study is expected to serve as baseline data for estimating the volume of radioactive waste when decommissioning a nuclear power plant in the future, an important criterion in classifying the level of radioactive wastes and computing their quantity. It is also expected to be used in reducing the radiation exposure of workers performing maintenance and repairs in high-radiation areas and in selecting decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct source-term assessments for other NPP types, such as CANDU and OPR-1000, in addition to the Westinghouse-type nuclear plants.
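An inventory predicted at shutdown decays before decommissioning work begins, which is part of why such source-term predictions matter for worker dose. A minimal sketch of that decay for one activated corrosion product, Co-60 (half-life about 5.27 years); the initial activity is an assumed illustrative value, not a CRUDTRAN result for Kori unit 1:

```python
import math

# Radioactive decay: A(t) = A0 * exp(-ln(2) * t / t_half).
def decayed_activity(a0_bq, t_years, t_half_years):
    """Activity (Bq) after t_years for a nuclide with the given half-life."""
    return a0_bq * math.exp(-math.log(2.0) * t_years / t_half_years)

a0 = 1.0e12   # Bq, assumed shutdown inventory of Co-60 in primary-system crud
for t in (0.0, 5.27, 10.54):          # 0, 1, and 2 half-lives
    print(t, f"{decayed_activity(a0, t, 5.27):.3e}")
```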

  14. PHEBUS FP release analysis using a microstructure-based code

    International Nuclear Information System (INIS)

    Carlucci, L.N.

    1992-03-01

    The results of pre-test fission-product (FP) release analyses of the first two PHEBUS FP experiments, FPT0 and FPT1, indicate that the FREEDOM microstructure-based code predicts significant differences in both the timing and percent of gaseous FP releases for the two tests. To provide an indication of its predictive capability, FREEDOM was also used to model the high-burnup fuel tested in the Oak Ridge National Laboratory experiments VI-2 and VI-3. For these, the code was found to overpredict releases during the early stages of the tests and to underpredict releases during the later stages. The release kinetics in both tests were reasonably predicted, however. In view of the above, it is likely that the FREEDOM predictions of the final cumulative releases for the first two PHEBUS FP tests are lower-bound estimates. However, the significant difference in the predicted timing of initial releases for the two tests is felt to be indicative of what will occur. Therefore, this difference should be considered in the planning and conduct of the two tests, particularly aspects related to on-line measurements

  15. Fusion safety codes: International modeling with MELCOR and ATHENA-INTRA

    CERN Document Server

    Marshall, T; Topilski, L; Merrill, B

    2002-01-01

    For a number of years, the world fusion safety community has been benchmarking its safety analysis codes against experimental data to support regulatory approval of a next-step fusion device. This paper discusses the benchmarking of two prominent fusion safety thermal-hydraulic computer codes. The MELCOR code was developed in the US for fission severe-accident safety analyses and has been modified for fusion safety analyses. The ATHENA code is a multifluid version of the US-developed RELAP5 code that is also widely used for fusion safety analyses. The ENEA Fusion Division uses ATHENA in conjunction with the INTRA code for its safety analyses. The INTRA code was developed in Germany and predicts containment building pressures, temperatures and fluid flow. ENEA employs the French-developed ISAS system to couple ATHENA and INTRA. This paper provides a brief introduction to the MELCOR and ATHENA-INTRA codes and presents their modeling results for the following breaches of a water cooling line into the...

  16. A zero-dimensional EXTRAP computer code

    International Nuclear Information System (INIS)

    Karlsson, P.

    1982-10-01

    A zero-dimensional computer code has been designed for the EXTRAP experiment to predict the density and the temperature and their dependence upon parameters such as the plasma current and the filling pressure of neutral gas. EXTRAP is a Z-pinch immersed in a vacuum octupole field and can be either linear or toroidal. In this code the density and temperature are assumed to be constant from the axis up to a breaking point, from where they decrease linearly in the radial direction out to the plasma radius. All quantities, however, are averaged over the plasma volume, thus giving the code its zero-dimensional character. The particle, momentum and energy one-fluid equations are solved, including the effects of the surrounding neutral gas and oxygen impurities. The code shows that the temperature and density are very sensitive to the shape of the plasma, with flatter profiles giving higher temperatures and densities. The temperature, however, is not strongly affected for oxygen concentrations of less than 2% and stays well above the radiation barrier even for higher concentrations. (Author)
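The flat-then-linear profile described above can be volume-averaged explicitly, which is the quantity a zero-dimensional model works with. The sketch below evaluates that average for a cylindrical plasma, <f> = (2/a^2) * integral of f(r)*r dr, with an assumed edge value of zero; it is a generic illustration, not the EXTRAP code's implementation:

```python
# Volume average of a profile that is f0 from the axis to the breaking point
# r_b and falls linearly to f_edge at the plasma radius a (cylindrical
# geometry), evaluated with a midpoint rule.
def volume_average(f0, r_b, a, f_edge=0.0, n=100000):
    """Midpoint-rule estimate of (2/a^2) * int_0^a f(r) r dr."""
    dr = a / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        if r <= r_b:
            f = f0
        else:  # linear drop from f0 at r_b to f_edge at a
            f = f0 + (f_edge - f0) * (r - r_b) / (a - r_b)
        total += f * r * dr
    return 2.0 * total / a**2

# Flatter profiles (larger r_b) give a higher average, as the abstract notes:
print(round(volume_average(1.0, 0.2, 1.0), 3))
print(round(volume_average(1.0, 0.8, 1.0), 3))
```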

  17. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor, and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  18. The development of the code package PERMAK-3D/SC-1

    International Nuclear Information System (INIS)

    Bolobov, P. A.; Oleksuk, D. A.

    2011-01-01

    The code package PERMAK-3D/SC-1 was developed for performing coupled pin-by-pin neutronic and thermal-hydraulic calculations of a core fragment of seven fuel assemblies. It was designed on the basis of the 3D multigroup pin-by-pin code PERMAK-3D and the 3D subchannel thermal-hydraulic code SC-1. The code package predicts the axial and radial pin-by-pin power distribution and the coolant parameters in the simulated region (enthalpies, velocities, void fractions, boiling and DNBR margins). The report describes some new steps in the code package development. Some PERMAK-3D/SC-1 results for WWER calculations are presented in the report. (Authors)

  19. Roadmap for the Future of Commercial Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Jian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-01

    Building energy codes have significantly increased building efficiency over the last 38 years, since the first national energy code was published in 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, the inability to handle optimization that is specific to building type and use, the inability to account for project-specific energy costs, and the lack of follow-through or accountability after a certificate of occupancy is granted. It is likely that an approach that considers the building as an integrated system will be necessary to achieve the next real gains in building efficiency. This report provides a high-level review of different formats for commercial building energy codes, including prescriptive, prescriptive packages, capacity constrained, outcome based, and predictive performance approaches. This report also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria.

  20. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade has facilitated the development of high-fidelity simulations in nuclear engineering, allowing more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal-hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast-running codes with the best physical models are needed for high-fidelity coupled thermal-hydraulic / neutron-kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and BWRs. It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays - Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  1. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade has facilitated the development of high-fidelity simulations in nuclear engineering, allowing more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal-hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast-running codes with the best physical models are needed for high-fidelity coupled thermal-hydraulic / neutron-kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and BWRs. It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays - Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  2. Subchannel analysis code development for CANDU fuel channel

    International Nuclear Information System (INIS)

    Park, J. H.; Suk, H. C.; Jun, J. S.; Oh, D. J.; Hwang, D. H.; Yoo, Y. J.

    1998-07-01

    While several subchannel codes, such as COBRA and TORC, are available in our country for a PWR fuel channel, none exists for a CANDU fuel channel. A subchannel analysis code for a CANDU fuel channel was therefore developed to predict the flow conditions in the subchannels, to accurately assess the thermal margin, and to evaluate the effects of appendages and of the radial/axial power profiles of fuel bundles on flow conditions and CHF. In order to develop the subchannel analysis code for a CANDU fuel channel, subchannel analysis methodology and its applicability to a fuel channel were reviewed from the CANDU fuel channel point of view. Several thermal-hydraulic and numerical models for subchannel analysis of a CANDU fuel channel were developed. Experimental data for the CANDU fuel channel were collected, analyzed, and used to validate the subchannel analysis code developed in this work. (author). 11 refs., 3 tabs., 50 figs

  3. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code is obtained by typing the organ name or the code number itself, from among the upper- and lower-level codes of the selected one displayed simultaneously on the screen. According to the first digit of the selected organ code, the corresponding pathology code file is chosen automatically. The proper pathology code is then obtained in the same fashion as the organ code selection. An example of an obtained ACR code is '131.3661'. This procedure is reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code is achieved by the same program, and incorporation of this program into another data-processing program is possible. The program has the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, it can be used to automate routine work in the department of radiology

  4. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    Science.gov (United States)

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
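Tile coding, the baseline representation in the study above, overlays several offset grids (tilings) on the input space; each tiling contributes exactly one active feature, so a continuous input becomes a sparse binary vector and nearby inputs share most of their active features. The sketch below is a generic one-dimensional illustration with assumed tiling parameters, not the prosthesis control system's actual encoder:

```python
# One-dimensional tile coder: n_tilings offset grids over [lo, hi), each with
# tiles_per_dim tiles (plus one overflow tile for the shifted top edge).
def tile_indices(x, n_tilings=4, tiles_per_dim=8, lo=0.0, hi=1.0):
    """Return the active feature index in each tiling for scalar x."""
    active = []
    width = (hi - lo) / tiles_per_dim
    for t in range(n_tilings):
        offset = t * width / n_tilings          # each tiling shifted slightly
        idx = int((x - lo + offset) / width)
        idx = min(idx, tiles_per_dim)           # clamp the shifted top edge
        active.append(t * (tiles_per_dim + 1) + idx)
    return active

# Nearby inputs share most active tiles, which is what gives generalization:
print(tile_indices(0.50))
print(tile_indices(0.55))
```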

  5. Analytical considerations in the code qualification of piping systems

    International Nuclear Information System (INIS)

    Antaki, G.A.

    1995-01-01

    The paper addresses several analytical topics in the design and qualification of piping systems which have a direct bearing on the prediction of stresses in the pipe and hence on the application of the equations of NB, NC and ND-3600 of the ASME Boiler and Pressure Vessel Code. For each of the analytical topics, the paper summarizes the current code requirements, if any, and the industry practice

  6. Light water reactor fuel analysis code FEMAXI-V (Ver.1)

    International Nuclear Information System (INIS)

    Suzuki, Motoe

    2000-09-01

    The light water reactor fuel analysis code FEMAXI-V is an advanced version produced by integrating FEMAXI-IV(Ver.2), the high-burnup fuel code EXBURN-I, and a number of functional improvements and extensions, to predict fuel rod behavior in normal and transient (not accident) conditions. The present report describes in detail the basic theories and structure, the models and numerical solutions applied, the improvements and extensions, and the material properties adopted in FEMAXI-V(Ver.1). FEMAXI-V deals with a single fuel rod. It predicts the thermal and mechanical response of the fuel rod to irradiation, including FP gas release. The thermal analysis predicts the rod temperature distribution on the basis of pellet heat generation, changes in pellet thermal conductivity and gap thermal conductance, and (transient) changes in surface heat transfer to the coolant, using a radial one-dimensional geometry. The heat generation density profile of the pellet can be determined by adopting the calculated results of a burnup analysis code. The mechanical analysis performs elastic/plastic, creep and PCMI calculations by FEM. The FP gas release model calculates the diffusion of FP gas atoms and their accumulation in bubbles, their release, and the resulting increase in rod internal pressure. In every analysis, it is possible to let material properties and empirical equations depend on the local burnup or heat flux, which in particular enables analysis of high-burnup fuel behavior and of boiling transients in BWR rods. In order to facilitate effective and wide-ranging application of the code, the formats and methods of input/output of the code are also described, and a sample output in an actual form is included. (author)
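The radial one-dimensional thermal analysis described above rests on textbook conduction in a cylindrical pellet. For uniform volumetric heating, the centreline-to-surface temperature rise reduces to a formula that depends only on the linear power and the conductivity. The sketch below is that textbook relation with illustrative values, not FEMAXI-V's actual thermal model (which tracks burnup-dependent conductivity and gap conductance):

```python
import math

# For a cylindrical pellet of radius R with uniform volumetric heating q''' and
# constant conductivity k: dT = q''' R^2 / (4k). Writing q''' = q' / (pi R^2)
# in terms of the linear power q' gives dT = q' / (4 pi k), independent of R.
def pellet_centerline_rise(linear_power_w_per_m, k_w_per_mk):
    """Centreline-to-surface temperature rise (K) across the pellet."""
    return linear_power_w_per_m / (4.0 * math.pi * k_w_per_mk)

# Illustrative: a 25 kW/m rod with k = 3 W/(m.K) gives a rise of ~663 K.
print(round(pellet_centerline_rise(25e3, 3.0)), "K")
```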

  7. Developmental assessment of the SCDAP/RELAP5 code

    International Nuclear Information System (INIS)

    Harvego, E.A.; Slefken, L.J.; Coryell, E.W.

    1997-01-01

    The development and assessment of the late-phase damage progression models in the current version (designated MOD3.2) of the SCDAP/RELAP5 code are described. The SCDAP/RELAP5 code is being developed at the Idaho National Engineering and Environmental Laboratory under the primary sponsorship of the US Nuclear Regulatory Commission (NRC) to provide best-estimate transient simulations of light water reactor coolant systems (RCS) during severe accident conditions. Recent modeling improvements made to the MOD3.2 version of the code include (1) molten pool formation and heat up, including the transient start-up of natural circulation heat transfer, (2) in-core molten pool thermal-mechanical crust failure, (3) the melting and relocation of upper plenum structures, and (4) improvements in the modeling of lower plenum debris behavior and the potential for failure of the lower head. Finally, to eliminate abrupt transitions between core damage states and provide more realistic predictions of late phase accident progression phenomena, a transition smoothing methodology was developed and implemented that results in the calculation of a gradual transition from an intact core geometry through the different core damage states leading to molten pool formation. A wide range of experiments and modeling tools were used to assess the capabilities of MOD3.2. The results of the SCDAP/RELAP5/MOD3.2 assessment indicate that modeling improvements have significantly enhanced the code capabilities and performance in several areas compared to the earlier code version. New models for transition smoothing between core damage states, and modeling improvements/additions for cladding oxide failure, molten pool behavior, and molten pool crust failure have significantly improved the code usability for a wide range of applications and have significantly improved the prediction of hydrogen production, molten pool melt mass and core melt relocation time

  8. Positive predictive values of the International Classification of Disease, 10th edition diagnoses codes for diverticular disease in the Danish National Registry of Patients

    Directory of Open Access Journals (Sweden)

    Rune Erichsen

    2010-10-01

Full Text Available Rune Erichsen1, Lisa Strate2, Henrik Toft Sørensen1, John A Baron3. 1Department of Clinical Epidemiology, Aarhus University Hospital, Denmark; 2Division of Gastroenterology, University of Washington, Seattle, WA, USA; 3Departments of Medicine and of Community and Family Medicine, Dartmouth Medical School, NH, USA. Objective: To investigate the accuracy of diagnostic coding for diverticular disease in the Danish National Registry of Patients (NRP). Study design and setting: At Aalborg Hospital, Denmark, with a catchment area of 640,000 inhabitants, we identified 100 patients recorded in the NRP with a diagnosis of diverticular disease (International Classification of Disease codes, 10th revision [ICD-10] K572–K579) during the 1999–2008 period. We assessed the positive predictive value (PPV) as a measure of the accuracy of discharge codes for diverticular disease, using information from discharge abstracts and outpatient notes as the reference standard. Results: Of the 100 patients coded with diverticular disease, 49 had complicated diverticular disease, whereas 51 had uncomplicated diverticulosis. For the overall diagnosis of diverticular disease (K57), the PPV was 0.98 (95% confidence interval [CI]: 0.93, 0.99). For the more detailed subgroups of diagnosis indicating the presence or absence of complications (K573–K579), the PPVs ranged from 0.67 (95% CI: 0.09, 0.99) to 0.92 (95% CI: 0.52, 1.00). The diagnosis codes did not allow accurate identification of uncomplicated disease or any specific complication. However, the combined ICD-10 codes K572, K574, and K578 had a PPV of 0.91 (95% CI: 0.71, 0.99) for any complication. Conclusion: The diagnosis codes in the NRP can be used to identify patients with diverticular disease in general; however, they do not accurately discern patients with uncomplicated diverticulosis or with specific diverticular complications. Keywords: diverticulum, colon, diverticulitis, validation studies
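The PPV figures above are simple proportions with binomial confidence intervals. As an illustrative sketch only — using the overall K57 result (98 of 100 coded cases confirmed) and a Wilson score interval, since the paper's exact CI method is not stated here:

```python
import math

def ppv_with_wilson_ci(confirmed, coded, z=1.96):
    """Positive predictive value = confirmed / coded, with a Wilson
    score 95% confidence interval (one common choice of interval)."""
    p = confirmed / coded
    denom = 1 + z**2 / coded
    center = (p + z**2 / (2 * coded)) / denom
    half = z * math.sqrt(p * (1 - p) / coded + z**2 / (4 * coded**2)) / denom
    return p, center - half, center + half

# Overall K57 diagnosis: 98 confirmed out of 100 coded patients.
ppv, lo, hi = ppv_with_wilson_ci(98, 100)
```

The resulting interval is close to the reported (0.93, 0.99) for the overall K57 diagnosis.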

  9. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: -1) Analysis of thermal experiments on a water loop at high or low pressure; steady state or transient behavior; -2) Analysis of thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - Flowrate in parallel channels coupled or not by conduction across plates, with conditions of pressure drops or flowrate, variable or not with respect to time is given; the power can be coupled to reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one dimensional, multi-channel code, and has as its complement (FLID), a one-channel, two-dimensional code. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples on non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)

  10. Development and verification of the LIFE-GCFR computer code for predicting gas-cooled fast-reactor fuel-rod performance

    International Nuclear Information System (INIS)

    Hsieh, T.C.; Billone, M.C.; Rest, J.

    1982-03-01

The fuel-pin modeling code LIFE-GCFR has been developed to predict the thermal, mechanical, and fission-gas behavior of a Gas-Cooled Fast Reactor (GCFR) fuel rod under normal operating conditions. It consists of three major components: thermal, mechanical, and fission-gas analysis. The thermal analysis includes calculations of coolant, cladding, and fuel temperature; fuel densification; pore migration; fuel grain growth; and plenum pressure. Fuel mechanical analysis includes thermal expansion, elasticity, creep, fission-product swelling, hot pressing, cracking, and crack healing of fuel; and thermal expansion, elasticity, creep, and irradiation-induced swelling of cladding. Fission-gas analysis simultaneously treats all major mechanisms thought to influence fission-gas behavior, which include bubble nucleation, re-solution, diffusion, migration, and coalescence; temperature and temperature gradients; and fission-gas interaction with structural defects

  11. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller
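The Gaussian plume formulation mentioned last has a standard closed-form expression. A minimal sketch (the dispersion parameters σy and σz are supplied directly rather than derived from atmospheric stability classes, and all names are illustrative, not PRESTO's actual implementation):

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (mass/m^3) at
    crosswind offset y (m) and height z (m), for emission rate q (mass/s),
    wind speed u (m/s), and effective release height h (m)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image-source term for ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level concentration directly under the plume centerline.
c_axis = gaussian_plume(1.0, 2.0, 0.0, 0.0, 10.0, 20.0, 10.0)
```

The concentration is symmetric in the crosswind coordinate and maximal on the plume centerline, as the formula requires.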

  12. Electronic manual of the nuclear characteristics analysis code-set for FBR

    International Nuclear Information System (INIS)

    Makino, Tohru

    2001-03-01

Reactor Physics Gr., System Engineering Technology Division, O-arai Engineering Center has consolidated the nuclear design database to improve analytical methods and prediction accuracy for large fast breeder cores such as demonstration or commercial FBRs, building on previous research. The up-to-date information about the usage of the nuclear characteristics analysis code-set was compiled as a part of the improvement of the basic design database for FBR cores. The outlines of the electronic manual are as follows: (1) The electronic manual includes explanations of the following codes: JOINT : Code Interface Program. SLAROM, CASUP : Effective Cross Section Calculation Code. CITATION-FBR : Diffusion Analysis Code. PERKY : Perturbative Diffusion Analysis Code. SNPERT, SNPERT-3D : Perturbative Transport Analysis Code. SAGEP, SAGEP-3D : Sensitivity Coefficient Calculation Code. NSHEX : Transport Analysis Code using Nodal Method. ABLE : Cross Section Adjustment Calculation Code. ACCEPT : Predicting Accuracy Evaluation Code. (2) The electronic manual is described using HTML and PDF file formats for easy maintenance and updating, and for easy reference through the JNC Intranet. Users can refer to manual pages with an ordinary Web browser without any special setup. (3) Many of the manual pages include link tags to jump to related pages. String search is available in both HTML and PDF documents. (4) Users can download source code, sample input data and shell script files to carry out each analysis from the download page of each code (JNC internal only). (5) Usage of the electronic manual and the maintenance/updating process are described in this report, making it possible to enroll new codes or new information in the electronic manual. Since the information takes into account the modifications and error fixes added to each code after the last consolidation in 1994, the electronic manual covers the most recent status of the nuclear characteristics analysis code-set.
One of other advantages of use

  13. Introduction and immigration of TRAC-PF1 code

    International Nuclear Information System (INIS)

    Yan Yuhua; Gao Zuying; Gao Cheng; Li Jincai

    1997-01-01

The TRAC-PF1 code performs best-estimate predictions of postulated accidents for pressurized light water reactors. It is one of the few system analysis codes that use a two-fluid model to treat two-phase problems in nuclear systems. In order to use this advanced software in China and make it possible to run it on different computer systems, the IBM version of the TRAC-PF1 code, imported from the USA National Energy Software Center, was migrated to the CDC NOS/VE system and the SUN workstation. The differences in computer languages from the IBM 370 to CDC NOS/VE and to the SUN workstation were handled with appropriate modifications. All the benchmark problems were calculated, and the results show that the migration was successful

  14. Quality assurance procedures for the CONTAIN severe reactor accident computer code

    International Nuclear Information System (INIS)

    Russell, N.A.; Washington, K.E.; Bergeron, K.D.; Murata, K.K.; Carroll, D.E.; Harris, C.L.

    1991-01-01

The CONTAIN quality assurance program follows a strict set of procedures designed to ensure the integrity of the code, to avoid errors in the code, and to prolong the life of the code. The code itself is maintained under a code-configuration control system that provides a historical record of changes. All changes are incorporated using an update processor that allows separate identification of improvements made to each successive code version. Code modifications and improvements are formally reviewed and checked. An exhaustive, multilevel test program validates the theory and implementation of all code changes through assessment calculations that compare the code-predicted results to standard handbooks of idealized test cases. A document trail and archive establish the problems solved by the software, the verification and validation of the software, software changes and subsequent reverification and revalidation, and the tracking of software problems and actions taken to resolve those problems. This document describes in detail the CONTAIN quality assurance procedures. 4 refs., 21 figs., 4 tabs

  15. A CFD code comparison of wind turbine wakes

    International Nuclear Information System (INIS)

    Van der Laan, M P; Sørensen, N N; Storey, R C; Cater, J E; Norris, S E

    2014-01-01

A comparison is made between the EllipSys3D and SnS CFD codes. Both codes are used to perform Large-Eddy Simulations (LES) of single wind turbine wakes, using the actuator disk method. The comparison shows that both LES models predict similar velocity deficits and stream-wise Reynolds-stresses for four test cases. A grid resolution study, performed in EllipSys3D and SnS, shows that a minimal uniform cell spacing of 1/30 of the rotor diameter is necessary to resolve the wind turbine wake. In addition, the LES-predicted velocity deficits are also compared with Reynolds-Averaged Navier-Stokes simulations using EllipSys3D for a test case that is based on field measurements. In these simulations, two eddy viscosity turbulence models are employed: the k-ε model and the k-ε-f_P model. Where the k-ε model fails to predict the velocity deficit, the results of the k-ε-f_P model show good agreement with both LES models and measurements

  16. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    International Nuclear Information System (INIS)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-01-01

The need to define new RIA safety criteria has highlighted the importance of performing a rigorous assessment of transient code capabilities. The present work is a comparative exercise devoted to identifying the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  17. Modeling RIA scenarios with the FRAPTRAN and SCANAIR codes

    Energy Technology Data Exchange (ETDEWEB)

    Sagrado Garcia, I. C.; Vallejo, I.; Herranz, L. E.

    2013-07-01

The need to define new RIA safety criteria has highlighted the importance of performing a rigorous assessment of transient code capabilities. The present work is a comparative exercise devoted to identifying the origin of the key deviations found between the predictions of FRAPTRAN-1.4 and SCANAIR-7.1. To do so, the calculations submitted by CIEMAT to the OECD/NEA RIA benchmark have been exploited. This work shows that deviations in clad temperatures mainly come from the treatment of the oxide layer. The systematically higher deformations calculated by FRAPTRAN-1.4 in early failed tests are caused by the different gap closure estimation. Besides, the dissimilarities observed in the FGR predictions are inherent to the different modeling strategies adopted in each code.

  18. HELIOS/DRAGON/NESTLE codes' simulation of void reactivity in a CANDU core

    International Nuclear Information System (INIS)

    Sarsour, H.N.; Rahnema, F.; Mosher, S.; Turinsky, P.J.; Serghiuta, D.; Marleau, G.; Courau, T.

    2002-01-01

    This paper presents results of simulation of void reactivity in a CANDU core using the NESTLE core simulator, cross sections from the HELIOS lattice physics code in conjunction with incremental cross sections from the DRAGON lattice physics code. First, a sub-region of a CANDU6 core is modeled using the NESTLE core simulator and predictions are contrasted with predictions by the MCNP Monte Carlo simulation code utilizing a continuous energy model. In addition, whole core modeling results are presented using the NESTLE finite difference method (FDM), NESTLE nodal method (NM) without assembly discontinuity factors (ADF), and NESTLE NM with ADF. The work presented in this paper has been performed as part of a project sponsored by the Canadian Nuclear Safety Commission (CNSC). The purpose of the project was to gather information and assess the accuracy of best estimate methods using calculational methods and codes developed independently from the CANDU industry. (author)

  19. Development, validation and application of NAFA 2D-CFD code

    International Nuclear Information System (INIS)

    Vaidya, A.M.; Maheshwari, N.K.; Vijayan, P.K.; Saha, D.

    2010-01-01

A 2D axi-symmetric code named NAFA (Version 1.0) has been developed for studying pipe flow under various conditions. It can handle laminar/turbulent flows, with or without heat transfer, under sub-critical/super-critical conditions. The code solves the momentum and energy equations with the standard k-ε turbulence model (with standard wall functions). It solves pipe flow subjected to 'velocity inlet', 'wall', 'axis' and 'pressure outlet' boundary conditions. It is validated for several cases by comparing its results with experimental data/analytical solutions/correlations. The code has excellent convergence characteristics, as verified from the fall of the equation residuals in each case. It has a proven capability of generating mesh-independent results for laminar as well as turbulent flows. The code is applied to supercritical flows. For supercritical flows, the effect of mesh size on the prediction of the heat transfer coefficient is studied. With grid refinement, the Y+ value reduces and reaches the limiting value of 11.63; hence the accuracy is found to increase with grid refinement. NAFA is able to qualitatively predict the effect of heat flux and operating pressure on the heat transfer coefficient. The heat transfer coefficient matches well with experimental values under various conditions. (author)

  20. Development of migration prediction system (MIGSTEM) for cationic species of radionuclides through soil layers

    International Nuclear Information System (INIS)

    Ohnuki, Toshihiko; Takebe, Shinichi; Yamamoto, Tadatoshi

    1989-01-01

The migration prediction system (MIGSTEM) has been developed for systematically estimating the migration of cationic species of radionuclides through soil layers. MIGSTEM consists of the migration experiments, a one-dimensional fitting code (inverse analysis code) for determining the retardation factor and dispersivity (migration factors), and a three-dimensional differential code (prediction code) for estimating the migration of the radionuclides. The migration experiments are carried out to obtain the concentration profiles of the radionuclides in unsaturated and saturated soil layers. Using the inverse analysis code, the migration factors are obtained at one time by fitting the calculated concentration profiles to those observed. The prediction code can give the contours of concentration and the one-dimensional concentration profiles at a selected time, as well as the change in concentration at a selected position with time. The validity of MIGSTEM was confirmed by benchmark tests on the prediction and inverse analysis codes. MIGSTEM was applied to estimate the migration of Sr2+ through sandy soil. (author)
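The retardation factor and dispersivity that the inverse analysis code fits enter the standard one-dimensional advection-dispersion equation, R dC/dt = D d²C/dx² - v dC/dx. A minimal explicit finite-difference sketch of that equation (not MIGSTEM itself; the names, grid, and discretization choices are illustrative):

```python
def migrate_1d(c0, v, d, r, dx, dt, steps):
    """Explicit upwind finite-difference solution of the 1-D
    advection-dispersion equation with retardation:
        R dC/dt = D d2C/dx2 - v dC/dx
    c0: initial concentration profile (inlet cell held fixed),
    v: pore-water velocity, d: dispersion coefficient,
    r: retardation factor (r > 1 slows the sorbing cation's front)."""
    c = list(c0)
    for _ in range(steps):
        nxt = c[:]
        for i in range(1, len(c) - 1):
            adv = -v * (c[i] - c[i - 1]) / dx                      # upwind advection
            disp = d * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2    # dispersion
            nxt[i] = c[i] + dt * (adv + disp) / r
        c = nxt
    return c

# A sorbing species (r = 5) penetrates the column more slowly than a
# non-sorbing tracer (r = 1) under identical flow conditions.
inlet = [1.0] + [0.0] * 49
tracer = migrate_1d(inlet, v=1.0, d=0.1, r=1.0, dx=1.0, dt=0.1, steps=100)
sorbed = migrate_1d(inlet, v=1.0, d=0.1, r=5.0, dx=1.0, dt=0.1, steps=100)
```

The time step here satisfies the explicit stability limit; a production code would check it, or use an implicit scheme as field-scale codes typically do.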

  1. Compendium of computer codes for the safety analysis of fast breeder reactors

    International Nuclear Information System (INIS)

    1977-10-01

The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?

  2. Papa, a Particle Tracing Code in Pascal

    NARCIS (Netherlands)

    Haselhoff, E.H.; Haselhoff, Eltjo H.; Ernst, G.J.

    1990-01-01

During the design of a 10 μm high-gain FEL oscillator (TEUFEL Project) we developed a new particle-tracing code to perform simulations of thermionic- and photo-cathode electron injectors/accelerators. The program allows predictions of current, energy and beam emittance in a user-specified linac

  3. Experimental validation for combustion analysis of GOTHIC code in 2-dimensional combustion chamber

    International Nuclear Information System (INIS)

    Lee, J. W.; Yang, S. Y.; Park, K. C.; Jung, S. H.

    2002-01-01

In this study, the prediction capability of the GOTHIC code for hydrogen combustion phenomena was validated against the results of a two-dimensional premixed hydrogen combustion experiment performed at Seoul National University. The experimental chamber has about 24 liters of free volume (1 × 0.024 × 1 m³) and a two-dimensional rectangular shape. The tests were performed with a 10% hydrogen/air gas mixture and conducted with combinations of two igniter positions (top center, top corner) and two boundary conditions (bottom fully open, bottom right half open). Using the lumped-parameter and mechanistic combustion models in the GOTHIC code, the SNU experiments were simulated under the same conditions. The GOTHIC predictions of the hydrogen combustion phenomena did not compare well with the experimental results. In the lumped-parameter simulation, the combustion time was predicted appropriately, but no local information related to the combustion phenomena could be obtained. In the mechanistic combustion analysis, the physical combustion phenomena of the gas mixture did not match the experimental ones. In the open-boundary cases, GOTHIC predicted a very long combustion time and could not simulate the flame front propagation appropriately. Though GOTHIC showed the flame propagation phenomenon in an adiabatic calculation, the induction time of combustion was still very long compared with the experimental results. It was also found that the combustion model of the GOTHIC code has some weak points in simulating combustion at low hydrogen concentrations

  4. Upgrade and benchmarking of the NIFS physics-engineering-cost code

    International Nuclear Information System (INIS)

    Dolan, T.J.; Yamazaki, K.

    2004-07-01

    The NIFS Physics-Engineering-Cost (PEC) code for helical and tokamak fusion reactors is upgraded by adding data from three blanket-shield designs, a new cost section based on the ARIES cost schedule, more recent unit costs, and improved algorithms for various computations. The PEC code is also benchmarked by modeling the ARIES-AT (advanced technology) tokamak and the ARIES-SPPS (stellarator power plant system). The PEC code succeeds in predicting many of the pertinent plasma parameters and reactor component masses within about 10%. There are cost differences greater than 10% for some fusion power core components, which may be attributed to differences of unit costs used by the codes. The COEs estimated by the PEC code differ from the COEs of the ARIES-AT and ARIES-SPPS studies by 5%. (author)

  5. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  6. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  7. Use of GOTHIC Code for Assessment of Equipment Environmental Qualification

    International Nuclear Information System (INIS)

    Cavlina, N.; Feretic, D.; Grgic, D.; Spalj, S.; Spiler, J.

    1996-01-01

Environmental qualification (EQ) of equipment important to safety in nuclear power plants ensures its capability to perform its designated safety function on demand under postulated service conditions, including harsh accident environments (e.g., LOCA, HELB). The computer code GOTHIC was used to calculate pressure and temperature profiles inside the NPP Krsko containment during limiting LOCA and MSLB accidents. The results of the new best-estimate containment code are compared to the older CONTEMPT code using the same input data and assumptions. The predictions obtained by both codes are very similar. As a result of the calculation, the envelopes of the LOCA and MSLB pressures and temperatures, as used in FSAR/USAR Chapter 6, can be used in the EQ project. (author)

  8. Independent peer review of nuclear safety computer codes

    International Nuclear Information System (INIS)

    Boyack, B.E.; Jenks, R.P.

    1993-01-01

A structured, independent computer code peer-review process has been developed to assist the US Nuclear Regulatory Commission (NRC) and the US Department of Energy in their nuclear safety missions. This paper describes a structured process of independent code peer review, benefits associated with a code-independent peer review, as well as the authors' recent peer-review experience. The NRC adheres to the principle that safety of plant design, construction, and operation are the responsibility of the licensee. Nevertheless, NRC staff must have the ability to independently assess plant designs and safety analyses submitted by license applicants. According to Ref. 1, "this requires that a sound understanding be obtained of the important physical phenomena that may occur during transients in operating power plants." The NRC concluded that computer codes are the principal products to "understand and predict plant response to deviations from normal operating conditions" and has developed several codes for that purpose. However, codes cannot be used blindly; they must be assessed and found adequate for the purposes for which they are intended. A key part of the qualification process can be accomplished through code peer reviews; this approach has been adopted by the NRC

  9. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics
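The emulator described above replaces an expensive simulator with a cheap statistical surrogate. A minimal sketch of the core idea, zero-mean Gaussian process regression with an RBF kernel (a toy sine function stands in for the computer code, hyperparameters are fixed rather than estimated, and all names are illustrative):

```python
import math

def rbf(x1, x2, length=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(a, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_emulator(x_train, y_train, length=1.0, nugget=1e-6):
    """Fit a zero-mean GP to the code outputs; return the posterior-mean predictor."""
    k = [[rbf(xi, xj, length) + (nugget if i == j else 0.0)
          for j, xj in enumerate(x_train)] for i, xi in enumerate(x_train)]
    alpha = solve(k, list(y_train))
    return lambda x_new: sum(a * rbf(x_new, xi, length)
                             for a, xi in zip(alpha, x_train))

# Eight "code runs" of a toy simulator f(x) = sin(x) train the emulator.
xs = [i * 3.0 / 7 for i in range(8)]
predict = gp_emulator(xs, [math.sin(x) for x in xs])
```

In practice the emulator also returns a predictive variance, which is what drives the uncertainty analysis the abstract refers to; only the mean is sketched here.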

  10. Thermal hydraulic codes for LWR safety analysis - present status and future perspective

    Energy Technology Data Exchange (ETDEWEB)

    Staedtke, H. [Commission of the European Union, Ispra (Italy)

    1997-07-01

The aim of the present paper is to give a review of the current status and future perspective of present best-estimate Thermal Hydraulic codes. Reference is made to internationally well-established codes which have reached a certain state of maturity. The first part of the paper deals with the common basic code features with respect to the physical modelling and the numerical methods used to describe complex two-phase flow and heat transfer processes. The general predictive capabilities are summarized, identifying some remaining code deficiencies and their underlying limitations. The second part discusses various areas, including physical modelling, numerical techniques and informatic structure, where the codes could be substantially improved.

  11. Thermal hydraulic codes for LWR safety analysis - present status and future perspective

    International Nuclear Information System (INIS)

    Staedtke, H.

    1997-01-01

The aim of the present paper is to give a review of the current status and future perspective of present best-estimate Thermal Hydraulic codes. Reference is made to internationally well-established codes which have reached a certain state of maturity. The first part of the paper deals with the common basic code features with respect to the physical modelling and the numerical methods used to describe complex two-phase flow and heat transfer processes. The general predictive capabilities are summarized, identifying some remaining code deficiencies and their underlying limitations. The second part discusses various areas, including physical modelling, numerical techniques and informatic structure, where the codes could be substantially improved

  12. FRAP-T1: a computer code for the transient analysis of oxide fuel rods

    International Nuclear Information System (INIS)

    Dearien, J.A.; Miller, R.L.; Hobbins, R.R.; Siefken, L.J.; Baston, V.F.; Coleman, D.R.

    1977-02-01

FRAP-T is a FORTRAN IV computer code which can be used to solve for the transient response of a light water reactor (LWR) fuel rod during accident transients such as a loss-of-coolant accident (LOCA) or a power-cooling-mismatch (PCM). The coupled effects of mechanical, thermal, internal gas, and material property response on the behavior of the fuel rod are considered. FRAP-T is a modular code, with each major computational model isolated within the code and coupled to the main code by subroutine calls and data transfer through argument lists. FRAP-T is coupled to a materials properties subcode (MATPRO) which is used to provide gas, fuel, and cladding properties to the FRAP-T computational subcodes. No material properties need be supplied by the code user. The needed water properties are stored in tables built into the code. Critical heat flux (CHF) and heat transfer correlations for a wide range of coolant conditions are contained in modular subroutines. FRAP-T has been evaluated by making extensive comparisons between predictions of the code and experimental data. Comparisons of predicted and experimental results are presented for a range of FRAP-T calculated parameters. The code is presently programmed and running on an IBM-360/75 and a CDC 7600 computer

  13. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...
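As a concrete illustration of linear index coding (the classic two-receiver example, not the paper's construction): when receiver 1 wants m2 but already holds m1, and receiver 2 wants m1 but holds m2, a single broadcast of the XOR over GF(2) serves both receivers at once.

```python
def encode(m1: int, m2: int) -> int:
    """Linear index code over GF(2): one coded symbol replaces two broadcasts."""
    return m1 ^ m2

def decode(coded: int, side_info: int) -> int:
    """Each receiver XORs out the message it already knows."""
    return coded ^ side_info
```

The same side-information idea generalizes to more messages, which is where linear programs for choosing the coding matrix become useful.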

  14. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    International Nuclear Information System (INIS)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya

    2018-01-01

The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization fineness. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  15. Sensitivity analysis of MIDAS tests using SPACE code. Effect of nodalization

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Shin; Oh, Seung-Jong; Diab, Aya [KEPCO International Nuclear Graduate School (KINGS), Ulsan (Korea, Republic of). Dept. of NPP Engineering

    2018-02-15

The nodalization sensitivity analysis for the ECCS (Emergency Core Cooling System) bypass phenomena was performed using the SPACE (Safety and Performance Analysis CodE) thermal hydraulic analysis computer code. The results of the MIDAS (Multi-dimensional Investigation in Downcomer Annulus Simulation) test were used. The MIDAS test was conducted by KAERI (Korea Atomic Energy Research Institute) for the performance evaluation of the ECC (Emergency Core Cooling) bypass phenomenon in the DVI (Direct Vessel Injection) system. The main aim of this study is to examine the sensitivity of the SPACE code results to the number of thermal hydraulic channels used to model the annulus region in the MIDAS experiment. The numerical model involves three nodalization cases (4, 6, and 12 channels), and the results show that the effect of nodalization on the bypass fraction for the high steam flow rate MIDAS tests is minimal. For computational efficiency, a 4-channel representation is recommended for the SPACE code nodalization. For the low steam flow rate tests, the SPACE code over-predicts the bypass fraction irrespective of the nodalization fineness. The over-prediction at low steam flow may be attributed to the difficulty of accurately representing the flow regime in the vicinity of the broken cold leg.

  16. Administrative database code accuracy did not vary notably with changes in disease prevalence.

    Science.gov (United States)

    van Walraven, Carl; English, Shane; Austin, Peter C

    2016-11-01

Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly with disease prevalence. This study determined if the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably within clinically sensible disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
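The abstract's central distinction follows directly from Bayes' rule: with sensitivity and specificity held fixed, the predictive values still swing with prevalence. A short sketch makes this concrete; the 0.90/0.95 accuracy figures are illustrative only, not the study's estimates.

```python
# PPV and NPV as functions of prevalence for a test with fixed
# sensitivity and specificity (Bayes' rule on a 2x2 table).

def predictive_values(sens: float, spec: float, prev: float):
    tp = sens * prev              # true positives per unit population
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)  # (PPV, NPV)

for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"prevalence {prev:.2f}: PPV {ppv:.3f}, NPV {npv:.3f}")
```

At 1% prevalence the PPV collapses to about 0.15 even though the code's sensitivity and specificity are unchanged, which is exactly the behaviour the study reports for administrative diagnostic codes.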

  17. Application of the RESRAD computer code to VAMP scenario S

    International Nuclear Information System (INIS)

    Gnanapragasam, E.K.; Yu, C.

    1997-03-01

    The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety
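The approach described, running a deterministic code repeatedly over sampled inputs to generate probability distributions for the predictions, can be sketched in a few lines. The "model" below is a trivial stand-in, not RESRAD, and the input distributions are invented for illustration.

```python
# Monte Carlo wrapper around a deterministic model: sample uncertain
# inputs, run the model once per sample, collect the output distribution.
import random
import statistics

def deterministic_model(transfer_factor: float, deposition: float) -> float:
    # placeholder for a single deterministic code run
    return transfer_factor * deposition

random.seed(0)  # reproducible sampling
predictions = []
for _ in range(10_000):
    tf = random.lognormvariate(mu=0.0, sigma=0.5)  # uncertain input 1
    dep = random.uniform(8.0, 12.0)                # uncertain input 2
    predictions.append(deterministic_model(tf, dep))

predictions.sort()
median = statistics.median(predictions)
p95 = predictions[int(0.95 * len(predictions))]
print(f"median {median:.2f}, 95th percentile {p95:.2f}")
```

The resulting empirical distribution is what a probabilistic code version can later be verified against, as the abstract describes.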

  18. Assessment of ASSERT-PV for prediction of critical heat flux in CANDU bundles

    International Nuclear Information System (INIS)

    Rao, Y.F.; Cheng, Z.; Waddington, G.M.

    2014-01-01

Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for CHF prediction. • CANDU 28-, 37- and 43-element bundle CHF experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of CHF model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against five full-scale CANDU bundle experiments conducted in the 1990s and in 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element (CANFLEX) bundles. A total of 15 CHF test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications for CANDU reactors. Code predictions of channel dryout power and axial and radial CHF locations were compared against measurements from the SL CHF tests to quantify the code prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, sensitivity studies evaluated the contribution of each CHF model change or enhancement to the improvement in CHF prediction. Overall, the assessment demonstrated significant improvement in the prediction of channel dryout power and axial and radial CHF locations in horizontal fuel channels containing CANDU bundles
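Prediction statistics of the kind compared across code versions are typically a mean (bias) and RMS of the relative error between predicted and measured quantities. A hedged sketch of that bookkeeping follows; the dryout-power numbers are invented, not Stern Laboratories data.

```python
# Bias and RMS of relative prediction error, code vs. experiment.
import math

measured  = [5.2, 6.1, 4.8, 7.0, 5.9]   # hypothetical channel dryout power (MW)
predicted = [5.0, 6.3, 4.9, 6.6, 6.0]   # hypothetical code predictions

errors = [(p - m) / m for p, m in zip(predicted, measured)]
bias = sum(errors) / len(errors)                      # mean relative error
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
print(f"mean relative error {bias:+.3f}, RMS {rms:.3f}")
```

Comparing these two scalars between code versions, and between model-option variants, is how an assessment like this quantifies "improvement".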

  19. Assessment of RELAP5/Mod3 system thermal hydraulic code using power test data of a BWR6 reactor

    International Nuclear Information System (INIS)

    Lee, M.; Chiang, C.S.

    1997-01-01

The power test data of the Kuosheng Nuclear Power Plant were used to assess the RELAP5/Mod3 system thermal hydraulic analysis code. The plant employed a General Electric designed Boiling Water Reactor (BWR6) with a rated power of 2894 MWth. The purpose of the assessment is to verify the validity of the plant-specific RELAP5/Mod3 input deck for transient analysis. The power tests considered in the assessment were a 100% power generator load rejection, the closure of the main steam isolation valves (MSIVs) at 96% power, and the trip of the recirculation pumps at 68% power. The major parameters compared in the assessment were steam dome pressure, steam flow rate, core flow rate, and downcomer water level. The comparisons of the system responses predicted by the code and the power test data were reasonable, which demonstrates the capabilities of the code and the validity of the input deck. However, it was also identified that the separator model of the code may cause an energy imbalance problem in the transient calculation. In the assessment, the steam separators were modeled using time-dependent junctions. In this approach, a complete separation of steam and water was predicted. The system responses predicted by the RELAP5/Mod3 code were also compared with those from calculations with the RETRAN code. When these results were compared with the power test data, the predictions of the RETRAN code were better than those of RELAP5/Mod3. In the simulation of the 100% power generator load rejection, it was believed that the difference in the steam separator models of the two codes was one of the reasons for the difference in the prediction of the power test data. The predictions of the RELAP5/Mod3 code can also be improved by the incorporation of a one-dimensional kinetics model. There is also some margin for improvement of the input related to the feedwater control system. (author)

  20. Validation of ICD-9-CM coding algorithm for improved identification of hypoglycemia visits

    Directory of Open Access Journals (Sweden)

    Lieberman Rebecca M

    2008-04-01

Full Text Available Abstract Background Accurate identification of hypoglycemia cases by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes will help to describe epidemiology, monitor trends, and propose interventions for this important complication in patients with diabetes. Prior hypoglycemia studies utilized incomplete search strategies and may be methodologically flawed. We sought to validate a new ICD-9-CM coding algorithm for accurate identification of hypoglycemia visits. Methods This was a multicenter, retrospective cohort study using a structured medical record review at three academic emergency departments from July 1, 2005 to June 30, 2006. We prospectively derived a coding algorithm to identify hypoglycemia visits using ICD-9-CM codes (250.3, 250.8, 251.0, 251.1, 251.2, 270.3, 775.0, 775.6, and 962.3). We confirmed hypoglycemia cases by chart review of visits identified by the candidate ICD-9-CM codes during the study period. The case definition for hypoglycemia was a documented blood glucose <3.9 mmol/l or an emergency physician charted diagnosis of hypoglycemia. We evaluated individual components and calculated the positive predictive value. Results We reviewed 636 charts identified by the candidate ICD-9-CM codes and confirmed 436 (64%) cases of hypoglycemia by chart review. Diabetes with other specified manifestations (250.8), often excluded in prior hypoglycemia analyses, identified 83% of hypoglycemia visits, and unspecified hypoglycemia (251.2) identified 13% of hypoglycemia visits. The absence of any predetermined co-diagnosis codes improved the positive predictive value of code 250.8 from 62% to 92%, while excluding only 10 (2%) true hypoglycemia visits. Although prior analyses included only the first-listed ICD-9 code, more than one-quarter of identified hypoglycemia visits were outside this primary diagnosis field. Overall, the proposed algorithm had an 89% positive predictive value (95% confidence interval, 86–92 for

  1. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Cheng, X.

    2015-01-01

Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (system code, sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under some accident cases (e.g. a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and the peak cladding temperature caused by a blockage in the fuel bundle can be reduced effectively by the safety measures
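The coupling pattern described, with data transfer at the interface each time step, can be shown schematically. The two solver functions below are trivial stand-ins, not ATHLET-SC or COBRA-SC, and their formulas and parameters are invented for illustration.

```python
# Explicit coupling loop: the system solver supplies interface boundary
# conditions; the sub-channel solver returns local results plus a
# feedback term used by the system solver on the next step.

def system_code_step(t: int, feedback: float) -> dict:
    """Whole-loop solve (stand-in): returns interface boundary conditions."""
    return {"mass_flow": 1.0 - 0.1 * feedback,   # kg/s, reduced by bundle feedback
            "inlet_temp": 280.0 + 5.0 * t}       # degC, rising during the transient

def subchannel_code_step(bc: dict):
    """Local bundle solve (stand-in): returns peak cladding temp and feedback."""
    peak_clad = bc["inlet_temp"] + 300.0 / max(bc["mass_flow"], 1e-6)
    return peak_clad, 0.2  # constant feedback term for this sketch

feedback = 0.0
for step in range(3):
    bc = system_code_step(step, feedback)           # system -> sub-channel
    peak_clad, feedback = subchannel_code_step(bc)  # sub-channel -> system
    print(f"step {step}: peak cladding temperature {peak_clad:.1f} C")
```

The real coupling additionally performs data adaptation (mapping between the codes' different meshes and variable sets) at the same interface.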

  2. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.J., E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China); Cheng, X. [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany)

    2015-04-15

Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (system code, sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under some accident cases (e.g. a flow blockage during LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and the peak cladding temperature caused by a blockage in the fuel bundle can be reduced effectively by the safety measures

  3. A statistical methodology for quantification of uncertainty in best estimate code physical models

    International Nuclear Information System (INIS)

    Vinai, Paolo; Macian-Juan, Rafael; Chawla, Rakesh

    2007-01-01

A novel uncertainty assessment methodology, based on a statistical non-parametric approach, is presented in this paper. It achieves quantification of code physical model uncertainty by making use of model performance information obtained from studies of appropriate separate-effect tests. Uncertainties are quantified in the form of estimated probability density functions (pdfs), calculated with a newly developed non-parametric estimator. The new estimator objectively predicts the probability distribution of the model's 'error' (its uncertainty) from databases reflecting the model's accuracy on the basis of available experiments. The methodology is completed by applying a novel multi-dimensional clustering technique based on the comparison of model error samples with the Kruskal-Wallis test. This takes into account the fact that a model's uncertainty depends on system conditions, since a best estimate code can give predictions for which the accuracy is affected by the regions of the physical space in which the experiments occur. The final result is an objective, rigorous and accurate manner of assigning uncertainty to code models, i.e. the input information needed by code uncertainty propagation methodologies used for assessing the accuracy of best estimate codes in nuclear systems analysis. The new methodology has been applied to the quantification of the uncertainty in the RETRAN-3D void model and then used in the analysis of an independent separate-effect experiment. This has clearly demonstrated the basic feasibility of the approach, as well as its advantages in yielding narrower uncertainty bands in quantifying the code's accuracy for void fraction predictions
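The general idea of estimating a pdf for a model's error from separate-effect test samples can be sketched with a plain Gaussian kernel density estimate. This is not the paper's estimator, and the error samples below are invented; it only illustrates turning a finite sample of model errors into a smooth density.

```python
# Non-parametric pdf estimate of a model's error via Gaussian KDE.
import math

def gaussian_kde(samples, h: float):
    """Return a pdf built by placing a Gaussian kernel of width h on each sample."""
    n = len(samples)
    norm = n * h * math.sqrt(2 * math.pi)
    def pdf(x: float) -> float:
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return pdf

# hypothetical void-fraction errors (predicted minus measured)
errors = [-0.04, -0.01, 0.00, 0.01, 0.02, 0.02, 0.03, 0.05]
pdf = gaussian_kde(errors, h=0.02)
print(f"estimated error density at 0.01: {pdf(0.01):.2f}")
```

A density like this, estimated separately for each cluster of system conditions, is the kind of input a code uncertainty propagation methodology consumes.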

  4. Thermal hydraulic calculation of STORM facility using GOTHIC code

    International Nuclear Information System (INIS)

    Pevec, D.; Grgic, D.; Prah, M.

    1995-01-01

Benchmark calculation CTI, defined in the frame of the STORM experimental programme, is used to prove that the GOTHIC code is capable of predicting the behaviour of the experimental facility with reasonable accuracy. The GOTHIC code was developed mainly for containment calculations. In this situation it is successfully used for the calculation of a one-dimensional flow of a steam and noncondensable mixture. Steady state distributions of pressure, temperature and the velocity of gas along the facility are consistent with results obtained by other benchmark participants. (author)

  5. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  6. Calculations of Fission Gas Release During Ramp Tests Using Copernic Code

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Liu [Nuclear Fuel R and D Center, China Nuclear Power Technology Research Institute (CNPRI) (China)

    2013-03-15

The report, prepared under IAEA research contract No. 15951, describes the results of fuel performance evaluation of LWR fuel rods operated at ramp conditions using the COPERNIC code developed by AREVA. The experimental data from the Third Riso Fission Gas Project and the Studsvik SUPER-RAMP Project, presented in the IFPE database of the OECD/NEA, were utilized for assessing the code itself during simulation of fission gas release (FGR). Standard code models for LWR fuel were used in the simulations, with parameters set properly in accordance with the relevant test reports. With the help of data adjustment, the input power histories were restructured to fit the real ones, so as to ensure the validity of the FGR prediction. The results obtained by COPERNIC show that different models lead to diverse predictions and discrepancies. By comparison, the COPERNIC V2.2 model (95% upper bound) was selected as the standard FGR model in this report, and the FGR phenomenon is properly simulated by the code. To interpret the large discrepancies for some of the PK rods, the burst effect of FGR, which is taken into consideration in COPERNIC, is described and the influence of the input power histories is extrapolated. In addition, the real-time tracking capability of COPERNIC was tested against experimental data. In the process of investigation, two main dominant factors influencing the measured gas release rate are described and the different mechanisms are analyzed. With its limited predictive capacity, COPERNIC cannot make accurate predictions of abrupt changes in FGR during ramp tests, and improvements may be necessary to some relevant models. (author)

  7. Status of the ASTEC integral code

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration by IPSN and GRS to predict an entire LWR severe accident sequence from the initiating event up to Fission Product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. Version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermalhydraulics and aerosol behaviour. The latest version V0.2 includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, the PHEBUS.FP FPT1 integrated experiment), some modelling improvements (e.g. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way on French and German PWRs, and on the VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is today about 20 man-years per year. The main evolution of the next version V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the in-vessel degradation late-phase modelling. (author)

  8. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491

  9. Systemizers are better code-breakers:Self-reported systemizing predicts code-breaking performance in expert hackers and naïve participants

    Directory of Open Access Journals (Sweden)

India Harvey

    2016-05-01

    Full Text Available Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders - attention to detail and systemizing - may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e. crypto-analysis or code-breaking. A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001; Baron-Cohen et al., 2003. They were also tested with behavioural tasks involving code-breaking and a control task involving security x-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing was related with performance in the x-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015. We discuss the theoretical and translational implications of our findings.

  10. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants.

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders-attention to detail and systemizing-may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.
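The reported positive relation between self-reported systemizing and code-breaking skill is a correlation across participants. A minimal sketch of that computation follows, on made-up scores rather than the study's data.

```python
# Pearson correlation between a trait score and task performance.

def pearson_r(xs, ys) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

systemizing  = [25, 31, 38, 44, 52, 60]  # hypothetical Systemizing Quotient scores
codebreaking = [3, 4, 4, 6, 7, 9]        # hypothetical code-breaking task scores
r = pearson_r(systemizing, codebreaking)
print(f"r = {r:.2f}")
```

A dissociation like the one reported (systemizing relates to code-breaking but not X-ray screening, and attention to detail shows the reverse) amounts to comparing such correlations across trait/task pairs.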

  11. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

Full Text Available Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio, and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space, which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in the number of sinusoids needed to represent signals at a given quality level.

  12. Evaluation of the General Atomic codes TAP and RECA for HTGR accident analyses

    International Nuclear Information System (INIS)

    Ball, S.J.; Cleveland, J.C.; Sanders, J.P.

    1978-01-01

    The General Atomic codes TAP (Transient Analysis Program) and RECA (Reactor Emergency Cooling Analysis) are evaluated with respect to their capability for predicting the dynamic behavior of high-temperature gas-cooled reactors (HTGRs) for postulated accident conditions. Several apparent modeling problems are noted, and the susceptibility of the codes to misuse and input errors is discussed. A critique of code verification plans is also included. The several cases where direct comparisons could be made between TAP/RECA calculations and those based on other independently developed codes indicated generally good agreement, thus contributing to the credibility of the codes

  13. RADTRAN: a computer code to analyze transportation of radioactive material

    International Nuclear Information System (INIS)

    Taylor, J.M.; Daniel, S.L.

    1977-04-01

    A computer code is presented which predicts the environmental impact of any specific scheme of radioactive material transportation. Results are presented in terms of annual latent cancer fatalities and annual early fatality probability resulting from exposure during normal transportation or transport accidents. The code is developed in a generalized format to permit wide application, including normal transportation analysis; consideration of alternatives; and detailed consideration of specific sectors of industry

  14. GRAYSKY-A new gamma-ray skyshine code

    International Nuclear Information System (INIS)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors

  15. Verification and implications of the multiple pin treatment in the SASSYS-1 LMR systems analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1994-01-01

    As part of a program to obtain realistic, as opposed to excessively conservative, analysis of reactor transients, a multiple pin treatment for the analysis of intra-subassembly thermal hydraulics has been included in the SASSYS-1 liquid metal reactor systems analysis code. This new treatment has made possible a whole new level of verification for the code. The code can now predict the steady-state and transient responses of individual thermocouples within instrumented subassemblies in a reactor, rather than just predicting average temperatures for a subassembly. Very good agreement has been achieved between code predictions and the experimental measurements of steady-state and transient temperatures and flow rates in the Shutdown Heat Removal Tests in the EBR-II Reactor. Detailed multiple pin calculations for blanket subassemblies in the EBR-II reactor demonstrate that the actual steady-state and transient peak temperatures in these subassemblies are significantly lower than those that would be calculated by simpler models

  16. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction.

    Science.gov (United States)

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.

  17. Development of a Computer Code for the Estimation of Fuel Rod Failure

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, I.H.; Ahn, H.J. [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    1997-12-31

    Much research has already been performed to obtain information on the degree of failed fuel rods from the primary coolant activities of operating PWRs in the last few decades. The computer codes that are currently in use for domestic nuclear power plants, such as the CADE and ABB-CE codes developed by Westinghouse and ABB-CE, respectively, still give significant overall errors in estimating the failed fuel rods. In addition, with the CADE code, it is difficult to predict the degree of fuel rod failures during the transient period of nuclear reactor operation, whereas the ABB-CE codes are relatively more difficult to use for end-users. In particular, the rapid progress made recently in computer hardware and software systems demands that computer programs be more versatile and user-friendly. While the MS Windows system, centered on the graphical user interface and multitasking, is now in widespread use, the computer codes currently employed at the nuclear power plants, such as the CADE and ABB-CE codes, can only be run on the DOS system. Moreover, it is desirable to have a computer code for fuel rod failure estimation that can directly use the radioactivity data obtained from the on-line monitoring system of the primary coolant activity. The main purpose of this study is, therefore, to develop a Windows computer code that can predict the location, the number of failed fuel rods, and the degree of failures using the radioactivity data obtained from the primary coolant activity for PWRs. Another objective is to combine this computer code with the on-line monitoring system of the primary coolant radioactivity at the Kori 3 and 4 operating nuclear power plants and enable their combined use for on-line evaluation of the number and degree of fuel rod failures. (author). 49 refs., 85 figs., 30 tabs.

  18. A Monte Carlo code for ion beam therapy

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    Initially developed for applications in detector and accelerator physics, the modern Fluka Monte Carlo code is now used in many different areas of nuclear science. Over the last 25 years, the code has evolved to include new features, such as ion beam simulations. Given the growing use of these beams in cancer treatment, Fluka simulations are being used to design treatment plans in several hadron-therapy centres in Europe.   Fluka calculates the dose distribution for a patient treated at CNAO with proton beams. The colour-bar displays the normalized dose values. Fluka is a Monte Carlo code that very accurately simulates electromagnetic and nuclear interactions in matter. In the 1990s, in collaboration with NASA, the code was developed to predict potential radiation hazards received by space crews during possible future trips to Mars. Over the years, it has become the standard tool to investigate beam-machine interactions, radiation damage and radioprotection issues in the CERN accelerator com...

  19. NASA Lewis Steady-State Heat Pipe Code Architecture

    Science.gov (United States)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

    NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of operating temperature and operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given
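The Darcy's-formula treatment of liquid return through the wick mentioned above can be sketched directly. The property values below are illustrative stand-ins (roughly water near 350 K), not figures from the LERCHP documentation, and the function is a textbook Darcy relation rather than LERCHP's internal implementation.

```python
def darcy_wick_pressure_drop(m_dot, mu, rho, K, A, L):
    """Darcy's law for liquid return through a porous wick:
        dP = (mu * L / (rho * K * A)) * m_dot
    m_dot: mass flow rate [kg/s], mu: dynamic viscosity [Pa.s],
    rho: liquid density [kg/m^3], K: wick permeability [m^2],
    A: wick cross-sectional area [m^2], L: effective flow length [m]."""
    return mu * L * m_dot / (rho * K * A)

# Illustrative numbers only (not from the report):
dp = darcy_wick_pressure_drop(m_dot=1e-4, mu=3.5e-4, rho=970.0,
                              K=1e-10, A=1e-4, L=0.5)
print(f"{dp:.0f} Pa")  # 1804 Pa
```

In a capillary-limit check, this liquid-side pressure drop (plus the vapor-side drop) would be compared against the maximum capillary pressure the wick can sustain.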

  20. STAT, GAPS, STRAIN, DRWDIM: a system of computer codes for analyzing HTGR fuel test element metrology data. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Saurwein, J.J.

    1977-08-01

    A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod, fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.

  1. Simulation of a small break loss of coolant accident using the RELAP5/MOD2 computer code

    International Nuclear Information System (INIS)

    Megahed, M.M.

    1992-01-01

    An assessment of the RELAP5/MOD2/Cycle 36.05 best-estimate computer code's capabilities in predicting the thermohydraulic response of a PWR following a small break loss of coolant accident is presented. The experimental data base for the evaluation is the results of Test S-NH-3, performed in the Semiscale Mod-2C test facility, which modeled a 0.5% small break loss of coolant accident with an accompanying failure of the high pressure injection emergency core cooling system. A conclusion was reached that the code is capable of making small break loss of coolant accident calculations efficiently. However, some of the small break loss of coolant accident related phenomena were not properly predicted by the code, suggesting a need for code improvement. 9 figs., 3 tabs

  2. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  3. Development of the CAT code - YGN 5 and 6 CVCS analysis tool

    International Nuclear Information System (INIS)

    Kim, S.W.; Sohn, S.H.; Seo, J.T.; Lee, S.K.

    1996-01-01

    The CAT code has been developed for the analysis of the Chemical and Volume Control System (CVCS) of the Yonggwang Nuclear Power Plant Units 5 and 6 (YGN 5 and 6). The code is able to simulate the system behaviors in the operating conditions which should be considered in the design of the system. It has been developed as a stand-alone code which can simulate the CVCS in detail whenever correct system boundary conditions are provided. The code consists of two modules, i.e., control and process modules. The control module includes the models for the Pressurizer Level Control System, the Letdown Backpressure Control System, the Charging Backpressure Control System, and the Seal Injection Control System. Thermal-hydraulic responses of the system are simulated by the process module. The modeling of the system is based on a node and flowpath network. The thermal-hydraulic model is based on the assumption of a homogeneous equilibrium mixture. The major system components such as valves, orifices, pumps, heat exchangers and the volume control tank are explicitly modeled in the code. The code was validated against the measured data from the letdown system test performed during the Hot Functional Testing at YGN 3. The comparison between the measured and predicted data demonstrated that the present model can predict the observed phenomena with sufficient accuracy. (author)

  4. Feasibility Study for Applicability of the Wavelet Transform to Code Accuracy Quantification

    International Nuclear Information System (INIS)

    Kim, Jong Rok; Choi, Ki Yong

    2012-01-01

    A purpose of the assessment process of large thermal-hydraulic system codes is verifying their quality by comparing code predictions against experimental data. This process is essential for reliable safety analysis of nuclear power plants. Extensive experimental programs have been conducted in order to support the development and validation activities of best estimate thermal-hydraulic codes. So far, the Fast Fourier Transform Based Method (FFTBM) has been used widely for quantification of the prediction accuracy, despite its limitation that it does not provide any time resolution for a local event. As alternative options, several time-windowing methods (running average, short-time Fourier transform, etc.) can be utilized, but such methods also have the limitation of a fixed resolution. This limitation can be overcome by a wavelet transform, because the resolution of the wavelet transform effectively varies in the time-frequency plane depending on the choice of basis functions, which are not necessarily sinusoidal. In this study, the feasibility of a new code accuracy quantification methodology using the wavelet transform is pursued
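For context on what FFTBM condenses (and what a wavelet variant would refine), a minimal sketch of the commonly cited FFTBM average-amplitude figure is shown below: a single global FFT of the code-versus-experiment error, normalized by the experimental spectrum. A wavelet approach would replace this single global transform with a time-frequency decomposition. The traces here are synthetic, not data from the study.

```python
import numpy as np

def fftbm_average_amplitude(experiment, prediction):
    """FFTBM average amplitude AA = sum|F(err)| / sum|F(exp)|.
    Lower AA means the code prediction tracks the data better;
    note the figure carries no time resolution for local events."""
    err = prediction - experiment
    return float(np.sum(np.abs(np.fft.rfft(err))) /
                 np.sum(np.abs(np.fft.rfft(experiment))))

t = np.linspace(0.0, 10.0, 500)
exp_trace = 1.0 + 0.5 * np.exp(-t)        # synthetic "measured" decay
good = exp_trace + 0.01 * np.sin(3 * t)   # close code prediction
poor = exp_trace + 0.20 * np.sin(3 * t)   # poor code prediction

print(fftbm_average_amplitude(exp_trace, good) <
      fftbm_average_amplitude(exp_trace, poor))  # True
```

Because AA collapses the whole transient into one number, two predictions that err at different times can score identically; this is exactly the limitation the wavelet-based proposal targets.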

  5. LWR containment thermal hydraulic codes benchmark demona B3 exercise

    International Nuclear Information System (INIS)

    Della Loggia, E.; Gauvain, J.

    1988-01-01

    Recent discussions about the aerosol codes currently used for the analysis of containment retention capabilities have revealed a number of questions concerning the reliability and verification of the thermal-hydraulic modules of these codes, with respect to the validity of the implemented physical models and the stability and effectiveness of the numerical schemes. Since these codes are used for the calculation of the Source Term for the assessment of radiological consequences of severe accidents, they are an important part of reactor safety evaluation. For this reason the Commission of the European Communities (CEC), following the recommendation made by experts from Member States, is promoting research in this field, with the aim also of establishing and increasing collaboration among research organisations of member countries. In view of the results of the studies, the CEC has decided to carry out a benchmark exercise for severe accident containment thermal-hydraulics codes. This exercise is based on experiment B3 in the DEMONA programme. The main objective of the benchmark exercise has been to assess the ability of the participating codes to predict atmosphere saturation levels and bulk condensation rates under conditions similar to those predicted to follow a severe accident in a PWR. This exercise follows logically from the LA-4 exercise, which is related to an experiment with a simpler internal geometry. We present here the results obtained so far, and from them preliminary conclusions are drawn concerning condensation temperature, pressure, and flow rates in the reactor containment

  6. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
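The first interlayer structure described above (updating the base layer by subtracting the low-frequency component of the enhancement layer) can be sketched on a 1-D signal. The filters below are crude illustrative choices, not the JSVM upsampling/downsampling filters, and the sketch only demonstrates the update step together with exact reconstruction.

```python
import numpy as np

def down2(x):
    # crude low-pass + decimate (illustrative; not the JSVM filters)
    lp = np.convolve(x, [0.25, 0.5, 0.25], mode="same")
    return lp[::2]

def up2(x, n):
    # zero-stuff then low-pass back to full resolution
    up = np.zeros(n)
    up[::2] = x
    return 2.0 * np.convolve(up, [0.25, 0.5, 0.25], mode="same")

x = np.sin(np.linspace(0, 4 * np.pi, 64))  # full-resolution signal

base = down2(x)                 # base layer at half resolution
enh = x - up2(base, x.size)     # enhancement layer (Laplacian residual)

# "Update" structure: subtract the enhancement layer's low-frequency
# part from the base layer so the two layers overlap less in frequency.
base_upd = base - down2(enh)

# The decoder can undo the update, so reconstruction stays exact:
recon = up2(base_upd + down2(enh), x.size) + enh
print(np.allclose(recon, x))  # True
```

With non-biorthogonal filters the plain LP layers are correlated; the update step removes part of that shared low-frequency content before the enhancement layer is coded, which is the source of the reported coding gain.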

  7. Promoter Analysis Reveals Globally Differential Regulation of Human Long Non-Coding RNA and Protein-Coding Genes

    KAUST Repository

    Alam, Tanvir

    2014-10-02

    Transcriptional regulation of protein-coding genes is increasingly well-understood on a global scale, yet no comparable information exists for long non-coding RNA (lncRNA) genes, which were recently recognized to be as numerous as protein-coding genes in mammalian genomes. We performed a genome-wide comparative analysis of the promoters of human lncRNA and protein-coding genes, finding global differences in specific genetic and epigenetic features relevant to transcriptional regulation. These two groups of genes are hence subject to separate transcriptional regulatory programs, including distinct transcription factor (TF) proteins that significantly favor lncRNA, rather than coding-gene, promoters. We report a specific signature of promoter-proximal transcriptional regulation of lncRNA genes, including several distinct transcription factor binding sites (TFBS). Experimental DNase I hypersensitive site profiles are consistent with active configurations of these lncRNA TFBS sets in diverse human cell types. TFBS ChIP-seq datasets confirm the binding events that we predicted using computational approaches for a subset of factors. For several TFs known to be directly regulated by lncRNAs, we find that their putative TFBSs are enriched at lncRNA promoters, suggesting that the TFs and the lncRNAs may participate in a bidirectional feedback loop regulatory network. Accordingly, cells may be able to modulate lncRNA expression levels independently of mRNA levels via distinct regulatory pathways. Our results also raise the possibility that, given the historical reliance on protein-coding gene catalogs to define the chromatin states of active promoters, a revision of these chromatin signature profiles to incorporate expressed lncRNA genes is warranted in the future.

  8. Modification and validation of the natural heat convection and subcooled void formation models in the code PARET

    International Nuclear Information System (INIS)

    Hainoun, A.; Alhabit, F.; Ghazi, N.

    2008-01-01

    Two new modifications have been included in the current PARET code, which is widely applied in the dynamic and safety analysis of research reactors. A new model was implemented for the simulation of void formation in the subcooled boiling regime; the other modification dealt with the implementation of a new approach to improve the prediction of the heat transfer coefficient under natural circulation conditions. The modified code was successfully validated using adequate single effect tests covering the physical phenomena of interest for both natural circulation and subcooled void formation at low pressure and low heat flux. The validation results indicate significant improvement of the code compared to the default version. Additionally, to simplify the code application, an interactive user interface was developed, enabling pre- and post-processing of the code predictions. (author)

  9. Consistent Code Qualification Process and Application to WWER-1000 NPP

    International Nuclear Information System (INIS)

    Berthon, A.; Petruzzi, A.; Giannotti, W.; D'Auria, F.; Reventos, F.

    2006-01-01

    Calculation analyses by application of the system codes are performed to evaluate the NPP or facility behavior during a postulated transient, or to evaluate the code capability. The calculation analysis constitutes a process that involves the code itself, the data of the reference plant, the data about the transient, the nodalization, and the user. All these elements affect one another and affect the results. A major issue in the use of a mathematical model is constituted by the model's capability to reproduce the plant or facility behavior under steady state and transient conditions. These aspects constitute two main checks that must be satisfied during the qualification process. The first of them is related to the realization of a scheme of the reference plant; the second one is related to the capability to reproduce the transient behavior. The aim of this paper is to describe the UMAE (Uncertainty Method based on Accuracy Extrapolation) methodology developed at the University of Pisa for qualifying a nodalization and analysing the calculated results, and to perform the uncertainty evaluation of the system code by the CIAU code (Code with the capability of Internal Assessment of Uncertainty). The activity consists of the re-analysis of the Experiment BL-44 (SBLOCA) performed in the LOBI facility and the analysis of a Kv-scaling calculation of the WWER-1000 NPP nodalization taking as reference the test BL-44. Relap5/Mod3.3 has been used as the thermal-hydraulic system code, and the standard procedure adopted at the University of Pisa has been applied to show the capability of the code to predict the significant aspects of the transient and to obtain a qualified nodalization of the WWER-1000 through a systematic qualitative and quantitative accuracy evaluation. 
The qualitative accuracy evaluation is based on the selection of Relevant Thermal-hydraulic Aspects (RTAs) and is a prerequisite to the application of the Fast Fourier Transform Based Method (FFTBM) which quantifies

  10. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
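The feedback-driven rate adaptation idea can be illustrated with a toy syndrome decoder. The sketch below uses Hamming-style parity checks rather than actual BCH syndromes, and assumes the side information Y lies within Hamming distance 1 of the source X (the high-correlation scenario); the decoder requests one more syndrome bit over the feedback channel until exactly one consistent candidate remains.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 15
x = rng.integers(0, 2, n)       # binary source X
y = x.copy(); y[4] ^= 1         # side info Y: differs from X in one bit

# Hamming-style parity matrix: column i is the 4-bit pattern of i+1,
# so every error pattern of weight <= 2 has a nonzero syndrome.
H = np.array([[(i + 1) >> j & 1 for i in range(n)] for j in range(4)])
syndrome = H @ x % 2            # held at the encoder

def consistent(y, H_used, s_used):
    """All candidates within Hamming distance 1 of Y that match the
    parity bits received so far (high-correlation assumption)."""
    out = []
    for pos in range(-1, n):
        c = y.copy()
        if pos >= 0:
            c[pos] ^= 1
        if np.array_equal(H_used @ c % 2, s_used):
            out.append(c)
    return out

# Feedback loop: request one more syndrome bit until decoding is unique.
est, k = None, 0
for k in range(1, len(syndrome) + 1):
    cands = consistent(y, H[:k], syndrome[:k])
    if len(cands) == 1:
        est = cands[0]
        break

print("syndrome bits used:", k, "recovered:", np.array_equal(est, x))
# syndrome bits used: 4 recovered: True
```

A fixed-rate scheme would always send all four syndrome bits; the adaptive loop stops as soon as the candidate set collapses, which is the rate saving the feedback channel buys.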

  11. Assessment of ASSERT-PV for prediction of post-dryout heat transfer in CANDU bundles

    International Nuclear Information System (INIS)

    Cheng, Z.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the new Canadian subchannel code ASSERT-PV 3.2 for PDO sheath temperature prediction. • CANDU 28-, 37- and 43-element bundle PDO experiments. • Prediction improvement of ASSERT-PV 3.2 over previous code versions. • Sensitivity study of the effect of PDO model options. - Abstract: Atomic Energy of Canada Limited (AECL) has developed the subchannel thermalhydraulics code ASSERT-PV for the Canadian nuclear industry. The recently released ASSERT-PV 3.2 provides enhanced models for improved predictions of subchannel flow distribution, critical heat flux (CHF), and post-dryout (PDO) heat transfer in horizontal CANDU fuel channels. This paper presents results of an assessment of the new code version against PDO tests performed during five full-size CANDU bundle experiments conducted between 1992 and 2009 by Stern Laboratories (SL), using 28-, 37- and 43-element bundles. A total of 10 PDO test series with varying pressure-tube creep and/or bearing-pad height were analyzed. The SL experiments encompassed the bundle geometries and range of flow conditions for the intended ASSERT-PV applications for existing CANDU reactors. Code predictions of maximum PDO fuel-sheath temperature were compared against measurements from the SL PDO tests to quantify the code's prediction accuracy. The prediction statistics using the recommended model set of ASSERT-PV 3.2 were compared to those from previous code versions. Furthermore, separate-effects sensitivity studies quantified the contribution of each PDO model change or enhancement to the improvement in PDO heat transfer prediction. Overall, the assessment demonstrated significant improvement in prediction of PDO sheath temperature in horizontal fuel channels containing CANDU bundles

  12. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  13. A FACSIMILE code for calculating void swelling, version VS1

    International Nuclear Information System (INIS)

    Windsor, M.; Bullough, R.; Wood, M.H.

    1979-11-01

    VS1 is the first of a series of FACSIMILE codes that are being made available to predict the swelling of materials under irradiation at different temperatures, using chemical rate equations for the point defect losses to voids, interstitial loops, dislocation network, grain boundaries and foil surfaces. In this report the rate equations used in the program are given together with a detailed description of the code and directions for its use. (author)

  14. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
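The reading-frame retrieval property summarized above (a window of 5 consecutive trinucleotides, i.e., 15 nucleotides, suffices) can be illustrated with a windowed frame test. The trinucleotide set below is a small hypothetical toy, not the maximal code X identified in genes; only the scanning logic is the point.

```python
# Hypothetical toy trinucleotide code (NOT the maximal circular code X
# identified in genes; purely for illustrating frame retrieval).
X = {"AAT", "GCC", "GTT", "TAC", "ATG"}

def reading_frames(seq, window=5):
    """Return the frames (0, 1, 2) whose first `window` consecutive
    trinucleotides all belong to the code. For a circular code, at
    most one frame survives a 5-trinucleotide (15 nt) window."""
    frames = []
    for f in range(3):
        words = [seq[f + 3 * i: f + 3 * i + 3] for i in range(window)]
        if all(len(w) == 3 and w in X for w in words):
            frames.append(f)
    return frames

# A sequence written in frame 0 from code words:
seq = "AAT" + "GCC" + "GTT" + "TAC" + "ATG" + "AAT"
print(reading_frames(seq))  # [0]
```

For this sequence only frame 0 survives: frame 1 fails at its second word and frame 2 at its first, which is the frame-retrieval behavior the graph-theoretic longest-path result formalizes.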

  15. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross-correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed by simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support a long span with a high data rate.
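The "ideal in-phase cross-correlation" property can be illustrated with a toy SAC code matrix in which every pair of distinct codewords overlaps in exactly one chip. This is an illustrative construction, not the DEU/Jordan-block construction itself.

```python
import numpy as np

# Toy SAC code matrix, 3 users, weight W = 3, length 7: a shared chip
# plus disjoint blocks, so every pair of codewords overlaps in exactly
# one chip (in-phase cross-correlation = 1).
C = np.array([
    [1, 1, 1, 0, 0, 0, 0],
    [1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0, 1, 1],
])

cc = C @ C.T   # in-phase cross-correlation matrix
print(cc)
# Diagonal entries = code weight W = 3; every off-diagonal entry = 1.
```

A fixed cross-correlation of one is what allows a balanced (complementary-subtraction) SAC receiver to cancel the MAI contribution of all interfering users, leaving PIIN as the dominant noise term.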

  16. Sandia National Laboratories analysis code data base

    Science.gov (United States)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  17. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  18. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Kwi Hyun; Song, Wan Jung [ENERGEO Inc. Sungnam, (Korea, Republic of); Song, Dong Soo; Byun, Choong Sup [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2006-07-01

    The containment requirement is that releases of radioactive materials subsequent to an accident must not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (Design Basis Accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating containment pressures and temperatures in the equipment qualification project for the same NPPs. Benchmarking analysis between the two codes revealed differences in their models. This paper presents the performance evaluation results arising from the main model differences.

  19. Benchmarking Analysis between CONTEMPT and COPATTA Containment Codes

    International Nuclear Information System (INIS)

    Seo, Kwi Hyun; Song, Wan Jung; Song, Dong Soo; Byun, Choong Sup

    2006-01-01

    The containment requirement is that releases of radioactive materials subsequent to an accident must not result in doses in excess of the values specified in 10 CFR 100. The containment must withstand the pressure and temperature of the DBA (Design Basis Accident), including margin, without exceeding the design leakage rate. COPATTA, Bechtel's vendor code, is used for containment pressure and temperature prediction in the power uprating project for the Kori 3,4 and Yonggwang 1,2 nuclear power plants (NPPs). However, CONTEMPT-LT/028 is used for calculating containment pressures and temperatures in the equipment qualification project for the same NPPs. Benchmarking analysis between the two codes revealed differences in their models. This paper presents the performance evaluation results arising from the main model differences

  20. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
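
    The matrix-product construction itself can be sketched with toy nested binary codes (the paper's constituents are Reed-Solomon codes; the generator matrices and function names below are purely illustrative):

```python
import numpy as np

# Matrix-product construction [C1 C2]·A over GF(2) with nested toy codes
# (repetition code C2) ⊆ (even-weight code C1); minimal illustration only.
G1 = np.array([[1, 0, 0, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 1]])        # even-weight code of length 4
G2 = np.array([[1, 1, 1, 1]])        # repetition code of length 4
A = np.array([[1, 1],
              [0, 1]])               # 2x2, non-singular by columns

def mp_encode(u1, u2):
    c1 = (u1 @ G1) % 2
    c2 = (u2 @ G2) % 2
    M = np.stack([c1, c2], axis=1)       # n x s matrix, columns c1, c2
    return ((M @ A) % 2).T.reshape(-1)   # n x l product, read block-wise

cw = mp_encode(np.array([1, 0, 1]), np.array([1]))
print(cw.tolist())   # [1, 0, 1, 0, 0, 1, 0, 1]
```

    The first length-4 block is $c_1$ and the second is $c_1 + c_2$, matching the rows of $A$ above.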

  1. Development of MATRA-LMR code α-version for LMR subchannel analysis

    International Nuclear Information System (INIS)

    Kim, Won Seok; Kim, Young Gyun; Kim, Young Gin

    1998-05-01

    Since the sodium boiling point is very high, the maximum cladding and pin temperatures are used as design limit conditions in sodium-cooled liquid metal reactors. It is necessary to predict the core temperature distribution accurately to increase the sodium coolant efficiency. Based on the MATRA code, which was developed for PWR analysis, MATRA-LMR is being developed for LMR applications. The major modifications are as follows: A) the sodium properties table is implemented as a subprogram in the code; B) the heat transfer coefficients are changed for LMR conditions; C) the pressure drop correlations are replaced, for more accurate calculations, with the Novendstern, Chiu-Rohsenow-Todreas, and Cheng-Todreas correlations. To assess the development status of the MATRA-LMR code, calculations have been performed for the ORNL 19-pin and EBR-II 61-pin tests. The MATRA-LMR results are also compared with those obtained with the SLTHEN code, which uses a more simplified thermal-hydraulic model. The MATRA-LMR predictions are found to agree well with the measured values. The differences between the MATRA-LMR and SLTHEN results occur because the SLTHEN code uses a very simplified thermal-hydraulic model to reduce computing time. MATRA-LMR can currently be used only for single-assembly analysis, but an extension to multi-assembly calculation is planned. (author). 18 refs., 8 tabs., 14 figs

  2. Large break LOCA analysis for retrofitted ECCS at MAPS using modified computer code ATMIKA

    International Nuclear Information System (INIS)

    Singhal, Mukesh; Khan, T.A.; Yadav, S.K.; Pramod, P.; Rammohan, H.P.; Bajaj, S.S.

    2002-01-01

    Full text: The computer code ATMIKA, used for thermal-hydraulic analysis, is based on the unequal velocity equal temperature (UVET) model. The thermal-hydraulic transient is predicted using three conservation equations and a drift flux model. The modified drift flux model can now predict counter-current flow and the relative velocity in vertical channels more accurately. In addition, a stratification model has been introduced to predict fuel behaviour under stratified conditions. Further improvements were made to the solution of the conservation equations, the heat transfer package and the frictional pressure drop model. All these modifications have been validated against published data from the RD-12/RD-14 experiments. This paper describes the code modifications and the application of the code to the large break LOCA analysis for the retrofitted emergency core cooling system (ECCS) being implemented at Madras Atomic Power Station (MAPS). The paper also brings out the effect of the accumulator on stratification and fuel behaviour

  3. Final Report for National Transport Code Collaboration PTRANSP

    International Nuclear Information System (INIS)

    Kritz, Arnold H.

    2012-01-01

    PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13000 PTRANSP/TRANSP simulations in the four year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2000 simulations per year in FY 2005 and FY 2006 to over 4300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eightfold from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The increase in the number of PTRANSP simulations has continued (more than 7000 TRANSP/PTRANSP simulations in 2010) and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling. Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly

  4. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. This notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
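
    The unique-decipherability property that the coding-partition construction relaxes can itself be tested algorithmically. The sketch below implements the standard Sardinas-Patterson test (a classical algorithm, not taken from the paper; function names are illustrative):

```python
def is_ud(code):
    """Sardinas-Patterson test: a code is uniquely decipherable iff
    no set of dangling suffixes ever contains a codeword."""
    C = set(code)

    def dangling(A, B):
        # Suffixes w such that u + w = v for some u in A, v in B.
        return {v[len(u):] for u in A for v in B
                if v.startswith(u) and len(v) > len(u)}

    S = dangling(C, C)          # initial dangling suffixes
    seen = set()
    while S and not (S & C):
        seen |= S
        S = (dangling(C, S) | dangling(S, C)) - seen
    return not (S & C)

print(is_ud(['0', '01', '11']))   # True
print(is_ud(['0', '01', '10']))   # False: '010' = 0|10 = 01|0
```

    The second code is the classic ambiguous example: the word `010` factors two different ways, so it is not UD, although it may still admit a nontrivial coding partition in the paper's sense.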

  5. Development of Fuel ROd Behavior Analysis code (FROBA) and its application to AP1000

    International Nuclear Information System (INIS)

    Yu, Hongxing; Tian, Wenxi; Yang, Zhen; SU, G.H.; Qiu, Suizheng

    2012-01-01

    Highlights: ► A Fuel ROd Behavior Analysis code (FROBA) has been developed. ► The effects of irradiation and burnup have been considered in FROBA. ► The comparison with INL's results shows good agreement. ► The FROBA code was applied to AP1000. ► Peak fuel temperature, gap width, hoop strain, etc. were obtained. -- Abstract: Reliable prediction of nuclear fuel rod behavior is of great importance for the safety evaluation of nuclear reactors. In the present study, a thermo-mechanical coupling code, FROBA (Fuel ROd Behavior Analysis), has been independently developed with consideration of irradiation and burnup effects. The thermodynamic, geometrical and mechanical behaviors were predicted and compared with the results obtained by Idaho National Laboratory to validate the reliability and accuracy of the FROBA code. The validated code was applied to analyze the fuel behavior of AP1000 at different burnup levels. The thermal results show that the predicted peak fuel temperature passes through three stages over the fuel lifetime. The mechanical results indicate that the hoop strain at high power is greater than that at low power, which means that the gap closure phenomenon will occur earlier at high power rates. The maximum cladding stress meets the yield strength limitation throughout the fuel lifetime. All results show that there are sufficient safety margins for the fuel rod behavior of AP1000 at rated operating conditions. The FROBA code is expected to be applicable to more complicated fuel rod scenarios after some modifications.

  6. Comparison of Computational Electromagnetic Codes for Prediction of Low-Frequency Radar Cross Section

    National Research Council Canada - National Science Library

    Lash, Paul C

    2006-01-01

    The goal of this research is to compare the capabilities of three computational electromagnetic codes for use in the production of RCS signature assessments at low frequencies in terms of performance...

  7. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    Directory of Open Access Journals (Sweden)

    Eiji Watanabe

    2018-03-01

    Full Text Available The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures in which people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by predictive coding theory is one basis of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research.
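
    The core predictive-coding loop the abstract relies on — an internal model generates a prediction, and the prediction error drives updates — can be sketched in a heavily simplified form. This is not PredNet: the generative weights are fixed and only the internal state is updated, and all names and values are illustrative:

```python
import numpy as np

# Minimal predictive-coding loop in the Rao-Ballard spirit: the state r
# generates a prediction W @ r of the input x; the prediction error
# (x - W @ r) drives gradient updates to r.
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 4))               # fixed generative weights
true_causes = np.array([1.0, -0.5, 0.0, 2.0])
x = W @ true_causes                        # "sensory input"

r = np.zeros(4)                            # internal state (inferred causes)
for _ in range(2000):
    err = x - W @ r                        # prediction error
    r += 0.01 * (W.T @ err)                # error-driven state update
print(np.round(r, 3))                      # converges toward true_causes
```

    In a full predictive-coding network the weights are learned as well, and the errors propagate through a hierarchy of such layers.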

  8. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
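
    The two coding-theory quantities the abstract leans on — minimum distance and nearest-codeword (maximum likelihood) decoding — can be made concrete with a toy binary code (hypothetical, not one of the paper's receptive field codes):

```python
def hamming(u, v):
    """Hamming distance between two equal-length binary strings."""
    return sum(a != b for a, b in zip(u, v))

def min_distance(code):
    """Minimum pairwise distance; the code corrects floor((d-1)/2)
    errors under nearest-codeword decoding."""
    return min(hamming(u, v) for i, u in enumerate(code) for v in code[i + 1:])

def decode(word, code):
    """Nearest-codeword decoding."""
    return min(code, key=lambda c: hamming(word, c))

# Toy code: redundancy (length 5 for 2 bits of information) does not by
# itself guarantee a large minimum distance -- the paper's point about
# receptive field codes.
code = ['00000', '11100', '00111', '11011']
d = min_distance(code)
print('d =', d, '-> corrects', (d - 1) // 2, 'error(s)')
print(decode('10100', code))   # one bit flipped from '11100'
```

    Here `d = 3`, so a single flipped bit is always recovered, but two flips can land nearer a wrong codeword.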

  9. The drift flux model in the ASSERT subchannel code

    International Nuclear Information System (INIS)

    Carver, M.B.; Judd, R.A.; Kiteley, J.C.; Tahir, A.

    1987-01-01

    The ASSERT subchannel code has been developed specifically to model flow and phase distributions within CANDU fuel bundles. ASSERT uses a drift-flux model that permits the phases to have unequal velocities, and can thus model phase separation tendencies that may occur in horizontal flow. The basic principles of ASSERT are outlined, and computed results are compared against data from various experiments for validation purposes. The paper concludes with an example of the use of the code to predict critical heat flux in CANDU geometries
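
    The drift-flux closure that lets ASSERT give the phases unequal velocities can be sketched in its simplest one-dimensional form. The distribution parameter and drift velocity below are illustrative round numbers, not ASSERT's actual correlations:

```python
def drift_flux_void_fraction(j_g, j_l, C0=1.13, V_gj=0.25):
    """Void fraction from the one-dimensional drift-flux relation
    alpha = j_g / (C0 * j + V_gj), with total superficial velocity
    j = j_g + j_l. C0 (distribution parameter) and V_gj (drift
    velocity, m/s) are illustrative values only."""
    j = j_g + j_l
    return j_g / (C0 * j + V_gj)

# Hypothetical channel: gas and liquid superficial velocities in m/s.
alpha = drift_flux_void_fraction(j_g=0.5, j_l=1.5)
print(f"alpha = {alpha:.3f}")
```

    With `C0 = 1` and `V_gj = 0` this reduces to the homogeneous (equal-velocity) model; the extra parameters are what allow phase-separation tendencies such as those in horizontal CANDU bundles to be represented.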

  10. Steam condensation modelling in aerosol codes

    International Nuclear Information System (INIS)

    Dunbar, I.H.

    1986-01-01

    The principal subject of this study is the modelling of the condensation of steam into and evaporation of water from aerosol particles. These processes introduce a new type of term into the equation for the development of the aerosol particle size distribution. This new term faces the code developer with three major problems: the physical modelling of the condensation/evaporation process, the discretisation of the new term, and the separate accounting for the masses of the water and of the other components. This study has considered four codes which model the condensation of steam into and its evaporation from aerosol particles: AEROSYM-M (UK), AEROSOLS/B1 (France), NAUA (Federal Republic of Germany) and CONTAIN (USA). The modelling in the codes has been addressed under three headings: the physical modelling of condensation, the mathematics of the discretisation of the equations, and the methods for modelling the separate behaviour of different chemical components of the aerosol. The codes are least advanced in the area of solute effect modelling. At present only AEROSOLS/B1 includes the effect. The effect is greater for more concentrated solutions. Codes without the effect will be more in error (underestimating the total airborne mass) the less condensation they predict. Data are needed on the water vapour pressure above concentrated solutions of the substances of interest (especially CsOH and CsI) if the extent to which aerosols retain water under superheated conditions is to be modelled. 15 refs
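
    The solute effect the review singles out can be sketched with an ideal-solution (Raoult's law) estimate of the water activity over a CsOH solution droplet. This is a simplification for illustration: real codes would use measured vapour-pressure data, and the full-dissociation assumption below is an assumption of this sketch:

```python
# Raoult's-law sketch of the solute effect: the equilibrium water vapour
# pressure over a solution droplet is a_w * p_sat, with a_w the mole-
# fraction water activity. Full dissociation of CsOH is assumed.
M_W, M_CSOH = 18.015, 149.91         # molar masses, g/mol

def water_activity(mass_frac_solute, nu=2):
    """Ideal-solution water activity; nu = ions per CsOH formula unit."""
    n_w = (1 - mass_frac_solute) / M_W
    n_s = nu * mass_frac_solute / M_CSOH
    return n_w / (n_w + n_s)

for w in (0.1, 0.3, 0.5):
    print(f"{w:.0%} CsOH: a_w = {water_activity(w):.3f}")
```

    The activity falls as the solution concentrates, which is why the review notes the effect is greater for more concentrated solutions: the droplet can stay wet even when the gas phase is slightly superheated with respect to pure water.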

  11. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
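
    A systematic LDGM encoder of the kind the paper builds on can be sketched as a sparse parity computation over GF(2). The dimensions and row weight below are arbitrary toy values, and the construction of the sparse matrix is illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, row_weight = 8, 8, 2    # message bits, parity bits, ones per row (toy sizes)

# Sparse parity part P of a systematic generator G = [I | P] over GF(2):
# each message bit feeds only row_weight parity checks.
P = np.zeros((k, m), dtype=int)
for i in range(k):
    P[i, rng.choice(m, size=row_weight, replace=False)] = 1

def encode(u):
    """Systematic LDGM encoding: codeword = [message | parity]."""
    return np.concatenate([u, (u @ P) % 2])

u = rng.integers(0, 2, size=k)
c = encode(u)
print(c[:k].tolist() == u.tolist())   # systematic: message appears verbatim
```

    The low density of `P` is what keeps encoding cheap and makes iterative (message-passing) decoding effective; the concatenation mentioned in the abstract addresses the error floor that a plain LDGM code exhibits.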

  12. Reactor Fuel Isotopics and Code Validation for Nuclear Applications

    Energy Technology Data Exchange (ETDEWEB)

    Francis, Matthew W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Weber, Charles F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pigni, Marco T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gauld, Ian C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-02-01

    Experimentally measured isotopic concentrations of well characterized spent nuclear fuel (SNF) samples have been collected and analyzed by previous researchers. These sets of experimental data have been used extensively to validate the accuracy of depletion code predictions for given sets of burnups, initial enrichments, and varying power histories for different reactor types. The purpose of this report is to present the diversity of data in a concise manner and summarize the current accuracy of depletion modeling. All calculations performed for this report were done using the Oak Ridge Isotope GENeration (ORIGEN) code, an internationally used irradiation and decay code solver within the SCALE comprehensive modeling and simulation code. The diversity of data given in this report includes key actinides, stable fission products, and radioactive fission products. In general, when using the current ENDF/B-VII.0 nuclear data libraries in SCALE, the major actinides are predicted to within 5% of the measured values. Large improvements were seen for several of the curium isotopes when using improved cross section data found in evaluated nuclear data file ENDF/B-VII.0 as compared to ENDF/B-V-based results. The impact of the flux spectrum on the plutonium isotope concentrations as a function of burnup was also shown. The general accuracy noted for the actinide samples for reactor types with burnups greater than 5,000 MWd/MTU was not observed for the low-burnup Hanford B samples. More work is needed in understanding these large discrepancies. The stable neodymium and samarium isotopes were predicted to within a few percent of the measured values. Large improvements were seen in prediction for a few of the samarium isotopes when using the ENDF/B-VII.0 libraries compared to results obtained with ENDF/B-V libraries. Very accurate predictions were obtained for 133Cs and 153Eu. However, the predicted values for the stable ruthenium and rhodium isotopes varied

  13. Computer simulation for prediction of performance and thermodynamic parameters of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Talawar, M.B.; Asthana, S.N.

    2004-01-01

    A new code, viz. Linear Output Thermodynamic User-friendly Software for Energetic Systems (LOTUSES), developed during this work predicts theoretical performance parameters such as density, detonation factor, velocity of detonation and detonation pressure, and thermodynamic properties such as heat of detonation, heat of explosion, and volume of gaseous explosion products. The code also assists in the prediction of possible explosive decomposition products after explosion, and of the power index. The developed code has been validated by calculating the parameters of standard explosives such as TNT, PETN, RDX, and HMX. Theoretically predicted parameters are accurate to within about ±5%. To the best of our knowledge, no code is reported in the literature that can predict such a wide range of characteristics of known/unknown explosives with minimum input parameters. The code can be used to obtain thermochemical and performance parameters of high energy materials (HEMs) with reasonable accuracy. The code has been developed in Visual Basic with an enhanced Windows environment, and thereby has advantages over conventional codes written in Fortran. The theoretically predicted HEM performance can be printed directly as well as stored in text (.txt), HTML (.htm), Microsoft Word (.doc) or Adobe Acrobat (.pdf) format on the hard disk. The output can also be copied to the clipboard and imported/pasted into other software, as with other codes
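
    The abstract does not reveal LOTUSES's internal method, but detonation velocity and pressure of this kind are commonly estimated with the empirical Kamlet-Jacobs relations. The sketch below uses literature-typical RDX inputs; it is a standard textbook approach, not necessarily what LOTUSES implements:

```python
import math

def kamlet_jacobs(rho, N, M, Q):
    """Empirical Kamlet-Jacobs estimates for CHNO explosives.
    rho: loading density (g/cm^3), N: moles of gaseous products per gram,
    M: mean molar mass of gaseous products (g/mol), Q: heat of
    detonation (cal/g). Returns (D in km/s, P in kbar)."""
    phi = N * math.sqrt(M) * math.sqrt(Q)
    D = 1.01 * math.sqrt(phi) * (1 + 1.30 * rho)
    P = 15.58 * rho**2 * phi
    return D, P

# Literature-typical inputs for RDX at full density:
D, P = kamlet_jacobs(rho=1.80, N=0.0338, M=27.2, Q=1500.0)
print(f"D = {D:.1f} km/s, P = {P:.0f} kbar")
```

    The computed values land near the measured RDX detonation velocity (~8.8 km/s) and Chapman-Jouguet pressure (~345 kbar), which illustrates how far simple thermochemical inputs go.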

  14. Benchmarking of the PHOENIX-P/ANC [Advanced Nodal Code] advanced nuclear design system

    International Nuclear Information System (INIS)

    Nguyen, T.Q.; Liu, Y.S.; Durston, C.; Casadei, A.L.

    1988-01-01

    At Westinghouse, an advanced neutronic methods program was designed to improve the quality of the predictions, enhance flexibility in designing advanced fuel and related products, and improve design lead time. Extensive benchmarking data is presented to demonstrate the accuracy of the Advanced Nodal Code (ANC) and the PHOENIX-P advanced lattice code. Qualification data to demonstrate the accuracy of ANC include comparison of key physics parameters against a fine-mesh diffusion theory code, TORTISE. Benchmarking data to demonstrate the validity of the PHOENIX-P methodologies include comparison of physics predictions against critical experiments, isotopics measurements and measured power distributions from spatial criticals. The accuracy of the PHOENIX-P/ANC Advanced Design System is demonstrated by comparing predictions of hot zero power physics parameters and hot full power core follow against measured data from operating reactors. The excellent performance of this system for a broad range of comparisons establishes the basis for implementation of these tools for core design, licensing and operational follow of PWR [pressurized water reactor] cores at Westinghouse

  15. Review of the GOTHIC code and trial application

    Energy Technology Data Exchange (ETDEWEB)

    Lacroix, M; Galanis, N; Millette, J [Marcel Lacroix Enr., Sherbrooke, PQ (Canada)

    1996-01-01

    A critical review of the performance of the generic computer code GOTHIC for the generation of thermalhydraulic information for containments was conducted. Several analyses were performed with GOTHIC to predict the flow behaviour and distribution of hydrogen concentration within containments whose geometrical complexity ranged from two simple interconnected rooms to a full scale reactor building. Sensitivity analysis studies were carried out to examine the effect of various modeling parameters. The implementation of physics by the code is reviewed and recommendations on its use for performing blowdown/hydrogen release analyses are made. (author) 5 refs., 9 tabs., 105 figs.

  16. Review of the GOTHIC code and trial application

    International Nuclear Information System (INIS)

    Lacroix, M.; Galanis, N.; Millette, J.

    1996-01-01

    A critical review of the performance of the generic computer code GOTHIC for the generation of thermalhydraulic information for containments was conducted. Several analyses were performed with GOTHIC to predict the flow behaviour and distribution of hydrogen concentration within containments whose geometrical complexity ranged from two simple interconnected rooms to a full scale reactor building. Sensitivity analysis studies were carried out to examine the effect of various modeling parameters. The implementation of physics by the code is reviewed and recommendations on its use for performing blowdown/hydrogen release analyses are made. (author) 5 refs., 9 tabs., 105 figs

  17. Massive Predictive Modeling using Oracle R Enterprise

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    R is fast becoming the lingua franca for analyzing data via statistics, visualization, and predictive analytics. For enterprise-scale data, R users have three main concerns: scalability, performance, and production deployment. Oracle's R-based technologies - Oracle R Distribution, Oracle R Enterprise, Oracle R Connector for Hadoop, and the R package ROracle - address these concerns. In this talk, we introduce Oracle's R technologies, highlighting how each enables R users to achieve scalability and performance while making production deployment of R results a natural outcome of the data analyst/scientist efforts. The focus then turns to Oracle R Enterprise with code examples using the transparency layer and embedded R execution, targeting massive predictive modeling. One goal behind massive predictive modeling is to build models per entity, such as customers, zip codes, simulations, in an effort to understand behavior and tailor predictions at the entity level. Predictions...

  18. SPEEDI: system for prediction of environmental emergency dose information

    International Nuclear Information System (INIS)

    Chino, Masamichi; Ishikawa, Hirohiko; Kai, Michiaki

    1984-03-01

    In this report a computer code system for the prediction of environmental emergency dose information, SPEEDI for short, is presented. In the case of an accidental release of radioactive materials from a nuclear plant, it is very important for emergency planning to predict the concentration and dose caused by the released materials. The SPEEDI code system has been developed for this purpose; it predicts, by calculation, the released nuclides, wind fields, concentrations and doses based on release information, actual weather data and topography. (author)
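
    The simplest dispersion kernel behind concentration predictions of this kind is the textbook Gaussian plume; SPEEDI's actual wind-field and dispersion models are more elaborate, and all numbers below are a hypothetical release scenario:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Textbook Gaussian plume concentration (g/m^3) with ground
    reflection. sigma_y, sigma_z grow with downwind distance and
    atmospheric stability; here they are supplied directly."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical release: 1 g/s source, 5 m/s wind, 50 m stack height,
# ground-level receptor on the plume centreline roughly 1 km downwind.
c = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"{c:.2e} g/m^3")
```

    Converting such a concentration to dose then requires the nuclide inventory and dose conversion factors, which is the part of the chain an emergency-dose system like SPEEDI automates.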

  19. Harmonic Enhancement in Low Bitrate Audio Coding Using an Efficient Long-Term Predictor

    Directory of Open Access Journals (Sweden)

    Song Jeongook

    2010-01-01

    Full Text Available This paper proposes audio coding using an efficient long-term prediction method to enhance the perceptual quality of audio codecs for speech input signals at low bit rates. MPEG-4 AAC-LTP exploited a similar concept, but its improvement was not significant because of the small prediction gain due to long prediction lags and aliased components caused by the transformation with the time-domain aliasing cancellation (TDAC) technique. The proposed algorithm increases the prediction gain by employing a de-harmonizing predictor and a long-term compensation filter. The look-back memory elements are first constructed by applying the de-harmonizing predictor to the input signal; the prediction residual is then encoded and decoded by transform audio coding. Finally, the long-term compensation filter is applied to the updated look-back memory of the decoded prediction residual to obtain the synthesized signal. Experimental results show that the proposed algorithm has much lower spectral distortion and higher perceptual quality than conventional approaches, especially for harmonic signals such as voiced speech.

  20. Compendium of computer codes for the researcher in magnetic fusion energy

    International Nuclear Information System (INIS)

    Porter, G.D.

    1989-01-01

    This is a compendium of computer codes available to the fusion researcher. It is intended as a document that permits a quick evaluation of the tools available to the experimenter who wants both to analyze his data and to compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs

  1. Application of the thermal-hydraulic codes in VVER-440 steam generators modelling

    Energy Technology Data Exchange (ETDEWEB)

    Matejovic, P.; Vranca, L.; Vaclav, E. [Nuclear Power Plant Research Inst. VUJE (Slovakia)

    1995-12-31

    Performance of the CATHARE2 V1.3U and RELAP5/MOD3.0 codes applied to VVER-440 SG modelling during normal conditions and during a transient with secondary water level lowering is described. A similar recirculation model was chosen for both codes. In the CATHARE calculation, no special measures were taken to artificially optimize the flow rate distribution coefficients for the junction between the SG riser and the steam dome. Contrary to the RELAP code, the CATHARE code is able to predict the secondary swell level reasonably well under nominal conditions. Both codes are able to model properly the natural phase separation at the SG water level. 6 refs.

  3. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  4. Comparison of the ENIGMA code with experimental data on thermal performance, stable fission gas and iodine release at high burnup

    Energy Technology Data Exchange (ETDEWEB)

    Killeen, J C [Nuclear Electric plc, Barnwood (United Kingdom)

    1997-08-01

    The predictions of the ENIGMA code have been compared with data from high burn-up fuel experiments in the Halden and RISO reactors. The experiments modelled were IFA-504 and IFA-558 from Halden and test II-5 from the RISO power burn-up test series. The code modelled the fuel thermal performance well and provided a good measure of iodine release from pre-interlinked fuel. After interlinkage the iodine predictions remain a good fit for one experiment, but there is significant overprediction for a second experiment (IFA-558). Stable fission gas release is also well modelled, and the predictions are within the expected uncertainty band throughout the burn-up range. This report presents code predictions for stable fission gas release to 40 GWd/tU, iodine release measurements to 50 GWd/tU and thermal performance (fuel centre temperature) to 55 GWd/tU. Fuel ratings of up to 38 kW/m were modelled at the high burn-up levels. The code is shown to predict all these parameters accurately or conservatively. (author). 1 ref., 6 figs.

  5. Hauser*5, a computer code to calculate nuclear cross sections

    International Nuclear Information System (INIS)

    Mann, F.M.

    1979-07-01

    HAUSER*5 is a computer code that uses the statistical (Hauser-Feshbach) model, the pre-equilibrium model, and a statistical model of direct reactions to predict nuclear cross sections. The code is unrestricted as to particle type, includes fission and capture, makes width-fluctuation corrections, and performs three-body calculations - all in minimum computer time. Transmission coefficients can be generated internally or supplied externally. This report describes equations used, necessary input, and resulting output. 2 figures, 4 tables
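
    The statistical (Hauser-Feshbach) model that the code evaluates can be sketched in its schematic single-channel-pair form. The Python illustration below is only that: the function name is invented here, and the spin summations, width-fluctuation corrections, pre-equilibrium and direct-reaction contributions of HAUSER*5 are all omitted.

    ```python
    import math

    def hauser_feshbach_xs(T_in, T_out, T_all, k, g=1.0):
        """Schematic compound-nucleus cross section:
            sigma(a->b) = (pi / k^2) * g * T_a * T_b / sum_c T_c
        where T_* are channel transmission coefficients, k is the
        entrance-channel wavenumber and g a statistical spin factor."""
        total = sum(T_all)
        if total == 0.0:
            return 0.0
        return (math.pi / k ** 2) * g * T_in * T_out / total

    # Example: two equally open channels share the compound-nucleus flux.
    sigma = hauser_feshbach_xs(0.5, 0.5, [0.5, 0.5], k=1.0)
    ```

    The formula shows why transmission coefficients (internally generated or externally supplied, as the abstract notes) are the code's central input: every channel competes through the sum in the denominator.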

  6. The modelling of wall condensation with noncondensable gases for the containment codes

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, C.; Coste, P.; Barthel, V.; Deslandes, H. [Commissariat a l'Energie Atomique, Grenoble (France)

    1995-09-01

    This paper presents several approaches to the modelling of wall condensation in the presence of noncondensable gases for containment codes. Lumped-parameter modelling and local modelling by 3-D codes are discussed. Containment analysis codes should be able to predict the spatial distributions of steam, air, and hydrogen as well as the efficiency of cooling by wall condensation in both natural convection and forced convection situations. 3-D calculations with turbulent diffusion modelling are necessary, since diffusion controls the local condensation whereas the wall condensation may redistribute the air and hydrogen mass in the containment. A fine-mesh model of film condensation in forced convection has been developed, taking into account the influence of the suction velocity at the liquid-gas interface. It is associated with the 3-D model of the TRIO code for the gas mixture, where a k-ε turbulence model is used. The predictions are compared to Huhtiniemi's experimental data. The modelling of condensation in natural or mixed convection is more complex. As no universal velocity and temperature profiles exist for such boundary layers, a very fine nodalization is necessary. Simpler models integrate the equations over the boundary layer thickness, using the heat and mass transfer analogy. The model predictions are compared with an MIT experiment. For the containment compartments a two-node model is proposed, using the lumped-parameter approach. Heat and mass transfer coefficients are tested on separate effect tests and containment experiments. The CATHARE code has been adapted to perform such calculations and shows reasonable agreement with data.
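
    The heat and mass transfer analogy invoked for the integrated boundary-layer models can be illustrated with a minimal sketch. The Chilton-Colburn form below, the function names and the Lewis number value are illustrative assumptions, not the CATHARE or TRIO implementation.

    ```python
    def mass_transfer_coeff(h_conv, rho_gas, cp_gas, lewis=0.87):
        """Chilton-Colburn analogy: convert a convective heat transfer
        coefficient h [W/(m^2 K)] into a mass transfer coefficient [m/s]:
            h_m = h / (rho * cp * Le^(2/3))."""
        return h_conv / (rho_gas * cp_gas * lewis ** (2.0 / 3.0))

    def condensation_flux(h_m, rho_gas, x_steam_bulk, x_steam_interface):
        """Diffusion-layer estimate of the condensate mass flux [kg/(m^2 s)]
        driven by the steam mass-fraction difference across the
        noncondensable-rich boundary layer (clipped at zero)."""
        return h_m * rho_gas * max(x_steam_bulk - x_steam_interface, 0.0)
    ```

    The second function makes the paper's point concrete: when noncondensables accumulate at the wall, the interface steam fraction rises toward the bulk value and the condensation flux collapses.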

  7. International benchmark study of advanced thermal hydraulic safety analysis codes against measurements on IEA-R1 research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hainoun, A., E-mail: pscientific2@aec.org.sy [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Doval, A. [Nuclear Engineering Department, Av. Cmdt. Luis Piedrabuena 4950, C.P. 8400 S.C de Bariloche, Rio Negro (Argentina); Umbehaun, P. [Centro de Engenharia Nuclear – CEN, IPEN-CNEN/SP, Av. Lineu Prestes 2242-Cidade Universitaria, CEP-05508-000 São Paulo, SP (Brazil); Chatzidakis, S. [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States); Ghazi, N. [Atomic Energy Commission of Syria (AECS), Nuclear Engineering Department, P.O. Box 6091, Damascus (Syrian Arab Republic); Park, S. [Research Reactor Design and Engineering Division, Basic Science Project Operation Dept., Korea Atomic Energy Research Institute (Korea, Republic of); Mladin, M. [Institute for Nuclear Research, Campului Street No. 1, P.O. Box 78, 115400 Mioveni, Arges (Romania); Shokr, A. [Division of Nuclear Installation Safety, Research Reactor Safety Section, International Atomic Energy Agency, A-1400 Vienna (Austria)

    2014-12-15

    Highlights: • A set of advanced system thermal hydraulic codes are benchmarked against the IFA of IEA-R1. • Comparative safety analysis of the IEA-R1 reactor during LOFA by 7 working teams. • This work covers both experimental and calculation effort and presents new findings on the TH of RR that have not been reported before. • LOFA results show discrepancies of 7% to 20%; coolant and peak clad temperatures are predicted conservatively. - Abstract: In the framework of the IAEA Coordination Research Project on “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal hydraulic computational methods and tools for operation and safety analysis of research reactors” the Brazilian research reactor IEA-R1 has been selected as the reference facility to perform benchmark calculations for a set of thermal hydraulic codes widely used by international teams in the field of research reactor (RR) deterministic safety analysis. The goal of the conducted benchmark is to demonstrate the application of innovative reactor analysis tools in the research reactor community, to validate the applied codes, and to apply the validated codes to perform comprehensive safety analysis of RRs. The IEA-R1 is equipped with an Instrumented Fuel Assembly (IFA) which provided measurements for normal operation and a loss of flow transient. The measurements comprised coolant and cladding temperatures, reactor power and flow rate. Temperatures are measured at three different radial and axial positions of the IFA, summing up to 12 measuring points in addition to the coolant inlet and outlet temperatures. The considered benchmark deals with the loss of reactor flow and the subsequent flow reversal from downward forced to upward natural circulation, and therefore presents relevant phenomena for RR safety analysis. The benchmark calculations were performed independently by the participating teams using different thermal hydraulic and safety

  8. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  9. Assessment of horizontal in-tube condensation models using MARS code. Part I: Stratified flow condensation

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Seong-Su [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Hong, Soon-Joon, E-mail: sjhong90@fnctech.com [Department of Engineering Project, FNC Technology Co., Ltd., Bldg. 135-308, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of); Park, Ju-Yeop; Seul, Kwang-Won [Korea Institute of Nuclear Safety, 19 Kuseong-dong, Yuseong-gu, Daejon (Korea, Republic of); Park, Goon-Cherl [Department of Nuclear Engineering, Seoul National University, Gwanak-gu, Seoul 151-744 (Korea, Republic of)

    2013-01-15

    Highlights: • This study collected 11 horizontal in-tube condensation models for stratified flow. • This study assessed the predictive capability of the models for steam condensation. • Purdue-PCCS experiments were simulated using the MARS code incorporating the models. • The Cavallini et al. (2006) model predicts the data well for stratified flow conditions. • The results of this study can be used to improve the condensation model in RELAP5 or MARS. - Abstract: The accurate prediction of horizontal in-tube condensation heat transfer is a primary concern in the optimum design and safety analysis of horizontal heat exchangers of passive safety systems such as the passive containment cooling system (PCCS), the emergency condenser system (ECS) and the passive auxiliary feed-water system (PAFS). It is essential to analyze and assess the predictive capability of previous horizontal in-tube condensation models for each flow regime using various experimental data. This study assessed a total of 11 condensation models for stratified flow, one of the main flow regimes encountered in horizontal condensers, against the heat transfer data from the Purdue-PCCS experiment using the multi-dimensional analysis of reactor safety (MARS) code. From the assessments, it was found that the models by Akers and Rosson, Chato, Tandon et al., Sweeney and Chato, and Cavallini et al. (2002) under-predicted the data in the main condensation heat transfer region; in contrast, the models by Rosson and Meyers, Jaster and Kosky, Fujii, Dobson and Chato, and Thome et al. predicted the data comparably or over-predicted it, and in particular the Cavallini et al. (2006) model shows good predictive capability for all test conditions. The results of this study can be used to improve the condensation models in thermal hydraulic codes such as RELAP5 or MARS.
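
    As an illustration of one of the assessed closures, a Chato-type stratified-flow correlation has the Nusselt film-condensation form sketched below. The 0.555 constant and the 3/8 subcooling correction follow the common textbook statement of the correlation; this is an illustration, not the exact MARS implementation.

    ```python
    def chato_htc(rho_l, rho_v, g, k_l, mu_l, h_fg, cp_l, dT, D):
        """Chato-type correlation for stratified in-tube condensation:
            h = 0.555 * [g * rho_l * (rho_l - rho_v) * k_l^3 * h_fg'
                         / (mu_l * dT * D)]^0.25
        with h_fg' = h_fg + (3/8) * cp_l * dT (subcooling-corrected
        latent heat), dT = T_sat - T_wall, and D the tube inner diameter."""
        h_fg_prime = h_fg + 0.375 * cp_l * dT
        return 0.555 * (g * rho_l * (rho_l - rho_v) * k_l ** 3 * h_fg_prime
                        / (mu_l * dT * D)) ** 0.25
    ```

    The quarter-power dependence explains the qualitative trend the assessment probes: the coefficient falls slowly as the wall subcooling dT grows, so errors in the film model translate into systematic under- or over-prediction across test conditions.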

  10. Electron and ion cyclotron heating calculations in the tandem-mirror modeling code MERTH

    International Nuclear Information System (INIS)

    Smith, G.R.

    1985-01-01

    To better understand and predict tandem-mirror experiments, we are building a comprehensive Mirror Equilibrium Radial Transport and Heating (MERTH) code. In this paper we first describe our method for developing the code. Then we report our plans for the installation of physics packages for electron- and ion-cyclotron heating of the plasma

  11. NASA Lewis steady-state heat pipe code users manual

    International Nuclear Information System (INIS)

    Tower, L.K.

    1992-06-01

    The NASA Lewis heat pipe code has been developed to predict the performance of heat pipes in the steady state. The code can be used as a design tool on a personal computer or, with a suitable calling routine, as a subroutine for a mainframe radiator code. A variety of wick structures, including a user input option, can be used. Heat pipes with multiple evaporators, condensers, and adiabatic sections in series and with wick structures that differ among sections can be modeled. Several working fluids can be chosen, including potassium, sodium, and lithium, for which the monomer-dimer equilibrium is considered. The code incorporates a vapor flow algorithm that treats compressibility and axially varying heat input. This code facilitates the determination of heat pipe operating temperatures and heat pipe limits that may be encountered at the specified heat input and environment temperature. Data are input to the computer through a user-interactive input subroutine. Output, such as liquid and vapor pressures and temperatures, is printed at equally spaced axial positions along the pipe as determined by the user
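
    One of the heat pipe limits such a code reports, the capillary limit, has a simple closed-form textbook estimate, sketched below. The function name, property values and the neglect of gravity and vapor pressure drop are simplifications of this sketch, not the NASA Lewis code's algorithm.

    ```python
    def capillary_limit(sigma, rho_l, h_fg, mu_l, A_wick, K_perm, L_eff, r_cap):
        """Textbook estimate of the capillary heat-transport limit:
            Q_max = (rho_l * sigma * h_fg / mu_l)   liquid 'merit number'
                    * (A_wick * K_perm / L_eff)     wick flow conductance
                    * (2 / r_cap)                   max capillary pressure head
        Body forces (gravity) and the vapor pressure drop are neglected."""
        merit = rho_l * sigma * h_fg / mu_l
        return merit * (A_wick * K_perm / L_eff) * (2.0 / r_cap)
    ```

    The factorization shows why working-fluid choice (the merit number) and wick structure (area, permeability, pore radius) enter the code as separate inputs: doubling the wick cross-section doubles the transportable power, all else equal.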

  12. FLASH: A finite element computer code for variably saturated flow

    International Nuclear Information System (INIS)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: a description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A
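
    Variably saturated flow codes of this kind close Richards' equation with constitutive relations for retention and relative conductivity; a common choice is the van Genuchten-Mualem model, sketched below. The function names and parameter values are illustrative assumptions, not FLASH's actual constitutive library.

    ```python
    def vg_saturation(psi, alpha, n):
        """van Genuchten effective saturation Se(psi) for pressure head
        psi [m] (psi < 0 in the unsaturated zone):
            Se = [1 + (alpha*|psi|)^n]^(-m),  m = 1 - 1/n.
        Returns 1.0 at or above saturation (psi >= 0)."""
        if psi >= 0.0:
            return 1.0
        m = 1.0 - 1.0 / n
        return (1.0 + (alpha * abs(psi)) ** n) ** (-m)

    def vg_conductivity(se, K_sat, n):
        """Mualem-van Genuchten unsaturated hydraulic conductivity:
            K = Ks * sqrt(Se) * (1 - (1 - Se^(1/m))^m)^2,  m = 1 - 1/n."""
        m = 1.0 - 1.0 / n
        return K_sat * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
    ```

    These two curves are what make the vadose-zone problem nonlinear: conductivity can fall by orders of magnitude as an arid-site profile dries out.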

  13. Application of the NJOY code for unresolved resonance treatment in the MCNP utility code

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J. (E-mail addresses of corresponding authors: mmilos@vin.bg.ac.yu, vujic@nuc.berkeley.edu)

    2005-01-01

    There are numerous uncertainties in the prediction of neutronic characteristics of reactor cores, particularly in the case of innovative reactor designs, arising from approximations used in the solution of the transport equation, and in nuclear data processing and cross section libraries generation. This paper describes the problems encountered in the analysis of the Encapsulated Nuclear Heat Source (ENHS) benchmark core and the new procedures and cross section libraries developed to overcome these problems. The ENHS is a new lead-bismuth or lead cooled novel reactor concept that is fuelled with metallic alloy of Pu, U and Zr, and it is designed to operate for 20 effective full power years without refuelling and with very small burnup reactivity swing. The computational tools benchmarked include: MOCUP - a coupled MCNP-4C and ORIGEN2.1 utility codes with MCNP data libraries based on the ENDF/B-VI evaluations; and KWO2 - a coupled KENO-V.a and ORIGEN2.1 code with ENDFB-V.2 based 238 group library. Calculations made for the ENHS benchmark have shown that the differences between the results obtained using different code systems and cross section libraries are significant and should be taken into account in assessing the quality of nuclear data libraries. (author)

  14. Depletion methodology in the 3-D whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Zee, Sung Quun

    2005-02-01

    The three-dimensional whole-core transport code DeCART has been developed to provide the characteristics of a numerical reactor that can partly replace experiments. This code adopts a deterministic method to simulate neutron behavior with the fewest possible assumptions and approximations. The neutronics code is also coupled with a thermal-hydraulic CFD code and a thermo-mechanical code to simulate the combined effects. A depletion module has been implemented in the DeCART code to predict the depleted composition of the fuel. The exponential matrix method of ORIGEN-2 is used for the depletion calculation. The library, including decay constants, the yield matrix and other data, has been greatly simplified for calculation efficiency. This report summarizes the theoretical backgrounds and includes the verification of the depletion module in DeCART by performing benchmark calculations.
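
    The exponential matrix method solves the depletion (Bateman) equations dN/dt = A N as N(t) = exp(A t) N(0). The pure-Python sketch below uses a plain truncated Taylor series and a toy two-nuclide chain; ORIGEN-2's actual scaled algorithm and nuclide library are far more elaborate.

    ```python
    import math

    def mat_exp(A, terms=40):
        """exp(A) for a small dense matrix via a truncated Taylor series."""
        n = len(A)
        result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
        term = [row[:] for row in result]
        for k in range(1, terms):
            # term_k = A^k / k!
            term = [[sum(term[i][l] * A[l][j] for l in range(n)) / k
                     for j in range(n)] for i in range(n)]
            result = [[result[i][j] + term[i][j] for j in range(n)]
                      for i in range(n)]
        return result

    def deplete(N0, A, t):
        """N(t) = exp(A*t) N(0) for a decay/transmutation matrix A [1/s]."""
        E = mat_exp([[a * t for a in row] for row in A])
        n = len(N0)
        return [sum(E[i][j] * N0[j] for j in range(n)) for i in range(n)]

    # Toy chain: parent (decay constant lam) -> stable daughter.
    lam = 0.1
    A = [[-lam, 0.0],
         [lam, 0.0]]
    N = deplete([1.0, 0.0], A, t=5.0)
    ```

    For this chain the exact answer is N_parent = exp(-lam*t) with the daughter taking up the balance, which makes the method easy to verify before applying it to a full transmutation matrix.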

  15. Developments in the Generation and Interpretation of Wire Codes (invited paper)

    International Nuclear Information System (INIS)

    Ebi, K.L.

    1999-01-01

    Three new developments in the generation and interpretation of wire codes are discussed. First, a method was developed to computer generate wire codes using data gathered from a utility database of the local distribution system and from tax assessor records. This method was used to wire code more than 250,000 residences in the greater Denver metropolitan area. There was an approximate 75% agreement with field wire coding. Other research in Denver suggests that wire codes predict some characteristics of a residence and its neighbourhood, including age, assessed value, street layout and traffic density. A third new development is the case-specular method to study the association between wire codes and childhood cancers. Recent results from applying the method to the Savitz et al and London et al studies suggest that the associations between childhood cancer and VHCC residences were strongest for residences with a backyard rather than street service drop, and for VHCC residences with LCC speculars. (author)

  16. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
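
    The sample-to-sample prediction scheme described here underlies what is now known as Rice coding. The sketch below shows the residual mapping and the Golomb-Rice bit layout; the function names, and the use of a fixed parameter k rather than the paper's adaptive per-block code selection, are simplifications of this sketch.

    ```python
    def zigzag(d):
        """Map a signed prediction residual to a non-negative integer
        (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
        return 2 * d if d >= 0 else -2 * d - 1

    def rice_encode(n, k):
        """Golomb-Rice codeword: quotient n >> k in unary, then a 0
        terminator, then the remainder in k binary bits."""
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

    def rice_decode(bits, k):
        """Inverse of rice_encode for a single codeword."""
        q = bits.index("0")
        r = int(bits[q + 1:q + 1 + k], 2) if k else 0
        return (q << k) | r

    # Code a pixel line by transmitting differences from the previous sample.
    line = [100, 101, 99, 99, 104]
    residuals = [line[0]] + [b - a for a, b in zip(line, line[1:])]
    stream = "".join(rice_encode(zigzag(d), 3) for d in residuals)
    ```

    Because small residuals dominate after prediction, short codewords dominate the stream, which is how the system approaches the one-dimensional difference entropy without storing any code tables.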

  17. Protograph LDPC Codes with Node Degrees at Least 3

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher

    2006-01-01

    In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance and thereby achieve a low error floor, and also to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved at higher rates. Through examples we show that iterative decoding thresholds as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
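
    A protograph becomes a full-length code by "lifting": each edge of the small base graph is replaced by a Z x Z circulant permutation, which preserves the protograph's node degrees in the expanded parity-check matrix. The base matrix and shift values below are toy inputs for illustration, not the codes of the paper.

    ```python
    def lift_protograph(base, shifts, Z):
        """Expand a binary protomatrix into a quasi-cyclic LDPC parity-check
        matrix: each 1 in `base` becomes a Z x Z identity cyclically shifted
        by the matching entry of `shifts`; each 0 becomes a zero block."""
        rows, cols = len(base), len(base[0])
        H = [[0] * (cols * Z) for _ in range(rows * Z)]
        for i in range(rows):
            for j in range(cols):
                if base[i][j]:
                    s = shifts[i][j] % Z
                    for z in range(Z):
                        H[i * Z + z][j * Z + (z + s) % Z] = 1
        return H
    ```

    Because each edge maps to exactly one permutation, every lifted variable node inherits the degree of its protograph column, which is why a degree-at-least-3 protograph yields a degree-at-least-3 lifted code at any block length.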

  18. A computer code PACTOLE to predict activation and transport of corrosion products in a PWR

    International Nuclear Information System (INIS)

    Beslu, P.; Frejaville, G.; Lalet, A.

    1978-01-01

    Theoretical studies on the activation and transport of corrosion products in a PWR primary circuit have been concentrated, at CEA, on the development of a computer code: PACTOLE. This code takes into account the major phenomena which govern corrosion product transport: 1. Ion solubility is obtained from the usual thermodynamic laws as a function of water chemistry: the pH at operating temperature is calculated by the code. 2. Release rates of base metals, dissolution rates of deposits, and precipitation rates of soluble products are derived from solubility variations. 3. Deposition of solid particles is treated by a model taking into account particle size, Brownian and turbulent diffusion, and inertial effects. Erosion of deposits is accounted for by a semi-empirical model. After a review of the calculational models, an application of PACTOLE is presented in view of analyzing the distribution of corrosion products in the core. (author)

  19. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  20. Predicting tritium movement and inventory in fusion reactor subsystems using the TMAP code

    International Nuclear Information System (INIS)

    Jones, J.L.; Merrill, B.J.; Holland, D.F.

    1986-01-01

    The Fusion Safety Program of EG&G Idaho, Inc. at the Idaho National Engineering Laboratory (INEL) is developing a safety analysis code called TMAP (Tritium Migration Analysis Program) to analyze tritium loss from fusion systems during normal and off-normal conditions. TMAP is a one-dimensional code that calculates tritium movement and inventories in a system of interconnected enclosures and wall structures. These wall structures can include composite materials with bulk trapping of the permeating tritium on impurities or radiation-induced dislocations within the material. The thermal response of a structure can be modeled to provide the temperature information required for tritium movement calculations. Chemical reactions and hydrogen isotope movement can also be included in the calculations. TMAP was used to analyze the movement of tritium implanted into a proposed limiter/first wall structure design
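
    The one-dimensional wall calculation can be sketched as an explicit finite-difference diffusion solve with a fixed implantation-side concentration and a zero-flux rear face. Trapping, thermal coupling and TMAP's actual numerics are omitted; the function name and parameter values are this sketch's assumptions.

    ```python
    def diffuse_1d(c0, n_nodes, D, dx, dt, steps, c_surface):
        """March dc/dt = D d2c/dx2 with an explicit (FTCS) scheme.
        Node 0 is held at c_surface (implanted face); the last node is a
        zero-flux mirror boundary. Requires D*dt/dx^2 <= 0.5 for stability."""
        c = [c0] * n_nodes
        c[0] = c_surface
        r = D * dt / dx ** 2
        assert r <= 0.5, "explicit stability limit violated"
        for _ in range(steps):
            new = c[:]
            for i in range(1, n_nodes - 1):
                new[i] = c[i] + r * (c[i - 1] - 2.0 * c[i] + c[i + 1])
            new[-1] = c[-1] + 2.0 * r * (c[-2] - c[-1])  # mirror (zero-flux) node
            new[0] = c_surface
            c = new
        return c

    # Concentration profile and retained inventory (per unit area) in the wall.
    dx = 5e-5
    profile = diffuse_1d(0.0, 21, D=1e-9, dx=dx, dt=1.0, steps=500,
                         c_surface=1.0)
    inventory = sum(profile) * dx
    ```

    Integrating the profile gives exactly the quantity TMAP-style analyses track: the tritium inventory held in the structure at a given time, from which permeation and release follow.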

  1. Large scale fire experiments in the HDR containment as a basis for fire code development

    International Nuclear Information System (INIS)

    Hosser, D.; Dobbernack, R.

    1993-01-01

    Between 1984 and 1991, seven different series of large scale fire experiments and related numerical and theoretical investigations were performed in the containment of a high pressure reactor in Germany (known as the HDR plant). The experimental part included: gas burner tests for checking the containment behaviour; naturally ventilated fires with wood cribs; naturally and forced ventilated oil pool fires; and naturally and forced ventilated cable fires. Many results of the oil pool and cable fires can be applied directly to predict the impact of real fires at different locations in a containment on mechanical or structural components as well as on plant personnel. But the main value of the measurements and observations was to serve as a basis for fire code development and validation. Different types of fire codes have been used to predict in advance or evaluate afterwards the test results: zone models for single room and multiple room configurations; system codes for multiple room configurations; and field models for complex single room configurations. Finally, there exist codes of varying degrees of specialization which have proven their power and sufficient accuracy to predict fire effects as a basis for optimum fire protection design. (author)

  2. Licensing in BE system code calculations. Applications and uncertainty evaluation by CIAU method

    International Nuclear Information System (INIS)

    Petruzzi, Alessandro; D'Auria, Francesco

    2007-01-01

    The evaluation of uncertainty constitutes the necessary supplement to Best Estimate (BE) calculations performed to understand accident scenarios in water cooled nuclear reactors. The need comes from the imperfection of computational tools on the one side and from the interest in using such tools to get a more precise evaluation of safety margins on the other. In the present paper the approaches to uncertainty are outlined and the CIAU (Code with capability of Internal Assessment of Uncertainty) method proposed by the University of Pisa is described, including the ideas at its basis and results from applications. Two approaches are distinguished, characterized as 'propagation of code input uncertainty' and 'propagation of code output errors'. For both methods, the thermal-hydraulic code is at the centre of the process of uncertainty evaluation: in the former case the code itself is adopted to compute the error bands and to propagate the input errors; in the latter case the errors in code applications to relevant measurements are used to derive the error bands. The CIAU method exploits the idea of the 'status approach' for identifying the thermal-hydraulic conditions of an accident in any Nuclear Power Plant (NPP). Errors in predicting such a status are derived from the comparison between predicted and measured quantities and, in the stage of the application of the method, are used to compute the uncertainty. (author)
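
    The 'propagation of code input uncertainty' approach can be sketched generically: sample the uncertain inputs, evaluate the code (here replaced by a toy surrogate) for each sample, and take output percentiles as error bands. The linear surrogate and Gaussian input distributions are this sketch's assumptions; CIAU itself follows the complementary output-error approach.

    ```python
    import random

    def propagate_input_uncertainty(model, nominal, rel_sigma,
                                    n_runs=2000, seed=1):
        """Return the nominal prediction and a (2.5%, 97.5%) output band
        obtained by sampling each input with a relative Gaussian
        uncertainty rel_sigma and sorting the model outputs."""
        rng = random.Random(seed)
        outputs = []
        for _ in range(n_runs):
            sample = [v * (1.0 + rng.gauss(0.0, rel_sigma)) for v in nominal]
            outputs.append(model(sample))
        outputs.sort()
        lo = outputs[int(0.025 * n_runs)]
        hi = outputs[int(0.975 * n_runs)]
        return model(nominal), lo, hi

    # Toy 'code': peak clad temperature as a linear surrogate of two inputs.
    pct = lambda x: 600.0 + 0.5 * x[0] + 2.0 * x[1]
    nominal_out, lo, hi = propagate_input_uncertainty(pct, [100.0, 50.0], 0.05)
    ```

    With a real system code each `model` evaluation is a full transient calculation, which is exactly why the number of runs, and hence the cost, is the practical burden of the input-propagation approach.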

  3. Analysis of the AP600 core makeup tank experiments using the NOTRUMP code

    International Nuclear Information System (INIS)

    Cunningham, J.C.; Haberstroh, R.C.; Hochreiter, L.E.; Jaroszewicz, J.

    1995-01-01

    The AP600 design utilizes passive methods to perform core and containment cooling functions for a postulated loss of coolant. The core makeup tank (CMT) is an important feature of the AP600 passive safety system. The NOTRUMP code has been compared to the 300-series core makeup tank experiments. It has been observed that the code will capture the correct thermal-hydraulic behavior observed in the experiments. The correlations used for wall film condensation and convective heat transfer to the heated CMT liquid appear to be appropriate for these applications. The code will predict the rapid condensation and mixing thermal-hydraulic behavior observed in the 300-series tests. The NOTRUMP predictions can be noding-dependent since the condensation is extremely dependent on the amount of cold CMT liquid that mixes with the incoming steam flow

  4. Novel overlapping coding sequences in Chlamydia trachomatis

    DEFF Research Database (Denmark)

    Jensen, Klaus Thorleif; Petersen, Lise; Falk, Søren

    2006-01-01

    that are in agreement with the primary annotation. Forty two genes from the primary annotation are not predicted by EasyGene. The majority of these genes are listed as hypothetical in the primary annotation. The 15 novel predicted genes all overlap with genes on the complementary strand. We find homologues of several...... of the novel genes in C. trachomatis Serovar A and Chlamydia muridarum. Several of the genes have typical gene-like and protein-like features. Furthermore, we confirm transcriptional activity from 10 of the putative genes. The combined evidence suggests that at least seven of the 15 are protein coding genes...
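
    Detecting candidate genes on the complementary strand, as described above, can be illustrated with a minimal open-reading-frame scan over both strands. EasyGene itself is an HMM-based predictor, so this is only an illustration of the both-strand search, with invented function names and a toy minimum length.

    ```python
    def revcomp(seq):
        """Reverse complement of a DNA string."""
        comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
        return "".join(comp[b] for b in reversed(seq))

    def find_orfs(seq, min_codons=3):
        """Return (start, end, strand) for ATG..stop open reading frames on
        both strands; coordinates refer to the forward strand, end exclusive."""
        stops = {"TAA", "TAG", "TGA"}
        orfs = []
        n = len(seq)
        for strand, s in (("+", seq), ("-", revcomp(seq))):
            for i in range(n - 2):
                if s[i:i + 3] != "ATG":
                    continue
                for j in range(i + 3, n - 2, 3):
                    if s[j:j + 3] in stops:
                        if (j + 3 - i) // 3 >= min_codons:
                            if strand == "+":
                                orfs.append((i, j + 3, "+"))
                            else:
                                orfs.append((n - (j + 3), n - i, "-"))
                        break
        return orfs
    ```

    Comparing the forward- and reverse-strand hits for overlapping intervals is the naive analogue of the overlap analysis reported for the 15 novel predictions.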

  5. Development of Multi-Scale Finite Element Analysis Codes for High Formability Sheet Metal Generation

    International Nuclear Information System (INIS)

    Nnakamachi, Eiji; Kuramae, Hiroyuki; Ngoc Tam, Nguyen; Nakamura, Yasunori; Sakamoto, Hidetoshi; Morimoto, Hideo

    2007-01-01

In this study, dynamic- and static-explicit multi-scale finite element (F.E.) codes are developed by employing the homogenization method, the crystal plasticity constitutive equation, and an SEM-EBSD-measurement-based polycrystal model. These can predict the crystal morphological change and the hardening evolution at the micro level, as well as the macroscopic plastic anisotropy evolution. These codes are applied to analyze the asymmetrical rolling process, which is introduced to control the crystal texture of the sheet metal for generating a high-formability sheet metal. The codes can predict the yield surface and the sheet formability by analyzing the strain-path-dependent yield and simple sheet forming processes, such as the limit dome height test and the cylindrical deep drawing problems. The results show that a shear-dominant rolling process, such as asymmetric rolling, generates ''high formability'' textures and eventually a high-formability sheet. The texture evolution and the high formability of the newly generated sheet metal were confirmed experimentally by SEM-EBSD measurement and the LDH test. It is concluded that these explicit-type crystallographic homogenized multi-scale F.E. codes could be a comprehensive tool to predict rolling-induced texture evolution, anisotropy, and formability through analyses of the rolling process and the limit dome height test.

  6. Implementation of the chemical PbLi/water reaction in the SIMMER code

    Energy Technology Data Exchange (ETDEWEB)

    Eboli, Marica, E-mail: marica.eboli@for.unipi.it [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Forgione, Nicola [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Del Nevo, Alessandro [ENEA FSN-ING-PAN, CR Brasimone, 40032 Camugnano, BO (Italy)

    2016-11-01

Highlights: • Updated predictive capabilities of the SIMMER-III code. • Verification of the implemented PbLi/water chemical reactions. • Identification of code capabilities in modelling phenomena relevant to safety. • Validation against BLAST Test No. 5 experimental data successfully completed. • Need for a new experimental campaign in support of code validation on LIFUS5/Mod3. - Abstract: The availability of a qualified system code for the deterministic safety analysis of the in-box LOCA postulated accident is of primary importance. Considering the renewed interest in the WCLL breeding blanket, such a code shall be multi-phase, shall manage the thermodynamic interaction among the fluids, and shall include the exothermic chemical reaction between lithium-lead and water, which generates oxides and hydrogen. The paper presents the implementation of the chemical correlations in the SIMMER-III code, the verification of the code model in simple geometries, and the first validation activity based on BLAST Test No. 5 experimental data.

  7. Cloud prediction of protein structure and function with PredictProtein for Debian.

    Science.gov (United States)

    Kaján, László; Yachdav, Guy; Vicedo, Esmeralda; Steinegger, Martin; Mirdita, Milot; Angermüller, Christof; Böhm, Ariane; Domke, Simon; Ertl, Julia; Mertes, Christian; Reisinger, Eva; Staniewski, Cedric; Rost, Burkhard

    2013-01-01

    We report the release of PredictProtein for the Debian operating system and derivatives, such as Ubuntu, Bio-Linux, and Cloud BioLinux. The PredictProtein suite is available as a standard set of open source Debian packages. The release covers the most popular prediction methods from the Rost Lab, including methods for the prediction of secondary structure and solvent accessibility (profphd), nuclear localization signals (predictnls), and intrinsically disordered regions (norsnet). We also present two case studies that successfully utilize PredictProtein packages for high performance computing in the cloud: the first analyzes protein disorder for whole organisms, and the second analyzes the effect of all possible single sequence variants in protein coding regions of the human genome.

  8. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distances than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes and their primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
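For orientation, the Hermitian construction underlying results of this kind maps a dual-containing quaternary code to a binary quantum code. This is a standard sketch of the general theorem, not a statement about the specific BCH families above:

```latex
% Hermitian construction: a quaternary linear code C = [n, k, d]_4
% whose Hermitian dual is contained in it, C^{\perp_h} \subseteq C,
% yields a binary quantum code with parameters
[[\, n,\; 2k - n,\; \geq d \,]]_2 .
```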

  9. Predictive values of diagnostic codes for identifying serious hypocalcemia and dermatologic adverse events among women with postmenopausal osteoporosis in a commercial health plan database.

    Science.gov (United States)

    Wang, Florence T; Xue, Fei; Ding, Yan; Ng, Eva; Critchlow, Cathy W; Dore, David D

    2018-04-10

    Post-marketing safety studies of medicines often rely on administrative claims databases to identify adverse outcomes following drug exposure. Valid ascertainment of outcomes is essential for accurate results. We aim to quantify the validity of diagnostic codes for serious hypocalcemia and dermatologic adverse events from insurance claims data among women with postmenopausal osteoporosis (PMO). We identified potential cases of serious hypocalcemia and dermatologic events through ICD-9 diagnosis codes among women with PMO within claims from a large US healthcare insurer (June 2005-May 2010). A physician adjudicated potential hypocalcemic and dermatologic events identified from the primary position on emergency department (ED) or inpatient claims through medical record review. Positive predictive values (PPVs) and 95% confidence intervals (CIs) quantified the fraction of potential cases that were confirmed. Among 165,729 patients with PMO, medical charts were obtained for 40 of 55 (73%) potential hypocalcemia cases; 16 were confirmed (PPV 40%, 95% CI 25-57%). The PPV was higher for ED than inpatient claims (82 vs. 24%). Among 265 potential dermatologic events (primarily urticaria or rash), we obtained 184 (69%) charts and confirmed 128 (PPV 70%, 95% CI 62-76%). The PPV was higher for ED than inpatient claims (77 vs. 39%). Diagnostic codes for hypocalcemia and dermatologic events may be sufficient to identify events giving rise to emergency care, but are less accurate for identifying events within hospitalizations.
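A positive predictive value is simply the confirmed fraction of reviewed potential cases, with a binomial confidence interval around it. The sketch below uses the Wilson score interval as an approximation (the paper's exact interval method is not stated here; the helper name is ours), applied to the hypocalcemia figures from the abstract:

```python
import math

def ppv_with_wilson_ci(confirmed, reviewed, z=1.96):
    """Positive predictive value with a Wilson score 95% CI.

    confirmed: chart-confirmed true positives among reviewed claims.
    reviewed:  potential cases whose charts were reviewed.
    """
    p = confirmed / reviewed
    denom = 1 + z**2 / reviewed
    center = (p + z**2 / (2 * reviewed)) / denom
    half = z * math.sqrt(p * (1 - p) / reviewed
                         + z**2 / (4 * reviewed**2)) / denom
    return p, center - half, center + half

# Hypocalcemia: 16 confirmed of 40 reviewed (paper reports PPV 40%, 95% CI 25-57%)
ppv, lo, hi = ppv_with_wilson_ci(16, 40)
```

The Wilson bounds come out close to, but not identical with, the exact interval quoted in the abstract.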

  10. Development of LIFE4-CN: a combined code for steady-state and transient analyses of advanced LMFBR fuels

    International Nuclear Information System (INIS)

    Liu, Y.Y.; Zawadzki, S.; Billone, M.C.; Nayak, U.P.; Roth, T.

    1979-01-01

The methodology used to develop the LMFBR carbide/nitride fuels code, LIFE4-CN, is described in detail along with some subtleties encountered in code development. Fuel primary and steady-state thermal creep are used as an example to illustrate the need for physical modeling and the need to recognize the importance of the materials characteristics. A self-consistent strategy for LIFE4-CN verification against irradiation data has been outlined, with emphasis on the establishment of the gross uncertainty bands. These gross uncertainty bands can be used as an objective measure to gauge the overall success of the code predictions. Preliminary code predictions for sample steady-state and transient cases are given.

  11. International Code Assessment and Applications Program: Annual report

    International Nuclear Information System (INIS)

    Ting, P.; Hanson, R.; Jenks, R.

    1987-03-01

This is the first annual report of the International Code Assessment and Applications Program (ICAP). The ICAP was organized by the Office of Nuclear Regulatory Research, United States Nuclear Regulatory Commission (USNRC), in 1985. The ICAP is an international cooperative reactor safety research program planned to continue over a period of approximately five years. To date, eleven European and Asian countries/organizations have joined the program through bilateral agreements with the USNRC. Seven proposed agreements are currently under negotiation. The primary mission of the ICAP is to provide independent assessment of the three major advanced computer codes (RELAP5, TRAC-PWR, and TRAC-BWR) developed by the USNRC. However, program activities can be expected to enhance the assessment process throughout member countries. The codes were developed to calculate the reactor plant response to transients and loss-of-coolant accidents. Accurate prediction of normal and abnormal plant response using the codes enhances the procedures and regulations used for safe operation of the plant and also provides a technical basis for assessing the safety margin of future reactor plant designs. The ICAP is providing required assessment data that will contribute to quantification of the code uncertainty for each code. This first annual report covers program activities and accomplishments during the period between April 1985 and March 1987.

  12. Annotating pathogenic non-coding variants in genic regions.

    Science.gov (United States)

    Gelfman, Sahar; Wang, Quanli; McSweeney, K Melodi; Ren, Zhong; La Carpia, Francesca; Halvorsen, Matt; Schoch, Kelly; Ratzon, Fanni; Heinzen, Erin L; Boland, Michael J; Petrovski, Slavé; Goldstein, David B

    2017-08-09

Identifying the underlying causes of disease requires accurate interpretation of genetic variants. Current methods ineffectively capture pathogenic non-coding variants in genic regions, resulting in overlooking synonymous and intronic variants when searching for disease risk. Here we present the Transcript-inferred Pathogenicity (TraP) score, which uses sequence context alterations to reliably identify non-coding variation that causes disease. High TraP scores single out extremely rare variants with lower minor allele frequencies than missense variants. TraP accurately distinguishes known pathogenic and benign variants in synonymous (AUC = 0.88) and intronic (AUC = 0.83) public datasets, dismissing benign variants with exceptionally high specificity. TraP analysis of 843 exomes from epilepsy family trios identifies synonymous variants in known epilepsy genes, thus pinpointing risk factors of disease from non-coding sequence data. TraP outperforms leading methods in identifying non-coding variants that are pathogenic and is therefore a valuable tool for use in gene discovery and the interpretation of personal genomes. While non-coding synonymous and intronic variants are often not under strong selective constraint, they can be pathogenic through affecting splicing or transcription. Here, the authors develop a score that uses sequence context alterations to predict pathogenicity of synonymous and non-coding genetic variants, and provide a web server of pre-computed scores.
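The AUC values quoted (0.88 synonymous, 0.83 intronic) are areas under the ROC curve, which equal the probability that a randomly chosen pathogenic variant receives a higher score than a randomly chosen benign one. A minimal rank-based (Mann-Whitney) computation, with purely hypothetical toy scores, looks like this:

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability that a positive (pathogenic) example
    outscores a negative (benign) one; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy scores standing in for pathogenic vs. benign variant predictions
pathogenic = [0.9, 0.8, 0.75, 0.4]
benign = [0.3, 0.5, 0.2, 0.1]
result = auc(pathogenic, benign)  # 15 of 16 pairwise comparisons won -> 0.9375
```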

  13. CENTAR code for extended nonlinear transient analysis of extraterrestrial reactor systems

    International Nuclear Information System (INIS)

    Nassersharif, B.; Peer, J.S.; DeHart, M.D.

    1987-01-01

Current interest in the application of nuclear reactor-driven power systems to space missions has generated a need for a systems simulation code to model and analyze space reactor systems; such a code has been initiated at Texas A&M, and the first version is nearing completion, with release anticipated in the fall of 1987. This code, named CENTAR (Code for Extended Nonlinear Transient Analysis of Extraterrestrial Reactor Systems), is designed specifically for space systems and is highly vectorizable. CENTAR is composed of several specialized modules. A fluids module is used to model fluid behavior throughout the system. A wall heat transfer module models the heat transfer characteristics of all walls, insulation, and structure around the system. A fuel element thermal analysis module is used to predict the temperature behavior and heat transfer characteristics of the reactor fuel rods. A kinetics module uses a six-group point kinetics formulation to model reactivity feedback and control, and the ANS 5.1 decay-heat curve to model shutdown decay-heat production. A pump module models the behavior of thermoelectric-electromagnetic pumps, and a heat exchanger module not only models thermal effects in thermoelectric heat exchangers but also predicts electrical power production for a given configuration. Finally, an accumulator module models coolant expansion/contraction accumulators.
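The six-group point kinetics formulation mentioned for the kinetics module is the standard textbook system (sketched here in the usual notation, not taken from CENTAR's source):

```latex
\frac{dP}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, P(t)
              + \sum_{i=1}^{6} \lambda_i C_i(t),
\qquad
\frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\, P(t) - \lambda_i C_i(t),
\qquad
\beta = \sum_{i=1}^{6} \beta_i ,
```

where P is the reactor power, C_i the delayed-neutron precursor concentrations, rho the reactivity, beta_i and lambda_i the delayed-neutron fractions and decay constants, and Lambda the neutron generation time.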

  14. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

The entanglement-assisted formalism generalizes the standard stabilizer formalism and can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q + 1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.
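For context, the entanglement-assisted stabilizer construction alluded to above is usually stated as follows. This is a standard sketch of the general Brun-Devetak-Hsieh-style result, not a description of this paper's specific negacyclic constructions:

```latex
% A classical [n, k, d]_{q^2} code with parity-check matrix H yields an EAQECC
[[\, n,\; 2k - n + c,\; d;\; c \,]]_q ,
\qquad
c = \operatorname{rank}\!\left( H H^{\dagger} \right),
% where H^{\dagger} is the conjugate transpose and the construction
% consumes c pre-shared maximally entangled pairs (ebits).
```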

  15. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  16. Gap conductance model validation in the TASS/SMR-S code

    International Nuclear Information System (INIS)

    Ahn, Sang-Jun; Yang, Soo-Hyung; Chung, Young-Jong; Bae, Kyoo-Hwan; Lee, Won-Jae

    2011-01-01

An advanced integral pressurized water reactor, SMART (System-Integrated Modular Advanced ReacTor), has been developed by KAERI (Korea Atomic Energy Research Institute). The purposes of SMART are seawater desalination and electricity generation. For the safety evaluation and performance analysis of SMART, the TASS/SMR-S (Transient And Setpoint Simulation/System-integrated Modular Reactor) code has been developed. In this paper, the model for the calculation of gap conductance has been validated by using another system code, the MARS code, and experimental results. In the validation, the behaviors of fuel temperature and gap width are selected as the major parameters. According to the evaluation results, the TASS/SMR-S code predicts the behaviors of fuel temperature and gap width variation well, compared to the MARS calculation results and experimental data. (author)
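In its simplest form, a fuel-clad gap conductance model combines conduction across the gas gap with a radiation term. The sketch below is the classic gas-conduction-plus-radiation form, not the TASS/SMR-S correlation; all names and example numbers are illustrative:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2-K^4

def gap_conductance(k_gas, gap, jump, t_fuel, t_clad,
                    e_fuel=0.8, e_clad=0.3):
    """Simple fuel-clad gap conductance [W/m^2-K]: gas conduction + radiation.

    k_gas : gas thermal conductivity [W/m-K]
    gap   : open gap width [m]
    jump  : temperature-jump distance correction [m]
    t_fuel, t_clad : surface temperatures [K]
    """
    # conduction through the gas film, widened by the jump distance
    h_cond = k_gas / (gap + jump)
    # radiation between two gray parallel surfaces (effective emissivity)
    e_eff = 1.0 / (1.0 / e_fuel + 1.0 / e_clad - 1.0)
    h_rad = SIGMA * e_eff * (t_fuel**2 + t_clad**2) * (t_fuel + t_clad)
    return h_cond + h_rad

# illustrative helium-filled gap: conductance drops as the gap opens
h_narrow = gap_conductance(0.3, 50e-6, 10e-6, 900.0, 600.0)
h_wide = gap_conductance(0.3, 100e-6, 10e-6, 900.0, 600.0)
```

The strong sensitivity of h to the gap width is exactly why the abstract tracks gap width as a major validation parameter.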

  17. An Adaptive Motion Estimation Scheme for Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2014-01-01

The unsymmetrical-cross multi-hexagon-grid search (UMHexagonS) is one of the best fast motion estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
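Block-matching ME of this kind minimizes a matching cost, typically the sum of absolute differences (SAD), over candidate displacements. The sketch below is the brute-force full search that fast algorithms like UMHexagonS improve upon by pruning candidates; function names and the toy frames are ours:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized 2D blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def full_search(ref, cur, by, bx, bsize, radius):
    """Best motion vector (dy, dx) for the bsize x bsize block of `cur`
    at (by, bx), searched exhaustively within +/-radius in `ref`."""
    block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > len(ref) or x + bsize > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best, best_cost

# toy frames: current frame content shifted down 1 row and right 2 columns
ref = [[(y * 37 + x * 17) % 101 for x in range(8)] for y in range(8)]
cur = [[ref[(y + 1) % 8][(x + 2) % 8] for x in range(8)] for y in range(8)]
mv, cost = full_search(ref, cur, 2, 2, 3, 2)  # recovers (1, 2) with cost 0
```

UMHexagonS and the proposed adaptive scheme visit only a fraction of these candidates, which is where the reported 50%-plus search-point reduction comes from.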

  18. A comparison of the radiological doses to man predicted by the MILDOS, UDAD, and UMMAC assessment codes

    International Nuclear Information System (INIS)

    Polehn, J.L.; Coleman, J.H.

    1984-01-01

The outputs of three computer codes used to estimate the dose to man from uranium mining and milling operations are compared. UMMAC was developed by the Tennessee Valley Authority. UDAD (version 9) was developed at the Argonne National Laboratory. Version 4 of UDAD was modified by the Nuclear Regulatory Commission and renamed MILDOS. Dose estimates vary widely between the three codes. However, it appears that any of the codes can be used if care is taken in the analyses of the output.

  19. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
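As background for the equivalences claimed above, Kitaev's toric code places qubits on the edges of an L x L square lattice on the torus, with the standard vertex and plaquette stabilizers:

```latex
A_v = \prod_{e \,\ni\, v} X_e ,
\qquad
B_p = \prod_{e \,\in\, \partial p} Z_e ,
\qquad
\text{parameters } [[\, 2L^2,\; 2,\; L \,]] ,
```

where A_v acts with Pauli X on the four edges meeting vertex v and B_p with Pauli Z on the four edges bounding plaquette p; these are the codes the paper shows live on 4-valent graphs.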

  20. Validity of Principal Diagnoses in Discharge Summaries and ICD-10 Coding Assessments Based on National Health Data of Thailand.

    Science.gov (United States)

    Sukanya, Chongthawonsatid

    2017-10-01

This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews and, additionally, by comparing data from the coding assessments with data in the computerized ICD (the database used for reimbursement purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly in the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.
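The four reported measures all come from a 2x2 comparison of the summary (or coding) diagnosis against the gold-standard medical record review. A minimal sketch, with hypothetical counts chosen only to mimic the low-sensitivity/high-specificity pattern reported:

```python
def diagnostic_validity(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 agreement table.

    tp/fp/fn/tn: recorded diagnosis vs. the gold-standard chart review.
    """
    return {
        "sensitivity": tp / (tp + fn),  # gold-standard cases correctly recorded
        "specificity": tn / (tn + fp),  # non-cases correctly left unrecorded
        "ppv": tp / (tp + fp),          # recorded diagnoses that are true
        "npv": tn / (tn + fn),          # absent diagnoses that are truly absent
    }

# hypothetical counts for one ICD-10 chapter
m = diagnostic_validity(tp=30, fp=20, fn=70, tn=880)
```

With these toy counts, sensitivity is 0.30 and specificity about 0.98, the same qualitative pattern as the study's 7.3%-37.9% sensitivities against 97.2%-99.8% specificities.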