WorldWideScience

Sample records for computer code predictions

  1. Fire aerosol experiment and comparisons with computer code predictions

    Energy Technology Data Exchange (ETDEWEB)

    Gregory, W.S.; Nichols, B.D.; White, B.W.; Smith, P.R.; Leslie, I.H.; Corkran, J.R.

    1988-01-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the US Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300°C. To date, we have used quartz aerosol with a median diameter of about 10 μm as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 ft³/min) and three nominal gas temperatures (ambient, 150°C, and 300°C). The test results are presented in the form of plots of aerosol deposition vs. length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data. 3 refs., 10 figs., 1 tab.
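
    Deposition-versus-length data of the kind described above are commonly summarized with a first-order wall-deposition model, in which the airborne concentration decays exponentially along the duct. The Python sketch below shows only that generic textbook relation; it is not the FIRAC transport model, and every numerical value in it is an illustrative placeholder.

      import numpy as np

      def duct_deposition_profile(c0, v_d, perimeter, flow_rate, x):
          """Airborne aerosol concentration along a duct, assuming
          first-order wall deposition: dC/dx = -(v_d*P/Q)*C, hence
          C(x) = C0 * exp(-v_d * P * x / Q).

          c0        -- inlet concentration (kg/m^3)
          v_d       -- effective deposition velocity (m/s), fitted to data
          perimeter -- duct perimeter P (m)
          flow_rate -- volumetric gas flow Q (m^3/s)
          x         -- distance(s) from the duct inlet (m)
          """
          return c0 * np.exp(-v_d * perimeter * np.asarray(x) / flow_rate)

      # Illustrative values; 0.14 m^3/s is roughly the 300 ft^3/min test point.
      x = np.linspace(0.0, 10.0, 6)
      c = duct_deposition_profile(1e-3, 0.01, 0.8, 0.14, x)
      print(np.round(c / c[0], 3))   # fraction still airborne along the duct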

  2. Computational RNomics: Structure identification and functional prediction of non-coding RNAs in silico

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The eukaryotic genome contains varying numbers of non-coding RNA (ncRNA) genes. "Computational RNomics" takes a multidisciplinary approach, drawing on fields such as information science, to resolve the structure and function of ncRNAs. Here, we review the main issues in "Computational RNomics": data storage and management, ncRNA gene identification and characterization, and ncRNA target identification and functional prediction, and we summarize the main methods and current content of "Computational RNomics".

  3. Computer code to predict the heat of explosion of high energy materials

    Energy Technology Data Exchange (ETDEWEB)

    Muthurajan, H. [Armament Research and Development Establishment, Pashan, Pune 411021 (India)], E-mail: muthurajan_h@rediffmail.com; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B. [High Energy Materials Research Laboratory, Sutarwadi, Pune 411 021 (India)

    2009-01-30

    The computational approach to the thermochemical changes involved in the explosion of a high energy material (HEM) vis-à-vis its molecular structure helps a HEM chemist or engineer to predict important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) for different HEMs without any experimental data, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.
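
    The reported fit between computed and measured heats of explosion (y = 0.9262x + 101.45, R² = 0.9721) is an ordinary least-squares line. The minimal sketch below shows how such a calibration line and its R² are obtained from paired values; the data arrays are placeholders laid exactly on the reported line for the demonstration, not values from the paper.

      import numpy as np

      def fit_calibration(computed, measured):
          """Least-squares line measured ~= a*computed + b, plus R^2."""
          computed = np.asarray(computed, dtype=float)
          measured = np.asarray(measured, dtype=float)
          a, b = np.polyfit(computed, measured, 1)
          resid = measured - (a * computed + b)
          ss_res = np.sum(resid ** 2)
          ss_tot = np.sum((measured - measured.mean()) ** 2)
          return a, b, 1.0 - ss_res / ss_tot

      # Placeholder data, NOT the paper's values; the points sit exactly on
      # the reported line, so the demo reproduces its slope and intercept.
      computed = np.array([3500.0, 4200.0, 5100.0, 5900.0])  # e.g. kJ/kg
      measured = 0.9262 * computed + 101.45
      a, b, r2 = fit_calibration(computed, measured)
      print(f"y = {a:.4f}x + {b:.2f}, R^2 = {r2:.4f}")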

  4. Computer code to predict the heat of explosion of high energy materials.

    Science.gov (United States)

    Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B

    2009-01-30

    The computational approach to the thermochemical changes involved in the explosion of a high energy material (HEM) vis-à-vis its molecular structure helps a HEM chemist or engineer to predict important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) for different HEMs without any experimental data, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient of 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.

  5. Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Michael S [Los Alamos National Laboratory]; Higdon, David M [Los Alamos National Laboratory]

    2009-01-01

    In this paper, we present a generic example to illustrate various points about making future predictions of population performance using a biased performance computer code, physical performance data, and critical performance parameter data sampled from the population at various times. We show how the actual performance data help to correct the biased computer code, and the impact of uncertainty, especially when the prediction is made far from where the available data are taken. We also demonstrate how a Bayesian approach allows both inferences about the unknown parameters and predictions to be made in a consistent framework.
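
    As a concrete illustration of the bias-correction idea, the toy sketch below calibrates a deliberately simple "computer code" against noisy physical data by putting a joint posterior on the code parameter and a constant bias term. The model, priors, and all numbers are illustrative assumptions, not the authors' example.

      import numpy as np

      rng = np.random.default_rng(0)

      def code(theta, x):
          return theta * x  # stand-in "computer code" (illustrative)

      # Synthetic physical data: the code is biased by a constant offset.
      true_theta, true_bias, noise_sd = 2.0, 0.5, 0.1
      x_obs = np.linspace(0.0, 1.0, 8)
      y_obs = code(true_theta, x_obs) + true_bias \
              + rng.normal(0.0, noise_sd, x_obs.size)

      # Grid posterior over (theta, bias): flat priors, Gaussian likelihood.
      thetas = np.linspace(1.0, 3.0, 201)
      biases = np.linspace(-1.0, 2.0, 301)
      T, B = np.meshgrid(thetas, biases, indexing="ij")
      resid = y_obs - (T[..., None] * x_obs + B[..., None])
      log_post = -0.5 * np.sum((resid / noise_sd) ** 2, axis=-1)
      post = np.exp(log_post - log_post.max())
      post /= post.sum()

      # Parameter and code bias are inferred coherently, in one framework.
      print("posterior mean theta:", float(np.sum(T * post)))
      print("posterior mean bias :", float(np.sum(B * post)))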

  6. Visual Determination of Conjugate Eye Deviation on Computed Tomography Scan Predicts Diagnosis of Stroke Code Patients.

    Science.gov (United States)

    Spokoyny, Ilana; Chen, James Y; Raman, Rema; Ernstrom, Karin; Agrawal, Kunal; Modir, Royya F; Meyer, Dawn M; Meyer, Brett C

    2016-12-01

    Head computed tomography (CT) is critical for stroke code evaluations and often happens prior to completion of the neurological exam. Eye deviation on neuroimaging (the DeyeCOM sign) has utility for predicting stroke diagnosis and correlates with the National Institutes of Health Stroke Scale (NIHSS) gaze score. We further assessed the utility of the DeyeCOM sign, without complex caliper-based eye deviation calculations, but simply with a visual determination method. Patients with an initial head CT and final diagnosis from an institutional review board-approved consecutive prospective registry of stroke codes at the University of California, San Diego, were included. Five stroke specialists and 1 neuroradiologist reviewed each CT. DeyeCOM+ patients were compared to DeyeCOM- patients (baseline characteristics, diagnosis, and NIHSS gaze score). Kappa statistics compared the stroke specialists' reads to the neuroradiologist's, and visual determination to caliper measurement of the DeyeCOM sign. Of 181 patients, 46 were DeyeCOM+. Ischemic stroke was more commonly diagnosed in DeyeCOM+ patients compared to other diagnoses (P = .039). DeyeCOM+ patients were more likely to have an NIHSS gaze score of 1 or higher (P = .006). The NIHSS score of DeyeCOM+ stroke versus DeyeCOM- stroke patients was 8.3 ± 6.0 versus 6.7 ± 8.0 (P = .065). Functional outcomes were similar (P = .59). Stroke specialists had excellent agreement with the neuroradiologist (Κ = .89). Visual inspection had excellent agreement with the caliper method (Κ = .88). A time-sensitive visual determination of gaze deviation on imaging was predictive of ischemic stroke diagnosis and of the presence of an NIHSS gaze score, and was consistent with the more complex caliper method. This study furthers the clinical utility of the DeyeCOM sign for predicting ischemic strokes. Copyright © 2016 National Stroke Association. Published by Elsevier Inc. All rights reserved.
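
    The agreement figures above are kappa statistics. For reference, the following minimal sketch computes Cohen's kappa on made-up binary reads; the rating vectors are hypothetical, not the study's data.

      import numpy as np

      def cohens_kappa(rater_a, rater_b):
          """Cohen's kappa: observed agreement corrected for chance."""
          a, b = np.asarray(rater_a), np.asarray(rater_b)
          p_obs = np.mean(a == b)
          p_exp = sum(np.mean(a == c) * np.mean(b == c)
                      for c in np.union1d(a, b))
          return (p_obs - p_exp) / (1.0 - p_exp)

      # Hypothetical visual reads (1 = eye deviation present, 0 = absent):
      specialist       = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
      neuroradiologist = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
      print(round(cohens_kappa(specialist, neuroradiologist), 2))  # ~0.78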

  7. Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions

    Science.gov (United States)

    Allen, D.; Cairelli, J.

    1985-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.

  8. Prediction of detonation and JWL EOS parameters of energetic materials using EXPLO5 computer code

    CSIR Research Space (South Africa)

    Peter, Xolani

    2016-09-01

    Full Text Available … (Cowperthwaite and Zwisler, 1976), CHEETAH (Fried, 1996), EXPLO5 (Sućeska, 2001), BARUT-X (Cengiz et al., 2007). These computer codes describe the detonation on the basis of the solution of Euler's hydrodynamic equations, based on the description of an equation … of detonation products equation of state from cylinder test: Analytical model and numerical analysis. Thermal Science, 19(1), pp. 35-48. Fried, L.E., 1996. CHEETAH 1.39 user's manual. Lawrence Livermore National Laboratory. Göbel, M., 2009. Energetic …

  9. Test results of a 40-kW Stirling engine and comparison with the NASA Lewis computer code predictions

    Science.gov (United States)

    Allen, David J.; Cairelli, James E.

    1988-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were: (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix showed severely reduced performance.

  10. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  11. Network coding for computing: Linear codes

    CERN Document Server

    Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth

    2011-01-01

    In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single-receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions, including the arithmetic sum, minimum, and maximum, are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...

  12. Computational Account of Spontaneous Activity as a Signature of Predictive Coding.

    Science.gov (United States)

    Koren, Veronika; Denève, Sophie

    2017-01-01

    Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggested that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function.

  13. Computational Account of Spontaneous Activity as a Signature of Predictive Coding.

    Directory of Open Access Journals (Sweden)

    Veronika Koren

    2017-01-01

    Full Text Available Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggested that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function.

  14. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made …

  15. Industrial Computer Codes

    Science.gov (United States)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), the KTK (knife-to-knife) labyrinth seal code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals. And DYSEAL provides dynamics for the seal geometry.

  16. Speech coding: code-excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource/Information: What information is available and how can it be useful? Resource/Platform: What kinds of platforms are we working with and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed and how can they be used to reach the stated goals and ...

  17. Development of a computer code to predict a ventilation requirement for an underground radioactive waste storage tank

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y.J.; Dalpiaz, E.L. [ICF Kaiser Hanford Co., Richland, WA (United States)

    1997-08-01

    The computer code WTVFE (Waste Tank Ventilation Flow Evaluation) has been developed to evaluate the ventilation requirement for an underground storage tank for radioactive waste. Heat generated by the radioactive waste and mixing pumps in the tank is removed mainly through the ventilation system. The heat removal process by the ventilation system includes the evaporation of water from the waste and heat transfer by natural convection from the waste surface. Also, a portion of the heat will be removed through the soil and the air circulating through the gap between the primary and secondary tanks. The heat loss caused by evaporation is modeled based on recent evaporation test results by the Westinghouse Hanford Company using a simulated small-scale waste tank. Other heat transfer phenomena are evaluated based on well-established conduction and convection heat transfer relationships. 10 refs., 3 tabs.

  18. WSPEEDI (worldwide version of SPEEDI): A computer code system for the prediction of radiological impacts on Japanese due to a nuclear accident in foreign countries

    Energy Technology Data Exchange (ETDEWEB)

    Chino, Masamichi; Yamazawa, Hiromi; Nagai, Haruyasu; Moriuchi, Shigeru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]; Ishikawa, Hirohiko

    1995-09-01

    A computer code system has been developed for near real-time dose assessment during radiological emergencies. The system WSPEEDI, the worldwide version of SPEEDI (System for Prediction of Environmental Emergency Dose Information), aims at predicting the radiological impact on the Japanese population due to a nuclear accident in foreign countries. WSPEEDI consists of a mass-consistent wind model, WSYNOP, for large-scale wind fields and a particle random-walk model, GEARN, for atmospheric dispersion and dry and wet deposition of radioactivity. The models are integrated into a computer code system together with a system control software, a worldwide geographic database, a meteorological data processor and graphic software. The performance of the models has been evaluated using the Chernobyl case with reliable source terms, well-established meteorological data and a comprehensive monitoring database. Furthermore, the response of the system has been examined by near real-time simulations of the European Tracer Experiment (ETEX), carried out over a roughly 2,000 km domain in Europe. (author).

  19. Physics codes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Eltgroth, P.G.

    1985-12-04

    An effort is under way to develop physics codes which realize the potential of parallel machines. A new explicit algorithm for the computation of hydrodynamics has been developed which avoids global synchronization entirely. The approach, called the Independent Time Step Method (ITSM), allows each zone to advance at its own pace, determined by local information. The method, coded in FORTRAN, has demonstrated parallelism of greater than 20 on the Denelcor HEP machine. ITSM can also be used to replace current implicit treatments of problems involving diffusion and heat conduction. Four different approaches toward work distribution have been investigated and implemented for the one-dimensional code on the Denelcor HEP. They are "self-scheduled", an ASKFOR monitor, a "queue of queues" monitor, and a distributed ASKFOR monitor. The self-scheduled approach shows the lowest overhead but the poorest speedup. The distributed ASKFOR monitor shows the best speedup and the lowest execution times on the tested problems. 2 refs., 3 figs.

  20. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  1. Characterizing Video Coding Computing in Conference Systems

    NARCIS (Netherlands)

    Tuquerres, G.

    2000-01-01

    In this paper, a number of coding operations is provided for computing continuous data streams, in particular video streams. The coding capability of the operations is expressed by a pyramidal structure in which coding processes and requirements of a distributed information system are represented. Th

  2. Predictive depth coding of wavelet transformed images

    Science.gov (United States)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction-based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization, and then by coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.
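
    To make the idea concrete, the sketch below computes the number of significant bits of quantized coefficients and the prediction-error field that would be handed to an arithmetic coder. It uses a fixed two-neighbor causal predictor, a simplification of the paper's adaptively found context (which also spans the lower scale and other orientation pyramids); the subband data are synthetic.

      import numpy as np

      def sig_bits(q):
          """Number of significant bits of each nonnegative integer."""
          q = np.asarray(q)
          out = np.zeros(q.shape, dtype=int)
          nz = q > 0
          out[nz] = np.floor(np.log2(q[nz])).astype(int) + 1
          return out

      def depth_prediction_errors(subband, step):
          """Predict each coefficient's bit depth from its causal
          neighbors (left, above); return errors for entropy coding."""
          depth = sig_bits(np.abs(subband / step).astype(np.int64))
          err = np.zeros_like(depth)
          for i in range(depth.shape[0]):
              for j in range(depth.shape[1]):
                  left = depth[i, j - 1] if j > 0 else 0
                  above = depth[i - 1, j] if i > 0 else 0
                  err[i, j] = depth[i, j] - (left + above + 1) // 2
          return err

      band = np.random.default_rng(1).laplace(scale=8.0, size=(8, 8))
      print(depth_prediction_errors(band, step=1.0))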

  3. Two Comments on Predictive Picture Coding

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    Two comments on predictive picture coding are given in this paper. 1) In lossy coding, the reconstructed values of picture samples, not their original values, should be used in the prediction formula. 2) In the design of optimum predictors, minimum entropy, subjective assessment or other criteria could be used, depending on the application of the prediction encoder, instead of the minimum mean-square error (MMSE) criterion.
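
    Comment (1) is the classic closed-loop DPCM rule. The sketch below contrasts it with the incorrect open-loop variant, using a plain previous-sample predictor and a uniform quantizer (both assumptions made for illustration): feeding the predictor with reconstructed values keeps the decoder in sync, while predicting from the originals lets quantization error accumulate.

      import numpy as np

      def dpcm_closed_loop(samples, step):
          """Predictor fed by *reconstructed* values (correct)."""
          recon, prev = [], 0.0
          for s in samples:
              eq = step * np.round((s - prev) / step)  # quantized error
              prev = prev + eq      # decoder can form this same value
              recon.append(prev)
          return np.array(recon)

      def dpcm_open_loop(samples, step):
          """Predictor fed by *original* values: decoder drifts."""
          recon, prev_orig, prev_rec = [], 0.0, 0.0
          for s in samples:
              eq = step * np.round((s - prev_orig) / step)
              prev_rec = prev_rec + eq  # what the decoder reconstructs
              recon.append(prev_rec)
              prev_orig = s
          return np.array(recon)

      x = np.cumsum(np.random.default_rng(2).normal(0.0, 1.0, 200))
      for f in (dpcm_closed_loop, dpcm_open_loop):
          mse = float(np.mean((f(x, 1.0) - x) ** 2))
          print(f.__name__, "reconstruction MSE:", round(mse, 3))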

  4. Picturewise inter-view prediction selection for multiview video coding

    Science.gov (United States)

    Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao

    2010-11-01

    Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain obtained from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.

  5. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  6. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  7. Gender codes why women are leaving computing

    CERN Document Server

    Misa, Thomas J

    2010-01-01

    The computing profession is facing a serious gender crisis. Women are abandoning the computing field at an alarming rate. Fewer are entering the profession than at any time in the past twenty-five years, while too many are leaving the field in mid-career. With a maximum of insight and a minimum of jargon, Gender Codes explains the complex social and cultural processes at work in gender and computing today. Edited by Thomas Misa and featuring a Foreword by Linda Shafer, Chair of the IEEE Computer Society Press, this insightful collection of essays explores the persisting gender imbalance in computing and presents a clear course of action for turning things around.

  8. Rhythmic complexity and predictive coding

    DEFF Research Database (Denmark)

    Vuust, Peter; Witek, Maria A G

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which … of music ("meter"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose …

  9. Computer loss experience and predictions

    Science.gov (United States)

    Parker, Donn B.

    1996-03-01

    The types of losses organizations must anticipate have become more difficult to predict because of the eclectic nature of computers and data communications, and because of the decrease in news media reporting of computer-related losses as they become commonplace. Total business crime is conjectured to be decreasing in frequency and increasing in loss per case as a result of increasing computer use. Computer crimes are probably increasing, however, as their share of the decreasing business crime rate grows. Ultimately all business crime will involve computers in some way, and we could see a decline of both together. The important information security measures in high-loss business crime generally concern controls over authorized people engaged in unauthorized activities. Such controls include authentication of users, analysis of detailed audit records, unannounced audits, segregation of development and production systems and duties, shielding the viewing of screens, and security awareness and motivation controls in high-value transaction areas. Computer crimes that involve highly publicized intriguing computer misuse methods, such as privacy violations, radio frequency emanations eavesdropping, and computer viruses, have been reported in waves that periodically have saturated the news media during the past 20 years. We must be able to anticipate such highly publicized crimes and reduce the impact and embarrassment they cause. On the basis of our most recent experience, I propose nine new types of computer crime to be aware of: computer larceny (theft and burglary of small computers), automated hacking (use of computer programs to intrude), electronic data interchange fraud (business transaction fraud), Trojan bomb extortion and sabotage (code secretly inserted into others' systems that can be triggered to cause damage), LANarchy (unknown equipment in use), desktop forgery (computerized forgery and counterfeiting of documents), information anarchy (indiscriminate use of …

  10. Code-excited linear predictive coding of multispectral MR images

    Science.gov (United States)

    Hu, Jian-Hong; Wang, Yao; Cahill, Patrick

    1996-02-01

    This paper reports a multispectral code-excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method which uses a forward-adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further specified using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.

  11. Fast prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  12. Prodeto, a computer code for probabilistic fatigue design

    Energy Technology Data Exchange (ETDEWEB)

    Braam, H. [ECN-Solar and Wind Energy, Petten (Netherlands); Christensen, C.J.; Thoegersen, M.L. [Risoe National Lab., Roskilde (Denmark); Ronold, K.O. [Det Norske Veritas, Hoevik (Norway)

    1999-03-01

    A computer code for structural reliability analyses of wind turbine rotor blades subjected to fatigue loading is presented. With pre-processors that can transform measured and theoretically predicted load series into load range distributions by rain-flow counting, and with a family of generic distribution models for parametric representation of these distributions, this computer program is available for carrying out probabilistic fatigue analyses of rotor blades. (au)
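
    The rain-flow pre-processing step can be illustrated with the standard three-point counting algorithm. The sketch below is a simplified reading of the ASTM E1049 procedure (the leftover residue is counted as half cycles); its output, a list of (range, count) pairs, is what a parametric distribution model, e.g. a Weibull fit, would then be applied to.

      def turning_points(series):
          """Reduce a load series to its sequence of peaks and valleys."""
          pts = []
          for x in series:
              if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) >= 0:
                  pts[-1] = x          # same direction or flat: extend run
              else:
                  pts.append(x)
          return pts

      def rainflow(series):
          """Three-point rainflow count: (range, count) pairs, where
          count is 1.0 for full cycles and 0.5 for half cycles."""
          cycles, stack = [], []
          for p in turning_points(series):
              stack.append(p)
              while len(stack) >= 3:
                  x = abs(stack[-1] - stack[-2])
                  y = abs(stack[-2] - stack[-3])
                  if x < y:
                      break
                  if len(stack) == 3:   # range includes the start point
                      cycles.append((y, 0.5))
                      stack.pop(0)
                  else:
                      cycles.append((y, 1.0))
                      del stack[-3:-1]
          for a, b in zip(stack, stack[1:]):  # residue: half cycles
              cycles.append((abs(b - a), 0.5))
          return cycles

      print(rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
      # one full cycle of range 4, plus half cycles 3, 4, 8, 9, 8, 6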

  13. Computer Security: is your code sane?

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    How many of us write code? Software? Programs? Scripts? How many of us are properly trained in this and how well do we do it? Do we write functional, clean and correct code, without flaws, bugs and vulnerabilities? In other words: are our codes sane? Figuring out weaknesses is not that easy (see our quiz in an earlier Bulletin article). Therefore, in order to improve the sanity of your code, prevent common pitfalls, and avoid the bugs and vulnerabilities that can crash your code, or – worse – that can be misused and exploited by attackers, the CERN Computer Security team has reviewed its recommendations for checking the security compliance of your code. "Static code analysers" are stand-alone programs that can be run on top of your software stack, regardless of whether it uses Java, C/C++, Perl, PHP, Python, etc. These analysers identify weaknesses and inconsistencies, including: employing undeclared variables; expressions resu...

  14. Computer code applicability assessment for the advanced Candu reactor

    Energy Technology Data Exchange (ETDEWEB)

    Wren, D.J.; Langman, V.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd (Canada)

    2004-07-01

    AECL Technologies, the 100%-owned US subsidiary of Atomic Energy of Canada Ltd. (AECL), is currently the proponent of a pre-licensing review of the Advanced Candu Reactor (ACR) with the United States Nuclear Regulatory Commission (NRC). A key focus topic for this pre-application review is NRC acceptance of the computer codes used in the safety analysis of the ACR. These codes have been developed, and their predictions compared against experimental results, over extended periods of time in Canada. These codes also underwent formal validation in the 1990s. In support of this formal validation effort, AECL has developed, implemented and currently maintains a Software Quality Assurance (SQA) program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. This paper discusses the SQA program used to develop, qualify and maintain the computer codes used in ACR safety analysis, including the current program underway to confirm the applicability of these computer codes for use in ACR safety analyses. (authors)

  15. Analysis on Fuel Thermal Conductivity Model of the Computer Code for Performance Prediction of Fuel Rods

    Institute of Scientific and Technical Information of China (English)

    李海; 黄晨; 杜爱兵; 徐宝玉

    2014-01-01

    The thermal conductivity is one of the most important parameters in a computer code for performance prediction of fuel rods. Several fuel thermal conductivity models used in foreign computer codes, including thermal conductivity models for MOX fuel and UO2 fuel, are introduced in this paper. Thermal conductivities were calculated using these models, and the results were compared and analyzed. Finally, a thermal conductivity model is recommended for the domestic computer code for performance prediction of fuel rods in fast reactors.
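
    Fuel-performance codes typically express unirradiated UO2 conductivity as a phonon (lattice) term of the form 1/(A + B·T) plus a small electronic term that matters only at high temperature. The sketch below uses that generic form with placeholder coefficients chosen in the spirit of common correlations; it is not one of the specific models compared in the paper.

      import numpy as np

      def uo2_conductivity(T, A=4.52e-2, B=2.46e-4, C=3.5e9, D=16361.0):
          """Generic unirradiated-UO2 thermal conductivity (W/m/K):
          lattice term 1/(A + B*T) plus electronic term (C/T^2)*exp(-D/T).
          Coefficients are illustrative placeholders, not the models
          evaluated in the paper. T in kelvin."""
          T = np.asarray(T, dtype=float)
          return 1.0 / (A + B * T) + (C / T ** 2) * np.exp(-D / T)

      for T in (500.0, 1000.0, 1500.0, 2000.0):
          print(f"{T:6.0f} K : {uo2_conductivity(T):5.2f} W/m/K")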

  16. Development of a predictive code for aircrew radiation exposure (PCAIRE)

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, B.J.; Bennett, L.G.I.; Green, A.R.; McCall, M.J.; Ellaschuk, B.; Pierre, M.; Butler, A.; Desormeaux, M. [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, Ontario (Canada)

    2003-07-01

    Jet aircrew are routinely exposed to levels of natural background radiation (i.e., galactic cosmic radiation) that are significantly higher than those present at ground level. This paper describes the method of collecting and analyzing radiation data from numerous worldwide flights, and the encapsulation of these results into a computer code (PCAIRE) for the prediction of aircrew radiation exposure on any flight in the world at any period in the solar cycle. Predictions from the PCAIRE code were then compared to integral doses measured at commercial altitudes during experimental flights made by various research groups over the past five years over the given solar cycle. In general, the code predictions are in agreement with the measured data within ±20%. An additional correlation has been developed for estimation of aircrew exposure resulting from solar particle events. (author)

  17. Piecewise Mapping in HEVC Lossless Intra-prediction Coding.

    Science.gov (United States)

    Sanchez, Victor; Auli-Llinas, Francesc; Serra-Sagrista, Joan

    2016-05-19

    The lossless intra-prediction coding modality of the High Efficiency Video Coding (HEVC) standard provides high coding performance while allowing frame-by-frame basis access to the coded data. This is of interest in many professional applications such as medical imaging, automotive vision and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using Differential Pulse Code Modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. Specifically, we analyze the range of values in residual blocks and apply accordingly a pwm function to map specific residual values to unique lower values. We encode appropriate parameters associated with the pwm functions at the encoder, so that the corresponding inverse pwm functions at the decoder can map values back to the same residual values. These residual values are then used to reconstruct the original signal. This mapping is, therefore, reversible and introduces no losses. We evaluate the pwm functions on 4×4 residual blocks computed after DPCM-based prediction for lossless coding of a variety of camera-captured and screen content sequences. Evaluation results show that the pwm functions can attain maximum bit-rate reductions of 5.54% and 28.33% for screen content material compared to DPCM-based and block-wise intra-prediction, respectively. Compared to Intra-Block Copy, piecewise mapping can attain maximum bit-rate reductions of 11.48% for camera-captured material.
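
    The essential requirement here is a reversible map from the residual values present in a block to unique lower-magnitude symbols, with the map's parameters signaled to the decoder. The sketch below illustrates that requirement with a simple rank-based table map; it is a stand-in for, not a reproduction of, the paper's pwm functions.

      import numpy as np

      def forward_map(block):
          """Map each residual to its rank among the distinct values in
          the block. The sorted value table is the side information the
          encoder would signal so the decoder can invert the map."""
          table = np.unique(block)               # sorted distinct values
          return np.searchsorted(table, block), table

      def inverse_map(mapped, table):
          """Recover the exact residuals from ranks plus the table."""
          return table[mapped]

      rng = np.random.default_rng(3)
      residuals = rng.choice([-37, -12, 0, 5, 41], size=(4, 4))  # toy block
      mapped, table = forward_map(residuals)
      assert np.array_equal(inverse_map(mapped, table), residuals)  # lossless
      print("max |residual|:", int(np.max(np.abs(residuals))),
            "-> max mapped symbol:", int(mapped.max()))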

  18. FLASH: A finite element computer code for variably saturated flow

    Energy Technology Data Exchange (ETDEWEB)

    Baca, R.G.; Magnuson, S.O.

    1992-05-01

    A numerical model was developed for use in performance assessment studies at the INEL. The numerical model, referred to as the FLASH computer code, is designed to simulate two-dimensional fluid flow in fractured-porous media. The code is specifically designed to model variably saturated flow in an arid site vadose zone and saturated flow in an unconfined aquifer. In addition, the code also has the capability to simulate heat conduction in the vadose zone. This report presents the following: description of the conceptual framework and mathematical theory; derivations of the finite element techniques and algorithms; computational examples that illustrate the capability of the code; and input instructions for the general use of the code. The FLASH computer code is aimed at providing environmental scientists at the INEL with a predictive tool for the subsurface water pathway. This numerical model is expected to be widely used in performance assessments for: (1) the Remedial Investigation/Feasibility Study process and (2) compliance studies required by US Department of Energy Order 5820.2A.

  19. Incompressible face seals: Computer code IFACE

    Science.gov (United States)

    Artiles, Antonio

    1994-01-01

    Capabilities of the computer code IFACE are given in viewgraph format. These include: two dimensional, incompressible, isoviscous flow; rotation of both rotor and housing; roughness in both rotor and housing; arbitrary film thickness distribution, including steps, pockets, and tapers; three degrees of freedom; dynamic coefficients; prescribed force and moments; pocket pressures or orifice size; turbulence, Couette and Poiseuille flow; cavitation; and inertia pressure drops at inlets to film.

  20. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for coded mask imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when the requirements include wide fields of view, energies too high for focusing optics or too low for Compton/tracker techniques, and very good angular resolution). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position-sensitive detectors used for coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT, and details of the design of the EXIST/HET.

  1. New developments in the Saphire computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Russell, K.D.; Wood, S.T.; Kvarfordt, K.J. [Idaho Engineering Lab., Idaho Falls, ID (United States)] [and others]

    1996-03-01

    The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a suite of computer programs that were developed to create and analyze a probabilistic risk assessment (PRA) of a nuclear power plant. Many recent enhancements to this suite of codes have been made. This presentation will provide an overview of these features and capabilities. The presentation will include a discussion of the new GEM module. This module greatly reduces and simplifies the work necessary to use the SAPHIRE code in event assessment applications. An overview of the features provided in the new Windows version will also be provided. This version is a full Windows 32-bit implementation and offers many new and exciting features. [A separate computer demonstration was held to allow interested participants to get a preview of these features.] The new capabilities that have been added since version 5.0 will be covered. Some of these major new features include the ability to store an unlimited number of basic events, gates, systems, sequences, etc.; the addition of improved reporting capabilities to allow the user to generate and "scroll" through custom reports; the addition of multi-variable importance measures; and the simplification of the user interface. Although originally designed as a PRA Level 1 suite of codes, capabilities have recently been added to SAPHIRE to allow the user to apply the code in Level 2 analyses. These features will be discussed in detail during the presentation. The modifications and capabilities added to this version of SAPHIRE significantly extend the code in many important areas. Together, these extensions represent a major step forward in PC-based risk analysis tools. This presentation provides a current up-to-date status of these important PRA analysis tools.

  2. Tuning complex computer code to data

    Energy Technology Data Exchange (ETDEWEB)

    Cox, D.; Park, J.S.; Sacks, J.; Singer, C.

    1992-01-01

    The problem of estimating parameters in a complex computer simulator of a nuclear fusion reactor from an experimental database is treated. Practical limitations do not permit a standard statistical analysis using nonlinear regression methodology. The assumption that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process provides a statistical method for combining information from relatively few computer runs with information from the experimental database and making inferences on the parameters.
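
    The paper's central assumption, that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process, is what underlies today's Gaussian-process emulation of expensive codes. A minimal GP-regression sketch over a handful of toy "code runs" follows; the kernel, length scale, and test function are illustrative choices, not the paper's setup.

      import numpy as np

      def rbf(a, b, length=0.3, var=1.0):
          """Squared-exponential covariance between 1-D input sets."""
          d = np.subtract.outer(a, b)
          return var * np.exp(-0.5 * (d / length) ** 2)

      # A few expensive "code runs" (toy stand-in for the simulator):
      x_run = np.array([0.0, 0.2, 0.45, 0.7, 1.0])
      y_run = np.sin(3.0 * x_run)

      # GP posterior at new inputs; small nugget for numerical stability.
      x_new = np.linspace(0.0, 1.0, 5)
      K = rbf(x_run, x_run) + 1e-8 * np.eye(x_run.size)
      k_star = rbf(x_new, x_run)
      mean = k_star @ np.linalg.solve(K, y_run)
      cov = rbf(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
      sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))

      for x, m, s in zip(x_new, mean, sd):
          print(f"x={x:.2f}  y_hat={m:+.3f} +/- {2 * s:.3f}")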

  3. SALE: Safeguards Analytical Laboratory Evaluation computer code

    Energy Technology Data Exchange (ETDEWEB)

    Carroll, D.J.; Bush, W.J.; Dolan, C.A.

    1976-09-01

    The Safeguards Analytical Laboratory Evaluation (SALE) program implements an industry-wide quality control and evaluation system aimed at identifying and reducing analytical chemical measurement errors. Samples of well-characterized materials are distributed to laboratory participants at periodic intervals for determination of uranium or plutonium concentration and isotopic distributions. The results of these determinations are statistically-evaluated, and each participant is informed of the accuracy and precision of his results in a timely manner. The SALE computer code which produces the report is designed to facilitate rapid transmission of this information in order that meaningful quality control will be provided. Various statistical techniques comprise the output of the SALE computer code. Assuming an unbalanced nested design, an analysis of variance is performed in subroutine NEST resulting in a test of significance for time and analyst effects. A trend test is performed in subroutine TREND. Microfilm plots are obtained from subroutine CUMPLT. Within-laboratory standard deviations are calculated in the main program or subroutine VAREST, and between-laboratory standard deviations are calculated in SBLV. Other statistical tests are also performed. Up to 1,500 pieces of data for each nuclear material sampled by 75 (or fewer) laboratories may be analyzed with this code. The input deck necessary to run the program is shown, and input parameters are discussed in detail. Printed output and microfilm plot output are described. Output from a typical SALE run is included as a sample problem.

  4. Neutron spectrum unfolding using computer code SAIPS

    CERN Document Server

    Karim, S

    1999-01-01

    The main objective of this project was to study the neutron energy spectrum at rabbit station-1 in Pakistan Research Reactor (PARR-1). To do so, the multiple-foil activation method was used to obtain the saturated activities. The computer code SAIPS was used to unfold the neutron spectra from the measured reaction rates. Of the three built-in codes in SAIPS, only SANDII and WINDOWS were used. The contribution of the thermal part of the spectrum was observed to be higher than that of the fast part. It was found that WINDOWS gave smooth spectra, while the SANDII spectra have violent oscillations in the resonance region. The uncertainties in the WINDOWS results are higher than those of SANDII. The results show reasonable agreement with the published results.
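
    At its core, multiple-foil unfolding inverts R_i = sum_j A_ij * phi_j for nonnegative group fluxes phi. The sketch below does this with nonnegative least squares on a made-up 4-foil, 4-group response matrix; real problems have far more groups than foils, which is why iterative codes such as SANDII start from an a priori spectrum.

      import numpy as np
      from scipy.optimize import nnls

      # Toy 4-foil / 4-group problem: R = A @ phi, solve for phi >= 0.
      # Response matrix and "measured" rates are illustrative only.
      A = np.array([[5.00, 1.00, 0.20, 0.05],   # thermal-sensitive foil
                    [0.50, 2.00, 1.00, 0.30],   # resonance-sensitive foil
                    [0.10, 0.30, 1.50, 2.00],   # fast-sensitive foil
                    [0.02, 0.10, 0.80, 3.00]])  # threshold-reaction foil
      phi_true = np.array([8.0, 2.0, 1.0, 0.5])
      R = A @ phi_true                 # saturated reaction rates

      phi_est, rnorm = nnls(A, R)
      print("unfolded group fluxes:", np.round(phi_est, 3))
      print("residual norm:", round(float(rnorm), 6))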

  5. From Coding to Computational Thinking and Back

    OpenAIRE

    DePryck, K.

    2016-01-01

    Presentation of Dr. Koen DePryck in the Computational Thinking Session at the TEEM 2016 Conference, held at the University of Salamanca (Spain), Nov 2-4, 2016.   Introducing coding in the curriculum at an early age is considered a long-term investment in bridging the skills gap between the technology demands of the labour market and the availability of people to fill them. The keys to success include moving from mere literacy to active control – not only at the level of learners but also ...

  6. Spiking network simulation code for petascale computers

    Science.gov (United States)

    Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M.; Plesser, Hans E.; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz

    2014-01-01

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today. PMID:25346682

  7. Spiking network simulation code for petascale computers

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2014-10-01

    Full Text Available Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

  8. Visual mismatch negativity: A predictive coding view

    Directory of Open Access Journals (Sweden)

    Gabor Stefanics

    2014-09-01

    Full Text Available An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control 'refractoriness' effects; (2) methods to control attention; (3) vMMN and veridical perception.

  9. Methodology for computational fluid dynamics code verification/validation

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.; Blottner, F.G.; Aeschliman, D.P.

    1995-07-01

    The issues of verification, calibration, and validation of computational fluid dynamics (CFD) codes have been receiving increasing levels of attention in the research literature and in engineering technology. Both CFD researchers and users of CFD codes are asking more critical and detailed questions concerning the accuracy, range of applicability, reliability and robustness of CFD codes and their predictions. This is a welcome trend because it demonstrates that CFD is maturing from a research tool to the world of impacting engineering hardware and system design. In this environment, the broad issue of code quality assurance becomes paramount. However, the philosophy and methodology of building confidence in CFD code predictions have proven to be more difficult than many expected. A wide variety of physical modeling errors and discretization errors are discussed. Here, discretization errors refer to all errors caused by conversion of the original partial differential equations to algebraic equations, and their solution. Boundary conditions for both the partial differential equations and the discretized equations will be discussed. Contrasts are drawn between the assumptions and actual use of numerical method consistency and stability. Comments are also made concerning the existence and uniqueness of solutions for both the partial differential equations and the discrete equations. Various techniques are suggested for the detection and estimation of errors caused by physical modeling and discretization of the partial differential equations.
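
    One standard way to estimate the discretization errors discussed above (a generic technique, not specific to this paper) is Richardson extrapolation on systematically refined grids. A minimal sketch, assuming a constant refinement ratio, solutions in the asymptotic range, and illustrative numbers:

```python
import math

# Hypothetical values of one output quantity on coarse, medium, and fine
# grids, each grid refined from the previous one by a constant ratio r.
f_coarse, f_medium, f_fine = 1.0520, 1.0132, 1.0033
r = 2.0

# Observed order of accuracy from the three solutions.
p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

# Richardson-extrapolated value and a fine-grid discretization error estimate.
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact_est:.5f}")
print(f"fine-grid error estimate = {abs(f_fine - f_exact_est):.2e}")
```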

  10. Superimposed Code Theoretic Analysis of DNA Codes and DNA Computing

    Science.gov (United States)

    2010-03-01

    Random coding bounds for DNA codes based on Fibonacci ensembles of DNA sequences (A. Macula et al., 2008 IEEE International Symposium on Information Theory) are among the results covered. The report also describes a combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences represented in DNA. ComDMem is a ...

  11. Development of probabilistic internal dosimetry computer code

    Science.gov (United States)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-02-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of ...
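
    The Monte Carlo propagation step can be caricatured in a few lines. The sketch below is not the authors' MATLAB code; all distributions and values are hypothetical, and it merely shows how uncertain inputs propagate into the percentiles the paper reports:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # Monte Carlo trials

# Hypothetical uncertain inputs (all values illustrative only):
# measured bioassay activity (Bq) with lognormal measurement uncertainty,
# intake retention fraction, and a dose coefficient (Sv/Bq).
measured = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=n)
retention = rng.normal(loc=0.05, scale=0.005, size=n).clip(1e-4)
dose_coeff = rng.lognormal(mean=np.log(1.0e-6), sigma=0.3, size=n)

intake = measured / retention          # inferred intake activity (Bq)
dose = intake * dose_coeff             # committed dose (Sv)

# Statistics of the kind reported in the paper.
for q in (2.5, 5, 50, 95, 97.5):
    print(f"{q:5.1f}th percentile: {np.percentile(dose, q):.3e} Sv")
```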

  12. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  13. Development of a shock noise prediction code for high-speed helicopters - The subsonically moving shock

    Science.gov (United States)

    Tadghighi, H.; Holz, R.; Farassat, F.; Lee, Yung-Jang

    1991-01-01

    A previously defined airfoil subsonic shock-noise prediction formula whose result depends on a mapping of the time-dependent shock surface to a time-independent computational domain is presently coded and incorporated in the NASA-Langley rotor-noise prediction code, WOPWOP. The structure and algorithms used in the shock-noise prediction code are presented; special care has been taken to reduce computation time while maintaining accuracy. Numerical examples of shock-noise prediction are presented for hover and forward flight. It is confirmed that shock noise is an important component of the quadrupole source.

  14. A surface code quantum computer in silicon.

    Science.gov (United States)

    Hill, Charles D; Peretz, Eldad; Hile, Samuel J; House, Matthew G; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y; Hollenberg, Lloyd C L

    2015-10-01

    The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel, posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited.

  15. Application of the RESRAD computer code to VAMP scenario S

    Energy Technology Data Exchange (ETDEWEB)

    Gnanapragasam, E.K.; Yu, C.

    1997-03-01

    The RESRAD computer code developed at Argonne National Laboratory was among 11 models from 11 countries participating in the international Scenario S validation of radiological assessment models with Chernobyl fallout data from southern Finland. The validation test was conducted by the Multiple Pathways Assessment Working Group of the Validation of Environmental Model Predictions (VAMP) program coordinated by the International Atomic Energy Agency. RESRAD was enhanced to provide an output of contaminant concentrations in environmental media and in food products to compare with measured data from southern Finland. Probability distributions for inputs that were judged to be most uncertain were obtained from the literature and from information provided in the scenario description prepared by the Finnish Centre for Radiation and Nuclear Safety. The deterministic version of RESRAD was run repeatedly to generate probability distributions for the required predictions. These predictions were used later to verify the probabilistic RESRAD code. The RESRAD predictions of radionuclide concentrations are compared with measured concentrations in selected food products. The radiological doses predicted by RESRAD are also compared with those estimated by the Finnish Centre for Radiation and Nuclear Safety.

  16. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

    This document contains detailed user instructions for the suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. These utility codes perform many supporting functions for the capability.

  17. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment, § 194.23 Models and computer codes (General Requirements): (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) computer models accurately implement the numerical models; i.e., ...

  18. Predictive coding and the slowness principle: an information-theoretic approach.

    Science.gov (United States)

    Creutzig, Felix; Sprekeler, Henning

    2008-04-01

    Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
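
    Linear slow feature analysis, the case the paper connects to predictive coding, fits in a few lines: whiten the input, then take the directions in which the temporal derivative has the least variance. The sketch below is a generic textbook implementation with a toy mixed signal, not the authors' code:

```python
import numpy as np

def linear_sfa(x, n_slow=1):
    """Linear SFA: unit-variance linear projections of x (time x channels)
    whose outputs vary most slowly in time."""
    x = x - x.mean(axis=0)
    # Whiten the input.
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    white = evecs / np.sqrt(evals)            # columns scale to unit variance
    z = x @ white
    # Minimize the variance of the temporal derivative in whitened space.
    dz = np.diff(z, axis=0)
    _, d_evecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    w = white @ d_evecs[:, :n_slow]           # slowest directions come first
    return x @ w

# Toy signal: a slow sine mixed into two fast, oscillating channels.
t = np.linspace(0, 20, 4000)
slow = np.sin(0.5 * t)
x = np.column_stack([slow + 0.3 * np.sin(37 * t),
                     slow - 0.3 * np.cos(53 * t)])
y = linear_sfa(x)
# Recovered feature matches the slow source up to an arbitrary sign.
print("correlation with slow source:", np.corrcoef(y[:, 0], slow)[0, 1])
```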

  19. Predictive codes of familiarity and context during the perceptual learning of facial identities.

    Science.gov (United States)

    Apps, Matthew A J; Tsakiris, Manos

    2013-01-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
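
    The core computation, familiarity updated by prediction errors, can be caricatured by a generic delta-rule update. This is a sketch in the spirit of the model described, with hypothetical parameter values, not the authors' actual formulation:

```python
# Generic prediction-error (delta-rule) update of stimulus familiarity,
# in the spirit of predictive-coding accounts; all parameters hypothetical.
def update_familiarity(familiarity, observed=1.0, learning_rate=0.3):
    prediction_error = observed - familiarity   # surprise at seeing the face
    return familiarity + learning_rate * prediction_error

familiarity = 0.0                               # the face starts out unfamiliar
for encounter in range(1, 8):
    familiarity = update_familiarity(familiarity)
    print(f"after encounter {encounter}: familiarity = {familiarity:.3f}")
```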

  20. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    Science.gov (United States)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver, following previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting mass flow and current to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
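
    The kind of forward and inverse database interpolation described here can be sketched with SciPy. The grid, the tabulated heating rates, and the target value below are all hypothetical; the real ADSI databases and response surfaces are of course far richer:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical arc jet database: stagnation heating rate (W/cm^2) tabulated
# on a grid of mass flow rate (kg/s) and arc current (A). Values illustrative.
mass_flow = np.array([0.1, 0.2, 0.3, 0.4])
current = np.array([2000.0, 4000.0, 6000.0])
heating = np.array([[150., 260., 360.],
                    [210., 340., 470.],
                    [260., 410., 560.],
                    [300., 470., 640.]])

forward = RegularGridInterpolator((mass_flow, current), heating)

# Forward use: heating rate at a user-specified operating point.
print("q at (0.25 kg/s, 5000 A):", forward([[0.25, 5000.0]])[0])

# Inverse use (crude): search the operating envelope for the point whose
# interpolated heating rate is closest to a desired value.
mf, cu = np.meshgrid(np.linspace(0.1, 0.4, 61), np.linspace(2000, 6000, 81))
pts = np.column_stack([mf.ravel(), cu.ravel()])
best = pts[np.argmin(np.abs(forward(pts) - 400.0))]
print("conditions for ~400 W/cm^2:", best)
```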

  1. Hanford Meteorological Station computer codes: Volume 6, The SFC computer code

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, G.L.; Buck, J.W.

    1987-11-01

    Each hour the Hanford Meteorological Station (HMS), operated by Pacific Northwest Laboratory, records and archives weather observations. Hourly surface weather observations consist of weather phenomena such as cloud type and coverage; dry bulb, wet bulb, and dew point temperatures; relative humidity; atmospheric pressure; and wind speed and direction. The SFC computer code is used to archive those weather observations and apply quality assurance checks to the data. This code accesses an input file, which contains the previous archive's date and hour, and an output file, which contains surface observations for the current day. As part of the program, a data entry form consisting of 24 fields must be filled in. The information on the form is appended to the daily file, which provides an archive for the hourly surface observations.

  2. Private Computing and Mobile Code Systems

    NARCIS (Netherlands)

    Cartrysse, K.

    2005-01-01

    This thesis' objective is to provide privacy to mobile code. A practical example of mobile code is a mobile software agent that performs a task on behalf of its user. The agent travels over the network and is executed at different locations of which beforehand it is not known whether or not these ca

  3. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.;

    This paper investigated the possibility of intrinsic information conveying in network coding systems. The information is embedded into the coding vector by constructing the vector based on a set of predefined rules. This information can subsequently be retrieved by any receiver. The starting point is RLNC (Random Linear Network Coding), and the goal is to reduce the amount of coding operations both at the coding and decoding node, and at the same time remove the need for dedicated signaling messages. In a traditional RLNC system, the coding operation takes up significant computational resources and adds ...
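
    For orientation, here is what plain RLNC encoding looks like over GF(2), with random coding vectors rather than the paper's rule-based construction; the generation size, packet length, and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    m = np.array(rows, dtype=np.uint8).copy()
    rank = 0
    for col in range(m.shape[1]):
        pivots = np.nonzero(m[rank:, col])[0]
        if pivots.size == 0:
            continue
        m[[rank, rank + pivots[0]]] = m[[rank + pivots[0], rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
        if rank == m.shape[0]:
            break
    return rank

# A generation of 4 source packets, 64 bits each, over GF(2).
packets = rng.integers(0, 2, size=(4, 64), dtype=np.uint8)

# Encoding: each coded packet is a random GF(2) combination of the sources.
coefs = []
while not coefs or gf2_rank(coefs) < len(packets):
    c = rng.integers(0, 2, size=len(packets), dtype=np.uint8)
    if not c.any():
        continue  # an all-zero coding vector carries no information
    coefs.append(c)
payloads = [(c @ packets) % 2 for c in coefs]
print(f"{len(coefs)} coded packets reach full rank; a receiver can now "
      "decode by Gaussian elimination on the coding vectors")
```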

  4. Multicode comparison of selected source-term computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Hermann, O.W.; Parks, C.V.; Renier, J.P.; Roddy, J.W.; Ashline, R.C.; Wilson, W.B.; LaBauve, R.J.

    1989-04-01

    This report summarizes the results of a study to assess the predictive capabilities of three radionuclide inventory/depletion computer codes, ORIGEN2, ORIGEN-S, and CINDER-2. The task was accomplished through a series of comparisons of their output for several light-water reactor (LWR) models (i.e., verification). Of the five cases chosen, two modeled typical boiling-water reactors (BWR) at burnups of 27.5 and 40 GWd/MTU and two represented typical pressurized-water reactors (PWR) at burnups of 33 and 50 GWd/MTU. In the fifth case, identical input data were used for each of the codes to examine the results of decay only and to show differences in nuclear decay constants and decay heat rates. Comparisons were made for several different characteristics (mass, radioactivity, and decay heat rate) for 52 radionuclides and for nine decay periods ranging from 30 d to 10,000 years. Only fission products and actinides were considered. The results are presented in comparative-ratio tables for each of the characteristics, decay periods, and cases. A brief summary description of each of the codes has been included. Of the more than 21,000 individual comparisons made for the three codes (taken two at a time), nearly half (45%) agreed to within 1%, and an additional 17% fell within the range of 1 to 5%. Approximately 8% of the comparison results disagreed by more than 30%. However, relatively good agreement was obtained for most of the radionuclides that are expected to contribute the greatest impact to waste disposal. Even though some defects have been noted, each of the codes in the comparison appears to produce respectable results. 12 figs., 12 tabs.

  5. Computer codes for birds of North America

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The purpose of the paper was to provide a more useful set of codes for all North American bird species, thus making the list applicable to virtually all projects concerning ...

  6. Non-coding RNA prediction and verification in Saccharomyces cerevisiae.

    Directory of Open Access Journals (Sweden)

    Laura A Kavanaugh

    2009-01-01

    Full Text Available Non-coding RNA (ncRNA) play an important and varied role in cellular function. A significant amount of research has been devoted to computational prediction of these genes from genomic sequence, but the ability to do so has remained elusive due to a lack of apparent genomic features. In this work, thermodynamic stability of ncRNA structural elements, as summarized in a Z-score, is used to predict ncRNA in the yeast Saccharomyces cerevisiae. This analysis was coupled with comparative genomics to search for ncRNA genes on chromosome six of S. cerevisiae and S. bayanus. Sets of positive and negative control genes were evaluated to determine the efficacy of thermodynamic stability for discriminating ncRNA from background sequence. The effect of window sizes and step sizes on the sensitivity of ncRNA identification was also explored. Non-coding RNA gene candidates, common to both S. cerevisiae and S. bayanus, were verified using northern blot analysis, rapid amplification of cDNA ends (RACE), and publicly available cDNA library data. Four ncRNA transcripts are well supported by experimental data (RUF10, RUF11, RUF12, RUF13), while one additional putative ncRNA transcript is well supported but the data are not entirely conclusive. Six candidates appear to be structural elements in 5' or 3' untranslated regions of annotated protein-coding genes. This work shows that thermodynamic stability, coupled with comparative genomics, can be used to predict ncRNA with significant structural elements.
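
    A thermodynamic-stability Z-score of the kind described can be sketched as follows, assuming the ViennaRNA Python bindings (`RNA.fold`) are installed. Note this is only an illustration: the sequence is hypothetical, and published analyses use more careful shuffling (e.g., preserving dinucleotide content) across genome-wide windows:

```python
import random
import statistics
import RNA  # ViennaRNA Python bindings (assumed installed)

def mfe(seq):
    """Minimum free energy of the predicted secondary structure (kcal/mol)."""
    _structure, energy = RNA.fold(seq)
    return energy

def stability_zscore(seq, n_shuffles=100, seed=0):
    """Z-score of the native MFE against mononucleotide-shuffled sequences."""
    rng = random.Random(seed)
    native = mfe(seq)
    shuffled = []
    for _ in range(n_shuffles):
        chars = list(seq)
        rng.shuffle(chars)
        shuffled.append(mfe("".join(chars)))
    mu = statistics.mean(shuffled)
    sd = statistics.stdev(shuffled)
    return (native - mu) / sd   # strongly negative => unusually stable

window = "GGGAAAUCCCGAGGAAACUCGGUCUUCCCAGGGUUU"  # hypothetical genomic window
print("Z =", stability_zscore(window))
```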

  7. PEBBLES: A COMPUTER CODE FOR MODELING PACKING, FLOW AND RECIRCULATION OF PEBBLES IN A PEBBLE BED REACTOR

    Energy Technology Data Exchange (ETDEWEB)

    Joshua J. Cogliati; Abderrafi M. Ougouag

    2006-10-01

    A comprehensive, high fidelity model for pebble flow has been developed and embodied in the PEBBLES computer code. In this paper, a description of the physical artifacts included in the model is presented and some results from using the computer code for predicting the features of pebble flow and packing in a realistic pebble bed reactor design are shown. The sensitivity of models to various physical parameters is also discussed.

  8. PORPST: A statistical postprocessor for the PORMC computer code

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, P.W.; Didier, B.T. (Pacific Northwest Lab., Richland, WA (United States))

    1991-06-01

    This report describes the theory underlying the PORPST code and gives details for using the code. The PORPST code is designed to do statistical postprocessing on files written by the PORMC computer code. The data written by PORMC are summarized in terms of means, variances, standard deviations, or statistical distributions. In addition, the PORPST code provides for plotting of the results, either internal to the code or through use of the CONTOUR3 postprocessor. Section 2.0 discusses the mathematical basis of the code, and Section 3.0 discusses the code structure. Section 4.0 describes the free-format point command language. Section 5.0 describes in detail the commands to run the program. Section 6.0 provides an example program run, and Section 7.0 provides the references. 11 refs., 1 fig., 17 tabs.

  9. Optimization of KINETICS Chemical Computation Code

    Science.gov (United States)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.
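
    The optimization described, communicating only the variables a subroutine actually needs, follows the pattern below. The original KINETICS work is in FORTRAN; this mpi4py sketch with hypothetical array names only illustrates the idea:

```python
# Sketch of "send only what the subroutine needs" with MPI (mpi4py).
# Array names are hypothetical; run with e.g.: mpiexec -n 4 python sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_species = 8
# Only the inputs the chemistry subroutine actually reads are broadcast...
rates = np.empty(n_species) if rank else np.linspace(1.0, 2.0, n_species)
comm.Bcast(rates, root=0)

# ...each process works on its own slice of the species...
chunk = np.array_split(np.arange(n_species), comm.Get_size())[rank]
local = rates[chunk] ** 2          # stand-in for the real chemistry update

# ...and only the outputs that changed are gathered back on the root.
gathered = comm.gather(local, root=0)
if rank == 0:
    print("updated values:", np.concatenate(gathered))
```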

  10. Codes of Ethics for Computing at Russian Institutions and Universities.

    Science.gov (United States)

    Pourciau, Lester J.; Spain, Victoria, Ed.

    1997-01-01

    To determine the degree to which Russian institutions and universities have formulated and promulgated codes of ethics or policies for acceptable computer use, the author examined Russian institution and university home pages. Lists home pages examined, 10 commandments for computer ethics from the Computer Ethics Institute, and a policy statement…

  11. Continuous Materiality: Through a Hierarchy of Computational Codes

    Directory of Open Access Journals (Sweden)

    Jichen Zhu

    2008-01-01

    Full Text Available The legacy of Cartesian dualism inherent in linguistic theory deeply influences current views on the relation between natural language, computer code, and the physical world. However, the oversimplified distinction between mind and body falls short of capturing the complex interaction between the material and the immaterial. In this paper, we posit a hierarchy of codes to delineate a wide spectrum of continuous materiality. Our research suggests that diagrams in architecture provide a valuable analog for approaching computer code in emergent digital systems. After commenting on ways that Cartesian dualism continues to haunt discussions of code, we turn our attention to diagrams and design morphology. Finally we notice the implications a material understanding of code bears for further research on the relation between human cognition and digital code. Our discussion concludes by noticing several areas that we have projected for ongoing research.

  12. Computer code simulations of explosions in flow networks and comparison with experiments

    Science.gov (United States)

    Gregory, W. S.; Nichols, B. D.; Moore, J. A.; Smith, P. R.; Steinke, R. G.; Idzorek, R. D.

    1987-10-01

    A program of experimental testing and computer code development for predicting the effects of explosions in air-cleaning systems is being carried out for the Department of Energy. This work is a combined effort by the Los Alamos National Laboratory and New Mexico State University (NMSU). Los Alamos has the lead responsibility in the project and develops the computer codes; NMSU performs the experimental testing. The emphasis in the program is on obtaining experimental data to verify the analytical work. The primary benefit of this work will be the development of a verified computer code that safety analysts can use to analyze the effects of hypothetical explosions in nuclear plant air cleaning systems. The experimental data show the combined effects of explosions in air-cleaning systems that contain all of the important air-cleaning elements (blowers, dampers, filters, ductwork, and cells). A small experimental set-up consisting of multiple rooms, ductwork, a damper, a filter, and a blower was constructed. Explosions were simulated with a shock tube, hydrogen/air-filled gas balloons, and blasting caps. Analytical predictions were made using the EVENT84 and NF85 computer codes. The EVENT84 code predictions were in good agreement with the effects of the hydrogen/air explosions, but they did not model the blasting cap explosions adequately. NF85 predicted shock entrance to and within the experimental set-up very well. The NF85 code was not used to model the hydrogen/air or blasting cap explosions.

  13. Structural Computer Code Evaluation. Volume I

    Science.gov (United States)

    1976-11-01

    ... Rivlin model for large strains. Other examples are given in Reference 5. Hypoelasticity: a hypoelastic material is one in which the components of ... What remains is the application of these codes to specific rocket nozzle problems and the evaluation of their capabilities to model modern nozzle material behavior. Further work may also require the development of appropriate material property data or new material models to adequately characterize these materials.

  14. Quantum computation with Turaev-Viro codes

    CERN Document Server

    Koenig, Robert; Reichardt, Ben W

    2010-01-01

    The Turaev-Viro invariant for a closed 3-manifold is defined as the contraction of a certain tensor network. The tensors correspond to tetrahedra in a triangulation of the manifold, with values determined by a fixed spherical category. For a manifold with boundary, the tensor network has free indices that can be associated to qudits, and its contraction gives the coefficients of a quantum error-correcting code. The code has local stabilizers determined by Levin and Wen. For example, applied to the genus-one handlebody using the Z_2 category, this construction yields the well-known toric code. For other categories, such as the Fibonacci category, the construction realizes a non-abelian anyon model over a discrete lattice. By studying braid group representations acting on equivalence classes of colored ribbon graphs embedded in a punctured sphere, we identify the anyons, and give a simple recipe for mapping fusion basis states of the doubled category to ribbon graphs. We explain how suitable initial states can ...

  15. Lattice Boltzmann method fundamentals and engineering applications with computer codes

    CERN Document Server

    Mohamad, A A

    2014-01-01

    Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
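
    In the spirit of the book's worked examples (though not copied from it), here is a minimal one-dimensional D1Q2 lattice Boltzmann solver for heat diffusion in a slab with a hot left wall; the omega-alpha relation follows the standard LBM diffusion formulation:

```python
import numpy as np

n, steps = 100, 5000
alpha = 0.25                 # thermal diffusivity in lattice units (dx = dt = 1)
omega = 1.0 / (alpha + 0.5)  # standard D1Q2 relation: alpha = 1/omega - 1/2

f1 = np.zeros(n)             # right-moving population
f2 = np.zeros(n)             # left-moving population

for _ in range(steps):
    T = f1 + f2
    # Collision: relax both populations toward the equilibrium T/2.
    f1 += omega * (0.5 * T - f1)
    f2 += omega * (0.5 * T - f2)
    # Streaming.
    f1[1:] = f1[:-1].copy()
    f2[:-1] = f2[1:].copy()
    # Boundaries: hot wall T = 1 on the left, adiabatic on the right.
    f1[0] = 1.0 - f2[0]
    f1[-1], f2[-1] = f1[-2], f2[-2]

print("temperatures near the wall:", np.round((f1 + f2)[:5], 3))
```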

  16. Computer aided power flow software engineering and code generation

    Energy Technology Data Exchange (ETDEWEB)

    Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)

    1996-02-01

    In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.
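
    To make the mismatch/Jacobian structure concrete, the sketch below runs a plain Newton-Raphson power flow on a hypothetical two-bus system. The Jacobian here is formed by finite differences for brevity; the paper's point is precisely that analytic Jacobian code can instead be generated automatically:

```python
import numpy as np

# Hypothetical two-bus system: bus 1 slack (V = 1.0 pu, theta = 0),
# bus 2 a PQ bus drawing P = 0.5 pu, Q = 0.2 pu over a line z = 0.01 + j0.1 pu.
y_line = 1.0 / (0.01 + 0.1j)
Y = np.array([[y_line, -y_line], [-y_line, y_line]])
G, B = Y.real, Y.imag
P2_spec, Q2_spec = -0.5, -0.2          # load = negative injection

def mismatch(x):
    th = np.array([0.0, x[0]])
    v = np.array([1.0, x[1]])
    p2 = v[1] * sum(v[k] * (G[1, k] * np.cos(th[1] - th[k])
                            + B[1, k] * np.sin(th[1] - th[k])) for k in range(2))
    q2 = v[1] * sum(v[k] * (G[1, k] * np.sin(th[1] - th[k])
                            - B[1, k] * np.cos(th[1] - th[k])) for k in range(2))
    return np.array([P2_spec - p2, Q2_spec - q2])

x = np.array([0.0, 1.0])               # flat start: theta2 = 0, V2 = 1 pu
for it in range(20):
    F = mismatch(x)
    if np.max(np.abs(F)) < 1e-10:
        break
    # Numeric Jacobian by finite differences (stand-in for generated code).
    J = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = 1e-7
        J[:, j] = (mismatch(x + dx) - F) / 1e-7
    x -= np.linalg.solve(J, F)

print(f"converged in {it} iterations: theta2 = {x[0]:.5f} rad, V2 = {x[1]:.5f} pu")
```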

  17. Computer aided power flow software engineering and code generation

    Energy Technology Data Exchange (ETDEWEB)

    Bacher, R. [Swiss Federal Inst. of Tech., Zuerich (Switzerland)

    1995-12-31

    In this paper a software engineering concept is described which permits the automatic solution of a non-linear set of network equations. The power flow equation set can be seen as a defined subset of a network equation set. The automated solution process is the numerical Newton-Raphson solution process of the power flow equations where the key code parts are the numeric mismatch and the numeric Jacobian term computation. It is shown that both the Jacobian and the mismatch term source code can be automatically generated in a conventional language such as Fortran or C. Thereby one starts from a high level, symbolic language with automatic differentiation and code generation facilities. As a result of this software engineering process an efficient, very high quality Newton-Raphson solution code is generated which allows easier implementation of network equation model enhancements and easier code maintenance as compared to hand-coded Fortran or C code.

  18. APC: A New Code for Atmospheric Polarization Computations

    Science.gov (United States)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2014-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface.

  19. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  20. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    Energy Technology Data Exchange (ETDEWEB)

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN, were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that were used to develop the model. In addition, a brief literature search was performed to determine if more recent data have become available since the original model development for model comparison.

  1. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
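
    At the heart of such a ray-tracing simulation is the X-ray attenuation (Beer-Lambert) law, which, together with a Poisson noise model, yields the CNR figure of merit mentioned above. A minimal monochromatic sketch with illustrative attenuation coefficients (not the authors' code):

```python
import numpy as np

# Along a ray, I = I0 * exp(-sum(mu_i * t_i)) over the materials crossed.
# Coefficients are illustrative, for a single (monochromatic) energy.
mu = {"aluminium": 0.05, "steel": 0.25, "void": 0.0}   # 1/mm (hypothetical)

def transmitted(i0, path):
    """Relative intensity after crossing (material, thickness-in-mm) segments."""
    optical_depth = sum(mu[m] * t for m, t in path)
    return i0 * np.exp(-optical_depth)

# A ray through 20 mm of aluminium containing a 2 mm void (a defect),
# versus the same ray through sound material.
i_defect = transmitted(1.0, [("aluminium", 18.0), ("void", 2.0)])
i_sound = transmitted(1.0, [("aluminium", 20.0)])

# Contrast-to-noise ratio, assuming Poisson noise on N detected photons.
n_photons = 1e5
noise = np.sqrt(i_sound * n_photons) / n_photons
print("contrast:", i_defect - i_sound, " CNR:", (i_defect - i_sound) / noise)
```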

  2. Neutron noise computation using panda deterministic code

    Energy Technology Data Exchange (ETDEWEB)

    Humbert, Ph. [CEA Bruyeres le Chatel (France)

    2003-07-01

    PANDA is a general purpose discrete ordinates neutron transport code with deterministic and non-deterministic applications. In this paper we consider the adaptation of PANDA to stochastic neutron counting problems. More specifically, we consider the first two moments of the count number probability distribution. In the first part we recall the equations for the single-neutron and source-induced count number moments, with the corresponding expression for the excess of relative variance, or Feynman function. In the second part we discuss the numerical solution of these inhomogeneous adjoint time-dependent transport coupled equations with discrete ordinate methods. Finally, numerical applications are presented in the third part. (author)
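
    The Feynman function mentioned here is the excess of relative variance of the counts; given counts registered in equal gate widths, its estimator is a one-liner. The sketch below uses synthetic, deliberately over-dispersed counts rather than PANDA output:

```python
import numpy as np

rng = np.random.default_rng(0)

def feynman_y(counts):
    """Excess of relative variance: Y = var/mean - 1 (zero for pure Poisson)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean() - 1.0

# Synthetic gate counts: pure Poisson (no correlations) vs. clustered counts
# mimicking fission-chain correlations (a Poisson-gamma mixture).
poisson = rng.poisson(20.0, size=10_000)
clustered = rng.poisson(rng.gamma(shape=5.0, scale=4.0, size=10_000))

print("Y (Poisson):   %.4f" % feynman_y(poisson))    # ~ 0
print("Y (clustered): %.4f" % feynman_y(clustered))  # clearly > 0
```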

  3. Computer code for intraply hybrid composite design

    Science.gov (United States)

    Chamis, C. C.; Sinclair, J. H.

    1981-01-01

    A computer program has been developed and is described herein for intraply hybrid composite design (INHYD). The program includes several composite micromechanics theories, intraply hybrid composite theories and a hygrothermomechanical theory. These theories provide INHYD with considerable flexibility and capability which the user can exercise through several available options. Key features and capabilities of INHYD are illustrated through selected samples.

  4. A Brief Review of Computational Gene Prediction Methods

    Institute of Scientific and Technical Information of China (English)

    Zhuo Wang; Yazhu Chen; Yixue Li

    2004-01-01

    With the development of genome sequencing for many organisms, more and more raw sequences need to be annotated. Gene prediction by computational methods for finding the location of protein coding regions is one of the essential issues in bioinformatics. Two classes of methods are generally adopted: similarity based searches and ab initio prediction. Here, we review the development of gene prediction methods, summarize the measures for evaluating predictor quality, highlight open problems in this area, and discuss future research directions.

  5. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    ... modern speech coders. In the first part of the thesis, we provide an overview of Sparse Linear Prediction, a set of speech processing tools created by introducing sparsity constraints into the LP framework. This approach defines predictors that look for a sparse residual rather than a minimum variance ... of high-order sparse predictors. These predictors, by modeling efficiently the spectral envelope and the harmonics components with very few coefficients, have direct applications in speech processing, engendering a joint estimation of short-term and long-term predictors. We also give preliminary results ... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given ...
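
    A sparse-residual predictor of the kind the thesis studies can be approximated by replacing the usual 2-norm on the residual with a 1-norm. The sketch below does this with iteratively reweighted least squares on a toy "voiced speech" signal; it is a generic method and synthetic example, not the thesis' algorithms:

```python
import numpy as np

def lpc_sparse_residual(x, order=10, iters=20, eps=1e-6):
    """LP coefficients approximately minimizing ||x - X a||_1 via IRLS."""
    # Autoregressive data matrix: x[n] ~ sum_k a[k] * x[n-k-1].
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a = np.linalg.lstsq(X, y, rcond=None)[0]          # 2-norm starting point
    for _ in range(iters):
        # Reweight so large residuals are penalized ~linearly (1-norm-like).
        sw = np.sqrt(1.0 / (np.abs(y - X @ a) + eps))
        a = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return a, y - X @ a

rng = np.random.default_rng(3)
# Toy "voiced speech": an AR(2) resonance driven by a sparse pulse train.
excitation = np.zeros(400); excitation[::40] = 1.0
x = np.zeros(400)
for n in range(2, 400):
    x[n] = 1.6 * x[n - 1] - 0.81 * x[n - 2] + excitation[n]

a, residual = lpc_sparse_residual(x, order=2)
print("estimated AR coefficients:", np.round(a, 3))   # ~ [1.6, -0.81]
```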

  6. Computer vision cracks the leaf code.

    Science.gov (United States)

    Wilf, Peter; Zhang, Shengping; Chikkerur, Sharat; Little, Stefan A; Wing, Scott L; Serre, Thomas

    2016-03-22

    Understanding the extremely variable, complex shape and venation characters of angiosperm leaves is one of the most challenging problems in botany. Machine learning offers opportunities to analyze large numbers of specimens, to discover novel leaf features of angiosperm clades that may have phylogenetic significance, and to use those characters to classify unknowns. Previous computer vision approaches have primarily focused on leaf identification at the species level. It remains an open question whether learning and classification are possible among major evolutionary groups such as families and orders, which usually contain hundreds to thousands of species each and exhibit many times the foliar variation of individual species. Here, we tested whether a computer vision algorithm could use a database of 7,597 leaf images from 2,001 genera to learn features of botanical families and orders, then classify novel images. The images are of cleared leaves, specimens that are chemically bleached, then stained to reveal venation. Machine learning was used to learn a codebook of visual elements representing leaf shape and venation patterns. The resulting automated system learned to classify images into families and orders with a success rate many times greater than chance. Of direct botanical interest, the responses of diagnostic features can be visualized on leaf images as heat maps, which are likely to prompt recognition and evolutionary interpretation of a wealth of novel morphological characters. With assistance from computer vision, leaves are poised to make numerous new contributions to systematic and paleobotanical studies.

  7. HUDU: The Hanford Unified Dose Utility computer code

    Energy Technology Data Exchange (ETDEWEB)

    Scherpelz, R.I.

    1991-02-01

    The Hanford Unified Dose Utility (HUDU) computer program was developed to provide rapid initial assessment of radiological emergency situations. The HUDU code uses a straight-line Gaussian atmospheric dispersion model to estimate the transport of radionuclides released from an accident site. For dose points on the plume centerline, it calculates internal doses due to inhalation and external doses due to exposure to the plume. The program incorporates a number of features unique to the Hanford Site (operated by the US Department of Energy), including a library of source terms derived from various facilities' safety analysis reports. The HUDU code was designed to run on an IBM-PC or compatible personal computer. The user interface was designed for fast and easy operation with minimal user training. The theoretical basis and mathematical models used in the HUDU computer code are described, as are the computer code itself and the data libraries used. Detailed instructions for operating the code are also included. Appendices to the report contain descriptions of the program modules, listings of HUDU's data library, and descriptions of the verification tests that were run as part of the code development. 14 refs., 19 figs., 2 tabs.
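
    The straight-line Gaussian plume model underlying HUDU reduces, for a ground-level receptor on the plume centerline, to a short formula. The sketch below uses the standard expression with hypothetical sigma parameterizations and release figures; a real code like HUDU would use stability-class dispersion curves and its site-specific source-term library:

```python
import numpy as np

def plume_centerline(q_bq_s, u_m_s, x_m, h_m, a=0.08, b=0.06):
    """Ground-level, plume-centerline air concentration (Bq/m^3) from a
    straight-line Gaussian plume with ground reflection."""
    sigma_y = a * x_m / np.sqrt(1.0 + 0.0001 * x_m)   # hypothetical fit (m)
    sigma_z = b * x_m / np.sqrt(1.0 + 0.0015 * x_m)   # hypothetical fit (m)
    return (q_bq_s / (np.pi * sigma_y * sigma_z * u_m_s)
            * np.exp(-h_m**2 / (2.0 * sigma_z**2)))

# Illustrative release: 1e10 Bq/s at 30 m height in a 3 m/s wind.
for x in (500.0, 1000.0, 5000.0):
    chi = plume_centerline(1e10, 3.0, x, 30.0)
    print(f"x = {x:6.0f} m   chi = {chi:.3e} Bq/m^3")
```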

  8. Experimental methodology for computational fluid dynamics code validation

    Energy Technology Data Exchange (ETDEWEB)

    Aeschliman, D.P.; Oberkampf, W.L.

    1997-09-01

    Validation of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. Typically, CFD code validation is accomplished through comparison of computed results to previously published experimental data that were obtained for some other purpose, unrelated to code validation. As a result, it is a near certainty that not all of the information required by the code, particularly the boundary conditions, will be available. The common approach is therefore unsatisfactory, and a different method is required. This paper describes a methodology developed specifically for experimental validation of CFD codes. The methodology requires teamwork and cooperation between code developers and experimentalists throughout the validation process, and takes advantage of certain synergisms between CFD and experiment. The methodology employs a novel uncertainty analysis technique which helps to define the experimental plan for code validation wind tunnel experiments, and to distinguish between and quantify various types of experimental error. The methodology is demonstrated with an example of surface pressure measurements over a model of varying geometrical complexity in laminar, hypersonic, near perfect gas, 3-dimensional flow.

  9. Computer Security: better code, fewer problems

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2016-01-01

    The origin of many security incidents is negligence or unintentional mistakes made by web developers or programmers. In the rush to complete the work, due to skewed priorities, or out of simple ignorance, basic security principles can be omitted or forgotten. The resulting vulnerabilities lie dormant until the evil side spots them and decides to hit hard. Computer security incidents in the past have put CERN’s reputation at risk due to websites being defaced with negative messages about the Organization, hash files of passwords being extracted, restricted data exposed… And it all started with a little bit of negligence! If you check out the Top 10 web development blunders, you will see that the most prevalent mistakes are: Not filtering input, e.g. accepting “<“ or “>” in input fields even if only a number is expected. Not validating that input: you expect a birth date? So why accept letters? ...
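
    The two blunders called out above, filtering and validating input, take only a few lines to avoid. A minimal, generic Python sketch (not CERN's actual checks):

```python
import html
import re
from datetime import datetime

def read_quantity(raw):
    """A field that must be a number: validate it, don't just accept the string."""
    if not re.fullmatch(r"\d{1,6}", raw.strip()):
        raise ValueError("quantity must be a positive integer")
    return int(raw)

def read_birth_date(raw):
    """You expect a birth date? Then reject anything that isn't one."""
    return datetime.strptime(raw.strip(), "%Y-%m-%d").date()

def render_comment(raw):
    """Filter output: escape '<' and '>' so user input can't inject HTML."""
    return html.escape(raw)

print(read_quantity("42"))
print(read_birth_date("1984-05-23"))
print(render_comment("<script>alert('hi')</script>"))
```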

  10. A proposed framework for computational fluid dynamics code calibration/validation

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.

    1993-12-31

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and a "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance.

  11. A three-dimensional magnetostatics computer code for insertion devices.

    Science.gov (United States)

    Chubar, O; Elleaume, P; Chavanne, J

    1998-05-01

    RADIA is a three-dimensional magnetostatics computer code optimized for the design of undulators and wigglers. It solves boundary magnetostatics problems with magnetized and current-carrying volumes using the boundary integral approach. The magnetized volumes can be arbitrary polyhedrons with non-linear (iron) or linear anisotropic (permanent magnet) characteristics. The current-carrying elements can be straight or curved blocks with rectangular cross sections. Boundary conditions are simulated by the technique of mirroring. Analytical formulae used for the computation of the field produced by a magnetized volume of a polyhedron shape are detailed. The RADIA code is written in object-oriented C++ and interfaced to Mathematica [Mathematica is a registered trademark of Wolfram Research, Inc.]. The code outperforms currently available finite-element packages with respect to the CPU time of the solver and accuracy of the field integral estimations. An application of the code to the case of a wedge-pole undulator is presented.

  12. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks; some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness toward node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized, non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput, several ...

  13. Recent applications of the transonic wing analysis computer code, TWING

    Science.gov (United States)

    Subramanian, N. R.; Holst, T. L.; Thomas, S. D.

    1982-01-01

    An evaluation of the transonic-wing-analysis computer code TWING is given. TWING utilizes a fully implicit approximate factorization iteration scheme to solve the full potential equation in conservative form. A numerical elliptic-solver grid-generation scheme is used to generate the required finite-difference mesh. Several wing configurations were analyzed, and the limits of applicability of this code were evaluated. Comparisons of computed results were made with available experimental data. Results indicate that the code is robust, accurate (when significant viscous effects are not present), and efficient. TWING generally produces solutions an order of magnitude faster than other conservative full potential codes using successive-line overrelaxation. The present method is applicable to a wide range of isolated wing configurations including high-aspect-ratio transport wings and low-aspect-ratio, high-sweep, fighter configurations.

  14. Compendium of computer codes for the safety analysis of fast breeder reactors

    Energy Technology Data Exchange (ETDEWEB)

    1977-10-01

    The objective of the compendium is to provide the reader with a guide which briefly describes many of the computer codes used for liquid metal fast breeder reactor safety analyses, since it is for this system that most of the codes have been developed. The compendium is designed to address the following frequently asked questions from individuals in licensing and research and development activities: (1) What does the code do? (2) To what safety problems has it been applied? (3) What are the code's limitations? (4) What is being done to remove these limitations? (5) How does the code compare with experimental observations and other code predictions? (6) What reference documents are available?

  15. Development of a computer code for thermal hydraulics of reactors (THOR). [BWR and PWR]

    Energy Technology Data Exchange (ETDEWEB)

    Wulff, W

    1975-01-01

    The purpose of the advanced code development work is to construct a computer code for the prediction of thermohydraulic transients in water-cooled nuclear reactor systems. The fundamental formulation of fluid dynamics is to be based on the one-dimensional drift flux model for non-homogeneous, non-equilibrium flows of two-phase mixtures. Particular emphasis is placed on component modeling, automatic prediction of initial steady state conditions, inclusion of one-dimensional transient neutron kinetics, freedom in the selection of computed spatial detail, development of reliable constitutive descriptions, and modular code structure. Numerical solution schemes have been implemented to integrate simultaneously the one-dimensional transient drift flux equations. The lumped-parameter modeling analyses of thermohydraulic transients in the reactor core and in the pressurizer have been completed. The code development for the prediction of the initial steady state has been completed with preliminary representation of individual reactor system components. A program has been developed to predict critical flow expanding from a dead-ended pipe; the computed results have been compared and found in good agreement with idealized flow solutions. Transport properties for liquid water and water vapor have been coded and verified.
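
    For reference, the one-dimensional drift flux model closes the two-phase mixture equations with a kinematic relation of the standard Zuber-Findlay form (the particular constitutive correlations adopted in THOR are not reproduced here):

        \[
        \langle\langle u_g \rangle\rangle = C_0 \langle j \rangle + V_{gj},
        \]

    where \(\langle\langle u_g \rangle\rangle\) is the void-fraction-weighted mean gas velocity, \(\langle j \rangle\) the mixture volumetric flux, \(C_0\) the distribution parameter, and \(V_{gj}\) the drift velocity; choosing \(C_0\) and \(V_{gj}\) per flow regime is what lets one set of equations cover non-homogeneous, non-equilibrium flow.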

  16. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    Science.gov (United States)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  17. Prediction Structures for Multi-view Video Coding

    Directory of Open Access Journals (Sweden)

    Heera Narayanan

    2013-07-01

    Removing statistical redundancies is the key to achieving efficient video compression. Multi-view Video Coding (MVC) uses prediction structures for removing spatial and temporal redundancies. The compression is based on multiple reference frames, as in the H.264 standard. The reference frames are managed by following the hierarchical prediction structure. Intra-frame, inter-frame and inter-view prediction schemes are efficiently combined to achieve compression from video sequences simultaneously generated using multiple camera systems. A MATLAB simulation of an IPP hierarchical prediction structure is performed to test the efficiency of temporal and spatial prediction algorithms for MVC.

  18. Least-Square Prediction for Backward Adaptive Video Coding

    OpenAIRE

    2006-01-01

    Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in im...
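
    To make the idea concrete, here is a minimal backward-adaptive least-square predictor for one pixel; the causal template, window size, and fallback are our illustrative choices rather than the paper's exact formulation:

        import numpy as np

        OFF = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]  # causal neighbors in raster order

        def lsp_predict(img, y, x, win=6):
            """Predict img[y, x] from causal neighbors; weights refit locally.
            Assumes 1 <= y and 1 <= x < img.shape[1] - 1."""
            A, b = [], []
            for j in range(max(1, y - win), y + 1):
                for i in range(max(1, x - win), min(img.shape[1] - 1, x + win)):
                    if j == y and i >= x:      # keep the training set strictly causal
                        break
                    A.append([img[j + dy, i + dx] for dy, dx in OFF])
                    b.append(img[j, i])
            if not A:                          # not enough context near the border
                return float(img[y, x - 1])
            w, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
            return float(np.dot(w, [img[y + dy, x + dx] for dy, dx in OFF]))

    Because the decoder sees the same causal pixels, it can refit the same weights, so no prediction coefficients need to be transmitted; that is the "backward adaptive" property.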

  19. From structure prediction to genomic screens for novel non-coding RNAs

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.

    2011-01-01

    Abstract: Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction...

  20. A Predictive Coding Account of Psychotic Symptoms in Autism Spectrum Disorder

    Science.gov (United States)

    van Schalkwyk, Gerrit I.; Volkmar, Fred R.; Corlett, Philip R.

    2017-01-01

    The co-occurrence of psychotic and autism spectrum disorder (ASD) symptoms represents an important clinical challenge. Here we consider this problem in the context of a computational psychiatry approach that has been applied to both conditions--predictive coding. Some symptoms of schizophrenia have been explained in terms of a failure of top-down…

  1. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang

    2009-01-01

    We present a new design method for robust low-delay coding of auto-regressive (AR) sources for transmission across erasure channels. The method is based on Linear Predictive Coding (LPC) with Kalman estimation at the decoder. The method designs the encoder and decoder off-line through an iterative ... (SNR) compared to the same coding framework optimized for no loss. We furthermore investigate the impact on decoding performance when channel losses are correlated. We find that the method still provides substantial improvements in this case despite being designed for i.i.d. losses.
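
    To illustrate the decoder side, the sketch below runs a scalar Kalman filter on an AR(1) source received across an i.i.d. erasure channel: on a loss the filter simply propagates its prediction. The AR order, noise levels, and loss rate are our toy assumptions, not the paper's design:

        import numpy as np

        rng = np.random.default_rng(0)
        a, q, r, p_loss, n = 0.95, 1.0, 0.1, 0.2, 200
        x = np.zeros(n)
        for t in range(1, n):                        # AR(1) source
            x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
        y = x + rng.normal(0.0, np.sqrt(r), n)       # coding noise modeled as measurement noise
        arrived = rng.random(n) > p_loss             # erasure pattern

        xh, P, est = 0.0, 1.0, np.zeros(n)
        for t in range(n):
            xh, P = a * xh, a * a * P + q            # time update (prediction)
            if arrived[t]:                           # measurement update only if not erased
                K = P / (P + r)
                xh, P = xh + K * (y[t] - xh), (1.0 - K) * P
            est[t] = xh

        print("reconstruction MSE:", np.mean((est - x) ** 2))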

  2. AGR-2 safety test predictions using the PARFUME code

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise P. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    This report documents calculations performed to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the second irradiation test of the Advanced Gas Reactor program (AGR-2). The calculations include the modeling of the AGR-2 irradiation that occurred from June 2010 to October 2013 in the Advanced Test Reactor (ATR) and the modeling of a safety testing phase to support safety tests planned at Oak Ridge National Laboratory and at Idaho National Laboratory (INL) for a selection of AGR-2 compacts. The heat-up of AGR-2 compacts is a critical component of the AGR-2 fuel performance evaluation, and its objectives are to identify the effect of accident test temperature, burnup, and irradiation temperature on the performance of the fuel at elevated temperature. Safety testing of compacts will be followed by detailed examinations of the fuel particles to further evaluate fission product retention and behavior of the kernel and coatings. The modeling was performed using the particle fuel model computer code PARFUME developed at INL. PARFUME is an advanced gas-cooled reactor fuel performance modeling and analysis code (Miller 2009). It has been developed as an integrated mechanistic code that evaluates the thermal, mechanical, and physico-chemical behavior of fuel particles during irradiation to determine the failure probability of a population of fuel particles given the particle-to-particle statistical variations in physical dimensions and material properties that arise from the fuel fabrication process, accounting for all viable mechanisms that can lead to particle failure. The code also determines the diffusion of fission products from the fuel through the particle coating layers, and through the fuel matrix to the coolant boundary. The subsequent release of fission products is calculated at the compact level (release of fission products from the compact). PARFUME calculates the

  4. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    Directory of Open Access Journals (Sweden)

    Robert K Bradley

    We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA genes. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  5. Highly Optimized Code Generation for Stencil Codes with Computation Reuse for GPUs

    Institute of Scientific and Technical Information of China (English)

    Wen-Jing Ma; Kan Gao; Guo-Ping Long

    2016-01-01

    Computation reuse is known as an effective optimization technique. However, due to the complexity of modern GPU architectures, there is as yet not enough understanding regarding the intriguing implications of the interplay of computation reuse and hardware specifics on application performance. In this paper, we propose an automatic code generator for a class of stencil codes with inherent computation reuse on GPUs. For such applications, the proper reuse of intermediate results, combined with careful register and on-chip local memory usage, has profound implications on performance. Current state of the art does not address this problem in depth, partially due to the lack of a good program representation that can expose all potential computation reuse. In this paper, we leverage the computation overlap graph (COG), a simple representation of data dependence and data reuse with “element view”, to expose potential reuse opportunities. Using COG, we propose a portable code generation and tuning framework for GPUs. Compared with current state-of-the-art code generators, our experimental results show up to 56.7% performance improvement on modern GPUs such as NVIDIA C2050.

  6. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    Science.gov (United States)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
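
    The limitation described is Amdahl's law: if a fraction s of the work is inherently sequential, the speedup on p processors is bounded by

        \[
        S(p) = \frac{1}{s + (1 - s)/p} \le \frac{1}{s},
        \]

    so, for example, a code that is 20% sequential can never exceed a speedup of 5, however many hypercube nodes are available. (The abstract does not quote the sequential fractions of CSTEM and METCAN; the bound is the standard one.)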

  7. Dream to predict? REM dreaming as prospective coding

    Directory of Open Access Journals (Sweden)

    Sue eLlewellyn

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies, through expectations derived from discerning patterns in associated past experiences, would have enhanced fitness and survival. For example, food and water are essential for survival; associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning, within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually relevant elements only, thus rendering the non-relevant phenomena indeterminate, another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being

  8. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli

    2007-02-01

    Some numerical methods for reactor cell, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space–time calculations. The computer code TRIFON solves the space–energy problem in (r, z) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space–time neutron processes. A modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.

  9. Computer code for double beta decay QRPA based calculations

    Energy Technology Data Exchange (ETDEWEB)

    Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)

    2014-11-11

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β± processes, is extended to include also the nuclear double beta decay.

  10. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is an increasingly necessary part of program design courses in college education. However, some students plagiarize homework, submitting copied source code with only slight modifications. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  11. Connecting Neural Coding to Number Cognition: A Computational Account

    Science.gov (United States)

    Prather, Richard W.

    2012-01-01

    The current study presents a series of computational simulations that demonstrate how the neural coding of numerical magnitude may influence number cognition and development. This includes behavioral phenomena cataloged in cognitive literature such as the development of numerical estimation and operational momentum. Though neural research has…

  12. General review of the MOSTAS computer code for wind turbines

    Science.gov (United States)

    Dugundji, J.; Wendell, J. H.

    1981-01-01

    The MOSTAS computer code for wind turbine analysis is reviewed, and techniques and methods used in its analyses are described. Impressions of its strengths and weaknesses, and recommendations for its application, modification, and further development are made. Basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed.

  13. MMA, A Computer Code for Multi-Model Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Eileen P. Poeter and Mary C. Hill

    2007-08-20

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations.
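
    As an illustration of how such discrimination criteria yield model probabilities, the generic Akaike-weight recipe is sketched below; MMA's own defaults follow the cited criteria, and the numbers here are hypothetical:

        import numpy as np

        def akaike_weights(aic):
            """Turn per-model AIC values into normalized model weights."""
            d = np.asarray(aic, float) - min(aic)    # differences from the best model
            w = np.exp(-0.5 * d)
            return w / w.sum()

        aics = [212.4, 210.1, 215.9]                 # three calibrated alternative models
        w = akaike_weights(aics)
        prediction = np.dot(w, [3.1, 2.8, 3.5])      # model-averaged prediction

    The same weights can also be applied to parameter estimates, and the spread of the per-model predictions around the weighted average gives the between-model component of the uncertainty.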

  14. A Neuronal Model of Predictive Coding Accounting for the Mismatch Negativity

    OpenAIRE

    Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas

    2012-01-01

    The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spi...

  15. ANL/HTP: a computer code for the simulation of heat pipe operation

    Energy Technology Data Exchange (ETDEWEB)

    McLennan, G.A.

    1983-11-01

    ANL/HTP is a computer code for the simulation of heat pipe operation, to predict heat pipe performance and temperature distributions during steady state operation. Source and sink temperatures and heat transfer coefficients can be set as input boundary conditions, and varied for parametric studies. Five code options are included to calculate performance for fixed operating conditions, or to vary any one of the four boundary conditions to determine the heat-pipe-limited performance. The performance limits included are viscous, sonic, entrainment, capillary, and boiling, using the best available theories to model these effects. The code has built-in models for a number of wick configurations - open grooves, screen-covered grooves, screen-wrap, and arteries, with provision for expansion. The current version of the code includes the thermophysical properties of sodium as the working fluid in an expandable subroutine. The code-calculated performance agrees quite well with measured experimental data.

  16. Computed radiography simulation using the Monte Carlo code MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)

    2010-09-15

    Simulating X-ray images has been of great interest in recent years, as it makes possible an analysis of how X-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.

  17. Predicting code-switching in multilingual communication for immigrant communities

    NARCIS (Netherlands)

    Papalexakis, E. Evangelos; Nguyen, Dong; Doğruöz, A. Seza

    2014-01-01

    Immigrant communities host multilingual speakers who switch across languages and cultures in their daily communication practices. Although there are in-depth linguistic descriptions of code-switching across different multilingual communication settings, there is a need for automatic prediction of co

  18. Predictive coding for motion stimuli in human early visual cortex

    NARCIS (Netherlands)

    Schellekens, Wouter; van Wezel, Richard J A; Petridou, Natalia; Ramsey, Nick F.; Raemaekers, Mathijs

    2016-01-01

    The current study investigates if early visual cortical areas, V1, V2 and V3, use predictive coding to process motion information. Previous studies have reported biased visual motion responses at locations where novel visual information was presented (i.e., the motion trailing edge), which is plausi

  20. Fault-tolerant quantum computing with color codes

    CERN Document Server

    Landahl, Andrew J; Rice, Patrick R

    2011-01-01

    We present and analyze protocols for fault-tolerant quantum computing using color codes. We present circuit-level schemes for extracting the error syndrome of these codes fault-tolerantly. We further present an integer-program-based decoding algorithm for identifying the most likely error given the syndrome. We simulated our syndrome extraction and decoding algorithms against three physically-motivated noise models using Monte Carlo methods, and used the simulations to estimate the corresponding accuracy thresholds for fault-tolerant quantum error correction. We also used a self-avoiding walk analysis to lower-bound the accuracy threshold for two of these noise models. We present and analyze two architectures for fault-tolerantly computing with these codes: one in which 2D arrays of qubits are stacked atop each other and one in a single 2D substrate. Our analysis demonstrates that color codes perform slightly better than Kitaev's surface codes when circuit details are ignored. When these details are considered, w...

  1. Sodium fast reactor gaps analysis of computer codes and models for accident analysis and reactor safety.

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache – CEA, France)

    2011-06-01

    This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, the KAERI, the JAEA, and the CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions are drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the

  2. New Parallel computing framework for radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Kostin, M.A.; /Michigan State U., NSCL; Mokhov, N.V.; /Fermilab; Niita, K.; /JAERI, Tokai

    2010-09-01

    A new parallel computing framework has been developed to use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is significantly independent of the radiation transport codes it can be used with, and is connected to the codes by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations with a saved checkpoint file. The checkpoint facility can be used in single process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with the scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and networks of workstations, where interference from other users is possible.

  4. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1983-06-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  5. LMFBR models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; McAdoo, J.W.; Bjerke, M.A.

    1981-10-01

    Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-²³³U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

  6. User's manual for HDR3 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Arundale, C.J.

    1982-10-01

    A description of the HDR3 computer code and instructions for its use are provided. HDR3 calculates space heating costs for a hot dry rock (HDR) geothermal space heating system. The code also compares these costs to those of a specific oil heating system in use at the National Aeronautics and Space Administration Flight Center at Wallops Island, Virginia. HDR3 allows many HDR system parameters to be varied so that the user may examine various reservoir management schemes and may optimize reservoir design to suit a particular set of geophysical and economic parameters.

  7. Compendium of computer codes for the researcher in magnetic fusion energy

    Energy Technology Data Exchange (ETDEWEB)

    Porter, G.D. (ed.)

    1989-03-10

    This is a compendium of computer codes available to the fusion researcher. It is intended to be a document that permits a quick evaluation of the tools available to the experimenter who wants to both analyze his data, and compare the results of his analysis with the predictions of available theories. This document will be updated frequently to maintain its usefulness. I would appreciate receiving further information about codes not included here from anyone who has used them. The information required includes a brief description of the code (including any special features), a bibliography of the documentation available for the code and/or the underlying physics, a list of people to contact for help in running the code, instructions on how to access the code, and a description of the output from the code. Wherever possible, the code contacts should include people from each of the fusion facilities so that the novice can talk to someone "down the hall" when he first tries to use a code. I would also appreciate any comments about possible additions and improvements in the index. I encourage any additional criticism of this document. 137 refs.

  8. Computer codes for evaluation of control room habitability (HABIT)

    Energy Technology Data Exchange (ETDEWEB)

    Stage, S.A. [Pacific Northwest Lab., Richland, WA (United States)

    1996-06-01

    This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.

  10. War of Ontology Worlds: Mathematics, Computer Code, or Esperanto?

    Science.gov (United States)

    Rzhetsky, Andrey; Evans, James A.

    2011-01-01

    The use of structured knowledge representations—ontologies and terminologies—has become standard in biomedicine. Definitions of ontologies vary widely, as do the values and philosophies that underlie them. In seeking to make these views explicit, we conducted and summarized interviews with a dozen leading ontologists. Their views clustered into three broad perspectives that we summarize as mathematics, computer code, and Esperanto. Ontology as mathematics puts the ultimate premium on rigor and logic, symmetry and consistency of representation across scientific subfields, and the inclusion of only established, non-contradictory knowledge. Ontology as computer code focuses on utility and cultivates diversity, fitting ontologies to their purpose. Like computer languages C++, Prolog, and HTML, the code perspective holds that diverse applications warrant custom designed ontologies. Ontology as Esperanto focuses on facilitating cross-disciplinary communication, knowledge cross-referencing, and computation across datasets from diverse communities. We show how these views align with classical divides in science and suggest how a synthesis of their concerns could strengthen the next generation of biomedical ontologies. PMID:21980276

  11. A Computer Code for Swirling Turbulent Axisymmetric Recirculating Flows in Practical Isothermal Combustor Geometries

    Science.gov (United States)

    Lilley, D. G.; Rhode, D. L.

    1982-01-01

    A primitive pressure-velocity variable finite difference computer code was developed to predict swirling recirculating inert turbulent flows in axisymmetric combustors in general, and for application to a specific idealized combustion chamber with sudden or gradual expansion. The technique involves a staggered grid system for axial and radial velocities, a line relaxation procedure for efficient solution of the equations, a two-equation k-epsilon turbulence model, a stairstep boundary representation of the expansion flow, and realistic accommodation of swirl effects. A user's manual is presented that addresses the computational problem and shows how the mathematical basis and computational scheme can be translated into a computer program. A flow chart, a FORTRAN IV listing, notes on the various subroutines, and a user's guide are supplied as an aid to prospective users of the code.

  12. Benchmarking of computer codes and approaches for modeling exposure scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Seitz, R.R. [EG and G Idaho, Inc., Idaho Falls, ID (United States); Rittmann, P.D.; Wood, M.I. [Westinghouse Hanford Co., Richland, WA (United States); Cook, J.R. [Westinghouse Savannah River Co., Aiken, SC (United States)

    1994-08-01

    The US Department of Energy Headquarters established a performance assessment task team (PATT) to integrate the activities of DOE sites that are preparing performance assessments for the disposal of newly generated low-level waste. The PATT chartered a subteam with the task of comparing computer codes and exposure scenarios used for dose calculations in performance assessments. This report documents the efforts of the subteam. Computer codes considered in the comparison include GENII, PATHRAE-EPA, MICROSHIELD, and ISOSHLD. Calculations were also conducted using spreadsheets to provide a comparison at the most fundamental level. Calculations and modeling approaches are compared for unit radionuclide concentrations in water and soil for the ingestion, inhalation, and external dose pathways. Over 30 tables comparing inputs and results are provided.

  13. Computational Complexity of Decoding Orthogonal Space-Time Block Codes

    CERN Document Server

    Ayanoglu, Ender; Karipidis, Eleftherios

    2009-01-01

    The computational complexity of optimum decoding is quantified for an orthogonal space-time block code G satisfying the orthogonality property G^H G = c (|s_1|^2 + ... + |s_k|^2) I, where G^H denotes the Hermitian transpose of G, the s_i are the symbols carried by the code, I is the identity matrix, and c is a positive integer. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1],[2], and unifies them with the results from the literature. In addition, a number of results from the literature are extended to the case c > 1.
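
    For the simplest code in this family (the Alamouti scheme, where c = 1 and k = 2), optimum decoding reduces to linear combining followed by independent symbol decisions; a sketch under known, flat-fading channel assumptions:

        import numpy as np

        rng = np.random.default_rng(1)
        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
        s1, s2 = rng.choice(qpsk, 2)                      # symbols to send
        h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
        n1, n2 = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

        r1 = h1 * s1 + h2 * s2 + n1                       # slot 1: send (s1, s2)
        r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + n2    # slot 2: send (-s2*, s1*)

        g = abs(h1) ** 2 + abs(h2) ** 2                   # the orthogonality gain
        z1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g    # decouples s1
        z2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g    # decouples s2
        s1_hat = qpsk[np.argmin(np.abs(qpsk - z1))]       # symbol-wise ML decisions
        s2_hat = qpsk[np.argmin(np.abs(qpsk - z2))]

    The same decoupling underlies the equivalence of the decoding techniques analyzed in the paper; the complexity question is how many operations are needed to form the combining variables and decisions.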

  14. Status of the MELTSPREAD-1 computer code for the analysis of transient spreading of core debris melts

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.; Chu, C.C.

    1992-01-01

    A transient, one dimensional, finite difference computer code (MELTSPREAD-1) has been developed to predict spreading behavior of high temperature melts flowing over concrete and/or steel surfaces submerged in water, or without the effects of water if the surface is initially dry. This paper provides a summary overview of models and correlations currently implemented in the code, code validation activities completed thus far, LWR spreading-related safety issues for which the code has been applied, and the status of documentation for the code.

  16. Bragg optics computer codes for neutron scattering instrument design

    Energy Technology Data Exchange (ETDEWEB)

    Popovici, M.; Yelon, W.B.; Berliner, R.R. [Missouri Univ. Research Reactor, Columbia, MO (United States); Stoica, A.D. [Institute of Physics and Technology of Materials, Bucharest (Romania)

    1997-09-01

    Computer codes for neutron crystal spectrometer design, optimization and experiment planning are described. Phase space distributions, linewidths and absolute intensities are calculated by matrix methods in an extension of the Cooper-Nathans resolution function formalism. For modeling the Bragg reflection on bent crystals the lamellar approximation is used. Optimization is done by satisfying conditions of focusing in scattering and in real space, and by numerically maximizing figures of merit. Examples for three-axis and two-axis spectrometers are given.

  17. General review of the MOSTAS computer code for wind turbines

    Energy Technology Data Exchange (ETDEWEB)

    Dugundji, J.; Wendell, J.H.

    1981-06-01

    The MOSTAS computer code for wind turbine analysis is reviewed, and the techniques and methods used in its analyses are described in some detail. Some impressions of its strengths and weaknesses, and some recommendations for its application, modification, and further development are made. Additionally, some basic techniques used in wind turbine stability and response analyses for systems with constant and periodic coefficients are reviewed in the Appendices.

  18. Refactoring Android Java Code for On-Demand Computation Offloading

    OpenAIRE

    Zhang, Ying; Huang, Gang; Liu, Xuanzhe; Zhang, Wei; Zhang, Wei; Mei, Hong; Yang, Shunxiang

    2012-01-01

    Computation offloading is a promising way to improve the performance as well as reduce the battery energy consumption of a smartphone application by executing some part of the application on a remote server. Supporting such a capability is not easy for smartphone app developers: 1) correctness: some code, e.g. that for GPS, gravity and other sensors, can only run on the smartphone, so the developers have to identify which part of the application cannot be offload...

  19. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma (Alion Science and Technology, Albuquerque, NM)

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  20. User's manual for MODCAL: Bounding surface soil plasticity model calibration and prediction code, volume 2

    Science.gov (United States)

    DeNatale, J. S.; Herrmann, L. R.; Dafalias, Y. F.

    1983-02-01

    In order to reduce the complexity of the model calibration process, a computer-aided automated procedure has been developed and tested. The computer code employs a Quasi-Newton optimization strategy to locate that set of parameter values which minimizes the discrepancy between the model predictions and the experimental observations included in the calibration data base. Through application to a number of real soils, the automated procedure has been found to be an efficient, reliable and economical means of accomplishing model calibration. Although the code was developed specifically for use with the Bounding Surface plasticity model, it can readily be adapted to other constitutive formulations. Since the code greatly reduces the dependence of calibration success on user expertise, it significantly increases the accessibility and usefulness of sophisticated material models to the general engineering community.
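
    In modern terms, the calibration loop it describes is a quasi-Newton fit of model parameters to laboratory data; the sketch below uses a placeholder one-dimensional stress-strain model, not the Bounding Surface formulation itself, so all names and numbers are illustrative:

        import numpy as np
        from scipy.optimize import minimize

        def model(theta, strains):
            """Hypothetical hardening curve standing in for the constitutive model."""
            a, b = theta
            return a * (1.0 - np.exp(-b * strains))

        strains = np.linspace(0.0, 0.05, 20)
        observed = model([150.0, 60.0], strains) + np.random.default_rng(2).normal(0.0, 1.0, 20)

        misfit = lambda th: float(np.sum((model(th, strains) - observed) ** 2))
        fit = minimize(misfit, x0=[100.0, 30.0], method="BFGS")  # quasi-Newton search
        print(fit.x)                                             # calibrated parameters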

  1. Overview of numerical codes developed for predicted electrothermal deicing of aircraft blades

    Science.gov (United States)

    Keith, Theo G.; De Witt, Kenneth J.; Wright, William B.; Masiulaniec, K. Cyril

    1988-01-01

    An overview of the deicing computer codes that have been developed at the University of Toledo under sponsorship of the NASA-Lewis Research Center is presented. These codes simulate the transient heat conduction and phase change occurring in an electrothermal deicier pad that has an arbitrary accreted ice shape on its surface. The codes are one-dimensional rectangular, two-dimensional rectangular, and two-dimensional with a coordinate transformation to model the true blade geometry. All modifications relating to the thermal physics of the deicing problem that have been incorporated into the codes will be discussed. Recent results of reformulating the codes using different numerical methods to increase program efficiency are described. In particular, this reformulation has enabled a more comprehensive two-dimensional code to run in much less CPU time than the original version. The code predictions are compared with experimental data obtained in the NASA-Lewis Icing Research Tunnel with a UH1H blade fitted with a B. F. Goodrich electrothermal deicer pad. Both continuous and cyclic heater firing cases are considered. The major objective in this comparison is to illustrate which codes give acceptable results in different regions of the airfoil for different heater firing sequences.

  2. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    Science.gov (United States)

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling due to the different characteristics of HS images in their spectral and shape domain of panchromatic imagery compared to traditional videos. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard High Efficiency Video Coding (HEVC) for HS images is proposed. An HS image presents a wealth of data where every pixel is considered a vector for different spectral bands. By quantitative comparison and analysis of pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit distribution of the known pixel vector, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as the additional reference band together with the immediate previous band when we apply the HEVC. Every spectral band of an HS image is treated like it is an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified by three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
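
    To picture the RPM idea at its simplest, the sketch below predicts spectral band k from band k-1 with a per-band affine least-squares fit; the paper's Gaussian-mixture modelling and HEVC integration are far richer, so treat the setup as illustrative only:

        import numpy as np

        rng = np.random.default_rng(4)
        bands = rng.normal(size=(5, 64, 64)).cumsum(axis=0)   # toy correlated HS cube

        def predict_band(prev_band, cur_band):
            """Affine least-squares fit of the current band from the previous one."""
            A = np.stack([prev_band.ravel(), np.ones(prev_band.size)], axis=1)
            coef, *_ = np.linalg.lstsq(A, cur_band.ravel(), rcond=None)
            return (A @ coef).reshape(cur_band.shape)

        ref = predict_band(bands[2], bands[3])     # extra reference for band 3
        residual = bands[3] - ref                  # what the video coder would encode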

  4. Lossless predictive coding for images with Bayesian treatment.

    Science.gov (United States)

    Liu, Jing; Zhai, Guangtao; Yang, Xiaokang; Chen, Li

    2014-12-01

    Adaptive predictors have long been used for lossless predictive coding of images. Most existing lossless predictive coding techniques focus on the suitability of the prediction model for the training set under the assumption of local consistency, which may not hold on object boundaries and can cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear models and design a new algorithm to automatically select one suitable model for prediction. From the Bayesian perspective, the model with maximum posterior probability is considered the best. Two types of model evidence are included in our algorithm. One is traditional training evidence, which represents a model's suitability for the current pixel under the assumption of local consistency. The other is target evidence, which is proposed to express the preference for different models from the perspective of patch redundancy. It is shown that the fusion of training evidence and target evidence jointly exploits the benefits of local consistency and patch redundancy. As a result, our proposed predictor is more suitable for natural images with textures and object boundaries. Comprehensive experiments demonstrate that the proposed predictor achieves higher efficiency than state-of-the-art lossless predictors.
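
    The core selection idea can be sketched as follows (a toy version with assumed candidate predictors and a plain squared-error score on a causal window, not the paper's Bayesian evidence computation):

        import numpy as np

        # Candidate linear predictors over the causal neighborhood (r, c >= 1).
        PREDICTORS = {
            "west":  lambda img, r, c: float(img[r, c - 1]),
            "north": lambda img, r, c: float(img[r - 1, c]),
            "avg":   lambda img, r, c: (float(img[r, c - 1]) + float(img[r - 1, c])) / 2.0,
        }

        def predict_pixel(img, r, c, win=4):
            """Score each predictor on a causal training window (local
            consistency) and apply the best one to the current pixel."""
            best, best_err = "west", np.inf
            for name, f in PREDICTORS.items():
                err = sum((float(img[i, j]) - f(img, i, j)) ** 2
                          for i in range(max(1, r - win), r)
                          for j in range(max(1, c - win), c))
                if err < best_err:
                    best, best_err = name, err
            return PREDICTORS[best](img, r, c)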

  5. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01

    A one-dimensional, multicell, Eulerian finite difference computer code (MELTSPREAD-1) has been developed to provide an improved prediction of the gravity-driven spreading and thermal interactions of molten corium flowing over a concrete or steel surface. In this paper, the modeling incorporated into the code is described, and the spreading models are benchmarked against a simple "dam break" problem as well as water-simulant spreading data obtained in a scaled apparatus of the Mk I containment. Results are also presented for a scoping calculation of the spreading behavior and shell thermal response in the full-scale Mk I system following vessel melt-through. 24 refs., 15 figs.
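
    For reference, the "dam break" benchmark mentioned above has a classical analytic solution (the Ritter solution for an ideal 1-D dam break over a dry, frictionless bed); a sketch, assuming the dam sits at x = 0 and t > 0:

        import numpy as np

        def ritter_depth(x, t, h0, g=9.81):
            """Ritter depth profile: reservoir depth h0, dam at x = 0."""
            c0 = np.sqrt(g * h0)                     # initial wave celerity
            h = (2.0 * c0 - x / t) ** 2 / (9.0 * g)  # rarefaction fan
            h = np.where(x <= -c0 * t, h0, h)        # undisturbed reservoir
            h = np.where(x >= 2.0 * c0 * t, 0.0, h)  # dry bed ahead of front
            return h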

  6. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model achieves a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high-quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. The method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet-transform-based embedded zerotree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
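
    The analysis-by-synthesis step at the heart of CELP-style coders can be sketched as follows (illustrative only: a generic 1-D block and AR synthesis filter stand in for the paper's 3-D microblock structure):

        import numpy as np
        from scipy.signal import lfilter

        def best_excitation(target, codebook, ar_coeffs):
            """Pick the codebook entry (and gain) whose synthesized output
            is closest to the target block; ar_coeffs = [1, a1, ..., ap]."""
            best_idx, best_gain, best_err = -1, 0.0, np.inf
            for i, code in enumerate(codebook):
                synth = lfilter([1.0], ar_coeffs, code)  # AR synthesis filter
                gain = synth @ target / max(synth @ synth, 1e-12)
                err = np.sum((target - gain * synth) ** 2)
                if err < best_err:
                    best_idx, best_gain, best_err = i, gain, err
            return best_idx, best_gain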

  7. TERRA: a computer code for simulating the transport of environmentally released radionuclides through agriculture

    Energy Technology Data Exchange (ETDEWEB)

    Baes, C.F. III; Sharp, R.D.; Sjoreen, A.L.; Hermann, O.W.

    1984-11-01

    TERRA is a computer code which calculates concentrations of radionuclides and ingrowing daughters in surface and root-zone soil, produce and feed, beef, and milk from a given deposition rate at any location in the conterminous United States. The code is fully integrated with seven other computer codes which together comprise a Computerized Radiological Risk Investigation System, CRRIS. Output from either the long range (> 100 km) atmospheric dispersion code RETADD-II or the short range (<80 km) atmospheric dispersion code ANEMOS, in the form of radionuclide air concentrations and ground deposition rates by downwind location, serves as input to TERRA. User-defined deposition rates and air concentrations may also be provided as input to TERRA through use of the PRIMUS computer code. The environmental concentrations of radionuclides predicted by TERRA serve as input to the ANDROS computer code which calculates population and individual intakes, exposures, doses, and risks. TERRA incorporates models to calculate uptake from soil and atmospheric deposition on four groups of produce for human consumption and four groups of livestock feeds. During the environmental transport simulation, intermediate calculations of interception fraction for leafy vegetables, produce directly exposed to atmospherically depositing material, pasture, hay, and silage are made based on location-specific estimates of standing crop biomass. Pasture productivity is estimated by a model which considers the number and types of cattle and sheep, pasture area, and annual production of other forages (hay and silage) at a given location. Calculations are made of the fraction of grain imported from outside the assessment area. TERRA output includes the above calculations and estimated radionuclide concentrations in plant produce, milk, and a beef composite by location.
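
    The kind of balance such terrestrial transport codes solve can be illustrated with a generic first-order compartment model (a hedged sketch, not TERRA's actual formulation; D is a constant deposition rate and lam the total removal rate constant):

        import numpy as np

        def soil_concentration(D, lam, t):
            """C(t) solving dC/dt = D - lam*C with C(0) = 0."""
            return (D / lam) * (1.0 - np.exp(-lam * t))

        def plant_concentration(c_soil, Bv):
            """Plant concentration via an empirical soil-to-plant factor Bv."""
            return Bv * c_soil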

  8. Improvement of level-1 PSA computer code package

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase I of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled `The Improvement of Level-1 PSA Computer Codes`, is divided into two main activities: (1) improvement of level-1 PSA methodology, and (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP was converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling were developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) was developed, considering dependency among source data. A computer program which handles dependency among data sources was also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods were reviewed and a CCF database was established. Impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database, was also developed. A CCF analysis reflecting a plant-specific defensive strategy against CCF events was also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique was applied to review the feasibility study of on-line maintenance and to the prioritization of in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies were adopted and applied to improving the reliability of the emergency diesel generators (EDG) of nuclear power plants. To support the RCA and RCM analyses, two software programs, EPIS and RAM Pro, were developed. (author). 129 refs., 20 tabs., 60 figs.
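
    The flavor of the cutset generation step can be conveyed with a tiny top-down (MOCUS-style) expansion (a generic sketch, not the KIRAP algorithm):

        import itertools

        def cut_sets(gate, tree):
            """tree maps gate -> ('AND'|'OR', children); leaves are basic events."""
            if gate not in tree:
                return [{gate}]
            op, children = tree[gate]
            expanded = [cut_sets(c, tree) for c in children]
            if op == "OR":
                sets = [s for branch in expanded for s in branch]
            else:  # AND: union over the cartesian product of child cut sets
                sets = [set().union(*combo) for combo in itertools.product(*expanded)]
            return [s for s in sets if not any(t < s for t in sets)]  # keep minimal

        tree = {"TOP": ("AND", ["G1", "B3"]), "G1": ("OR", ["B1", "B2"])}
        print(cut_sets("TOP", tree))  # [{'B1', 'B3'}, {'B2', 'B3'}]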

  9. Computationally efficient sub-band coding of ECG signals.

    Science.gov (United States)

    Husøy, J H; Gjerde, T

    1996-03-01

    A data compression technique is presented for the compression of discrete-time electrocardiogram (ECG) signals. The compression system is based on sub-band coding, a technique traditionally used for compressing speech and images. The sub-band coder employs quadrature mirror filter banks (QMF) with up to 32 critically sampled sub-bands. Both finite impulse response (FIR) and the more computationally efficient infinite impulse response (IIR) filter banks are considered as candidates in a complete ECG coding system. The sub-bands are thresholded, quantized using uniform quantizers, and run-length coded. The output of the run-length coder is further compressed by a Huffman coder. Extensive simulations indicate that 16 sub-bands are a suitable choice for this application. Furthermore, IIR filter banks are preferable due to their superior computational efficiency. We conclude that the present scheme, which is suitable for real-time implementation on a PC, can provide compression ratios between 5 and 15 without loss of clinical information.
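
    The back end of the scheme (threshold, uniform quantization, run-length coding) can be sketched as follows (illustrative parameter choices; the Huffman stage is omitted):

        import numpy as np

        def quantize_and_rle(subband, threshold, step):
            q = np.where(np.abs(subband) < threshold, 0,
                         np.round(subband / step)).astype(int)
            runs = []                       # (value, run length) pairs
            for v in q:
                if runs and runs[-1][0] == v:
                    runs[-1][1] += 1
                else:
                    runs.append([v, 1])
            return runs                     # Huffman-coded downstream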

  10. Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate

    CERN Document Server

    Guruswami, Venkatesan

    2010-01-01

    In this paper, we consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently "simple" circuit. For three classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. (1) Unique decoding for additive errors: We give the first construction of poly-time encodable/decodable codes for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1-H(p). Such channels capture binary symmetric errors and burst errors as special cases. (2) List-decoding for log-space channels: A space-S(n) channel reads and modifies the transmitted codeword as a stream, using at most S(n) bits of workspace on transmissions of n bi...
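
    The rate benchmark referred to above is the Shannon capacity 1 - H(p) of a binary symmetric channel, easily computed:

        import math

        def binary_entropy(p):
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def bsc_capacity(p):
            return 1.0 - binary_entropy(p)

        print(round(bsc_capacity(0.11), 3))  # ~0.5: rate-1/2 codes suffice at p = 0.11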

  11. Compressing industrial computed tomography images by means of contour coding

    Science.gov (United States)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has grown larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction and then compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve the compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction. In this way, the two steps of the traditional contour-based compression method are merged into one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method achieves a good compression ratio while keeping satisfactory quality in the compressed images.
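
    The Freeman encoding the method builds on assigns one of eight direction symbols to each step along a contour; a minimal sketch for 8-adjacent (row, col) contour points:

        # 8-direction Freeman chain code: (drow, dcol) -> symbol.
        DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
                (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

        def freeman_chain(contour):
            return [DIRS[(r1 - r0, c1 - c0)]
                    for (r0, c0), (r1, c1) in zip(contour, contour[1:])]

        print(freeman_chain([(0, 0), (0, 1), (1, 2), (2, 2)]))  # [0, 7, 6]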

  12. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool introduced with the H.264/AVC video coding standard to compensate for temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header. These parameters cost extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior work has sought to further improve WP parameter prediction through implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms (enhanced implicit WP parameter prediction, enhanced direct WP parameter derivation, and interlayer WP parameter prediction) to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared with conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
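
    For orientation, the WP parameters themselves are classically obtained by a least-squares fit between the current frame and a reference frame (a generic sketch, not the papers' prediction algorithms):

        import numpy as np

        def estimate_wp(cur, ref):
            """Weight w and offset o minimizing ||w*ref + o - cur||^2."""
            x = ref.ravel().astype(np.float64)
            y = cur.ravel().astype(np.float64)
            w, o = np.polyfit(x, y, 1)
            return w, o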

  13. PLUTO code for computational Astrophysics: News and Developments

    Science.gov (United States)

    Tzeferacos, P.; Mignone, A.

    2012-01-01

    We present an overview of recent developments and functionalities available in the PLUTO code for astrophysical fluid dynamics. The recent extension of the code to a conservative finite difference formulation and high-order spatial discretization of the compressible equations of magneto-hydrodynamics (MHD), complementary to its finite volume approach, allows for a highly accurate treatment of smooth flows, while avoiding loss of accuracy near smooth extrema and providing sharp non-oscillatory transitions at discontinuities. Among the novel features, we present alternative, fully explicit treatments of non-ideal dissipative processes (namely viscosity, resistivity and anisotropic thermal conduction) that do not suffer from the usual timestep limitation of explicit time stepping. These methods, offspring of the multistep Runge-Kutta family that use a Chebyshev polynomial recursion, are competitive substitutes for computationally expensive implicit schemes that involve sparse matrix inversion. Several multi-dimensional benchmarks and applications assess the potential of PLUTO to efficiently handle many astrophysical problems.
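
    The speed-up offered by such accelerated explicit schemes can be illustrated with the classic super-time-stepping estimate (the Alexiades-Amiez-Gremaud formula is quoted from memory, so treat it as an assumption; PLUTO's own Runge-Kutta-Chebyshev variants differ in detail):

        import math

        def superstep(dt_expl, s, nu):
            """Length of one superstep of s explicit sub-steps, relative to
            the explicit diffusion limit dt_expl; damping nu in (0, 1)."""
            a = (1 + math.sqrt(nu)) ** (2 * s)
            b = (1 - math.sqrt(nu)) ** (2 * s)
            return dt_expl * (s / (2 * math.sqrt(nu))) * (a - b) / (a + b)

        print(superstep(1.0, 10, 0.01))  # ~48 explicit steps covered per superstep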

  14. Understanding and Predicting Attitudes towards Computers.

    Science.gov (United States)

    Pancer, S. Mark; And Others

    1992-01-01

    The ability of the theory of reasoned action to predict computer-related attitudes and behavior was demonstrated through two studies: a questionnaire on computer behaviors and attitudes; and word processing training involving various levels of persuasive communication based on belief statements identified in the first study. (22 references) (MES)

  15. AGR-1 Safety Test Predictions using the PARFUME code

    Energy Technology Data Exchange (ETDEWEB)

    Blaise Collin

    2012-05-01

    The PARFUME modeling code was used to predict the failure probability of TRISO-coated fuel particles and the diffusion of fission products through these particles during safety tests following the first irradiation test of the Advanced Gas Reactor program (AGR-1). These calculations support the AGR-1 Safety Testing Experiment, which is part of the post-irradiation examination (PIE) effort on AGR-1. Modeling of the AGR-1 safety test predictions includes a 620-day irradiation followed by a 300-hour heat-up phase of selected AGR-1 compacts. Results include fuel failure probability, palladium penetration, and fractional release of fission products. No particle failure is predicted during irradiation or heat-up, and the fractional release of fission products is limited during irradiation but increases significantly during heat-up.

  16. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    Energy Technology Data Exchange (ETDEWEB)

    Van Buren, Kendra L. [Los Alamos National Laboratory; Canfield, Jesse M. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Sauer, Jeremy A. [Los Alamos National Laboratory

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step in establishing the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
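
    The convergence-rate check mentioned above is typically the observed order of accuracy computed from errors on two mesh levels:

        import math

        def observed_order(e_coarse, e_fine, r=2.0):
            """Observed order p from errors against the analytical solution,
            with mesh refinement ratio r."""
            return math.log(e_coarse / e_fine) / math.log(r)

        print(observed_order(4.0e-3, 1.0e-3))  # 2.0 -> second-order accurate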

  17. Analysis of the Length of Braille Texts in English Braille American Edition, the Nemeth Code, and Computer Braille Code versus the Unified English Braille Code

    Science.gov (United States)

    Knowlton, Marie; Wetzel, Robin

    2006-01-01

    This study compared the length of text in English Braille American Edition, the Nemeth code, and the computer braille code with the Unified English Braille Code (UEBC)--also known as Unified English Braille (UEB). The findings indicate that differences in the length of text are dependent on the type of material that is transcribed and the grade…

  18. pyro: A teaching code for computational astrophysical hydrodynamics

    CERN Document Server

    Zingale, Michael

    2013-01-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.

  19. pyro: A teaching code for computational astrophysical hydrodynamics

    Science.gov (United States)

    Zingale, M.

    2014-10-01

    We describe pyro: a simple, freely-available code to aid students in learning the computational hydrodynamics methods widely used in astrophysics. pyro is written with simplicity and learning in mind and intended to allow students to experiment with various methods popular in the field, including those for advection, compressible and incompressible hydrodynamics, multigrid, and diffusion in a finite-volume framework. We show some of the test problems from pyro, describe its design philosophy, and suggest extensions for students to build their understanding of these methods.
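
    The simplest of the advection methods such a teaching code lets students compare is first-order upwinding; a sketch with periodic boundaries (illustrative; pyro's own solvers are higher order):

        import numpy as np

        def upwind_step(u, a, dx, dt):
            """One step of u_t + a*u_x = 0; stable for |a*dt/dx| <= 1."""
            c = a * dt / dx
            if a >= 0:
                return u - c * (u - np.roll(u, 1))
            return u - c * (np.roll(u, -1) - u)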

  20. Development of a predictive code for aircrew radiation exposure.

    Science.gov (United States)

    McCall, M J; Lemay, F; Bean, M R; Lewis, B J; Bennett, L G I

    2009-10-01

    Using the empirical data measured by the Royal Military College with a tissue equivalent proportional counter, a model was derived to allow for the interpolation of the dose rate for any global position, altitude and date. Through integration of the dose-rate function over a great circle flight path or between various waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAire) was further developed to provide an estimate of the total dose equivalent on any route worldwide at any period in the solar cycle.
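
    Route-dose integration of this kind reduces to sampling a dose-rate model along the flight path and integrating over time; a hedged sketch, where dose_rate(lat, lon, alt, date) is a placeholder for the fitted empirical model (not reproduced here):

        import numpy as np

        def route_dose(waypoints, times_h, dose_rate, date):
            """Trapezoidal integration of dose rate over the flight path;
            waypoints are (lat, lon, alt) tuples, times_h the time at each."""
            rates = [dose_rate(lat, lon, alt, date) for lat, lon, alt in waypoints]
            return np.trapz(rates, np.asarray(times_h, dtype=float))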

  1. Geometric plane shapes for computer-generated holographic engraving codes

    Science.gov (United States)

    Augier, Ángel G.; Rabal, Héctor; Sánchez, Raúl B.

    2017-04-01

    We report a new theoretical and experimental study of hologravures, i.e. holographic computer-generated laser engravings. A geometric theory of images based on the general principles of light-ray behaviour is presented. The models used are also applicable to similar engravings obtained by any non-laser method, and the solutions allow for the analysis of particular situations, not only in light reflection mode but also in transmission mode geometry. This approach is a novel perspective allowing the three-dimensional (3D) design of engraved images for specific ends. We prove theoretically that plane curves of very general geometric shape can be used to encode image information onto a two-dimensional (2D) engraving, with notable influence on the behaviour of the reconstructed images, which appears to be an exciting topic of investigation with broad applications. Several cases of codes using particular curvilinear shapes are studied experimentally. The computer-generated objects are coded using the chosen curve type and engraved by a laser on a plane surface of suitable material. All images are recovered optically by adequate illumination. The pseudoscopic or orthoscopic character of these images is considered, and an appropriate interpretation is presented.

  2. DCT/DST-based transform coding for intra prediction in image/video coding.

    Science.gov (United States)

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that employs transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other, oblique modes as well. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm, obtained in the reference software of the ongoing HEVC standardization, show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
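
    The type-7 DST itself is easy to write down; in one common orthonormal form (HEVC uses an integer approximation of this matrix for the 4 × 4 intra luma blocks):

        import numpy as np

        def dst7_matrix(N=4):
            k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
            return (np.sqrt(4.0 / (2 * N + 1))
                    * np.sin(np.pi * (2 * k + 1) * (n + 1) / (2 * N + 1)))

        T = dst7_matrix()
        print(np.allclose(T @ T.T, np.eye(4)))  # True: orthonormal transform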

  3. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    Science.gov (United States)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  4. A new computer code to evaluate detonation performance of high explosives and their thermochemical properties, part I.

    Science.gov (United States)

    Keshavarz, Mohammad Hossein; Motamedoshariati, Hadi; Moghayadnia, Reza; Nazari, Hamid Reza; Azarniamehraban, Jamshid

    2009-12-30

    This paper introduces a new, simple, user-friendly computer code, written in Visual Basic, to evaluate the detonation performance of high explosives and their thermochemical properties. The code is based on recently developed methods for obtaining thermochemical and performance parameters of energetic materials, which can complement the outputs of other thermodynamic chemical-equilibrium codes. It can predict various important properties of high explosives, including velocity of detonation, detonation pressure, heat of detonation, detonation temperature, Gurney velocity, adiabatic exponent, and specific impulse. It can also predict the detonation performance of aluminized explosives, which can show non-ideal behavior. The code has been validated against well-known standard explosives, and the predicted results have been compared, where possible, with the outputs of other computer codes. A large amount of detonation-performance data for different classes of explosives containing C-NO(2), O-NO(2) and N-NO(2) energetic groups has also been generated and compared with the well-known complex code BKW.
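
    For comparison, the classic Kamlet-Jacobs correlations for CHNO explosives give a quick estimate of detonation velocity and pressure (a well-known benchmark, not the authors' method; the RDX-like inputs below are approximate):

        import math

        def kamlet_jacobs(N, M, Q, rho0):
            """N: mol gas/g, M: mean molar mass of gaseous products (g/mol),
            Q: heat of detonation (cal/g), rho0: loading density (g/cm^3)."""
            phi = N * math.sqrt(M) * math.sqrt(Q)
            D = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0)  # km/s
            P = 15.58 * rho0 ** 2 * phi                      # kbar
            return D, P

        print(kamlet_jacobs(0.0338, 27.2, 1501.0, 1.80))  # ~(8.8 km/s, 345 kbar)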

  5. A new computer code to evaluate detonation performance of high explosives and their thermochemical properties, part I

    Energy Technology Data Exchange (ETDEWEB)

    Keshavarz, Mohammad Hossein, E-mail: mhkeshavarz@mut-es.ac.ir [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115 (Iran, Islamic Republic of); Motamedoshariati, Hadi; Moghayadnia, Reza; Nazari, Hamid Reza; Azarniamehraban, Jamshid [Department of Chemistry, Malek-ashtar University of Technology, Shahin-shahr P.O. Box 83145/115 (Iran, Islamic Republic of)

    2009-12-30

    This paper introduces a new, simple, user-friendly computer code, written in Visual Basic, to evaluate the detonation performance of high explosives and their thermochemical properties. The code is based on recently developed methods for obtaining thermochemical and performance parameters of energetic materials, which can complement the outputs of other thermodynamic chemical-equilibrium codes. It can predict various important properties of high explosives, including velocity of detonation, detonation pressure, heat of detonation, detonation temperature, Gurney velocity, adiabatic exponent, and specific impulse. It can also predict the detonation performance of aluminized explosives, which can show non-ideal behavior. The code has been validated against well-known standard explosives, and the predicted results have been compared, where possible, with the outputs of other computer codes. A large amount of detonation-performance data for different classes of explosives containing C-NO{sub 2}, O-NO{sub 2} and N-NO{sub 2} energetic groups has also been generated and compared with the well-known complex code BKW.

  6. Evaluation of detonation energy from EXPLO5 computer code results

    Energy Technology Data Exchange (ETDEWEB)

    Suceska, M. [Brodarski Institute, Zagreb (Croatia). Marine Research and Special Technologies

    1999-10-01

    The detonation energies of several high explosives are evaluated from the results of the chemical-equilibrium computer code EXPLO5. Two methods of evaluating the detonation energy are applied: (a) direct evaluation from the internal energy of the detonation products at the CJ point and the energy of shock compression of the detonation products, i.e. by equating the detonation energy and the heat of detonation, and (b) evaluation from the expansion isentrope of the detonation products, applying the JWL model. These energies are compared to the energies computed from cylinder-test-derived JWL coefficients. It is found that the detonation energies obtained directly from the energy of the detonation products at the CJ point are uniformly too high (0.9445{+-}0.577 kJ/cm{sup 3}), while the detonation energies evaluated from the expansion isentrope are in considerable agreement (0.2072{+-}0.396 kJ/cm{sup 3}) with the energies calculated from cylinder-test-derived JWL coefficients. (orig.)
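
    The JWL expansion isentrope used in method (b) has a standard closed form; a sketch (the coefficients A, B, C, R1, R2 and omega are explosive-specific and must be supplied by the user):

        import math

        def jwl_isentrope(V, A, B, C, R1, R2, omega):
            """Pressure and specific energy at relative volume V on the
            JWL isentrope; detonation energy ~ e(V_CJ) - e(V_large)."""
            p = A * math.exp(-R1 * V) + B * math.exp(-R2 * V) + C * V ** -(omega + 1.0)
            e = (A / R1 * math.exp(-R1 * V) + B / R2 * math.exp(-R2 * V)
                 + C / omega * V ** -omega)
            return p, e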

  7. Education:=Coding+Aesthetics; Aesthetic Understanding, Computer Science Education, and Computational Thinking

    Science.gov (United States)

    Good, Jonathon; Keenan, Sarah; Mishra, Punya

    2016-01-01

    The popular press is rife with examples of how students in the United States and around the globe are learning to program, make, and tinker. The Hour of Code, maker-education, and similar efforts are advocating that more students be exposed to principles found within computer science. We propose an expansion beyond simply teaching computational…

  8. Prediction of Radiation Fog by DNA Computing

    OpenAIRE

    Ray, Kumar Sankar; Mondal, Mandrita

    2015-01-01

    In this paper we propose a wet lab algorithm for prediction of radiation fog by DNA computing. The concept of DNA computing is essentially exploited for generating the classifier algorithm in the wet lab. The classifier is based on a new concept of similarity based fuzzy reasoning suitable for wet lab implementation. This new concept of similarity based fuzzy reasoning is different from conventional approach to fuzzy reasoning based on similarity measure and also replaces the logical aspect o...

  9. Reasoning with Computer Code: a new Mathematical Logic

    Science.gov (United States)

    Pissanetzky, Sergio

    2013-01-01

    A logic is a mathematical model of knowledge used to study how we reason, how we describe the world, and how we infer the conclusions that determine our behavior. The logic presented here is natural. It has been experimentally observed, not designed. It represents knowledge as a causal set, includes a new type of inference based on the minimization of an action functional, and generates its own semantics, making it unnecessary to prescribe one. This logic is suitable for high-level reasoning with computer code, including tasks such as self-programming, object-oriented analysis, refactoring, systems integration, code reuse, and automated programming from sensor-acquired data. A strong theoretical foundation exists for the new logic. The inference derives laws of conservation from the permutation symmetry of the causal set, and calculates the corresponding conserved quantities. The association between symmetries and conservation laws is a fundamental and well-known law of nature and a general principle in modern theoretical Physics. The conserved quantities take the form of a nested hierarchy of invariant partitions of the given set. The logic associates elements of the set and binds them together to form the levels of the hierarchy. It is conjectured that the hierarchy corresponds to the invariant representations that the brain is known to generate. The hierarchies also represent fully object-oriented, self-generated code, which can be directly compiled and executed (when a compiler becomes available), or translated to a suitable programming language. The approach is constructivist because all entities are constructed bottom-up, with the fundamental principles of nature at the bottom, and their existence is proved by construction. The new logic is mathematically introduced and later discussed in the context of transformations of algorithms and computer programs. We discuss what a full self-programming capability would really mean. We argue that self

  10. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter eVuust

    2014-10-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain’s anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain’s general principles of prediction, and propose that the pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  11. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    Science.gov (United States)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver, while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  12. Computer program for prediction of the deposition of material released from fixed and rotary wing aircraft

    Science.gov (United States)

    Teske, M. E.

    1984-01-01

    This is a user manual for the computer code "AGDISP" (AGricultural DISPersal), which has been developed to predict the deposition of material released from fixed- and rotary-wing aircraft in a single-pass, computationally efficient manner. The formulation of the code is novel in that the mean particle trajectory and the variance about the mean resulting from turbulent fluid fluctuations are predicted simultaneously. The code presently includes the capability of assessing the influence of neutral atmospheric conditions, inviscid wake vortices, particle evaporation, plant canopy and terrain on the deposition pattern.

  13. Interface design of VSOP'94 computer code for safety analysis

    Science.gov (United States)

    Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi

    2014-09-01

    Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system that simulates the life history of a nuclear reactor and is devoted to education and research. Advantages of the VSOP program include its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimates of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and the simulation of reactor safety. However, the existing VSOP is a conventional program, which was developed using Fortran 65 and has several usability problems; for example, it operates only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates the preparation of data, runs the VSOP code, and reads the results in a more user-friendly way, and that is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in tabular and graphical form. GUI-VSOP is expected to be useful in simplifying and speeding up the process and analysis of safety aspects.

  14. A computer code for analysis of severe accidents in LWRs

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    The ICARE2 computer code, developed and validated since 1988 at IPSN (nuclear safety and protection institute), calculates in a mechanistic way the physical and chemical phenomena involved in the core degradation process during possible severe accidents in LWRs. The coupling between ICARE2 and the best-estimate thermal-hydraulics code CATHARE2 was completed at IPSN and led to the release of a first ICARE/CATHARE V1 version in 1999, followed by two successive revisions in 2000 and 2001. This document gathers all the contributions presented at the first international ICARE/CATHARE users' club seminar, which took place in November 2001. This seminar was characterized by the high quality and variety of the presentations, showing an increase of reactor applications and user needs in this area (2D/3D aspects, reflooding, corium slumping into the lower head, etc.). Two sessions were organized. The first was dedicated to applications of ICARE2 V3mod1 to small-scale experiments such as the PHEBUS FPT2 and FPT3 tests, PHEBUS AIC, the QUENCH experiments, the NRU-FLHT-5 test, the ACRR-MP1 and DC1 experiments, the CORA-PWR tests, and the PBF-SFD1.4 test. The second session involved ICARE/CATHARE V1mod1 reactor applications and users' guidelines. Among the reactor applications were: code applicability to high burn-up fuel rods, simulation of the TMI-2 transient, simulation of a PWR-900 high-pressure severe accident sequence, and simulation of a VVER-1000 large-break LOCA scenario. (A.C.)

  15. Compute-and-Forward: Harnessing Interference through Structured Codes

    CERN Document Server

    Nazer, Bobak

    2009-01-01

    Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice, and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information. Its potential is demonstrated through examples drawn ...

  16. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    Science.gov (United States)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single-airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem and compares them with classical flat-plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  17. Computer Tensor Codes to Design the Warp Drive

    Science.gov (United States)

    Maccone, C.

    To address problems in Breakthrough Propulsion Physics (BPP) and design the Warp Drive one needs sheer computing capabilities. This is because General Relativity (GR) and Quantum Field Theory (QFT) are so mathematically sophisticated that the amount of analytical calculations is prohibitive and one can hardly do all of them by hand. In this paper we make a comparative review of the main tensor calculus capabilities of the three most advanced and commercially available “symbolic manipulator” codes. We also point out that currently one faces such a variety of different conventions in tensor calculus that it is difficult or impossible to compare results obtained by different scholars in GR and QFT. Mathematical physicists, experimental physicists and engineers have each their own way of customizing tensors, especially by using different metric signatures, different metric determinant signs, different definitions of the basic Riemann and Ricci tensors, and by adopting different systems of physical units. This chaos greatly hampers progress toward the design of the Warp Drive. It is thus suggested that NASA would be a suitable organization to establish standards in symbolic tensor calculus and anyone working in BPP should adopt these standards. Alternatively other institutions, like CERN in Europe, might consider the challenge of starting the preliminary implementation of a Universal Tensor Code to design the Warp Drive.

  18. Computer code for the atomistic simulation of lattice defects and dynamics. [COMENT code

    Energy Technology Data Exchange (ETDEWEB)

    Schiffgens, J.O.; Graves, N.J.; Oster, C.A.

    1980-04-01

    This document has been prepared to satisfy the need for a detailed, up-to-date description of a computer code that can be used to simulate phenomena on an atomistic level. COMENT was written in FORTRAN IV and COMPASS (CDC assembly language) to solve the classical equations of motion for a large number of atoms interacting according to a given force law, and to perform the desired ancillary analysis of the resulting data. COMENT is a dual-purpose code intended to describe static defect configurations as well as the detailed motion of atoms in a crystal lattice. It can be used to simulate the effects of temperature, impurities, and pre-existing defects on radiation-induced defect production mechanisms, defect migration, and defect stability.
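
    The standard integrator for this kind of classical lattice dynamics is velocity Verlet; a generic sketch (COMENT's own integrator and force laws are not shown here):

        import numpy as np

        def velocity_verlet(pos, vel, force, mass, dt, steps):
            """Advance positions/velocities under force(pos) for `steps` steps."""
            f = force(pos)
            for _ in range(steps):
                vel += 0.5 * dt * f / mass
                pos += dt * vel
                f = force(pos)
                vel += 0.5 * dt * f / mass
            return pos, vel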

  19. Sequence Prediction With Sparse Distributed Hyperdimensional Coding Applied to the Analysis of Mobile Phone Use Patterns.

    Science.gov (United States)

    Rasanen, Okko J; Saarinen, Jukka P

    2016-09-01

    Modeling and prediction of temporal sequences is central to many signal processing and machine learning applications. Prediction based on sequence history is typically performed using parametric models, such as fixed-order Markov chains (n-grams), approximations of high-order Markov processes, such as mixed-order Markov models or mixtures of lagged bigram models, or with other machine learning techniques. This paper presents a method for sequence prediction based on sparse hyperdimensional coding of the sequence structure and describes how higher-order temporal structures can be utilized in sparse coding in a balanced manner. The method is purely incremental, allowing real-time online learning and prediction with limited computational resources. Experiments with prediction of mobile phone use patterns, including the prediction of the next launched application, the next GPS location of the user, and the next artist played with the phone media player, reveal that the proposed method is able to capture the relevant variable-order structure from the sequences. In comparison with the n-grams and the mixed-order Markov models, the sparse hyperdimensional predictor clearly outperforms its peers in terms of unweighted average recall and achieves an equal level of weighted average recall as the mixed-order Markov chain, but without the batch training of the mixed-order model.

  20. A CMOS Imager with Focal Plane Compression using Predictive Coding

    Science.gov (United States)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
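
    Golomb-Rice coding of a non-negative residual with power-of-two parameter k is simple enough to sketch (signed residuals would first be zigzag-mapped; k >= 1 assumed):

        def rice_encode(value, k):
            """Unary quotient + k-bit binary remainder."""
            q, r = value >> k, value & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        print(rice_encode(9, 2))  # q=2, r=1 -> '11001'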

  1. Fast motion prediction algorithm for multiview video coding

    Science.gov (United States)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard, developed through joint efforts by MPEG/VCEG, to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction, the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  2. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    Science.gov (United States)

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding.
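
    The generic form of such a dynamic normalization model is a single differential equation; a minimal sketch (the time constants and the exact circuit are placeholders, not the authors' fitted model):

        import numpy as np

        # tau * dR_i/dt = -R_i + V_i / (sigma + w . R)
        def simulate_normalization(V, w, sigma=1.0, tau=0.05, dt=0.001, T=1.0):
            R = np.zeros_like(V, dtype=float)
            trace = []
            for _ in range(int(T / dt)):
                R = R + (dt / tau) * (-R + V / (sigma + w @ R))
                trace.append(R.copy())
            return np.array(trace)  # phasic transient settling to a sustained level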

  3. Analysis of the microturbine combustion chamber by using the CHEMKIN III computer code; Analise da camara de combustao de microturbinas empregando-se o codigo computacional CHEMKIN III

    Energy Technology Data Exchange (ETDEWEB)

    Madela, Vinicius Zacarias; Pauliny, Luis F. de A.; Veras, Carlos A. Gurgel [Brasilia Univ., DF (Brazil). Dept. de Engenharia Mecanica]. E-mail: gurgel@enm.unb.br; Costa, Fernando de S. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Lab. Associado de Combustao e Propulsao]. E-mail: fernando@cptec.inpe.br

    2000-07-01

    This work presents the results obtained with the simulation of multi-fuel microturbine combustion chambers. In particular, predictions for methane and Diesel burning are presented. The appropriate routines of the CHEMKIN III computer code were used.

  4. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  5. On the application of computational fluid dynamics codes for liquefied natural gas dispersion.

    Science.gov (United States)

    Luketa-Hanlin, Anay; Koopman, Ronald P; Ermak, Donald L

    2007-02-20

    Computational fluid dynamics (CFD) codes are increasingly being used in the liquefied natural gas (LNG) industry to predict natural gas dispersion distances. This paper addresses several issues regarding the use of CFD for LNG dispersion such as specification of the domain, grid, boundary and initial conditions. A description of the k-epsilon model is presented, along with modifications required for atmospheric flows. Validation issues pertaining to the experimental data from the Burro, Coyote, and Falcon series of LNG dispersion experiments are also discussed. A description of the atmosphere is provided as well as discussion on the inclusion of the Coriolis force to model very large LNG spills.

  6. Multivariate optical computation for predictive spectroscopy.

    Science.gov (United States)

    Nelson, M P; Aust, J F; Dobrowolski, J A; Verly, P G; Myrick, M L

    1998-01-01

    A novel optical approach to predicting chemical and physical properties based on principal component analysis (PCA) is proposed and evaluated using a data set from earlier work. In our approach, a regression vector produced by PCA is designed into the structure of a set of paired optical filters. Light passing through the paired filters produces an analog detector signal that is directly proportional to the chemical/physical property for which the regression vector was designed. This simple optical computational method for predictive spectroscopy is evaluated in several ways, using the example data for numeric simulation. First, we evaluate the sensitivity of the method to various types of spectroscopic errors commonly encountered and find the method to have the same susceptibility to error as standard methods. Second, we use propagation of errors to determine the effects of detector noise on the predictive power of the method, finding the optical computation approach to have a large multiplex advantage over conventional methods. Third, we use two different design approaches to the construction of the paired filter set for the example measurement to evaluate manufacturability, finding that adequate methods exist to design appropriate optical devices. Fourth, we numerically simulate the predictive errors introduced by design errors in the paired filters, finding that predictive errors are not increased over conventional methods. Fifth, we consider how the performance of the method is affected by light intensities that are not linearly related to chemical composition (as in transmission spectroscopy) and find that the method is only marginally affected. In summary, we conclude that many types of predictive measurements based on the use of regression (or other) vectors and linear mathematics can be performed more rapidly, more effectively, and at considerably lower cost by the proposed optical computation method than by traditional dispersive or interferometric
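
    At its core the scheme computes, in hardware, the dot product between the PCA regression vector and the measured spectrum; splitting the vector into positive and negative parts (one per filter) makes this physically realizable. A numeric sketch of that idea:

        import numpy as np

        def optical_prediction(spectrum, regression_vector):
            """Detector difference signal ~ r . s, with r split into the
            positive and negative parts realized by the paired filters."""
            r_pos = np.clip(regression_vector, 0.0, None)   # first filter
            r_neg = np.clip(-regression_vector, 0.0, None)  # second filter
            return spectrum @ r_pos - spectrum @ r_neg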

  7. Simulation of Conjugate Structure Algebraic Code Excited Linear Prediction Speech Coder

    Directory of Open Access Journals (Sweden)

    Ritisha Virulkar

    2014-03-01

    Full Text Available The CS-ACELP coder is a speech coder based on the linear prediction coding technique. It reduces the bit rate to 8 kbps while also reducing the computational complexity of the speech search described in ITU Recommendation G.729. This codec is used for compression of the speech signal. The idea behind the algorithm is to predict the next incoming signal samples by means of linear prediction; for this it uses a fixed codebook and an adaptive codebook. The quality of speech delivered by this coder is equivalent to 32 kbps ADPCM. The processes responsible for achieving the reduction in bit rate are: sending fewer bits when no voice is detected and carrying out a conditional search in the fixed codebook.
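
    The short-term linear prediction at the heart of CELP-family coders such as CS-ACELP can be illustrated compactly. The sketch below (Python; my own illustration of the autocorrelation method with the Levinson-Durbin recursion, not code from G.729) estimates predictor coefficients from one frame.

    ```python
    import numpy as np

    def lpc_coefficients(frame: np.ndarray, order: int = 10) -> np.ndarray:
        """LPC coefficients via the autocorrelation method and the
        Levinson-Durbin recursion, as used in CELP-family coders."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
        a = np.zeros(order)
        err = r[0]
        for i in range(order):
            k = (r[i + 1] - a[:i] @ r[i:0:-1]) / err   # reflection coefficient
            a[:i] = a[:i] - k * a[:i][::-1]
            a[i] = k
            err *= (1.0 - k * k)
        return a  # predicted x[n] ~= sum_j a[j] * x[n-1-j]

    # Synthetic AR(2) "speech" frame; the recovered coefficients should be
    # roughly [1.3, -0.6].
    rng = np.random.default_rng(1)
    x = np.zeros(240)
    e = rng.normal(size=240)
    for n in range(2, 240):
        x[n] = 1.3 * x[n - 1] - 0.6 * x[n - 2] + 0.05 * e[n]
    print(lpc_coefficients(x, order=2))
    ```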

  8. Computationally efficient prediction of area per lipid

    DEFF Research Database (Denmark)

    Chaban, Vitaly V.

    2014-01-01

    Area per lipid (APL) is an important property of biological and artificial membranes. Newly constructed bilayers are characterized by their APL and newly elaborated force fields must reproduce APL. Computer simulations of APL are very expensive due to slow conformational dynamics. The simulated d....... Thus, sampling times to predict accurate APL are reduced by a factor of 10. (C) 2014 Elsevier B.V. All rights reserved....

  9. Is ""predictability"" in computational sciences a myth?

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M [Los Alamos National Laboratory

    2011-01-31

    Within the last two decades, Modeling and Simulation (M&S) has become the tool of choice to investigate the behavior of complex phenomena. Successes encountered in 'hard' sciences are prompting interest in applying a similar approach to Computational Social Sciences in support, for example, of national security applications faced by the Intelligence Community (IC). This manuscript attempts to contribute to the debate on the relevance of M&S to IC problems by offering an overview of what it takes to reach 'predictability' in computational sciences. Even though models developed in 'soft' and 'hard' sciences are different, useful analogies can be drawn. The starting point is to view numerical simulations as 'filters' capable of representing information only within specific length, time, or energy bandwidths. This simplified view leads to the discussion of resolving versus modeling, which motivates the need for sub-scale modeling. The role that modeling assumptions play in 'hiding' our lack-of-knowledge about sub-scale phenomena is explained, which leads to discussing uncertainty in simulations. It is argued that the uncertainty caused by resolution and modeling assumptions should be dealt with differently than uncertainty due to randomness or variability. The corollary is that a predictive capability cannot be defined solely as accuracy, or the ability of predictions to match the available physical observations. We propose that 'predictability' is the demonstration that predictions from a class of 'equivalent' models are as consistent as possible. Equivalency stems from defining models that share a minimum requirement of accuracy while being equally robust to the sources of lack-of-knowledge in the problem. Examples in computational physics and engineering are given to illustrate the discussion.

  10. Implementation of a 3D mixing layer code on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Roe, K.; Thakur, R.; Dang, T.; Bogucz, E. [Syracuse Univ., NY (United States)

    1995-09-01

    This paper summarizes our progress and experience in developing a computational fluid dynamics code on parallel computers to simulate three-dimensional, spatially developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. It was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.

  11. A Novel Feature Extraction Scheme with Ensemble Coding for Protein–Protein Interaction Prediction

    Directory of Open Access Journals (Sweden)

    Xiuquan Du

    2014-07-01

    Full Text Available Protein–protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search for the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to a bigger and more realistic dataset, maintaining 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp.
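
    The shape of such a pipeline, encode each protein pair as a feature vector, rank features, then train a Random Forest, can be sketched as follows. Everything here is a placeholder: the synthetic data, the 120-feature encoding, and the variance-based ranking stand in for, but do not reproduce, the paper's DX score and ensemble coding.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical encoded protein pairs: 1000 pairs, 120 features each;
    # labels mark interacting (1) vs non-interacting (0) pairs.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 120))
    y = (X[:, :10].sum(axis=1) > 0).astype(int)   # synthetic labels

    # Crude feature ranking (variance here; not the paper's DX criterion).
    ranked = np.argsort(X.var(axis=0))[::-1][:40]

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X[:, ranked], y, cv=5, scoring="accuracy").mean())
    ```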

  12. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 3: Assessment Manual

    Energy Technology Data Exchange (ETDEWEB)

    Müller, C.; Hughes, E. D.; Niederauer, G. F.; Wilkening, H.; Travis, J. R.; Spore, J. W.; Royl, P.; Baumann, W.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included.

  13. SEACC: the systems engineering and analysis computer code for small wind systems

    Energy Technology Data Exchange (ETDEWEB)

    Tu, P.K.C.; Kertesz, V.

    1983-03-01

    The systems engineering and analysis (SEA) computer program (code) evaluates complete horizontal-axis SWECS performance. Rotor power output as a function of wind speed and energy production at various wind regions are predicted by the code. Efficiencies of components such as gearboxes, electric generators, rectifiers, electronic inverters, and batteries can be included in the evaluation process to reflect the complete system performance. Parametric studies can be carried out for blade design characteristics such as airfoil series, taper rate, twist, and pitch setting; and for geometry such as rotor radius, hub radius, number of blades, coning angle, and rotor rpm. Design tradeoffs can also be performed to optimize system configurations for constant-rpm, constant tip-speed-ratio, and rpm-specific rotors. SWECS energy supply as compared to the load demand for each hour of the day and during each season of the year can be assessed by the code if the diurnal wind and load distributions are known. Also available during each run of the code is blade aerodynamic loading information.
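
    In its simplest form, the rotor-power part of such a systems code reduces to P = 0.5 * rho * A * Cp * v^3, clipped to the machine's operating band, with annual energy following from a wind-speed distribution. A hedged sketch (constant Cp and an invented wind histogram; a real SWECS code derives Cp from the blade aerodynamics at each tip speed ratio):

    ```python
    import numpy as np

    RHO = 1.225                     # air density, kg/m^3
    RADIUS = 5.0                    # hypothetical rotor radius, m
    AREA = np.pi * RADIUS**2

    def rotor_power(v, cp=0.40, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
        """Rotor power (W) vs wind speed: P = 0.5*rho*A*Cp*v^3, clipped at
        rated power and zeroed outside the cut-in/cut-out band."""
        v = np.asarray(v, dtype=float)
        p_rated = 0.5 * RHO * AREA * cp * v_rated**3
        p = np.clip(0.5 * RHO * AREA * cp * v**3, 0.0, p_rated)
        return np.where((v < v_cut_in) | (v > v_cut_out), 0.0, p)

    # Annual energy from a binned wind-speed distribution (hours per bin).
    speeds = np.arange(0.0, 26.0)
    hours = np.exp(-speeds / 6.0)
    hours *= 8760 / hours.sum()     # normalize to one year
    print(f"AEP ~ {(rotor_power(speeds) * hours).sum() / 1e6:.1f} MWh")
    ```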

  14. Direct social perception, mindreading and Bayesian predictive coding.

    Science.gov (United States)

    de Bruin, Leon; Strijbos, Derek

    2015-11-01

    Mindreading accounts of social cognition typically claim that we cannot directly perceive the mental states of other agents and therefore have to exercise certain cognitive capacities in order to infer them. In recent years this view has been challenged by proponents of the direct social perception (DSP) thesis, who argue that the mental states of other agents can be directly perceived. In this paper we show, first, that the main disagreement between proponents of DSP and mindreading accounts has to do with the so-called 'sandwich model' of social cognition. Although proponents of DSP are critical of this model, we argue that they still seem to accept the distinction between perception, cognition and action that underlies it. Second, we contrast the sandwich model of social cognition with an alternative theoretical framework that is becoming increasingly popular in the cognitive neurosciences: Bayesian Predictive Coding (BPC). We show that the BPC framework renders a principled distinction between perception, cognition and action obsolete, and can accommodate elements of both DSP and mindreading accounts. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Assessment of the computer code COBRA/CFTL

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C. B.; Burhop, C. J.

    1981-07-01

    The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications, compares the two code results for three cases which represent conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output and the code listing are included in the Appendices.

  16. Biocomputational prediction of non-coding RNAs in model cyanobacteria

    Directory of Open Access Journals (Sweden)

    Ude Susanne

    2009-03-01

    Full Text Available Abstract Background In bacteria, non-coding RNAs (ncRNAs) are crucial regulators of gene expression, controlling various stress responses, virulence, and motility. Previous work revealed a relatively high number of ncRNAs in some marine cyanobacteria. However, for efficient genetic and biochemical analysis it would be desirable to identify a set of ncRNA candidate genes in model cyanobacteria that are easy to manipulate and for which extended mutant, transcriptomic and proteomic data sets are available. Results Here we have used comparative genome analysis for the biocomputational prediction of ncRNA genes and other sequence/structure-conserved elements in intergenic regions of the three unicellular model cyanobacteria Synechocystis PCC6803, Synechococcus elongatus PCC6301 and Thermosynechococcus elongatus BP1 plus the toxic Microcystis aeruginosa NIES843. The unfiltered numbers of predicted elements in these strains are 383, 168, 168, and 809, respectively, combined into 443 sequence clusters, whereas the numbers of individual elements with high support are 94, 56, 64, and 406, respectively. After also removing transposon-associated repeats, 78, 53, 42 and 168 sequences, respectively, are left, belonging to 109 different clusters in the data set. Experimental analysis of selected ncRNA candidates in Synechocystis PCC6803 validated new ncRNAs originating from the fabF-hoxH and apcC-prmA intergenic spacers and three highly expressed ncRNAs belonging to the Yfr2 family of ncRNAs. Yfr2a promoter-luxAB fusions confirmed a very strong activity of this promoter and indicated a stimulation of expression if the cultures were exposed to elevated light intensities. Conclusion Comparison to entries in Rfam and experimental testing of selected ncRNA candidates in Synechocystis PCC6803 indicate a high reliability of the current prediction, despite some contamination by the high number of repetitive sequences in some of these species. In particular, we

  17. Superimposed Code Theoretic Analysis of Deoxyribonucleic Acid (DNA) Codes and DNA Computing

    Science.gov (United States)

    2010-01-01

    Cites A. Macula, et al., "Random Coding Bounds for DNA Codes Based on Fibonacci Ensembles of DNA Sequences", 2008 IEEE International Symposium on Information Theory, pp. 2292-2296. A key component of this innovation is the combinatorial method of bio-memory design and detection that encodes item or process information as numerical sequences.

  18. User manual for PACTOLUS: a code for computing power costs.

    Energy Technology Data Exchange (ETDEWEB)

    Huber, H.D.; Bloomster, C.H.

    1979-02-01

    PACTOLUS is a computer code for calculating the cost of generating electricity. Through appropriate definition of the input data, PACTOLUS can calculate the cost of generating electricity from a wide variety of power plants, including nuclear, fossil, geothermal, solar, and other types of advanced energy systems. The purpose of PACTOLUS is to develop cash flows and calculate the unit busbar power cost (mills/kWh) over the entire life of a power plant. The cash flow information is calculated by two principal models: the Fuel Model and the Discounted Cash Flow Model. The Fuel Model is an engineering cost model which calculates the cash flow for the fuel cycle costs over the project lifetime based on input data defining the fuel material requirements, the unit costs of fuel materials and processes, the process lead and lag times, and the schedule of the capacity factor for the plant. For nuclear plants, the Fuel Model calculates the cash flow for the entire nuclear fuel cycle. For fossil plants, the Fuel Model calculates the cash flow for the fossil fuel purchases. The Discounted Cash Flow Model combines the fuel costs generated by the Fuel Model with input data on the capital costs, capital structure, licensing time, construction time, rates of return on capital, tax rates, operating costs, and depreciation method of the plant to calculate the cash flow for the entire lifetime of the project. The financial and tax structure for both investor-owned utilities and municipal utilities can be simulated through varying the rates of return on equity and debt, the debt-equity ratios, and tax rates. The Discounted Cash Flow Model uses the principle that the present worth of the revenues will be equal to the present worth of the expenses including the return on investment over the economic life of the project. This manual explains how to prepare the input data, execute cases, and interpret the output results. (RWR)
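
    The stated principle, present worth of revenues equal to present worth of expenses over the economic life, translates directly into a levelized busbar cost. A bare-bones sketch (Python; the taxes, depreciation, and fuel-cycle cash flows of the real code are omitted, and all plant numbers below are invented):

    ```python
    import numpy as np

    def busbar_cost(capital, annual_expenses, annual_kwh, rate, life):
        """Levelized busbar power cost (mills/kWh): setting the present worth
        of revenues equal to the present worth of expenses and solving for
        the unit price gives PW(expenses) / PW(energy)."""
        years = np.arange(1, life + 1)
        disc = (1.0 + rate) ** -years
        pw_expenses = capital + np.sum(annual_expenses * disc)
        pw_energy = np.sum(annual_kwh * disc)     # kWh discounted like revenue
        return 1000.0 * pw_expenses / pw_energy   # $/kWh -> mills/kWh

    # Hypothetical plant: $2000M capital, $80M/yr costs, 7 TWh/yr, 8%, 30 yr.
    print(busbar_cost(2.0e9, 8.0e7, 7.0e9, 0.08, 30))
    ```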

  19. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    Science.gov (United States)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
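
    The codebook search that dominates CELP complexity reduces to maximizing a correlation-to-energy ratio over codevectors. The flat search below (Python; an illustration of the criterion, not the structured VSELP search, whose whole point is to avoid this brute force) shows the idea; the codebook rows are assumed to be already filtered through the weighted synthesis filter.

    ```python
    import numpy as np

    def search_codebook(target: np.ndarray, codebook: np.ndarray):
        """Analysis-by-synthesis codebook search: pick the codevector c and
        gain g minimizing ||target - g*c||^2, which is equivalent to
        maximizing (target . c)^2 / (c . c)."""
        corr = codebook @ target                         # target correlations
        energy = np.einsum("ij,ij->i", codebook, codebook)
        best = int(np.argmax(corr**2 / energy))
        gain = corr[best] / energy[best]                 # optimal scalar gain
        return best, gain
    ```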

  20. Reducing Computational Overhead of Network Coding with Intrinsic Information Conveying

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Pedersen, Morten V.

    This paper investigates the possibility of conveying intrinsic information in network coding systems. The information is embedded into the coding vector by constructing the vector according to a set of predefined rules; this information can subsequently be retrieved by any receiver. The starting point...... to the overall energy consumption, which is particularly problematic for mobile battery-driven devices. In RLNC, coding is performed over a finite field (FF). We propose to divide this field into subfields, and let each subfield signify some information or state. In order to embed the information correctly...... the coding operations must be performed in a particular way, which we introduce. Finally we evaluate the suggested system and find that the amount of coding can be significantly reduced both at nodes that recode and decode....
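
    To make the subfield idea concrete, here is a small Python illustration over GF(2^8): coefficients drawn from the embedded GF(2^4) subfield signal one state, coefficients from its complement signal another, and any receiver can read the state back by testing membership. The signaling rule is an invented example of the general idea, not the paper's construction, and it ignores the recoding constraints the authors introduce.

    ```python
    import random

    P = 0x11B  # reduction polynomial x^8 + x^4 + x^3 + x + 1 for GF(2^8)

    def gf_mul(a: int, b: int) -> int:
        """Carry-less multiplication in GF(2^8) with modular reduction."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= P
            b >>= 1
        return r

    def in_subfield(x: int) -> bool:
        """GF(16) sits inside GF(256) as the elements with x^16 == x."""
        y = x
        for _ in range(4):          # x -> x^16 by squaring four times
            y = gf_mul(y, y)
        return y == x

    SUB = [x for x in range(256) if in_subfield(x)]        # 16 elements
    NONSUB = [x for x in range(256) if not in_subfield(x)]

    def make_coding_vector(length: int, state_bit: int):
        """Draw RLNC coefficients from the subfield to signal state 0, and
        from its complement to signal state 1 (illustrative rule only)."""
        pool = SUB if state_bit == 0 else NONSUB
        return [random.choice(pool) for _ in range(length)]

    def read_state(vector) -> int:
        return 0 if all(in_subfield(c) for c in vector) else 1

    v = make_coding_vector(8, 1)
    print(v, read_state(v))
    ```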

  1. 3-D field computation: The near-triumph of commercial codes

    Energy Technology Data Exchange (ETDEWEB)

    Turner, L.R.

    1995-07-01

    In recent years, more and more of those who design and analyze magnets and other devices are using commercial codes rather than developing their own. This paper considers the commercial codes and the features available with them. Other recent trends with 3-D field computation include parallel computation and visualization methods such as virtual reality systems.

  2. ORNL ALICE: a statistical model computer code including fission competition. [In FORTRAN]

    Energy Technology Data Exchange (ETDEWEB)

    Plasil, F.

    1977-11-01

    A listing of the computer code ORNL ALICE is given. This code is a modified version of computer codes ALICE and OVERLAID ALICE. It allows for higher excitation energies and for a greater number of evaporated particles than the earlier versions. The angular momentum removal option was made more general and more internally consistent. Certain roundoff errors are avoided by keeping a strict accounting of partial probabilities. Several output options were added.

  3. Comparison of computer codes for estimates of the symmetric coupled bunch instabilities growth times

    CERN Document Server

    Angal-Kalinin, Deepa

    2002-01-01

    The standard computer codes used for estimating the growth times of the symmetric coupled bunch instabilities are ZAP and BBI. The code Vlasov was earlier used for the LHC estimates of the coupled bunch instability growth times [1]. The results obtained by these three codes have been compared, and the options under which their results can be compared are discussed. The differences in the input and the output for these three codes are given for a typical case.

  4. Results and code predictions for ABCOVE (aerosol behavior code validation and evaluation) aerosol code validation with low concentration NaOH and NaI aerosol: CSTF test AB7

    Energy Technology Data Exchange (ETDEWEB)

    Hilliard, R.K.; McCormack, J.D.; Muhlestein, L.D.

    1985-10-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The third large-scale test in the ABCOVE program, AB7, was performed in the 850-m³ CSTF vessel with a two-species test aerosol. The test conditions involved the release of a simulated fission product aerosol, NaI, into the containment atmosphere after the end of a small sodium pool fire. Four organizations made pretest predictions of aerosol behavior using five computer codes. Two of the codes (QUICKM and CONTAIN) were discrete, multiple species codes, while three (HAA-3, HAA-4, and HAARM-3) were log-normal codes which assume uniform coagglomeration of different aerosol species. Detailed test results are presented and compared with the code predictions for eight key aerosol behavior parameters. 11 refs., 44 figs., 35 tabs.

  5. Assessment of Turbulent Shock-Boundary Layer Interaction Computations Using the OVERFLOW Code

    Science.gov (United States)

    Oliver, A. B.; Lillard, R. P.; Schwing, A. M.; Blaisdell, G. A.; Lyrintzis, A. S.

    2007-01-01

    The performance of two popular turbulence models, the Spalart-Allmaras model and Menter's SST model, and one relatively new model, Olsen & Coakley's Lag model, are evaluated using the OVERFLOW code. Turbulent shock-boundary layer interaction predictions are evaluated with three different experimental datasets: a series of 2D compression ramps at Mach 2.87, a series of 2D compression ramps at Mach 2.94, and an axisymmetric cone-flare at Mach 11. The experimental datasets include flows with no separation, moderate separation, and significant separation, and use several different experimental measurement techniques (including laser Doppler velocimetry (LDV), pitot-probe measurement, inclined hot-wire probe measurement, Preston tube skin friction measurement, and surface pressure measurement). Additionally, the OVERFLOW solutions are compared to the solutions of a second CFD code, DPLR. The predictions for weak shock-boundary layer interactions are in reasonable agreement with the experimental data. For strong shock-boundary layer interactions, all of the turbulence models overpredict the separation size and fail to predict the correct skin friction recovery distribution. In most cases, surface pressure predictions show too much upstream influence; however, including the tunnel side-wall boundary layers in the computation improves the separation predictions.

  6. Validation of computationally predicted substrates for laccase

    Directory of Open Access Journals (Sweden)

    Reena

    2014-10-01

    Full Text Available The present study reports the validation (oxidation) of computationally predicted substrates of laccase, using commercially available pure laccase from Trametes versicolor. Selected contaminants were predicted as potential targets for laccase oxidation by using an in-silico docking tool. The oxidation by laccase was measured by the change in absorbance at the specific λmax of each compound. Sinapic acid and tyrosine were taken as positive and negative controls, respectively. Oxidation was observed for m-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol, captan, atrazine, and thiodicarb, but not for malathion, which showed no activity. It could be speculated that the predicted substrates showing oxidation share structural and chemical homology with the positive control compound. In the case of malathion, structural non-homology with sinapic acid could account for its inactivity toward laccase; this requires further structural analysis. Thus, a remediation tool combining theoretical in-silico screening and subsequent experimental validation using pure laccase can be proposed. As the number and types of xenobiotics increase, screening them all experimentally for bioremediation becomes infeasible; this approach would reduce the time and cost required by other screening methods.

  7. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included

  8. Alfvén eigenmode evolution computed with the VENUS and KINX codes for the ITER baseline scenario

    Energy Technology Data Exchange (ETDEWEB)

    Isaev, M. Yu., E-mail: isaev-my@nrcki.ru [National Research Center Kurchatov Institute (Russian Federation); Medvedev, S. Yu. [Russian Academy of Sciences, Keldysh Institute (Russian Federation); Cooper, W. A. [Suisse Plasma Centre (Switzerland)

    2017-02-15

    A new application of the VENUS code is described, which computes alpha particle orbits in the perturbed electromagnetic fields and its resonant interaction with the toroidal Alfvén eigenmodes (TAEs) for the ITER device. The ITER baseline scenario with Q = 10 and the plasma toroidal current of 15 MA is considered as the most important and relevant for the International Tokamak Physics Activity group on energetic particles (ITPA-EP). For this scenario, typical unstable TAE-modes with the toroidal index n = 20 have been predicted that are localized in the plasma core near the surface with safety factor q = 1. The spatial structure of ballooning and antiballooning modes has been computed with the ideal MHD code KINX. The linear growth rates and the saturation levels taking into account the damping effects and the different mode frequencies have been calculated with the VENUS code for both ballooning and antiballooning TAE-modes.

  10. Quantum error correcting codes and one-way quantum computing: Towards a quantum memory

    CERN Document Server

    Schlingemann, D

    2003-01-01

    For realizing a quantum memory, we suggest first encoding quantum information via a quantum error correcting code and then concatenating combined decoding and re-encoding operations. This requires that the encoding and decoding operations can be performed faster than the typical decoherence time of the underlying system. The computational model underlying the one-way quantum computer, which has been introduced by Hans Briegel and Robert Raussendorf, provides a suitable concept for a fast implementation of quantum error correcting codes. It is shown explicitly in this article how encoding and decoding operations for stabilizer codes can be realized on a one-way quantum computer. This is based on the graph code representation of stabilizer codes, on the one hand, and the relation between cluster states and graph codes, on the other hand.

  11. Application of computational fluid dynamics methods to improve thermal hydraulic code analysis

    Science.gov (United States)

    Sentell, Dennis Shannon, Jr.

    A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.

  12. Recommendations for computer code selection of a flow and transport code to be used in undisturbed vadose zone calculations for TWRS immobilized environmental analyses

    Energy Technology Data Exchange (ETDEWEB)

    VOOGD, J.A.

    1999-04-19

    An analysis of three software proposals is performed to recommend a computer code for immobilized low-activity waste flow and transport modeling. The document uses the criteria established in HNF-1839, ''Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses'', as the basis for this analysis.

  13. Computer programs to predict induced effects of jets exhausting into a crossflow

    Science.gov (United States)

    Perkins, S. C., Jr.; Mendenhall, M. R.

    1984-01-01

    A user's manual was developed for two computer programs that predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for the viscous or nonpotential effects. This program manual contains a description of the use of both programs, instructions for preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.

  14. Quantum computation with topological codes from qubit to topological fault-tolerance

    CERN Document Server

    Fujii, Keisuke

    2015-01-01

    This book presents a self-consistent review of quantum computation with topological quantum codes. The book covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological fault-tolerant operations. The underlying basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, stabilizer formalism, and measurement-based quantum computation, are also introduced in a self-consistent way. The interdisciplinary fields between quantum information and other fields of physics such as condensed matter physics and statistical physics are also explored in terms of the topological quantum codes. This book thus provides the first comprehensive description of the whole picture of topological quantum codes and quantum computation with them.

  15. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    Science.gov (United States)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force into the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data from an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent relative to numerical simulations of a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The model accurately predicted the shape and size of the streamwise vorticity and velocity contours, as well as the peak vorticity and its location, when compared with both numerical simulations and experimental data, and the circulation it predicts matches that of the numerical simulation. It also predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators. The source term model allows a researcher to quickly investigate different locations of individual vortex generators or rows of them, and to conduct a preliminary investigation with minimal grid generation and computational time.

  16. Computational prediction of new auxetic materials.

    Science.gov (United States)

    Dagdelen, John; Montoya, Joseph; de Jong, Maarten; Persson, Kristin

    2017-08-22

    Auxetics comprise a rare family of materials that manifest negative Poisson's ratio, which causes an expansion instead of contraction under tension. Most known homogeneously auxetic materials are porous foams or artificial macrostructures, and there are few examples of inorganic materials that exhibit this behavior as polycrystalline solids. It is now possible to accelerate the discovery of materials with target properties, such as auxetics, using high-throughput computations, open databases, and efficient search algorithms. Candidates exhibiting features correlating with auxetic behavior were chosen from the set of more than 67 000 materials in the Materials Project database. Poisson's ratios were derived from the calculated elastic tensor of each material in this reduced set of compounds. We report that this strategy results in the prediction of three previously unidentified homogeneously auxetic materials as well as a number of compounds with a near-zero homogeneous Poisson's ratio, which are here denoted "anepirretic materials". There are very few inorganic materials with an auxetic homogeneous Poisson's ratio in polycrystalline form. Here the authors develop an approach to screening materials databases for target properties such as negative Poisson's ratio by using stability and structural motifs to predict new instances of homogeneous auxetic behavior, as well as a number of materials with near-zero Poisson's ratio.
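
    The screening step, deriving a homogeneous Poisson's ratio from each calculated elastic tensor, comes down to the isotropic relation nu = (3K - 2G) / (6K + 2G) applied to averaged moduli. A sketch in Python (the material IDs and moduli below are invented; the real workflow pulls Voigt-Reuss-Hill averaged K and G from the Materials Project elastic data):

    ```python
    def homogeneous_poisson_ratio(k_vrh: float, g_vrh: float) -> float:
        """Isotropic Poisson's ratio from averaged bulk (K) and shear (G)
        moduli: nu = (3K - 2G) / (6K + 2G)."""
        return (3.0 * k_vrh - 2.0 * g_vrh) / (6.0 * k_vrh + 2.0 * g_vrh)

    # Hypothetical (material_id, K, G) records in GPa:
    candidates = [("mp-A", 40.0, 90.0), ("mp-B", 100.0, 45.0), ("mp-C", 30.0, 46.0)]
    for mid, k, g in candidates:
        nu = homogeneous_poisson_ratio(k, g)
        tag = "auxetic" if nu < 0 else ("anepirretic" if abs(nu) < 0.05 else "")
        print(mid, round(nu, 3), tag)
    ```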

  17. Computational prediction of solubilizers' effect on partitioning.

    Science.gov (United States)

    Hoest, Jan; Christensen, Inge T; Jørgensen, Flemming S; Hovgaard, Lars; Frokjaer, Sven

    2007-02-01

    A computational model for the prediction of solubilizers' effect on drug partitioning has been developed. Membrane/water partitioning was evaluated by means of immobilized artificial membrane (IAM) chromatography. Four solubilizers were used to alter the partitioning in the IAM column. Two types of molecular descriptors were calculated: 2D descriptors using the MOE software and 3D descriptors using the Volsurf software. Structure-property relationships between each of the two types of descriptors and partitioning were established using partial least squares projection to latent structures (PLS) statistics. Statistically significant relationships between the molecular descriptors and the IAM data were identified. Based on the 2D descriptors, structure-property relationships with R²Y = 0.99 and Q² = 0.82-0.83 were obtained for some of the solubilizers. The most important descriptor was related to logP. For the Volsurf 3D descriptors, models with R²Y = 0.53-0.64 and Q² = 0.40-0.54 were obtained using five descriptors. The present study showed that it is possible to predict the partitioning of substances in an artificial phospholipid membrane, with or without the use of solubilizers.
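
    A minimal version of the PLS workflow, fit a model, report the fitted R²Y, and estimate Q² by cross-validation, can be sketched with scikit-learn. The descriptor matrix below is synthetic and merely stands in for the MOE/Volsurf descriptors; the component count and CV scheme are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Hypothetical data: rows are molecules, columns are 2D descriptors,
    # y is the measured IAM partitioning for one solubilizer condition.
    rng = np.random.default_rng(7)
    X = rng.normal(size=(60, 30))
    y = X[:, 0] * 2.0 + X[:, 1] + rng.normal(scale=0.1, size=60)

    pls = PLSRegression(n_components=5)
    pls.fit(X, y)
    r2y = pls.score(X, y)                              # fitted R2Y
    y_cv = cross_val_predict(pls, X, y, cv=7).ravel()  # cross-validated
    q2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"R2Y = {r2y:.2f}, Q2 = {q2:.2f}")
    ```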

  18. Two-Phase Flow in Geothermal Wells: Development and Uses of a Good Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Ramirez, Jaime

    1983-06-01

    A computer code is developed for vertical two-phase flow in geothermal wellbores. The two-phase correlations used were developed by Orkiszewski (1967) and others and are widely applicable in the oil and gas industry. The computer code is compared to the flowing survey measurements from wells in the East Mesa, Cerro Prieto, and Roosevelt Hot Springs geothermal fields with success. Well data from the Svartsengi field in Iceland are also used. Several applications of the computer code are considered. They range from reservoir analysis to wellbore deposition studies. It is considered that accurate and workable wellbore simulators have an important role to play in geothermal reservoir engineering.

  19. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...

  20. Second Generation Integrated Composite Analyzer (ICAN) Computer Code

    Science.gov (United States)

    Murthy, Pappu L. N.; Ginty, Carol A.; Sanfeliz, Jose G.

    1993-01-01

    This manual updates the original 1986 NASA TP-2515, Integrated Composite Analyzer (ICAN) Users and Programmers Manual. The various enhancements and newly added features are described to enable the user to prepare the appropriate input data to run this updated version of the ICAN code. For reference, the micromechanics equations are provided in an appendix and should be compared to those in the original manual for modifications. A complete output for a sample case is also provided in a separate appendix. The input to the code includes constituent material properties, factors reflecting the fabrication process, and laminate configuration. The code performs micromechanics, macromechanics, and laminate analyses, including the hygrothermal response of polymer-matrix-based fiber composites. The output includes the various ply and composite properties, the composite structural response, and the composite stress analysis results with details on failure. The code is written in FORTRAN 77 and can be used efficiently as a self-contained package (or as a module) in complex structural analysis programs. The input-output format has changed considerably from the original version of ICAN and is described extensively through the use of a sample problem.

  1. Code of Ethical Conduct for Computer-Using Educators: An ICCE Policy Statement.

    Science.gov (United States)

    Computing Teacher, 1987

    1987-01-01

    Prepared by the International Council for Computers in Education's Ethics and Equity Committee, this code of ethics for educators using computers covers nine main areas: curriculum issues, issues relating to computer access, privacy/confidentiality issues, teacher-related issues, student issues, the community, school organizational issues,…

  2. Implementation of discrete transfer radiation method into swift computational fluid dynamics code

    Directory of Open Access Journals (Sweden)

    Baburić Mario

    2004-01-01

    Full Text Available Computational fluid dynamics (CFD) has developed into a powerful tool widely used in science, technology, and industrial design applications whenever fluid flow, heat transfer, combustion, or other complicated physical processes are involved. During the decades of development of CFD codes, scientists wrote their own codes, which had to include not only the models of the processes of interest but also a whole spectrum of necessary CFD procedures, numerical techniques, pre-processing, and post-processing. That absorbed much of the scientists' effort in work that was copied many times over and did not actually produce added value. The arrival of commercial CFD codes brought relief to many engineers, who could now use the user-function approach for modelling purposes, entrusting the application to do the rest of the work. This paper shows the implementation of the discrete transfer radiation method into AVL's commercial CFD code SWIFT with the help of user-defined functions. A few standard verification test cases were performed first, in order to check the implementation of the radiation method itself against available analytic solutions. Afterwards, validation was done by simulating combustion in the experimental furnace at IJmuiden (the Netherlands), for which experimental measurements were available. The importance of radiation prediction in such real-size furnaces is proved again to be substantial, with radiation itself accounting for the major fraction of the overall heat transfer. The oil-combustion model used in the simulations was the semi-empirical one developed at the Power Engineering Department, which is suitable for a wide range of typical oil flames.
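
    The discrete transfer method itself amounts to marching the radiative transfer equation dI/ds = kappa * (Ib - I) along rays fired between surfaces. A minimal ray-march in Python (uniform cell size, absorption only, no scattering, invented furnace numbers) illustrates the per-cell recurrence:

    ```python
    import numpy as np

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def march_ray(temps, kappas, ds, i0=0.0):
        """Integrate dI/ds = kappa*(Ib - I) along one ray through a series of
        cells with the standard recurrence
        I_next = I*exp(-kappa*ds) + Ib*(1 - exp(-kappa*ds)).
        temps/kappas: per-cell temperature (K) and absorption coeff. (1/m)."""
        intensity = i0
        for t_cell, kappa in zip(temps, kappas):
            i_b = SIGMA * t_cell**4 / np.pi          # black-body intensity
            att = np.exp(-kappa * ds)
            intensity = intensity * att + i_b * (1.0 - att)
        return intensity

    # Ray crossing ten 0.1 m cells of hot gas in a hypothetical furnace:
    print(march_ray(temps=np.full(10, 1500.0), kappas=np.full(10, 0.3), ds=0.1))
    ```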

  3. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors

    Science.gov (United States)

    Schmidt, James F.

    1995-01-01

    An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. The code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities that are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also included is a comparison of the off-design code predictions with experimental data, which generally shows good agreement.

  4. On Predictive Coding for Erasure Channels Using a Kalman Framework

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Murthi, Manohar; Andersen, Søren Vang

    2009-01-01

    We present a new design method for robust low-delay coding of autoregressive sources for transmission across erasure channels. It is a fundamental rethinking of existing concepts. It considers the encoder a mechanism that produces signal measurements from which the decoder estimates the original...

  6. Computational Participation: Understanding Coding as an Extension of Literacy Instruction

    Science.gov (United States)

    Burke, Quinn; O'Byrne, W. Ian; Kafai, Yasmin B.

    2016-01-01

    Understanding the computational concepts on which countless digital applications run offers learners the opportunity to no longer simply read such media but also become more discerning end users and potentially innovative "writers" of new media themselves. To think computationally--to solve problems, to design systems, and to process and…

  7. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.

  8. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  9. Computer codes used during upgrading activities at MINT TRIGA reactor

    Energy Technology Data Exchange (ETDEWEB)

    Mohammad Suhaimi Kassim; Adnan Bokhari; Mohd. Idris Taib [Malaysian Institute for Nuclear Technology Research, Kajang (Malaysia)

    1999-10-01

    The MINT TRIGA Reactor is a 1-MW swimming-pool nuclear research reactor commissioned in 1982. In 1993, a project was initiated to upgrade the thermal power to 2 MW. IAEA assistance was sought for the various activities relevant to an upgrading exercise. For neutronics calculations, the IAEA provided expert assistance to introduce the WIMS code, TRIGAP, and EXTERMINATOR2. For thermal-hydraulics calculations, PARET and RELAP5 were introduced. Shielding codes include ANISN and MERCURE. However, in the middle of 1997, MINT decided to change the scope of the project to safety upgrading of the MINT Reactor. This paper describes some of the activities carried out during the upgrading process. (author)

  10. Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor

    Science.gov (United States)

    Ajmani, Kumud; Liu, Nan-Suey

    2005-01-01

    The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.

  11. Challenges of Computational Processing of Code-Switching

    OpenAIRE

    Çetinoğlu, Özlem; Schulz, Sarah; Vu, Ngoc Thang

    2016-01-01

    This paper addresses the challenges of Natural Language Processing (NLP) on non-canonical multilingual data in which two or more languages are mixed. It refers to code-switching, which has become more common in daily life and therefore attracts an increasing amount of attention from the research community. We report our experience that covers not only core NLP tasks such as normalisation, language identification, language modelling, part-of-speech tagging and dependency parsing but also more...

  12. Curved Duct Noise Prediction Using the Fast Scattering Code

    Science.gov (United States)

    Dunn, M. H.; Tinetti, Ana F.; Farassat, F.

    2007-01-01

    Results of a study to validate the Fast Scattering Code (FSC) as a duct noise predictor, including the effects of curvature, finite impedance on the walls, and uniform background flow, are presented in this paper. Infinite duct theory was used to generate the modal content of the sound propagating within the duct. Liner effects were incorporated via a sound absorbing boundary condition on the scattering surfaces. Simulations for a rectangular duct of constant cross-sectional area have been compared to analytical solutions and experimental data. Comparisons with analytical results indicate that the code can properly calculate a given dominant mode for hardwall surfaces. Simulated acoustic behavior in the presence of lined walls (using hardwall duct modes as incident sound) is consistent with expected trends. Duct curvature was found to enhance weaker modes and reduce pressure amplitude. Agreement between simulated and experimental results for a straight duct with hard walls (no flow) was excellent.

  13. POTRE: A computer code for the assessment of dose from ingestion

    Energy Technology Data Exchange (ETDEWEB)

    Hanusik, V.; Mitro, A.; Niedel, S.; Grosikova, B.; Uvirova, E.; Stranai, I. (Institute of Radioecology and Applied Nuclear Techniques, Kosice (Czechoslovakia))

    1991-01-01

    The paper describes the computer code POTRE and the auxiliary database system, which allow assessment of the radiation exposure from ingestion of foodstuffs contaminated by radionuclides released from a nuclear facility into the atmosphere during normal operation. (orig.).

  14. Speeding-up MADYMO 3D on serial and parallel computers using a portable coding environment

    NARCIS (Netherlands)

    Tsiandikos, T.; Rooijackers, H.F.L.; Asperen, F.G.J. van; Lupker, H.A.

    1996-01-01

    This paper outlines the strategy and methodology used to create a portable coding environment for the commercial package MADYMO. The objective is to design a global data structure that efficiently utilises the memory and cache of computers, so that one source code can be used for serial, vector and

  15. A Coding System for Qualitative Studies of the Information-Seeking Process in Computer Science Research

    Science.gov (United States)

    Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela

    2015-01-01

    Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…

  16. The Unified English Braille Code: Examination by Science, Mathematics, and Computer Science Technical Expert Braille Readers

    Science.gov (United States)

    Holbrook, M. Cay; MacCuspie, P. Ann

    2010-01-01

    Braille-reading mathematicians, scientists, and computer scientists were asked to examine the usability of the Unified English Braille Code (UEB) for technical materials. They had little knowledge of the code prior to the study. The research included two reading tasks, a short tutorial about UEB, and a focus group. The results indicated that the…

  17. Metropol, a computer code for the simulation of transport of contaminants with groundwater

    NARCIS (Netherlands)

    Sauter FJ; Hassanizadeh SM; Leijnse A; Glasbergen P; Slot AFM

    1990-01-01

    In this report a description is given of the computer code METROPOL. This code simulates the three dimensional flow of groundwater with varying density and the simultaneous transport of contaminants in low concentration and is based on the finite element method. The basic equations for groundwater

  18. Comparison of different computer platforms for running the Versatile Advection Code

    NARCIS (Netherlands)

    Toth, G.; Keppens, R.; Sloot, P.; Bubak, M.; Hertzberger, B.

    1998-01-01

    The Versatile Advection Code is a general tool for solving hydrodynamical and magnetohydrodynamical problems arising in astrophysics. We compare the performance of the code on different computer platforms, including work stations and vector and parallel supercomputers. Good parallel scaling can be a

  19. Code and papers: computing publication patterns in the LHC era

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Publications in scholarly journals establish the body of knowledge deriving from scientific research; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies. This presentation reviews the evolution of computing-oriented publications in HEP following the start of operation of LHC. Quantitative analyses are illustrated, which document the production of scholarly papers on computing-related topics by HEP experiments and core tools projects (including distributed computing R&D), and the citations they receive. Several scientometric indicators are analyzed to characterize the role of computing in HEP literature. Distinctive features of scholarly publication production in the software-oriented and hardware-oriented experimental HEP communities are highlighted. Current patterns and trends are compared to the situation in previous generations' HEP experiments at LEP, Tevatron and B-factories. The results of this scientometric analysis document objec...

  20. Proposed standards for peer-reviewed publication of computer code

    Science.gov (United States)

    Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...

  1. Comparing Fine-Grained Source Code Changes And Code Churn For Bug Prediction

    NARCIS (Netherlands)

    Giger, E.; Pinzger, M.; Gall, H.C.

    2011-01-01

    A significant amount of research effort has been dedicated to learning prediction models that allow project managers to efficiently allocate resources to those parts of a software system that most likely are bug-prone and therefore critical. Prominent measures for building bug prediction models are

  2. Computation of Gröbner basis for systematic encoding of generalized quasi-cyclic codes

    CERN Document Server

    Van, Vo Tam; Mita, Seiichi

    2008-01-01

    Generalized quasi-cyclic (GQC) codes form a wide and useful class of linear codes that includes, among others, quasi-cyclic codes, finite geometry (FG) low density parity check (LDPC) codes, and Hermitian codes. Although it is known that the systematic encoding of GQC codes is equivalent to the division algorithm in the theory of Gröbner bases of modules, there has been no algorithm that computes a Gröbner basis for all types of GQC codes. In this paper, we propose two algorithms to compute Gröbner bases for GQC codes from their parity check matrices: the echelon canonical form algorithm and the transpose algorithm. Both algorithms require a number of finite-field operations on the order of the third power of the code length. Each algorithm has its own character: the first is composed of elementary methods, while the second is based on a novel formula and is faster than the first for high-rate codes. Moreover, we show that a serial-in serial-out encoder architecture for FG LDPC cod...

  3. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music.

    Science.gov (United States)

    Vuust, Peter; Witek, Maria A G

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ("rhythm") and the brain's anticipatory structuring of music ("meter"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
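
    As a schematic of the Bayesian error-minimization principle invoked above (the notation is generic, not taken from the paper): an internal estimate mu of the cause of sensory input u is revised in proportion to the precision-weighted prediction error, so that confident predictions pull the estimate harder than uncertain ones:

```latex
\varepsilon = u - g(\mu), \qquad \dot{\mu} \;\propto\; \Pi\,\varepsilon,
```

    where g maps the estimate to a predicted input and \Pi is the precision (inverse variance) assigned to the error.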

  4. Windtalking Computers: Frequency Normalization, Binary Coding Systems and Encryption

    CERN Document Server

    Zirkind, Givon

    2009-01-01

    The goal of this paper is to discuss the application of known techniques, knowledge and technology in a novel way to encrypt computer and non-computer data. To date, most computers use base 2, and most encryption systems use ciphering and/or an encryption algorithm to convert data into a secret message. The method of having the computer "speak another secret language", as used in human military secret communications, has never been imitated. The author presents the theory and several possible implementations of a method by which computers can carry out secret communications analogous to human beings using a secret language or speaking multiple languages. The kind of encryption scheme proposed significantly increases the complexity of, and the effort needed for, decryption. As every methodology has its drawbacks, so too does the proposed system: its data is not as compact as base 2 would be. However, this is manageable and acceptable if the goal is very strong encryption: At least two methods and their ...
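
    A toy sketch of the base-conversion idea described above (this illustrates the concept only, not the author's actual scheme; the base and the symbol alphabet are arbitrary choices):

```python
# Toy illustration (not the paper's actual scheme): re-encode bytes in an
# arbitrary base using an arbitrary, secret symbol alphabet. The base and
# the alphabet together act like a "secret language" for the data.
SECRET_BASE = 5
SECRET_ALPHABET = "QZXWV"  # one symbol per digit; both are illustrative choices

def encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    digits = []
    while n:
        n, d = divmod(n, SECRET_BASE)
        digits.append(SECRET_ALPHABET[d])
    return "".join(reversed(digits)) or SECRET_ALPHABET[0]

def decode(text: str, length: int) -> bytes:
    n = 0
    for ch in text:
        n = n * SECRET_BASE + SECRET_ALPHABET.index(ch)
    return n.to_bytes(length, "big")

msg = b"attack at dawn"
ct = encode(msg)
assert decode(ct, len(msg)) == msg
```

    As the abstract notes, the price of leaving base 2 is expansion: the base-5 text above occupies more symbols than the original bytes.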

  5. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    Science.gov (United States)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  6. International standard problem (ISP) no. 41 follow up exercise: Containment iodine computer code exercise: parametric studies

    Energy Technology Data Exchange (ETDEWEB)

    Ball, J.; Glowa, G.; Wren, J. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Ewig, F. [GRS Koln (Germany); Dickenson, S. [AEAT, (United Kingdom); Billarand, Y.; Cantrel, L. [IPSN (France); Rydl, A. [NRIR (Czech Republic); Royen, J. [OECD/NEA (France)

    2001-11-01

    This report describes the results of the second phase of International Standard Problem (ISP) 41, an iodine behaviour code comparison exercise. The first phase of the study, which was based on a simple Radioiodine Test Facility (RTF) experiment, demonstrated that all of the iodine behaviour codes had the capability to reproduce iodine behaviour for a narrow range of conditions (single temperature, no organic impurities, controlled pH steps). The current phase, a parametric study, was designed to evaluate the sensitivity of iodine behaviour codes to boundary conditions such as pH, dose rate, temperature and initial I{sup -} concentration. The codes used in this exercise were IODE(IPSN), IODE(NRIR), IMPAIR(GRS), INSPECT(AEAT), IMOD(AECL) and LIRIC(AECL). The parametric study described in this report identified several areas of discrepancy between the various codes. In general, the codes agree regarding qualitative trends, but their predictions regarding the actual amount of volatile iodine varied considerably. The largest source of the discrepancies between code predictions appears to be their different approaches to modelling the formation and destruction of organic iodides. A recommendation arising from this exercise is that an additional code comparison exercise be performed on organic iodide formation, against data obtained from intermediate-scale studies (two RTF (AECL, Canada) and two CAIMAN (IPSN, France) experiments have been chosen). This comparison will allow each of the code users to realistically evaluate and improve the organic iodide behaviour sub-models within their codes. (author)

  7. A predictive coding framework for rapid neural dynamics during sentence-level language comprehension

    NARCIS (Netherlands)

    Lewis, A.G.; Bastiaansen, M.C.M.

    2015-01-01

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using electroencephalography (EEG) and/or magnetoencephalography (MEG), and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding account

  8. SENDIN and SENTINEL: two computer codes to assess the effects of nuclear data changes

    Energy Technology Data Exchange (ETDEWEB)

    Marable, J. H.; Drischler, J. D.; Weisbin, C. R.

    1977-07-01

    A description is given of the computer code SENTINEL, which provides a simple means for finding the effects on calculated reactor and shielding performance parameters due to proposed changes in the cross section data base. This code uses predetermined detailed sensitivity coefficients in SENPRO format, which is described in Appendix A. Knowledge of details of the particular reactor and/or shielding assemblies is not required of the user. Also described is the computer code SENDIN, which converts unformatted (binary) sensitivity files to card image form and vice versa. This is useful for transferring sensitivity files from one installation to another.
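
    The first-order relation that such sensitivity-based tools evaluate is standard; in generic notation (not necessarily SENTINEL's own), a calculated performance parameter R responds to cross-section changes as

```latex
\frac{\Delta R}{R} \;\approx\; \sum_{i} S_{i}\,\frac{\Delta\sigma_{i}}{\sigma_{i}},
\qquad
S_{i} \;=\; \frac{\sigma_{i}}{R}\,\frac{\partial R}{\partial \sigma_{i}},
```

    so predetermined sensitivity coefficients S_i let the effect of a proposed data change be estimated without rerunning the full transport calculation.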

  9. TPASS: a gamma-ray spectrum analysis and isotope identification computer code

    Energy Technology Data Exchange (ETDEWEB)

    Dickens, J.K.

    1981-03-01

    The gamma-ray spectral data-reduction and analysis computer code TPASS is described. This computer code is used to analyze complex Ge(Li) gamma-ray spectra to obtain peak areas corrected for detector efficiencies, from which gamma-ray yields are determined. These yields are compared with an isotope gamma-ray data file to determine the contributions to the observed spectrum from decay of specific radionuclides. A complete FORTRAN listing of the code and a complex test case are given.
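
    For context, the standard gamma-spectroscopy relation connecting an efficiency-corrected peak area to a radionuclide quantity (generic notation; not necessarily the exact expression coded in TPASS):

```latex
A \;=\; \frac{N_{\mathrm{peak}}}{\varepsilon(E_{\gamma})\; I_{\gamma}\; t_{\mathrm{live}}},
```

    where N_peak is the net peak area, \varepsilon(E_\gamma) the detector efficiency at the gamma-ray energy, I_\gamma the emission probability, and t_live the live counting time.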

  10. Mathematical model and computer code for the analysis of advanced fast reactor dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Schukin, N.V. (Moscow Engineering Physics Inst. (Russian Federation)); Korsun, A.S. (Moscow Engineering Physics Inst. (Russian Federation)); Vitruk, S.G. (Moscow Engineering Physics Inst. (Russian Federation)); Zimin, V.G. (Moscow Engineering Physics Inst. (Russian Federation)); Romanin, S.D. (Moscow Engineering Physics Inst. (Russian Federation))

    1993-04-01

    Efficient algorithms for mathematical modeling of 3-D neutron kinetics and thermal hydraulics are described. The model and appropriate computer code make it possible to analyze a variety of transient events ranging from normal operational states to catastrophic accident excursions. To verify the code, a number of calculations of different kinds of transients were carried out. The results of the calculations show that the model and the computer code could be used for conceptual design of advanced liquid metal reactors. The detailed description of calculations of the TOP WS accident is presented. (orig./DG)

  11. Development of a system of computer codes for severe accident analyses and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Soon Hong; Cheon, Moon Heon; Cho, Nam jin; No, Hui Cheon; Chang, Hyeon Seop; Moon, Sang Kee; Park, Seok Jeong; Chung, Jee Hwan [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1991-12-15

    The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary to conduct individual plant examinations for domestic nuclear power plants. As a result of this study, one can conduct severe accident assessments more easily and can extract plant-specific vulnerabilities to severe accidents along with ideas for enhancing overall accident resistance. The scope and contents of this study are as follows: development of a system of computer codes for severe accident analyses, and development of a severe accident management strategy.

  12. Proceedings of the conference on computer codes and the linear accelerator community

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, R.K. (comp.)

    1990-07-01

    The conference whose proceedings you are reading was envisioned as the second in a series, the first having been held in San Diego in January 1988. The intended participants were those people who are actively involved in writing and applying computer codes for the solution of problems related to the design and construction of linear accelerators. The first conference reviewed many of the codes both extant and under development. This second conference provided an opportunity to update the status of those codes, and to provide a forum in which emerging new 3D codes could be described and discussed. The afternoon poster session on the second day of the conference provided an opportunity for extended discussion. All in all, this conference was felt to be quite a useful interchange of ideas and developments in the field of 3D calculations, parallel computation, higher-order optics calculations, and code documentation and maintenance for the linear accelerator community. A third conference is planned.

  13. Exact Gap Computation for Code Coverage Metrics in ISO-C

    CERN Document Server

    Richter, Dirk; 10.4204/EPTCS.80.4

    2012-01-01

    Test generation and test data selection are difficult tasks for model-based testing. Tests for a program can be combined into a test suite. A lot of research has been done to quantify and improve the quality of a test suite. Code coverage metrics estimate the quality of a test suite: the quality is considered good if the code coverage value is high, ideally 100%. Unfortunately, it might be impossible to achieve 100% code coverage, for example because of dead code. There is thus a gap between the feasible and the theoretical maximal code coverage value. Our review of the research indicates that none of the current research is concerned with exact gap computation. This paper presents a framework to compute such gaps exactly for an ISO-C compatible semantics and similar languages. We describe an efficient approximation of the gap in all other cases. Thus, a tester can decide whether more tests are possible or necessary to achieve better coverage.
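
    The dead-code gap mentioned above is easy to see in a small example (shown here in Python for brevity; the paper itself works with ISO-C semantics):

```python
# Illustrative only: a branch that can never execute makes 100% coverage
# infeasible, so the feasible maximum lies below the theoretical one --
# this difference is the "gap".
def clamp(x: int) -> int:
    if x > 10:
        x = 10
    if x > 20:      # dead branch: x was just clamped to <= 10,
        x = 0       # so no test input can ever cover this line
    return x

# A test suite covering all reachable behaviour still misses the dead branch:
assert clamp(5) == 5 and clamp(50) == 10
```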

  14. Visualization of elastic wavefields computed with a finite difference code

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, S. [Lawrence Livermore National Lab., CA (United States); Harris, D.

    1994-11-15

    The authors have developed a finite difference elastic propagation model to simulate seismic wave propagation through geophysically complex regions. To facilitate debugging and to assist seismologists in interpreting the seismograms generated by the code, they have developed an X Windows interface that permits viewing of successive temporal snapshots of the (2D) wavefield as they are calculated. The authors present a brief video displaying the generation of seismic waves by an explosive source on a continent, which propagate to the edge of the continent then convert to two types of acoustic waves. This sample calculation was part of an effort to study the potential of offshore hydroacoustic systems to monitor seismic events occurring onshore.

  15. Introduction to error correcting codes in quantum computers

    CERN Document Server

    Salas, P J

    2006-01-01

    The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, gate implementation and quantum error correction are considered error free. Finally, we relax this unrealistic assumption by introducing the concept of fault tolerance. The existence of an error threshold permits the conclusion that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: < 5.2 x 10^-5. The encoding is made by means of the [[7,1,3
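
    The single-qubit depolarizing channel referred to above has the standard form (p is the error probability; X, Y, Z are the Pauli operators):

```latex
\mathcal{E}(\rho) \;=\; (1-p)\,\rho \;+\; \frac{p}{3}\left( X\rho X + Y\rho Y + Z\rho Z \right),
```

    i.e. each of the three Pauli errors occurs with probability p/3; threshold estimates like the one quoted bound the largest p for which fault-tolerant computation still succeeds.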

  16. A predictive coding account of MMN reduction in schizophrenia

    NARCIS (Netherlands)

    Wacongne, C.

    2016-01-01

    The mismatch negativity (MMN) is thought to be an index of the automatic activation of a specialized network for active prediction and deviance detection in the auditory cortex. It is consistently reduced in schizophrenic patients and has received a lot of interest as a clinical and translational to

  17. Precise minds in uncertain worlds: predictive coding in autism.

    Science.gov (United States)

    Van de Cruys, Sander; Evers, Kris; Van der Hallen, Ruth; Van Eylen, Lien; Boets, Bart; de-Wit, Lee; Wagemans, Johan

    2014-10-01

    There have been numerous attempts to explain the enigma of autism, but existing neurocognitive theories often provide merely a refined description of 1 cluster of symptoms. Here we argue that deficits in executive functioning, theory of mind, and central coherence can all be understood as the consequence of a core deficit in the flexibility with which people with autism spectrum disorder can process violations to their expectations. More formally we argue that the human mind processes information by making and testing predictions and that the errors resulting from violations to these predictions are given a uniform, inflexibly high weight in autism spectrum disorder. The complex, fluctuating nature of regularities in the world and the stochastic and noisy biological system through which people experience it require that, in the real world, people not only learn from their errors but also need to (meta-)learn to sometimes ignore errors. Especially when situations (e.g., social) or stimuli (e.g., faces) become too complex or dynamic, people need to tolerate a certain degree of error in order to develop a more abstract level of representation. Starting from an inability to flexibly process prediction errors, a number of seemingly core deficits become logically secondary symptoms. Moreover, an insistence on sameness or the acting out of stereotyped and repetitive behaviors can be understood as attempts to provide a reassuring sense of predictive success in a world otherwise filled with error. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  18. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  19. Performance evaluation of moment-method codes on an Intel iPSC/860 hypercube computer

    Energy Technology Data Exchange (ETDEWEB)

    Klimkowski, K.; Ling, H. (Texas Univ., Austin (United States))

    1993-09-01

    An analytical evaluation is conducted of the performance of a moment-method code on a parallel computer, treating algorithmic complexity costs within the framework of matrix size and the 'subblock-size' matrix-partitioning parameter. A scaled-efficiencies analysis is conducted for the measured computation times of the matrix-fill operation and LU decomposition. 6 refs.

  20. Computing the Feng-Rao distances for codes from order domains

    DEFF Research Database (Denmark)

    Ruano Benito, Diego

    2007-01-01

    We compute the Feng–Rao distance of a code coming from an order domain with a simplicial value semigroup. The main tool is the Apéry set of a semigroup that can be computed using a Gröbner basis....
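
    The Apéry set mentioned above has a short standard definition: for a numerical semigroup S and a nonzero element b in S (the simplicial value semigroups in the paper are more general),

```latex
\mathrm{Ap}(S, b) \;=\; \{\, s \in S \;:\; s - b \notin S \,\},
```

    the set of smallest semigroup elements in each residue class modulo b; it is this finite set that the Gröbner basis computation makes accessible.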

  1. STAT, GAPS, STRAIN, DRWDIM: a system of computer codes for analyzing HTGR fuel test element metrology data. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Saurwein, J.J.

    1977-08-01

    A system of computer codes has been developed to statistically reduce Peach Bottom fuel test element metrology data and to compare the material strains and fuel rod-fuel hole gaps computed from these data with HTGR design code predictions. The codes included in this system are STAT, STRAIN, GAPS, and DRWDIM. STAT statistically evaluates test element metrology data yielding fuel rod, fuel body, and sleeve irradiation-induced strains; fuel rod anisotropy; and additional data characterizing each analyzed fuel element. STRAIN compares test element fuel rod and fuel body irradiation-induced strains computed from metrology data with the corresponding design code predictions. GAPS compares test element fuel rod-fuel hole heat transfer gaps computed from metrology data with the corresponding design code predictions. DRWDIM plots the measured and predicted gaps and strains. Although specifically developed to expedite the analysis of Peach Bottom fuel test elements, this system can be applied, without extensive modification, to the analysis of Fort St. Vrain or other HTGR-type fuel test elements.

  2. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    Science.gov (United States)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distribution is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector is selected for simulation in the present study. The proposed simulation algorithm includes four main steps. The first step is the modeling of the neutron/gamma particle transport and their interactions with the materials in the environment and detector volume. In the second step, the number of scintillation photons due to charged particles such as electrons, alphas, protons and carbon nuclei in the scintillator material is calculated. In the third step, the transport of scintillation photons in the scintillator and lightguide is simulated. Finally, the resolution corresponding to the experiment is considered in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7 and PHRESP, the developed computer code is applicable to both neutron and gamma sources. Hence, the discrimination of neutron and gamma in mixed fields may be performed using the MCNPX-ESUT computer code. The main feature of the MCNPX-ESUT computer code is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to a monoenergetic neutron/gamma source in an NE-213 detector are simulated using the MCNPX-ESUT computer code. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 664 (2012) 304-309.) and the results obtained from similar computer codes like SCINFUL, NRESP7 and Geant4. The simulated gamma pulse height distribution for a 137Cs
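
    A minimal sketch of the last simulation step described above, smearing an ideal simulated spectrum with an energy-dependent Gaussian resolution (the resolution model and its coefficients are illustrative assumptions, not the MCNPX-ESUT values):

```python
import numpy as np

def broaden(light_output, a=0.05, b=0.10):
    """Apply Gaussian resolution: sigma(L) = L * sqrt(a^2 + b^2 / L)."""
    L = np.asarray(light_output, dtype=float)
    sigma = L * np.sqrt(a**2 + b**2 / np.maximum(L, 1e-9))
    return np.random.normal(L, sigma)

ideal = np.random.uniform(0.2, 2.0, size=100_000)   # stand-in event list (MeVee)
measured = broaden(ideal)
hist, edges = np.histogram(measured, bins=200)      # simulated pulse-height distribution
```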

  3. Fine-grained code changes and bugs: Improving bug prediction

    OpenAIRE

    Giger, Emanuel

    2012-01-01

    Software development and, in particular, software maintenance are time consuming and require detailed knowledge of the structure and the past development activities of a software system. Limited resources and time constraints make the situation even more difficult. Therefore, a significant amount of research effort has been dedicated to learning software prediction models that allow project members to allocate and spend the limited resources efficiently on the (most) critical parts of their s...

  4. Robust Coding for Lossy Computing with Observation Costs

    CERN Document Server

    Ahmadi, Behzad

    2011-01-01

    An encoder wishes to minimize the bit rate necessary to guarantee that a decoder is able to calculate a symbol-wise function of a sequence available only at the encoder and a sequence that can be measured only at the decoder. This classical problem, first studied by Yamamoto, is addressed here by including two new aspects: (i) The decoder obtains noisy measurements of its sequence, where the quality of such measurements can be controlled via a cost-constrained "action" sequence, which is taken at the decoder or at the encoder; (ii) Measurement at the decoder may fail in a way that is unpredictable to the encoder, thus requiring robust encoding. The considered scenario generalizes known settings such as the Heegard-Berger-Kaspi and the "source coding with a vending machine" problems. The rate-distortion-cost function is derived in relevant special cases, along with general upper and lower bounds. Numerical examples are also worked out to obtain further insight into the optimal system design.
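
    For orientation, the classical Wyner-Ziv rate-distortion function with decoder side information Y, which settings of this kind generalize (standard notation, not necessarily the paper's):

```latex
R^{\mathrm{WZ}}(D) \;=\; \min_{p(u\mid x),\ \hat{x}(u,y)} \; I(X;U \mid Y)
\quad \text{subject to} \quad \mathbb{E}\!\left[ d\big(X, \hat{X}\big) \right] \le D.
```

    Here the decoder additionally wants a function of both sequences, its measurement of Y is noisy and costly, and the measurement may fail, each of which modifies this baseline.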

  5. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadened the scope for parallelism. Several compilers have been updated to address the emerging challenges of synchronization and threading. Appropriate program and algorithm classification greatly benefits software engineers by exposing opportunities for effective parallelization. In the present work we investigate existing species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen whose structure matches different issues and performs a given task. We have tested these algorithms using existing automatic species-extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant information that is not captured by the original species of algorithms. We implemented these new ideas in the tool, enabling automatic characterization of program code.

  6. The Uncertainty Test for the MAAP Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Park, S. H.; Song, Y. M.; Park, S. Y.; Ahn, K. I.; Kim, K. R.; Lee, Y. J. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-10-15

    After the Three Mile Island Unit 2 (TMI-2) and Chernobyl accidents, severe accident safety issues have been treated from various angles. A major issue in our research is the level 2 PSA. The difficulty in expanding the level 2 PSA as a risk-informed activity is the uncertainty. In the past, weight was attached to improving the quality of the internal-events PSA, but that effort is insufficient to decrease the phenomenological uncertainty in the level 2 PSA. In our country, the degree of uncertainty in level 2 PSA models is high, and it is necessary to secure a model that decreases this uncertainty. We do not yet have experience with uncertainty assessment technology; the assessment systems themselves come from advanced nations. In advanced nations, severe accident simulators are implemented at the hardware level, whereas in our case basic functions can be implemented at the software level. Under these circumstances, similar systems at home and abroad, such as UQM and MELCOR, were surveyed. Referring to these, the SAUNA (Severe Accident UNcertainty Analysis) system is being developed in our project to assess and decrease the uncertainty in a level 2 PSA. The MAAP code was selected to analyze the uncertainty in a severe accident.
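
    A minimal sketch of the parametric uncertainty propagation that a system like SAUNA performs around a severe-accident code: sample uncertain inputs, run the model for each sample, and summarize the output spread (the input distributions and the toy model below are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
inputs = {
    "core_melt_fraction": rng.uniform(0.3, 0.9, n),
    "containment_leak_rate": rng.lognormal(mean=-4.0, sigma=0.5, size=n),
}

def toy_release_model(melt, leak):
    return melt * leak  # stand-in for an actual MAAP run

releases = toy_release_model(inputs["core_melt_fraction"],
                             inputs["containment_leak_rate"])
p5, p50, p95 = np.percentile(releases, [5, 50, 95])
print(f"release fraction: median {p50:.2e}, 90% band [{p5:.2e}, {p95:.2e}]")
```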

  7. The 3D MHD code GOEMHD3 for astrophysical plasmas with large Reynolds numbers. Code description, verification, and computational performance

    Science.gov (United States)

    Skála, J.; Baruffa, F.; Büchner, J.; Rampp, M.

    2015-08-01

    Context. The numerical simulation of turbulence and flows in almost ideal astrophysical plasmas with large Reynolds numbers motivates the implementation of magnetohydrodynamical (MHD) computer codes with low resistivity. They need to be computationally efficient and scale well with large numbers of CPU cores, allow obtaining a high grid resolution over large simulation domains, and be easily and modularly extensible, for instance, to new initial and boundary conditions. Aims: Our aims are the implementation, optimization, and verification of a computationally efficient, highly scalable, and easily extensible low-dissipative MHD simulation code for the numerical investigation of the dynamics of astrophysical plasmas with large Reynolds numbers in three dimensions (3D). Methods: The new GOEMHD3 code discretizes the ideal part of the MHD equations using a fast and efficient leap-frog scheme that is second-order accurate in space and time and whose initial and boundary conditions can easily be modified. For the investigation of diffusive and dissipative processes the corresponding terms are discretized by a DuFort-Frankel scheme. To always fulfill the Courant-Friedrichs-Lewy stability criterion, the time step of the code is adapted dynamically. Numerically induced local oscillations are suppressed by explicit, externally controlled diffusion terms. Non-equidistant grids are implemented, which enhance the spatial resolution, where needed. GOEMHD3 is parallelized based on the hybrid MPI-OpenMP programming paradigm, adopting a standard two-dimensional domain-decomposition approach. Results: The ideal part of the equation solver is verified by performing numerical tests of the evolution of the well-understood Kelvin-Helmholtz instability and of Orszag-Tang vortices. The accuracy of solving the (resistive) induction equation is tested by simulating the decay of a cylindrical current column. Furthermore, we show that the computational performance of the code scales very
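
    A minimal sketch of the two numerical ingredients named above, reduced to 1D linear advection rather than MHD (grid size, CFL number and initial profile are illustrative choices):

```python
import numpy as np

# Leap-frog update (second order in space and time) with a time step that is
# re-evaluated every step so the Courant-Friedrichs-Lewy criterion always holds.
nx, L, c, cfl = 400, 1.0, 1.0, 0.8
dx = L / nx
x = np.arange(nx) * dx
u_old = np.exp(-200 * (x - 0.3) ** 2)                  # profile at t - dt
dt = cfl * dx / abs(c)                                  # CFL-limited time step
u = u_old - c * dt / dx * (u_old - np.roll(u_old, 1))   # one upwind step to start

for _ in range(200):
    dt = cfl * dx / abs(c)   # trivial here, but essential when the wave speed varies
    u_new = u_old - c * dt / dx * (np.roll(u, -1) - np.roll(u, 1))  # leap-frog
    u_old, u = u, u_new
```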

  8. Development of a model and computer code to describe solar grade silicon production processes

    Science.gov (United States)

    Gould, R. K.; Srivastava, R.

    1979-01-01

    Two computer codes were developed for describing flow reactors in which high purity, solar grade silicon is produced via reduction of gaseous silicon halides. The first is the CHEMPART code, an axisymmetric, marching code which treats two-phase flows with models describing detailed gas-phase chemical kinetics, particle formation, and particle growth. It can be used to describe flow reactors in which reactants mix, react, and form a particulate phase. Detailed radial gas-phase composition, temperature, velocity, and particle size distribution profiles are computed. Also, deposition of heat, momentum, and mass (either particulate or vapor) on reactor walls is described. The second code is a modified version of the GENMIX boundary layer code which is used to compute rates of heat, momentum, and mass transfer to the reactor walls. This code lacks the detailed chemical kinetics and particle handling features of the CHEMPART code but has the virtue of running much more rapidly than CHEMPART, while treating the phenomena occurring in the boundary layer in more detail.

  9. A new hybrid coding for protein secondary structure prediction based on primary structure similarity.

    Science.gov (United States)

    Li, Zhong; Wang, Jing; Zhang, Shunpu; Zhang, Qifeng; Wu, Wuming

    2017-03-16

    The coding pattern of protein can greatly affect the prediction accuracy of protein secondary structure. In this paper, a novel hybrid coding method based on the physicochemical properties of amino acids and tendency factors is proposed for the prediction of protein secondary structure. The principal component analysis (PCA) is first applied to the physicochemical properties of amino acids to construct a 3-bit-code, and then the 3 tendency factors of amino acids are calculated to generate another 3-bit-code. Two 3-bit-codes are fused to form a novel hybrid 6-bit-code. Furthermore, we make a geometry-based similarity comparison of the protein primary structure between the reference set and the test set before the secondary structure prediction. We finally use the support vector machine (SVM) to predict those amino acids which are not detected by the primary structure similarity comparison. Experimental results show that our method achieves a satisfactory improvement in accuracy in the prediction of protein secondary structure.
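
    A rough illustration of how a PCA-based 3-bit code per amino acid could be derived (the property matrix and the median-threshold binarization below are assumptions made for illustration; the paper's actual property set and construction differ in detail):

```python
import numpy as np

rng = np.random.default_rng(1)
properties = rng.normal(size=(20, 7))          # stand-in: 20 amino acids x 7 properties
centered = properties - properties.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:3].T                   # top-3 principal component scores
bits = (scores > np.median(scores, axis=0)).astype(int)  # 20 x 3 binary code
```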

  10. Validation of physics and thermalhydraulic computer codes for advanced Candu reactor applications

    Energy Technology Data Exchange (ETDEWEB)

    Wren, D.J.; Popov, N.; Snell, V.G. [Atomic Energy of Canada Ltd, (Canada)

    2004-07-01

    Atomic Energy of Canada Ltd. (AECL) is developing an Advanced Candu Reactor (ACR) that is an evolutionary advancement of the currently operating Candu 6 reactors. The ACR is being designed to produce electrical power for a capital cost and at a unit-energy cost significantly less than that of the current reactor designs. The ACR retains the modular Candu concept of horizontal fuel channels surrounded by a heavy water moderator. However, ACR uses slightly enriched uranium fuel compared to the natural uranium used in Candu 6. This achieves the twin goals of improved economics (via large reductions in the heavy water moderator volume and replacement of the heavy water coolant with light water coolant) and improved safety. AECL has developed and implemented a software quality assurance program to ensure that its analytical, scientific and design computer codes meet the required standards for software used in safety analyses. Since the basic design of the ACR is equivalent to that of the Candu 6, most of the key phenomena associated with the safety analyses of ACR are common, and the Candu industry standard tool-set of safety analysis codes can be applied to the analysis of the ACR. A systematic assessment of computer code applicability addressing the unique features of the ACR design was performed covering the important aspects of the computer code structure, models, constitutive correlations, and validation database. Arising from this assessment, limited additional requirements for code modifications and extensions to the validation databases have been identified. This paper provides an outline of the AECL software quality assurance program process for the validation of computer codes used to perform physics and thermal-hydraulics safety analyses of the ACR. It describes the additional validation work that has been identified for these codes and the planned, and ongoing, experimental programs to extend the code validation as required to address specific ACR design

  11. Issues in computational fluid dynamics code verification and validation

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, W.L.; Blottner, F.G.

    1997-09-01

    A broad range of mathematical modeling errors of fluid flow physics and numerical approximation errors are addressed in computational fluid dynamics (CFD). It is strongly believed that if CFD is to have a major impact on the design of engineering hardware and flight systems, the level of confidence in complex simulations must substantially improve. To better understand the present limitations of CFD simulations, a wide variety of physical modeling, discretization, and solution errors are identified and discussed. Here, discretization and solution errors refer to all errors caused by conversion of the original partial differential, or integral, conservation equations representing the physical process, to algebraic equations and their solution on a computer. The impact of boundary conditions on the solution of the partial differential equations and their discrete representation will also be discussed. Throughout the article, clear distinctions are made between the analytical mathematical models of fluid dynamics and the numerical models. Lax's Equivalence Theorem and its frailties in practical CFD solutions are pointed out. Distinctions are also made between the existence and uniqueness of solutions to the partial differential equations as opposed to the discrete equations. Two techniques are briefly discussed for the detection and quantification of certain types of discretization and grid resolution errors.
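
    One standard tool for detecting and quantifying grid-resolution errors of the kind discussed here is Richardson extrapolation (generic form, not necessarily the authors' notation): given solutions f_1, f_2, f_3 on fine, medium, and coarse grids related by refinement ratio r, the observed order of accuracy and an error estimate follow from

```latex
p \;=\; \frac{\ln\!\left( \dfrac{f_{3}-f_{2}}{f_{2}-f_{1}} \right)}{\ln r},
\qquad
f_{\mathrm{exact}} \;\approx\; f_{1} + \frac{f_{1}-f_{2}}{r^{\,p}-1}.
```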

  12. Algorithms and computer codes for atomic and molecular quantum scattering theory

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, L. (ed.)

    1979-01-01

    This workshop has succeeded in bringing up 11 different coupled equation codes on the NRCC computer, testing them against a set of 24 different test problems and making them available to the user community. These codes span a wide variety of methodologies, and factors of up to 300 were observed in the spread of computer times on specific problems. A very effective method was devised for examining the performance of the individual codes in the different regions of the integration range. Many of the strengths and weaknesses of the codes have been identified. Based on these observations, a hybrid code has been developed which is significantly superior to any single code tested. Thus, not only have the original goals been fully met, the workshop has resulted directly in an advancement of the field. All of the computer programs except VIVS are available upon request from the NRCC. Since an improved version of VIVS is contained in the hybrid program, VIVAS, it was not made available for distribution. The individual program LOGD is, however, available. In addition, programs which compute the potential energy matrices of the test problems are also available. The software library names for Tests 1, 2 and 4 are HEH2, LICO, and EN2, respectively.

  13. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    Science.gov (United States)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  14. Verification, validation, and predictive capability in computational engineering and physics.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Hirsch, Charles (Vrije Universiteit Brussel, Brussels, Belgium); Trucano, Timothy Guy

    2003-02-01

    Developers of computer codes, analysts who use the codes, and decision makers who rely on the results of the analyses face a critical question: How should confidence in modeling and simulation be critically assessed? Verification and validation (V&V) of computational simulations are the primary methods for building and quantifying this confidence. Briefly, verification is the assessment of the accuracy of the solution to a computational model. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data. In verification, the relationship of the simulation to the real world is not an issue. In validation, the relationship between computation and the real world, i.e., experimental data, is the issue.

  15. Basic prediction techniques in modern video coding standards

    CERN Document Server

    Kim, Byung-Gyu

    2016-01-01

    This book discusses in detail the basic algorithms of video compression that are widely used in modern video codecs. The authors dissect complicated specifications and present material in a way that gets readers quickly up to speed by describing video compression algorithms succinctly, without going into the mathematical details and technical specifications. For accelerated learning, hybrid codec structure and inter- and intra-prediction techniques in MPEG-4, H.264/AVC, and HEVC are discussed together. In addition, the latest research in fast encoder design for HEVC and H.264/AVC is also included.
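
    A minimal sketch of the intra-prediction idea treated (in far more elaborate form) by these standards: predict a block from already-decoded neighbors and code only the residual. DC mode is shown; the 4x4 block size and sample values are illustrative:

```python
import numpy as np

def dc_predict(top: np.ndarray, left: np.ndarray, size: int = 4) -> np.ndarray:
    """DC intra prediction: fill the block with the mean of its neighbors."""
    dc = int(round((top.sum() + left.sum()) / (len(top) + len(left))))
    return np.full((size, size), dc, dtype=np.int32)

block = np.arange(16, dtype=np.int32).reshape(4, 4) + 100
top = np.array([99, 101, 100, 102]); left = np.array([98, 100, 103, 101])
pred = dc_predict(top, left)
residual = block - pred          # this, not the block itself, gets transformed/coded
reconstructed = pred + residual  # the decoder inverts the process
assert (reconstructed == block).all()
```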

  16. [Vascular assessment in stroke codes: role of computed tomography angiography].

    Science.gov (United States)

    Mendigaña Ramos, M; Cabada Giadas, T

    2015-01-01

    Advances in imaging studies for acute ischemic stroke are largely due to the development of new efficacious treatments carried out in the acute phase. Together with computed tomography (CT) perfusion studies, CT angiography facilitates the selection of patients who are likely to benefit from appropriate early treatment. CT angiography plays an important role in the workup for acute ischemic stroke because it makes it possible to confirm vascular occlusion, assess the collateral circulation, and obtain an arterial map that is very useful for planning endovascular treatment. In this review about CT angiography, we discuss the main technical characteristics, emphasizing the usefulness of the technique in making the right diagnosis and improving treatment strategies. Copyright © 2012 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  17. Symbolic coding for noninvertible systems: uniform approximation and numerical computation

    Science.gov (United States)

    Beyn, Wolf-Jürgen; Hüls, Thorsten; Schenke, Andre

    2016-11-01

    It is well known that the homoclinic theorem, which conjugates a map near a transversal homoclinic orbit to a Bernoulli subshift, extends from invertible to specific noninvertible dynamical systems. In this paper, we provide a unifying approach that combines such a result with a fully discrete analog of the conjugacy for finite but sufficiently long orbit segments. The underlying idea is to solve appropriate discrete boundary value problems in both cases, and to use the theory of exponential dichotomies to control the errors. This leads to a numerical approach that allows us to compute the conjugacy to any prescribed accuracy. The method is demonstrated for several examples where invertibility of the map fails in different ways.

  18. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    Science.gov (United States)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  19. V.S.O.P. (99/05) Computer Code System : computer code system for reactor physics and fuel cycle simulation

    OpenAIRE

    Scherer, W.; Brockmann, H.; Haas, K. A.; Rütten, H. J.

    2005-01-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The...

  20. Hierarchical Novelty-Familiarity Representation in the Visual System by Modular Predictive Coding.

    Science.gov (United States)

    Vladimirskiy, Boris; Urbanczik, Robert; Senn, Walter

    2015-01-01

    Predictive coding has been previously introduced as a hierarchical coding framework for the visual system. At each level, activity predicted by the higher level is dynamically subtracted from the input, while the difference in activity continuously propagates further. Here we introduce modular predictive coding as a feedforward hierarchy of prediction modules without back-projections from higher to lower levels. Within each level, recurrent dynamics optimally segregates the input into novelty and familiarity components. Although the anatomical feedforward connectivity passes through the novelty-representing neurons, it is nevertheless the familiarity information which is propagated to higher levels. This modularity results in a twofold advantage compared to the original predictive coding scheme: the familiarity-novelty representation forms quickly, and at each level the full representational power is exploited for an optimized readout. As we show, natural images are successfully compressed and can be reconstructed by the familiarity neurons at each level. Missing information on different spatial scales is identified by novelty neurons and complements the familiarity representation. Furthermore, by virtue of the recurrent connectivity within each level, non-classical receptive field properties still emerge. Hence, modular predictive coding is a biologically realistic metaphor for the visual system that dynamically extracts novelty at various scales while propagating the familiarity information.

  1. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding.

    Science.gov (United States)

    Dai, Wenrui; Xiong, Hongkai; Wang, Jia; Zheng, Yuan F

    2014-02-01

    Inherent statistical correlation for context-based prediction and structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model where the optimal correlated prediction for a set of pixels is obtained in the sense of the least code length. It not only exploits the spatial statistical correlations for the optimal prediction directly based on 2D contexts, but also formulates the data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under the joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally to make max-margin estimation for a correlated region. Specifically, it aims to produce multiple predictions in the blocks with the model parameters learned in such a way that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, with the growth of sample size, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most prediction schemes reported.

  2. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M

    2006-07-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation, and second, a neuronal approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.

  3. Compilation of documented computer codes applicable to environmental assessment of radioactivity releases. [Nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F. O.; Miller, C. W.; Shaeffer, D. L.; Garten, Jr., C. T.; Shor, R. W.; Ensminger, J. T.

    1977-04-01

    The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title.

  4. A Machine Learning Perspective on Predictive Coding with PAQ

    CERN Document Server

    Knoll, Byron

    2011-01-01

    PAQ8 is an open source lossless data compression algorithm that currently achieves the best compression rates on many benchmarks. This report presents a detailed description of PAQ8 from a statistical machine learning perspective. It shows that it is possible to understand some of the modules of PAQ8 and use this understanding to improve the method. However, intuitive statistical explanations of the behavior of other modules remain elusive. We hope the description in this report will be a starting point for discussions that will increase our understanding, lead to improvements to PAQ8, and facilitate a transfer of knowledge from PAQ8 to other machine learning methods, such as recurrent neural networks and stochastic memoizers. Finally, the report presents a broad range of new applications of PAQ to machine learning tasks including language modeling and adaptive text prediction, adaptive game playing, classification, and compression using features from the field of deep learning.
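
    A much-simplified sketch of the logistic mixing at the core of PAQ8 (real PAQ8 mixes hundreds of context models and selects weight sets by context; the learning rate and the two stand-in models below are illustrative assumptions):

```python
import math

# Each model outputs a probability that the next bit is 1; probabilities are
# mixed in the logistic domain with weights trained by online gradient ascent.
def stretch(p): return math.log(p / (1 - p))
def squash(x): return 1 / (1 + math.exp(-x))

weights = [0.0, 0.0]
LEARNING_RATE = 0.02  # illustrative value

def mix_and_update(model_probs, actual_bit):
    global weights
    xs = [stretch(p) for p in model_probs]
    p = squash(sum(w * x for w, x in zip(weights, xs)))
    err = actual_bit - p                       # gradient of the coding cost
    weights = [w + LEARNING_RATE * err * x for w, x in zip(weights, xs)]
    return p

for bit in [1, 1, 0, 1, 1, 1, 0, 1]:           # toy bit stream
    p = mix_and_update([0.7, 0.55], bit)       # two stand-in model predictions
```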

  5. From structure prediction to genomic screens for novel non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Jan Gorodkin

    2011-08-01

    Full Text Available Non-coding RNAs (ncRNAs are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs. A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This, together with the growing number of available genomes, has made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure-preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.
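
    As a concrete taste of energy/score-directed folding, here is a minimal Python sketch of the classic Nussinov base-pair maximization recursion over Watson-Crick and GU wobble pairs (illustrative only; production folding tools use full thermodynamic models and comparative information):

        def nussinov_pairs(seq, min_loop=3):
            """Maximum number of nested base pairs (Nussinov DP)."""
            pair = {("A", "U"), ("U", "A"), ("G", "C"),
                    ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i + 1][j]                      # i unpaired
                    for k in range(i + min_loop + 1, j + 1):  # i pairs with k
                        if (seq[i], seq[k]) in pair:
                            left = dp[i + 1][k - 1]
                            right = dp[k + 1][j] if k + 1 <= j else 0
                            best = max(best, 1 + left + right)
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov_pairs("GGGAAAUCC"))  # small toy hairpin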

  6. Prediction Method for Image Coding Quality Based on Differential Information Entropy

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2014-02-01

    Full Text Available To meet the requirements of quality-based image coding, an approach to predicting the quality of image coding based on differential information entropy is proposed. First, some typical prediction approaches are introduced, and then the differential information entropy is reviewed. Taking JPEG2000 as an example, the relationship between differential information entropy and the objective assessment indicator PSNR at a fixed compression ratio is established via data fitting, where the fitting constraint is to minimize the average error. Next, the relationship among differential information entropy, compression ratio and PSNR at various compression ratios is constructed, and this relationship is used as an indicator to predict image coding quality. Finally, the proposed approach is compared with some traditional approaches. The experiments show that differential information entropy has a better linear relationship with image coding quality than the image activity does. We therefore conclude that the proposed approach can predict image coding quality at low compression ratios with small errors, and can be widely applied in a variety of real-time space image coding systems thanks to its simplicity.
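
    A minimal sketch of the fitting idea, assuming grayscale images held as NumPy arrays and a simple histogram-based estimate of the entropy of pixel differences (the paper's exact estimator and the JPEG2000 pipeline are not reproduced here):

        import numpy as np

        def differential_entropy(img, bins=256):
            """Histogram estimate of the entropy of horizontal pixel differences."""
            diff = np.diff(img.astype(float), axis=1).ravel()
            hist, edges = np.histogram(diff, bins=bins, density=True)
            width = edges[1] - edges[0]
            p = hist[hist > 0]
            return -np.sum(p * np.log2(p)) * width

        def fit_entropy_to_psnr(entropies, psnrs):
            """Linear fit PSNR ~ a*entropy + b at a fixed compression ratio,
            minimizing squared error (a stand-in for the paper's fit)."""
            a, b = np.polyfit(entropies, psnrs, 1)
            return a, b

        # usage: a, b = fit_entropy_to_psnr(train_entropies, train_psnrs)
        #        predicted_psnr = a * differential_entropy(new_img) + b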

  7. Computer code selection criteria for flow and transport code(s) to be used in undisturbed vadose zone calculations for TWRS environmental analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mann, F.M.

    1998-01-26

    The Tank Waste Remediation System (TWRS) is responsible for the safe storage, retrieval, and disposal of waste currently being held in 177 underground tanks at the Hanford Site. In order to successfully carry out its mission, TWRS must perform environmental analyses describing the consequences of tank contents leaking from tanks and associated facilities during the storage, retrieval, or closure periods, and of immobilized low-activity tank waste contaminants leaving disposal facilities. Because of the large size of the facilities and the great depth of the dry zone (known as the vadose zone) underneath the facilities, sophisticated computer codes are needed to model the transport of the tank contents or contaminants. This document presents the code selection criteria for those vadose zone analyses (a subset of the above analyses) where the hydraulic properties of the vadose zone are constant in time, the geochemical behavior of the contaminant-soil interaction can be described by simple models, and the geologic or engineered structures are complicated enough to require a two- or three-dimensional model. Thus, simple analyses would not need to use the fairly sophisticated codes which would meet the selection criteria in this document. Similarly, those analyses which involve complex chemical modeling (such as analyses involving large tank leaks or the modeling of contaminant release from glass waste forms) are excluded. The analyses covered here are those where the movement of contaminants can be relatively simply calculated from the moisture flow. These code selection criteria are based on information from the low-level waste programs of the US Department of Energy (DOE) and of the US Nuclear Regulatory Commission, as well as experience gained in the DOE Complex in applying these criteria. Appendix table A-1 provides a comparison between the criteria in these documents and those used here. This document does not define the models (that

  8. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    Energy Technology Data Exchange (ETDEWEB)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants, is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case.
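
    The levelized-cost idea itself is compact enough to state in code. Below is a hedged Python sketch of a generic levelized life-cycle power cost, i.e. present-valued costs divided by present-valued generation; POPCYCLE's actual equations, escalation terms, and tax treatment are not reproduced:

        def levelized_cost(costs, energies, discount_rate):
            """Levelized cost = PV of yearly costs / PV of yearly energy.

            costs    -- list of yearly costs (e.g. $) for years 0..N-1
            energies -- list of yearly net generation (e.g. kWh)
            """
            pv = lambda xs: sum(x / (1 + discount_rate) ** t
                                for t, x in enumerate(xs))
            return pv(costs) / pv(energies)

        # toy usage: 3-year life, construction in year 0, 5% discount rate
        print(levelized_cost([100.0, 50.0, 50.0], [0.0, 1000.0, 1000.0], 0.05))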

  9. Fault-tolerant quantum computation with asymmetric Bacon-Shor codes

    Science.gov (United States)

    Brooks, Peter; Preskill, John

    2013-03-01

    We develop a scheme for fault-tolerant quantum computation based on asymmetric Bacon-Shor codes, which works effectively against highly biased noise dominated by dephasing. We find the optimal Bacon-Shor block size as a function of the noise strength and the noise bias, and estimate the logical error rate and overhead cost achieved by this optimal code. Our fault-tolerant gadgets, based on gate teleportation, are well suited for hardware platforms with geometrically local gates in two dimensions.

  10. HIFI: a computer code for projectile fragmentation accompanied by incomplete fusion

    Energy Technology Data Exchange (ETDEWEB)

    Wu, J.R.

    1980-07-01

    A brief summary of a model proposed to describe projectile fragmentation accompanied by incomplete fusion and the instructions for the use of the computer code HIFI are given. The code HIFI calculates single inclusive spectra, coincident spectra and excitation functions resulting from particle-induced reactions. It is a multipurpose program which can calculate any type of coincident spectra as long as the reaction is assumed to take place in two steps.

  11. SAMDIST: A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Leal, L.C.

    1995-01-01

    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
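
    For flavor, a small Python sketch of the kind of statistics such a code reports: normalized nearest-neighbor level spacings compared against the Wigner surmise P(s) = (π/2)·s·exp(−πs²/4), a standard reference curve for resonance spacing analyses (this is not SAMMY/SAMDIST code):

        import numpy as np

        def normalized_spacings(levels):
            """Nearest-neighbor spacings of sorted resonance energies,
            normalized to unit mean spacing."""
            e = np.sort(np.asarray(levels, dtype=float))
            s = np.diff(e)
            return s / s.mean()

        def wigner_surmise(s):
            """GOE Wigner surmise for the spacing density."""
            return (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)

        # usage: histogram normalized_spacings(levels) and overlay
        # wigner_surmise evaluated at the bin centers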

  13. The development of an intelligent interface to a computational fluid dynamics flow-solver code

    Science.gov (United States)

    Williams, Anthony D.

    1988-01-01

    Researchers at NASA Lewis are currently developing an 'intelligent' interface to aid in the development and use of large, computational fluid dynamics flow-solver codes for studying the internal fluid behavior of aerospace propulsion systems. This paper discusses the requirements, design, and implementation of an intelligent interface to Proteus, a general purpose, three-dimensional, Navier-Stokes flow solver. The interface is called PROTAIS to denote its introduction of artificial intelligence (AI) concepts to the Proteus code.

  14. LEADS-DC: A computer code for intense dc beam nonlinear transport simulation

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    An intense dc beam nonlinear transport code has been developed. The code is written in Visual FORTRAN 6.6 and has ~13000 lines. The particle distribution in the transverse cross section is uniform or Gaussian. The space charge forces are calculated by the PIC (particle-in-cell) scheme, and the effects of the applied fields on the particle motion are calculated with the Lie algebraic method through the third-order approximation, so the solutions to the equations of particle motion are self-consistent. The results of the theoretical analysis have been incorporated into the code, which contains many beam-optics elements. The code can therefore simulate intense dc particle motion in beam transport lines, high-voltage dc accelerators, and ion implanters.

  15. COBRA-SFS (Spent Fuel Storage): A thermal-hydraulic analysis computer code: Volume 2, User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Rector, D.R.; Cuta, J.M.; Lombardo, N.J.; Michener, T.E.; Wheeler, C.L.

    1986-11-01

    COBRA-SFS (Spent Fuel Storage) is a general thermal-hydraulic analysis computer code used to predict temperatures and velocities in a wide variety of systems. The code was refined and specialized for spent fuel storage system analyses for the US Department of Energy's Commercial Spent Fuel Management Program. The finite-volume equations governing mass, momentum, and energy conservation are written for an incompressible, single-phase fluid. The flow equations model a wide range of conditions including natural circulation. The energy equations include the effects of solid and fluid conduction, natural convection, and thermal radiation. The COBRA-SFS code is structured to perform both steady-state and transient calculations; however, the transient capability has not yet been validated. This volume contains the input instructions for COBRA-SFS and an auxiliary radiation exchange factor code, RADX-1. It is intended to aid the user in becoming familiar with the capabilities and modeling conventions of the code.

  16. Passive Nosetip Technology (PANT) Program. Volume XVIII. Nosetip Analyses Using the EROS Computer Code

    Science.gov (United States)

    1975-06-01

    (List-of-figures residue from extraction: heat-transfer distribution comparisons; Run 207 and Run 208 camphor shape change predictions.) The film coefficient approach enables the modeling of heterogeneous reaction and sublimation kinetics and unequal species diffusion coefficients. As an exercise of the shape change numerical procedures in the EROS computer code, two camphor shape change solutions were generated.

  17. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  18. High-throughput, kingdom-wide prediction and annotation of bacterial non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Jonathan Livny

    Full Text Available BACKGROUND: Diverse bacterial genomes encode numerous small non-coding RNAs (sRNAs) that regulate myriad biological processes. While bioinformatic algorithms have proven effective in identifying sRNA-encoding loci, the lack of tools and infrastructure with which to execute these computationally demanding algorithms has limited their utilization. Genome-wide predictions of sRNA-encoding genes have been conducted in less than 3% of all sequenced bacterial strains. This paucity represents a critical gap in current annotations of bacterial genomes and has limited examination of larger issues in sRNA biology, such as sRNA evolution. METHODOLOGY/PRINCIPAL FINDINGS: We have developed and deployed SIPHT, a high-throughput computational tool that utilizes workflow management and distributed computing to conduct kingdom-wide predictions and annotations of intergenic sRNA-encoding genes. Candidate sRNA-encoding loci are identified based on the presence of putative Rho-independent terminators downstream of conserved intergenic sequences, and each locus is annotated for several features, including conservation in other species, association with one of several transcription factor binding sites, and homology to any of over 300 previously identified sRNAs and cis-regulatory RNA elements. Using SIPHT, we conducted searches for putative sRNA-encoding genes in all 932 bacterial replicons in the NCBI database. These searches yielded nearly 60% of previously confirmed sRNAs, hundreds of previously annotated cis-encoded regulatory RNA elements such as riboswitches, and over 45,000 novel candidate intergenic loci. CONCLUSIONS/SIGNIFICANCE: Candidate loci were identified across all branches of the bacterial evolutionary tree, suggesting a central and ubiquitous role for RNA-mediated regulation among bacterial species. Annotation of candidate loci by SIPHT provides clues

  19. ASHMET: a computer code for estimating insolation incident on tilted surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Elkin, R.F.; Toelle, R.G.

    1980-05-01

    A computer code, ASHMET, has been developed by MSFC to estimate the amount of solar insolation incident on the surfaces of solar collectors. Both tracking and fixed-position collectors have been included. Climatological data for 248 US locations are built into the code. This report describes the methodology of the code and its input and output. The basic methodology used by ASHMET is the ASHRAE clear-day insolation relationship, modified by a clearness index derived from SOLMET-measured solar radiation data for a horizontal surface.
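
    A simplified Python sketch of an ASHRAE-style clear-day estimate on a tilted surface follows; the coefficients A, B, and C are illustrative placeholder values, not ASHMET's built-in climatological data, and the clearness-index correction is omitted:

        import math

        def clear_day_insolation(sun_alt_deg, incidence_deg, tilt_deg,
                                 A=1230.0, B=0.142, C=0.058):
            """ASHRAE-style clear-sky irradiance on a tilted surface [W/m^2].

            sun_alt_deg   -- solar altitude above the horizon
            incidence_deg -- angle between sun ray and surface normal
            tilt_deg      -- surface tilt from horizontal
            A, B, C       -- illustrative clear-sky coefficients
            """
            beta = math.radians(sun_alt_deg)
            if beta <= 0:
                return 0.0
            i_dn = A * math.exp(-B / math.sin(beta))        # direct normal
            direct = i_dn * max(0.0, math.cos(math.radians(incidence_deg)))
            diffuse = C * i_dn * (1 + math.cos(math.radians(tilt_deg))) / 2
            return direct + diffuse

        print(clear_day_insolation(45.0, 30.0, 35.0))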

  20. Development, Verification and Use of Gust Modeling in the NASA Computational Fluid Dynamics Code FUN3D

    Science.gov (United States)

    Bartels, Robert E.

    2012-01-01

    This paper presents the implementation of a gust modeling capability in the CFD code FUN3D. The gust capability is verified by computing the response of an airfoil to a sharp-edged gust and comparing with the theoretical result; the present simulations are also compared with other CFD gust simulations. The paper also serves as a user's manual for FUN3D gust analyses using a variety of gust profiles. Finally, the development of an Auto-Regressive Moving-Average (ARMA) reduced-order gust model using a gust with a Gaussian profile in the FUN3D code is presented. ARMA-simulated results for a sequence of one-minus-cosine gusts are shown to compare well with the same gust profile computed with FUN3D. Proper Orthogonal Decomposition (POD) is combined with the ARMA modeling technique to predict the time-varying pressure coefficient increment distribution due to a novel gust profile. The aeroelastic response of a pitch/plunge airfoil to a gust environment is computed with a reduced-order model and compared with a direct simulation of the system in the FUN3D code. The two results are found to agree very well.
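
    As an illustration of the reduced-order idea (not the FUN3D implementation), the sketch below generates a one-minus-cosine gust profile and fits a small ARMA-style linear model, strictly an ARX least-squares fit, to an input/output record; the model orders and names are assumptions:

        import numpy as np

        def one_minus_cosine(t, gust_len, amplitude):
            """Standard 1-cos gust velocity profile on time grid t."""
            w = np.zeros_like(t)
            m = t < gust_len
            w[m] = 0.5 * amplitude * (1 - np.cos(2 * np.pi * t[m] / gust_len))
            return w

        def fit_arx(u, y, na=2, nb=2):
            """Least-squares fit y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
            n = max(na, nb)
            rows = [np.r_[[y[k - i] for i in range(1, na + 1)],
                          [u[k - j] for j in range(1, nb + 1)]]
                    for k in range(n, len(y))]
            theta, *_ = np.linalg.lstsq(np.array(rows), y[n:], rcond=None)
            return theta[:na], theta[na:]   # AR coefficients, input coefficients

        t = np.linspace(0.0, 2.0, 400)
        u = one_minus_cosine(t, gust_len=0.5, amplitude=10.0)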

  1. Sample-adaptive-prediction for HEVC SCC intra coding with ridge estimation from spatially neighboring samples

    Science.gov (United States)

    Kang, Je-Won; Ryu, Soo-Kyung

    2017-02-01

    In this paper, a sample-adaptive prediction technique is proposed to yield efficient coding performance in intra coding for screen content video coding. The sample-based prediction reduces spatial redundancies in neighboring samples. To this aim, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights on the decoder side. The ridge estimation uses an L2-norm regularization term, so the solution is more robust to high-variance samples such as the sharp edges and high color contrasts exhibited in screen content videos. The experimental results demonstrate that the proposed technique provides an improved coding gain compared to the HEVC screen content video coding reference software.
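
    A minimal sketch of the decoder-side ridge estimation described above, assuming each sample is predicted as a weighted combination of already-decoded neighbors collected in the rows of a matrix X (the HEVC SCC integration details are omitted):

        import numpy as np

        def ridge_weights(X, y, lam=1.0):
            """Ridge estimate: argmin_w ||X w - y||^2 + lam*||w||^2."""
            d = X.shape[1]
            return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

        def predict_sample(neighbors, w):
            """Weighted linear combination of neighboring samples."""
            return float(np.dot(neighbors, w))

        # toy usage: 3 causal neighbors per sample, 50 training rows
        X = np.random.rand(50, 3)
        y = X @ np.array([0.5, 0.3, 0.2]) + 0.01 * np.random.randn(50)
        w = ridge_weights(X, y, lam=0.1)
        print(predict_sample(np.array([0.8, 0.7, 0.6]), w))

    The L2 penalty keeps the weights small when the neighborhood contains outlier samples, which is why the estimate stays stable around sharp edges.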

  2. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    Directory of Open Access Journals (Sweden)

    Kumar Parijat Tripathi

    Full Text Available RNA-seq is a new tool to measure RNA transcript counts, using high-throughput sequencing at an extraordinary accuracy. It provides quantitative means to explore the transcriptome of an organism of interest. However, interpreting this extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. This pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on statistical analysis of functional and Gene Ontology (GO) annotation enrichment. It helps users to identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and gene ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (Prediction of transcriptomic ncRNA by ab initio methods) helps to identify the non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is a useful software for both NGS and array data. It helps users to characterize de-novo assembled reads obtained from NGS experiments for non-referenced organisms, and it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and micro-array experiments. It generates easy-to-read tables and interactive charts for better understanding of the data. The pipeline is modular in nature, and provides an opportunity to add new plugins in the future. The web application is freely available.

  4. Solution of 3-dimensional time-dependent viscous flows. Part 2: Development of the computer code

    Science.gov (United States)

    Weinberg, B. C.; Mcdonald, H.

    1980-01-01

    There is considerable interest in developing a numerical scheme for solving the time-dependent viscous compressible three-dimensional flow equations to aid in the design of helicopter rotors. The development of a computer code to solve a three-dimensional unsteady approximate form of the Navier-Stokes equations, employing a linearized block implicit technique in conjunction with a QR operator scheme, is described. Results of calculations of several Cartesian test cases are presented. The computer code can be applied to more complex flow fields such as those encountered on rotating airfoils.

  5. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance.

    Science.gov (United States)

    Mano, Omer; Clark, Damon A

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system's response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data.
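
    A plain NumPy (CPU) reference sketch of spike-triggered covariance, the quantity the GPU module accelerates; this is the generic formulation, not the authors' code:

        import numpy as np

        def spike_triggered_covariance(stimulus, spikes, window):
            """Covariance of the stimulus segments preceding each spike.

            stimulus -- 1-D stimulus time series
            spikes   -- 0/1 spike train aligned with the stimulus
            window   -- number of samples preceding a spike to include
            """
            idx = np.flatnonzero(spikes)
            idx = idx[idx >= window]              # need a full window
            segs = np.stack([stimulus[i - window:i] for i in idx])
            sta = segs.mean(axis=0)               # spike-triggered average
            centered = segs - sta
            return centered.T @ centered / (len(idx) - 1), sta

        # usage: stc, sta = spike_triggered_covariance(stim, spk, window=50);
        # eigenvectors of stc give candidate stimulus filters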

  6. Application of Sγ Model for the Mechanistic Bubble Size Prediction in the Subcooled Boiling Flow with CFD Code

    Energy Technology Data Exchange (ETDEWEB)

    Yun, B. J.; Song, C. H. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Splawski, A.; Lo, S. [CD-adapco, Melville (United States)

    2010-10-15

    Accurate simulation of subcooled boiling flow is essential for the operation and safety of nuclear power plants (NPP). In recent years, the use of computational fluid dynamics (CFD) codes has been extended to the analysis of multi-dimensional two-phase flow in NPPs. Among the applications of CFD codes to NPP analysis, the first target selected was the mechanistic prediction of DNB (Departure from Nucleate Boiling) in PWRs. In DNB-type CHF (Critical Heat Flux), the expected flow regime is bubbly or churn-turbulent flow under high mass flux and high heat flux conditions, so subcooled boiling is one of the key phenomena for the precise prediction of DNB. In this paper, Sγ, a mechanistic transport equation for the bubble parameters, was examined in a CFD code with the objective of enhancing the prediction capability for subcooled boiling flows. The models were applied in the STAR-CD 4.12 software

  7. A proposed methodology for computational fluid dynamics code verification, calibration, and validation

    Science.gov (United States)

    Aeschliman, D. P.; Oberkampf, W. L.; Blottner, F. G.

    Verification, calibration, and validation (VCV) of Computational Fluid Dynamics (CFD) codes is an essential element of the code development process. The exact manner in which code VCV activities are planned and conducted, however, is critically important. It is suggested that the way in which code validation in particular is often conducted, by comparison to published experimental data obtained for other purposes, is in general difficult and unsatisfactory, and that a different approach is required. This paper describes a proposed methodology for CFD code VCV that meets the technical requirements and is philosophically consistent with code development needs. The proposed methodology stresses teamwork and cooperation between code developers and experimentalists throughout the VCV process, and takes advantage of certain synergisms between CFD and experiment. A novel approach to uncertainty analysis is described which can both distinguish between and quantify various types of experimental error, and whose attributes are used to help define an appropriate experimental design for code VCV experiments. The methodology is demonstrated with an example of laminar, hypersonic, near-perfect-gas, three-dimensional flow over a sliced sphere/cone of varying geometrical complexity.

  8. NADAC and MERGE: computer codes for processing neutron activation analysis data

    Energy Technology Data Exchange (ETDEWEB)

    Heft, R.E.; Martin, W.E.

    1977-05-19

    Absolute disintegration rates of specific radioactive products induced by neutron irradiation of a sample are determined by spectrometric analysis of gamma-ray emissions. Nuclide identification and quantification are carried out by a complex computer code GAMANAL (described elsewhere). The output of GAMANAL is processed by NADAC, a computer code that converts the data on observed disintegration rates to data on the elemental composition of the original sample. Computations by NADAC are on an absolute basis in that stored nuclear parameters are used rather than the difference between the observed disintegration rate and the rate obtained by concurrent irradiation of elemental standards. The NADAC code provides for the computation of complex cases, including those involving interrupted irradiations, parent and daughter decay situations where the daughter may also be produced independently, nuclides with very short half-lives compared to the counting interval, and those involving interference by competing neutron-induced reactions. The NADAC output consists of a printed report, which summarizes analytical results, and a card-image file, which can be used as input to another computer code, MERGE. The purpose of MERGE is to combine the results of multiple analyses and produce a single final answer, based on all available information, for each element found.

  9. Establishment the code for prediction of waste volume on NPP decommissioning

    Energy Technology Data Exchange (ETDEWEB)

    Cho, W. H.; Park, S. K.; Choi, Y. D.; Kim, I. S.; Moon, J. K. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    In practice, decommissioning waste volume can be estimated appropriately by finding the differences between prediction and actual operation and considering operational problems or supplementary matters. In nuclear-developed countries such as the U.S. or Japan, decommissioning waste volume is predicted on the basis of experience from their own decommissioning projects. Because of contamination by radioactive material, decontamination activities and the management of radioactive waste must be considered in decommissioning a nuclear facility, unlike an ordinary plant. As decommissioning activities are performed repeatedly, data for similar activities accumulate, and an optimal strategy can be achieved by comparison with the predicted strategy. A variety of decommissioning experience is therefore most important. In Korea, there are no data on the decommissioning of commercial nuclear power plants yet. However, KAERI has accumulated basic decommissioning data for nuclear facilities through the decommissioning of a research reactor (KRR-2) and a uranium conversion plant (UCP), and DECOMMIS (DECOMMissioning Information Management System) was developed to provide and manage the whole data of the decommissioning project. Two codes, the FAC code and the WBS code, were established in this process. The FAC code classifies by decommissioning target of the nuclear facility, and the WBS code classifies by decommissioning activity. The reason two codes were created is that the codes used in DEFACS (Decommissioning Facility Characterization management System) and DEWOCS (Decommissioning Work-unit productivity Calculation System) differ from each other, each classified for its own purpose. DEFACS, which manages the facility, needs a code that categorizes facility characteristics, while DEWOCS, which calculates unit productivity, needs a code that categorizes decommissioning waste volume. KAERI has accumulated decommissioning data of KRR

  10. Improved method for predicting the peak signal-to-noise ratio quality of decoded images in fractal image coding

    Science.gov (United States)

    Wang, Qiang; Bi, Sheng

    2017-01-01

    To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks determine the dynamic range of their collage errors, which in turn provides the minimum and maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, the range from APACEmin to APACEmax becomes small and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that, compared with the previous similar method, the proposed method predicts the PSNR quality of decoded images more accurately while requiring less computation time.

  11. Dual Coding Theory Explains Biphasic Collective Computation in Neural Decision-Making.

    Science.gov (United States)

    Daniels, Bryan C; Flack, Jessica C; Krakauer, David C

    2017-01-01

    A central question in cognitive neuroscience is how unitary, coherent decisions at the whole organism level can arise from the distributed behavior of a large population of neurons with only partially overlapping information. We address this issue by studying neural spiking behavior recorded from a multielectrode array with 169 channels during a visual motion direction discrimination task. It is well known that in this task there are two distinct phases in neural spiking behavior. Here we show Phase I is a distributed or incompressible phase in which uncertainty about the decision is substantially reduced by pooling information from many cells. Phase II is a redundant or compressible phase in which numerous single cells contain all the information present at the population level in Phase I, such that the firing behavior of a single cell is enough to predict the subject's decision. Using an empirically grounded dynamical modeling framework, we show that in Phase I large cell populations with low redundancy produce a slow timescale of information aggregation through critical slowing down near a symmetry-breaking transition. Our model indicates that increasing collective amplification in Phase II leads naturally to a faster timescale of information pooling and consensus formation. Based on our results and others in the literature, we propose that a general feature of collective computation is a "coding duality" in which there are accumulation and consensus formation processes distinguished by different timescales.

  12. Just-in-Time Compilation-Inspired Methodology for Parallelization of Compute Intensive Java Code

    Directory of Open Access Journals (Sweden)

    GHULAM MUSTAFA

    2017-01-01

    Full Text Available Compute-intensive programs generally spend a significant fraction of execution time in a small amount of repetitive code, commonly known as hotspot code. We observed that compute-intensive hotspots often possess exploitable loop-level parallelism. A JIT (Just-in-Time) compiler profiles a running program to identify its hotspots; hotspots are then translated into native code for efficient execution. Using a similar approach, we propose a methodology to identify hotspots and exploit their parallelization potential on multicore systems. The proposed methodology selects and parallelizes each DOALL loop that is either contained in a hotspot method or calls a hotspot method. The methodology could be integrated into the front-end of a JIT compiler to parallelize sequential code just before native translation; however, compilation to native code is out of the scope of this work. As a case study, we analyze eighteen JGF (Java Grande Forum) benchmarks to determine the parallelization potential of hotspots. Eight benchmarks demonstrate a speedup of up to 7.6x on an 8-core system.

  13. Analysis of Void Fraction Distribution and Departure from Nucleate Boiling in Single Subchannel and Bundle Geometries Using Subchannel, System, and Computational Fluid Dynamics Codes

    Directory of Open Access Journals (Sweden)

    Taewan Kim

    2012-01-01

    Full Text Available In order to assess the accuracy and validity of subchannel, system, and computational fluid dynamics codes, the Paul Scherrer Institut has participated in the OECD/NRC PSBT benchmark with the thermal-hydraulic system code TRACE5.0 developed by US NRC, the subchannel code FLICA4 developed by CEA, and the computational fluid dynamic code STAR-CD developed by CD-adapco. The PSBT benchmark consists of a series of void distribution exercises and departure from nucleate boiling exercises. The results reveal that the prediction by the subchannel code FLICA4 agrees with the experimental data reasonably well in both steady-state and transient conditions. The analyses of single-subchannel experiments by means of the computational fluid dynamic code STAR-CD with the CD-adapco boiling model indicate that the prediction of the void fraction has no significant discrepancy from the experiments. The analyses with TRACE point out the necessity to perform additional assessment of the subcooled boiling model and bulk condensation model of TRACE.

  14. PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)

    Science.gov (United States)

    Vincenti, Henri

    2016-03-01

    The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that are out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce the energy cost of data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that can process multiple data with one arithmetic operator in one clock cycle; SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instruction streams on Multiple Data streams (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to take full advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.

  15. Comparison of AMOS computer code wakefield real part impedances with analytic results

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Nelson, S D

    2000-11-30

    We have performed eleven AMOS (Azimuthal Mode Simulator) [1] code runs with a simple, right circular cylindrical accelerating cavity inserted into a circular, cylindrical, lossless beam pipe to calculate the real part of the n = 1 (dipole) transverse wakefield impedance of this structure. We have compared this wakefield impedance in units of ohms/m (Ω/m) over the frequency range 0-1 GHz to analytic predictions from Equation (2.3.8) of Briggs et al. [2]. The results from Equation (2.3.8) were converted from the CGS units of statohms to the MKS units of ohms (Ω) and then multiplied by (2πf)/c = ω/c = 2π/λ, where f is the frequency in Hz, c is the speed of light in vacuum in m/sec, ω is the angular frequency in radians/sec, and λ is the wavelength in m. The dipole transverse wakefield impedance written to file by AMOS must be multiplied by c/ω to convert it from units of Ω/m to units of Ω. The agreement between the AMOS runs and the analytic predictions is excellent for computational grids with square cells (dz = dr) and good for grids with rectangular cells (dz < dr). The quantity dz is the fixed-size axial grid spacing, and dr is the fixed-size radial grid spacing. We have also performed one AMOS run for the same geometry to calculate the real part of the n = 0 (monopole) longitudinal wakefield impedance of this structure. We have compared this wakefield impedance in units of Ω with analytic predictions from Equation (1.4.8) of Briggs et al. [2] converted to the MKS units of Ω. The agreement between the two results is excellent in this case. For the monopole longitudinal wakefield impedance written to file by AMOS, nothing must be done to convert the results to units of Ω. In each case, the computer calculations were carried out to 50 nsec of simulation time.

  16. A Modular Computer Code for Simulating Reactive Multi-Species Transport in 3-Dimensional Groundwater Systems

    Energy Technology Data Exchange (ETDEWEB)

    TP Clement

    1999-06-24

    RT3DV1 (Reactive Transport in 3-Dimensions) is a computer code that solves the coupled partial differential equations that describe reactive flow and transport of multiple mobile and/or immobile species in three-dimensional saturated groundwater systems. RT3D is a generalized multi-species version of the US Environmental Protection Agency (EPA) transport code MT3D (Zheng, 1990). The current version of RT3D uses the advection and dispersion solvers from the DOD-1.5 (1997) version of MT3D. As with MT3D, RT3D also requires the groundwater flow code MODFLOW for computing spatial and temporal variations in the groundwater head distribution. The RT3D code was originally developed to support contaminant transport modeling efforts at natural attenuation demonstration sites. As a research tool, RT3D has also been used to model several laboratory and pilot-scale active bioremediation experiments. The performance of RT3D has been validated by comparing the code results against various numerical and analytical solutions. The code is currently being used to model field-scale natural attenuation at multiple sites. The RT3D code is unique in that it includes an implicit reaction solver that makes the code sufficiently flexible for simulating various types of chemical and microbial reaction kinetics. RT3D V1.0 supports seven pre-programmed reaction modules that can be used to simulate different types of reactive contaminants, including benzene-toluene-xylene mixtures (BTEX) and chlorinated solvents such as tetrachloroethene (PCE) and trichloroethene (TCE). In addition, RT3D has a user-defined reaction option that can be used to simulate any other type of user-specified reactive transport system. This report describes the mathematical details of the RT3D computer code and its input/output data structure. It is assumed that the user is familiar with the basics of groundwater flow and contaminant transport mechanics. In addition, RT3D users are expected to have some experience in
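
    To give a feel for the governing equations such a code solves, here is a hedged 1-D explicit finite-difference toy of single-species advection-dispersion with first-order decay (RT3D itself is 3-D, multi-species, and uses MT3D's solvers; all names and parameter values below are illustrative):

        import numpy as np

        def advect_disperse_decay(c0, v, D, k, dx, dt, steps):
            """March c_t = -v*c_x + D*c_xx - k*c with upwind advection."""
            c = np.asarray(c0, dtype=float).copy()
            # stability (illustrative): v*dt/dx <= 1 and D*dt/dx^2 <= 0.5
            for _ in range(steps):
                adv = -v * (c - np.roll(c, 1)) / dx           # upwind difference
                disp = D * (np.roll(c, -1) - 2*c + np.roll(c, 1)) / dx**2
                c += dt * (adv + disp - k * c)
                c[0], c[-1] = c0[0], 0.0                      # simple boundaries
            return c

        c0 = np.zeros(100); c0[0] = 1.0                       # continuous source
        out = advect_disperse_decay(c0, v=1.0, D=0.1, k=0.01,
                                    dx=1.0, dt=0.5, steps=50)
        print(out.max())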

  17. Physical implementation of a Majorana fermion surface code for fault-tolerant quantum computation

    Science.gov (United States)

    Vijay, Sagar; Fu, Liang

    2016-12-01

    We propose a physical realization of a commuting Hamiltonian of interacting Majorana fermions realizing Z2 topological order, using an array of Josephson-coupled topological superconductor islands. The required multi-body interaction Hamiltonian is naturally generated by a combination of charging-energy-induced quantum phase slips on the superconducting islands and electron tunneling between islands. Our setup improves on a recent proposal for implementing a Majorana fermion surface code (Vijay et al 2015 Phys. Rev. X 5 041038), a 'hybrid' approach to fault-tolerant quantum computation that combines (1) the engineering of a stabilizer Hamiltonian with a topologically ordered ground state with (2) projective stabilizer measurements to implement error correction and a universal set of logical gates. Our hybrid strategy has advantages over the traditional surface code architecture in error suppression and single-step stabilizer measurements, and is widely applicable to implementing stabilizer codes for quantum computation.

  18. Modeling of BWR core meltdown accidents - for application in the MELRPI. MOD2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T

    1985-04-01

    This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

  19. Computational approaches towards understanding human long non-coding RNA biology.

    Science.gov (United States)

    Jalali, Saakshi; Kapoor, Shruti; Sivadas, Ambily; Bhartiya, Deeksha; Scaria, Vinod

    2015-07-15

    Long non-coding RNAs (lncRNAs) form the largest class of non-protein coding genes in the human genome. While a small subset of well-characterized lncRNAs has demonstrated significant roles in diverse biological functions such as chromatin modification, post-transcriptional regulation, and imprinting, the functional significance of the vast majority of them still remains an enigma. Increasing evidence of the implications of lncRNAs in various diseases, including cancer, and in major developmental processes has further enhanced the need to gain mechanistic insights into lncRNA functions. Here, we present a comprehensive review of the various computational approaches and tools available for the identification and annotation of long non-coding RNAs. We also discuss a conceptual roadmap to systematically explore the functional properties of the lncRNAs using computational approaches.

  20. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, L. (ed.)

    1979-01-01

    The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations.

  1. Computer Program Predicts Turbine-Stage Performance

    Science.gov (United States)

    Boyle, Robert J.; Haas, Jeffrey E.; Katsanis, Theodore

    1988-01-01

    MTSBL is an updated version of the flow-analysis programs MERIDL and TSONIC coupled to the boundary-layer program BLAYER. The method uses a quasi-three-dimensional, inviscid, stream-function flow analysis iteratively coupled to calculated losses, so that changes in losses result in changes in flow distribution. In this manner, both the effect of configuration on flow distribution and the effect of flow distribution on losses are taken into account in predicting stage performance. Written in FORTRAN IV.

  2. Predictive access control for distributed computation

    DEFF Research Database (Denmark)

    Yang, Fan; Hankin, Chris; Nielson, Flemming

    2013-01-01

    We show how to use aspect-oriented programming to separate security and trust issues from the logical design of mobile, distributed systems. The main challenge is how to enforce various types of security policies, in particular predictive access control policies: policies based on the future behavior of a program. A novel feature of our approach is that we can define policies concerning secondary use of data.

  3. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  4. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;

    2011-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n / log log n)^2). (2) If d = 3 then w = Θ(n log log n). (3)...

  5. Application of Multiple Description Coding for Adaptive QoS Mechanism for Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ilan Sadeh

    2014-02-01

    Full Text Available Multimedia transmission over cloud infrastructure is a hot research topic worldwide. It is strongly related to video streaming, VoIP, mobile networks, and computer networks. The goal is a reliable integration of telephony, video and audio transmission, computing, and broadband transmission based on cloud computing. One promising approach to pave the way for mobile multimedia and cloud computing is Multiple Description Coding (MDC); that is, TCP/IP and similar protocols would be used for the transmission of text files, and the Multiple Description Coding "Send and Forget" algorithm would be used as the transmission method for multimedia over the cloud. Multiple Description Coding would improve the Quality of Service and would provide a new service of rate-adaptive streaming. This paper presents a new approach for improving the quality of multimedia and other services in the cloud by using Multiple Description Coding (MDC). First, the MDC Send and Forget algorithm is compared with existing protocols such as TCP/IP, UDP, and RTP. Then the achievable rate region for the MDC system is evaluated. Finally, a new subset of Quality of Service that considers blocking in multi-terminal multimedia networks and fidelity losses is considered.
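
    A toy Python sketch of the multiple-description idea: the source is split into two descriptions so that either one alone yields a usable, lower-fidelity reconstruction, while both together restore full quality (real MDC systems use quantizer or transform design rather than this bare even/odd split):

        import numpy as np

        def mdc_encode(x):
            """Split a signal into two descriptions (even/odd samples)."""
            return x[0::2], x[1::2]

        def mdc_decode(d_even=None, d_odd=None):
            """Reconstruct from whichever descriptions arrived."""
            if d_even is not None and d_odd is not None:
                n = len(d_even) + len(d_odd)
                x = np.empty(n)
                x[0::2], x[1::2] = d_even, d_odd
                return x                              # full quality
            d = d_even if d_even is not None else d_odd
            return np.repeat(d, 2)                    # degraded but usable

        x = np.sin(np.linspace(0, np.pi, 16))
        e, o = mdc_encode(x)
        print(np.max(np.abs(mdc_decode(e, o) - x)))   # 0.0 when both arrive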

  6. Modern Teaching Methods in Physics with the Aid of Original Computer Codes and Graphical Representations

    Science.gov (United States)

    Ivanov, Anisoara; Neacsu, Andrei

    2011-01-01

    This study describes the possibility and advantages of utilizing simple computer codes to complement the teaching techniques for high school physics. The authors have begun working on a collection of open source programs which allow students to compare the results and graphics from classroom exercises with the correct solutions and, furthermore, to…

  7. Methods, algorithms and computer codes for calculation of electron-impact excitation parameters

    CERN Document Server

    Bogdanovich, P; Stonys, D

    2015-01-01

    We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize the multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. These versions differ only by their one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...

  8. Computer code to interchange CDS and wave-drag geometry formats

    Science.gov (United States)

    Johnson, V. S.; Turnock, D. L.

    1986-01-01

    A computer program has been developed on the PRIME minicomputer to provide an interface for the passage of aircraft configuration geometry data between the Rockwell Configuration Development System (CDS) and a wireframe geometry format used by aerodynamic design and analysis codes. The interface program allows aircraft geometry which has been developed in CDS to be directly converted to the wireframe geometry format for analysis. Geometry which has been modified in the analysis codes can be transformed back to a CDS geometry file and examined for physical viability. Previously created wireframe geometry files may also be converted into CDS geometry files. The program provides a useful link between a geometry creation and manipulation code and analysis codes by providing rapid and accurate geometry conversion.

  9. TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Bard, F E; Christensen, B Y; Gneiting, B C

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
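
    For context, a compact Python sketch of the Crank-Nicolson (implicit-explicit) scheme named above, applied to 1-D transient conduction on a uniform slab grid; TEMP's axi-symmetric multi-region formulation is more general, and the boundary choices here are assumptions:

        import numpy as np

        def crank_nicolson_step(T, alpha, dx, dt):
            """One Crank-Nicolson step of T_t = alpha*T_xx with fixed-T ends."""
            n = len(T)
            r = alpha * dt / (2 * dx**2)
            A = np.eye(n) * (1 + 2*r)
            B = np.eye(n) * (1 - 2*r)
            for i in range(1, n):
                A[i, i-1] = A[i-1, i] = -r
                B[i, i-1] = B[i-1, i] = r
            A[0, :] = 0; A[0, 0] = 1          # Dirichlet boundary rows
            A[-1, :] = 0; A[-1, -1] = 1
            b = B @ T
            b[0], b[-1] = T[0], T[-1]
            return np.linalg.solve(A, b)

        T = np.full(21, 300.0); T[0] = 600.0   # hot left boundary [K]
        for _ in range(100):
            T = crank_nicolson_step(T, alpha=1e-5, dx=0.01, dt=1.0)

    Averaging the implicit and explicit operators gives second-order accuracy in time while remaining unconditionally stable, which is why the scheme suits stiff transient conduction problems.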

  10. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in grid environments. In the ServiceBSP model, the agents carrying computing tasks are dispatched to the local domain of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service). The performance of a ServiceBSP application can then be predicted, according to the performance prediction model, from the QoS of the selected services. The performance prediction model can help users analyze their applications and improve them by optimizing the factors that affect performance. The experiment shows that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.

  11. A Computational Tool for Helicopter Rotor Noise Prediction Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR project proposes to develop a computational tool for helicopter rotor noise prediction based on hybrid Cartesian grid/gridless approach. The uniqueness of...

  12. NASCRAC - A computer code for fracture mechanics analysis of crack growth

    Science.gov (United States)

    Harris, D. O.; Eason, E. D.; Thomas, J. M.; Bianca, C. J.; Salter, L. D.

    1987-01-01

    NASCRAC - a computer code for fracture mechanics analysis of crack growth - is described in this paper. The need for such a code is increasing as requirements grow for high reliability and low weight in aerospace components. The code is comprehensive and versatile, as well as user friendly. The major purpose of the code is calculation of fatigue, corrosion fatigue, or stress corrosion crack growth, and a variety of crack growth relations can be selected by the user. Additionally, crack retardation models are included. A very wide variety of stress intensity factor solutions are contained in the code, and extensive use is made of influence functions. This allows complex stress gradients in three-dimensional crack problems to be treated easily and economically. In cases where previous stress intensity factor solutions are not adequate, new influence functions can be calculated by the code. Additional features include incorporation of J-integral solutions from the literature and a capability for estimating elastic-plastic stress redistribution from the results of a corresponding elastic analysis. An example problem is presented which shows typical outputs from the code.
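
    Fatigue crack growth calculation of the kind NASCRAC performs reduces, in its simplest form, to integrating a growth law cycle by cycle. The sketch below integrates the Paris relation da/dN = C(ΔK)^m for an edge crack; the geometry factor, units, and stopping criterion are assumptions, and NASCRAC's influence-function machinery for complex stress gradients is omitted.

      import math

      def cycles_to_critical(a0, dsigma, C, m, Y=1.12, a_crit=0.05, max_cycles=10**6):
          # Cycle-by-cycle integration of the Paris law da/dN = C * dK**m,
          # with dK = Y * dsigma * sqrt(pi * a) assumed for an edge crack.
          a = a0
          for n in range(max_cycles):
              dK = Y * dsigma * math.sqrt(math.pi * a)   # stress intensity factor range
              a += C * dK ** m                           # growth in this cycle
              if a >= a_crit:
                  return n + 1, a                        # cycles to reach critical size
          return max_cycles, a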

  13. Bulbar microcircuit model predicts connectivity and roles of interneurons in odor coding.

    Directory of Open Access Journals (Sweden)

    Aditya Gilra

    Full Text Available Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells linearize the input-output transformation of the principal neurons (mitral cells, unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells. Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb.

  14. Bulbar microcircuit model predicts connectivity and roles of interneurons in odor coding.

    Science.gov (United States)

    Gilra, Aditya; Bhalla, Upinder S

    2015-01-01

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb.

  15. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    Science.gov (United States)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, together with a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to the original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than those of the original designs, reaching, as fractions of the design values, 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing the auxiliary heating power and the electron line average density above their design values yields an enhancement of fusion performance. In addition, inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.

  16. An Object-Oriented Computer Code for Aircraft Engine Weight Estimation

    Science.gov (United States)

    Tong, Michael T.; Naylor, Bret A.

    2009-01-01

    Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions produce essentially identical results, as should be the case.

  17. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    Science.gov (United States)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as an auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and low inter-view correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
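
    The core of the compensation step can be pictured in a few lines of Python. The sketch below removes the block-mean depth offset between the current and reference blocks before forming the residual; the rounding, integer depth format, and function names are assumptions, and the paper's object-adaptive signaling is omitted.

      import numpy as np

      def depth_compensated_residual(cur_block, ref_block):
          # Depth blocks of one object often differ between frames by a
          # nearly constant offset; compensating the mean difference
          # shrinks the inter-prediction residual.
          cur = cur_block.astype(np.int32)
          ref = ref_block.astype(np.int32)
          offset = int(round(float(cur.mean() - ref.mean())))
          residual = cur - (ref + offset)
          return residual, offset   # offset is side information for the decoder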

  18. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 2: User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, B. D.; Mueller, C.; Necker, G. A.; Travis, J. R.; Spore, J. W.; Lam, K. L.; Royl, P.; Wilson, T. L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containment and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included. Volume III

  19. Hierarchical prediction and context adaptive coding for lossless color image compression.

    Science.gov (United States)

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform and then Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined and the arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
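
    A toy version of the hierarchical predictor, assuming even rows of a chrominance plane are coded first so that odd-row pixels can also use their lower neighbors: the predictor weights and names below are illustrative assumptions, not the paper's exact filter.

      import numpy as np

      def odd_row_residuals(chroma):
          # Predict odd rows from the already-decoded even rows (upper and
          # lower neighbors) plus the left neighbor; raster-scan coding
          # could not use the lower neighbor. Even rows are assumed to be
          # coded separately, as in the hierarchical scheme.
          c = chroma.astype(np.int32)
          pred = c.copy()
          for y in range(1, c.shape[0] - 1, 2):          # odd rows only
              for x in range(1, c.shape[1]):
                  pred[y, x] = (c[y - 1, x] + c[y + 1, x] + 2 * c[y, x - 1]) // 4
          return c - pred   # residuals, to be entropy-coded per context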

  20. Cumulative Time Series Representation for Code Blue prediction in the Intensive Care Unit.

    Science.gov (United States)

    Salas-Boni, Rebeca; Bai, Yong; Hu, Xiao

    2015-01-01

    Patient monitors in hospitals generate a high number of false alarms that compromise patient care and burden clinicians. In our previous work, we attempted to alleviate this problem by finding combinations of monitor alarms and laboratory tests that were predictive of code blue events, called SuperAlarms. Our current work develops a novel time series representation that accounts for both cumulative effects and temporality, and applies it to code blue prediction in the intensive care unit (ICU). The health status of patients is represented both by a term frequency approach, TF, often used in natural language processing, and by our novel cumulative approach, which we call the "weighted accumulated occurrence representation", or WAOR. These two representations are fed into an L1-regularized logistic regression classifier and used to predict code blue events. Performance was assessed online in an independent set. We report the sensitivity of our algorithm at different time windows prior to the code blue event, as well as the work-up to detection ratio and the proportion of false code blue detections divided by the number of false monitor alarms. We obtained better performance with our cumulative representation, retaining a sensitivity close to our previous work while improving the other metrics.
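
    A minimal reading of the cumulative representation: each past SuperAlarm occurrence contributes a weight that decays with its age, so the feature keeps both cumulative and temporal information, while plain TF would only count occurrences. The exponential decay form, half-life, and names here are assumptions, not the paper's exact weighting.

      import numpy as np

      def waor_feature(event_times_h, t_now_h, half_life_h=6.0):
          # Weighted accumulated occurrence for one SuperAlarm type:
          # older occurrences count less but never vanish abruptly.
          ages = t_now_h - np.asarray(event_times_h, dtype=float)
          return float(np.sum(0.5 ** (ages / half_life_h)))

      # One such feature per SuperAlarm type would then feed an
      # L1-regularized classifier, e.g.
      # sklearn.linear_model.LogisticRegression(penalty="l1", solver="liblinear")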

  1. Verification of the predictive capabilities of the 4C code cryogenic circuit model

    Science.gov (United States)

    Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi

    2014-01-01

    The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented, a-causal model of the cryogenic circuit. In the last couple of years, the code and its different modules have undergone a series of validation exercises against experimental data, including data coming from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis work was done after the experiments had been performed. In this paper, a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To do that, a set of ad hoc experimental scenarios was designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.

  2. Predicting and Classifying User Identification Code System Based on Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In digital fingerprinting, preventing piracy of images by colluders is an important and tedious issue. Each image is embedded with a unique User IDentification (UID) code that serves as the fingerprint for tracking the authorized user. The proposed hiding scheme makes use of a random number generator to scramble two copies of a UID, which are then hidden in randomly selected medium-frequency coefficients of the host image. A linear support vector machine (SVM) is used to train classifications by calculating the normalized correlation (NC) for the 2-class UID codes. The trained classifications are then the models used for identifying unreadable UID codes. Experimental results showed that the success of predicting unreadable UID codes can be increased by applying the SVM. The proposed scheme can be used to protect the intellectual property rights of digital images and to keep track of users to prevent collaborative piracy.
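
    To make the classification step concrete, the sketch below computes a normalized-correlation score between an extracted and a reference UID bit pattern and trains a linear SVM on such scores; the feature layout, sample values, and labels are invented placeholders, not the paper's data.

      import numpy as np
      from sklearn.svm import LinearSVC

      def normalized_correlation(extracted_bits, reference_bits):
          # NC in [-1, 1]: +1 means the extracted UID matches perfectly.
          e = np.where(np.asarray(extracted_bits) > 0, 1.0, -1.0)
          r = np.where(np.asarray(reference_bits) > 0, 1.0, -1.0)
          return float(e @ r) / len(r)

      # 2-class training on NC features: readable (1) vs unreadable (0).
      X = np.array([[0.94], [0.88], [0.31], [0.22]])   # placeholder NC scores
      y = np.array([1, 1, 0, 0])
      clf = LinearSVC().fit(X, y)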

  3. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    Science.gov (United States)

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in the R-D performance due to the side information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 in the Common Test Condition.
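
    The cascade itself is compact enough to sketch: layer one approximates the residual with a few dictionary atoms (orthogonal matching pursuit here), and layer two applies the DCT to whatever is left. The dictionary, the atom budget, and the omission of quantization and rate control are simplifications of the HEVC-based scheme, not a faithful reimplementation.

      import numpy as np
      from scipy.fftpack import dct
      from sklearn.linear_model import orthogonal_mp

      def s_dct_encode(residual_block, dictionary, n_atoms=4):
          # Layer 1: sparse-code structured patterns with trained atoms.
          x = residual_block.ravel().astype(float)
          coef = orthogonal_mp(dictionary, x, n_nonzero_coefs=n_atoms)
          # Layer 2: DCT on the remaining, less structured signal.
          remaining = x - dictionary @ coef
          return coef, dct(remaining, norm="ortho")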

  4. A large-scale evaluation of computational protein function prediction

    NARCIS (Netherlands)

    Radivojac, P.; Clark, W.T.; Oron, T.R.; Schnoes, A.M.; Wittkop, T.; Kourmpetis, Y.A.I.; Dijk, van A.D.J.; Friedberg, I.

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high

  5. Multiple frequencies sequential coding for SSVEP-based brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Yangsong Zhang

    BACKGROUND: Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCI) have become one of the most promising modalities for a practical noninvasive BCI system. Owing both to the limited refresh rate of liquid crystal display (LCD) or cathode ray tube (CRT) monitors, and to the specific physiological response property that only a very small number of stimuli at certain frequencies can evoke strong SSVEPs, the available frequencies for SSVEP stimuli are limited. Therefore, it may not be possible to code enough targets with the traditional frequency coding protocols, which poses a big challenge for the design of a practical SSVEP-based BCI. This study aimed to provide an innovative coding method to tackle this problem. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we present a novel protocol termed multiple frequencies sequential coding (MFSC) for SSVEP-based BCI. In MFSC, multiple frequencies are sequentially used in each cycle to code the targets. To fulfill the sequential coding, each cycle is divided into several coding epochs, and during each epoch, a certain frequency is used. Different frequencies or the same frequency can be presented in the coding epochs, and different epoch sequences correspond to different targets. To show the feasibility of MFSC, we used two frequencies to realize four targets and carried out an offline experiment. The current study shows that: 1) MFSC is feasible and efficient; 2) the performance of an SSVEP-based BCI based on MFSC can be comparable to some existing systems. CONCLUSIONS/SIGNIFICANCE: The proposed protocol could potentially implement many more targets with the limited available frequencies compared with the traditional frequency coding protocol. The efficiency of the new protocol was confirmed by a real data experiment. We propose that an SSVEP-based BCI under MFSC might be a promising choice in the future.
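
    The combinatorics behind MFSC is easy to show: with k usable frequencies and m coding epochs per cycle, up to k^m targets can be coded. The sketch below enumerates the codebook; the example frequencies are arbitrary placeholders, not the study's stimulus values.

      from itertools import product

      def mfsc_codebook(freqs_hz, n_epochs):
          # Every sequence of per-epoch frequencies is one target code:
          # k frequencies and m epochs give k**m distinct targets.
          return list(product(freqs_hz, repeat=n_epochs))

      # Two frequencies, two epochs -> four targets, as in the study:
      # [(10.0, 10.0), (10.0, 12.0), (12.0, 10.0), (12.0, 12.0)]
      print(mfsc_codebook([10.0, 12.0], 2))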

  6. MASTR: multiple alignment and structure prediction of non-coding RNAs using simulated annealing

    DEFF Research Database (Denmark)

    Lindgreen, Stinus; Gardner, Paul P; Krogh, Anders

    2007-01-01

    MOTIVATION: As more non-coding RNAs are discovered, the importance of methods for RNA analysis increases. Since the structure of ncRNA is intimately tied to the function of the molecule, programs for RNA structure prediction are necessary tools in this growing field of research. Furthermore, it i...

  7. Evolutionary Computation Techniques for Predicting Atmospheric Corrosion

    Directory of Open Access Journals (Sweden)

    Amine Marref

    2013-01-01

    Corrosion occurs in many engineering structures, such as bridges, pipelines, and refineries, and leads to the gradual destruction of materials, thus shortening their lifespan. It is therefore crucial to assess the structural integrity of engineering structures that are approaching or exceeding their design lifespan in order to ensure their correct functioning, for example, their load-carrying ability and safety. An understanding of corrosion and an ability to predict the corrosion rate of a material in a particular environment play a vital role in evaluating the residual life of the material. In this paper we investigate the use of genetic programming and genetic algorithms in the derivation of corrosion-rate expressions for steel and zinc. Genetic programming is used to automatically evolve corrosion-rate expressions, while a genetic algorithm is used to evolve the parameters of an already engineered corrosion-rate expression. We show that both evolutionary techniques yield corrosion-rate expressions with good accuracy.
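
    As a sketch of the second technique, the following toy genetic algorithm tunes the parameters (A, n) of the classic power-law corrosion expression C = A·t^n against observed mass-loss data. The power-law form, GA settings, and all names are assumptions, not the authors' exact setup.

      import random

      def fit_power_law(times, losses, pop_size=40, generations=200):
          # Evolve (A, n) in C = A * t**n to minimize squared error.
          def err(ind):
              A, n = ind
              return sum((A * t ** n - c) ** 2 for t, c in zip(times, losses))

          population = [(random.uniform(0.0, 50.0), random.uniform(0.0, 1.5))
                        for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=err)
              parents = population[: pop_size // 2]        # truncation selection
              children = []
              while len(children) < pop_size - len(parents):
                  (a1, n1), (a2, n2) = random.sample(parents, 2)
                  children.append(((a1 + a2) / 2 + random.gauss(0, 0.5),    # crossover
                                   (n1 + n2) / 2 + random.gauss(0, 0.02)))  # + mutation
              population = parents + children
          return min(population, key=err)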

  8. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Science.gov (United States)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high-performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, serve as a secret key shared with her authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image using the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. In the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  9. MOLOCH computer code for molecular-dynamics simulation of processes in condensed matter

    Directory of Open Access Journals (Sweden)

    Derbenev I.V.

    2011-01-01

    Theoretical and experimental investigation into the properties of condensed matter is one of the mainstreams of RFNC-VNIITF scientific activity. The method of molecular dynamics (MD) is an innovative method of theoretical materials science. Modern supercomputers allow the direct simulation of collective effects in multibillion-atom samples, making it possible to model physical processes on the atomistic level, including material response to dynamic load, radiation damage, the influence of defects and alloying additions upon material mechanical properties, and the aging of actinides. During the past ten years, the computer code MOLOCH has been developed at RFNC-VNIITF. It is a parallel code suitable for massively parallel computing. Modern programming techniques were used to make the code almost 100% efficient. Practically all instruments required for modelling were implemented in the code: a potential builder for different materials, simulation of physical processes in arbitrary 3D geometry, and calculated-data processing. A set of tests was developed to analyse the efficiency of the algorithms; it can be used to compare codes with different MD implementations against each other.

  10. Combining Topological Hardware and Topological Software: Color-Code Quantum Computing with Topological Superconductor Networks

    Directory of Open Access Journals (Sweden)

    Daniel Litinski

    2017-09-01

    We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall–superconductor hybrids.

  11. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2016-10-16

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  12. Once-through CANDU reactor models for the ORIGEN2 computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A.G.; Bjerke, M.A.

    1980-11-01

    Reactor physics calculations have led to the development of two CANDU reactor models for the ORIGEN2 computer code. The model CANDUs are based on (1) the existing once-through fuel cycle with feed comprised of natural uranium and (2) a projected slightly enriched (1.2 wt % ²³⁵U) fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models, as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST, are given.

  13. Adaptive Mesh Computations with the PLUTO Code for Astrophysical Fluid Dynamics

    Science.gov (United States)

    Mignone, A.; Zanni, C.

    2012-07-01

    We present an overview of the current version of the PLUTO code for numerical simulations of astrophysical fluid flows over block-structured adaptive grids. The code preserves its modular framework for the solution of the classical or relativistic magnetohydrodynamics (MHD) equations while exploiting the distributed infrastructure of the Chombo library for multidimensional adaptive mesh refinement (AMR) parallel computations. Equations are evolved in time using an explicit second-order, dimensionally unsplit time stepping scheme based on a cell-centered discretization of the flow variables. Efficiency and robustness are shown through multidimensional benchmarks and applications to problems of astrophysical relevance.

  14. Experimental assessment of computer codes used for safety analysis of integral reactors

    Energy Technology Data Exchange (ETDEWEB)

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)

    1995-09-01

    Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and a passive RHRS through in-reactor HXs. These features defined the main trends in the experimental investigations and in the verification efforts for the computer codes applied. The paper briefly reviews the experimental investigations performed on the thermohydraulics of the AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and the results of its verification are given. An assessment of the applicability of RELAP5/mod3 to accident analysis in integral reactors is presented.

  15. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Peloquin, R.A.

    1981-04-01

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and by exposure mode are also printed if requested.

  16. V.S.O.P. (99/05) computer code system

    Energy Technology Data Exchange (ETDEWEB)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Scherer, W.

    2005-11-01

    V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It covers the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to HTRs and to two spatial dimensions. The code can simulate the reactor operation from the initial core towards the equilibrium core. V.S.O.P. (99/05) represents the further development of V.S.O.P. (99). Compared to its precursor, the code system has been improved in many details. Major improvements and extensions have been included concerning the neutron spectrum calculation, the 3-d neutron diffusion options, and the thermal hydraulic section with respect to 'multi-pass'-fuelled pebble-bed cores. This latest code version was developed and tested under the WINDOWS-XP operating system. The storage requirement for the executables and the basic libraries associated with the code amounts to about 15 MB. Another 5 MB are required, if desired, for storage of the source code (approximately 65,000 Fortran statements). (orig.)

  17. Implementing Scientific Simulation Codes Highly Tailored for Vector Architectures Using Custom Configurable Computing Machines

    Science.gov (United States)

    Rutishauser, David

    2006-01-01

    The motivation for this work comes from an observation that amidst the push for Massively Parallel (MP) solutions to high-end computing problems such as numerical physical simulations, large amounts of legacy code exist that are highly optimized for vector supercomputers. Because re-hosting legacy code often requires a complete re-write of the original code, which can be a very long and expensive effort, this work examines the potential to exploit reconfigurable computing machines in place of a vector supercomputer to implement an essentially unmodified legacy source code. Custom and reconfigurable computing resources could be used to emulate an original application's target platform to the extent required to achieve high performance. To arrive at an architecture that delivers the desired performance subject to limited resources involves solving a multi-variable optimization problem with constraints. Prior research in the area of reconfigurable computing has demonstrated that designing an optimum hardware implementation of a given application under hardware resource constraints is an NP-complete problem. The premise of the approach is that the general issue of applying reconfigurable computing resources to the implementation of an application, maximizing the performance of the computation subject to physical resource constraints, can be made a tractable problem by assuming a computational paradigm, such as vector processing. This research contributes a formulation of the problem and a methodology to design a reconfigurable vector processing implementation of a given application that satisfies a performance metric. A generic, parametric, architectural framework for vector processing implemented in reconfigurable logic is developed as a target for a scheduling/mapping algorithm that maps an input computation to a given instance of the architecture. This algorithm is integrated with an optimization framework to arrive at a specification of the architecture parameters

  18. Verification and validation of the simulated radar image (SRIM) code radar cross section predictions

    Science.gov (United States)

    Stanley, Dale A.

    1991-12-01

    The objectives of this study were to verify and validate the Simulated Radar Image (SRIM) code Version 4.0 monostatic radar cross section (RCS) predictions. SRIM uses the theory of physical optics (PO) to predict backscatter for a user-specified aspect angle. Target obscuration and multiple reflections are taken into account by sampling the target with ray tracing. The software verification and validation technique followed in this study entailed comparing the code predictions to closed-form PO equations, other RCS prediction software packages, and measured data. The targets analyzed were a sphere, rectangular flat plate, circular flat plate, solid right circular cylinder, dihedral and trihedral corner reflectors, top hat, cone, prolate spheroid, and a generic missile. SRIM RCS predictions are shown for each target as a function of frequency, aspect angle, and ray density. Also presented is an automation technique that enables the user to run SRIM sequentially over a range of azimuth angles. The FORTRAN code written by the author for the PO equations is also provided.
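
    One of the closed-form checks is simple enough to show: the physical-optics RCS of a rectangular flat plate at normal incidence is σ = 4π(ab)²/λ². The sketch below evaluates it; the plate size and frequency are example values, not the study's test cases.

      import math

      def plate_rcs_normal_dbsm(a_m, b_m, freq_hz):
          # PO closed form at normal incidence: sigma = 4*pi*(a*b)**2 / lambda**2
          lam = 3.0e8 / freq_hz
          sigma = 4.0 * math.pi * (a_m * b_m) ** 2 / lam ** 2
          return 10.0 * math.log10(sigma)            # RCS in dBsm

      print(plate_rcs_normal_dbsm(0.3, 0.3, 10e9))   # 0.3 m square plate at 10 GHz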

  19. Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites

    Science.gov (United States)

    Gravett, Phillip

    1997-01-01

    The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; the Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LERC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF-sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF-sponsored program (prior to this grant). The efforts of this grant were focused on implementing and tailoring the life prediction system, the framework code within it, and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. Definition of the framework code to analysis modules interfaces; 2. Definition of the interface between the materials database and the finite element model; and 3. Definition of the integration of the framework code into an FEM design tool.

  1. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    Science.gov (United States)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the fan noise coupling code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/measured-wake postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  2. Rate-prediction structure complexity analysis for multi-view video coding using hybrid genetic algorithms

    Science.gov (United States)

    Liu, Yebin; Dai, Qionghai; You, Zhixiang; Xu, Wenli

    2007-01-01

    Efficient exploitation of the temporal and inter-view correlation is critical to multi-view video coding (MVC), and the key to it lies in the design of the prediction chain structure according to the various patterns of correlation. In this paper, we propose a novel prediction structure model to design optimal MVC coding schemes, along with an in-depth tradeoff analysis between compression efficiency and prediction structure complexity for certain standard functionalities. Focusing on the representation of the entire set of possible chain structures rather than certain typical ones, the proposed model can give efficient MVC schemes that adaptively vary with the requirements on structure complexity and the video source characteristics (the number of views, the degrees of temporal and inter-view correlation). To handle the large-scale problem in model optimization, we deploy a hybrid genetic algorithm, which yields satisfactory results in the simulations.

  3. Complete Focal Plane Compression Based on CMOS Image Sensor Using Predictive Coding

    Institute of Scientific and Technical Information of China (English)

    Yao Suying; Yu Xiao; Gao Jing; Xu Jiangtao

    2015-01-01

    In this paper, a CMOS image sensor (CIS) is proposed which can accomplish both decorrelation and entropy coding of image compression directly on the focal plane. The design is based on predictive coding for image decorrelation. The predictions are performed in the analog domain by 2×2 pixel units. Both the prediction residuals and the original pixel values are quantized and encoded in parallel. Since the residuals have a peak distribution around zero, the output codewords can be replaced by the valid part of the residuals' binary representation. The compressed bit stream is accessible directly at the output of the CIS without extra processing. Simulation results show that the proposed approach achieves a compression rate of 2.2 and a PSNR of 51 on different test images.

  4. Computational prediction of Pho regulons in cyanobacteria

    Directory of Open Access Journals (Sweden)

    Xu Ying

    2007-06-01

    Background: Phosphorus is an essential element for all life forms. However, it is limiting in most ecological environments where cyanobacteria inhabit. Elucidation of the phosphorus assimilation pathways in cyanobacteria will further our understanding of the physiology and ecology of this important group of microorganisms. However, a systematic study of the Pho regulon, the core of the phosphorus assimilation pathway in a cyanobacterium, is hitherto lacking. Results: We have predicted and analyzed the Pho regulons in 19 sequenced cyanobacterial genomes using a highly effective scanning algorithm that we have previously developed. Our results show that different cyanobacterial species/ecotypes may encode diverse sets of genes responsible for the utilization of various sources of phosphorus, ranging from inorganic phosphate and phosphodiesters to phosphonates. Unlike in E. coli, some cyanobacterial genes that are directly involved in phosphorus assimilation seem not to be under the regulation of the regulator SphR (the orthologue of PhoB in E. coli) in some species/ecotypes. On the other hand, SphR binding sites are found for genes known to play important roles in other biological processes. These genes might serve as bridging points to coordinate phosphorus assimilation with other biological processes. More interestingly, in the three cyanobacterial genomes where no sphR gene is encoded, our results show that there is virtually no functional SphR binding site, suggesting that transcription regulators probably play an important role in retaining their binding sites. Conclusion: The Pho regulons in cyanobacteria are highly diversified to accommodate their respective living environments. The phosphorus assimilation pathways in cyanobacteria are probably tightly coupled to a number of other important biological processes. The loss of a regulator may lead to the rapid loss of its binding sites in a genome.

  5. A comparison of shuttle vernier engine plume contamination with CONTAM 3.4 code predictions

    Science.gov (United States)

    Maag, Carl R.; Jones, Thomas M.; Rao, Shankar M.; Linder, W. Kelly

    1992-01-01

    In 1985, using the CONTAM 3.2 code, it was predicted that the shuttle Primary Reaction Control System (PRCS) and Vernier Reaction Control System (VRCS) engines could be potential contamination sources to sensitive surfaces located within the shuttle payload bay. Spaceflight test data on these engines are quite limited. Shuttle mission STS-32, the Long Duration Exposure Facility retrieval mission, was instrumented with an experiment that provided the design engineer with evidence that contaminant species from the VRCS engines can enter the payload bay. More recently, the latest version of the analysis code, CONTAM 3.4, has been used to re-examine the contamination potential of these engines.

  6. MAXED, a computer code for the deconvolution of multisphere neutron spectrometer data using the maximum entropy method

    Energy Technology Data Exchange (ETDEWEB)

    Reginatto, M.; Goldhagen, P.

    1998-06-01

    The problem of analyzing data from a multisphere neutron spectrometer to infer the energy spectrum of the incident neutrons is discussed. The main features of the code MAXED, a computer program developed to apply the maximum entropy principle to the deconvolution (unfolding) of multisphere neutron spectrometer data, are described, and the use of the code is illustrated with an example. A user's guide for the code MAXED is included in an appendix. The code is available from the authors upon request.
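
    The underlying problem can be sketched in a few lines: find a spectrum f that reproduces the measured sphere counts through the response matrix (R·f ≈ c) while staying close, in the relative-entropy sense, to a default spectrum. MAXED solves the constrained problem exactly via its dual; the crude penalized form below, and all names in it, are our stand-in, not the MAXED algorithm itself.

      import numpy as np
      from scipy.optimize import minimize

      def unfold(R, counts, default):
          # Maximize entropy relative to `default`, subject (softly, here)
          # to reproducing the measured counts with chi-square ~ n_spheres.
          target = float(len(counts))

          def objective(logf):
              f = np.exp(logf)
              entropy = np.sum(f - default - f * np.log(f / default))
              chi2 = np.sum((R @ f - counts) ** 2 / counts)   # counts > 0 assumed
              return -entropy + 10.0 * (chi2 - target) ** 2   # crude penalty

          res = minimize(objective, np.log(default), method="L-BFGS-B")
          return np.exp(res.x)   # unfolded spectrum estimate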

  7. Predicting CYP2C19 catalytic parameters for enantioselective oxidations using artificial neural networks and a chirality code.

    Science.gov (United States)

    Hartman, Jessica H; Cothren, Steven D; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A; Miller, Grover P

    2013-07-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (k(cat), K(m), and k(cat)/K(m)), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (k(cat) and K(m)) were more consistent with experimental values than those for catalytic efficiency (k(cat)/K(m)). Lastly, neural networks predicted correctly enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds.
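
    The normalization-plus-network pipeline described above can be sketched as follows; the descriptor matrix, network size, and data here are random placeholders, not the study's chirality codes or measured parameters.

      import numpy as np
      from scipy.stats import boxcox
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.random((40, 16))                  # stand-in chirality-code descriptors
      kcat = rng.lognormal(mean=1.0, size=40)   # skewed catalytic parameters
      y, lam = boxcox(kcat)                     # normalize the output distribution
      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000).fit(X, y)
      # Map predictions back with the inverse Box-Cox transform (valid for lam != 0):
      pred = np.power(net.predict(X) * lam + 1.0, 1.0 / lam)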

  8. Predicting CYP2C19 Catalytic Parameters for Enantioselective Oxidations Using Artificial Neural Networks and a Chirality Code

    Science.gov (United States)

    Hartman, Jessica H.; Cothren, Steven D.; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A.; Miller, Grover P.

    2013-01-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes especially metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (kcat, Km, and kcat/Km), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (kcat and Km) were more consistent with experimental values than those for catalytic efficiency (kcat/Km). Lastly, neural networks predicted correctly enantioselectivity and comparable catalytic parameters measured in this study for previously uncharacterized CYP2C19 substrates, R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds. PMID:23673224

  9. The MELTSPREAD-1 computer code for the analysis of transient spreading in containments

    Energy Technology Data Exchange (ETDEWEB)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1990-01-01

    Transient spreading of molten core materials is important in the assessment of severe-accident sequences for Mk-I boiling water reactors (BWRs). Of interest is whether core materials are able to spread over the pedestal and drywell floors to contact the containment shell and cause thermally induced shell failure, or whether heat transfer to the underlying concrete and overlying water will freeze the melt short of the shell. The development of a computational capability for the assessment of this problem was initiated by Sienicki et al. in the form of the MELTSPREAD-0 code. Development is continuing in the form of the MELTSPREAD-1 code, which contains new models for phenomena that were ignored in the earlier code. This paper summarizes these new models, provides benchmarking calculations of the relocation model against an analytical solution as well as simulant spreading data, and summarizes the results of a scoping calculation for the full Mk-I system.

  10. Computer code simulations of the formation of Meteor Crater, Arizona - Calculations MC-1 and MC-2

    Science.gov (United States)

    Roddy, D. J.; Schuster, S. H.; Kreyenhagen, K. N.; Orphal, D. L.

    1980-01-01

    It has been widely accepted that hypervelocity impact processes play a major role in the evolution of the terrestrial planets and satellites. In connection with the development of quantitative methods for the description of impact cratering, it was found that the results provided by two-dimensional finite-difference computer codes are greatly improved when initial impact conditions can be defined and when the numerical results can be tested against field and laboratory data. In order to address this problem, a numerical code study of the formation of Meteor (Barringer) Crater, Arizona, has been undertaken. A description is presented of the major results from the first two code calculations, MC-1 and MC-2, that have been completed for Meteor Crater. Both calculations used an iron meteorite with a kinetic energy of 3.8 megatons. Calculation MC-1 had an impact velocity of 25 km/sec and MC-2 had an impact velocity of 15 km/sec.
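
    Since the two calculations share one kinetic energy but differ in velocity, the implied impactor mass follows from m = 2E/v². This back-of-envelope check is ours, not the paper's; it assumes 4.184×10¹⁵ J per megaton of TNT.

      E = 3.8 * 4.184e15                     # 3.8 megatons TNT in joules
      for v_kms in (25.0, 15.0):
          m = 2.0 * E / (v_kms * 1e3) ** 2   # kinetic energy E = m * v**2 / 2
          print(f"v = {v_kms:4.1f} km/s -> m = {m / 1e6:6.1f} thousand tonnes")
      # 25 km/s implies roughly 51 thousand tonnes of iron; 15 km/s, roughly 141.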

  11. WOLF: a computer code package for the calculation of ion beam trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, D.L.

    1985-10-01

    The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram, PISA, forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results.
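
    As an illustration of the field-solve step at the heart of such codes, the sketch below relaxes Poisson's equation on a square grid with one electrode held at a fixed potential. It is a generic Jacobi iteration, not WOLF's actual scheme (WOLF uses a triangular lattice and arbitrary boundaries), and the grid size, voltages, and space charge are assumptions for illustration.

```python
import numpy as np

# Jacobi relaxation of Poisson's equation (del^2 phi = -rho) on a square
# grid, with one boundary edge held at 100 V and the others grounded.
n, h = 50, 1.0 / 49
phi = np.zeros((n, n))
phi[0, :] = 100.0                      # emitter electrode at 100 V
rho = np.zeros((n, n))                 # vacuum: no space charge yet

for _ in range(5000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:] +
                              h * h * rho[1:-1, 1:-1])

# Field components for tracing particles through the region.
Ey, Ex = np.gradient(-phi, h)
print(phi[n // 2, n // 2])             # potential at the grid center
```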

  12. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets and increasing analysis times are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Compute Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
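
    The embarrassingly parallel structure of such model building is easy to sketch. Assuming invented descriptors, labels, and a hyperparameter grid, the example below trains one model per task in separate local processes; on a cloud platform each task would map onto its own instance.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_one(args):
    """Train one classifier for one hyperparameter setting (one cloud task)."""
    C, seed = args
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(1000, 20))                 # hypothetical descriptors
    y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # e.g. an Ames label
    model = LogisticRegression(C=C, max_iter=1000).fit(X, y)
    return C, model.score(X, y)

if __name__ == "__main__":
    tasks = [(C, i) for i, C in enumerate([0.01, 0.1, 1.0, 10.0])]
    with ProcessPoolExecutor() as pool:            # one process per task
        for C, acc in pool.map(train_one, tasks):
            print(f"C = {C}: training accuracy {acc:.3f}")
```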

  13. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    Energy Technology Data Exchange (ETDEWEB)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  14. HYDRA-II: A hydrothermal analysis computer code: Volume 2, User's manual

    Energy Technology Data Exchange (ETDEWEB)

    McCann, R.A.; Lowery, P.S.; Lessor, D.L.

    1987-09-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite-difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum incorporate directional porosities and permeabilities that are available to model solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated methods are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume 1 - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. This volume, Volume 2 - User's Manual, contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a sample problem. The final volume, Volume 3 - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. 6 refs.

  15. Predictive Control of Networked Multiagent Systems via Cloud Computing.

    Science.gov (United States)

    Liu, Guo-Ping

    2017-01-18

    This paper studies the design and analysis of networked multiagent predictive control systems via cloud computing. A cloud predictive control scheme for networked multiagent systems (NMASs) is proposed to achieve consensus and stability simultaneously and to actively compensate for network delays. The design of the cloud predictive controller for NMASs is detailed. The analysis of the cloud predictive control scheme gives necessary and sufficient conditions for the stability and consensus of closed-loop networked multiagent control systems. The proposed scheme is verified to characterize the dynamical behavior and control performance of NMASs through simulations. The outcome provides a foundation for the development of cooperative and coordinated control of NMASs and its applications.
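
    The consensus dynamics at the core of such schemes can be stated compactly. The sketch below shows only the delay-free discrete-time consensus iteration that the predictive controller is designed to preserve under network delays; the four-agent ring topology and step size are assumptions for illustration.

```python
import numpy as np

# Four agents on a ring: each moves toward the average of its neighbors.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
eps = 0.25                                  # step size, below 1/(max degree)

x = np.array([1.0, 5.0, -2.0, 8.0])         # initial agent states
for _ in range(60):
    x = x - eps * (L @ x)                   # x_{k+1} = (I - eps*L) x_k

print(x)   # every state approaches the initial average, 3.0
```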

  16. [Prediction of litter moisture content in Tahe Forestry Bureau of Northeast China based on FWI moisture codes].

    Science.gov (United States)

    Zhang, Heng; Jin, Sen; Di, Xue-Ying

    2014-07-01

    The Canadian fire weather index system (FWI) is the most widely used fire weather index system in the world, and its moisture codes are an important means of predicting fuel moisture. In this paper, litter moisture contents of typical forest types in Tahe Forestry Bureau of Northeast China were observed over successive days, and the relationships between the FWI moisture codes (fine fuel moisture code FFMC, duff moisture code DMC, and drought code DC) and fuel moisture were analyzed. Results showed that the mean absolute error and the mean relative error of models established using the FWI moisture code FFMC were 14.9% and 70.7%, respectively, lower than those of a meteorological-elements regression model, which indicated that the FWI codes have some advantage in predicting litter moisture contents and could be used to predict fuel moisture contents. The advantage was limited, however, and further calibration is still needed, especially modification of the FWI codes after rainfall.
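
    The kind of model fitted in such studies is straightforward to reproduce. The sketch below regresses log moisture content on FFMC using invented paired observations (litter moisture falls roughly exponentially as FFMC rises); the numbers are not the paper's data.

```python
import numpy as np

# Hypothetical paired observations: FFMC vs litter moisture content (%).
ffmc = np.array([60.0, 65.0, 70.0, 75.0, 80.0, 85.0, 88.0, 90.0])
moisture = np.array([95.0, 70.0, 52.0, 38.0, 27.0, 18.0, 14.0, 11.0])

# Fit log(moisture) = a + b * FFMC by least squares.
b, a = np.polyfit(ffmc, np.log(moisture), 1)
pred = np.exp(a + b * ffmc)

mae = np.mean(np.abs(pred - moisture))                     # absolute error
mre = 100.0 * np.mean(np.abs(pred - moisture) / moisture)  # relative error
print(f"MAE = {mae:.1f} percentage points, MRE = {mre:.1f}%")
```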

  17. PSPP: A Protein Structure Prediction Pipeline for Computing Clusters

    Science.gov (United States)

    2009-07-01

    Report documentation form residue omitted. The recoverable fragments describe pipeline components, including PSIPRED (Jones) for secondary structure prediction, Rosetta (Baker) for ab initio folding, and the SCOP/ASTRAL fold database (Chothia), and note that to identify the folds of the models produced by the Rosetta code, the structures of the top few models are compared against ASTRAL PDB-style coordinates.

  18. Development of a Computational Framework on Fluid-Solid Mixture Flow Simulations for the COMPASS Code

    Science.gov (United States)

    Zhang, Shuai; Morita, Koji; Shirakawa, Noriyuki; Yamamoto, Yuichi

    The COMPASS code is designed based on the moving particle semi-implicit method to simulate various complex mesoscale phenomena relevant to core disruptive accidents of sodium-cooled fast reactors. In this study, a computational framework for fluid-solid mixture flow simulations was developed for the COMPASS code. The passively moving solid model was used to simulate hydrodynamic interactions between fluid and solids. Mechanical interactions between solids were modeled by the distinct element method. A multi-time-step algorithm was introduced to couple these two calculations. The proposed computational framework for fluid-solid mixture flow simulations was verified by the comparison between experimental and numerical studies on the water-dam break with multiple solid rods.
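
    The multi-time-step pattern itself is simple to illustrate. In the toy sketch below, a stiff contact force (standing in for the distinct-element calculation) is integrated with a small substep while a soft drag force (standing in for the fluid coupling) is updated once per large step; all constants are invented, and none of this is the COMPASS/MPS physics.

```python
# One bouncing particle of unit mass: stiff penalty-spring contact at the
# floor (DEM-like, small substep) coupled to a drag force (fluid-like,
# held constant over each large step).
dt_fluid, n_sub = 1e-3, 20
dt_dem = dt_fluid / n_sub
g, k_contact, c_drag = -9.81, 1.0e6, 5.0

z, v = 0.5, 0.0                            # height (m) and velocity (m/s)
for _ in range(2000):                      # 2 s of simulated time
    f_drag = -c_drag * v                   # updated once per fluid step
    for _ in range(n_sub):                 # substeps resolve the contact
        f_contact = k_contact * max(0.0, -z)    # spring pushes up below z=0
        v += (g + f_contact + f_drag) * dt_dem  # semi-implicit Euler
        z += v * dt_dem

print(f"final height {z:.5f} m, velocity {v:.5f} m/s")  # at rest near z = 0
```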

  19. Automatic code generation in SPARK: Applications of computer algebra and compiler-compilers

    Energy Technology Data Exchange (ETDEWEB)

    Nataf, J.M.; Winkelmann, F.

    1992-09-01

    We show how computer algebra and compiler-compilers are used for automatic code generation in the Simulation Problem Analysis and Research Kernel (SPARK), an object oriented environment for modeling complex physical systems that can be described by differential-algebraic equations. After a brief overview of SPARK, we describe the use of computer algebra in SPARK's symbolic interface, which generates solution code for equations that are entered in symbolic form. We also describe how the Lex/Yacc compiler-compiler is used to achieve important extensions to the SPARK simulation language, including parametrized macro objects and steady-state resetting of a dynamic simulation. The application of these methods to solving the partial differential equations for two-dimensional heat flow is illustrated.

  1. Computing element evolution towards Exascale and its impact on legacy simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Colin de Verdiere, Guillaume J.L. [CEA, DAM, DIF, Arpajon (France)

    2015-12-15

    In the light of the current race towards the Exascale, this article highlights the main features of the forthcoming computing elements that will be at the core of next-generation supercomputers. The market analysis underlying this work shows that computer architecture is facing a major evolution. It is therefore important to understand the impact of these evolutions on legacy codes and programming methods. The problems of dissipated power and memory access are discussed, leading to a vision of what an exascale system should be. To survive, programming languages will have to respond to the hardware evolution, either by evolving or through the creation of new ones. From these elements, we explain why vectorization, multithreading, data locality awareness, and hybrid programming will be the keys to reaching the exascale, implying that it is time to start rewriting codes. (orig.)

  2. A computer code for three-dimensional incompressible flows using nonorthogonal body-fitted coordinate systems

    Science.gov (United States)

    Chen, Y. S.

    1986-03-01

    In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.

  3. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 and 1972. A significant number of the early code packages were considered obsolete and were removed from the collection in the audit process, and their CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state of the art.

  4. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il [Korea Atomic Energy Res. Inst., Taejon (Korea, Republic of)

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA have been converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in applying PSA techniques to reactor regulation, a strategic study for developing the dynamic risk management tool PEPSI has been performed, and the inspection and test priorities of motor-operated valves have been determined based on risk importance worths. (Author).

  5. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    Science.gov (United States)

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to developments in computer technology, in particular the emergence of multi-core high-performance computers. Parallel computing has therefore become key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using message passing interface (MPI) protocols and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two functions according to their needs. This paper explains the two functions, with their advantages and disadvantages. Some test applications are also provided to show their performance on a typical multi-core high-performance workstation.

  6. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.

  7. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    Science.gov (United States)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block, is shown. Such a method should feature a minimal degree of subjectivity and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. Using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated for the problem of estimating the uncertainty of a parameter in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes, and in some cases its application can reduce the degree of conservatism in expert estimates of the uncertainties in model parameters used in computer codes.

  8. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gal, A.; Hansen, Kristoffer Arnsfelt; Koucky, Michal

    2013-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: 1) if d = 2, then w = Θ(n (lg n/lg lg n)^2); 2) if d = 3, then w...

  9. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    OpenAIRE

    Chia-Chang Hu

    2005-01-01

    A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique resu...

  10. Method for computing self-consistent solution in a gun code

    Science.gov (United States)

    Nelson, Eric M

    2014-09-23

    Complex gun code computations can be made to converge more quickly based on a selection of one or more relaxation parameters. An eigenvalue analysis is applied to error residuals to identify two error eigenvalues that are associated with respective error residuals. Relaxation values can be selected based on these eigenvalues so that error residuals associated with each can be alternately reduced in successive iterations. In some examples, relaxation values that would be unstable if used alone can be used.
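
    The principle of choosing relaxation values from error eigenvalues can be demonstrated on a toy linear fixed-point iteration; the sketch below illustrates the general idea, not the patented method. Each relaxation value w_i = 1/(1 - lambda_i) annihilates the error component along one eigenvector of the iteration matrix, so alternating the two values drives the error to zero even though either value used alone would diverge.

```python
import numpy as np

# Fixed point x = A x + b, iterated as x <- x + w * (A x + b - x).
# The error obeys e <- ((1 - w) I + w A) e, so w = 1/(1 - lam) zeroes the
# error component along the eigenvector of A with eigenvalue lam.
A = np.array([[0.9, 0.0],
              [0.0, -0.5]])                 # two error eigenvalues
b = np.array([1.0, 1.0])
x_true = np.linalg.solve(np.eye(2) - A, b)

lams = np.linalg.eigvals(A).real
ws = 1.0 / (1.0 - lams)                     # one relaxation value per mode

x = np.zeros(2)
for k in range(4):
    w = ws[k % 2]                           # alternate the two values
    x = x + w * (A @ x + b - x)
    print(k, np.linalg.norm(x - x_true))    # exact after two iterations
```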

  11. Tight bounds on computing error-correcting codes by bounded-depth circuits with arbitrary gates

    DEFF Research Database (Denmark)

    Gál, Anna; Hansen, Kristoffer Arnsfelt; Koucký, Michal;

    2012-01-01

    We bound the minimum number w of wires needed to compute any (asymptotically good) error-correcting code C: {0,1}^Ω(n) → {0,1}^n with minimum distance Ω(n), using unbounded fan-in circuits of depth d with arbitrary gates. Our main results are: (1) If d = 2 then w = Θ(n (log n/log log n)^2). (2) If d...

  12. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  13. Presentation of computer code SPIRALI for incompressible, turbulent, plane and spiral grooved cylindrical and face seals

    Science.gov (United States)

    Walowit, Jed A.

    1994-01-01

    A viewgraph presentation is made showing the capabilities of the computer code SPIRALI. Overall capabilities of SPIRALI include: computes rotor dynamic coefficients, flow, and power loss for cylindrical and face seals; treats turbulent, laminar, Couette, and Poiseuille dominated flows; fluid inertia effects are included; rotor dynamic coefficients in three (face) or four (cylindrical) degrees of freedom; includes effects of spiral grooves; user-definable transverse film geometry including circular steps and grooves; independent user-definable friction factor models for rotor and stator; and user-definable loss coefficients for sudden expansions and contractions.

  14. Development of a code system DEURACS for theoretical analysis and prediction of deuteron-induced reactions

    Directory of Open Access Journals (Sweden)

    Nakayama Shinsuke

    2017-01-01

    We have developed an integrated code system dedicated to the theoretical analysis and prediction of deuteron-induced reactions, called the DEUteron-induced Reaction Analysis Code System (DEURACS). DEURACS consists of several calculation codes based on theoretical models describing the respective reaction mechanisms and has been successfully applied to (d,xp) and (d,xn) reactions. In the present work, the analysis of (d,xn) reactions is extended to higher incident energies up to nearly 100 MeV, and DEURACS is also applied to (d,xd) reactions at 80 and 100 MeV. The DEURACS calculations reproduce the experimental double-differential cross sections for the (d,xn) and (d,xd) reactions well.

  15. Development of a code system DEURACS for theoretical analysis and prediction of deuteron-induced reactions

    Science.gov (United States)

    Nakayama, Shinsuke; Kouno, Hiroshi; Watanabe, Yukinobu; Iwamoto, Osamu; Ye, Tao; Ogata, Kazuyuki

    2017-09-01

    We have developed an integrated code system dedicated for theoretical analysis and prediction of deuteron-induced reactions, which is called DEUteron-induced Reaction Analysis Code System (DEURACS). DEURACS consists of several calculation codes based on theoretical models to describe respective reaction mechanisms and it was successfully applied to (d,xp) and (d,xn) reactions. In the present work, the analysis of (d,xn) reactions is extended to higher incident energy up to nearly 100 MeV and also DEURACS is applied to (d,xd) reactions at 80 and 100 MeV. The DEURACS calculations reproduce the experimental double-differential cross sections for the (d,xn) and (d,xd) reactions well.

  16. Multilevel Coding Schemes for Compute-and-Forward with Flexible Decoding

    CERN Document Server

    Hern, Brett

    2011-01-01

    We consider the design of coding schemes for the wireless two-way relaying channel when there is no channel state information at the transmitter. In the spirit of the compute-and-forward paradigm, we present a multilevel coding scheme that permits computation (or, decoding) of a class of functions at the relay. The function to be computed (or, decoded) is then chosen depending on the channel realization. We define such a class of functions which can be decoded at the relay using the proposed coding scheme and derive rates that are universally achievable over a set of channel gains when this class of functions is used at the relay. We develop our framework with general modulation formats in mind, but numerical results are presented for the case where each node transmits using the QPSK constellation. Numerical results with QPSK show that the flexibility afforded by our proposed scheme results in substantially higher rates than those achievable by always using a fixed function or by adapting the function at the ...

  17. Three-dimensional protein structure prediction: Methods and computational strategies.

    Science.gov (United States)

    Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C

    2014-10-12

    A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as solutions to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining, and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal in this work is to review the methods and computational strategies that are currently used in 3-D protein structure prediction.

  18. Design and code coupling assessment based on defects prediction. Part 1

    Directory of Open Access Journals (Sweden)

    Arwa Abu Asad

    2013-10-01

    The article discusses the application of code metrics in object-oriented software design. Code metrics provide an additional way to avoid errors beyond the obvious practices of thorough requirements, design, programming, testing, and customer feedback: software metrics collect values and measurements from the software and predict possible current or future problems. This paper includes the development, analysis, and evaluation of several software code metrics, and investigates how coupling metrics can be utilized as early indicators of fault proneness. A tool is developed to parse through code projects and automatically collect those metrics. A case study of the Scarab project is selected to evaluate the ability of coupling metrics to predict fault proneness. Results showed that the evaluated metrics vary in their ability to judge software design and fault proneness, and that CBO, RFC, MPC, and ICP correlate more strongly with reported bugs than the other collected and evaluated coupling metrics.
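
    A rough illustration of how such a collection tool works: the sketch below walks a Python syntax tree and counts calls made on objects other than self, a crude analogue (in spirit) of message passing coupling. Real CBO/RFC/MPC/ICP definitions are richer, and the toy class is invented.

```python
import ast

SOURCE = """
class Order:
    def total(self, pricing, cart):
        base = sum(pricing.price(i) for i in cart.items())
        return pricing.with_tax(base)
"""

tree = ast.parse(SOURCE)
outgoing = 0
for node in ast.walk(tree):
    # Count method calls whose receiver is not `self`.
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        target = node.func.value
        if not (isinstance(target, ast.Name) and target.id == "self"):
            outgoing += 1

print("calls on non-self objects:", outgoing)   # 3
```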

  19. PSPP: a protein structure prediction pipeline for computing clusters.

    Directory of Open Access Journals (Sweden)

    Michael S Lee

    BACKGROUND: Protein structures are critical for understanding the mechanisms of biological systems and, subsequently, for drug and vaccine design. Unfortunately, protein sequence data exceed structural data by a factor of more than 200 to 1. This gap can be partially filled by using computational protein structure prediction. While structure prediction Web servers are a notable option, they often restrict the number of sequence queries and/or provide a limited set of prediction methodologies. Therefore, we present a standalone protein structure prediction software package suitable for high-throughput structural genomic applications that performs all three classes of prediction methodologies: comparative modeling, fold recognition, and ab initio. This software can be deployed on a user's own high-performance computing cluster. METHODOLOGY/PRINCIPAL FINDINGS: The pipeline consists of a Perl core that integrates more than 20 individual software packages and databases, most of which are freely available from other research laboratories. The query protein sequences are first divided into domains either by domain boundary recognition or Bayesian statistics. The structures of the individual domains are then predicted using template-based modeling or ab initio modeling. The predicted models are scored with a statistical potential and an all-atom force field. The top-scoring ab initio models are annotated by structural comparison against the Structural Classification of Proteins (SCOP) fold database. Furthermore, secondary structure, solvent accessibility, transmembrane helices, and structural disorder are predicted. The results are generated in text, tab-delimited, and hypertext markup language (HTML) formats. So far, the pipeline has been used to study viral and bacterial proteomes. CONCLUSIONS: The standalone pipeline that we introduce here, unlike protein structure prediction Web servers, allows users to devote their own computing assets to process a

  20. Development and verification test of integral reactor major components - Development of MCP impeller design, performance prediction code and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung Kyoon; Oh, Woo Hyoung; Song, Jae Wook [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-03-01

    The present study is aimed at developing a computational code for the design and performance prediction of an axial-flow pump. The proposed performance prediction method, based on the streamline curvature method, is tested against measurements on a model axial-flow pump. The preliminary design is made by using the ideal velocity triangles at inlet and exit, and the three-dimensional blade shape is calculated by employing the free vortex design method. The detailed blading design is then carried out by using an experimental database of double circular arc cambered hydrofoils. To computationally determine the design incidence, deviation, blade camber, solidity, and stagger angle, a number of correlation equations are developed from the experimental database and a theoretical formula for the lift coefficient is adopted. A total of 8 equations are solved iteratively using an under-relaxation factor. An experimental measurement is conducted under a non-cavitating condition to obtain the off-design performance curve, and a cavitation test is also carried out by reducing the suction pressure. The experimental results compare very satisfactorily with the predictions of the streamline curvature method. 28 refs., 26 figs., 11 tabs. (Author)
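
    The iterative scheme for the coupled design equations can be sketched generically. Below, a fixed point x = G(x) is solved with an under-relaxation factor; the two toy equations stand in for the eight pump correlations, which are not reproduced here.

```python
import numpy as np

def G(x):
    """Stand-in for the coupled design correlations (not the pump equations)."""
    a, b = x
    return np.array([np.cos(b), 0.5 * np.sin(a) + 1.0])

x = np.array([0.0, 0.0])
alpha = 0.6                       # under-relaxation factor, 0 < alpha <= 1
for k in range(200):
    r = G(x) - x                  # residual of the coupled system
    if np.max(np.abs(r)) < 1e-12:
        break
    x = x + alpha * r             # damped update improves robustness

print(k, x)                       # converged solution of the toy system
```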

  1. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    Science.gov (United States)

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may further extend our knowledge on the functional and neural mechanisms of depressive disorder and development of novel antidepressant treatments strategies targeting the cerebellum.

  2. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, J.J. [Oak Ridge National Lab., TN (United States)

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  3. Computer Prediction of Air Quality in Livestock Buildings

    DEFF Research Database (Denmark)

    Svidt, Kjeld; Bjerg, Bjarne

    In modern livestock buildings the design of ventilation systems is important in order to obtain good air quality. The use of Computational Fluid Dynamics for predicting the air distribution makes it possible to include the effects of room geometry and heat sources in the design process. This paper presents numerical predictions of air flow in a livestock building compared with laboratory measurements. An example of the calculation of contaminant distribution is given, and the future possibilities of the method are discussed.

  4. A Survey of Computational Intelligence Techniques in Protein Function Prediction

    OpenAIRE

    Arvind Kumar Tiwari; Rajeev Srivastava

    2014-01-01

    In recent years, there has been massive growth in knowledge of proteins of unknown function with the advancement of high-throughput microarray technologies. Protein function prediction is among the most challenging problems in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational int...

  5. A survey of computational intelligence techniques in protein function prediction.

    Science.gov (United States)

    Tiwari, Arvind Kumar; Srivastava, Rajeev

    2014-01-01

    In recent years, there has been massive growth in knowledge of proteins of unknown function with the advancement of high-throughput microarray technologies. Protein function prediction is among the most challenging problems in bioinformatics. In the past, homology-based approaches were used to predict protein function, but they fail when a new protein is dissimilar to previously characterized ones. Therefore, to alleviate the problems associated with traditional homology-based approaches, numerous computational intelligence techniques have been proposed in the recent past. This paper presents a state-of-the-art comprehensive review of various computational intelligence techniques for protein function prediction using sequence, structure, protein-protein interaction network, and gene expression data, across a wide range of applications such as prediction of DNA and RNA binding sites, subcellular localization, enzyme functions, signal peptides, catalytic residues, nuclear/G-protein coupled receptors, membrane proteins, and pathway analysis from gene expression datasets. The paper also summarizes results obtained by many researchers solving these problems using computational intelligence techniques with appropriate datasets to improve prediction performance. The summary shows that ensemble classifiers and the integration of multiple heterogeneous data are useful for protein function prediction.

  6. Multiphase integral reacting flow computer code (ICOMFLO): User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.L.; Lottes, S.A.; Petrick, M.

    1997-11-01

    A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two-parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel, replacing an Arrhenius-type differential reaction submodel, has been implemented to improve numerical convergence and enhance numerical stability. A two-parameter turbulence submodel is modified for both gas and solid phases. An evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors, air

  7. Special Issue: Big data and predictive computational modeling

    Science.gov (United States)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  8. Advances and Computational Tools towards Predictable Design in Biological Engineering

    Directory of Open Access Journals (Sweden)

    Lorenzo Pasotti

    2014-01-01

    The design process of complex systems in all fields of engineering requires a set of quantitatively characterized components and a method to predict the output of systems composed of such elements. This strategy relies on the modularity of the components used, or on the prediction of their context-dependent behaviour when part functioning depends on the specific context. Mathematical models usually support the whole process by guiding the selection of parts and by predicting the output of interconnected systems. Such a bottom-up design process cannot be trivially adopted for biological systems engineering, since part function is hard to predict when components are reused in different contexts. This issue and the intrinsic complexity of living systems limit the capability of synthetic biologists to predict the quantitative behaviour of biological systems. The high potential of synthetic biology strongly depends on the capability of mastering this issue. This review discusses the predictability issues of basic biological parts (promoters, ribosome binding sites, coding sequences, transcriptional terminators, and plasmids) when used to engineer simple and complex gene expression systems in Escherichia coli. A comparison between bottom-up and trial-and-error approaches is performed for all the discussed elements, and mathematical models supporting the prediction of part behaviour are illustrated.

  9. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding

    Science.gov (United States)

    Agarwal, Sapan; Quach, Tu-Thach; Parekh, Ojas; Hsia, Alexander H.; DeBenedictis, Erik P.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning. PMID:26778946
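
    In conventional linear algebra, the two kernels are a vector-matrix multiply (the parallel read) and a rank-1 outer-product update (the parallel write). The numpy sketch below states them explicitly; the array size and random values are placeholders.

```python
import numpy as np

N = 512
G = np.random.uniform(0.0, 1.0, size=(N, N))   # crossbar conductances

# Parallel read: drive row voltages, sum currents per column (Ohm's law
# plus Kirchhoff's current law performs the multiply-accumulate in analog).
v_in = np.random.uniform(-1.0, 1.0, size=N)
i_out = v_in @ G

# Parallel write: a rank-1 update, e.g. one outer-product learning step.
dr = np.random.uniform(-0.1, 0.1, size=N)
dc = np.random.uniform(0.0, 1.0, size=N)
G += np.outer(dr, dc)
print(i_out.shape, G.shape)
```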

  10. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan eAgarwal

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N×N crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  11. Error threshold in topological quantum-computing models with color codes

    Science.gov (United States)

    Katzgraber, Helmut; Bombin, Hector; Martin-Delgado, Miguel A.

    2009-03-01

    Dealing with errors in quantum computing systems is possibly one of the hardest tasks when attempting to realize physical devices. By encoding the qubits in topological properties of a system, an inherent protection of the quantum states can be achieved. Traditional topologically-protected approaches are based on the braiding of quasiparticles. Recently, a braid-less implementation using brane-net condensates in 3-colexes has been proposed. In 2D it allows the transversal implementation of the whole Clifford group of quantum gates. In this work, we compute the error threshold for this topologically-protected quantum computing system in 2D, by means of mapping its error correction process onto a random 3-body Ising model on a triangular lattice. Errors manifest themselves as random perturbation of the plaquette interaction terms thus introducing frustration. Our results from Monte Carlo simulations suggest that these topological color codes are similarly robust to perturbations as the toric codes. Furthermore, they provide more computational capabilities and the possibility of having more qubits encoded in the quantum memory.

  12. Computer code system for the R and D of nuclear fuel cycle with fast reactor. 5. Development and application of reactor analysis code system

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Kenji; Hazama, Taira; Chiba, Go; Ohki, Shigeo; Ishikawa, Makoto [Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan). Oarai Engineering Center

    2002-12-01

    In the core design of fast reactors (FRs), it is very important to improve the prediction accuracy of the nuclear characteristics for both reducing cost and ensuring reliability of FR plants. A nuclear reactor analysis code system for FRs has been developed by the Japan Nuclear Cycle Development Institute (JNC). This paper describes the outline of the calculation models and methods in the system consisting of several analysis codes, such as the cell calculation code CASUP, the core calculation code TRITAC and the sensitivity analysis code SAGEP. Some examples of verification results and improvement of the design accuracy are also introduced based on the measurement data from critical assemblies, e.g., the JUPITER experiment (USA/Japan), FCA (Japan), MASURCA (France), and BFS (Russia). Furthermore, application fields and future plans, such as the development of new generation nuclear constants and applications to MA·FP transmutation, are described. (author)

  13. V.S.O.P. (99/09) Computer Code System for Reactor Physics and Fuel Cycle Simulation; Version 2009

    OpenAIRE

    Rütten, H.-J.; Haas, K. A.; Brockmann, H.; Ohlig, U.; Pohl, C.; Scherer, W.

    2010-01-01

    V.S.O.P. (99/09) represents the further development of V.S.O.P. (99/05). Compared to its precursor, the code system has again been improved in many details. The main motivation for this new code version was to update the basic nuclear libraries used by the code system. Thus, all cross section libraries involved in the code have now been based on ENDF/B-VII. V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It implies the setup of...

  14. Automatic Generation of OpenMP Directives and Its Application to Computational Fluid Dynamics Codes

    Science.gov (United States)

    Yan, Jerry; Jin, Haoqiang; Frumkin, Michael; Yan, Jerry (Technical Monitor)

    2000-01-01

    The shared-memory programming model is a very effective way to achieve parallelism on shared-memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs using compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate OpenMP-based parallel programs with nominal user assistance. We outline the techniques used in the implementation of the tool and discuss its application to the NAS Parallel Benchmarks and several computational fluid dynamics codes. This work demonstrates the great potential of using the tool to quickly port parallel programs and also achieve good performance that exceeds that of some commercial tools.

  15. A low computation cost method for seizure prediction.

    Science.gov (United States)

    Zhang, Yanli; Zhou, Weidong; Yuan, Qi; Wu, Qi

    2014-10-01

    The dynamic changes of electroencephalograph (EEG) signals in the period prior to epileptic seizures play a major role in seizure prediction. This paper proposes a low-computation seizure prediction algorithm that combines a fractal dimension with a machine learning algorithm. The presented algorithm extracts the Higuchi fractal dimension (HFD) of EEG signals as a feature to classify the patient's preictal or interictal state, with Bayesian linear discriminant analysis (BLDA) as the classifier. The outputs of BLDA are smoothed by a Kalman filter to reduce possible sporadic and isolated false alarms, and the final prediction results are produced using a thresholding procedure. The algorithm was evaluated on the intracranial EEG recordings of 21 patients in the Freiburg EEG database. For seizure occurrence periods of 30 min and 50 min, our algorithm obtained an average sensitivity of 86.95% and 89.33%, an average false prediction rate of 0.20/h, and an average prediction time of 24.47 min and 39.39 min, respectively. The results confirm that changes in HFD can serve as a precursor of ictal activities and be used to distinguish between interictal and preictal epochs. Both the HFD and the BLDA classifier have low computational complexity. All of these make the proposed algorithm suitable for real-time seizure prediction.
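
    Higuchi's fractal dimension is cheap to compute, which underlies the low-cost claim. Below is a standard implementation (a common formulation, not necessarily the authors' exact code); k_max is a tuning choice, and the test signals are synthetic.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                      # k subsampled curves
            idx = np.arange(m, n, k)
            diffs = np.abs(np.diff(x[idx]))
            # Normalized curve length for this offset and lag k.
            lengths.append(diffs.sum() * (n - 1) / (len(diffs) * k * k))
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)  # L(k) ~ k^(-HFD)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=1024)))                      # noise: ~2
print(higuchi_fd(np.sin(np.linspace(0.0, 8 * np.pi, 1024))))  # smooth: ~1
```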

  16. On the Computational Complexity of Sphere Decoder for Lattice Space-Time Coded MIMO Channel

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the multi-input multi-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Specifically, we derive the asymptotic tail distribution of the decoder's computational complexity in the high signal-to-noise ratio (SNR) regime. For the uncoded M×N MIMO channel (e.g., V-BLAST), the analysis in [6] revealed that the tail distribution of such a decoder is of a Pareto type with tail exponent equivalent to N−M+1. In our analysis, we show that the tail exponent of the sphere decoder's complexity distribution is equivalent to the diversity-multiplexing tradeoff achieved by LAST coding and lattice decoding schemes. This leads to extending the channel's tradeoff to include decoding complexity. Moreover, we show analytically how minimum-mean square-error decisio...

  17. Development of human reliability analysis methodology and its computer code during low power/shutdown operation

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)

    1997-07-01

    The objective of this study is to develop an appropriate procedure for evaluating human error during LP/S (low power/shutdown) operation and a computer code that calculates human error probabilities (HEPs) within this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions from a review of existing studies. This procedure is applied to evaluate the LOOP (Loss of Off-site Power) sequence, and the HEPs obtained using SEPLOT are used for quantitative evaluation of the core uncovery frequency. In this evaluation, DYLAM-3, a dynamic reliability computer code with advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study provides a basis and framework for human error evaluation, and makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. It is expected that the results of this study will contribute to improved safety in LP/S and reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)

  18. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    This year is the third year of the government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for an overdraining event during RCS low water inventory (i.e., mid-loop) operation. Human reliability analysis and thermal-hydraulic support analysis are identified as necessary for reducing uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is the use of the containment spray system as a backup to the shutdown cooling system, and the other is the installation of two independent level indication systems. Procedure change is identified as the preferable option over hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).

  19. A COMPARISON BETWEEN THREE PREDICTIVE MODELS OF COMPUTATIONAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    DUMITRU CIOBANU

    2013-12-01

    Time series prediction is an open problem, and many researchers are trying to find new predictive methods and improvements for existing ones. Lately, methods based on neural networks have been used extensively for time series prediction. Support vector machines have also solved some of the problems faced by neural networks and have begun to be widely used for time series prediction. The main drawback of those two methods is that they are global models, and in the case of a chaotic time series it is unlikely that such a model can be found. In this paper a comparison is presented between three predictive models from the computational intelligence field: one based on neural networks, one based on support vector machines, and another based on chaos theory. We show that the model based on chaos theory is an alternative to the other two methods.
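
    As a hedged sketch of one of the three model families compared (support vector machines), one-step-ahead prediction can be framed as regression on lagged values; the logistic-map series, embedding depth and SVR parameters below are our own assumptions, not the paper's:

        import numpy as np
        from sklearn.svm import SVR

        # Logistic map as a stand-in chaotic series.
        x = np.empty(500)
        x[0] = 0.4
        for t in range(499):
            x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

        # Embed: predict x[t] from the d previous values.
        d = 5
        X = np.array([x[t - d:t] for t in range(d, len(x))])
        y = x[d:]

        split = 400
        model = SVR(kernel="rbf", C=10.0, gamma=1.0).fit(X[:split], y[:split])
        pred = model.predict(X[split:])
        print("one-step test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))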

  20. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    1983-02-01

    The CRAC2 computer code is a revision of CRAC, the Calculation of Reactor Accident Consequences computer code developed for the Reactor Safety Study. CRAC2 incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  1. A fully parallel, high precision, N-body code running on hybrid computing platforms

    CERN Document Server

    Capuzzo-Dolcetta, R; Punzo, D

    2012-01-01

    We present a new implementation of the numerical integration of the classical, gravitational N-body problem based on a high-order Hermite integration scheme with block time steps and direct evaluation of the particle-particle forces. The main innovation of this code (called HiGPUs) is its full parallelization, exploiting both OpenMP and MPI on multicore Central Processing Units as well as either Compute Unified Device Architecture (CUDA) or OpenCL on the hosted Graphics Processing Units. We tested both the performance and the accuracy of the code using up to 256 GPUs in the supercomputer IBM iDataPlex DX360M3 Linux Infiniband Cluster provided by the Italian supercomputing consortium CINECA, for values of N up to 8 million. We were able to follow the evolution of a system of 8 million bodies for a few crossing times, a task previously unreached by direct summation codes. The code is freely available to the scientific community.
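
    HiGPUs itself is CUDA/OpenCL code; as a minimal, language-neutral illustration of the O(N^2) direct particle-particle force evaluation at the heart of such codes, here is a vectorized NumPy sketch (the softening value and units are arbitrary assumptions):

        import numpy as np

        def accelerations(pos, mass, eps=1e-3):
            """Direct-summation gravitational accelerations (G = 1).

            pos: (N, 3) positions; mass: (N,) masses; eps: Plummer softening.
            """
            dx = pos[None, :, :] - pos[:, None, :]      # pairwise separations
            r2 = (dx ** 2).sum(-1) + eps ** 2           # softened squared distance
            inv_r3 = r2 ** -1.5
            np.fill_diagonal(inv_r3, 0.0)               # exclude self-interaction
            return (dx * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

        rng = np.random.default_rng(1)
        pos = rng.standard_normal((256, 3))
        print(accelerations(pos, np.full(256, 1.0 / 256)).shape)   # (256, 3)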

  2. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA-funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With funding only in place at the end of May 2000, the study is still at an early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end product will be a Monte Carlo-based code that will complement the existing analytic codes, such as BRYNTRN/HZETRN, presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  3. The Proteus Navier-Stokes code. [two and three dimensional computational fluid dynamics

    Science.gov (United States)

    Towne, Charles E.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. Proteus solves the Reynolds-averaged, unsteady, compressible Navier-Stokes equations in strong conservation law form. Turbulence is modeled using a Baldwin-Lomax based algebraic eddy viscosity model. In addition, options are available to solve the thin-layer or Euler equations, and to eliminate the energy equation by assuming constant stagnation enthalpy. An extensive series of validation cases has been run, primarily using the two-dimensional planar/axisymmetric version of the code. Several flows were computed that have exact solutions, such as fully developed channel and pipe flow; Couette flow with and without pressure gradients; unsteady Couette flow formation; flow near a suddenly accelerated flat plate; flow between concentric rotating cylinders; and flow near a rotating disk. The two-dimensional version of the Proteus code has been released, and the three-dimensional code is scheduled for release in late 1991.

  4. Interpreting functional effects of coding variants: challenges in proteome-scale prediction, annotation and assessment.

    Science.gov (United States)

    Shameer, Khader; Tripathi, Lokesh P; Kalari, Krishna R; Dudley, Joel T; Sowdhamini, Ramanathan

    2016-09-01

    Accurate assessment of genetic variation in human DNA sequencing studies remains a nontrivial challenge in clinical genomics and genome informatics. Ascribing functional roles and/or clinical significance to single nucleotide variants identified from a next-generation sequencing study is an important step in genome interpretation. Experimental characterization of all the observed functional variants is as yet impractical; thus, the prediction of functional and/or regulatory impacts of the various mutations using in silico approaches is an important step toward the identification of functionally significant or clinically actionable variants. The relationships between genotypes and the expressed phenotypes are multilayered and biologically complex; such relationships present numerous challenges and at the same time offer various opportunities for the design of in silico variant assessment strategies. Over the past decade, many bioinformatics algorithms have been developed to predict the functional consequences of single nucleotide variants in protein-coding regions. In this review, we provide an overview of the bioinformatics resources for the prediction, annotation and visualization of coding single nucleotide variants. We discuss the currently available approaches and the major challenges, from the perspective of protein sequence, structure, function and interactions, that require consideration when interpreting the impact of putatively functional variants. We also discuss the relevance of incorporating integrated workflows for predicting the biomedical impact of the functionally important variations encoded in a genome, exome or transcriptome. Finally, we propose a framework to classify variant assessment approaches and strategies for the incorporation of variant assessment within electronic health records.

  5. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

    In this paper, the characteristics of a parallel algorithm are presented for solving an elliptic-type equation of CUPID via a domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, i.e., the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of matrix shape is evaluated. The effect of the preconditioner for the matrix solver is also investigated, and the hybrid (OpenMP+MPI) parallel algorithm for the pressure solver is introduced and discussed in detail. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to meet a recent demand for long-transient and highly resolved multiphase flow simulation. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with a domain decomposition method, and the MPI library was adopted to communicate information between neighboring domains. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which becomes asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effects of matrix size and preconditioning were also investigated. The fine-mesh calculation shows better scalability than the coarse mesh, because the coarse mesh does not contain enough cells to justify decomposing the computational domain extensively; the fine mesh shows good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
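
    As a small, generic illustration of the CSR storage format mentioned above (plain SciPy usage, not CUPID code), a 1-D Poisson-type pressure matrix can be assembled in CSR and handed to a Krylov solver:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import cg

        # Toy 1-D Poisson (pressure-equation-like) operator assembled in CSR.
        n = 1000
        A = diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)],
                  offsets=[-1, 0, 1], format="csr")
        print(A.indptr[:4], A.indices[:4], A.data[:4])   # the three CSR arrays
        x, info = cg(A, np.ones(n))                      # Krylov solve
        print("CG converged:", info == 0)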

  6. Computational Efficient Upscaling Methodology for Predicting Thermal Conductivity of Nuclear Waste forms

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dongsheng; Sun, Xin; Khaleel, Mohammad A.

    2011-09-28

    This study evaluated different upscaling methods for predicting the thermal conductivity of loaded nuclear waste form, a heterogeneous material system, and compared the efficiency and accuracy of these methods. The thermal conductivity of loaded nuclear waste form is an important property for the waste form integrated performance and safety code (IPSC). The effective thermal conductivity, obtained from microstructure information and the local thermal conductivities of the different components, is critical in predicting the life and performance of waste form during storage, since the heat generated during storage is directly related to thermal conductivity, which in turn determines the mechanical deformation behavior, corrosion resistance and aging performance. Several methods, including the Taylor model, the Sachs model, a self-consistent model, and statistical upscaling models, were developed and implemented. In the absence of experimental data, prediction results from the finite element method (FEM) were used as the reference for determining the accuracy of the different upscaling models. Micrographs from different loadings of nuclear waste were used in the prediction of thermal conductivity. The results demonstrated that, in terms of efficiency, the boundary models (Taylor and Sachs) are better than the self-consistent model, the statistical upscaling method and FEM. Balancing computational resources and accuracy, statistical upscaling is a computationally efficient method for predicting the effective thermal conductivity of nuclear waste form.
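
    For thermal conductivity, the Taylor and Sachs boundary models reduce, in the commonly used reading we assume here, to volume-weighted arithmetic and harmonic means of the phase conductivities; the two-phase values below are invented for illustration:

        import numpy as np

        def taylor_bound(k, f):
            """Taylor/Voigt-type upper bound: volume-weighted arithmetic mean."""
            return float(np.dot(f, k))

        def sachs_bound(k, f):
            """Sachs/Reuss-type lower bound: volume-weighted harmonic mean."""
            return float(1.0 / np.dot(f, 1.0 / np.asarray(k, dtype=float)))

        k = np.array([1.0, 20.0])   # W/(m K): hypothetical matrix and loaded phase
        f = np.array([0.7, 0.3])    # volume fractions, summing to 1
        print(taylor_bound(k, f), sachs_bound(k, f))   # effective k lies between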

  7. Assessment of computer codes for VVER-440/213-type nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Szabados, L.; Ezsol, Gy.; Perneczky [Atomic Energy Research Institute, Budapest (Hungary)

    1995-09-01

    Nuclear power plants of the VVER-440/213 type designed in the former USSR have a number of special features. As a consequence of these features, the transient behaviour of such a reactor system can be expected to differ from that of a PWR system. To study the transient behaviour of the Hungarian Paks Nuclear Power Plant of the VVER-440/213 type, both analytical and experimental activities have been performed. The experimental basis of the research is the PMK-2 integral-type test facility, which is a scaled-down model of the plant. Experiments performed on this facility have been used to assess thermal-hydraulic system codes. Four tests were selected for the 'Standard Problem Exercises' of the International Atomic Energy Agency. Results of the 4th Exercise, of high international interest, are presented in the paper, focusing on the essential findings of the assessment of the computer codes.

  8. Revised uranium--plutonium cycle PWR and BWR models for the ORIGEN computer code

    Energy Technology Data Exchange (ETDEWEB)

    Croff, A. G.; Bjerke, M. A.; Morrison, G. W.; Petrie, L. M.

    1978-09-01

    Reactor physics calculations and literature searches have been conducted, leading to the creation of revised enriched-uranium and enriched-uranium/mixed-oxide-fueled PWR and BWR reactor models for the ORIGEN computer code. These ORIGEN reactor models are based on cross sections that have been taken directly from the reactor physics codes and eliminate the need to make adjustments in uncorrected cross sections in order to obtain correct depletion results. Revised values of the ORIGEN flux parameters THERM, RES, and FAST were calculated along with new parameters related to the activation of fuel-assembly structural materials not located in the active fuel zone. Recommended fuel and structural material masses and compositions are presented. A summary of the new ORIGEN reactor models is given.

  9. A general panel sizing computer code and its application to composite structural panels

    Science.gov (United States)

    Anderson, M. S.; Stroud, W. J.

    1978-01-01

    A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.

  10. Development of system of computer codes for severe accident analysis and its applications

    Energy Technology Data Exchange (ETDEWEB)

    Jang, H. S.; Jeon, M. H.; Cho, N. J. and others [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1992-01-15

    The objective of this study is to develop a system of computer codes for postulated severe accident analyses in nuclear power plants. This system of codes is necessary for conducting Individual Plant Examinations for domestic nuclear power plants. As a result of this study, severe accident assessments can be conducted more easily, plant-specific vulnerabilities to severe accidents can be identified, and ideas for enhancing overall accident resistance can be extracted. Severe accidents can be mitigated by proper accident management strategies; however, some operator actions intended for mitigation can lead to more disastrous results, and thus uncertain severe accident phenomena must be well understood. Further research is needed to develop severe accident management strategies that utilize existing plant resources as well as new design concepts.

  11. Computational identification of human long intergenic non-coding RNAs using a GA-SVM algorithm.

    Science.gov (United States)

    Wang, Yanqiu; Li, Yang; Wang, Qi; Lv, Yingli; Wang, Shiyuan; Chen, Xi; Yu, Xuexin; Jiang, Wei; Li, Xia

    2014-01-01

    Long intergenic non-coding RNAs (lincRNAs) are a new type of non-coding RNA and are closely related to the occurrence and development of diseases. In previous studies, most lincRNAs have been identified through next-generation sequencing. Because lincRNAs exhibit tissue-specific expression, the reproducibility of lincRNA discovery across different studies is very poor. In this study, we used sequence, structural and protein-coding-potential features, but not lincRNA expression, as candidate features to construct a classifier that can distinguish lincRNAs from non-lincRNAs. A GA-SVM algorithm was used to extract the optimized feature subset. Compared with several other feature subsets, the five-fold cross-validation results showed that this optimized feature subset exhibited the best performance for the identification of human lincRNAs. Moreover, the LincRNA Classifier based on Selected Features (linc-SF) was constructed with a support vector machine (SVM) trained on the optimized feature subset. The performance of this classifier was further evaluated by predicting lincRNAs from two independent lincRNA sets. Because the recognition rates for the two lincRNA sets were 100% and 99.8%, linc-SF was found to be effective for the prediction of human lincRNAs.
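
    A generic GA-SVM feature-selection loop of the kind described can be sketched as follows; the synthetic data, population size, and GA operators are illustrative assumptions, not the linc-SF configuration:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Synthetic stand-in for sequence/structure/coding-potential features.
        X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                                   random_state=0)

        def fitness(mask):
            """Five-fold CV accuracy of an SVM restricted to the masked features."""
            if mask.sum() == 0:
                return 0.0
            clf = SVC(kernel="rbf", C=1.0, gamma="scale")
            return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

        pop = rng.integers(0, 2, size=(20, X.shape[1]))    # binary feature masks
        for gen in range(15):
            scores = np.array([fitness(ind) for ind in pop])
            pop = pop[np.argsort(scores)[::-1]]            # rank by fitness
            elite, children = pop[:4], []
            while len(children) < len(pop) - 4:
                p1, p2 = pop[rng.integers(0, 10, size=2)]  # parents from top half
                cut = rng.integers(1, X.shape[1])
                child = np.concatenate([p1[:cut], p2[cut:]])   # one-point crossover
                flip = rng.random(X.shape[1]) < 0.02           # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.vstack([elite, np.array(children)])

        best = pop[0]
        print("selected:", np.flatnonzero(best), "CV accuracy:", fitness(best))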

  12. Thought Insertion as a Self-Disturbance: An Integration of Predictive Coding and Phenomenological Approaches

    Science.gov (United States)

    Sterzer, Philipp; Mishara, Aaron L.; Voss, Martin; Heinz, Andreas

    2016-01-01

    Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from “nowhere”, as is already indicated by the early 20th century phenomenological accounts by the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: “Ichstörungen”) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a “becoming sensory” of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error related aberrant salience of external events that has been proposed previously, “internal” events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued

  13. A Scalable Multiple Description Scheme for 3D Video Coding Based on the Interlayer Prediction Structure

    Directory of Open Access Journals (Sweden)

    Lorenzo Favalli

    2010-01-01

    The most recent literature indicates multiple description coding (MDC) as a promising approach to the problem of video transmission over unreliable networks with different quality and bandwidth constraints. Furthermore, following the recent commercial availability of autostereoscopic 3D displays that allow 3D visual data to be viewed without special headgear or glasses, it is anticipated that applications of 3D video will increase rapidly in the near future. Starting from the concept of spatial MDC, in this paper we introduce efficient algorithms to obtain 3D substreams that also exploit some form of scalability. These algorithms are applied both to coded stereo sequences and to depth image-based rendering (DIBR). In these algorithms, we first generate four 3D subsequences by subsampling, and two of these subsequences are then jointly used to form each of the two descriptions. For each description, one of the original subsequences is predicted from the other via scalable algorithms, focusing on the interlayer prediction scheme. The proposed algorithms can be implemented as pre- and postprocessing of the standard H.264/SVC coder, which remains fully compatible with any standard coder. The experimental results presented show that these algorithms perform very well.
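
    The four-subsequence subsampling step can be illustrated with a spatial polyphase decomposition; the particular pairing of subsequences into descriptions below is one plausible arrangement, not necessarily the paper's:

        import numpy as np

        def four_polyphase(frame):
            """Split a frame into four spatial polyphase subsequences."""
            return (frame[0::2, 0::2], frame[0::2, 1::2],
                    frame[1::2, 0::2], frame[1::2, 1::2])

        frame = np.arange(64).reshape(8, 8)
        a, b, c, d = four_polyphase(frame)
        description_1 = (a, d)   # within a description, one subsequence can be
        description_2 = (b, c)   # predicted from the other; either description
                                 # alone still yields a half-resolution frame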

  14. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    Science.gov (United States)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs that together calculate the noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To perform a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine, which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part presents the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual that describes how input files are created and how the code is run.

  15. Development of a computer code for dynamic analysis of the primary circuit of advanced reactors

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, Jussie Soares da; Lira, Carlos A.B.O.; Magalhaes, Mardson A. de Sa, E-mail: cabol@ufpe.b [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Dept. de Energia Nuclear

    2011-07-01

    Currently, advanced reactors are being developed with the aim of enhanced safety, better performance and low environmental impact. Reactor designs must go through several steps and numerous tests before a conceptual project can be certified, and computational tools are indispensable in the preparation of such projects. This study therefore aimed at the development of a computational tool for thermal-hydraulic analysis, obtained by coupling two computer codes, to evaluate the influence of transients caused by pressure variations and flow surges in the region of the primary circuit of the IRIS reactor between the core and the pressurizer. The simulation considered an 'insurge' situation, characterized by the entry of water into the pressurizer due to the expansion of the coolant in the primary circuit. This expansion was represented by a step-shaped pressure disturbance, applied through the 'step' block of SIMULINK, thus initiating the transient. The results showed that the dynamic tool obtained by coupling the codes generated very satisfactory responses within the model's limitations, preserving the most important phenomena in the process. (author)

  16. Computer simulation tool for predicting sound propagation in air-filled tubes with acoustic impedance discontinuities.

    Science.gov (United States)

    Albors, Gabriel O; Kyle, Aaron M; Wodicka, George R; Juan, Eduardo J

    2007-01-01

    A computer tool, based on an acoustic transmission line model, was developed for modeling and predicting sound propagation and reflections in cascaded tube segments. This subroutine considered the number of interconnected tubes, their dimensions and wall properties, as well as medium properties to create a network of cascaded transmission line model segments, from which the impulse response of the network was estimated. Acoustic propagation was examined in air-filled cascaded tube networks and model predictions were compared to measured acoustic pulse responses. The model was able to accurately predict the location and morphology of reflections. The developed code proved to be a useful design tool for applications such as the guidance of catheters through compliant air-filled biological conduits.
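
    A standard way to realize such a model (a generic sketch under our own assumptions, not the authors' subroutine) chains lossless acoustic transmission-line (ABCD) matrices, one per tube segment, and reads the input impedance off the cascade; the geometry and termination are invented:

        import numpy as np

        RHO, C = 1.2, 343.0   # air density (kg/m^3) and sound speed (m/s)

        def segment_abcd(length, area, f):
            """Lossless acoustic transmission-line (ABCD) matrix of one segment."""
            k = 2 * np.pi * f / C            # wavenumber
            z0 = RHO * C / area              # characteristic acoustic impedance
            return np.array([[np.cos(k * length), 1j * z0 * np.sin(k * length)],
                             [1j * np.sin(k * length) / z0, np.cos(k * length)]])

        def input_impedance(segments, f, z_term):
            """Cascade (length, area) segments terminated by impedance z_term."""
            m = np.eye(2, dtype=complex)
            for length, area in segments:
                m = m @ segment_abcd(length, area, f)
            a, b, c, d = m.ravel()
            return (a * z_term + b) / (c * z_term + d)

        # Two tubes with an area (impedance) discontinuity, ideal open end.
        tubes = [(0.10, 2e-4), (0.10, 8e-4)]
        for f in (200.0, 500.0, 1000.0):
            print(f, input_impedance(tubes, f, z_term=0.0))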

  17. Using the Orbit Tracking Code Z3CYCLONE to Predict the Beam Produced by a Cold Cathode PIG Ion Source for Cyclotrons under DC Extraction

    CERN Document Server

    Forringer, Edward

    2005-01-01

    Experimental measurements of the emittance and luminosity of beams produced by a cold-cathode Philips Ionization Gauge (PIG) ion source for cyclotrons under dc extraction are reviewed. (The source being studied is of the same style as those that will be used in a series of 250 MeV proton cyclotrons being constructed for cancer therapy by ACCEL Inst. GmbH of Bergisch Gladbach, Germany.) The concepts of 'plasma boundary' and 'plasma temperature' are presented as a useful set of parameters for describing the initial conditions used in computational orbit tracking. Experimental results for r-pr and z-pz emittance are compared to predictions from the MSU orbit tracking code Z3CYCLONE, with the results indicating that the code is able to predict the beam produced by these ion sources with adequate accuracy, such that construction of actual cyclotrons can proceed with reasonable confidence that the cyclotron will perform as predicted.

  18. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    Science.gov (United States)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the data size for decoding a single view increases, impacting decoder performance. We therefore propose an MVC scheme that requires less inter-view prediction than the standard MVC scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown with the proposed scheme in terms of the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion. It is compared to the standard MVC scheme and to other improved techniques from the literature.

  19. A Bipartite Network-based Method for Prediction of Long Non-coding RNA–protein Interactions

    Directory of Open Access Journals (Sweden)

    Mengqu Ge

    2016-02-01

    As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA-interacting proteins by making full use of the known lncRNA–protein interactions. A leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA-interacting proteins.

  20. A Bipartite Network-based Method for Prediction of Long Non-coding RNA-protein Interactions

    Institute of Scientific and Technical Information of China (English)

    Mengqu Ge; Ao Li; Minghui Wang

    2016-01-01

    As one large class of non-coding RNAs (ncRNAs), long ncRNAs (lncRNAs) have gained considerable attention in recent years. Mutations and dysfunction of lncRNAs have been implicated in human disorders. Many lncRNAs exert their effects through interactions with the corresponding RNA-binding proteins. Several computational approaches have been developed, but only a few are able to perform the prediction of these interactions from a network-based point of view. Here, we introduce a computational method named lncRNA–protein bipartite network inference (LPBNI). LPBNI aims to identify potential lncRNA-interacting proteins by making full use of the known lncRNA–protein interactions. A leave-one-out cross validation (LOOCV) test shows that LPBNI significantly outperforms other network-based methods, including random walk (RWR) and protein-based collaborative filtering (ProCF). Furthermore, a case study was performed to demonstrate the performance of LPBNI using real data in predicting potential lncRNA-interacting proteins.
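
    The exact LPBNI propagation rule is given in the paper; the sketch below implements the generic two-phase resource-allocation scheme on a bipartite network that such methods build on, applied to a toy interaction matrix:

        import numpy as np

        def bipartite_scores(A):
            """Two-phase resource propagation on a lncRNA-protein bipartite network.

            A: (n_lnc, n_prot) binary interaction matrix. High scores for
            unobserved pairs suggest candidate lncRNA-protein interactions.
            """
            A = np.asarray(A, dtype=float)
            k_lnc = A.sum(axis=1, keepdims=True)    # lncRNA degrees
            k_prot = A.sum(axis=0, keepdims=True)   # protein degrees
            k_lnc[k_lnc == 0] = 1.0                 # avoid division by zero
            k_prot[k_prot == 0] = 1.0
            # Phase 1: each protein spreads its resource evenly to its lncRNAs;
            # Phase 2: each lncRNA redistributes back to its proteins.
            W = (A / k_prot).T @ (A / k_lnc)        # protein-to-protein transfer
            return A @ W

        A = np.array([[1, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1]])
        print(np.round(bipartite_scores(A), 2))     # scores for unobserved pairs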

  1. Validation of the solar heating and cooling high speed performance (HISPER) computer code

    Science.gov (United States)

    Wallace, D. B.

    1980-10-01

    Developed to give quick and accurate predictions, HISPER, a simplification of the TRNSYS program, achieves its computational speed by not simulating detailed system operations or performing detailed load computations. In order to validate the HISPER code for air systems, the simulation was compared to the actual performance of an operational test site. Solar insolation, ambient temperature, water usage rate, and water main temperatures from the data tapes for an office building in Huntsville, Alabama were used as input. The HISPER program was found to predict the heating loads and the solar fraction of the loads with errors of less than ten percent. Good correlation was found on both a seasonal and a monthly basis. Several parameters (such as the infiltration rate and the outside ambient temperature above which heating is not required) were found to require careful selection for accurate simulation.

  2. A Review of Computational Methods for Predicting Drug Targets.

    Science.gov (United States)

    Huang, Guohua; Yan, Fengxia; Tan, Duoduo

    2016-11-14

    Drug discovery and development is not only a time-consuming and labor-intensive process but also one full of risk. Identifying the targets of small molecules helps evaluate the safety of drugs and find new therapeutic applications. Biotechnology can measure a wide variety of properties related to drugs and targets from different perspectives, generating a large body of data, which undoubtedly provides a solid foundation for exploring relationships between drugs and targets. A large number of computational techniques have recently been developed for drug target prediction. In this paper, we summarize these computational methods and classify them into structure-based, molecular-activity-based, side-effect-based and multi-omics-based predictions according to the data used for inference. The multi-omics-based methods are further grouped into two types: classifier-based and network-based predictions. Furthermore, the advantages and limitations of each type of method are discussed. Finally, we point out future directions for the computational prediction of drug targets.

  3. A Brain Computer Interface for Robust Wheelchair Control Application Based on Pseudorandom Code Modulated Visual Evoked Potential

    DEFF Research Database (Denmark)

    Mohebbi, Ali; Engelsholm, Signe K.D.; Puthusserypady, Sadasivan

    2015-01-01

    In this pilot study, a novel and minimalistic Brain Computer Interface (BCI) based wheelchair control application was developed. The system was based on pseudorandom code modulated Visual Evoked Potentials (c-VEPs). The visual stimuli in the scheme were generated based on the Gold code...

  4. Development of an aeroelastic code based on three-dimensional viscous–inviscid method for wind turbine computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Sørensen, Jens Nørkær

    2017-01-01

    The aerodynamic and structural dynamic performance of modern wind turbines is routinely estimated in the wind energy field using computational tools known as aeroelastic codes. Most aeroelastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics and a modal...

  5. Simulation of the preliminary General Electric SP-100 space reactor concept using the ATHENA computer code

    Science.gov (United States)

    Fletcher, C. D.

    The capability to perform thermal-hydraulic analyses of a space reactor using the ATHENA computer code is demonstrated. The fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of the preliminary General Electric SP-100 design were modeled with ATHENA. Two demonstration transient calculations were performed, simulating accident conditions. Calculated results are available for display using the Nuclear Plant Analyzer color graphics analysis tool in addition to traditional plots. The ATHENA-calculated results appear reasonable, both for steady-state full power conditions and for the two transients. This analysis represents the first known transient thermal-hydraulic simulation using an integral space reactor system model incorporating heat pipes.

  6. Capabilities of the ATHENA computer code for modeling the SP-100 space reactor concept

    Science.gov (United States)

    Fletcher, C. D.

    1985-09-01

    The capability to perform thermal-hydraulic analyses of an SP-100 space reactor was demonstrated using the ATHENA computer code. The preliminary General Electric SP-100 design was modeled using ATHENA. The model simulates the fast reactor, liquid-lithium coolant loops, and lithium-filled heat pipes of this design. Two ATHENA demonstration calculations were performed, simulating accident scenarios. A mask for the SP-100 model and an interface with the Nuclear Plant Analyzer (NPA) were developed, allowing a graphic display of the calculated results on the NPA.

  7. Discrete logarithm computations over finite fields using Reed-Solomon codes

    OpenAIRE

    Augot, Daniel; Morain, François

    2012-01-01

    Cheng and Wan have related the decoding of Reed-Solomon codes to the computation of discrete logarithms over finite fields, with the aim of proving the hardness of their decoding. In this work, we experiment with solving the discrete logarithm over GF(q^h) using Reed-Solomon decoding. For fixed h and q going to infinity, we introduce an algorithm (RSDL) needing O~(h! q^2) operations over GF(q), operating on a q x q matrix with (h+2) q non-zero coefficients. We give faster variants including a...

  8. Resin Matrix/Fiber Reinforced Composite Material, Ⅱ: Method of Solution and Computer Code

    Institute of Scientific and Technical Information of China (English)

    Li Chensha(李辰砂); Jiao Caishan; Liu Ying; Wang Zhengping; Wang Hongjie; Cao Maosheng

    2003-01-01

    Based on a mathematical model that describes the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, and the consolidation of the composites, a solution method for the model is devised and a computer code is developed. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation and the residual stress distribution.

  9. Fuel burnup analysis for Thai research reactor by using MCNPX computer code

    Science.gov (United States)

    Sangkaew, S.; Angwongtrakool, T.; Srimok, B.

    2017-06-01

    This paper presents a fuel burnup analysis of the Thai research reactor (TRR-1/M1), a TRIGA Mark-III reactor operated by the Thailand Institute of Nuclear Technology (TINT) in Bangkok, Thailand. The modelling software used in this analysis is MCNPX (MCNP eXtended) version 2.6.0, a Fortran90 Monte Carlo radiation transport computer code. The analysis covers the core excess reactivity, neutron fluxes at the irradiation positions and neutron detector tubes, the power distribution, fuel burnup, and fission products, based on the fuel cycle of the first reactor core arrangement.

  10. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    Science.gov (United States)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
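
    The generalization studied in this record is the authors' own; for orientation only, the classical Catalan numbers that it extends can be computed from either the closed form or the convolution recurrence:

        from math import comb

        def catalan_closed(n):
            """Closed form: C_n = binom(2n, n) / (n + 1)."""
            return comb(2 * n, n) // (n + 1)

        def catalan_recurrence(n):
            """Recurrence: C_0 = 1, C_{m+1} = sum_{i=0..m} C_i * C_{m-i}."""
            c = [1]
            for m in range(n):
                c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
            return c[n]

        print([catalan_closed(n) for n in range(8)])   # [1, 1, 2, 5, 14, 42, 132, 429]
        print(catalan_recurrence(7))                   # 429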

  11. Apparatus, Method, and Computer Program for a Resolution-Enhanced Pseudo-Noise Code Technique

    Science.gov (United States)

    Li, Steven X. (Inventor)

    2015-01-01

    An apparatus, method, and computer program for a resolution-enhanced pseudo-noise coding technique for 3D imaging is provided. In one embodiment, a pattern generator may generate a plurality of unique patterns for a return-to-zero signal. A plurality of laser diodes may be configured such that each laser diode transmits the return-to-zero signal to an object. Each return-to-zero signal includes one unique pattern from the plurality of unique patterns, to distinguish each of the transmitted return-to-zero signals from one another.

  12. Fundamental algorithm and computational codes for the light beam propagation in high power laser system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The fundamental algorithm of light beam propagation in high power laser systems is investigated and the corresponding computational codes are given. It is shown that the number of modulation rings due to diffraction is related to the size of the pinhole in the spatial filter (expressed in terms of the times of the diffraction limit, i.e. TDL) and the Fresnel number of the laser system; for a complex laser system with multiple spatial filters and free space, the system can be investigated by the reciprocal rule of operators.

  13. Modeling of field lysimeter release data using the computer code dust

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M.; Fitzgerald, I.T. (Brookhaven National Lab., Upton, NY (United States)); McConnell, J.W.; Rogers, R.D. (Idaho National Engineering Lab., Idaho Falls, ID (United States))

    1993-01-01

    In this study, an attempt was made to match the experimentally measured mass release data, collected over a period of seven years by investigators from Idaho National Engineering Laboratory from the lysimeters at Oak Ridge National Laboratory and Argonne National Laboratory, using the computer code DUST. The influence of the dispersion coefficient and the distribution coefficient on mass release was investigated; both were found to significantly influence mass release over the seven-year period. It is recommended that these parameters be measured on a site-specific basis to enhance the understanding of the system.

  14. Modeling of field lysimeter release data using the computer code dust

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, T.M.; Fitzgerald, I.T. [Brookhaven National Lab., Upton, NY (United States); McConnell, J.W.; Rogers, R.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1993-03-01

    In this study, an attempt was made to match the experimentally measured mass release data, collected over a period of seven years by investigators from Idaho National Engineering Laboratory from the lysimeters at Oak Ridge National Laboratory and Argonne National Laboratory, using the computer code DUST. The influence of the dispersion coefficient and the distribution coefficient on mass release was investigated; both were found to significantly influence mass release over the seven-year period. It is recommended that these parameters be measured on a site-specific basis to enhance the understanding of the system.

  15. Graphics Processing Unit-Accelerated Code for Computing Second-Order Wiener Kernels and Spike-Triggered Covariance

    Science.gov (United States)

    Mano, Omer

    2017-01-01

    Sensory neuroscience seeks to understand and predict how sensory neurons respond to stimuli. Nonlinear components of neural responses are frequently characterized by the second-order Wiener kernel and the closely-related spike-triggered covariance (STC). Recent advances in data acquisition have made it increasingly common and computationally intensive to compute second-order Wiener kernels/STC matrices. In order to speed up this sort of analysis, we developed a graphics processing unit (GPU)-accelerated module that computes the second-order Wiener kernel of a system’s response to a stimulus. The generated kernel can be easily transformed for use in standard STC analyses. Our code speeds up such analyses by factors of over 100 relative to current methods that utilize central processing units (CPUs). It works on any modern GPU and may be integrated into many data analysis workflows. This module accelerates data analysis so that more time can be spent exploring parameter space and interpreting data. PMID:28068420
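
    A plain-NumPy CPU sketch of what a spike-triggered covariance computation does follows (the published module is GPU-accelerated; the synthetic stimulus and Poisson spiking model here are our own illustration):

        import numpy as np

        def spike_triggered_covariance(stimulus, spikes, window=20):
            """Spike-triggered average and covariance from a 1-D stimulus.

            stimulus: (T,) samples; spikes: (T,) spike counts per bin;
            window: number of stimulus samples preceding each spike to use.
            """
            rows, weights = [], []
            for t in range(window, len(stimulus)):
                if spikes[t] > 0:
                    rows.append(stimulus[t - window:t])
                    weights.append(spikes[t])
            ensemble = np.array(rows, dtype=float)
            w = np.array(weights, dtype=float)
            sta = (ensemble * w[:, None]).sum(0) / w.sum()   # spike-triggered avg
            centered = ensemble - sta
            stc = (centered.T * w) @ centered / w.sum()      # weighted covariance
            return sta, stc

        rng = np.random.default_rng(0)
        stim = rng.standard_normal(10_000)
        rate = 1.0 / (1.0 + np.exp(-3 * np.convolve(stim, np.hanning(20), "same")))
        spikes = rng.poisson(0.1 * rate)
        sta, stc = spike_triggered_covariance(stim, spikes)
        print(sta.shape, stc.shape)   # (20,) (20, 20)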

  16. A Fusion Model for CPU Load Prediction in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dayu Xu

    2013-11-01

    Load prediction plays a key role in cost-optimal resource allocation and datacenter energy saving. In this paper, we use real-world traces from a Cloud platform and propose a fusion model to forecast future CPU loads. First, the long CPU load time series is divided into short sequences of equal length on the basis of the cloud control cycle. We then use the kernel fuzzy c-means clustering algorithm to put the subsequences into different clusters. For each cluster, given the current load sequence, a genetic-algorithm-optimized wavelet Elman neural network prediction model is used to predict the CPU load in the next time interval. Finally, we obtain the optimal cloud computing CPU load prediction from the cluster and its corresponding predictor with the minimum forecasting error. Experimental results show that our algorithm performs better than other models reported in previous works.

  17. A Computationally Efficient Aggregation Optimization Strategy of Model Predictive Control

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Model Predictive Control (MPC) is a popular technique that has been successfully used in various industrial applications. However, the major drawback of MPC, the formidable on-line computational effort involved, limits its applicability to relatively slow and/or small processes with a moderate number of inputs. This paper develops an aggregation optimization strategy for MPC that can improve its computational efficiency. For the regulation problem, an input-decaying aggregation optimization algorithm is presented that aggregates all the original optimized variables on the control horizon into a decaying sequence with respect to the current control action.

  18. Aircrew dosimetry using the Predictive Code for Aircrew Radiation Exposure (PCAIRE).

    Science.gov (United States)

    Lewis, B J; Bennett, L G I; Green, A R; Butler, A; Desormeaux, M; Kitching, F; McCall, M J; Ellaschuk, B; Pierre, M

    2005-01-01

    During 2003, a portable instrument suite was used to conduct cosmic radiation measurements on 49 jet-altitude flights, bringing the total number of in-flight measurements by this research group to over 160 flights since 1999. From previous measurements, correlations have been developed to allow the interpolation of the dose-equivalent rate for any global position, altitude and date. The result was the Predictive Code for Aircrew Radiation Exposure (PCAIRE), which has since been improved. This version of PCAIRE has been validated against the integral route dose measurements made at commercial aircraft altitudes during the 49 flights. On most flights, the code gave predictions that agreed with the measured data (within ±25%), providing confidence in the use of PCAIRE to predict aircrew exposure to galactic cosmic radiation. An empirical correlation, based on ground-level neutron monitoring data, has also been developed for the estimation of aircrew exposure from solar energetic particle (SEP) events. This model has been used to determine the significance of SEP exposure on a theoretical jet-altitude flight during GLE 42.

  19. Computational Prediction of RNA-Binding Proteins and Binding Sites

    Directory of Open Access Journals (Sweden)

    Jingna Si

    2015-11-01

    Protein and RNA interactions have vital roles in many cellular processes such as protein synthesis, sequence encoding, RNA transfer, and gene regulation at the transcriptional and post-transcriptional levels. Approximately 6%–8% of all proteins are RNA-binding proteins (RBPs). Distinguishing these RBPs or their binding residues is a major aim of structural biology. Previously, a number of experimental methods were developed for the determination of protein–RNA interactions. However, these experimental methods are expensive, time-consuming, and labor-intensive. Alternatively, researchers have developed many computational approaches to predict RBPs and protein–RNA binding sites, combining various machine learning methods and abundant sequence and/or structural features. There are three kinds of computational approaches: prediction from protein sequence, prediction from protein structure, and protein–RNA docking. In this paper, we review existing studies on the prediction of RNA-binding sites, RBPs and complexes, including the data sets used in different approaches, the sequence and structural features used in several predictors, prediction method classifications, performance comparisons, evaluation methods, and future directions.

  20. Computational Prediction of RNA-Binding Proteins and Binding Sites.

    Science.gov (United States)

    Si, Jingna; Cui, Jing; Cheng, Jin; Wu, Rongling

    2015-01-01

    Protein and RNA interactions have vital roles in many cellular processes such as protein synthesis, sequence encoding, RNA transfer, and gene regulation at the transcriptional and post-transcriptional levels. Approximately 6%-8% of all proteins are RNA-binding proteins (RBPs). Distinguishing these RBPs or their binding residues is a major aim of structural biology. Previously, a number of experimental methods were developed for the determination of protein-RNA interactions. However, these experimental methods are expensive, time-consuming, and labor-intensive. Alternatively, researchers have developed many computational approaches to predict RBPs and protein-RNA binding sites, combining various machine learning methods and abundant sequence and/or structural features. There are three kinds of computational approaches: prediction from protein sequence, prediction from protein structure, and protein-RNA docking. In this paper, we review existing studies on the prediction of RNA-binding sites, RBPs and complexes, including the data sets used in different approaches, the sequence and structural features used in several predictors, prediction method classifications, performance comparisons, evaluation methods, and future directions.

  1. Digital Poetry: A Narrow Relation between Poetics and the Codes of the Computational Logic

    Science.gov (United States)

    Laurentiz, Silvia

    The project "Percorrendo Escrituras" (Walking Through Writings Project) has been developed at ECA-USP Fine Arts Department. Summarizing, it intends to study different structures of digital information that share the same universe and are generators of a new aesthetics condition. The aim is to search which are the expressive possibilities of the computer among the algorithm functions and other of its specific properties. It is a practical, theoretical and interdisciplinary project where the study of programming evolutionary language, logic and mathematics take us to poetic experimentations. The focus of this research is the digital poetry, and it comes from poetics of permutation combinations and culminates with dynamic and complex systems, autonomous, multi-user and interactive, through agents generation derivations, filtration and emergent standards. This lecture will present artworks that use some mechanisms introduced by cybernetics and the notion of system in digital poetry that demonstrate the narrow relationship between poetics and the codes of computational logic.

  2. An implementation of a tree code on a SIMD, parallel computer

    Science.gov (United States)

    Olson, Kevin M.; Dorband, John E.

    1994-01-01

    We describe a fast tree algorithm for gravitational N-body simulation on SIMD parallel computers. The tree construction uses fast, parallel sorts. The sorted lists are recursively divided along their x, y and z coordinates. This data structure is a completely balanced tree (i.e., each particle is paired with exactly one other particle) and maintains good spatial locality. An implementation of this tree-building algorithm on a 16k-processor Maspar MP-1 performs well and constitutes only a small fraction (approximately 15%) of the entire cycle of finding the accelerations. Each node in the tree is treated as a monopole. The tree search and the summation of accelerations also perform well. During the tree search, node data that is needed from another processor is simply fetched. Roughly 55% of the tree search time is spent in communications between processors. We apply the code to two problems of astrophysical interest. The first is a simulation of the close passage of two gravitationally interacting disk galaxies using 65,536 particles. We also simulate the formation of structure in an expanding model universe using 1,048,576 particles. Our code attains speeds comparable to one head of a Cray Y-MP, so single instruction, multiple data (SIMD) computers can be used for these simulations. The cost/performance ratio for SIMD machines like the Maspar MP-1 makes them an extremely attractive alternative to either vector processors or large multiple instruction, multiple data (MIMD) parallel computers. With further optimizations (e.g., more careful load balancing), speeds in excess of today's vector processing computers should be possible.

  3. Wavelet subband coding of computer simulation output using the A++ array class library

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D. [Los Alamos National Lab., NM (United States); Nuri, V. [Washington State Univ., Pullman, WA (United States). School of EECS

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data, and library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed in earlier work has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms in that work indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. It appears that a careful study is needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
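
    The DWT-plus-scalar-quantization pipeline described can be sketched with PyWavelets (assumed available); the wavelet, decomposition depth and quantizer step below are illustrative choices, not the project's:

        import numpy as np
        import pywt  # PyWavelets

        def wsq_roundtrip(img, wavelet="bior4.4", levels=3, step=0.05):
            """Wavelet transform + uniform scalar quantization of each subband."""
            coeffs = pywt.wavedec2(img, wavelet, level=levels)
            quantized = [np.round(coeffs[0] / step) * step]    # approximation band
            for detail in coeffs[1:]:
                quantized.append(tuple(np.round(d / step) * step for d in detail))
            return pywt.waverec2(quantized, wavelet)

        img = np.outer(np.hanning(64), np.hanning(64))   # smooth synthetic field
        rec = wsq_roundtrip(img)
        print("max reconstruction error:", np.abs(rec[:64, :64] - img).max())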

  4. Computer simulation of Angra-2 PWR nuclear reactor core using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Medeiros, Marcos P.C. de; Rebello, Wilson F., E-mail: eng.cavaliere@ime.eb.br, E-mail: rebello@ime.eb.br [Instituto Militar de Engenharia - Secao de Engenharia Nuclear, Rio de Janeiro, RJ (Brazil); Oliveira, Claudio L. [Universidade Gama Filho, Departamento de Matematica, Rio de Janeiro, RJ (Brazil); Vellozo, Sergio O., E-mail: vellozo@cbpf.br [Centro Tecnologico do Exercito. Divisao de Defesa Quimica, Biologica e Nuclear, Rio de Janeiro, RJ (Brazil); Silva, Ademir X. da, E-mail: ademir@nuclear.ufrj.br [Coordenacao dos Programas de Pos Gaduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2011-07-01

    In this work the MCNPX (Monte Carlo N-Particle Transport Code) code was used to develop a computerized model of the core of the Angra 2 PWR (Pressurized Water Reactor) nuclear reactor. The model was created without any kind of homogenization, using the real geometric information and material composition of that reactor, obtained from the FSAR (Final Safety Analysis Report). The model is still being improved, and the version presented in this work is validated by comparing values calculated by MCNPX with results calculated by other means and presented in the FSAR. This paper shows the results already obtained for K_eff and K_infinity, general parameters of the core, considering the reactor operating under the stationary conditions of initial testing and operation. Other stationary operating conditions have been simulated and, in all tested cases, there was close agreement between the values calculated computationally with this model and the data presented in the FSAR, which were obtained with other codes. This model is expected to become a valuable tool for many future applications. (author)

  5. Development of computer code models for analysis of subassembly voiding in the LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Hinkle, W [ed.

    1979-12-01

    The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two-fluid (6-equation) model, which is more difficult to develop but has the potential for providing a code with the utmost flexibility and physical consistency for use in the long term. The other will use a mixture (<6-equation) model, which is less general but may be more amenable to the interpretation and use of experimental data and, therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered.
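
    For reference, a one-dimensional two-fluid model carries separate mass, momentum, and energy balances for each phase k (vapor and liquid), which is where the six equations come from. A generic textbook form, with the code-specific interfacial closure terms \Gamma_k, M_k and Q_k left unspecified (this is standard notation, not necessarily the formulation adopted in the program), is

        \partial_t(\alpha_k \rho_k) + \partial_x(\alpha_k \rho_k u_k) = \Gamma_k
        \partial_t(\alpha_k \rho_k u_k) + \partial_x(\alpha_k \rho_k u_k^2) + \alpha_k \,\partial_x p = M_k
        \partial_t(\alpha_k \rho_k e_k) + \partial_x(\alpha_k \rho_k e_k u_k) + p\,\partial_t \alpha_k + p\,\partial_x(\alpha_k u_k) = Q_k

    with \alpha_k the phase volume fraction (\sum_k \alpha_k = 1). A mixture model collapses some of these balances, e.g. by assuming equal phase velocities or temperatures, leaving fewer than six equations.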

  6. Sodium combustion computer code ASSCOPS version 2.0 user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Hiroyasu; Futagami, Satoshi; Ohno, Shuji; Seino, Hiroshi; Miyake, Osamu [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1997-12-01

    ASSCOPS (Analysis of Simultaneous Sodium Combustion in Pool and Spray) has been developed for analyses of thermal consequences of sodium leak and fire accidents in LMFBRs. This report presents a description of the computational models, input, and output as the user's manual of ASSCOPS version 2.0. ASSCOPS is an integrated code based on the sodium pool fire code SOFIRE II developed by the Atomics International Division of Rockwell International, and the sodium spray fire code SPRAY developed by the Hanford Engineering Development Laboratory in the U.S. The experimental studies conducted at PNC have been reflected in the ASSCOPS improvement. The users of ASSCOPS need to specify the sodium leak conditions (leak flow rate and temperature, etc.), the cell geometries (volume and structure surface area and thickness, etc.), and the atmospheric initial conditions, such as gas temperature, pressure, and gas composition. ASSCOPS calculates the time histories of atmospheric pressure and temperature changes along with those of the structural temperatures. (author)
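
    The kind of lumped-parameter cell balance that such codes integrate can be illustrated with a deliberately simplified sketch (all parameter values, the constant heat release rate, and the fixed wall temperature are assumptions for illustration; ASSCOPS's actual combustion and heat transfer models are far more detailed):

        # Minimal single-cell energy balance for a sodium pool fire (illustrative).
        V = 100.0        # cell free volume [m^3]                            (assumed)
        m_gas = 120.0    # mass of cell gas [kg]                             (assumed)
        cv = 718.0       # specific heat of air at constant volume [J/(kg K)]
        R_spec = 287.0   # specific gas constant of air [J/(kg K)]
        q_fire = 5.0e5   # heat release rate of the pool fire [W]            (assumed)
        h_wall = 10.0    # gas-to-structure heat transfer coeff. [W/(m^2 K)] (assumed)
        A_wall = 150.0   # structure surface area [m^2]                      (assumed)
        T_wall = 300.0   # structure temperature, held fixed here [K]

        T_gas, dt = 300.0, 0.1
        for step in range(6000):                      # 10 minutes of problem time
            q_net = q_fire - h_wall * A_wall * (T_gas - T_wall)
            T_gas += q_net * dt / (m_gas * cv)        # explicit Euler update
        p = m_gas * R_spec * T_gas / V                # ideal-gas cell pressure [Pa]
        print(f"T_gas = {T_gas:.1f} K, p = {p/1e3:.1f} kPa")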

  7. Hierarchical surface code for network quantum computing with modules of arbitrary size

    Science.gov (United States)

    Li, Ying; Benjamin, Simon C.

    2016-10-01

    The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ˜10 % even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.

  8. High Pitch Delay Resolution Technique for Tonal Language Speech Coding Based on Multi-Pulse Based Code Excited Linear Prediction Algorithm

    Directory of Open Access Journals (Sweden)

    Suphattharachai Chomphan

    2011-01-01

    Problem statement: In spontaneous speech communication, speech coding is an important process that should be taken into account, since the quality of coded speech depends on the efficiency of the speech coding algorithm. For a tonal language, in which tone plays an important role in both the naturalness and the intelligibility of speech, tone must be treated appropriately. Approach: This study proposes a modification of the flexible Multi-Pulse based Code Excited Linear Predictive (MP-CELP) coder with multiple bitrates and bitrate scalability for tonal-language speech in multimedia applications. The coder consists of a core coder and bitrate-scalable tools. High Pitch Delay Resolution (HPDR) is applied to the adaptive codebook of the core coder to improve tonal-language speech quality. The bitrate-scalable tool employs multi-stage excitation coding based on an embedded-coding approach. The multi-pulse excitation codebook at each stage is adaptively produced depending on the excitation signal selected at the previous stage. Results: The experimental results show that the speech quality of the proposed coder is improved over that of a conventional coder without pitch-resolution adaptation. Conclusion: The study provides strong evidence for further applying the proposed technique in speech coding systems and other speech processing technologies.
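
    The adaptive-codebook (long-term prediction) search at the heart of CELP-type coders can be sketched as follows (a minimal integer-lag version in Python; the HPDR technique refines the winning lag at fractional resolution, which this sketch omits, and all signal parameters are placeholders):

        import numpy as np

        def pitch_search(target, past_exc, lag_min=20, lag_max=143):
            """Find the adaptive-codebook lag and gain that best predict the
            target subframe from past excitation (integer lags only)."""
            n = len(target)
            best_lag, best_gain, best_score = lag_min, 0.0, -1.0
            for lag in range(lag_min, lag_max + 1):
                seg = past_exc[-lag:]                    # most recent `lag` samples
                pred = np.tile(seg, (n // lag) + 1)[:n]  # periodic extension
                num = float(np.dot(target, pred))
                den = float(np.dot(pred, pred)) + 1e-12
                score = num * num / den                  # normalized correlation
                if score > best_score:
                    best_lag, best_gain, best_score = lag, num / den, score
            return best_lag, best_gain

        rng = np.random.default_rng(0)
        exc = rng.standard_normal(400)                   # past excitation memory
        target = 0.8 * np.tile(exc[-57:], 2)[:80]        # subframe with pitch lag 57
        print(pitch_search(target, exc))                 # expect (57, ~0.8)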

  9. Computational Methods for Predictive Simulation of Stochastic Turbulence Systems

    Science.gov (United States)

    2015-11-05

    AFRL-AFOSR-VA-TR-2015-0363: Computational Methods for Predictive Simulation of Stochastic Turbulence Systems, AFOSR grant FA9550-12-1-0191, William Layton and Catalin Trenchea, Department of Mathematics, University of Pittsburgh. Personnel supported during the grant: Nan Jian, graduate student, University of Pittsburgh (currently postdoc at FSU); Sarah Khankan, graduate student, University of Pittsburgh.

  10. Predicting Neck Abscess with Contrast-Enhanced Computed Tomography

    OpenAIRE

    2014-01-01

    Neck abscesses are difficult to diagnose and treat. Currently, contrast-enhanced computed tomography (CECT) is the imaging modality of choice. The study aims to determine the predictive value of CECT findings in diagnosing neck abscess, causes of neck abscess and the most common neck space involved in the local population. 84 consecutive patients clinically suspected to have neck abscess who underwent CECT and surgical confirmation of pus were included. Demographic and clinical data were reco...

  11. A code-independent technique for computational verification of fluid mechanics and heat transfer problems

    Institute of Scientific and Technical Information of China (English)

    M. Garbey; C. Picard

    2008-01-01

    The goal of this paper is to present a versatile framework for solution verification of PDEs. We first generalize the Richardson extrapolation technique to an optimized extrapolation solution procedure that constructs the best consistent solution from a set of two or three coarse grid solutions in the discrete norm of choice. This technique generalizes the Least Square Extrapolation method introduced by one of the authors and W. Shyy. Second, we establish the condition number of the problem in a reduced space that approximates the main features of the numerical solution thanks to a sensitivity analysis. Overall, our method produces an a posteriori error estimate in this reduced space of approximation. The key feature of our method is that our construction requires no internal knowledge of the software nor of the source code that produces the solution to be verified; it can be applied in principle as a postprocessing procedure to off-the-shelf commercial codes. We demonstrate the robustness of our method with two steady problems: an incompressible back-step flow test case and a heat transfer problem for a battery. Our error estimate might ultimately be verified with a nearby manufactured solution. While our procedure is systematic and requires numerous computations of residuals, one can take advantage of distributed computing to obtain the error estimate quickly.
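
    The classical Richardson extrapolation that the paper generalizes can be stated in a few lines (a textbook sketch for a smooth scalar quantity with grid refinement ratio 2; the optimized least-squares procedure of the paper relaxes these assumptions):

        import numpy as np

        def richardson(u_h, u_2h, u_4h):
            """Estimate the observed order p and an extrapolated solution from
            three systematically refined scalar results (refinement ratio 2)."""
            p = np.log(abs(u_4h - u_2h) / abs(u_2h - u_h)) / np.log(2.0)
            u_exact = u_h + (u_h - u_2h) / (2.0**p - 1.0)
            err_h = abs(u_exact - u_h)          # a posteriori error estimate
            return p, u_exact, err_h

        # Manufactured example: u(h) = 1 + 3 h^2, i.e. a second-order scheme.
        h = 0.05
        vals = [1 + 3 * (h * s) ** 2 for s in (1, 2, 4)]
        print(richardson(*vals))   # p ~ 2, extrapolated value ~ 1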

  12. Computational Methods for Protein Structure Prediction and Modeling Volume 2: Structure Prediction

    CERN Document Server

    Xu, Ying; Liang, Jie

    2007-01-01

    Volume 2 of this two-volume sequence focuses on protein structure prediction and includes protein threading, De novo methods, applications to membrane proteins and protein complexes, structure-based drug design, as well as structure prediction as a systems problem. A series of appendices review the biological and chemical basics related to protein structure, computer science for structural informatics, and prerequisite mathematics and statistics.

  13. DEVELOPMENT OF A COMPUTER PROGRAM TO SUPPORT AN EFFICIENT NON-REGRESSION TEST OF A THERMAL-HYDRAULIC SYSTEM CODE

    Directory of Open Access Journals (Sweden)

    JUN YEOB LEE

    2014-10-01

    During the development process of a thermal-hydraulic system code, a non-regression test (NRT) must be performed repeatedly in order to prevent software regression. The NRT process, however, is time-consuming and labor-intensive, so automation of this process is an ideal solution. In this study, we have developed a program to support an efficient NRT for the SPACE code and demonstrated its usability, which results in a high degree of efficiency for code development. The program was developed using Visual Basic for Applications and designed so that it can be easily customized for the NRT of other computer codes.

  14. Evolutionary computer programming of protein folding and structure predictions.

    Science.gov (United States)

    Nölting, Bengt; Jülich, Dennis; Vonau, Winfried; Andert, Karl

    2004-07-07

    In order to understand the mechanism of protein folding and to assist the rational de novo design of fast-folding, non-aggregating and stable artificial enzymes, it is very helpful to be able to simulate protein folding reactions and to predict the structures of proteins and other biomacromolecules. Here, we use a method called "evolutionary computer programming", in which a program evolves depending on the evolutionary pressure exerted on it. In the presented application of this method to a computer program for folding simulations, the evolutionary pressure exerted was toward finding deep minima in the energy landscape of protein folding faster. Already after 20 evolution steps, the evolved program was able to find deep minima in the energy landscape more than 10 times faster than the original program prior to the evolution process.
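
    The flavor of such a scheme can be conveyed with a toy sketch (Python; the energy landscape, the random-walk "folding program", and the mutation scheme are all stand-ins invented for illustration, whereas the actual work evolved the simulation program itself):

        import math, random

        random.seed(1)

        def energy(x):
            # Toy rugged 1-D landscape standing in for a folding energy surface.
            return math.sin(5.0 * x) + 0.1 * (x - 2.0) ** 2

        def run_search(step, budget=200):
            """One 'folding program': a random-walk minimizer with a given step
            size. Fitness = deepest energy found within a fixed budget."""
            x, best = 0.0, energy(0.0)
            for _ in range(budget):
                cand = x + random.gauss(0.0, step)
                if energy(cand) < energy(x):
                    x = cand
                best = min(best, energy(x))
            return best

        # Evolve the program parameter; selection pressure = deeper minima, faster.
        population = [random.uniform(0.01, 2.0) for _ in range(10)]
        for generation in range(20):
            scored = sorted(population, key=run_search)   # fitter = lower energy found
            parents = scored[:3]
            population = [p * math.exp(random.gauss(0.0, 0.3))
                          for p in parents for _ in range(3)] + parents[:1]
        print("evolved step size:", round(sorted(population, key=run_search)[0], 3))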

  15. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Functional modules, F9-F11

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume consists of the section of the manual dealing with three of the functional modules in the code. Those are the Morse-SGC for the SCALE system, Heating 7.2, and KENO V.a. The manual describes the latest released versions of the codes.

  16. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    Science.gov (United States)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
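
    The kernel being offloaded here is, at its core, a block-matching search in which every candidate displacement can be evaluated independently; that independence is what makes the workload map well onto a many-core GPU. A minimal CPU reference sketch (fixed 8x8 blocks and a full search, unlike the standard's variable block sizes and H.264's actual rate-distortion cost):

        import numpy as np

        def sad(a, b):
            """Sum of absolute differences between two blocks."""
            return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

        def full_search(cur, ref, bx, by, bs=8, rad=4):
            """Full-search motion estimation for one block: every candidate
            displacement is scored independently, so each (dx, dy) could be
            assigned to its own GPU thread."""
            block = cur[by:by + bs, bx:bx + bs]
            best = (0, 0, sad(block, ref[by:by + bs, bx:bx + bs]))
            h, w = ref.shape
            for dy in range(-rad, rad + 1):
                for dx in range(-rad, rad + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - bs and 0 <= x <= w - bs:
                        cost = sad(block, ref[y:y + bs, x:x + bs])
                        if cost < best[2]:
                            best = (dx, dy, cost)
            return best

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, -3), axis=(0, 1))    # known global motion
        print(full_search(cur, ref, bx=24, by=24))        # expect (dx, dy, cost) = (3, -2, 0)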

  17. Improved inter-layer prediction for light field content coding with display scalability

    Science.gov (United States)

    Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo

    2016-09-01

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changes. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.

  18. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    Energy Technology Data Exchange (ETDEWEB)

    Geffraye, G.; Bazin, P.; Pichon, P. [CEA/DRN/STR, Grenoble (France)

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed for calculations with a counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.
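
    Flooding limits of this kind are commonly expressed in the Wallis form (quoted here as general background; the correlation actually applied in CATHARE at a particular junction may differ), with dimensionless superficial velocities for the gas (g) and liquid (f) phases

        j_k^* = j_k \sqrt{ \rho_k / ( g D (\rho_f - \rho_g) ) },  k = f, g,

    and the flooding line

        \sqrt{j_g^*} + m \sqrt{j_f^*} = C,

    where D is the hydraulic diameter and m and C are geometry-dependent constants (C is typically of order 0.7-1); the counter-current flow is limited once the phase superficial velocities reach this line.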

  19. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  20. Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules.

    Directory of Open Access Journals (Sweden)

    Konda Leela Sarath Kumar

    Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e., high false positive rates and/or limited coverage. The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools, including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), its coverage was very low (only 10 out of 77 molecules were predicted reliably). Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing.

  1. Development of additional module to neutron-physic and thermal-hydraulic computer codes for coolant acoustical characteristics calculation

    Energy Technology Data Exchange (ETDEWEB)

    Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)

    2007-07-01

    A new special module for coolant acoustical characteristics calculation, to be coupled with neutron-physics and thermal-hydraulics computer codes, has been developed. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of calculating EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) in any acoustical element of the primary circuit of an NPP. EFOCP values have been calculated for transient and for stationary operating conditions. The calculated results for nominal operation were compared with measured EFOCP values. For example, this comparison was made for the system 'pressurizer + surge line' of a WWER-1000 reactor: the calculated result of 0.58 Hz practically coincides with the measured 0.6 Hz. The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.
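
    For the cited 'pressurizer + surge line' example, a first-order estimate of such an eigenfrequency can be obtained by treating the gas volume and connecting pipe as a Helmholtz resonator (a standard acoustic approximation given for orientation; the module's actual model is not described in the abstract):

        f = \frac{c}{2\pi} \sqrt{\frac{A}{V L}},

    where c is the sound speed in the medium, A and L are the flow area and effective length of the surge line, and V is the compliant volume. Sub-hertz frequencies, as in the quoted 0.58 Hz result, follow from large V and a long, narrow surge line.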

  2. Experimental verification of NOVICE transport code predictions of electron distributions from targets

    CERN Document Server

    Kronenberg, S; Jordan, T; Bechtel, E; Gentner, F; Groeber, E

    2002-01-01

    This paper reports the results of experiments that were designed to check the validity of the NOVICE Adjoint Monte Carlo Transport code in predicting emission-electron distributions from irradiated targets. Previous work demonstrated that the code accurately calculated total electron yields from irradiated targets. In this investigation, a gold target was irradiated by X-rays with effective quantum energies of 79, 127, 174, 216, and 250 keV. Spectra of electrons from the target were measured for an incident photon angle of 45 deg., an emission-electron polar angle of 45 deg., azimuthal angles of 0 deg. and 180 deg., and in both the forward and backward directions. NOVICE was used to predict those electron-energy-distributions for the same set of experimental conditions. The agreement in shape of the theoretical and experimental distributions was good, whereas the absolute agreement in amplitude was within about a factor of 2 over most of the energy range of the spectra. Previous experimental and theoretical c...

  3. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder.

    Science.gov (United States)

    Gonzalez-Gadea, Maria Luz; Chennu, Srivas; Bekinschtein, Tristan A; Rattazzi, Alexia; Beraudi, Ana; Tripicchio, Paula; Moyano, Beatriz; Soffita, Yamila; Steinberg, Laura; Adolfi, Federico; Sigman, Mariano; Marino, Julian; Manes, Facundo; Ibanez, Agustin

    2015-11-01

    Predictive coding has been proposed as a framework to understand neural processes in neuropsychiatric disorders. We used this approach to describe mechanisms responsible for attentional abnormalities in autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). We monitored brain dynamics of 59 children (8-15 yr old) who had ASD or ADHD or who were control participants via high-density electroencephalography. We performed analysis at the scalp and source-space levels while participants listened to standard and deviant tone sequences. Through task instructions, we manipulated top-down expectation by presenting expected and unexpected deviant sequences. Children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events. Moreover, neural abnormalities were associated with specific control mechanisms, namely, inhibitory control in ASD and set-shifting in ADHD. Based on the predictive coding account, top-down expectation abnormalities could be attributed to a disproportionate reliance (precision) allocated to prior beliefs in ASD and to sensory input in ADHD.

  4. Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms

    Science.gov (United States)

    1990-09-01

    Technical Report 911: Computer Simulation Modeling: A Method for Predicting the Utilities of Alternative Computer-Aided Threat Evaluation Algorithms.

  5. High-frequency combination coding-based steady-state visual evoked potential for brain computer interface

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Feng; Zhang, Xin; Xie, Jun; Li, Yeping; Han, Chengcheng; Lili, Li; Wang, Jing [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Xu, Guang-Hua [School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710054 (China)

    2015-03-10

    This study presents a new steady-state visual evoked potential (SSVEP) paradigm for brain-computer interface (BCI) systems. The goal of this study is to increase the number of targets using fewer high stimulation frequencies, while diminishing subject fatigue and reducing the risk of photosensitive epileptic seizures. The new paradigm is High-Frequency Combination Coding-based SSVEP (HFCC-SSVEP). Firstly, we studied the high-frequency (beyond 25 Hz) SSVEP response, with the stimulation presented on LEDs; the SNR (signal-to-noise ratio) of the response beyond 40 Hz is very low and cannot be distinguished with traditional analysis methods. Secondly, we investigated the HFCC-SSVEP response for 3 frequencies (25 Hz, 33.33 Hz, and 40 Hz); HFCC-SSVEP produces n{sup n} targets from n high stimulation frequencies through frequency combination coding. Further, an improved Hilbert-Huang transform (IHHT)-based variable-frequency EEG feature extraction method and a local spectrum extremum target identification algorithm are adopted to extract time-frequency features of the proposed HFCC-SSVEP response. Linear prediction and fixed sifting (10 iterations) are used to overcome the end effect and the stopping-criterion problem, and generalized zero-crossing (GZC) is used to compute the instantaneous frequency of the SSVEP response signals. The improved HHT-based feature extraction method for the proposed SSVEP paradigm increases recognition efficiency, improving the information transfer rate (ITR) and the stability of the BCI system. Moreover, SSVEPs evoked by high-frequency stimuli (beyond 25 Hz) minimally fatigue the subject and help prevent safety hazards linked to photo-induced epileptic seizures, so that the system is both efficient and safe. This study tests three subjects in order to verify the feasibility of the proposed method.
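
    The combinatorial payoff of frequency combination coding is easy to check (a sketch of the target-count arithmetic only; stimulus timing, EEG acquisition and recognition are not modeled):

        from itertools import product

        freqs = [25.0, 33.33, 40.0]      # the three stimulation frequencies [Hz]
        code_len = len(freqs)            # each target flickers through a length-n sequence
        targets = list(product(freqs, repeat=code_len))
        print(len(targets))              # n**n = 3**3 = 27 distinct targets
        print(targets[:3])               # first few frequency sequences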

  6. Computational neurorehabilitation: modeling plasticity and learning to predict recovery.

    Science.gov (United States)

    Reinkensmeyer, David J; Burdet, Etienne; Casadio, Maura; Krakauer, John W; Kwakkel, Gert; Lang, Catherine E; Swinnen, Stephan P; Ward, Nick S; Schweighofer, Nicolas

    2016-01-01

    Despite progress in using computational approaches to inform medicine and neuroscience in the last 30 years, there have been few attempts to model the mechanisms underlying sensorimotor rehabilitation. We argue that a fundamental understanding of neurologic recovery, and as a result accurate predictions at the individual level, will be facilitated by developing computational models of the salient neural processes, including plasticity and learning systems of the brain, and integrating them into a context specific to rehabilitation. Here, we therefore discuss Computational Neurorehabilitation, a newly emerging field aimed at modeling plasticity and motor learning to understand and improve movement recovery of individuals with neurologic impairment. We first explain how the emergence of robotics and wearable sensors for rehabilitation is providing data that make development and testing of such models increasingly feasible. We then review key aspects of plasticity and motor learning that such models will incorporate. We proceed by discussing how computational neurorehabilitation models relate to the current benchmark in rehabilitation modeling - regression-based, prognostic modeling. We then critically discuss the first computational neurorehabilitation models, which have primarily focused on modeling rehabilitation of the upper extremity after stroke, and show how even simple models have produced novel ideas for future investigation. Finally, we conclude with key directions for future research, anticipating that soon we will see the emergence of mechanistic models of motor recovery that are informed by clinical imaging results and driven by the actual movement content of rehabilitation therapy as well as wearable sensor-based records of daily activity.

  7. Expanding Capacity and Promoting Inclusion in Introductory Computer Science: A Focus on Near-Peer Mentor Preparation and Code Review

    Science.gov (United States)

    Pon-Barry, Heather; Packard, Becky Wai-Ling; St. John, Audrey

    2017-01-01

    A dilemma within computer science departments is developing sustainable ways to expand capacity within introductory computer science courses while remaining committed to inclusive practices. Training near-peer mentors for peer code review is one solution. This paper describes the preparation of near-peer mentors for their role, with a focus on…

  8. Application of advanced computational codes in the design of an experiment for a supersonic throughflow fan rotor

    Science.gov (United States)

    Wood, Jerry R.; Schmidt, James F.; Steinke, Ronald J.; Chima, Rodrick V.; Kunik, William G.

    1987-01-01

    Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

  9. SIEX: a correlated code for the prediction of Liquid Metal Fast Breeder Reactor (LMFBR) fuel thermal performance

    Energy Technology Data Exchange (ETDEWEB)

    Dutt, D.S.; Baker, R.B.

    1974-01-01

    The SIEX computer program is a steady-state heat transfer code developed to provide thermal performance calculations for a mixed-oxide fuel element in a fast neutron environment. Fuel restructuring, fuel-cladding heat conduction and fission gas release are modeled to provide an assessment of the temperatures. Modeling emphasis has been placed on correlations to measurable quantities from EBR-II irradiation tests and the inclusion of these correlations in a physically based computational scheme. SIEX is completely modular in construction, allowing the user options for material properties and correlated models. Required code input is limited to geometric and environmental parameters, with a consistent set of material properties and correlated models provided by the code. The development of physically based correlations to model certain of the phenomena has resulted in a computer program which provides reliable estimates of thermal performance characteristics, yet requires a small amount of core storage and computer running time.

  10. FRAPCON-2: A Computer Code for the Calculation of Steady State Thermal-Mechanical Behavior of Oxide Fuel Rods

    Energy Technology Data Exchange (ETDEWEB)

    Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.

    1981-01-01

    FRAPCON-2 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.

  11. Computational Approaches Reveal New Insights into Regulation and Function of Non; coding RNAs and their Targets

    KAUST Repository

    Alam, Tanvir

    2016-11-28

    Regulation and function of protein-coding genes are increasingly well understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine-learning model to distinguish promoters of long ncRNA (lncRNA) genes from those of protein-coding genes. This represents the first attempt to make this distinction based on properties of the associated gene promoters. From our analyses, several transcription factors (TFs), which are known to be regulated by lncRNAs, also emerged as potential global regulators of lncRNAs, suggesting that lncRNAs and TFs may participate in a bidirectional feedback regulatory network. Our results also raise the possibility that, due to the historical dependence on protein-coding genes in defining the chromatin states of active promoters, an adjustment of these chromatin signature profiles to incorporate lncRNAs is warranted in the future. Secondly, we developed a novel method to infer functions for lncRNA and microRNA (miRNA) transcripts based on their transcriptional regulatory networks in 119 tissues and 177 primary cells of human. This method for the first time combines information on the cell/tissue-specific expression of a transcript with the TFs and transcription co-factors (TcoFs) that control activation of that transcript. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues, and an associated knowledgebase (FARNA) was developed. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of cells/tissues, has the potential to contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. Thirdly, we developed a novel machine-learning model to identify the LD motif (a protein interaction motif) of paxillin, a ncRNA target that is involved in cell motility and cancer metastasis. Our recognition model identified new proteins not

  12. Computational protein biomarker prediction: a case study for prostate cancer

    Directory of Open Access Journals (Sweden)

    Adam Bao-Ling

    2004-03-01

    Background: Recent technological advances in mass spectrometry pose challenges in computational mathematics and statistics to process the mass spectral data into predictive models with clinical and biological significance. We discuss several classification-based approaches to finding protein biomarker candidates using protein profiles obtained via mass spectrometry, and we assess their statistical significance. Our overall goal is to implicate peaks that have a high likelihood of being biologically linked to a given disease state, and thus to narrow the search for biomarker candidates. Results: Thorough cross-validation studies and randomization tests are performed on a prostate cancer dataset with over 300 patients, obtained at the Eastern Virginia Medical School using SELDI-TOF mass spectrometry. We obtain average classification accuracies of 87% on a four-group classification problem using a two-stage linear SVM-based procedure and just 13 peaks, with other methods performing comparably. Conclusions: Modern feature selection and classification methods are powerful techniques for both the identification of biomarker candidates and the related problem of building predictive models from protein mass spectrometric profiles. Cross-validation and randomization are essential tools that must be performed carefully in order not to bias the results unfairly. However, only a biological validation and identification of the underlying proteins will ultimately confirm the actual value and power of any computational predictions.
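
    The cross-validated SVM-plus-peak-selection workflow can be sketched with scikit-learn (synthetic stand-in data; the study's actual two-stage procedure, preprocessing and peak definitions are not reproduced here):

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n, p = 300, 200                       # patients x spectral peaks (synthetic)
        X = rng.standard_normal((n, p))
        y = rng.integers(0, 4, n)             # four diagnostic groups
        X[y == 3, :13] += 1.0                 # plant signal in 13 "peaks"

        clf = make_pipeline(StandardScaler(), LinearSVC(C=0.1, max_iter=5000))
        scores = cross_val_score(clf, X, y, cv=10)   # thorough cross-validation
        print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

        # Rank peaks by absolute SVM weight to shortlist biomarker candidates.
        clf.fit(X, y)
        w = np.abs(clf.named_steps["linearsvc"].coef_).sum(axis=0)
        print("top peaks:", np.argsort(w)[::-1][:13])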

  13. Predictive combinations of monitor alarms preceding in-hospital code blue events.

    Science.gov (United States)

    Hu, Xiao; Sapo, Monica; Nenov, Val; Barry, Tod; Kim, Sunghan; Do, Duc H; Boyle, Noel; Martin, Neil

    2012-10-01

    Bedside monitors are ubiquitous in acute care units of modern healthcare enterprises. However, they have been criticized for generating an excessive number of false positive alarms, causing alarm fatigue among caregivers and potentially compromising patient safety. We hypothesize that combinations of regular monitor alarms, denoted the SuperAlarm set, may be more indicative of ongoing patient deterioration and hence predictive of in-hospital code blue events. The present work develops and assesses an alarm mining approach based on finding frequent combinations of single alarms that are also specific to code blue events to compose a SuperAlarm set. We use 4-way analysis of variance (ANOVA) to investigate the influence of four algorithm parameters on the performance of the data mining approach. The results are obtained from millions of monitor alarms from a cohort of 223 adult code blue and 1768 control patients using a multiple 10-fold cross-validation experiment setup. Using the optimal parameter setting determined in the cross-validation experiment, final SuperAlarm sets are mined from the training data and used on an independent test data set to simulate running a SuperAlarm set against live regular monitor alarms. The ANOVA shows that the content of a SuperAlarm set is influenced by a subset of key algorithm parameters. Simulation of the extracted SuperAlarm set shows that it can predict code blue events one hour ahead with sensitivity between 66.7% and 90.9%, while producing false SuperAlarms for control patients that account for between 2.2% and 11.2% of regular monitor alarms, depending on the user-supplied acceptable false positive rate. We conclude that even though the present work is still preliminary, due to the use of a moderately sized database to test our hypothesis, it represents an effort to develop algorithms that alleviate the alarm fatigue issue in a unique way.
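
    A toy version of the combination-mining step (simple support counting over alarm windows; all alarm codes and windows are invented, and the paper's actual extraction additionally scores specificity to code blue patients and timing):

        from collections import Counter
        from itertools import combinations

        # Each set: distinct alarm codes seen within one time window for one
        # patient (synthetic stand-ins for bedside monitor alarm codes).
        windows = [
            {"HR_HIGH", "SPO2_LOW", "RR_HIGH"},
            {"HR_HIGH", "SPO2_LOW"},
            {"HR_HIGH", "SPO2_LOW", "ART_LOW"},
            {"RR_HIGH", "ART_LOW"},
        ]

        min_support = 0.5
        counts = Counter()
        for w in windows:
            for k in (2, 3):                          # pairs and triples of alarms
                counts.update(combinations(sorted(w), k))

        n = len(windows)
        superalarms = {c: m / n for c, m in counts.items() if m / n >= min_support}
        print(superalarms)   # {('HR_HIGH', 'SPO2_LOW'): 0.75}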

  14. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation: Control modules C4, C6

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U. S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This volume is part of the manual related to the control modules for the newest updated version of this computational package.

  15. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For the study, the behavior mechanism of corrosion products in the primary system of Kori unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed based on domestic plant data, with the CRUDTRAN code used for the prediction. The results of the prediction of the radioactive nuclide inventory in the primary system performed in this study are expected to serve as baseline data for estimating the volume of radioactive wastes when decommissioning a nuclear plant, an important criterion in classifying the level of radioactive wastes used to compute their quantity. The data are also expected to be utilized in reducing the radiation exposure of workers performing maintenance and repairs in high radiation areas and in selecting decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct the source term assessment for other NPP types, such as CANDU and OPR-1000, in addition to Westinghouse-type nuclear plants.

  16. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    Science.gov (United States)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial confinement energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability.
    Program summary:
      Program title: MULTI2D
      Catalogue identifier: AECV_v1_0
      Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html
      Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
      Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
      No. of lines in distributed program, including test data, etc.: 151 098
      No. of bytes in distributed program, including test data, etc.: 889 622
      Distribution format: tar.gz
      Programming language: C
      Computer: PC (32-bit architecture)
      Operating system: Linux/Unix
      RAM: 2 Mbytes
      Word size: 32 bits
      Classification: 19.7
      External routines: X-window standard library (libX11.so) and corresponding heading files (X11/*.h) are

  17. Predicting AD conversion: comparison between prodromal AD guidelines and computer assisted PredictAD tool.

    Directory of Open Access Journals (Sweden)

    Yawu Liu

    To compare the accuracies of predicting AD conversion by using a decision support system (the PredictAD tool) and current research criteria of prodromal AD, as identified by combinations of episodic memory impairment of hippocampal type and visual assessment of medial temporal lobe atrophy (MTA) on MRI and CSF biomarkers. Altogether 391 MCI cases (158 AD converters) were selected from the ADNI cohort. All the cases had baseline cognitive tests, MRI and/or CSF levels of Aβ1-42 and Tau. Using baseline data, the status of MCI patients (AD or MCI) three years later was predicted using current diagnostic research guidelines and the PredictAD software tool designed for supporting clinical diagnostics. The data used were (1) clinical criteria for episodic memory loss of the hippocampal type, (2) visual MTA, (3) positive CSF markers, (4) their combinations, and (5), when the PredictAD tool was applied, automatically computed MRI measures instead of the visual MTA results. The accuracies of diagnosis were evaluated against the diagnosis made 3 years later. The PredictAD tool achieved an overall accuracy of 72% (sensitivity 73%, specificity 71%) in predicting the AD diagnosis. The corresponding number for a clinician's prediction with the assistance of the PredictAD tool was 71% (sensitivity 75%, specificity 68%). Diagnosis with the PredictAD tool was significantly better than diagnosis by biomarkers alone or by combinations of the clinical diagnosis of hippocampal-pattern memory loss and biomarkers (p≤0.037). With the assistance of the PredictAD tool, the clinician can predict AD conversion more accurately than with the current diagnostic criteria.

  18. Interface design of VSOP'94 computer code for safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Natsir, Khairina, E-mail: yenny@batan.go.id; Andiwijayakusuma, D.; Wahanani, Nursinta Adi [Center for Development of Nuclear Informatics - National Nuclear Energy Agency, PUSPIPTEK, Serpong, Tangerang, Banten (Indonesia); Yazid, Putranto Ilham [Center for Nuclear Technology, Material and Radiometry- National Nuclear Energy Agency, Jl. Tamansari No.71, Bandung 40132 (Indonesia)

    2014-09-30

    Today, most software applications, also in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulation. VSOP is an integrated code system to simulate the life history of a nuclear reactor, devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum estimation, fuel cycle, 2-D diffusion, resonance integrals, estimates of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and simulation of reactor safety. However, the existing VSOP is a conventional program, developed using Fortran 65, and has several problems in use: for example, it operates only on DEC Alpha mainframe platforms, provides text-based output, and is difficult to use, especially in data preparation and interpretation of results. We developed GUI-VSOP, an interface program to facilitate the preparation of data, run the VSOP code, and read the results in a more user-friendly way, usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing and postprocessing. The GUI-based interface for preprocessing aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.

  19. A computational model of cellular mechanisms of temporal coding in the medial geniculate body (MGB).

    Directory of Open Access Journals (Sweden)

    Cal F Rabang

    Acoustic stimuli are often represented in the early auditory pathway as patterns of neural activity synchronized to time-varying features. This phase-locking predominates until the level of the medial geniculate body (MGB), where previous studies have identified two main, largely segregated response types: stimulus-synchronized responses, which faithfully preserve the temporal coding from their afferent inputs, and non-synchronized responses, which are not phase-locked to the inputs and represent changes in temporal modulation by a rate code. The cellular mechanisms underlying this transformation from phase-locked to rate code are not well understood. We use a computational model of an MGB thalamocortical neuron to test the hypothesis that these response classes arise from inferior colliculus (IC) excitatory afferents with divergent properties similar to those observed in brain slice studies. Large-conductance inputs exhibiting synaptic depression preserved input synchrony for interclick intervals as short as 12.5 ms, while maintaining low firing rates and low-pass filtering responses. By contrast, small-conductance inputs with mixed plasticity (depression of the AMPA-receptor component and facilitation of the NMDA-receptor component) desynchronized afferent inputs, generated a click-rate-dependent increase in firing rate, and high-pass filtered the inputs. Synaptic inputs with facilitation often permitted band-pass synchrony along with band-pass rate tuning. These responses could be tuned by changes in membrane potential, the strength of the NMDA component, and the characteristics of synaptic plasticity. These results demonstrate how the same synchronized input spike trains from the inferior colliculus can be transformed into different representations of temporal modulation by divergent synaptic properties.
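
    The depressing-synapse mechanism invoked above is commonly captured by resource-depletion models of the Tsodyks-Markram type; a minimal per-spike sketch (parameter values are assumptions for illustration, not the thalamocortical model's fitted values):

        import numpy as np

        def depressing_epsc(spike_times, U=0.5, tau_rec=0.2, g_max=1.0):
            """Per-spike synaptic efficacy for a depressing synapse: each spike
            consumes fraction U of the available resources R, which recover
            toward 1 with time constant tau_rec (seconds)."""
            R, t_prev, amps = 1.0, None, []
            for t in spike_times:
                if t_prev is not None:
                    R = 1.0 - (1.0 - R) * np.exp(-(t - t_prev) / tau_rec)
                amps.append(g_max * U * R)   # response amplitude for this spike
                R -= U * R                   # resources consumed by the spike
                t_prev = t
            return amps

        # 12.5 ms interclick interval (an 80 Hz click train), as in the abstract.
        train = np.arange(0.0, 0.1, 0.0125)
        print([round(a, 3) for a in depressing_epsc(train)])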

  20. The DoD's High Performance Computing Modernization Program - Ensuring the National Earth System Prediction Capability Becomes Operational

    Science.gov (United States)

    Burnett, W.

    2016-12-01

    The Department of Defense's (DoD) High Performance Computing Modernization Program (HPCMP) provides high performance computing to address the most significant challenges in computational resources, software application support, and nationwide research and engineering networks. Today, the HPCMP has a critical role in ensuring the National Earth System Prediction Capability (N-ESPC) achieves initial operational status in 2019. A 2015 study commissioned by the HPCMP found that N-ESPC computational requirements will exceed interconnect bandwidth capacity due to the additional load from data assimilation and from passing data between the coupled ensemble codes. Memory bandwidth and I/O bandwidth will continue to be significant bottlenecks for the scalability of the Navy's Hybrid Coordinate Ocean Model (HYCOM) - by far the major driver of computing resource requirements in the N-ESPC. The study also found that few of the N-ESPC model developers have detailed plans to ensure their respective codes scale through 2024. Three HPCMP initiatives are designed to directly address and support these issues: Productivity Enhancement, Technology Transfer and Training (PETTT); the HPCMP Applications Software Initiative (HASI); and Frontier Projects. PETTT supports code conversion by providing assistance, expertise and training in scalable and high-end computing architectures. HASI addresses the continuing need for modern application software that executes effectively and efficiently on next-generation high-performance computers. Frontier Projects enable research and development that could not be achieved using typical HPCMP resources by providing multi-disciplinary teams with access to exceptional amounts of high performance computing resources. Finally, the Navy's DoD Supercomputing Resource Center (DSRC) currently operates a 6 Petabyte system, of which Naval Oceanography receives 15% of operational computational system use, or approximately 1 Petabyte of the processing capability. The DSRC will