WorldWideScience

Sample records for single-precision computer simulations

  1. Reliable low precision simulations in land surface models

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
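
    The failure mode described here, tiny time increments rounding to zero when added to a much larger state variable, is easy to reproduce. Below is a minimal sketch (illustrative numbers, not the paper's soil-diffusion model) of the splitting idea: the slowly accumulating state is kept in higher precision while the increments themselves stay in single precision.

    ```python
    import numpy as np

    T0 = np.float32(285.0)   # soil temperature (K); assumed value
    dT = np.float32(1e-5)    # per-step increment, below float32 resolution at 285

    # Pure single precision: each addition rounds straight back to 285.0
    t32 = T0
    for _ in range(100_000):
        t32 = np.float32(t32 + dT)

    # Split scheme: increments computed in single, accumulated in double
    t64 = np.float64(T0)
    for _ in range(100_000):
        t64 += np.float64(dT)

    print(t32)   # 285.0  -- the slow warming trend is lost
    print(t64)   # ~286.0 -- the trend is preserved
    ```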

  2. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
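
    To make the need concrete: the sketch below (using Python's mpmath package as a stand-in for the high-precision software the survey discusses) evaluates (1 - cos x)/x^2 at x = 1e-8, where double precision cancels to zero digits while the true value is near 0.5.

    ```python
    import math
    from mpmath import mp, mpf, cos

    x = 1e-8
    # IEEE double: cos(1e-8) rounds to exactly 1.0, so the numerator cancels
    print((1 - math.cos(x)) / x**2)    # 0.0 -- no correct digits

    # 50-digit arithmetic recovers the true value, 0.5 - x**2/24 + ...
    mp.dps = 50
    xm = mpf("1e-8")
    print((1 - cos(xm)) / xm**2)       # 0.49999999999999999999999999999999996...
    ```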

  3. OSL sensitivity changes during single aliquot procedures: Computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Agersnap Larsen, N.; Bøtter-Jensen, L.

    1997-01-01

    We present computer simulations of sensitivity changes obtained during single aliquot, regeneration procedures. The simulations indicate that the sensitivity changes are the combined result of shallow trap and deep trap effects. Four separate processes have been identified. Although procedures can be suggested to eliminate the shallow trap effects, it appears that the deep trap effects cannot be removed. The character of the sensitivity changes which result from these effects is seen to be dependent upon several external parameters, including the extent of bleaching of the OSL signal, the laboratory...

  4. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  5. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  6. A precise goniometer/tensiometer using a low cost single-board computer

    Science.gov (United States)

    Favier, Benoit; Chamakos, Nikolaos T.; Papathanasiou, Athanasios G.

    2017-12-01

    Measuring the surface tension and the Young contact angle of a droplet is extremely important for many industrial applications. Here, considering the booming interest in small and cheap but precise experimental instruments, we have constructed a low-cost contact angle goniometer/tensiometer based on a single-board computer (Raspberry Pi). The device runs an axisymmetric drop shape analysis (ADSA) algorithm written in Python. The code, here named DropToolKit, was developed in-house. We initially present the mathematical framework of our algorithm and then validate our software tool against other well-established ADSA packages, including the commercial ramé-hart DROPimage Advanced as well as the DropAnalysis plugin in ImageJ. After successfully testing various combinations of liquids and solid surfaces, we concluded that, compared to the commercial solutions, our prototype device would be highly beneficial for industrial applications as well as for scientific research in wetting phenomena.

  7. FPGA-accelerated simulation of computer systems

    CERN Document Server

    Angepat, Hari; Chung, Eric S; Hoe, James C; Chung, Eric S

    2014-01-01

    To date, the most common simulators of computer systems are software-based, running on standard computers. One promising approach to improve simulation performance is to apply hardware, specifically reconfigurable hardware in the form of field programmable gate arrays (FPGAs). This manuscript describes various approaches of using FPGAs to accelerate software-implemented simulation of computer systems and selected simulators that incorporate those techniques. More precisely, we describe a simulation architecture taxonomy that incorporates a simulation architecture specifically designed f

  8. rpe v5: an emulator for reduced floating-point precision in large numerical simulations

    Science.gov (United States)

    Dawson, Andrew; Düben, Peter D.

    2017-06-01

    This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision. The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
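
    The core emulation idea can be sketched in a few lines (a simplification in Python, not the rpe library's actual Fortran API): round each double-precision result to a chosen number of significand bits.

    ```python
    import numpy as np

    def reduce_precision(x, bits):
        """Round x to `bits` explicit significand bits (float64 carries 52)."""
        m, e = np.frexp(x)                  # x = m * 2**e with 0.5 <= |m| < 1
        m = np.round(m * 2.0**bits) / 2.0**bits
        return np.ldexp(m, e)

    print(reduce_precision(np.pi, 10))      # 3.140625      (~3 decimal digits)
    print(reduce_precision(np.pi, 23))      # 3.14159274... (~IEEE single)
    ```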

  9. Exploring Biomolecular Interactions Through Single-Molecule Force Spectroscopy and Computational Simulation

    OpenAIRE

    Yang, Darren

    2016-01-01

    Molecular interactions between cellular components such as proteins and nucleic acids govern the fundamental processes of living systems. Technological advancements in the past decade have allowed the characterization of these molecular interactions at the single-molecule level with high temporal and spatial resolution. Simultaneously, progress in computer simulation has enabled theoretical research at the atomistic level, assisting in the interpretation of experimental results. This thesis...

  10. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    Science.gov (United States)

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.

  11. Atomistic computer simulations a practical guide

    CERN Document Server

    Brazdova, Veronika

    2013-01-01

    Many books explain the theory of atomistic computer simulations; this book teaches you how to run them. This introductory "how to" title enables readers to understand, plan, run, and analyze their own independent atomistic simulations, and decide which method to use and which questions to ask in their research project. It is written in a clear and precise language, focusing on a thorough understanding of the concepts behind the equations and how these are used in the simulations. As a result, readers will learn how to design the computational model and which parameters o

  12. Linking computer-aided design (CAD) to Geant4-based Monte Carlo simulations for precise implementation of complex treatment head geometries

    International Nuclear Information System (INIS)

    Constantin, Magdalena; Constantin, Dragos E; Keall, Paul J; Narula, Anisha; Svatos, Michelle; Perl, Joseph

    2010-01-01

    Most of the treatment head components of medical linear accelerators used in radiation therapy have complex geometrical shapes. They are typically designed using computer-aided design (CAD) applications. In Monte Carlo simulations of radiotherapy beam transport through the treatment head components, the relevant beam-generating and beam-modifying devices are inserted in the simulation toolkit using geometrical approximations of these components. Depending on their complexity, such approximations may introduce errors that can be propagated throughout the simulation. This drawback can be minimized by exporting a more precise geometry of the linac components from CAD and importing it into the Monte Carlo simulation environment. We present a technique that links three-dimensional CAD drawings of the treatment head components to Geant4 Monte Carlo simulations of dose deposition. (note)

  13. Improving Precision and Reducing Runtime of Microscopic Traffic Simulators through Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Khewal Bhupendra Kesur

    2013-01-01

    This paper examines the application of Latin Hypercube Sampling (LHS) and Antithetic Variates (AVs) to reduce the variance of estimated performance measures from microscopic traffic simulators. LHS and AV allow for a more representative coverage of input probability distributions through stratification, reducing the standard error of simulation outputs. Two methods of implementation are examined: one where stratification is applied to headways and routing decisions of individual vehicles, and another where vehicle counts and entry times are more evenly sampled. The proposed methods have wider applicability in general queuing systems. LHS is found to outperform AV, and reductions of up to 71% in the standard error of estimates of traffic network performance relative to independent sampling are obtained. LHS allows for a reduction in the execution time of computationally expensive microscopic traffic simulators as fewer simulations are required to achieve a fixed level of precision, with reductions of up to 84% in computing time noted on the test cases considered. The benefits of LHS are amplified for more congested networks and as the required level of precision increases.
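
    A minimal sketch of the two variance-reduction devices on a toy monotone output (standing in for a traffic-network performance measure; the numbers are illustrative, not the paper's):

    ```python
    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(1)
    f = lambda u: np.exp(u)       # toy monotone simulation output

    n, reps = 10_000, 200
    est_ind = [f(rng.random(n)).mean() for _ in range(reps)]

    est_av = []                   # antithetic variates: pair u with 1 - u
    for _ in range(reps):
        u = rng.random(n // 2)
        est_av.append(0.5 * (f(u).mean() + f(1 - u).mean()))

    est_lhs = [f(qmc.LatinHypercube(d=1, seed=s).random(n).ravel()).mean()
               for s in range(reps)]

    for name, e in (("independent", est_ind), ("antithetic", est_av),
                    ("LHS", est_lhs)):
        print(name, np.std(e))    # standard error shrinks for AV and LHS
    ```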

  14. Computer-controlled detection system for high-precision isotope ratio measurements

    International Nuclear Information System (INIS)

    McCord, B.R.; Taylor, J.W.

    1986-01-01

    In this paper, the authors describe a detection system for high-precision isotope ratio measurements. In this new system, the requirement for a ratioing digital voltmeter has been eliminated, and a standard digital voltmeter interfaced to a computer is employed. Instead of measuring the ratio of the two steadily increasing output voltages simultaneously, the digital voltmeter alternately samples the outputs at a precise rate over a certain period of time. The data are sent to the computer, which calculates the rate of charge of each amplifier and divides the two rates to obtain the isotopic ratio. These results simulate a coincident measurement of the output of both integrators. The charge rate is calculated by using a linear regression method, and the standard error of the slope gives a measure of the stability of the system at the time the measurement was taken.
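
    A toy version of the described slope-ratio computation (synthetic integrator ramps with assumed noise, not the instrument's actual data):

    ```python
    import numpy as np

    t = np.arange(0.0, 10.0, 0.1)                  # sampling times (s)
    rng = np.random.default_rng(0)
    v_a = 2.00 * t + rng.normal(0, 0.01, t.size)   # integrator A output (V)
    v_b = 1.00 * t + rng.normal(0, 0.01, t.size)   # integrator B output (V)

    slope_a = np.polyfit(t, v_a, 1)[0]             # charge rate of A
    slope_b = np.polyfit(t, v_b, 1)[0]             # charge rate of B
    print(slope_a / slope_b)                       # isotopic ratio, ~2.0
    ```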

  15. Binary Factorization in Hopfield-Like Neural Networks: Single-Step Approximation and Computer Simulations

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Sirota, A.M.; Húsek, Dušan; Muraviev, I. P.

    2004-01-01

    Roč. 14, č. 2 (2004), s. 139-152 ISSN 1210-0552 R&D Projects: GA ČR GA201/01/1192 Grant - others:BARRANDE(EU) 99010-2/99053; Intellectual computer Systems(EU) Grant 2.45 Institutional research plan: CEZ:AV0Z1030915 Keywords : nonlinear binary factor analysis * feature extraction * recurrent neural network * Single-Step approximation * neurodynamics simulation * attraction basins * Hebbian learning * unsupervised learning * neuroscience * brain function modeling Subject RIV: BA - General Mathematics

  16. Agreement and precision of periprosthetic bone density measurements in micro-CT, single and dual energy CT.

    Science.gov (United States)

    Mussmann, Bo; Overgaard, Søren; Torfing, Trine; Traise, Peter; Gerke, Oke; Andersen, Poul Erik

    2017-07-01

    The objective of this study was to test the precision and agreement between bone mineral density measurements performed in micro CT, single and dual energy computed tomography, to determine how the keV level influences density measurements and to assess the usefulness of quantitative dual energy computed tomography as a research tool for longitudinal studies aiming to measure bone loss adjacent to total hip replacements. Samples from 10 fresh-frozen porcine femoral heads were placed in a Perspex phantom and computed tomography was performed with two acquisition modes. Bone mineral density was calculated and compared with measurements derived from micro CT. Repeated scans and dual measurements were performed in order to measure between- and within-scan precision. Mean density difference between micro CT and single energy computed tomography was 72 mg HA/cm³. For dual energy CT, the mean difference at 100 keV was 128 mg HA/cm³, while the mean difference at 110-140 keV ranged from -84 to -67 mg HA/cm³ compared with micro CT. Rescanning the samples resulted in a non-significant overall between-scan difference of 13 mg HA/cm³. Bland-Altman limits of agreement were wide and intraclass correlation coefficients ranged from 0.29 to 0.72, while 95% confidence intervals covered almost the full possible range. Repeating the density measurements for within-scan precision resulted in ICCs >0.99 and narrow limits of agreement. Single and dual energy quantitative CT showed excellent within-scan precision, but poor between-scan precision. No significant density differences were found in dual energy quantitative CT at keV levels above 110 keV. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:1470-1477, 2017.

  17. Exponentially more precise quantum simulation of fermions in the configuration interaction representation

    Science.gov (United States)

    Babbush, Ryan; Berry, Dominic W.; Sanders, Yuval R.; Kivlichan, Ian D.; Scherer, Artur; Wei, Annie Y.; Love, Peter J.; Aspuru-Guzik, Alán

    2018-01-01

    We present a quantum algorithm for the simulation of molecular systems that is asymptotically more efficient than all previous algorithms in the literature in terms of the main problem parameters. As in Babbush et al (2016 New Journal of Physics 18, 033032), we employ a recently developed technique for simulating Hamiltonian evolution using a truncated Taylor series to obtain logarithmic scaling with the inverse of the desired precision. The algorithm of this paper involves simulation under an oracle for the sparse, first-quantized representation of the molecular Hamiltonian known as the configuration interaction (CI) matrix. We construct and query the CI matrix oracle to allow for on-the-fly computation of molecular integrals in a way that is exponentially more efficient than classical numerical methods. Whereas second-quantized representations of the wavefunction require $\widetilde{O}(N)$ qubits, where N is the number of single-particle spin-orbitals, the CI matrix representation requires $\widetilde{O}(\eta)$ qubits, where $\eta \ll N$ is the number of electrons in the molecule of interest. We show that the gate count of our algorithm scales at most as $\widetilde{O}(\eta^2 N^3 t)$.

  18. CUBESIM, Hypercube and Denelcor Hep Parallel Computer Simulation

    International Nuclear Information System (INIS)

    Dunigan, T.H.

    1988-01-01

    1 - Description of program or function: CUBESIM is a set of subroutine libraries and programs for the simulation of message-passing parallel computers and shared-memory parallel computers. Subroutines are supplied to simulate the Intel hypercube and the Denelcor HEP parallel computers. The system permits a user to develop and test parallel programs written in C or FORTRAN on a single processor. The user may alter such hypercube parameters as message startup times, packet size, and the computation-to-communication ratio. The simulation generates a trace file that can be used for debugging, performance analysis, or graphical display. 2 - Method of solution: The CUBESIM simulator is linked with the user's parallel application routines to run as a single UNIX process. The simulator library provides a small operating system to perform process and message management. 3 - Restrictions on the complexity of the problem: Up to 128 processors can be simulated with a virtual memory limit of 6 million bytes. Up to 1000 processes can be simulated.

  19. Precise detection of de novo single nucleotide variants in human genomes.

    Science.gov (United States)

    Gómez-Romero, Laura; Palacios-Flores, Kim; Reyes, José; García, Delfino; Boege, Margareta; Dávila, Guillermo; Flores, Margarita; Schatz, Michael C; Palacios, Rafael

    2018-05-07

    The precise determination of de novo genetic variants has enormous implications across different fields of biology and medicine, particularly personalized medicine. Currently, de novo variations are identified by mapping sample reads from a parent-offspring trio to a reference genome, allowing for a certain degree of differences. While widely used, this approach often introduces false-positive (FP) results due to misaligned reads and mischaracterized sequencing errors. In a previous study, we developed an alternative approach to accurately identify single nucleotide variants (SNVs) using only perfect matches. However, this approach could be applied only to haploid regions of the genome and was computationally intensive. In this study, we present a unique approach, coverage-based single nucleotide variant identification (COBASI), which allows the exploration of the entire genome using second-generation short sequence reads without extensive computing requirements. COBASI identifies SNVs using changes in coverage of exactly matching unique substrings, and is particularly suited for pinpointing de novo SNVs. Unlike other approaches that require population frequencies across hundreds of samples to filter out any methodological biases, COBASI can be applied to detect de novo SNVs within isolated families. We demonstrate this capability through extensive simulation studies and by studying a parent-offspring trio we sequenced using short reads. Experimental validation of all 58 candidate de novo SNVs and a selection of non-de novo SNVs found in the trio confirmed zero FP calls. COBASI is available as open source at https://github.com/Laura-Gomez/COBASI for any researcher to use. Copyright © 2018 the Author(s). Published by PNAS.

  20. Simulation Study of Single Photon Emission Computed Tomography for Industrial Applications

    International Nuclear Information System (INIS)

    Roy, Tushar; Sarkar, P. S.; Sinha, Amar

    2008-01-01

    SPECT (Single Photon Emission Computed Tomography) provides for an invaluable non-invasive technique for the characterization and activity distribution of the gamma-emitting source. For many applications of radioisotopes for medical and industrial application, not only the positional information of the distribution of radioisotopes is needed but also its strength. The well-established X-ray radiography or transmission tomography techniques do not yield sufficient quantitative information about these objects. Emission tomography is one of the important methods for such characterization. Application of parallel beam, fan beam and 3D cone beam emission tomography methods have been discussed in this paper. Simulation studies to test these algorithms have been carried out to validate the technique.

  1. Computational fluid dynamics simulations of light water reactor flows

    International Nuclear Information System (INIS)

    Tzanos, C.P.; Weber, D.P.

    1999-01-01

    Advances in computational fluid dynamics (CFD), turbulence simulation, and parallel computing have made feasible the development of three-dimensional (3-D) single-phase and two-phase flow CFD codes that can simulate fluid flow and heat transfer in realistic reactor geometries with significantly reduced reliance, especially in single phase, on empirical correlations. The objective of this work was to assess the predictive power and computational efficiency of a CFD code in the analysis of a challenging single-phase light water reactor problem, as well as to identify areas where further improvements are needed

  2. Precision of EM Simulation Based Wireless Location Estimation in Multi-Sensor Capsule Endoscopy.

    Science.gov (United States)

    Khan, Umair; Ye, Yunxing; Aisha, Ain-Ul; Swar, Pranay; Pahlavan, Kaveh

    2018-01-01

    In this paper, we compute and examine two-way localization limits for an RF endoscopy pill as it passes through an individual's gastrointestinal (GI) tract. We obtain finite-difference time-domain and finite element method-based simulation results for position assessment employing time of arrival (TOA). By means of a 3-D human body representation from a full-wave simulation software and lognormal models for TOA propagation from implant organs to body surface, we calculate bounds on location estimators in three digestive organs: stomach, small intestine, and large intestine. We present an investigation of the causes influencing localization precision, consisting of a range of organ properties, peripheral sensor array arrangements, the number of pills in cooperation, and the random variations in transmit power of sensor nodes. We also perform a localization precision investigation for the situation where the transmission signal of the antenna is arbitrary with a known probability distribution. The computational solver outcome shows that the number of receiver antennas on the exterior of the body has a higher impact on the precision of the location than the number of capsules in collaboration within the GI region. The large intestine is influenced the most by the transmitter power probability distribution.

  3. Computational simulation of weld microstructure and distortion by considering process mechanics

    Science.gov (United States)

    Mochizuki, M.; Mikami, Y.; Okano, S.; Itoh, S.

    2009-05-01

    Highly precise fabrication of welded materials is in great demand, and so microstructure and distortion controls are essential. Furthermore, consideration of process mechanics is important for intelligent fabrication. In this study, the microstructure and hardness distribution in multi-pass weld metal are evaluated by computational simulations under the conditions of multiple heat cycles and phase transformation. Because conventional CCT diagrams of weld metal are not available even for single-pass weld metal, new diagrams for multi-pass weld metals are created. The weld microstructure and hardness distribution are precisely predicted when using the created CCT diagram for multi-pass weld metal and calculating the weld thermal cycle. Weld distortion is also investigated by using numerical simulation with a thermal elastic-plastic analysis. In conventional evaluations of weld distortion, the average heat input has been used as the dominant parameter; however, it is difficult to consider the effect of molten pool configurations on weld distortion based only on the heat input. Thus, the effect of welding process conditions on weld distortion is studied by considering molten pool configurations, determined by temperature distribution and history.

  4. Precise predictions of H2O line shapes over a wide pressure range using simulations corrected by a single measurement

    Science.gov (United States)

    Ngo, N. H.; Nguyen, H. T.; Tran, H.

    2018-03-01

    In this work, we show that precise predictions of the shapes of H2O rovibrational lines broadened by N2, over a wide pressure range, can be made using simulations corrected by a single measurement. For that, we use the partially-correlated speed-dependent Keilson-Storer (pcsdKS) model, whose parameters are deduced from molecular dynamics simulations and semi-classical calculations. This model takes into account collision-induced velocity-change effects, the speed dependences of the collisional line width and shift, as well as the correlation between velocity and internal-state changes. For each considered transition, the model is corrected by using a parameter deduced from its broadening coefficient measured for a single pressure. The corrected-pcsdKS model is then used to simulate spectra for a wide pressure range. Direct comparisons of the corrected-pcsdKS calculated and measured spectra of 5 rovibrational lines of H2O for various pressures, from 0.1 to 1.2 atm, show very good agreement. Their maximum differences are in most cases well below 1%, much smaller than the residuals obtained when fitting the measurements with the Voigt line shape. This shows that the present procedure can be used to predict H2O line shapes for various pressure conditions and thus the simulated spectra can be used to deduce refined line-shape parameters to complete spectroscopic databases, in the absence of relevant experimental values.

  5. GPU acceleration of Monte Carlo simulations for polarized photon scattering in anisotropic turbid media.

    Science.gov (United States)

    Li, Pengcheng; Liu, Celong; Li, Xianpeng; He, Honghui; Ma, Hui

    2016-09-20

    In earlier studies, we developed scattering models and the corresponding CPU-based Monte Carlo simulation programs to study the behavior of polarized photons as they propagate through complex biological tissues. Studying the simulation results over many degrees of freedom created a demand for massive simulation tasks. In this paper, we report a parallel implementation of the simulation program based on the compute unified device architecture running on a graphics processing unit (GPU). Different schemes for sphere-only simulations and sphere-cylinder mixture simulations were developed. Diverse optimizing methods were employed to achieve the best acceleration. The final-version GPU program is hundreds of times faster than the CPU version. Dependence of the performance on input parameters and precision was also studied. It is shown that using single precision in the GPU simulations results in very limited losses in accuracy. Consumer-level graphics cards, even those in laptop computers, are more cost-effective than scientific graphics cards for single-precision computation.
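
    The limited accuracy loss from single precision is easy to verify on a toy transport estimate; the sketch below (a generic unpolarized sampling step, not the authors' polarized GPU code) evaluates the same random draws in float32 and float64.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    u = rng.random(500_000)            # shared random numbers for both runs
    v = rng.random(500_000)

    def mean_depth(dtype, mu_t=10.0):
        # exponential free paths and isotropic direction cosines in `dtype`
        step = -np.log(u.astype(dtype)) / dtype(mu_t)
        cos_t = dtype(2) * v.astype(dtype) - dtype(1)
        return (step * cos_t).mean(dtype=np.float64)

    d32, d64 = mean_depth(np.float32), mean_depth(np.float64)
    print(d64, abs(d32 - d64))         # difference ~1e-8, far below MC noise
    ```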

  6. Assessing the relationship between computational speed and precision: a case study comparing an interpreted versus compiled programming language using a stochastic simulation model in diabetes care.

    Science.gov (United States)

    McEwan, Phil; Bergenheim, Klas; Yuan, Yong; Tetlow, Anthony P; Gordon, Jason P

    2010-01-01

    Simulation techniques are well suited to modelling diseases yet can be computationally intensive. This study explores the relationship between modelled effect size, statistical precision, and efficiency gains achieved using variance reduction and an executable programming language. A published simulation model designed to model a population with type 2 diabetes mellitus based on the UKPDS 68 outcomes equations was coded in both Visual Basic for Applications (VBA) and C++. Efficiency gains due to the programming language were evaluated, as was the impact of antithetic variates to reduce variance, using predicted QALYs over a 40-year time horizon. The use of C++ provided a 75- and 90-fold reduction in simulation run time when using mean and sampled input values, respectively. For a series of 50 one-way sensitivity analyses, this would yield a total run time of 2 minutes when using C++, compared with 155 minutes for VBA when using mean input values. The use of antithetic variates typically resulted in a 53% reduction in the number of simulation replications and run time required. When drawing all input values to the model from distributions, the use of C++ and variance reduction resulted in a 246-fold improvement in computation time compared with VBA - for which the evaluation of 50 scenarios would correspondingly require 3.8 hours (C++) and approximately 14.5 days (VBA). The choice of programming language used in an economic model, as well as the methods for improving precision of model output can have profound effects on computation time. When constructing complex models, more computationally efficient approaches such as C++ and variance reduction should be considered; concerns regarding model transparency using compiled languages are best addressed via thorough documentation and model validation.

  7. Development and simulation of microfluidic Wheatstone bridge for high-precision sensor

    International Nuclear Information System (INIS)

    Shipulya, N D; Konakov, S A; Krzhizhanovskaya, V V

    2016-01-01

    In this work we present the results of analytical modeling and 3D computer simulation of a microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the "etching in microchannel" technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems. (paper)
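
    In the hydraulic analogy the bridge behaves like its electrical counterpart: the bridge-channel flow vanishes when R1/R2 = R3/R4. A minimal sketch with assumed resistance values (not the paper's geometry):

    ```python
    import numpy as np

    def bridge_flow(P, R1, R2, R3, R4, Rb):
        """Flow through the bridge channel of a hydraulic Wheatstone bridge.

        Each channel obeys Q = dP / R (laminar, Hagen-Poiseuille analogy);
        the two mid-node pressures pA, pB follow from flow conservation."""
        g1, g2, g3, g4, gb = (1.0 / r for r in (R1, R2, R3, R4, Rb))
        A = np.array([[g1 + g2 + gb, -gb],
                      [-gb, g3 + g4 + gb]])
        b = np.array([g1 * P, g3 * P])
        pA, pB = np.linalg.solve(A, b)
        return gb * (pA - pB)

    print(bridge_flow(1.0, 1, 2, 1, 2, 5))    # balanced bridge: ~0
    print(bridge_flow(1.0, 1, 2, 1, 2.2, 5))  # unbalanced: nonzero flow
    ```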

  8. LDPC decoder with a limited-precision FPGA-based floating-point multiplication coprocessor

    Science.gov (United States)

    Moberly, Raymond; O'Sullivan, Michael; Waheed, Khurram

    2007-09-01

    Implementing the sum-product algorithm, in an FPGA with an embedded processor, invites us to consider a tradeoff between computational precision and computational speed. The algorithm, known outside of the signal processing community as Pearl's belief propagation, is used for iterative soft-decision decoding of LDPC codes. We determined the feasibility of a coprocessor that will perform product computations. Our FPGA-based coprocessor (design) performs computer algebra with significantly less precision than the standard (e.g. integer, floating-point) operations of general purpose processors. Using synthesis, targeting a 3,168 LUT Xilinx FPGA, we show that key components of a decoder are feasible and that the full single-precision decoder could be constructed using a larger part. Soft-decision decoding by the iterative belief propagation algorithm is impacted both positively and negatively by a reduction in the precision of the computation. Reducing precision reduces the coding gain, but the limited-precision computation can operate faster. A proposed solution offers custom logic to perform computations with less precision, yet uses the floating-point format to interface with the software. Simulation results show the achievable coding gain. Synthesis results help theorize the the full capacity and performance of an FPGA-based coprocessor.

  9. A Fast GPU-accelerated Mixed-precision Strategy for Fully NonlinearWater Wave Computations

    DEFF Research Database (Denmark)

    Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter; Madsen, Morten G.

    2011-01-01

    We present performance results of a mixed-precision strategy developed to improve a recently developed massively parallel GPU-accelerated tool for fast and scalable simulation of unsteady fully nonlinear free surface water waves over uneven depths (Engsig-Karup et al. 2011). The underlying wave model is solved with a ...-preconditioned defect correction method. The strategy improves performance by exploiting architectural features of modern GPUs for mixed-precision computations and is tested in a recently developed generic library for fast prototyping of PDE solvers. The new wave tool is applicable to solve and analyze...
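
    The general pattern behind such mixed-precision strategies is defect (residual) correction: do the expensive work in low precision and correct with residuals computed in high precision. A dense linear-algebra toy (not the authors' GPU wave solver):

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    b = rng.standard_normal(200)

    lu32 = lu_factor(A.astype(np.float32))   # cheap single-precision factorization
    x = np.zeros_like(b)
    for _ in range(10):
        r = b - A @ x                        # defect evaluated in double
        dx = lu_solve(lu32, r.astype(np.float32))
        x += dx.astype(np.float64)           # apply the correction in double

    print(np.linalg.norm(b - A @ x))         # near double-precision accuracy
    ```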

  10. Computer-assisted preoperative simulation for positioning of plate fixation in Lefort I osteotomy: A case report

    Directory of Open Access Journals (Sweden)

    Hideyuki Suenaga

    2016-06-01

    Computed tomography images are used for three-dimensional planning in orthognathic surgery. This facilitates the actual surgery by simulating the surgical scenario. We performed a computer-assisted virtual orthognathic surgical procedure using optically scanned three-dimensional (3D) data and real computed tomography data on a personal computer. It assisted in the movement and positioning of the maxillary bone and in the temporary fixation and positioning of the titanium plate. Simulating the surgical procedure in this way made the actual procedure easy: we could perform precise surgery and forecast the postsurgical outcome. This simulation method promises great potential in orthognathic surgery to help surgeons plan and perform operative procedures more precisely.

  11. Development of 3-axis precise positioning seismic physical modeling system in the simulation of marine seismic exploration

    Science.gov (United States)

    Kim, D.; Shin, S.; Ha, J.; Lee, D.; Lim, Y.; Chung, W.

    2017-12-01

    Seismic physical modeling is a laboratory-scale experiment that deals with the actual physical phenomena that may occur in the field. In seismic physical modeling, field conditions are downscaled and used. For this reason, even a small error may lead to a big error in an actual field. Accordingly, the positions of the source and the receiver must be precisely controlled in scale modeling. In this study, we have developed a seismic physical modeling system capable of precisely controlling the 3-axis position. For automatic and precise position control of an ultrasonic transducer (source and receiver) in the directions of the three axes (x, y, and z), a motor was mounted on each of the three axes. The motor can automatically and precisely control the positions with positional precision of 2'' for the x and y axes and 0.05 mm for the z axis. As it can automatically and precisely control the positions in the directions of the three axes, it has an advantage in that simulations can be carried out using the latest exploration techniques, such as OBS and Broadband Seismic. For the signal generation section, a waveform generator that can produce a maximum of two sources was used, and for the data acquisition section, which receives and stores reflected signals, an A/D converter that can receive a maximum of four signals was used. As multiple sources and receivers could be used at the same time, the system was set up in such a way that diverse exploration methods, such as single channel, multichannel, and 3-D exploration, could be realized. A computer control program based on LabVIEW was created, so that it could control the position of the transducer, determine the data acquisition parameters, and check the exploration data and progress in real time. A marine environment was simulated using a water tank 1 m wide, 1 m long, and 0.9 m high. To evaluate the performance and applicability of the seismic physical modeling system developed in this study, single channel and

  12. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: Experiments and computer simulations

    DEFF Research Database (Denmark)

    McKeever, S.W.S.; Bøtter-Jensen, L.; Agersnap Larsen, N.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal annealing. The simulations are able to reproduce qualitatively the main features of the experimental results including sensitivity changes as a function of reuse, and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow trap and deep trap effects.

  13. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  14. Optically stimulated luminescence sensitivity changes in quartz due to repeated use in single aliquot readout: experiments and computer simulations

    International Nuclear Information System (INIS)

    McKeever, S.W.S.; Oklahoma State Univ., Stillwater, OK; Boetter-Jensen, L.; Agersnap Larsen, N.; Mejdahl, V.; Poolton, N.R.J.

    1996-01-01

    As part of a study to examine sensitivity changes in single aliquot techniques using optically stimulated luminescence (OSL) a series of experiments has been conducted with single aliquots of natural quartz, and the data compared with the results of computer simulations of the type of processes believed to be occurring. The computer model used includes both shallow and deep ('hard-to-bleach') traps, OSL ('easy-to-bleach') traps, and radiative and non-radiative recombination centres. The model has previously been used successfully to account for sensitivity changes in quartz due to thermal annealing. The simulations are able to reproduce qualitatively the main features of the experimental results including sensitivity changes as a function of re-use, and their dependence upon bleaching time and laboratory dose. The sensitivity changes are believed to be the result of a combination of shallow trap and deep trap effects. (author)

  15. SPOTTED STAR LIGHT CURVES WITH ENHANCED PRECISION

    International Nuclear Information System (INIS)

    Wilson, R. E.

    2012-01-01

    The nearly continuous timewise coverage of recent photometric surveys is free of the large gaps that compromise attempts to follow starspot growth and decay as well as motions, thereby giving incentive to improve computational precision for modeled spots. Due to the wide variety of star systems in the surveys, such improvement should apply to light/velocity curve models that accurately include all the main phenomena of close binaries and rotating single stars. The vector fractional area (VFA) algorithm that is introduced here represents surface elements by small sets of position vectors so as to allow accurate computation of circle-triangle overlap by spherical geometry. When computed by VFA, spots introduce essentially no noticeable scatter in light curves at the level of one part in 10,000. VFA has been put into the Wilson-Devinney light/velocity curve program and all logic and mathematics are given so as to facilitate entry into other such programs. Advantages of precise spot computation include improved statistics of spot motions and aging, reduced computation time (intrinsic precision relaxes needs for grid fineness), noise-free illustration of spot effects in figures, and help in guarding against false positives in exoplanet searches, where spots could approximately mimic transiting planets in unusual circumstances. A simple spot growth and decay template quantifies time profiles, and specifics of its utilization in differential corrections solutions are given. Computational strategies are discussed, the overall process is tested in simulations via solutions of synthetic light curve data, and essential simulation results are described. An efficient time smearing facility by Gaussian quadrature can deal with Kepler mission data that are in 30 minute time bins.
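
    The time-smearing idea mentioned at the end can be sketched directly: the finite 30-minute exposure is modeled as the bin average of the instantaneous light curve, computed by Gauss-Legendre quadrature (toy model function; not the Wilson-Devinney code itself).

    ```python
    import numpy as np

    def smeared_flux(model, t_center, bin_width, n=5):
        """Average `model` over a time bin via n-point Gauss-Legendre."""
        nodes, weights = np.polynomial.legendre.leggauss(n)
        t = t_center + 0.5 * bin_width * nodes
        return 0.5 * np.sum(weights * model(t))   # (1/W) * integral over bin

    # Toy narrow dip, e.g. a spot- or transit-like feature (times in days)
    dip = lambda t: 1.0 - 0.01 * np.exp(-0.5 * ((t - 0.5) / 0.01) ** 2)
    print(dip(0.5))                               # instantaneous depth: 0.99
    print(smeared_flux(dip, 0.5, 30.0 / 1440.0))  # 30 min bin dilutes the dip
    ```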

  16. Cluster computing software for GATE simulations

    International Nuclear Information System (INIS)

    Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.

    2007-01-01

    Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values

  17. Computational and empirical simulations of selective memory impairments: Converging evidence for a single-system account of memory dissociations.

    Science.gov (United States)

    Curtis, Evan T; Jamieson, Randall K

    2018-04-01

    Current theory has divided memory into multiple systems, resulting in a fractionated account of human behaviour. By an alternative perspective, memory is a single system. However, debate over the details of different single-system theories has overshadowed the converging agreement among them, slowing the reunification of memory. Evidence in favour of dividing memory often takes the form of dissociations observed in amnesia, where amnesic patients are impaired on some memory tasks but not others. The dissociations are taken as evidence for separate explicit and implicit memory systems. We argue against this perspective. We simulate two key dissociations between classification and recognition in a computational model of memory, A Theory of Nonanalytic Association. We assume that amnesia reflects a quantitative difference in the quality of encoding. We also present empirical evidence that replicates the dissociations in healthy participants, simulating amnesic behaviour by reducing study time. In both analyses, we successfully reproduce the dissociations. We integrate our computational and empirical successes with the success of alternative models and manipulations and argue that our demonstrations, taken in concert with similar demonstrations with similar models, provide converging evidence for a more general set of single-system analyses that support the conclusion that a wide variety of memory phenomena can be explained by a unified and coherent set of principles.

  18. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic databases and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for the multiphase-field model.
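
    The extrapolation idea is ordinary Taylor expansion around a reference state: evaluate the expensive thermodynamic quantity and its derivatives once, then reuse the polynomial nearby. A one-dimensional sketch with a stand-in function (not the coupled thermodynamic-database evaluation itself):

    ```python
    import numpy as np

    def expensive_G(c):          # stand-in for a costly thermodynamic call
        return c * np.log(c) + (1 - c) * np.log(1 - c)

    c0, h = 0.30, 1e-5           # reference composition, finite-difference step
    G0 = expensive_G(c0)
    dG = (expensive_G(c0 + h) - expensive_G(c0 - h)) / (2 * h)
    d2G = (expensive_G(c0 + h) - 2 * G0 + expensive_G(c0 - h)) / h**2

    def G_second_order(c):       # cheap second-order extrapolation
        dc = c - c0
        return G0 + dG * dc + 0.5 * d2G * dc**2

    c = 0.32
    print(expensive_G(c), G_second_order(c))   # close agreement near c0
    ```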

  19. Comparative Performance of Four Single Extreme Outlier Discordancy Tests from Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Surendra P. Verma

    2014-01-01

    Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ=0 and ε=±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both the criterion of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15>N14>N8.
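
    The power-function estimation works as the sketch below illustrates (a Grubbs-type statistic with far fewer replications than the study's 20,000,000; the critical value is calibrated on clean samples, then one observation is slipped by δ):

    ```python
    import numpy as np

    def grubbs_stat(x, axis=-1):
        # maximum absolute studentized deviation within each sample
        z = np.abs(x - x.mean(axis=axis, keepdims=True))
        return z.max(axis=axis) / x.std(axis=axis, ddof=1)

    rng = np.random.default_rng(42)
    n, reps, delta = 10, 200_000, 4.0

    null = grubbs_stat(rng.standard_normal((reps, n)))   # no contamination
    crit = np.quantile(null, 0.95)                       # 5% critical value

    slipped = rng.standard_normal((reps, n))
    slipped[:, 0] += delta                               # slip one observation
    power = np.mean(grubbs_stat(slipped) > crit)
    print(f"critical value {crit:.3f}, power {power:.3f}")
    ```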

  20. Numerical characteristics of quantum computer simulation

    Science.gov (United States)

    Chernyavskiy, A.; Khamitov, K.; Teplov, A.; Voevodin, V.; Voevodin, Vl.

    2016-12-01

    The simulation of quantum circuits is significantly important for the implementation of quantum information technologies. The main difficulty of such modeling is the exponential growth of dimensionality; thus the usage of modern high-performance parallel computations is relevant. As is well known, arbitrary quantum computation in the circuit model can be done by only single- and two-qubit gates, and we analyze the computational structure and properties of the simulation of such gates. We investigate the fact that the unique properties of quantum nature lead to the computational properties of the considered algorithms: quantum parallelism makes the simulation of quantum gates highly parallel, while, on the other hand, quantum entanglement leads to the problem of computational locality during simulation. We use the methodology of the AlgoWiki project (algowiki-project.org) to analyze the algorithm. This methodology consists of theoretical (sequential and parallel complexity, macro structure, and visual informational graph) and experimental (locality and memory access, scalability, and more specific dynamic characteristics) parts. The experimental part was carried out using the petascale Lomonosov supercomputer (Moscow State University, Russia). We show that the simulation of quantum gates is a good base for the research and testing of development methods for data-intensive parallel software, and the considered methodology of analysis can be successfully used for the improvement of algorithms in quantum information science.
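
    The parallelism discussed above is visible in how a single-qubit gate acts on a state vector: one 2x2 unitary is applied independently across all 2^(n-1) amplitude pairs. A minimal sketch (illustrative, not the AlgoWiki implementation):

    ```python
    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n):
        """Apply a 2x2 `gate` to qubit `target` of an n-qubit state vector."""
        psi = state.reshape([2] * n)
        psi = np.moveaxis(psi, target, 0)          # expose the target axis
        psi = np.tensordot(gate, psi, axes=([1], [0]))
        return np.moveaxis(psi, 0, target).reshape(-1)

    n = 3
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                                 # |000>
    state = apply_single_qubit_gate(state, H, target=0, n=n)
    print(np.round(state, 3))                      # superposition on qubit 0
    ```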

  1. Ion beam studies. Part 5 - the computer simulation of composite ion implantation profiles

    International Nuclear Information System (INIS)

    Freeman, J.H.; Booker, D.V.

    1977-01-01

    The computer simulation of composite ion implantation profiles produced by continuous energy programming and by discrete multiple dose doping is described. It is shown that precise matching of the computed profile to various uniform and power-law distributions can be achieved. (author)
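
    A common way to picture the composite-profile matching is to superpose one Gaussian per implant step at its projected range; the dose weights are then tuned so the sum approximates the target distribution. A sketch with assumed ranges and straggles (not the paper's doping schedule):

    ```python
    import numpy as np

    depth = np.linspace(0.0, 500.0, 1000)     # depth axis (nm)
    # (dose [cm^-2], projected range Rp [nm], straggle dRp [nm]) per implant
    steps = [(1e15, 100, 30), (5e14, 200, 45), (2e14, 320, 60)]

    profile = sum(dose / (s * np.sqrt(2 * np.pi)) *
                  np.exp(-0.5 * ((depth - rp) / s) ** 2)
                  for dose, rp, s in steps)    # composite concentration

    print(depth[np.argmax(profile)])           # depth of the composite peak
    ```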

  2. Propagator formalism and computer simulation of restricted diffusion behaviors of inter-molecular multiple-quantum coherences

    International Nuclear Information System (INIS)

    Cai Congbo; Chen Zhong; Cai Shuhui; Zhong Jianhui

    2005-01-01

    In this paper, behaviors of single-quantum coherences and inter-molecular multiple-quantum coherences under restricted diffusion in nuclear magnetic resonance experiments were investigated. The propagator formalism based on the loss of spin phase memory during random motion was applied to describe the diffusion-induced signal attenuation. The exact expression of the signal attenuation under the short gradient pulse approximation for restricted diffusion between two parallel plates was obtained using this propagator method. For long gradient pulses, a modified formalism was proposed. The simulated signal attenuation under the effects of gradient pulses of different width based on the Monte Carlo method agrees with the theoretical predictions. The propagator formalism and computer simulation can provide convenient, intuitive and precise methods for the study of the diffusion behaviors.
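
    A bare-bones Monte Carlo version of the restricted-diffusion attenuation under the short-gradient-pulse approximation (reflecting parallel plates; all parameter values are assumptions for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    L, D = 10e-6, 2.0e-9        # plate separation (m), diffusion coeff (m^2/s)
    dt, nstep, nspin = 1e-5, 2000, 20_000

    x0 = rng.uniform(0, L, nspin)        # spin positions between the plates
    x = x0.copy()
    for _ in range(nstep):
        x += rng.normal(0.0, np.sqrt(2 * D * dt), nspin)
        x = np.abs(x)                    # reflect at the wall x = 0
        x = L - np.abs(L - x)            # reflect at the wall x = L

    q = 3e5                              # gradient wavevector (1/m), assumed
    E = np.mean(np.cos(q * (x - x0)))    # short-pulse signal attenuation
    print(E)                             # exceeds the free-diffusion value
    ```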

  3. Real-time multi-GNSS single-frequency precise point positioning

    NARCIS (Netherlands)

    de Bakker, P.F.; Tiberius, C.C.J.M.

    2017-01-01

    Precise Point Positioning (PPP) is a popular Global Positioning System (GPS) processing strategy, thanks to its high precision without requiring additional GPS infrastructure. Single-Frequency PPP (SF-PPP) takes this one step further by no longer relying on expensive dual-frequency GPS receivers,

  4. SpecBit, DecayBit and PrecisionBit. GAMBIT modules for computing mass spectra, particle decay rates and precision observables

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter; Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Dal, Lars A.; Gonzalo, Tomas E. [University of Oslo, Department of Physics, Oslo (Norway); Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Kvellestad, Anders [NORDITA, Stockholm (Sweden); McKay, James; Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Putze, Antje [Universite de Savoie, CNRS, LAPTh, Annecy-le-Vieux (France); Rogan, Chris [Harvard University, Department of Physics, Cambridge, MA (United States); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Collaboration: The GAMBIT Models Workgroup

    2018-01-15

    We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT. (orig.)

  6. Computer simulations of long-time tails: what's new?

    NARCIS (Netherlands)

    Hoef, van der M.A.; Frenkel, D.

    1995-01-01

    Twenty-five years ago Alder and Wainwright discovered, by simulation, the 'long-time tails' in the velocity autocorrelation function of a single particle in a fluid [1]. Since then, few qualitatively new results on long-time tails have been obtained by computer simulations. However, within the

  7. Evaporation of freely suspended single droplets: experimental, theoretical and computational simulations

    International Nuclear Information System (INIS)

    Hołyst, R; Litniewski, M; Jakubczyk, D; Kolwas, K; Kolwas, M; Kowalski, K; Migacz, S; Palesa, S; Zientara, M

    2013-01-01

    Evaporation is ubiquitous in nature. This process influences the climate, the formation of clouds, transpiration in plants, the survival of arctic organisms, the efficiency of car engines, the structure of dried materials and many other phenomena. Recent experiments discovered two novel mechanisms accompanying evaporation: temperature discontinuity at the liquid–vapour interface during evaporation and equilibration of pressures in the whole system during evaporation. Neither of these effects had been predicted by existing theories, despite the fact that after 130 years of investigation the theory of evaporation was believed to be mature. These two effects call for a reanalysis of existing experimental data, and such is the goal of this review. In this article we analyse the experimental and the computational simulation data on the droplet evaporation of several different systems: water into its own vapour, water into air, diethylene glycol into nitrogen and argon into its own vapour. We show that the temperature discontinuity at the liquid–vapour interface discovered by Fang and Ward (1999 Phys. Rev. E 59 417–28) is a rule rather than an exception. We show in computer simulations for a single-component system (argon) that this discontinuity is due to the constraint of momentum/pressure equilibrium during evaporation. For high vapour pressure the temperature is continuous across the liquid–vapour interface, while for small vapour pressures the temperature is discontinuous. The temperature jump at the interface is inversely proportional to the vapour density close to the interface. We have also found that all analysed data are described by the following equation: da/dt = P_1/(a + P_2), where a is the radius of the evaporating droplet, t is time and P_1 and P_2 are two parameters. P_1 = −λΔT/(q_eff ρ_L), where λ is the thermal conductivity coefficient in the vapour at the interface, ΔT is the temperature difference between the liquid droplet
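
    The radius law quoted above is easy to integrate numerically. The sketch below advances da/dt = P_1/(a + P_2) with an explicit Euler step, using hypothetical parameter values; the implicit closed form a²/2 + P_2·a = a_0²/2 + P_2·a_0 + P_1·t can serve as a cross-check.

        # Radius evolution da/dt = P1 / (a + P2); P1 < 0 for an evaporating
        # droplet.  Hypothetical parameter values, for illustration only (SI).
        P1, P2 = -2.5e-11, 1.0e-6          # m^2/s, m
        a, dt = 5.0e-6, 1e-4               # initial radius (m), time step (s)

        t = 0.0
        while a > 0.1e-6:
            a += dt * P1 / (a + P2)        # explicit Euler step
            t += dt
        print(f"droplet shrank to 0.1 um after {t:.2f} s")

        # Cross-check against the implicit closed form:
        # a**2/2 + P2*a = a0**2/2 + P2*a0 + P1*t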

  8. Spike timing precision of neuronal circuits.

    Science.gov (United States)

    Kilinc, Deniz; Demir, Alper

    2018-04-17

    Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.

  9. Finite Precision Logistic Map between Computational Efficiency and Accuracy with Encryption Applications

    Directory of Open Access Journals (Sweden)

    Wafaa S. Sayed

    2017-01-01

    Full Text Available Chaotic systems appear in many applications such as pseudo-random number generation, text encryption, and secure image transfer. Numerical solutions of these systems using digital software or hardware inevitably deviate from the expected analytical solutions. Chaotic orbits produced using finite precision systems do not exhibit the infinite period expected under the assumptions of infinite simulation time and precision. In this paper, a digital implementation of the generalized logistic map with signed parameter is considered. We present a fixed-point hardware realization of a pseudo-random number generator based on the logistic map and study the trade-off it exhibits between computational efficiency and accuracy. Several factors, such as the precision used, the order in which operations are executed, and the parameter and initial-point values, affect the properties of the finite precision map. For both the positive and negative parameter cases, the studied properties include bifurcation points, output range, maximum Lyapunov exponent, and period length, and the performance of the finite precision logistic map is compared between the two cases. A basic stream cipher system is realized to evaluate the system performance for encryption applications for different bus sizes, in terms of encryption key size, hardware requirements, maximum clock frequency, NIST tests, and correlation, histogram, entropy, and Mean Absolute Error analyses of encrypted images.
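
    The key finite-precision effect, namely that orbits on a finite grid are necessarily eventually periodic, with a period length that depends strongly on the number of fractional bits, can be demonstrated directly. The sketch below is a floating-point emulation of a fixed-point map, not the hardware realization described in the paper, and for simplicity it quantizes only the state, not the parameter.

        def quantize(x, frac_bits):
            # Round to a fixed-point grid with frac_bits fractional bits.
            scale = 1 << frac_bits
            return round(x * scale) / scale

        def period_length(x0=0.3, r=3.9, frac_bits=12, burn_in=1000):
            # Iterate the logistic map in finite precision; every orbit on a
            # finite grid is eventually periodic.
            x = quantize(x0, frac_bits)
            for _ in range(burn_in):
                x = quantize(r * x * (1 - x), frac_bits)
            seen, step = {}, 0
            while x not in seen:
                seen[x] = step
                x = quantize(r * x * (1 - x), frac_bits)
                step += 1
            return step - seen[x]

        for bits in (8, 12, 16, 20):
            print(bits, period_length(frac_bits=bits))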

  10. Computer simulation of variform fuel assemblies using Dragon code

    International Nuclear Information System (INIS)

    Ju Haitao; Wu Hongchun; Yao Dong

    2005-01-01

    DRAGON is a cell code developed for the CANDU reactor by the Ecole Polytechnique de Montreal, Canada. Although DRAGON is mainly used to simulate the CANDU super-cell fuel assembly, it is able to simulate other fuel assembly geometries as well. However, apart from the CANDU reactor, only the NEACRP benchmark problem of the BWR lattice cell had been analyzed until now. The code also needs to be qualified for variform fuel assemblies, especially for the design of advanced reactors. We validated that the cell code DRAGON is useful for simulating various kinds of fuel assembly by analyzing the rod-shape fuel assembly of the PWR and the plate-shape fuel assembly of the MTR. Some other geometries were computed as well. Computational results show that DRAGON is able to analyze variform fuel assembly problems with high precision. (authors)

  11. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  12. Single photon laser altimeter simulator and statistical signal processing

    Science.gov (United States)

    Vacek, Michael; Prochazka, Ivan

    2013-05-01

    Spaceborne altimeters are common instruments onboard deep space rendezvous spacecraft. They provide range and topographic measurements critical in spacecraft navigation. Simultaneously, the receiver part may be utilized for an Earth-to-satellite link, one-way time transfer, and precise optical radiometry. The main advantage of the single photon counting approach is the ability to process signals with a very low signal-to-noise ratio, eliminating the need for large telescopes and a high power laser source. Extremely small, rugged and compact microchip lasers can be employed. The major limiting factor, on the other hand, is the acquisition time needed to gather a sufficient volume of data in repetitive measurements in order to process and evaluate the data appropriately. Statistical signal processing is adopted to detect signals with an average strength much lower than one photon per measurement. A comprehensive simulator design and range signal processing algorithm are presented to identify a mission-specific altimeter configuration. Typical mission scenarios (celestial body surface landing and topographical mapping) are simulated and evaluated. The most promising single photon altimeter applications are low-orbit (~10 km), low-radial-velocity (several m/s) topographical mapping (asteroids, Phobos and Deimos) and landing altimetry (~10 km), where range evaluation repetition rates of ~100 Hz and 0.1 m precision may be achieved. Moon landing and asteroid Itokawa topographical mapping scenario simulations are discussed in more detail.
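
    The statistical signal processing idea, accumulating repetitive measurements until a sub-photon-per-shot return stands out of the background, can be illustrated with a toy range histogram. All rates, bin counts, and the detection rule below are illustrative assumptions, not the simulator's actual configuration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Repetitive single-photon ranging: << 1 signal photon per shot on
        # average, plus uniform background counts.  The range peak emerges in
        # the accumulated histogram.
        n_shots, n_bins = 20_000, 1000
        true_bin = 437                         # hypothetical target range bin
        p_signal, mean_background = 0.05, 2.0  # photons per shot

        hist = np.zeros(n_bins)
        for _ in range(n_shots):
            # Background photons uniformly distributed over the range gate.
            for b in rng.integers(0, n_bins, rng.poisson(mean_background)):
                hist[b] += 1
            # Signal photon detected with small probability, with timing jitter.
            if rng.random() < p_signal:
                hist[int(rng.normal(true_bin, 2)) % n_bins] += 1

        # Detection: bin exceeding the mean background level most strongly,
        # in units of the Poisson standard deviation.
        bg = hist.mean()
        snr = (hist - bg) / np.sqrt(bg)
        print("detected range bin:", snr.argmax(), "SNR:", snr.max().round(1))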

  13. Quantum analogue computing.

    Science.gov (United States)

    Kendon, Vivien M; Nemoto, Kae; Munro, William J

    2010-08-13

    We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.

  14. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks, mainly to solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set up a complete, scalable, compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases, from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
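
    The flavor of the framework's time-domain representation can be conveyed in a few lines: a value is represented by the gap between two spikes, and a fixed synaptic delay then acts as arithmetic in the time domain. The coding constants below are hypothetical, and this toy sketch shows only addition of a constant, not the full framework.

        # Toy interval coding: a value x in [0, 1] is the gap between two
        # spikes, gap = T_min + x * T_cod.  A fixed delay on the second spike
        # then implements addition by a constant, entirely in the time domain.
        T_min, T_cod = 10.0, 100.0            # ms, hypothetical coding constants

        def encode(x, t0=0.0):
            return (t0, t0 + T_min + x * T_cod)

        def decode(spikes):
            return (spikes[1] - spikes[0] - T_min) / T_cod

        s = encode(0.25)
        delayed = (s[0], s[1] + 0.5 * T_cod)  # delay line adds 0.5
        print(decode(delayed))                # 0.75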

  15. Foundations for computer simulation of a low pressure oil flooded single screw air compressor

    Science.gov (United States)

    Bein, T. W.

    1981-12-01

    The necessary logic to construct a computer model to predict the performance of an oil-flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed. The governing equations describing the processes are developed from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor. The model uses empirical external values as the basis for the internal predictions. The computed values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.

  16. A study of reduced numerical precision to make superparameterization more competitive using a hardware emulator in the OpenIFS model

    Science.gov (United States)

    Düben, Peter D.; Subramanian, Aneesh; Dawson, Andrew; Palmer, T. N.

    2017-03-01

    The use of reduced numerical precision to reduce computing costs for the cloud resolving model of superparameterized simulations of the atmosphere is investigated. An approach to identify the optimal level of precision for many different model components is presented, and a detailed analysis of precision is performed. This is nontrivial for a complex model that shows chaotic behavior such as the cloud resolving model in this paper. It is shown not only that numerical precision can be reduced significantly but also that the results of the reduced precision analysis provide valuable information for the quantification of model uncertainty for individual model components. The precision analysis is also used to identify model parts that are of less importance thus enabling a reduction of model complexity. It is shown that the precision analysis can be used to improve model efficiency for both simulations in double precision and in reduced precision. Model simulations are performed with a superparameterized single-column model version of the OpenIFS model that is forced by observational data sets. A software emulator was used to mimic the use of reduced precision floating point arithmetic in simulations.
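
    A software emulator of this kind can be approximated by rounding every intermediate result to a reduced number of significand bits. The sketch below shows the basic rounding operation and how the results of an iterated nonlinear map diverge across precision levels; it is a simplified stand-in for the emulator used in the study, which also reduces the exponent range and operates inside a full atmospheric model.

        import numpy as np

        def reduce_precision(x, significand_bits):
            # Emulate a float format with fewer significand bits by rounding
            # the double-precision value to the nearest representable one.
            m, e = np.frexp(x)
            scale = 2.0 ** significand_bits
            return np.ldexp(np.round(m * scale) / scale, e)

        # Example: a chaotic-looking recursion run at several precisions.
        def run(bits, n=50, x=0.4):
            for _ in range(n):
                x = reduce_precision(3.8 * x * (1.0 - x), bits)
            return x

        for bits in (52, 23, 10):   # double, single, half-like significands
            print(bits, run(bits))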

  17. Precision Medicine and PET/Computed Tomography: Challenges and Implementation.

    Science.gov (United States)

    Subramaniam, Rathan M

    2017-01-01

    Precision Medicine is about selecting the right therapy for the right patient, at the right time, specific to the molecular targets expressed by the disease or tumor, in the context of the patient's environment and lifestyle. Some of the challenges for the delivery of precision medicine in oncology include biomarkers for patient selection and enrichment (precision diagnostics), mapping out the tumor heterogeneity that contributes to therapy failures, and early therapy assessment to identify resistance to therapies. PET/computed tomography offers solutions in these important areas of challenge and facilitates the implementation of precision medicine.

  18. Benefits of computer screen-based simulation in learning cardiac arrest procedures.

    Science.gov (United States)

    Bonnetain, Elodie; Boucheix, Jean-Michel; Hamet, Maël; Freysz, Marc

    2010-07-01

    What is the best way to train medical students early so that they acquire basic skills in cardiopulmonary resuscitation as effectively as possible? Studies have shown the benefits of high-fidelity patient simulators, but have also demonstrated their limits. New computer screen-based multimedia simulators have fewer constraints than high-fidelity patient simulators. In this area, as yet, there has been no research on the effectiveness of transfer of learning from a computer screen-based simulator to more realistic situations such as those encountered with high-fidelity patient simulators. We tested the benefits of learning cardiac arrest procedures using a multimedia computer screen-based simulator in 28 Year 2 medical students. Just before the end of the traditional resuscitation course, we compared two groups. An experimental group (EG) was first asked to learn to perform the appropriate procedures in a cardiac arrest scenario (CA1) in the computer screen-based learning environment and was then tested on a high-fidelity patient simulator in another cardiac arrest simulation (CA2). While the EG was learning to perform CA1 procedures in the computer screen-based learning environment, a control group (CG) actively continued to learn cardiac arrest procedures using practical exercises in a traditional class environment. Both groups were given the same amount of practice, exercises and trials. The CG was then also tested on the high-fidelity patient simulator for CA2, after which it was asked to perform CA1 using the computer screen-based simulator. Performances with both simulators were scored on a precise 23-point scale. On the test on a high-fidelity patient simulator, the EG trained with the multimedia computer screen-based simulator performed significantly better than the CG trained with traditional exercises and practice (16.21 versus 11.13 of 23 possible points, respectively; p<0.001). Computer screen-based simulation appears to be effective in preparing learners to

  19. Longitudinal interfacility precision in single-energy quantitative CT

    International Nuclear Information System (INIS)

    Morin, R.L.; Gray, J.E.; Wahner, H.W.; Weekes, R.G.

    1987-01-01

    The authors investigated the precision of single-energy quantitative CT measurements between two facilities over 3 months. An anthropomorphic phantom with calcium hydroxyapatite inserts (60, 100, and 160 mg/cc) was used with the Cann-Genant method to measure bone mineral density. The same model of CT scanner, anthropomorphic phantom, quantitative CT standard and analysis package was utilized at each facility. Acquisition and analysis techniques were identical to those used in patient studies. At one facility, 28 measurements yielded an average precision of 6.1% (5.0%-8.5%). The average precision for 39 measurements at the other facility was 4.3% (3.2%-8.1%). Successive scans with phantom repositioning between scans yielded an average precision of about 3% (1%-4% without repositioning). Despite differences in personnel, scanners, standards, and phantoms, the variation between facilities was about 2%, which was within the intrafacility variation of about 5% at each location.

  20. Rapid computation of single PET scan rest-stress myocardial blood flow parametric images by table look up.

    Science.gov (United States)

    Guehl, Nicolas J; Normandin, Marc D; Wooten, Dustin W; Rozen, Guy; Ruskin, Jeremy N; Shoup, Timothy M; Woo, Jonghye; Ptaszek, Leon M; Fakhri, Georges El; Alpert, Nathaniel M

    2017-09-01

    We have recently reported a method for measuring rest-stress myocardial blood flow (MBF) using a single, relatively short, PET scan session. The method requires two IV tracer injections, one to initiate rest imaging and one at peak stress. We previously validated absolute flow quantitation in mL/min/cc for standard bull's eye, segmental analysis. In this work, we extend the method for fast computation of rest-stress MBF parametric images. We provide an analytic solution to the single-scan rest-stress flow model which is then solved using a two-dimensional table lookup method (LM). Simulations were performed to compare the accuracy and precision of the lookup method with the original nonlinear method (NLM). Then the method was applied to 16 single-scan rest/stress measurements made in 12 pigs: seven studied after infarction of the left anterior descending artery (LAD) territory, and nine imaged in the native state. Parametric maps of rest and stress MBF as well as maps of left (f_LV) and right (f_RV) ventricular spill-over fractions were generated. Regions of interest (ROIs) for 17 myocardial segments were defined in bull's eye fashion on the parametric maps. The mean of each ROI was then compared to the rest (K_1r) and stress (K_1s) MBF estimates obtained from fitting the 17 regional TACs with the NLM. In simulation, the LM performed as well as the NLM in terms of precision and accuracy, and showed no bias introduced by the use of a predefined two-dimensional lookup table. In experimental data, the parametric maps demonstrated good statistical quality and the LM was computationally much more efficient than the original NLM. Very good agreement was obtained between the mean MBF calculated on the parametric maps for each of the 17 ROIs and the regional MBF values estimated by the NLM (K_1map,LM = 1.019 × K_1ROI,NLM + 0.019, R² = 0.986; mean difference = 0.034 ± 0.036 mL/min/cc). We developed a table lookup method for fast
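
    The paper's kinetic model is not reproduced in the record, but the general table-lookup strategy it relies on can be sketched with a stand-in two-parameter model: precompute the model output over a parameter grid once, then invert each voxel's measured features by a nearest-neighbour search instead of a per-voxel nonlinear fit. The model function, grids, and parameter names below are illustrative assumptions.

        import numpy as np

        def model(k1, k2):
            # Stand-in for the two features extracted from a voxel's
            # time-activity curve; the real kinetic model is more involved.
            return np.stack([k1 / (1.0 + k2), k1 * np.exp(-k2)])

        k1_grid = np.linspace(0.1, 3.0, 300)
        k2_grid = np.linspace(0.01, 1.5, 300)
        K1, K2 = np.meshgrid(k1_grid, k2_grid, indexing="ij")
        table = model(K1, K2)                      # shape (2, 300, 300)

        def lookup(features):
            # Nearest-neighbour inversion: find the grid cell whose model
            # output best matches the measured feature pair.
            err = ((table - features[:, None, None]) ** 2).sum(axis=0)
            i, j = np.unravel_index(err.argmin(), err.shape)
            return k1_grid[i], k2_grid[j]

        truth = np.array([1.2, 0.4])
        print(lookup(model(*truth)))               # recovers ~ (1.2, 0.4)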

  1. Automated, Resummed and Effective: Precision Computations for the LHC and Beyond

    CERN Document Server

    2017-01-01

    Precise predictions for collider processes are crucial to interpret the results from the Large Hadron Collider (LHC) at CERN. The goal of this programme is to bring together experts from different communities in precision collider physics (diagrammatic resummation vs. effective field theory, automated numerical computations vs. analytic approaches, etc.) to discuss the latest advances in jet physics, higher-order computations and resummation.

  2. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  3. SU(2) lattice gauge theory simulations on Fermi GPUs

    International Nuclear Information System (INIS)

    Cardoso, Nuno; Bicudo, Pedro

    2011-01-01

    In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, the NVIDIA Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. We present an analysis and performance comparison between the GPU and CPU in single and double precision. Analyses with multiple GPUs and two different architectures (G200 and Fermi) are also presented. In order to obtain high performance, the code must be optimized for the GPU architecture, i.e., the implementation must exploit the memory hierarchy of the CUDA programming model. We produce codes for the Monte Carlo generation of SU(2) lattice gauge configurations, for the mean plaquette, for the Polyakov loop at finite T and for the Wilson loop. We also present results for the potential using many configurations (50,000) without smearing and almost 2000 configurations with APE smearing. With two Fermi GPUs we achieved an excellent performance of 200× the speed of one CPU in single precision, around 110 Gflops/s. We also find that, using the Fermi architecture, double precision computations for the static quark-antiquark potential are not much slower (less than 2× slower) than single precision computations.
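
    The single-versus-double precision trade-off benchmarked here can be illustrated without CUDA: repeated multiplication of SU(2) matrices accumulates rounding error that shows up as a loss of unitarity, and the effect is orders of magnitude larger in single precision. The NumPy sketch below illustrates only this numerical issue, not the GPU implementation.

        import numpy as np

        def random_su2(rng, dtype):
            # SU(2) element from a random unit 4-vector: a0*I + i*(a . sigma).
            a = rng.standard_normal(4)
            a /= np.linalg.norm(a)
            return np.array([[a[0] + 1j * a[3], a[2] + 1j * a[1]],
                             [-a[2] + 1j * a[1], a[0] - 1j * a[3]]]).astype(dtype)

        rng = np.random.default_rng(7)
        for dtype in (np.complex64, np.complex128):  # single vs double precision
            U = np.eye(2, dtype=dtype)
            for _ in range(10_000):
                U = U @ random_su2(rng, dtype)
            # Deviation from unitarity accumulated by rounding errors.
            dev = np.abs(U @ U.conj().T - np.eye(2)).max()
            print(dtype.__name__, dev)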

  4. MDGRAPE-4: a special-purpose computer system for molecular dynamics simulations.

    Science.gov (United States)

    Ohmura, Itta; Morimoto, Gentaro; Ohno, Yousuke; Hasegawa, Aki; Taiji, Makoto

    2014-08-06

    We are developing the MDGRAPE-4, a special-purpose computer system for molecular dynamics (MD) simulations. MDGRAPE-4 is designed to achieve strong scalability for protein MD simulations through the integration of general-purpose cores, dedicated pipelines, memory banks and network interfaces (NIFs) to create a system on chip (SoC). Each SoC has 64 dedicated pipelines that are used for non-bonded force calculations and run at 0.8 GHz. Additionally, it has 65 Tensilica Xtensa LX cores with single-precision floating-point units that are used for other calculations and run at 0.6 GHz. At peak performance levels, each SoC can evaluate 51.2 G interactions per second. It also has 1.8 MB of embedded shared memory banks and six network units with a peak bandwidth of 7.2 GB/s for the three-dimensional torus network. The system consists of 512 (8×8×8) SoCs in total, which are mounted on 64 node modules with eight SoCs each. Optical transmitters/receivers are used for internode communication. The expected maximum power consumption is 50 kW. While the MDGRAPE-4 software is still being improved, we plan to run MD simulations on MDGRAPE-4 in 2014. The MDGRAPE-4 system will enable long-time molecular dynamics simulations of small systems. It is also useful for multiscale molecular simulations where the particle simulation parts often become bottlenecks.

  5. Simulation of Intelligent Single Wheel Mobile Robot

    Directory of Open Access Journals (Sweden)

    Maki K. Rashid

    2008-11-01

    Full Text Available Stabilization of a single wheel mobile robot has attracted researchers' attention in robotics. However, the budget required to build experimental setups capable of investigating isolated parameters has encouraged the development of new simulation methods and techniques that overcome such limitations. In this work we have developed a simulation platform for testing different control tactics to stabilize a single wheel mobile robot. The graphic representation of the robot, the dynamic solution, and the control scheme are all integrated on a common computer platform using Visual Basic. Simulation indicates that we can control such a robot without knowing the details of its internal structure or dynamic behaviour, just by looking at it and using manual operation tactics. Twenty-five rules were extracted and implemented using a Takagi-Sugeno fuzzy controller, with significant achievement in controlling robot motion during the dynamic simulation. The data resulting from the successful implementation of the fuzzy model were then used to train a neuro-fuzzy controller using the ANFIS scheme, producing further improvement in robot performance.

  6. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.

  9. A computer simulation of an adaptive noise canceler with a single input

    Science.gov (United States)

    Albert, Stuart D.

    1991-06-01

    A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulation, its assumptions, and its input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency-hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
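
    For reference, Widrow's LMS update itself is only a few lines. The sketch below implements a single-input configuration consistent with the record's title, in which the reference is a delayed copy of the input so that the filter predicts (and cancels) the narrow-band interference while the wide-band signal decorrelates over the delay; the signal parameters and step size are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Single-input canceler: the reference is a delayed copy of the input.
        n, taps, delay, mu = 20_000, 32, 16, 1e-3
        t = np.arange(n)
        narrowband = np.sin(2 * np.pi * 0.05 * t)          # interference
        wideband = 0.5 * rng.standard_normal(n)            # desired signal
        x = narrowband + wideband

        w = np.zeros(taps)
        out = np.zeros(n)
        for k in range(taps + delay, n):
            ref = x[k - delay - taps:k - delay][::-1]      # delayed tap vector
            y = w @ ref                                    # narrow-band estimate
            e = x[k] - y                                   # canceler output
            w += 2 * mu * e * ref                          # Widrow's LMS update
            out[k] = e

        print("interference power before:", np.var(narrowband))
        print("residual power at output :",
              np.var(out[n // 2:] - wideband[n // 2:]))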

  10. Computational simulation of flow and heat transfer in single-phase natural circulation loops

    International Nuclear Information System (INIS)

    Pinheiro, Larissa Cunha

    2017-01-01

    Passive decay heat removal systems based on natural circulation are essential assets for the new Gen III+ nuclear power reactors and nuclear spent fuel pools. The aim of the present work is to study both laminar and turbulent flow and heat transfer in single-phase natural circulation systems through computational fluid dynamics simulations. The working fluid is considered to be incompressible with constant properties, and accordingly the Boussinesq natural convection hypothesis was applied. The model chosen for the turbulence closure problem was the k-ε model. The commercial computational fluid dynamics code ANSYS CFX 15.0 was used to obtain the numerical solution of the governing equations. Two single-phase natural circulation circuits were studied, a 2D toroidal loop and a 3D rectangular loop, both with the same boundary conditions: a prescribed heat flux at the heater and a fixed wall temperature at the cooler. Validation and verification were performed with the numerical data provided by DESRAYAUD et al. [1] and the experimental data provided by MISALE et al. [2] and KUMAR et al. [3]. Excellent agreement between the Reynolds number (Re) and the modified Grashof number (Gr_m) was observed, independently of the Prandtl number (Pr). However, the convergence interval was observed to vary with Pr, indicating that Pr is a stability-governing parameter for natural circulation. Multiple steady states were obtained for Pr = 0.7. Finally, the effect of inclination was studied for the 3D circuit; both in-plane and out-of-plane inclinations were examined for the steady-state laminar regime. In conclusion, the Re for the out-of-plane inclination was in perfect agreement with the correlation found for the zero-inclination system, while for the in-plane inclined system the results differ from those of the corresponding vertical loop. (author)

  11. Site-selective substitutional doping with atomic precision on stepped Al (111) surface by single-atom manipulation.

    Science.gov (United States)

    Chen, Chang; Zhang, Jinhu; Dong, Guofeng; Shao, Hezhu; Ning, Bo-Yuan; Zhao, Li; Ning, Xi-Jing; Zhuang, Jun

    2014-01-01

    In the fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain specific or enhanced functionalities. With first-principles simulations, we propose a method for the substitutional doping of an individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of an Al (111) surface can be extracted vertically with an Al trimer-apex tip, and the dopant atom can then be positioned at this site. The details of the entire process, including potential energy curves, are given, which suggests the reliability of the proposed single-atom doping method.

  12. Easy-DHPSF open-source software for three-dimensional localization of single molecules with precision beyond the optical diffraction limit.

    Science.gov (United States)

    Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E

    2013-02-25

    Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as that of maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.
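
    The least-squares fitting step can be sketched as follows, here in 2D rather than the 3D double-helix case and without the template-matching stage; the Gaussian point-spread-function model, noise level, and parameter values are simplified assumptions, not those of Easy-DHPSF.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)

        # Simulate a noisy image of one emitter and recover its position by
        # least-squares fitting of a 2-D Gaussian.
        yy, xx = np.mgrid[0:15, 0:15]

        def gaussian(p):
            x0, y0, amp, sigma, bg = p
            return bg + amp * np.exp(
                -((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

        truth = (7.3, 6.8, 200.0, 1.6, 10.0)
        image = rng.poisson(gaussian(truth)).astype(float)

        fit = least_squares(lambda p: (gaussian(p) - image).ravel(),
                            x0=(7.0, 7.0, image.max(), 1.5, image.min()))
        print("fitted position:", fit.x[:2])  # typically within ~0.05 px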

  13. Scientific computer simulation review

    International Nuclear Information System (INIS)

    Kaizer, Joshua S.; Heller, A. Kevin; Oberkampf, William L.

    2015-01-01

    Before the results of a scientific computer simulation are used for any purpose, it should be determined whether those results can be trusted. Answering that question of trust is the domain of scientific computer simulation review. There is limited literature that focuses on simulation review, and most of it is specific to the review of a particular type of simulation. This work is intended to provide a foundation for a common understanding of simulation review. This is accomplished through three contributions. First, scientific computer simulation review is formally defined. This definition identifies the scope of simulation review and provides the boundaries of the review process. Second, maturity assessment theory is developed. This development clarifies the concepts of maturity criteria, maturity assessment sets, and maturity assessment frameworks, which are essential for performing simulation review. Finally, simulation review is described as the application of a maturity assessment framework. This is illustrated through the evaluation of a simulation review performed by the U.S. Nuclear Regulatory Commission. In making these contributions, this work provides a means for a more objective assessment of a simulation's trustworthiness and takes the next step in establishing scientific computer simulation review as its own field.

  14. Mixed Precision Solver Scalable to 16000 MPI Processes for Lattice Quantum Chromodynamics Simulations on the Oakforest-PACS System

    OpenAIRE

    Boku, Taisuke; Ishikawa, Ken-Ichi; Kuramashi, Yoshinobu; Meadows, Lawrence

    2017-01-01

    Lattice Quantum Chromodynamics (Lattice QCD) is a quantum field theory on a finite discretized space-time box, used to numerically compute the dynamics of quarks and gluons and explore the nature of the subatomic world. Solving the equation of motion of quarks (the quark solver) is the most compute-intensive part of lattice QCD simulations and is one of the legacy HPC applications. We have developed a mixed-precision quark solver for a large Intel Xeon Phi (KNL) system named "Oakforest-PACS", empl...
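
    The essence of a mixed-precision solver is iterative refinement: do the expensive inner solve in low precision and accumulate residuals and corrections in high precision. The sketch below shows this pattern on a small dense test system; the actual lattice QCD solver works matrix-free on the Wilson-Dirac operator with far more sophisticated inner iterations.

        import numpy as np

        # Mixed-precision iterative refinement: inner solve in float32 (fast),
        # residuals and corrections accumulated in float64.
        rng = np.random.default_rng(11)
        n = 500
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test
        b = rng.standard_normal(n)

        A32 = A.astype(np.float32)   # in production one would factor this once
        x = np.zeros(n)
        for it in range(10):
            r = b - A @ x                                  # double-precision residual
            dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap inner solve
            x += dx.astype(np.float64)
            print(it, np.linalg.norm(r))                   # residual shrinks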

  15. Modelling and numerical simulation of vortex induced vibrations of single cylinder or cylinder arrays

    International Nuclear Information System (INIS)

    Jus, Y.

    2011-01-01

    This research thesis is part of the body of research carried out in the nuclear field to optimize the predictive capabilities of sizing models for nuclear plant components. More precisely, it addresses the modelling of the action exerted by the flowing fluid and of the feedback induced by the structure dynamics. The objective here is to investigate the interaction between turbulence in the vicinity of the wall and the effects of non-conservative, potentially destabilizing unsteady coupling. The particular case of a single cylinder in an infinite medium, submitted to a transverse flow, is studied statically and then dynamically. The influence of the flow regime on the dynamic response is characterized, and the fluid-structure interaction energy is quantified. The author then addresses the case of an array of cylinders, and highlights the contribution of three-dimensional macro-simulations to the analysis of flow-induced structure vibrations in the subcritical regime within a High Performance Computing (HPC) framework, as well as the interest of CFD/CSM (computational fluid dynamics / computational structural mechanics) coupling in the case of turbulent flows in an industrial environment.

  16. Precision tests of CPT invariance with single trapped antiprotons

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Stefan [RIKEN, Ulmer Initiative Research Unit, Wako, Saitama (Japan); Collaboration: BASE-Collaboration

    2015-07-01

    The reason for the striking imbalance of matter and antimatter in our Universe has yet to be understood. This is the motivation and inspiration for conducting high precision experiments that compare the fundamental properties of matter and antimatter equivalents at the lowest energies and with the greatest precision. According to theory, the most sensitive tests of CPT invariance are measurements of the antihydrogen ground-state hyperfine splitting, as well as comparisons of the proton and antiproton magnetic moments. Within the BASE collaboration we target the latter. Using a double Penning trap, we recently performed the first direct high precision measurement of the proton magnetic moment. The achieved fractional precision of 3.3 ppb improves the previously accepted literature value by a factor of 2.5. Application of the method to a single trapped antiproton will improve the precision of the particle's magnetic moment by more than a factor of 1000, thus providing one of the most stringent tests of CPT invariance. In my talk I report on the status and future perspectives of our efforts.

  17. Distribution of recombination hotspots in the human genome--a comparison of computer simulations with real data.

    Directory of Open Access Journals (Sweden)

    Dorota Mackiewicz

    Full Text Available Recombination is the main cause of genetic diversity. Thus, errors in this process can lead to chromosomal abnormalities. Recombination events are confined to narrow chromosome regions called hotspots, in which characteristic DNA motifs are found. Genomic analyses have shown that both recombination hotspots and DNA motifs are distributed unevenly along human chromosomes and are much more frequent in the subtelomeric regions of chromosomes than in their central parts. Clusters of motifs roughly follow the distribution of recombination hotspots, whereas single motifs show a negative correlation with the hotspot distribution. To model the phenomena related to recombination, we carried out Monte Carlo computer simulations of genome evolution. The computer simulations generated an uneven distribution of hotspots, with their domination in the subtelomeric regions of chromosomes. They also revealed that purifying selection eliminating defective alleles is strong enough to cause such a hotspot distribution. After a sufficiently long simulation time, the structure of the chromosomes reached a dynamic equilibrium, in which the number and global distribution of both hotspots and defective alleles remained statistically unchanged, while their precise positions were shifted. This resembles the dynamic structure of the human and chimpanzee genomes, where hotspots change their exact locations but the global distributions of recombination events are very similar.

  18. Precise determination of sodium in serum by simulated isotope dilution method of inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yan Ying; Zhang Chuanbao; Zhao Haijian; Chen Wenxiang; Shen Ziyu; Wang Xiaoru; Chen Dengyun

    2007-01-01

    A new precise and accurate method for the determination of sodium in serum by inductively coupled plasma mass spectrometry (ICP-MS) was developed. Since 23Na is a mono-isotopic element, 27Al is selected as a simulated isotope of Na. Al is spiked into the serum samples and the Na standard solution. The 23Na/27Al ratio in the Na standard solution is determined to establish the natural Na isotope ratio. The serum samples are digested with purified HNO3/H2O2 and diluted to obtain solutions of about 0.6 μg/g Al, and the 23Na/27Al ratios of the serum samples are measured to calculate accurate Na concentrations based on the isotope dilution method. When this simulated isotope dilution ICP-MS method is applied, with Al selected as the simulated isotope of Na, precise and accurate Na concentrations in serum are obtained. An inter-day precision of CV<0.13% was obtained for the same serum sample over 4 measurements during 3 days. Spike recoveries were between 99.69% and 100.60% for 4 different serum samples over multiple measurements on 3 days. The results of measuring standard reference materials for serum sodium agree with the certified values. The relative difference between the 3 days is 0.22%-0.65%, and the relative difference within one bottle is 0.15%-0.44%. The ICP-MS simulated isotope dilution method is shown to be not only precise and accurate, but also quick and convenient for measuring Na in serum. It is promising as a reference method for the precise determination of Na in serum. Since Al is a low-cost isotope dilution reagent, the method could be widely applied for serum Na determination. (authors)

  19. Multilevel criticality computations in AREVA NP'S core simulation code artemis - 195

    International Nuclear Information System (INIS)

    Van Geemert, R.

    2010-01-01

    This paper discusses the multi-level critical boron iteration approach that is applied by default in AREVA NP's whole-core neutronics and thermal hydraulics core simulation program ARTEMIS. This multi-level approach is characterized by the projection of variational boron concentration adjustments onto the coarser mesh levels in a multi-level re-balancing hierarchy associated with the nodal flux equations to be solved in steady-state core simulation. At each individual re-balancing mesh level, optimized variational criticality tuning formulas are applied. These drive the core model to a numerically highly accurate self-sustaining state (i.e. with the neutronic eigenvalue equal to 1 up to a very high numerical precision) by continuous adjustment of the boron concentration as a system-wide scalar criticality parameter. Through the default application of this approach in ARTEMIS reactor cycle simulations, an accuracy of all critical boron concentration estimates better than 0.001 ppm is achieved for all burnup time steps in a computationally efficient way. This high accuracy is relevant for precision optimization in industrial core simulation as well as for enabling accurate reactivity perturbation assessments. The developed approach is presented from a numerical methodology point of view, with an emphasis on the multi-grid aspect of the concept. Furthermore, an application-relevant verification is presented in terms of the coupled iteration convergence efficiency achieved for a representative industrial core cycle computation. (authors)
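
    The outermost idea, driving the core to a self-sustaining state by adjusting a single scalar criticality parameter, can be sketched with a secant update on the boron concentration. The stand-in reactivity function below is hypothetical, and the paper's multi-level variational projection onto coarser meshes is not reproduced.

        # Drive the core eigenvalue k_eff to 1 by adjusting a single scalar
        # (the boron concentration) with a secant update.  A hypothetical
        # monotone stand-in replaces the real core simulator.
        def k_eff(boron_ppm):
            return 1.05 - 2.0e-5 * boron_ppm      # more boron -> less reactive

        c0, c1 = 0.0, 2000.0
        k0, k1 = k_eff(c0), k_eff(c1)
        while abs(k1 - 1.0) > 1e-9:
            c0, c1 = c1, c1 + (1.0 - k1) * (c1 - c0) / (k1 - k0)
            k0, k1 = k1, k_eff(c1)
        print(f"critical boron: {c1:.6f} ppm")    # 2500 ppm for this stand-in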

  20. Computer simulation of ion recombination in irradiated nonpolar liquids

    International Nuclear Information System (INIS)

    Bartczak, W.M.; Hummel, A.

    1986-01-01

    A review of the results of computer simulations of the diffusion-controlled recombination of ions is presented. Ions generated in clusters of two and three pairs of oppositely charged ions were considered. The recombination kinetics and the ion escape probability at infinite time, with and without an external electric field, were computed. These results are compared with calculations based on the single-pair theory. (author)

  1. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  2. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. This technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures in which a priori knowledge can be introduced on the evaluation of the size and shape of the activity domain to be reconstructed, and on the exact attenuation distribution. The method is so named because the relaxation factor used leads to fast convergence and provides noise filtering within a small number of iterations. The effectiveness of the method was tested on the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise correction for attenuation before quantitative study. Its implementation involves a rotating scintillation camera based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical, uniformly attenuated phantoms indicate that, within the range of the a priori calculated relaxation factor, a fast converging solution can always be found with a (contrast) accuracy of the order of 0.2 to 4%, depending on whether or not numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied.
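
    The abstract does not spell out the RIM update itself, so the sketch below uses a generic relaxed (Landweber-type) iteration to show where the relaxation factor and the a priori support constraint enter; the system matrix A and all numbers are toy values, not the authors' operator.

        import numpy as np

        def relaxed_iterative_recon(A, b, lam=0.5, n_iter=200, support=None):
            # x <- x + lam * A^T (b - A x), with lam the relaxation factor
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x += lam * A.T @ (b - A @ x)
                x = np.clip(x, 0.0, None)      # activity cannot be negative
                if support is not None:
                    x[~support] = 0.0          # a priori activity domain
            return x

        A = np.array([[1.0, 0.5], [0.3, 1.0]])   # toy 2-ray "projection" matrix
        b = A @ np.array([2.0, 1.0])             # projections of a 2-pixel object
        print(relaxed_iterative_recon(A, b))     # recovers approx. [2.0, 1.0]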

  3. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  4. Airborne gravimetry used in precise geoid computations by ring integration

    DEFF Research Database (Denmark)

    Kearsley, A.H.W.; Forsberg, René; Olesen, Arne Vestergaard

    1998-01-01

    Two detailed geoids have been computed in the region of North Jutland. The first computation used marine data in the offshore areas. For the second computation the marine data set was replaced by the sparser airborne gravity data resulting from the AG-MASCO campaign of September 1996. The results...... of comparisons of the geoid heights at on-shore geometric control showed that the geoid heights computed from the airborne gravity data matched in precision those computed using the marine data, supporting the view that airborne techniques have enormous potential for mapping those unsurveyed areas between...

  5. Distribution of Recombination Hotspots in the Human Genome – A Comparison of Computer Simulations with Real Data

    Science.gov (United States)

    Mackiewicz, Dorota; de Oliveira, Paulo Murilo Castro; Moss de Oliveira, Suzana; Cebrat, Stanisław

    2013-01-01

    Recombination is the main cause of genetic diversity. Thus, errors in this process can lead to chromosomal abnormalities. Recombination events are confined to narrow chromosome regions called hotspots, in which characteristic DNA motifs are found. Genomic analyses have shown that both recombination hotspots and DNA motifs are distributed unevenly along human chromosomes and are much more frequent in the subtelomeric regions of chromosomes than in their central parts. Clusters of motifs roughly follow the distribution of recombination hotspots, whereas single motifs show a negative correlation with the hotspot distribution. To model the phenomena related to recombination, we carried out Monte Carlo computer simulations of genome evolution. The simulations generated an uneven distribution of hotspots, dominated by the subtelomeric regions of chromosomes, and revealed that purifying selection eliminating defective alleles is strong enough to cause such a hotspot distribution. After a sufficiently long simulation time, the structure of the chromosomes reached a dynamic equilibrium in which the number and global distribution of both hotspots and defective alleles remained statistically unchanged, while their precise positions shifted. This resembles the dynamic structure of the human and chimpanzee genomes, where hotspots change their exact locations but the global distributions of recombination events are very similar. PMID:23776462

  6. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  7. Simulation of electronic structure Hamiltonians in a superconducting quantum computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kaicher, Michael; Wilhelm, Frank K. [Theoretical Physics, Saarland University, 66123 Saarbruecken (Germany); Love, Peter J. [Department of Physics, Haverford College, Haverford, Pennsylvania 19041 (United States)

    2015-07-01

    Quantum chemistry has become one of the most promising applications within the field of quantum computation. Simulating the electronic structure Hamiltonian (ESH) in the Bravyi-Kitaev (BK) basis to compute the ground state energies of atoms/molecules reduces the number of qubit operations needed to simulate a single fermionic operation to O(log(n)), as compared to O(n) in the Jordan-Wigner transformation. In this work we present the details of the BK transformation, show an example of implementation in a superconducting quantum computer architecture and compare it to the most recent quantum chemistry algorithms, suggesting a constant overhead.

  8. Interpretation of Cellular Imaging and AQP4 Quantification Data in a Single Cell Simulator

    Directory of Open Access Journals (Sweden)

    Seon B. Kim

    2014-03-01

    Full Text Available The goal of the present study is to integrate different datasets in cell biology to derive additional quantitative information about a gene or protein of interest within a single cell using computational simulations. We propose a novel prototype cell simulator as a quantitative tool to integrate datasets including dynamic information about transcript and protein levels and spatial information on protein trafficking in a complex cellular geometry. In order to represent the stochastic nature of transcription and gene expression, our cell simulator uses event-based stochastic simulations to capture transcription, translation, and dynamic trafficking events. In a reconstructed cellular geometry, a realistic microtubule structure is generated with a novel growth algorithm for simulating vesicular transport and trafficking events. In a case study, we investigate the change in quantitative expression levels of a water channel, aquaporin-4, in a single astrocyte cell upon pharmacological treatment. A Gillespie-based discrete-time approximation method yields stochastic fluctuations of mRNA and protein levels. In addition, we compute the dynamic trafficking of aquaporin-4 on microtubules in this reconstructed astrocyte. Computational predictions are validated with experimental data. The demonstrated cell simulator facilitates the analysis and prediction of protein expression dynamics.
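
    The event-based stochastic treatment of transcription and translation described above is typically a Gillespie-type algorithm. A minimal sketch of such a simulation for one gene follows; the four rate constants are illustrative assumptions, not values from the study.

        import random

        K_TX, K_TL = 10.0, 5.0   # transcription; translation per mRNA (1/h, assumed)
        G_M, G_P = 1.0, 0.1      # mRNA and protein degradation rates (1/h, assumed)

        def gillespie(t_end=48.0):
            t, m, p = 0.0, 0, 0
            while t < t_end:
                rates = [K_TX, K_TL * m, G_M * m, G_P * p]
                total = sum(rates)
                t += random.expovariate(total)        # waiting time to next event
                r, acc = random.uniform(0.0, total), 0.0
                for i, w in enumerate(rates):
                    acc += w
                    if r <= acc:
                        break
                m += (i == 0) - (i == 2)              # make or degrade one mRNA
                p += (i == 1) - (i == 3)              # make or degrade one protein
            return m, p

        print(gillespie())   # one stochastic realization; counts vary run to run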

  9. Precise calculations in simulations of the interaction of low energy neutrons with nano-dispersed media

    International Nuclear Information System (INIS)

    Artem’ev, V. A.; Nezvanov, A. Yu.; Nesvizhevsky, V. V.

    2016-01-01

    We discuss properties of the interaction of slow neutrons with nano-dispersed media and their application for neutron reflectors. In order to increase the accuracy of model simulation of the interaction of neutrons with nanopowders, we perform precise quantum mechanical calculation of potential scattering of neutrons on single nanoparticles using the method of phase functions. We compare results of precise calculations with those performed within first Born approximation for nanodiamonds with the radius of 2–5 nm and for neutron energies 3 × 10⁻⁷–10⁻³ eV. Born approximation overestimates the probability of scattering to large angles, while the accuracy of evaluation of integral characteristics (cross sections, albedo) is acceptable. Using Monte-Carlo method, we calculate albedo of neutrons from different layers of piled up diamond nanopowder.

  10. Precise calculations in simulations of the interaction of low energy neutrons with nano-dispersed media

    Science.gov (United States)

    Artem'ev, V. A.; Nezvanov, A. Yu.; Nesvizhevsky, V. V.

    2016-01-01

    We discuss properties of the interaction of slow neutrons with nano-dispersed media and their application for neutron reflectors. In order to increase the accuracy of model simulation of the interaction of neutrons with nanopowders, we perform precise quantum mechanical calculation of potential scattering of neutrons on single nanoparticles using the method of phase functions. We compare results of precise calculations with those performed within first Born approximation for nanodiamonds with the radius of 2–5 nm and for neutron energies 3 × 10⁻⁷–10⁻³ eV. Born approximation overestimates the probability of scattering to large angles, while the accuracy of evaluation of integral characteristics (cross sections, albedo) is acceptable. Using Monte-Carlo method, we calculate albedo of neutrons from different layers of piled up diamond nanopowder.

  11. Precise calculations in simulations of the interaction of low energy neutrons with nano-dispersed media

    Energy Technology Data Exchange (ETDEWEB)

    Artem’ev, V. A., E-mail: niitm@inbox.ru [Research Institute of Materials Technology (Russian Federation); Nezvanov, A. Yu. [Moscow State Industrial University (Russian Federation); Nesvizhevsky, V. V. [Institut Max von Laue—Paul Langevin (France)

    2016-01-15

    We discuss properties of the interaction of slow neutrons with nano-dispersed media and their application for neutron reflectors. In order to increase the accuracy of model simulation of the interaction of neutrons with nanopowders, we perform precise quantum mechanical calculation of potential scattering of neutrons on single nanoparticles using the method of phase functions. We compare results of precise calculations with those performed within first Born approximation for nanodiamonds with the radius of 2–5 nm and for neutron energies 3 × 10⁻⁷–10⁻³ eV. Born approximation overestimates the probability of scattering to large angles, while the accuracy of evaluation of integral characteristics (cross sections, albedo) is acceptable. Using Monte-Carlo method, we calculate albedo of neutrons from different layers of piled up diamond nanopowder.
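
    The phase-function calculation is too long to sketch here, but the first-Born side of the comparison is compact: for a nanoparticle modeled as a uniform sphere of scattering-length density, the Born amplitude is closed-form and the cross section follows by angular integration. The density and radius below are assumed order-of-magnitude inputs for nanodiamond, not the paper's parameters.

        import numpy as np

        RHO_B = 1.17e-5    # scattering-length density of diamond, 1/Å² (assumed)
        R = 25.0           # nanoparticle radius, Å (2.5 nm)

        def born_amplitude(q):
            # f(q) = -4*pi*rho_b*(sin(qR) - qR*cos(qR))/q^3 for a uniform sphere
            qs = np.maximum(q, 1e-9)                  # guard against q = 0
            qr = qs * R
            return -4.0 * np.pi * RHO_B * (np.sin(qr) - qr * np.cos(qr)) / qs**3

        def born_cross_section(energy_eV):
            k = 0.6947 * np.sqrt(energy_eV * 1.0e3)  # neutron wavenumber, 1/Å
            theta = np.linspace(1e-4, np.pi, 20000)
            q = 2.0 * k * np.sin(theta / 2.0)
            y = born_amplitude(q) ** 2 * np.sin(theta)
            return 2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta))

        for e in (3e-7, 1e-5, 1e-3):                  # eV, the range quoted above
            print(f"E = {e:.0e} eV  sigma_Born ≈ {born_cross_section(e):.3e} Å²")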

  12. Computer simulation of the genetic controller for the EB flue gas treatment process

    International Nuclear Information System (INIS)

    Moroz, Z.; Bouzyk, J.; Sowinski, M.; Chmielewski, A.G.

    2001-01-01

    The use of a genetic algorithm (GA) to drive a controller device for industrial flue gas purification systems employing electron beam irradiation has been studied. A suitably trained artificial neural network (ANN) was used as the mathematical model of the installation. Various cost functions and optimising strategies of the genetic code were tested. These computer simulations proved that an ANN + GA controller can be sufficiently precise and fast to be applied in real installations. (author)
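
    A minimal sketch of the GA loop follows, with a simple quadratic cost standing in for the trained ANN model of the installation; the encoding, operators and target values are illustrative, not those of the study.

        import random

        def surrogate_cost(params):
            # stand-in for the ANN plant model: distance from a (made-up) optimum
            target = [0.7, 0.3, 0.5]
            return sum((p - t) ** 2 for p, t in zip(params, target))

        def genetic_search(pop_size=30, n_gen=100, mut=0.1):
            pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
            for _ in range(n_gen):
                pop.sort(key=surrogate_cost)
                elite = pop[: pop_size // 2]          # truncation selection
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, 3)      # one-point crossover
                    child = [min(1.0, max(0.0, g + random.gauss(0.0, mut)))
                             for g in a[:cut] + b[cut:]]   # Gaussian mutation
                    children.append(child)
                pop = elite + children
            return min(pop, key=surrogate_cost)

        print(genetic_search())   # converges near [0.7, 0.3, 0.5]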

  13. A computational platform for modeling and simulation of pipeline georeferencing systems

    Energy Technology Data Exchange (ETDEWEB)

    Guimaraes, A.G.; Pellanda, P.C.; Gois, J.A. [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil); Roquette, P.; Pinto, M.; Durao, R. [Instituto de Pesquisas da Marinha (IPqM), Rio de Janeiro, RJ (Brazil); Silva, M.S.V.; Martins, W.F.; Camillo, L.M.; Sacsa, R.P.; Madeira, B. [Ministerio de Ciencia e Tecnologia (CT-PETRO2006MCT), Brasilia, DF (Brazil). Financiadora de Estudos e Projetos (FINEP). Plano Nacional de Ciencia e Tecnologia do Setor Petroleo e Gas Natural

    2009-07-01

    This work presents a computational platform for modeling and simulation of pipeline georeferencing systems, which was developed based on typical pipeline characteristics, on the dynamical modeling of the Pipeline Inspection Gauge (PIG) and on the analysis and implementation of an inertial navigation algorithm. The software environment for PIG trajectory simulation and navigation allows the user, through a friendly interface, to carry out evaluation tests of the inertial navigation system under different scenarios. Therefore, it is possible to define the required specifications of the pipeline georeferencing system components, such as: the required precision of the inertial sensors, the characteristics of the navigation auxiliary system (GPS-surveyed control points, odometers etc.), the pipeline construction information to be considered in order to improve the trajectory estimation precision, and the signal processing techniques most suitable for the treatment of inertial sensor data. The simulation results are analyzed through several performance metrics usually considered in inertial navigation applications, and 2D and 3D plots of the trajectory estimation error and of the recovered trajectory in the three coordinates are made available to the user. This paper presents the simulation platform and its constituent modules and defines their functional characteristics and interrelationships. (author)

  14. Computational fluid dynamics simulations of single-phase flow in a filter-press flow reactor having a stack of three cells

    International Nuclear Information System (INIS)

    Sandoval, Miguel A.; Fuentes, Rosalba; Walsh, Frank C.; Nava, José L.; Ponce de León, Carlos

    2016-01-01

    Highlights: • Computational fluid dynamics simulations in a filter-press stack of three cells. • The fluid velocity was different in each cell due to local turbulence. • The upper cell link pipe of the filter-press cell acts as a fluid mixer. • The fluid behaviour tends towards a continuous mixing flow pattern. • Close agreement between simulations and experimental data was achieved. - Abstract: Computational fluid dynamics (CFD) simulations were carried out for single-phase flow in a pre-pilot filter-press flow reactor with a stack of three cells. Velocity profiles and streamlines were obtained by solving the Reynolds-Averaged Navier-Stokes (RANS) equations with a standard k − ε turbulence model. The flow behaviour shows the appearance of jet flow at the entrance to each cell. At lengths of 12 to 15 cm along the cell channels, a plug flow pattern develops at all mean linear flow rates studied here, 1.2 ≤ u ≤ 2.1 cm s⁻¹. The magnitude of the velocity profiles in each cell was different, due to the turbulence generated by the change of flow direction in the last fluid manifold. Residence time distribution (RTD) simulations indicated that the fluid behaviour tends towards a continuous mixing flow pattern, owing to flow at the output of each cell across the upper cell link pipe, which acts as a mixer. Close agreement between simulated and experimental RTDs was obtained.
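
    A common way to express how close an RTD lies to perfect mixing is the tanks-in-series model, E(theta) = N^N theta^(N-1) exp(-N theta)/(N-1)!, where N = 1 is a single perfectly mixed vessel and large N approaches plug flow. The sketch below is a generic diagnostic of this kind, not the authors' fitting procedure.

        import math

        def rtd_tanks_in_series(theta, n):
            # dimensionless RTD of n ideal stirred tanks in series
            return n**n * theta**(n - 1) * math.exp(-n * theta) / math.factorial(n - 1)

        for n in (1, 3, 10):      # 1 = continuous mixing; larger n -> plug flow
            curve = [rtd_tanks_in_series(t / 100.0, n) for t in range(1, 400)]
            print(f"n = {n:2d}  peak E(theta) = {max(curve):.2f}")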

  15. Numerical simulation of mechatronic sensors and actuators: finite elements for computational multiphysics

    CERN Document Server

    Kaltenbacher, Manfred

    2015-01-01

    Like the previous editions, the third edition of this book combines the detailed physical modeling of mechatronic systems and their precise numerical simulation using the Finite Element (FE) method. The basic chapter on the FE method has been enhanced and now also provides a description of higher-order finite elements (both nodal and edge elements) and a detailed discussion of non-conforming mesh techniques. The author enhances and improves many discussions of principles and methods. In particular, more emphasis is put on the description of single fields by adding the flow field. Corresponding to these fields, the book is augmented with a new chapter on coupled flow-structural mechanical systems. Thereby, the discussion of computational aeroacoustics is extended towards perturbation approaches, which allow a decomposition of flow and acoustic quantities within the flow region. Last but not least, applications are updated and restructured so that the book meets mode...

  16. Atomic-level computer simulation

    International Nuclear Information System (INIS)

    Adams, J.B.; Rockett, Angus; Kieffer, John; Xu Wei; Nomura, Miki; Kilian, K.A.; Richards, D.F.; Ramprasad, R.

    1994-01-01

    This paper provides a broad overview of the methods of atomic-level computer simulation. It discusses methods of modelling atomic bonding, and computer simulation methods such as energy minimization, molecular dynamics, Monte Carlo, and lattice Monte Carlo. ((orig.))
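
    Of the methods listed, lattice Monte Carlo is the easiest to sketch. The minimal Metropolis simulation of a 2-D Ising model below shows the accept/reject step shared by most Monte Carlo schemes; lattice size and temperature are arbitrary illustrative choices.

        import math, random

        L, T = 20, 2.0            # lattice size; temperature in units of J/k_B
        spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

        def metropolis_sweep():
            for _ in range(L * L):
                i, j = random.randrange(L), random.randrange(L)
                nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nn          # energy cost of one flip
                if dE <= 0.0 or random.random() < math.exp(-dE / T):
                    spins[i][j] *= -1                # accept the flip

        for _ in range(200):
            metropolis_sweep()
        m = abs(sum(map(sum, spins))) / (L * L)
        print(f"|magnetization| per spin at T = {T}: {m:.2f}")   # ordered below T_c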

  17. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Science.gov (United States)

    Kovac, André David; Koall, Maximilian; Pipa, Gordon; Toutounji, Hazem

    2016-01-01

    Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. Evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.
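
    A heavily simplified sketch of the underlying architecture may help: a delay line of virtual nodes driven through one nonlinearity by a masked input, with a linear readout trained by least squares on a short-term memory task. The discretized update, the parameters and the task are illustrative assumptions, not the authors' extended system with trained feedback.

        import numpy as np

        N_V, THETA = 50, 0.2                     # virtual nodes; node spacing
        rng = np.random.default_rng(0)
        mask = rng.choice((-1.0, 1.0), N_V)      # random binary input mask

        def reservoir_states(u, eta=0.8, gamma=0.5):
            x = np.zeros(N_V)                    # one full delay loop of nodes
            states = []
            for s in u:                          # one input sample per delay period
                x = ((1.0 - THETA) * np.roll(x, 1)
                     + THETA * np.tanh(eta * x + gamma * mask * s))
                states.append(x.copy())
            return np.array(states)

        u = rng.uniform(-1.0, 1.0, 500)
        S = reservoir_states(u)
        y = np.roll(u, 5)                        # target: input from 5 steps ago
        W, *_ = np.linalg.lstsq(S[50:], y[50:], rcond=None)
        print(f"memory-task MSE: {np.mean((S[50:] @ W - y[50:])**2):.4f}")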

  18. GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Liu, Yangchuan; Tang, Yuguo; Gao, Xin

    2017-12-01

    The GATE Monte Carlo simulation platform has good application prospects for treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, the simulation time decreased by factors of 41 and 32 compared to the single-worker-node case and the single-threaded case, respectively. The test of Hadoop's fault tolerance showed that simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
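
    The Map-side decomposition amounts to splitting the requested number of primaries across self-contained sub-macros, one per task, along the following lines; the macro commands in the template are illustrative placeholders, not verbatim GATE syntax.

        def split_macro(template, total_events, n_jobs):
            # one self-contained sub-macro per Map task, each with its own seed
            per_job, rem = divmod(total_events, n_jobs)
            macros = []
            for job in range(n_jobs):
                n = per_job + (1 if job < rem else 0)
                macros.append(template.format(events=n, seed=1000 + job))
            return macros

        template = ("/random/setSeed {seed}\n"      # placeholder command names
                    "/run/beamOn {events}\n")
        subs = split_macro(template, 10_000_000, 64)
        print(subs[0])   # first sub-macro: 156250 events, seed 1000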

  19. N-of-1-pathways MixEnrich: advancing precision medicine via single-subject analysis in discovering dynamic changes of transcriptomes.

    Science.gov (United States)

    Li, Qike; Schissler, A Grant; Gardeux, Vincent; Achour, Ikbel; Kenost, Colleen; Berghout, Joanne; Li, Haiquan; Zhang, Hao Helen; Lussier, Yves A

    2017-05-24

    Transcriptome analytic tools are commonly used across patient cohorts to develop drugs and predict clinical outcomes. However, as precision medicine pursues more accurate and individualized treatment decisions, these methods are not designed to address single-patient transcriptome analyses. We previously developed and validated the N-of-1-pathways framework using two methods, Wilcoxon and Mahalanobis Distance (MD), for personal transcriptome analysis derived from a pair of samples of a single patient. Although both methods uncover concordantly dysregulated pathways, they are not designed to detect dysregulated pathways with both up- and down-regulated genes (bidirectional dysregulation), which are ubiquitous in biological systems. We developed N-of-1-pathways MixEnrich, a mixture model followed by a gene set enrichment test, to uncover bidirectionally and concordantly dysregulated pathways one patient at a time. We assess its accuracy in a comprehensive simulation study and in an RNA-Seq data analysis of head and neck squamous cell carcinomas (HNSCCs). In the presence of bidirectionally dysregulated genes in the pathway or of high background noise, MixEnrich substantially outperforms previous single-subject transcriptome analysis methods, both in the simulation study and in the HNSCC data analysis (ROC curves; higher true-positive rates; lower false-positive rates). Bidirectional and concordant dysregulated pathways uncovered by MixEnrich in each patient largely overlapped with the quasi-gold standard compared to other single-subject and cohort-based transcriptome analyses. The greater performance of MixEnrich presents an advantage over previous methods to meet the promise of providing accurate personal transcriptome analysis to support precision medicine at point of care.
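
    The enrichment step that follows the mixture model can be illustrated with a hypergeometric test: given the genes flagged as dysregulated, a pathway is scored by the probability of seeing at least the observed overlap by chance. The counts below are invented toy numbers, and MixEnrich's actual test statistic may differ.

        from scipy.stats import hypergeom

        def pathway_enrichment(n_genes, n_flagged, pathway_size, overlap):
            # P(X >= overlap) when drawing `pathway_size` genes from `n_genes`,
            # of which `n_flagged` were marked dysregulated by the mixture model
            return hypergeom.sf(overlap - 1, n_genes, n_flagged, pathway_size)

        # toy numbers: 20000 genes, 1500 flagged, pathway of 100 with 25 hits
        print(f"enrichment p = {pathway_enrichment(20000, 1500, 100, 25):.2e}")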

  20. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  1. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  2. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  3. Single-particle and collective dynamics of methanol confined in carbon nanotubes: a computer simulation study

    International Nuclear Information System (INIS)

    Garberoglio, Giovanni

    2010-01-01

    We present the results of computer simulations of methanol confined in carbon nanotubes. Different levels of confinement were identified as a function of the nanotube radius and characterized using a pair-distribution function adapted to the cylindrical geometry of these systems. Dynamical properties of methanol were also analysed as a function of nanotube size, at the level of both single-particle and collective properties. We found that confinement in narrow carbon nanotubes strongly affects the dynamical properties of methanol with respect to the bulk phase, due to the strong interaction with the carbon nanotube. In the other cases, confined methanol shows properties quite similar to those of the bulk phase. These phenomena are related to the peculiar hydrogen-bonded network of methanol and are compared to the behaviour of water confined under similar conditions. The effect of nanotube flexibility on the dynamical properties of confined methanol is also discussed.

  4. GEANT4 simulations for Proton computed tomography applications

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim T. de; Evseev, Ivan; Schelin, Hugo R.; Shtejer Diaz, Katherin; Lopes, Ricardo T.

    2011-01-01

    Proton radiation therapy is a highly precise form of cancer treatment. In existing proton treatment centers, dose calculations are performed based on X-ray computed tomography (CT). Alternatively, one could image the tumor directly with proton CT (pCT). Proton beams in medical applications deal with relatively thick targets like the human head or trunk. Thus, the fidelity of proton computed tomography (pCT) simulations as a tool for proton therapy planning depends in the general case on the accuracy of results obtained for the proton interaction with thick absorbers. As shown previously, GEANT4 simulations of proton energy spectra after passing through thick absorbers do not agree well with existing experimental data. The spectra simulated for the Bethe-Bloch domain showed an unexpected sensitivity to the choice of low-energy electromagnetic models during code execution. These observations were made with GEANT4 version 8.2 during our simulations for pCT. This work describes in more detail the simulations of proton passage through gold absorbers of varied thickness. The simulations were done by modifying only the geometry in the hadron therapy example, and for all available choices of the electromagnetic physics models. As the most probable reasons for these effects are some specific feature of the code or some specific implicit parameters in the GEANT4 manual, we continued our study with version 9.2 of the code. Some improvements in comparison with our previous results were obtained. The simulations were performed considering further applications for pCT development. The authors want to thank CNPq, CAPES and 'Fundacao Araucaria' for financial support of this work. (Author)

  5. EDUCATIONAL COMPUTER SIMULATION EXPERIMENT «REAL-TIME SINGLE-MOLECULE IMAGING OF QUANTUM INTERFERENCE»

    Directory of Open Access Journals (Sweden)

    Alexander V. Baranov

    2015-01-01

    Full Text Available Taking part in organized project activities, students of the technical university create virtual physics laboratories. The article gives an example of such a student project: computer modeling and visualization of one of the most remarkable manifestations of reality, the quantum interference of particles. A real experiment with heavy organic fluorescent molecules is used as the prototype for this computer simulation. The student software product can be used in the informational space of the open education system.

  6. Faster quantum chemistry simulation on fault-tolerant quantum computers

    International Nuclear Information System (INIS)

    Cody Jones, N; McMahon, Peter L; Yamamoto, Yoshihisa; Whitfield, James D; Yung, Man-Hong; Aspuru-Guzik, Alán; Van Meter, Rodney

    2012-01-01

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay–Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ϵ, arbitrary single-qubit gates can be produced fault-tolerantly and using a restricted set of gates in time which is O(log(1/ϵ)) or O(log log(1/ϵ)); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride. (paper)

  7. Parallel quantum computing in a single ensemble quantum computer

    International Nuclear Information System (INIS)

    Long Guilu; Xiao, L.

    2004-01-01

    We propose a parallel quantum computing mode for ensemble quantum computers. In this mode, some qubits are in pure states while other qubits are in mixed states. It enables a single ensemble quantum computer to perform 'single-instruction, multiple-data' parallel computation. Parallel quantum computing can provide additional speedup in Grover's algorithm and Shor's algorithm. In addition, it also makes fuller use of qubit resources in an ensemble quantum computer. As a result, some qubits discarded in the preparation of an effective pure state in the Schulman-Vazirani and the Cleve-DiVincenzo algorithms can be reutilized.

  8. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  9. Single and Multiple UAV Cyber-Attack Simulation and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Y. Javaid

    2015-02-01

    Full Text Available Usage of ground, air and underwater unmanned vehicles (UGV, UAV and UUV) has increased exponentially in the recent past, with industries producing thousands of these unmanned vehicles every year. With the ongoing discussion of the integration of UAVs into the US National Airspace, the need for a cost-effective way to verify the security and resilience of a group of communicating UAVs under attack has become very important. The answer to this need is a simulation testbed which can be used to simulate the UAV network (UAVNet). One such attempt is UAVSim (Unmanned Aerial Vehicle Simulation), a testbed developed at the University of Toledo. It has the capability of simulating large UAV networks as well as small UAV networks with a large number of attack nodes. In this paper, we analyse the performance of the simulation testbed for two attacks, targeting single and multiple UAVs. Traditional, generic computing resources available in a regular computer laboratory were used. Various evaluation results are presented and analysed, suggesting the suitability of UAVSim for UAVNet attack and swarm simulation applications.

  10. A precise pointing nanopipette for single-cell imaging via electroosmotic injection.

    Science.gov (United States)

    Lv, Jian; Qian, Ruo-Can; Hu, Yong-Xu; Liu, Shao-Chuang; Cao, Yue; Zheng, Yong-Jie; Long, Yi-Tao

    2016-11-24

    The precise transportation of fluorescent probes to the designated location in living cells is still a challenge. Here, we present a new addition to nanopipettes as a powerful tool to deliver fluorescent molecules to a given place in a single cell by electroosmotic flow, indicating favorable potential for further application in single-cell imaging.

  11. Dynamics of number systems: computation with arbitrary precision

    CERN Document Server

    Kurka, Petr

    2016-01-01

    This book is a source of valuable and useful information on the topics of dynamics of number systems and scientific computation with arbitrary precision. It is addressed to scholars, scientists and engineers, and graduate students. The treatment is elementary and self-contained, with relevance both for theory and applications. The basic prerequisite of the book is linear algebra and matrix calculus.

  12. Reproducibility in Computational Neuroscience Models and Simulations

    Science.gov (United States)

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective: Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods: Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results: Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: (1) standardized neural simulators; (2) shared computational resources; (3) declarative model descriptors, ontologies and standardized annotations; (4) model sharing repositories and sharing standards. Conclusion: A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance: Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  13. Single-sector thermophysiological human simulator

    International Nuclear Information System (INIS)

    Psikuta, Agnieszka; Richards, Mark; Fiala, Dusan

    2008-01-01

    Thermal sweating manikins are used to analyse the heat and mass transfer phenomena in the skin–clothing–environment system. However, the limiting factor of present thermal manikins is their inability to simulate adequately the human thermal behaviour, which has a significant effect on the clothing microenvironment. A mathematical model of the human physiology was, therefore, incorporated into the system control to simulate human thermoregulatory responses and the perception of thermal comfort over a wide range of environmental and personal conditions. Thereby, the computer model provides the physiological intelligence, while the hardware is used to measure the required calorimetric states relevant to the human heat exchange with the environment. This paper describes the development of a single-sector thermophysiological human simulator, which consists of a sweating heated cylinder 'Torso' coupled with the iesd-Fiala multi-node model of human physiology and thermal comfort. Validation tests conducted for steady-state and, to some extent, transient conditions ranging from cold to hot revealed good agreement with the corresponding experimental results obtained for semi-nude subjects. The new coupled system enables overall physiological and comfort responses, health risk and survival conditions to be predicted for adult humans for various scenarios

  14. Computer code for simulating pressurized water reactor core

    International Nuclear Information System (INIS)

    Serrano, A.M.B.

    1978-01-01

    A computer code was developed for the simulation of the steady-state and transient behaviour of the average channel of a Pressurized Water Reactor core. Point kinetics equations were used, with the reactivity calculated from average temperatures in the channel including fuel and moderator temperature feedbacks. The radial heat conduction equation in the fuel was solved numerically. The thermodynamic properties of the coolant were obtained by solving the fundamental conservation equations (mass, energy and momentum). The gap and clad were treated as a resistance added to the film coefficient. The fuel equations were decoupled from the coolant equations. The program permits changes in the heat transfer correlations and the flow patterns along the coolant channel. Various tests were performed to determine the steady-state and transient response of the PWR core simulator, obtaining results with adequate precision. (author)
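
    The point kinetics part of such a simulator can be sketched with one delayed-neutron group and explicit Euler integration; the kinetics parameters and the reactivity step below are illustrative, and the real code additionally couples fuel heat conduction and the coolant conservation equations.

        BETA, GEN_TIME, LAM = 0.0065, 1.0e-4, 0.08   # beta, Lambda (s), lambda (1/s)

        def point_kinetics(rho, t_end=10.0, dt=1.0e-4):
            n = 1.0                                  # relative power
            c = BETA * n / (GEN_TIME * LAM)          # equilibrium precursors
            for _ in range(int(t_end / dt)):
                dn = ((rho - BETA) / GEN_TIME * n + LAM * c) * dt
                dc = (BETA / GEN_TIME * n - LAM * c) * dt
                n, c = n + dn, c + dc
            return n

        # +100 pcm step: prompt jump, then a slow rise governed by the precursors
        print(point_kinetics(rho=0.001))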

  15. Persistent Memory in Single Node Delay-Coupled Reservoir Computing.

    Directory of Open Access Journals (Sweden)

    André David Kovac

    Full Text Available Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. Evidence is mounting, not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that comprise them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity, by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. This architecture also exhibits an internal memory which fades in time, an important prerequisite to the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. In order to overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require nonfading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically-inspired ultrafast computing device with extended functionality.

  16. CloudMC: a cloud computing application for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-01-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice. (note)
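
    The reported scaling can be checked against Amdahl's law, S(N) = 1/((1 - p) + p/N). Inverting it for the parallel fraction p implied by the observed 37× speedup on 64 instances gives a rough basis for extrapolation; this is a back-of-envelope sketch, not a measurement.

        def amdahl_speedup(p, n):
            # Amdahl's law for parallel fraction p on n instances
            return 1.0 / ((1.0 - p) + p / n)

        def parallel_fraction(speedup, n):
            # invert Amdahl's law for p
            return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

        p = parallel_fraction(37.0, 64)                       # reported figures
        print(f"implied parallel fraction: {p:.3f}")          # approx. 0.988
        print(f"predicted speedup on 128:  {amdahl_speedup(p, 128):.1f}x")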

  17. CloudMC: a cloud computing application for Monte Carlo simulation.

    Science.gov (United States)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application-developed in Windows Azure®, the platform of the Microsoft® cloud-for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code in which the simulations are based-the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with penelope were performed on different instance (virtual machine) sizes, and for different number of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay per usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.

  18. Substructuring in the implicit simulation of single point incremental sheet forming

    NARCIS (Netherlands)

    Hadoush, A.; van den Boogaard, Antonius H.

    2009-01-01

    This paper presents a direct substructuring method to reduce the computing time of implicit simulations of single point incremental forming (SPIF). Substructuring is used to divide the finite element (FE) mesh into several non-overlapping parts. Based on the hypothesis that plastic deformation is

  19. Simulation of quantum computers

    NARCIS (Netherlands)

    De Raedt, H; Michielsen, K; Hams, AH; Miyashita, S; Saito, K; Landau, DP; Lewis, SP; Schuttler, HB

    2001-01-01

    We describe a simulation approach to study the functioning of Quantum Computer hardware. The latter is modeled by a collection of interacting spin-1/2 objects. The time evolution of this spin system maps one-to-one to a quantum program carried out by the Quantum Computer. Our simulation software

  20. Simulation of quantum computers

    NARCIS (Netherlands)

    Raedt, H. De; Michielsen, K.; Hams, A.H.; Miyashita, S.; Saito, K.

    2000-01-01

    We describe a simulation approach to study the functioning of Quantum Computer hardware. The latter is modeled by a collection of interacting spin-1/2 objects. The time evolution of this spin system maps one-to-one to a quantum program carried out by the Quantum Computer. Our simulation software

  1. Computer simulation studies in condensed-matter physics 5. Proceedings

    International Nuclear Information System (INIS)

    Landau, D.P.; Mon, K.K.; Schuettler, H.B.

    1993-01-01

    As the role of computer simulations began to increase in importance, we sensed a need for a "meeting place" for both experienced simulators and neophytes to discuss new techniques and results in an environment which promotes extended discussion. As a consequence of these concerns, the Center for Simulational Physics established an annual workshop on Recent Developments in Computer Simulation Studies in Condensed-Matter Physics. This year's workshop was the fifth in the series, and the interest which the scientific community has shown demonstrates quite clearly the useful purpose the series has served. The workshop was held at the University of Georgia, February 17-21, 1992, and these proceedings form a record of the workshop, published with the goal of timely dissemination of the papers to a wider audience. The proceedings are divided into four parts. The first part contains invited papers which deal with simulational studies of classical systems and includes an introduction to some new simulation techniques and special purpose computers as well. A separate section of the proceedings is devoted to invited papers on quantum systems, including new results for strongly correlated electron and quantum spin models. The third section comprises a single invited description of a newly developed software shell designed for running parallel programs. The contributed presentations comprise the final chapter. (orig.). 79 figs

  2. Increasing the computational speed of flash calculations with applications for compositional, transient simulations

    DEFF Research Database (Denmark)

    Rasmussen, Claus P.; Krejbjerg, Kristian; Michelsen, Michael Locht

    2006-01-01

    Approaches are presented for reducing the computation time spent on flash calculations in compositional, transient simulations. In a conventional flash calculation, the majority of the simulation time is spent on stability analysis, even for systems far into the single-phase region. A criterion has...

  3. Effect of phantom dimension variation on Monte Carlo simulation speed and precision

    International Nuclear Information System (INIS)

    Lin Hui; Xu Yuanying; Xu Liangfeng; Li Guoli; Jiang Jia

    2007-01-01

    There is a correlation between Monte Carlo simulation speed and the phantom dimensions. The effect of the phantom dimensions on Monte Carlo simulation speed and precision was studied with the fast Monte Carlo code DPM. The results showed that reducing the thickness of the phantom increased the efficiency exponentially without compromising precision, except at the tail of the dose distribution. Reducing the width of the phantom to outside the penumbra had a negligible effect on efficiency, whereas reducing it to within the penumbra increased the efficiency to some extent without loss of precision. This result was applied to a clinical head case, and a remarkable increase in efficiency was obtained. (authors)

  4. Dual energy quantitative computed tomography (QCT). Precision of the mineral density measurements

    International Nuclear Information System (INIS)

    Braillon, P.; Bochu, M.

    1989-01-01

    The improvement that could be obtained in quantitative bone mineral measurements by dual energy computed tomography was tested in vitro. From the results of 15 mineral density measurements (in mg Ca/cm³) done on a precise lumbar spine phantom (Hologic) and referred to the values obtained on the same slices on a Siemens Osteo-CT phantom, the precision found was 0.8%, six times better than the precision calculated from the uncorrected measured values. [fr]

  5. Numerical Simulation of single-stage axial fan operation under dusty flow conditions

    Science.gov (United States)

    Minkov, L. L.; Pikushchak, E. V.

    2017-11-01

    Assessment of the aerodynamic efficiency of a single-stage axial flow fan under dusty flow conditions, based on numerical simulation using the computational package Ansys-Fluent, is proposed. The influence of the dust volume fraction on the dependence of the air volume flow rate and the pressure drop on the rotational speed of the rotor is demonstrated. Matching functions for formulas describing the pressure drop and volume flow rate as functions of rotor speed and dust content are obtained by numerical simulation for the single-stage axial fan. It is shown that the aerodynamic efficiency of the single-stage axial flow fan decreases exponentially with increasing volume content of dust in the air.
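
    To make the reported trend concrete, the following minimal sketch (not from the paper) fits an exponential matching function eta(phi) = eta0*exp(-k*phi) to hypothetical efficiency-versus-dust-fraction data; all numbers and the parameter names eta0 and k are illustrative assumptions:

    ```python
    import numpy as np

    # Hypothetical measurements: dust volume fraction vs. fan efficiency
    # (values are illustrative only, not taken from the paper).
    phi = np.array([0.00, 0.01, 0.02, 0.04, 0.08])   # dust volume fraction
    eta = np.array([0.82, 0.74, 0.67, 0.55, 0.37])   # aerodynamic efficiency

    # Model eta(phi) = eta0 * exp(-k * phi); linearize with a log transform
    # and fit the two parameters by least squares.
    slope, log_eta0 = np.polyfit(phi, np.log(eta), 1)
    eta0, k = np.exp(log_eta0), -slope
    print(f"eta(phi) ~ {eta0:.3f} * exp(-{k:.1f} * phi)")
    ```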

  6. A review of computer-aided oral and maxillofacial surgery: planning, simulation and navigation.

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Sun, Yi; Politis, Constantinus

    2016-11-01

    Currently, oral and maxillofacial surgery (OMFS) still poses a significant challenge for surgeons due to the anatomic complexity and limited field of view of the oral cavity. With the rapid development of computer technologies, computer-aided surgery has been widely used for minimizing the risks and improving the precision of surgery. Areas covered: The major goal of this paper is to provide a comprehensive reference source on the current and future development of computer-aided OMFS, including surgical planning, simulation and navigation, for relevant researchers. Expert commentary: Compared with traditional OMFS, computer-aided OMFS overcomes the disadvantage that treatment of the anatomically complex maxillofacial region depends almost exclusively on the experience of the surgeon.

  7. Computer simulation of displacement cascades in copper

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1983-06-01

    More than 500 displacement cascades in copper have been generated with the computer simulation code MARLOWE over an energy range pertinent to both fission and fusion neutron spectra. Three-dimensional graphical depictions of selected cascades, as well as quantitative analysis of cascade shapes and sizes and defect densities, illustrate cascade behavior as a function of energy. With increasing energy, the transition from production of single compact damage regions to widely spaced multiple damage regions is clearly demonstrated

  8. Precise Orientation of a Single C60 Molecule on the Tip of a Scanning Probe Microscope

    Science.gov (United States)

    Chiutu, C.; Sweetman, A. M.; Lakin, A. J.; Stannard, A.; Jarvis, S.; Kantorovich, L.; Dunn, J. L.; Moriarty, P.

    2012-06-01

    We show that the precise orientation of a C60 molecule which terminates the tip of a scanning probe microscope can be determined with atomic precision from submolecular contrast images of the fullerene cage. A comparison of experimental scanning tunneling microscopy data with images simulated using computationally inexpensive Hückel theory provides a robust method of identifying molecular rotation and tilt at the end of the probe microscope tip. Noncontact atomic force microscopy resolves the atoms of the C60 cage closest to the surface for a range of molecular orientations at tip-sample separations where the molecule-substrate interaction potential is weakly attractive. Measurements of the C60-C60 pair potential acquired using a fullerene-terminated tip are in excellent agreement with theoretical predictions based on a pairwise summation of the van der Waals interactions between C atoms in each cage, i.e., the Girifalco potential [L. Girifalco, J. Phys. Chem. 95, 5370 (1991), doi:10.1021/j100167a002].
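
    The referenced Girifalco potential arises from a pairwise summation of C-C van der Waals terms over the two cages. The sketch below (not from the paper) sums Lennard-Jones terms over two 60-site model cages; the eps, sigma and cage-radius values are illustrative assumptions:

    ```python
    import numpy as np

    def fibonacci_sphere(n, radius):
        """Roughly uniform points on a sphere, standing in for a C60 cage."""
        i = np.arange(n)
        ang = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle spacing
        z = 1.0 - 2.0 * (i + 0.5) / n
        r = np.sqrt(1.0 - z * z)
        return radius * np.column_stack([r * np.cos(ang), r * np.sin(ang), z])

    def cage_cage_energy(d, eps=0.0024, sigma=0.34, r_cage=0.355):
        """Sum C-C Lennard-Jones terms over two 60-site cages a distance d
        apart (nm); eps (eV), sigma and r_cage are illustrative values."""
        a = fibonacci_sphere(60, r_cage)
        b = fibonacci_sphere(60, r_cage) + np.array([d, 0.0, 0.0])
        r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        sr6 = (sigma / r) ** 6
        return np.sum(4.0 * eps * (sr6 ** 2 - sr6))

    for d in (0.95, 1.00, 1.10):   # centre-to-centre separations in nm
        print(d, cage_cage_energy(d))
    ```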

  9. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray
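
    The core operation such a simulator distributes is the application of small unitary gates to a 2^n-amplitude state vector; a minimal serial sketch of that operation (the distributed bookkeeping of the actual software is far more involved):

    ```python
    import numpy as np

    def apply_1q(state, gate, target, n):
        """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
        psi = state.reshape([2] * n)
        psi = np.moveaxis(psi, target, 0)
        psi = np.tensordot(gate, psi, axes=([1], [0]))
        return np.moveaxis(psi, 0, target).reshape(-1)

    n = 3
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                  # start in |000>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
    for q in range(n):
        state = apply_1q(state, H, q, n)
    print(np.abs(state) ** 2)                       # uniform over all 8 basis states
    ```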

  10. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    Science.gov (United States)

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding processes for the fabrication of discontinuous freeform microlens arrays is proposed. Compared with slow tool servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that the surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. By employing this method, each micro-lenslet was regarded as a microstructure cell, with the axis of the virtual spindle passing through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was employed to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was proven accurate in detecting an infrared wavefront by both experiment and numerical simulation. The combined results showed that precision compression molding of chalcogenide glasses could be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  11. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing simulations in multi-scale structural analysis.

  12. Improving the precision and accuracy of Monte Carlo simulation in positron emission tomography

    International Nuclear Information System (INIS)

    Picard, Y.; Thompson, C.J.; Marrett, S.

    1992-01-01

    This paper reports that most of the gamma-rays generated following positron annihilation in subjects undergoing PET studies never reach the detectors of a positron imaging device. Similarly, simulations of PET systems waste considerable time generating events which will never be detected. Simulation efficiency (in terms of detected photon pairs per CPU second) can be improved by generating only rays whose angular distribution gives them a reasonable chance of ultimate detection. For many events in which the original rays are directed generally towards the detectors, the precision of the final simulation can be improved by recycling them without reseeding the random number generator, giving them a second or greater chance of being detected. For simulation programs which cascade the simulation process into source, collimator, and detection phases, recycling can improve the precision of the simulation without requiring larger files of events from the source-distribution simulation phase.
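
    A hedged sketch of the two ideas with a simplified placeholder geometry: emission directions are drawn only from a band that can plausibly reach the detector ring, and each event is recycled (here by redrawing only its azimuth, without reseeding the generator) to get further chances of detection:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def detected(cos_theta, phi):
        # Placeholder acceptance test standing in for real ring geometry.
        return (np.abs(cos_theta) < 0.2) & (np.cos(phi) > -0.5)

    n_events, n_recycles = 100_000, 4
    # Restrict the polar angle to a band around the transaxial plane instead
    # of sampling the full isotropic sphere (absolute rates would then need
    # a weight equal to the sampled band fraction).
    cos_theta = rng.uniform(-0.3, 0.3, n_events)
    hits = 0
    for _ in range(1 + n_recycles):
        # Each pass recycles the polar angles and redraws only the azimuth,
        # continuing the same random stream.
        phi = rng.uniform(0.0, 2.0 * np.pi, n_events)
        hits += np.count_nonzero(detected(cos_theta, phi))
    print("detections per generated event:", hits / n_events)
    ```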

  13. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid

    International Nuclear Information System (INIS)

    Thiam, Ch.O.

    2007-10-01

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.), which needs to be precise and fast. The G.A.T.E. platform for Monte-Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel the validation of the G.A.T.E. platform for the modelling of low-energy electron and photon sources and the optimized use of grid infrastructures to reduce simulation computing times. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposited during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine-125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig; and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary to compute a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G

  14. Simulating Biomass Fast Pyrolysis at the Single Particle Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ciesielski, Peter [National Renewable Energy Laboratory (NREL)]; Wiggins, Gavin [ORNL]; Daw, C Stuart [ORNL]; Jakes, Joseph E. [U.S. Forest Service, Forest Products Laboratory, Madison, Wisconsin, USA]

    2017-07-01

    Simulating fast pyrolysis at the scale of single particles allows for the investigation of the impacts of feedstock-specific parameters such as particle size, shape, and species of origin. For this reason particle-scale modeling has emerged as an important tool for understanding how variations in feedstock properties affect the outcomes of pyrolysis processes. The origins of feedstock properties are largely dictated by the composition and hierarchical structure of biomass, from the microstructural porosity to the external morphology of milled particles. These properties may be accounted for in simulations of fast pyrolysis by several different computational approaches depending on the level of structural and chemical complexity included in the model. The predictive utility of particle-scale simulations of fast pyrolysis can still be enhanced substantially by advancements in several areas. Most notably, considerable progress would be facilitated by the development of pyrolysis kinetic schemes that are decoupled from transport phenomena, predict product evolution from whole-biomass with increased chemical speciation, and are still tractable with present-day computational resources.

  15. Single-Photon Emission Computed Tomography (SPECT) in childhood epilepsy

    International Nuclear Information System (INIS)

    Gulati, Sheffali; Kalra, Veena; Bal, C.S.

    2000-01-01

    The success of epilepsy surgery is determined strongly by the precise location of the epileptogenic focus. The information from clinical electrophysiological data needs to be strengthened by functional neuroimaging techniques. Single photon emission computed tomography (SPECT), which is available locally, has proved useful as a localising investigation. It evaluates regional cerebral blood flow, and the comparison between ictal and interictal blood flow on SPECT has proved to be a sensitive nuclear marker for the site of seizure onset. Many studies have shown SPECT to localise lesions with greater precision than interictal scalp EEG or anatomic neuroimaging. SPECT is of definitive value in temporal lobe epilepsy; its role in extratemporal lobe epilepsy is less clearly defined. It is useful in various other generalized and partial seizure disorders, including epileptic syndromes, and helps in differentiating pseudoseizures from true seizures. Newer radiopharmaceutical agents with specific neurochemical properties and longer shelf lives are under investigation. Subtraction ictal SPECT co-registered to MRI is a promising new modality. (author)

  16. Biomass Gasifier for Computer Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Hansson, Jens; Leveau, Andreas; Hulteberg, Christian [Nordlight AB, Limhamn (Sweden)

    2011-08-15

    This report is an effort to summarize the existing data on biomass gasifiers, as the authors have taken part in various projects aiming at computer simulations of systems that include biomass gasification. Reliable input data is paramount for any computer simulation, but so far there is no easily accessible biomass gasifier database available for this purpose. This study aims at benchmarking current and past gasifier systems in order to create a comprehensive database for computer simulation purposes. The result of the investigation is presented in a Microsoft Excel sheet, so that the user can easily implement the data in their specific model. In addition to providing simulation data, the technology is described briefly for every studied gasifier system. The primary pieces of information sought are temperatures, pressures, stream compositions and energy consumption. At present the resulting database contains 17 gasifiers, with one or more gasifiers within each of the gasification technology types normally discussed in this context: 1. Fixed bed 2. Fluidised bed 3. Entrained flow. It also contains gasifiers in the range from 100 kW to 120 MW, with several gasifiers in between these two values. Finally, there are gasifiers representing both direct and indirect heating. This allows for a more qualified and better available choice of starting data sets for simulations. In addition, with multiple data sets available for several of the operating modes, sensitivity analysis of various inputs will improve the simulations performed. There were, however, fewer answers to the survey than expected/hoped for, which could have improved the database further; the use of online sources and other public information has to some extent counterbalanced the low response frequency. The database is intended to be a living document, continuously updated with new gasifiers and improved information on existing gasifiers.

  17. Influence of non-ideal performance of lasers on displacement precision in single-grating heterodyne interferometry

    Science.gov (United States)

    Wang, Guochao; Xie, Xuedong; Yan, Shuhua

    2010-10-01

    The principle of a dual-wavelength single-grating nanometer displacement measuring system, with long range, high precision and good stability, is presented. For nano-level high-precision displacement measurement, the errors caused by a variety of adverse factors must be taken into account. In this paper, errors due to the non-ideal performance of the dual-frequency laser, including the linear error caused by wavelength instability and the non-linear error caused by elliptic polarization of the laser, are discussed and analyzed. On the basis of theoretical modeling, the corresponding error formulas are derived as well. Through simulation, the limit value of the linear error caused by wavelength instability is 2 nm, and on the assumption that Tx = 0.85 and Ty = 1 for the polarizing beam splitter (PBS), the limit values of the nonlinear error caused by elliptic polarization are 1.49 nm, 2.99 nm and 4.49 nm when the non-orthogonal angle is 1°, 2° and 3°, respectively. The law of the error variation is analyzed based on different values of Tx and Ty.

  18. Evaluation of the Single-precision Floating-point Vector Add Kernel Using the Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States)]; Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States)]; Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States)]; Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)]

    2017-04-20

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.
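
    For reference, a single-precision vector-add kernel in OpenCL C computes one c[i] = a[i] + b[i] per work-item. The sketch below holds such a kernel as a string beside a numpy float32 host-side check; the report's actual kernel and its vectorization/duplication attributes may differ:

    ```python
    import numpy as np

    # Illustrative single-precision vector-add kernel in OpenCL C.
    VECTOR_ADD_SRC = """
    __kernel void vector_add(__global const float *a,
                             __global const float *b,
                             __global float *c)
    {
        int i = get_global_id(0);
        c[i] = a[i] + b[i];
    }
    """

    # Host-side float32 reference against which a device result is validated.
    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    c_ref = a + b
    print(c_ref.dtype, c_ref[:4])
    ```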

  19. Precision Attitude Determination System (PADS) system design and analysis: Single-axis gimbal star tracker

    Science.gov (United States)

    1974-01-01

    The feasibility of an evolutionary development from prior two-axis gimbal star tracker based system applications to a single-axis gimbal star tracker is evaluated. Detailed evaluation of the star tracker gimbal encoder is also considered. A brief system description is given, covering the aspects of tracker evolution and encoder evaluation. The system analysis includes evaluation of star availability and mounting constraints for the geosynchronous orbit application, and a covariance simulation analysis to evaluate performance potential. Star availability and covariance analysis digital computer programs are included.

  20. Computer simulation of molecular sorption in zeolites

    International Nuclear Information System (INIS)

    Calmiano, Mark Daniel

    2001-01-01

    The work presented in this thesis encompasses the computer simulation of molecular sorption. In Chapter 1 we outline the aims and objectives of this work. Chapter 2 follows, in which an introduction to sorption in zeolites is presented, with discussion of the structure and properties of the main zeolites studied. Chapter 2 concludes with a description of the principles and theories of adsorption. In Chapter 3 we describe the methodology behind the work carried out in this thesis. In Chapter 4 we present our first computational study, that of the sorption of krypton in silicalite. We describe work carried out to investigate low-energy sorption sites of krypton in silicalite, where we observe krypton to sorb preferentially into the straight and sinusoidal channels over the channel intersections. We simulate single-step type I adsorption isotherms and use molecular dynamics to study the diffusion of krypton and obtain diffusion coefficients and the activation energy. We compare our results to previous experimental and computational studies, where we show our work to be in good agreement. In Chapter 5 we present a systematic study of the sorption of oxygen and nitrogen in five lithium-substituted zeolites using a transferable interatomic potential that we have developed from ab initio calculations. We show increased loading of nitrogen compared to oxygen in all five zeolites studied, as expected, and simulate adsorption isotherms, which we compare to experimental and simulated data in the literature. In Chapter 6 we present work on the sorption of ferrocene in the zeolite NaY. We show that a simulated, low-energy sorption site for ferrocene is correctly located by comparing to X-ray powder diffraction results for the same system. The thesis concludes with some overall conclusions and discussion of opportunities for future work. (author)

  1. Validation of precision powder injection molding process simulations using a spiral test geometry

    DEFF Research Database (Denmark)

    Marhöfer, Maximilian; Müller, Tobias; Tosello, Guido

    2015-01-01

    Like in many other areas of engineering, process simulations find application in precision injection molding to assist and optimize the quality and design of precise products and the molding process. Injection molding comprises mainly the manufacturing of plastic components. However, the variant … for powder injection molding. This characterization includes measurements of the rheological, thermal, and pvT behavior of the powder-binder mixes. The acquired material data was used to generate new material models for the database of the commercially available Autodesk Moldflow® simulation software. The necessary data and the implementation procedure of the new material models are outlined. In order to validate the simulation studies and evaluate their accuracy, the simulation results are compared with experiments performed using a spiral test geometry.

  2. Monte Carlo simulation with the Gate software using grid computing

    International Nuclear Information System (INIS)

    Reuillon, R.; Hill, D.R.C.; Gouinaud, C.; El Bitar, Z.; Breton, V.; Buvat, I.

    2009-03-01

    Monte Carlo simulations are widely used in emission tomography, for protocol optimization, design of processing or data analysis methods, tomographic reconstruction, or tomograph design optimization. Monte Carlo simulations needing many replicates to obtain good statistical results can easily be executed in parallel using the 'Multiple Replications In Parallel' approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grid for E-science), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work was achieved thanks to a generic object-oriented toolbox called DistMe, which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of the pseudo-random number streams. It is based on the use of a documented XML format for the statuses of random number generators. (authors)
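
    The precaution referred to is giving every parallel replication a statistically independent pseudo-random stream. A minimal numpy sketch of one standard way to do this (SeedSequence spawning), standing in for the XML-status bookkeeping that DistMe performs:

    ```python
    import numpy as np

    n_replications = 8
    root = np.random.SeedSequence(20090301)       # one master seed for the study
    children = root.spawn(n_replications)         # independent child sequences
    streams = [np.random.default_rng(s) for s in children]

    # Each replication draws from its own stream, so results merge safely.
    estimates = [rng.normal(size=10_000).mean() for rng in streams]
    print(estimates)
    ```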

  3. Parallel computing and molecular dynamics of biological membranes

    International Nuclear Information System (INIS)

    La Penna, G.; Letardi, S.; Minicozzi, V.; Morante, S.; Rossi, G.C.; Salina, G.

    1998-01-01

    In this talk I discuss the general question of the portability of molecular dynamics codes for diffusive systems on parallel computers of the APE family. The intrinsic single precision of the platforms available today does not seem to affect the numerical accuracy of the simulations, while the absence of integer addressing from CPU to individual nodes puts strong constraints on possible programming strategies. Liquids can be satisfactorily simulated using the ''systolic'' method. For more complex systems, like the biological ones in which we are ultimately interested, the ''domain decomposition'' approach is best suited to beat the quadratic growth of the inter-molecular computational time with the number of atoms of the system. The promising perspectives of using this strategy for extensive simulations of lipid bilayers are briefly reviewed. (orig.)

  4. Multi-scale Modeling of Compressible Single-phase Flow in Porous Media using Molecular Simulation

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-05-01

    In this study, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on the reweighting and reconstruction algorithm. This method remarkably reduces the computational time required by MC simulations from hours to seconds. In addition, the reweighting and reconstruction scheme, which was originally designed to work with the LJ potential model, is extended to work with a potential model that accounts for the molecular quadrupole moment of fluids with non-spherical molecules such as CO2. The potential model was used to simulate the thermodynamic equilibrium properties of single-phase and two-phase systems using the canonical ensemble and the Gibbs ensemble, respectively. Comparing the simulation results with experimental data showed that the implemented model provides an excellent fit, outperforming the standard LJ model. To demonstrate the strength of the proposed coupling in terms of computational time efficiency and numerical accuracy in fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme is in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids, consequently enhancing confidence in the flow predictions in porous media.
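
    The reweighting idea can be illustrated with plain Boltzmann reweighting of a stored chain from one inverse temperature to another (the algorithm in the study is more elaborate; the numbers here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for a pre-computed MC Markov chain: configuration energies
    # sampled at inverse temperature beta1.
    beta1, beta2 = 1.0, 1.1
    E = rng.normal(loc=10.0, scale=1.0, size=50_000)

    # Reweight each stored configuration to the target condition beta2:
    # w_i proportional to exp(-(beta2 - beta1) * E_i).
    logw = -(beta2 - beta1) * E
    w = np.exp(logw - logw.max())          # shift by the max for stability
    mean_E_beta2 = np.sum(w * E) / np.sum(w)
    print(mean_E_beta2)                    # reweighted estimate of <E> at beta2
    ```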

  5. Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics

    Science.gov (United States)

    2015-01-01

    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
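
    The indirect, van't Hoff route follows from dG/T = dH*(1/T) - dS, so the binding enthalpy is the slope of dG/T against 1/T; a minimal sketch with hypothetical values:

    ```python
    import numpy as np

    # Hypothetical binding free energies computed at several temperatures.
    T  = np.array([280.0, 290.0, 300.0, 310.0, 320.0])   # K
    dG = np.array([-9.10, -8.75, -8.40, -8.05, -7.70])   # kcal/mol (illustrative)

    # van't Hoff: a linear fit of dG/T against 1/T gives dH as the slope
    # (and -dS as the intercept).
    dH, minus_dS = np.polyfit(1.0 / T, dG / T, 1)
    print(f"dH = {dH:.2f} kcal/mol, dS = {-minus_dS:.4f} kcal/mol/K")
    ```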

  6. Simultaneous fluid-flow, heat-transfer and solid-stress computation in a single computer code

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, D B [Concentration Heat and Momentum Ltd, London (United Kingdom)

    1998-12-31

    Computer simulation of flow- and thermally-induced stresses in mechanical-equipment assemblies has, in the past, required the use of two distinct software packages, one to determine the forces and the temperatures, and the other to compute the resultant stresses. The present paper describes how a single computer program can perform both tasks at the same time. The technique relies on the similarity of the equations governing velocity distributions in fluids to those governing displacements in solids. The same SIMPLE-like algorithm is used for solving both. Applications to 1-, 2- and 3-dimensional situations are presented. It is further suggested that Solid-Fluid-Thermal (SFT) analysis may come to replace CFD on the one hand and the analysis of stresses in solids on the other, by performing the functions of both. (author) 7 refs.

  8. Optimisation and validation of a 3D reconstruction algorithm for single photon emission computed tomography by means of GATE simulation platform

    International Nuclear Information System (INIS)

    El Bitar, Ziad

    2006-12-01

    Although time consuming, Monte-Carlo simulations remain an effective tool for assessing correction methods for degrading physical effects in medical imaging. We have optimized and validated a reconstruction method named F3DMC (Fully 3D Monte Carlo), in which the physical effects degrading the image formation process are modelled using Monte-Carlo methods and integrated within the system matrix. We used the Monte-Carlo simulation toolbox GATE. We validated GATE in SPECT by modelling the gamma-camera (Philips AXIS) used in clinical routine. Techniques of thresholding, filtering by principal component analysis, and targeted reconstruction (functional regions, hybrid regions) were used in order to improve the precision of the system matrix and to reduce the number of simulated photons, and hence the computation time required. The EGEE grid infrastructures were used to deploy the GATE simulations in order to reduce their computation time. Results obtained with F3DMC were compared with the reconstruction methods (FBP, ML-EM, MLEMC) for a simulated phantom, and with the OSEM-C method for a real phantom. The results show that the F3DMC method and its variants improve the restoration of activity ratios and the signal-to-noise ratio. By use of the EGEE grid, a significant speed-up factor of about 300 was obtained. These results should be confirmed by studies on complex phantoms and patients, and open the door to a unified reconstruction method which could be used in SPECT and also in PET. (author)

  9. A simulation of driven reconnection by a high precision MHD code

    International Nuclear Information System (INIS)

    Kusano, Kanya; Ouchi, Yasuo; Hayashi, Takaya; Horiuchi, Ritoku; Watanabe, Kunihiko; Sato, Tetsuya.

    1988-01-01

    A high-precision MHD code, which has fourth-order accuracy in both the spatial and time steps, is developed and applied to simulation studies of two-dimensional driven reconnection. It is confirmed that the numerical dissipation of this new scheme is much less than that of the two-step Lax-Wendroff scheme. The effect of plasma compressibility on the reconnection dynamics is investigated by means of this high-precision code. (author)
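
    Fourth-order accuracy in time is typically obtained with a scheme like the classical Runge-Kutta method; a generic sketch (the paper specifies only the order, not the exact scheme):

    ```python
    import numpy as np

    def rk4_step(f, u, t, dt):
        """One classical fourth-order Runge-Kutta step for du/dt = f(t, u)."""
        k1 = f(t, u)
        k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
        k4 = f(t + dt, u + dt * k3)
        return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Smoke test on du/dt = -u: one step matches exp(-dt) to O(dt^5).
    print(rk4_step(lambda t, u: -u, 1.0, 0.0, 0.1), np.exp(-0.1))
    ```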

  10. Simulation in the validation of precision farming computing systems

    Directory of Open Access Journals (Sweden)

    Braulio A. de Mello

    2008-12-01

    Full Text Available Computing systems have been employed for agricultural purposes, mainly in the precision farming domain. However, their implementation is complex because the system components, formed by agricultural equipment and computing parts, do not have a common interface and are developed over distinct technologies (heterogeneous components). The components are therefore generally validated separately, and the behaviour of the complete system can be observed only in tests using prototypes, delaying the identification of problems and generating costs in time and resources. Simulation can help by anticipating experiments that allow the cooperation among system components to be observed, with the objective of reducing overall project time and cost. Using an electronic planting supervision system (SEP) as a case study, this paper presents the use of the DCB (Distributed Cosimulation Backbone) as a modeling and simulation tool for precision farming systems, to reduce project time and costs.

  11. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    Science.gov (United States)

    Cramer, Emily

    2016-01-01

    Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
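
    The signal-noise reliability defined above can be sketched numerically; the setup below is illustrative (simple binomial noise rather than the paper's beta-binomial fit):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Each unit has a true pressure-ulcer rate drawn from a between-unit
    # distribution; observed annual rates add binomial (within-unit) noise.
    n_units, n_patients = 500, 120
    true_rates = rng.beta(2.0, 48.0, size=n_units)
    observed = rng.binomial(n_patients, true_rates) / n_patients

    var_between = true_rates.var()                                    # signal
    var_within = np.mean(true_rates * (1 - true_rates)) / n_patients  # noise
    reliability = var_between / (var_between + var_within)

    # Sanity check: the squared correlation of observed with true rates
    # should sit close to the signal-noise reliability.
    print(reliability, np.corrcoef(true_rates, observed)[0, 1] ** 2)
    ```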

  12. Precision of Points Computed from Intersections of Lines or Planes

    DEFF Research Database (Denmark)

    Cederholm, Jens Peter

    2004-01-01

    … estimates the precision of the points. When using laser scanning, a similar problem appears. A laser scanner captures a 3-D point cloud, not the points of real interest. The suggested method can be used to compute three-dimensional coordinates of the intersection of three planes estimated from the point...
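
    For the three-plane case, the intersection solves the 3x3 linear system N x = d built from the plane normals and offsets, and the precision of the point can be estimated by propagating plane-parameter uncertainty, e.g. by Monte Carlo; a sketch with assumed values:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Three estimated planes n_i . x = d_i (unit normals; illustrative values).
    N = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    d = np.array([2.0, 3.0, 1.5])
    x = np.linalg.solve(N, d)              # intersection point

    # Propagate an assumed 2 mm standard deviation on each plane offset to
    # the point coordinates by Monte Carlo perturbation.
    sigma_d = 0.002
    d_mc = d + rng.normal(0.0, sigma_d, size=(10_000, 3))
    pts = np.linalg.solve(N, d_mc.T).T
    print("point:", x, "std per axis:", pts.std(axis=0))
    ```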

  13. Computer simulations of a single-laser double-gas-jet wakefield accelerator concept

    Directory of Open Access Journals (Sweden)

    R. G. Hemker

    2002-04-01

    Full Text Available We report in this paper on full scale 2D particle-in-cell simulations investigating laser wakefield acceleration. First we describe our findings of electron beam generation by a laser propagating through a single gas jet. Using realistic parameters which are relevant for the experimental setup in our laboratory we find that the electron beam resulting after the propagation of a 0.8 μm, 50 fs laser through a 1.5 mm gas jet has properties that would make it useful for further acceleration. Our simulations show that the electron beam is generated when the laser exits the gas jet, and the properties of the generated beam, especially its energy, depend only weakly on most properties of the gas jet. We therefore propose to use the first gas jet as a plasma cathode and then use a second gas jet placed immediately behind the first to provide additional acceleration. Our simulations of this proposed setup indicate the feasibility of this idea and also suggest ways to optimize the quality of the resulting beam.

  14. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2H6.
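
    The single-site model being tuned is the standard Lennard-Jones pair potential u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6); a minimal sketch with illustrative parameters (the optimized values for each molecule are given in the paper, not reproduced here):

    ```python
    import numpy as np

    def lj(r, eps, sigma):
        """Single-site Lennard-Jones pair potential."""
        sr6 = (sigma / r) ** 6
        return 4.0 * eps * (sr6 ** 2 - sr6)

    # Illustrative argon-like parameters: eps/kB in kelvin, sigma in angstrom.
    r = np.linspace(3.2, 10.0, 200)
    u = lj(r, eps=120.0, sigma=3.4)
    print(u.min(), r[u.argmin()])   # well depth ~ -eps near r = 2**(1/6) * sigma
    ```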

  15. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    Science.gov (United States)

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution well. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated by computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability in surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
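
    The test described can be sketched as a likelihood-ratio (G) statistic comparing the counts against a Poisson model whose parameter is the MLE lambda_hat = mean; the counts below are illustrative, not the paper's data:

    ```python
    import numpy as np
    from math import lgamma

    def poisson_logpmf(k, lam):
        return k * np.log(lam) - lam - np.array([lgamma(x + 1.0) for x in k])

    # Illustrative single-cell counts from replicate 10-fold dilutions.
    counts = np.array([0, 1, 1, 2, 0, 1, 3, 1, 0, 2, 1, 1, 0, 2, 1])
    lam_hat = counts.mean()                  # Poisson MLE of the mean count

    # Likelihood-ratio statistic against the saturated model; large G values
    # (relative to a chi-square reference) would reject the Poisson model.
    ll_model = poisson_logpmf(counts, lam_hat).sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        ll_sat = np.where(counts > 0, poisson_logpmf(counts, counts), 0.0).sum()
    G = 2.0 * (ll_sat - ll_model)
    print(f"lambda_hat = {lam_hat:.2f}, G = {G:.2f}")
    ```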

  16. Distributed simulation of large computer systems

    International Nuclear Information System (INIS)

    Marzolla, M.

    2001-01-01

    Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) was introduced in the late 1970s. The authors analyze the applicability of PDES to the modeling and analysis of large computer systems; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator.

  17. A computationally efficient simulator for three-dimensional Monte Carlo simulation of ion implantation into complex structures

    International Nuclear Information System (INIS)

    Li Di; Wang Geng; Chen Yang; Li Lin; Shrivastav, Gaurav; Oak, Stimit; Tasch, Al; Banerjee, Sanjay; Obradovic, Borna

    2001-01-01

    A physically-based three-dimensional Monte Carlo simulator has been developed within UT-MARLOWE, which is capable of simulating ion implantation into multi-material systems and arbitrary topography. Introducing the third dimension can result in a severe CPU time penalty. In order to minimize this penalty, a three-dimensional trajectory replication algorithm has been developed, implemented and verified; more than two orders of magnitude savings in CPU time have been observed. An unbalanced Octree structure was used to decompose three-dimensional structures. It effectively simplifies the structure, offers a good balance between modeling accuracy and computational efficiency, and allows arbitrary precision in mapping the Octree onto the desired structure. Using the well-established and validated physical models in UT-MARLOWE 5.0, this simulator has been extensively verified by comparing the integrated one-dimensional simulation results with secondary ion mass spectroscopy (SIMS). Two options, the typical case and the worst-case scenario, have been selected to simulate ion implantation into poly-silicon using this simulator: implantation into a random, amorphous network, and implantation under the worst-case channeling condition, into (1 1 0)-oriented wafers.

  18. Computing Relative Free Energies of Solvation using Single Reference Thermodynamic Integration Augmented with Hamiltonian Replica Exchange.

    Science.gov (United States)

    Khavrutskii, Ilja V; Wallqvist, Anders

    2010-11-09

    This paper introduces an efficient single-topology variant of Thermodynamic Integration (TI) for computing relative transformation free energies in a series of molecules with respect to a single reference state. The presented TI variant, which we refer to as Single-Reference TI (SR-TI), combines well-established molecular simulation methodologies into a practical computational tool. Augmented with Hamiltonian Replica Exchange (HREX), the SR-TI variant can deliver enhanced sampling in select degrees of freedom. The utility of the SR-TI variant is demonstrated in calculations of relative solvation free energies for a series of benzene derivatives with increasing complexity. Notably, the SR-TI variant with the HREX option provides converged results in the challenging case of an amide molecule with a high (13-15 kcal/mol) barrier for internal cis/trans interconversion, using simulation times of only 1 to 4 ns.
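
    In TI, the free energy difference is the integral over the coupling parameter of the ensemble average of dH/dlambda; a minimal sketch with hypothetical window averages:

    ```python
    import numpy as np

    # <dH/dlambda> averages from simulations at a few lambda windows
    # (hypothetical numbers; SR-TI yields one such profile per molecule
    # against the common reference state).
    lam  = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    dHdl = np.array([42.1, 18.3, 6.9, 1.2, -0.8])   # kcal/mol

    # Trapezoidal quadrature of the TI integrand.
    delta_G = np.sum(0.5 * (dHdl[1:] + dHdl[:-1]) * np.diff(lam))
    print(f"relative free energy ~ {delta_G:.2f} kcal/mol")
    ```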

  19. Remote sensing with simulated unmanned aircraft imagery for precision agriculture applications

    Science.gov (United States)

    Hunt, E. Raymond; Daughtry, Craig S.T.; Mirsky, Steven B.; Hively, W. Dean

    2014-01-01

    An important application of unmanned aircraft systems (UAS) may be remote sensing for precision agriculture, because of their ability to acquire images with very small pixel sizes from low-altitude flights. The objective of this study was to compare information obtained from two different pixel sizes, one about a meter (the size of a small vegetation plot) and one about a millimeter. Cereal rye (Secale cereale) was planted at the Beltsville Agricultural Research Center as a winter cover crop with fall and spring fertilizer applications, which produced differences in biomass and leaf chlorophyll content. UAS imagery was simulated by placing a Fuji IS-Pro UVIR digital camera at 3-m height looking nadir. An external UV-IR cut filter was used to acquire true-color images; an external red cut filter was used to obtain color-infrared-like images with bands at near-infrared, green, and blue wavelengths. Plot-scale Green Normalized Difference Vegetation Index was correlated with dry aboveground biomass (r = 0.58), whereas the Triangular Greenness Index (TGI) was not correlated with chlorophyll content. We used the SamplePoint program to select 100 pixels systematically; we visually identified the cover type and acquired the digital numbers. The number of rye pixels in each image was better correlated with biomass (r = 0.73), and the average TGI from only leaf pixels was negatively correlated with chlorophyll content (r = -0.72). Thus, better information for crop requirements may be obtained using very small pixel sizes, but new algorithms based on computer vision are needed for analysis. It may not be necessary to geospatially register large numbers of photographs with very small pixel sizes. Instead, images could be analyzed as single plots along field transects.
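
    Both indices are computed per pixel from the band digital numbers: GNDVI = (NIR - Green) / (NIR + Green), and TGI from the visible bands. A sketch with illustrative values; the TGI coefficients below assume band centres near 670/550/480 nm:

    ```python
    import numpy as np

    def gndvi(nir, green):
        """Green Normalized Difference Vegetation Index, per pixel."""
        return (nir - green) / (nir + green)

    def tgi(red, green, blue):
        """Triangular Greenness Index (one common formulation)."""
        return -0.5 * (190.0 * (red - green) - 120.0 * (red - blue))

    # Illustrative digital numbers for two sampled pixels.
    nir   = np.array([180.0, 95.0])
    red   = np.array([ 40.0, 80.0])
    green = np.array([ 90.0, 60.0])
    blue  = np.array([ 30.0, 50.0])
    print(gndvi(nir, green), tgi(red, green, blue))
    ```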

  20. Advanced Simulation & Computing FY15 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]; Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2014-09-16

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. As the program approaches the end of its second decade, ASC is intently focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Where possible, the program also enables the use of high-performance simulation and computing tools to address broader national security needs, such as foreign nuclear weapon assessments and counternuclear terrorism.

  1. Computer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Pronskikh, V. S. [Fermilab]

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  2. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    Energy Technology Data Exchange (ETDEWEB)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient

  3. What do we want from computer simulation of SIMS using clusters?

    International Nuclear Information System (INIS)

    Webb, R.P.

    2008-01-01

    Computer simulation of energetic cluster interactions with surfaces has provided much needed insight into some of the complex processes which occur and are responsible for the desirable as well as undesirable effects which make the use of clusters in SIMS both useful and challenging. Simulations have shown how cluster impacts can cause meso-scale motion of the target material, which can result in the relatively gentle up-lift of large intact molecules adsorbed on the surface, in contrast to the behaviour of single atom impacts, which tend to create discrete motion in the surface, often ejecting fragments of adsorbed molecules instead. With the insight provided by simulations, experimentalists can then improve their equipment to maximise the desired effects. The past 40 years have seen great progress in simulation techniques and computer equipment. 40 years ago simulations were performed on simple atomic systems of around 300 atoms, employing only simple pair-wise interaction potentials, to times of several hundred femtoseconds. Currently simulations can be performed on large organic materials, employing many-body potentials for millions of atoms, for times of many picoseconds. These simulations, however, can take several months of computation time. Even with the degree of realism introduced by these long-time simulations, they are still not perfect and often cannot be used in a completely predictive way. Computer simulation is reaching a position whereby any further effort to increase its realism will make it completely intractable in a reasonable time frame, and yet there is an increasing demand from experimentalists for something that can help predictively in experiment design and interpretation. This paper will discuss the problems of computer simulation, what might be possible to achieve in the short term, what is unlikely ever to be possible without a major new breakthrough, and how we might exploit the meso-scale effects in

  4. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.
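    A quick way to sanity-check figures like these is Amdahl's law, which relates the overall speedup to the fraction p of work that is accelerated and the speedup s of that fraction. The sketch below is a minimal illustration in plain Python: the 99% figure is the one quoted above, everything else is arithmetic. It computes the ceiling implied by p = 0.99 and inverts the law to find the per-kernel speedups consistent with the observed overall factors of 65 and 81.

```python
# Amdahl's law: overall speedup when a fraction p of the work is
# accelerated by a factor s and the remainder runs at scalar speed.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p = 0.99                          # vector/parallel fraction quoted above
print(amdahl(p, float("inf")))    # ceiling for p = 0.99: 100x

# Per-kernel speedups implied by the observed overall factors:
for overall in (65.0, 81.0):      # black oil and EOS, respectively
    s = p / (1.0 / overall - (1.0 - p))   # amdahl() inverted for s
    print(overall, round(s, 1))
```

    Even with a 99% parallel fraction, the overall speedup cannot exceed 100x, which is why the quoted 65x and 81x imply very large per-kernel vector speedups.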

  5. Ramifications of single-port laparoscopic surgery: measuring differences in task performance using simulation.

    Science.gov (United States)

    Conway, Nathan E; Romanelli, John R; Bush, Ron W; Seymour, Neal E

    2014-02-01

    Single-port laparoscopic surgery imposes unique psychomotor challenges. We used surgical simulation to define performance differences between surgeons with and without single-port clinical experience and examined whether a short course of training resulted in improved performance. Study participants were assigned to 3 groups: residents (RES) and experienced laparoscopic surgeons with (SP) or without (LAP) prior single-port laparoscopic experience. Participants performed the Fundamentals of Laparoscopic Surgery precision cutting task on a ProMIS trainer through conventional ports or with articulating instruments via a SILS Port (Covidien, Inc). Two iterations of each method were performed. Then, 6 residents performed 10 successive single-port iterations to assess the effect of practice on task performance. The SP group had faster task times for both laparoscopic (P = .0486) and single-port (P = .0238) methods. The LAP group had longer path lengths for the single-port task than for the laparoscopic task (P = .03). The RES group was slower (P = .0019), with longer path length (P = .0010) but greater smoothness (P = .0186), on the single-port task than on the conventional laparoscopic task. Resident task time (P = .005) and smoothness (P = .045) improved with successive iterations. Our data show that surgeons with clinical single-port surgery experience perform a simulated single-port surgical task better than inexperienced single-port surgeons. Furthermore, this performance is comparable to that achieved with conventional laparoscopic techniques. Performance of residents declined dramatically when confronted with the challenges of the single-port task but improved with practice. These results suggest a role for lab-based single-port training.

  6. Computer simulation of ductile fracture

    International Nuclear Information System (INIS)

    Wilkins, M.L.; Streit, R.D.

    1979-01-01

    Finite difference computer simulation programs are capable of very accurate solutions to problems in plasticity with large deformations and rotation. This opens the possibility of developing models of ductile fracture by correlating experiments with equivalent computer simulations. Selected experiments were done to emphasize different aspects of the model. A difficult problem is the establishment of a fracture-size effect. This paper is a study of the strain field around notched tensile specimens of aluminum 6061-T651. A series of geometrically scaled specimens are tested to fracture. The scaled experiments are conducted for different notch radius-to-diameter ratios. The strains at fracture are determined from computer simulations. An estimate is made of the fracture-size effect

  7. Simulating chemistry using quantum computers.

    Science.gov (United States)

    Kassal, Ivan; Whitfield, James D; Perdomo-Ortiz, Alejandro; Yung, Man-Hong; Aspuru-Guzik, Alán

    2011-01-01

    The difficulty of simulating quantum systems, well known to quantum chemists, prompted the idea of quantum computation. One can avoid the steep scaling associated with the exact simulation of increasingly large quantum systems on conventional computers, by mapping the quantum system to another, more controllable one. In this review, we discuss to what extent the ideas in quantum computation, now a well-established field, have been applied to chemical problems. We describe algorithms that achieve significant advantages for the electronic-structure problem, the simulation of chemical dynamics, protein folding, and other tasks. Although theory is still ahead of experiment, we outline recent advances that have led to the first chemical calculations on small quantum information processors.

  8. Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors.

    Science.gov (United States)

    Hines, Michael L; Eichner, Hubert; Schürmann, Felix

    2008-08-01

    Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.

  9. 21 CFR 870.1435 - Single-function, preprogrammed diagnostic computer.

    Science.gov (United States)

    2010-04-01

    Single-function, preprogrammed diagnostic computer. (a) Identification. A single-function, preprogrammed diagnostic computer is a hard-wired computer that calculates a specific physiological or blood-flow parameter...

  10. HTTR plant dynamic simulation using a hybrid computer

    International Nuclear Information System (INIS)

    Shimazaki, Junya; Suzuki, Katsuo; Nabeshima, Kunihiko; Watanabe, Koichi; Shinohara, Yoshikuni; Nakagawa, Shigeaki.

    1990-01-01

    A plant dynamic simulation of the High-Temperature Engineering Test Reactor has been made using a new-type hybrid computer. This report describes a dynamic simulation model of the HTTR, a hybrid simulation method for SIMSTAR, and some results obtained from dynamics analysis of the HTTR simulation. It concludes that hybrid plant simulation is useful for on-line simulation on account of its high computation speed compared with that of an all-digital computer simulation. With sufficient accuracy, computation 40 times faster than real time was reached simply by changing the analog time scale of the HTTR simulation. (author)

  11. GPU-accelerated micromagnetic simulations using cloud computing

    Energy Technology Data Exchange (ETDEWEB)

    Jermain, C.L., E-mail: clj72@cornell.edu [Cornell University, Ithaca, NY 14853 (United States); Rowlands, G.E.; Buhrman, R.A. [Cornell University, Ithaca, NY 14853 (United States); Ralph, D.C. [Cornell University, Ithaca, NY 14853 (United States); Kavli Institute at Cornell, Ithaca, NY 14853 (United States)

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services, with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. Highlights:
    • The benefits of cloud computing for GPU-accelerated micromagnetics are examined.
    • We present the MuCloud software for running simulations on cloud computing.
    • Simulation run times are measured to benchmark cloud computing performance.
    • Comparison benchmarks are analyzed between CPU- and GPU-based solvers.

  12. GPU-accelerated micromagnetic simulations using cloud computing

    International Nuclear Information System (INIS)

    Jermain, C.L.; Rowlands, G.E.; Buhrman, R.A.; Ralph, D.C.

    2016-01-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services, with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics. Highlights:
    • The benefits of cloud computing for GPU-accelerated micromagnetics are examined.
    • We present the MuCloud software for running simulations on cloud computing.
    • Simulation run times are measured to benchmark cloud computing performance.
    • Comparison benchmarks are analyzed between CPU- and GPU-based solvers.

  13. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained employing properly distributed benchmarks having GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study evaluates learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS), and especially the wavelet neural network (WNN) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling problem of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, the ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides the prediction capabilities, these methods were also compared and discussed from the practical point of view in the conclusions.
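    To make the classical baseline concrete: the MPRE approach fits a low-order polynomial surface to the geoid undulations N = h(GNSS) − H(leveling) observed at the benchmarks, then evaluates it at new GNSS points to transform ellipsoidal heights to orthometric heights. The sketch below is a minimal illustration with synthetic data; the coordinates, coefficients, and noise level are made up for the example, whereas the real models are fitted to the Istanbul network data.

```python
import numpy as np

# Hypothetical benchmark data: plane coordinates (x, y) and geoid
# undulations N = h_GNSS - H_leveling at GPS/leveling benchmarks.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1.0, 1.0, (2, 50))
N = 38.5 + 0.8 * x - 1.2 * y + 0.3 * x * y + 0.01 * rng.standard_normal(50)

# Second-order MPRE surface: N ~ a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeffs, *_ = np.linalg.lstsq(A, N, rcond=None)

# Height transformation at a new GNSS point: H = h - N_predicted.
xq, yq = 0.2, -0.1
N_pred = np.array([1.0, xq, yq, xq**2, xq * yq, yq**2]) @ coeffs
print(N_pred)
```

    The learning-based methods compared in the study replace this fixed polynomial basis with adaptively learned nonlinear basis functions.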

  14. Single Crystal Piezomotor for Large Stroke, High Precision and Cryogenic Actuations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — TRS Technologies proposes a novel single crystal piezomotor for large stroke, high precision, and cryogenic actuations with capability of position set-hold with...

  15. Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure

    International Nuclear Information System (INIS)

    Yokohama, Noriya

    2013-01-01

    This report describes the architectural design and performance measurement of a parallel computing environment for Monte Carlo simulation of particle therapy planning, using a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed approximately 28 times faster speed than seen with a single-threaded architecture, combined with improved stability. A study of methods of optimizing the system operations also indicated lower cost. (author)

  16. Computer Simulation Western

    International Nuclear Information System (INIS)

    Rasmussen, H.

    1992-01-01

    Computer Simulation Western is a unit within the Department of Applied Mathematics at the University of Western Ontario. Its purpose is the development of computational and mathematical methods for practical problems in industry and engineering and the application and marketing of such methods. We describe the unit and our efforts at obtaining research and development grants. Some representative projects will be presented and future plans discussed. (author)

  17. General-purpose parallel simulator for quantum computing

    International Nuclear Information System (INIS)

    Niwa, Jumpei; Matsumoto, Keiji; Imai, Hiroshi

    2002-01-01

    With current technologies, it seems to be very difficult to implement quantum computers with many qubits. It is therefore important to simulate quantum algorithms and circuits on existing computers. However, for a large-size problem, the simulation often requires more computational power than is available from sequential processing. Therefore, simulation methods for parallel processors are required. We have developed a general-purpose simulator for quantum algorithms/circuits on a parallel computer (Sun Enterprise 4500). It can simulate algorithms/circuits with up to 30 qubits. In order to test the efficiency of our proposed methods, we have simulated Shor's factorization algorithm and Grover's database search, and we have analyzed the robustness of the corresponding quantum circuits in the presence of both decoherence and operational errors. The corresponding results, statistics, and analyses are presented in this paper
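    The 30-qubit ceiling is easy to motivate: a dense state-vector simulator must store 2^n complex amplitudes, so memory doubles with every added qubit. A minimal illustration of the arithmetic (the byte counts are the standard sizes of single- and double-precision complex numbers):

```python
# A dense state-vector simulator stores 2**n complex amplitudes, so
# memory doubles with every added qubit; ~30 qubits is where even a
# large shared-memory machine of that era runs out of room.
for n in (20, 30):
    for name, bytes_per_amp in (("complex64 (single)", 8),
                                ("complex128 (double)", 16)):
        gib = 2**n * bytes_per_amp / 2**30
        print(f"{n} qubits, {name}: {gib:g} GiB")
```

    At 30 qubits the state vector alone needs 8 GiB in single precision and 16 GiB in double precision, before any workspace for gate application.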

  18. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kadoura, Ahmad, E-mail: ahmad.kadoura@kaust.edu.sa; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa [Computational Transport Phenomena Laboratory, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia); Siripatana, Adil, E-mail: adil.siripatana@kaust.edu.sa; Hoteit, Ibrahim, E-mail: ibrahim.hoteit@kaust.edu.sa [Earth Fluid Modeling and Predicting Group, The Earth Sciences and Engineering Department, The Physical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia); Knio, Omar, E-mail: omar.knio@kaust.edu.sa [Uncertainty Quantification Center, The Applied Mathematics and Computational Science Department, The Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900 (Saudi Arabia)

    2016-06-07

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH₄, N₂, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO₂ and C₂H₆.
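    For reference, the single-site LJ model being fitted has only the two parameters named above: U(r) = 4ε[(σ/r)^12 − (σ/r)^6]. A minimal sketch follows; the argon-like numbers are common textbook values, not the parameters fitted in this work.

```python
import numpy as np

def lj_potential(r, eps, sigma):
    """Single-site LJ pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Illustrative argon-like parameters (textbook values: eps/kB in K,
# sigma in angstroms); the fitted values for each molecule are in the paper.
eps_over_kB, sigma = 120.0, 3.4
r = np.linspace(3.0, 10.0, 8)
print(lj_potential(r, eps_over_kB, sigma))   # minimum near r = 2**(1/6)*sigma
```

    With only (ε, σ) to tune, it is plausible that spherical molecules fit well while strongly anisotropic or quadrupolar ones such as CO₂ do not.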

  19. [Precision of digital impressions with TRIOS under simulated intraoral impression taking conditions].

    Science.gov (United States)

    Yang, Xin; Sun, Yi-fei; Tian, Lei; Si, Wen-jie; Feng, Hai-lan; Liu, Yi-hong

    2015-02-18

    To evaluate the precision of digital impressions taken under simulated clinical impression-taking conditions with TRIOS, and to compare it with the precision of extraoral digitalizations. Six #14-#17 epoxy resin dentitions with extracted #16 tooth preparations embedded were made. For each artificial dentition, (1) a silicone rubber impression was taken with an individual tray, poured with type IV plaster, and digitalized with a 3Shape D700 model scanner 10 times; (2) fastened to a dental simulator, 10 digital impressions of each were taken with the 3Shape TRIOS intraoral scanner. To assess the precision, a best-fit algorithm and 3D comparison were applied between repeated scan models pairwise in Geomagic Qualify 12.0, exported as averaged errors (AE) and color-coded diagrams. Non-parametric analysis was performed to compare the precisions of the digital impressions and model images. The color-coded diagrams were used to show the deviation distributions. The mean AE for the digital impressions was 7.058 281 μm, which was significantly greater than that of 4.092 363 μm for the model images. The AEs between repeated digital impressions were no more than 10 μm, which meant that the consistency between the digital impressions was good. The deviation distribution was uniform in the model images, while nonuniform in the digital impressions, with greater deviations lying mainly around the shoulders and interproximal surfaces. Digital impressions with TRIOS are of good precision and up to the clinical standard. Scanning of shoulders and interproximal surfaces is more difficult.

  20. A computational intelligence approach to the Mars Precision Landing problem

    Science.gov (United States)

    Birge, Brian Kent, III

    Various proposed Mars missions, such as the Mars Sample Return Mission (MRSR) and the Mars Smart Lander (MSL), require precise re-entry terminal position and velocity states. This is to achieve mission objectives including rendezvous with a previously landed mission, or reaching a particular geographic landmark. The current state-of-the-art footprint is on the order of kilometers. For this research, a Mars precision landing is achieved with a landed footprint of no more than 100 meters, for a set of initial entry conditions representing worst-guess dispersions. Obstacles to reducing the landed footprint include trajectory dispersions due to initial atmospheric entry conditions (entry angle, parachute deployment height, etc.), environment (wind, atmospheric density, etc.), parachute deployment dynamics, unavoidable injection error (propagated error from launch on), etc. Weather and atmospheric models have been developed. Three descent scenarios have been examined. First, terminal re-entry is achieved via a ballistic parachute with concurrent thrusting events while on the parachute, followed by a gravity turn. Second, terminal re-entry is achieved via a ballistic parachute followed by a gravity turn to hover and then thrust vectoring to the desired location. Third, a guided parafoil approach followed by vectored thrusting to reach terminal velocity is examined. The guided parafoil is determined to be the best architecture. The purpose of this study is to examine the feasibility of using a computational intelligence strategy to facilitate precision planetary re-entry, specifically to take an approach that is somewhat more intuitive and less rigid, and see where it leads. The test problems used for all research are variations on proposed Mars landing mission scenarios developed by NASA. A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), which can be considered to be in the same general class as Genetic Algorithms. An improvement over
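    The abstract breaks off here, but the PSO it refers to is simple to state: a swarm of candidate solutions moves through the search space, with each particle pulled toward its own best-seen point and toward the swarm's best. Below is a minimal sketch of the standard algorithm (not the author's improved variant), applied to a toy stand-in for a footprint-style cost; all parameters and the target point are made up for the example.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard PSO minimizer (not the paper's improved variant)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best points
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # swarm-wide best point
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

# Toy stand-in for a footprint-style cost: squared miss distance to a target.
print(pso(lambda p: float(np.sum((p - np.array([1.0, -2.0])) ** 2))))
```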

  1. Advanced computers and simulation

    International Nuclear Information System (INIS)

    Ryne, R.D.

    1993-01-01

    Accelerator physicists today have access to computers that are far more powerful than those available just 10 years ago. In the early 1980's, desktop workstations performed fewer than one million floating point operations per second (Mflops), and the realized performance of vector supercomputers was at best a few hundred Mflops. Today vector processing is available on the desktop, providing researchers with performance approaching 100 Mflops at a price that is measured in thousands of dollars. Furthermore, advances in Massively Parallel Processors (MPP) have made performance of over 10 gigaflops a reality, and around mid-decade MPPs are expected to be capable of teraflops performance. Along with advances in MPP hardware, researchers have also made significant progress in developing algorithms and software for MPPs. These changes have had, and will continue to have, a significant impact on the work of computational accelerator physicists. Now, instead of running particle simulations with just a few thousand particles, we can perform desktop simulations with tens of thousands of simulation particles, and calculations with well over 1 million particles are being performed on MPPs. In the area of computational electromagnetics, simulations that used to be performed only on vector supercomputers now run in several hours on desktop workstations, and researchers are hoping to perform simulations with over one billion mesh points on future MPPs. In this paper we will discuss the latest advances, and what can be expected in the near future, in hardware, software and applications codes for advanced simulation of particle accelerators

  2. Advanced Simulation and Computing Fiscal Year 14 Implementation Plan, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, Robert [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzen, M. Keith [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-09-11

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), quantify critical margins and uncertainties, and resolve increasingly difficult analyses needed for the SSP. Moreover, ASC’s business model is integrated and focused on requirements-driven products that address long-standing technical questions related to enhanced predictive

  3. Computational fluid dynamics simulation of a single cylinder research engine working with biodiesel

    Directory of Open Access Journals (Sweden)

    Moldovanu Dan

    2013-01-01

    Full Text Available The main objective of the paper is to present the results of the CFD simulation of a DI single cylinder engine using diesel, biodiesel, or different mixture proportions of diesel and biodiesel, and to compare the results to test bed measurements at the same operating point. The engine used for verifying the results of the simulation is a single cylinder research engine from AVL with an open ECU, so that the injection timings and quantities can be controlled and analyzed. In Romania, by the year 2020 all fuel stations are obliged to offer mixtures of at least 10% biodiesel in diesel [14]. The main advantages of using biodiesel blends in diesel are: biodiesel is not harmful to the environment; no engine modifications are required in order to use it; biodiesel is cheaper than diesel, and its production is more energy efficient than classic petroleum-based diesel production; biodiesel provides more lubrication to the engine, so engine life is increased; biodiesel is a sustainable fuel; and using biodiesel helps maintain the environment and keeps people healthier [1-3].

  4. Computer simulation of kinetic properties of plasmas. Progress report, October 1, 1978-June 30, 1979

    International Nuclear Information System (INIS)

    Denavit, J.

    1979-01-01

    The research is directed toward the development and testing of new numerical methods for particle and hybrid simulation of plasmas, and their application to physical problems of current significance to Magnetic Fusion Energy. During the present period, research on the project has been concerned with the following specific problems: (1) Computer simulations of drift and dissipative trapped-electron instabilities in tokamaks, including radial dependence and shear stabilization. (2) Long-time-scale algorithms for numerical solutions of the drift-kinetic equation. (3) Computer simulation of field-reversed ion ring stability. (4) Nonlinear, single-mode saturation of the bump-on-tail instability

  5. Computer simulations of collisionless shock waves

    International Nuclear Information System (INIS)

    Leroy, M.M.

    1984-01-01

    A review of the contributions of particle computer simulations to the understanding of the physics of magnetic shock waves in collisionless plasmas is presented. The emphasis is on the relation between the computer simulation results, spacecraft observations of shocks in space, and related theories, rather than on technical aspects of the numerics. It is shown that much has been learned from the comparison of ISEE spacecraft observations of the terrestrial bow shock and particle computer simulations concerning the quasi-perpendicular, supercritical shock (ion scale structure, ion reflection mechanism and ultimate dissipation processes). Particle computer simulations have also had an appreciable prospective role in the investigation of the physics of quasi-parallel shocks, about which still little is known observationally. Moreover, these numerical techniques have helped to clarify the process of suprathermal ion rejection by the shock into the foreshock, and the subsequent evolution of the ions in the foreshock. 95 references

  6. Clinical application of 3D computer simulation for upper limb surgery

    International Nuclear Information System (INIS)

    Murase, Tsuyoshi; Moritomo, Hisao; Oka, Kunihiro; Arimitsu, Sayuri; Shimada, Kozo

    2008-01-01

    To perform precise orthopaedic surgery, we have been developing a surgical method using a custom-made surgical device designed based on preoperative three-dimensional computer simulation. The purpose of this study was to investigate the preliminary results of its clinical application for corrective osteotomy of the upper extremity. Twenty patients with long bone deformities of the upper extremities (four cubitus varus deformities, nine malunited forearm fractures, six malunited distal radial fractures and one congenital deformity of the forearm) participated in this study. Three-dimensional computer models of the affected bone and the contralateral normal bone were constructed from computed tomography data. By comparing these models, the three-dimensional deformity axis and the accurate amount of deformity around it were quantified. Three-dimensional deformity correction was then simulated. A custom-made osteotomy template was designed and manufactured as a real plastic model aiming to reproduce the preoperative simulation in the actual operation. In the operation, we put the template on the bone surface, cut the bone through a slit on the template, and corrected the deformity as preoperatively simulated, followed by internal fixation. Radiographic and clinical evaluations were made in all cases before surgery and at the most recent follow-up. Corrective osteotomy was achieved as simulated in all cases. All patients had bone fusion within six months. Regarding the cubitus varus deformity, the average carrying angle and tilting angle were 5° and 28° after surgery. For malunited forearm fractures, angular deformities on radiographs were nearly nonexistent after surgery. All radiographic parameters in malunited distal radius fractures were normalized. The range of forearm rotation in cases of forearm malunion and that of wrist flexion-extension in cases of malunited distal radius improved after surgery. (author)

  7. Monte-Carlo Simulation of Hydrogen Adsorption in Single-Wall Carbon Nano-Cones

    Directory of Open Access Journals (Sweden)

    Zohreh Ahadi

    2011-01-01

    Full Text Available The properties of hydrogen adsorption in single-walled carbon nano-cones are investigated in detail by Monte Carlo simulations. Our computational results show that the hydrogen storage capacity of single-walled carbon nano-cones is slightly smaller than that of single-walled carbon nanotubes under the same conditions. This indicates that the hydrogen storage capacity of single-walled carbon nano-cones is related to the angles of the carbon nano-cones. It seems that these types of nanostructures could not exceed the 2010 goal of 6 wt% presented by the U.S. Department of Energy. In addition, these results are discussed theoretically.

  8. Simulation-based computation of dose to humans in radiological environments

    International Nuclear Information System (INIS)

    Breazeal, N.L.; Davis, K.R.; Watson, R.A.; Vickers, D.S.; Ford, M.S.

    1996-03-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans working in radiological environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO simulation software. These commercially available products are augmented with custom C code to provide radiation exposure information to, and collect radiation dose information from, workcell simulations. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these databases to compute and accumulate dose to programmable human models operating around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. The entire REMS capability can be operated from a single graphical user interface
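    As a rough illustration of the kind of bookkeeping REMS automates, the dose a worker accumulates along a path can be estimated by summing a dose rate that depends on distance (inverse-square for a point source), shielding, and time spent at each location. The sketch below uses made-up numbers throughout; REMS itself draws its exposure data from transport codes or measurements and tracks full human-model motion.

```python
def dose_rate(rate_at_1m, distance_m, shielding=1.0):
    """Point-source dose rate: inverse-square falloff times a shielding factor."""
    return shielding * rate_at_1m / distance_m**2

# Hypothetical source: 1.2 mSv/h at 1 m. Worker path given as
# (distance from source [m], time spent [h], shielding factor) segments.
S = 1.2
path = [(4.0, 0.25, 1.0), (2.0, 0.10, 1.0), (3.0, 0.50, 0.2)]
total = sum(dose_rate(S, d, a) * t for d, t, a in path)
print(f"accumulated dose: {total:.3f} mSv")
```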

  9. Simulation-based computation of dose to humans in radiological environments

    Energy Technology Data Exchange (ETDEWEB)

    Breazeal, N.L. [Sandia National Labs., Livermore, CA (United States); Davis, K.R.; Watson, R.A. [Sandia National Labs., Albuquerque, NM (United States); Vickers, D.S. [Brigham Young Univ., Provo, UT (United States). Dept. of Electrical and Computer Engineering; Ford, M.S. [Battelle Pantex, Amarillo, TX (United States). Dept. of Radiation Safety

    1996-03-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans working in radiological environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO simulation software. These commercially available products are augmented with custom C code to provide radiation exposure information to, and collect radiation dose information from, workcell simulations. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these databases to compute and accumulate dose to programmable human models operating around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. The entire REMS capability can be operated from a single graphical user interface.

  10. Computer algebra simulation - what can it do?; Was leistet Computer-Algebra-Simulation?

    Energy Technology Data Exchange (ETDEWEB)

    Braun, S. [Visual Analysis AG, Muenchen (Germany)

    2001-07-01

    Shortened development times require new and improved calculation methods. Numerical methods have long been state of the art. However, although numerical simulations provide a better understanding of process parameters, they do not give a fast overview of the interdependences between parameters. Numerical simulations are effective only if all physical parameters are sufficiently known; otherwise, the efficiency will decrease due to the large number of variant calculations required. Computer algebra simulation closes this gap and provides a deeper understanding of the physical fundamentals of technical processes. [Translated from German] New and improved calculation methods are necessary to make the continual shortening of development cycles possible. Conventional methods based on a purely numerical approach have long since become the standard in many fields of application. However, not only the ever shorter development cycles but also the ever growing complexity make it necessary to gain a better understanding of the process parameters involved. Numerical simulation excels at detailed solutions, even for complex structures and processes, but it does not provide a quick estimate of the relationships between the individual parameters. Numerical simulation is effective only if all physical parameters are sufficiently known; otherwise its efficiency drops sharply because of the large number of variant calculations required. Computer algebra simulation closes this gap by allowing a deeper insight into the physical operation of technical processes. (orig.)

  11. Framework for utilizing computational devices within simulation

    Directory of Open Access Journals (Sweden)

    Miroslav Mintál

    2013-12-01

    Full Text Available Nowadays there exist several frameworks for utilizing the computational power of graphics cards and other computational devices such as FPGAs, ARM and multi-core processors. The best-known are either low-level and need a lot of controlling code, or are bound only to specific graphics cards. Furthermore, there exist more specialized frameworks, mainly aimed at the mathematical field. The framework described here is adapted for use in multi-agent simulations. It provides an option to accelerate computations when preparing a simulation, and mainly to accelerate the computation of the simulation itself.

  12. Notes From the Field: Secondary Task Precision for Cognitive Load Estimation During Virtual Reality Surgical Simulation Training.

    Science.gov (United States)

    Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W

    2016-03-01

    Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation, and to investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with the nonsimulation baseline, whereas simulator-integrated tutoring and repeated practice had no significant impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL-modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.

  13. Understanding Emergency Care Delivery Through Computer Simulation Modeling.

    Science.gov (United States)

    Laker, Lauren F; Torabi, Elham; France, Daniel J; Froehle, Craig M; Goldlust, Eric J; Hoot, Nathan R; Kasaie, Parastu; Lyons, Michael S; Barg-Walkow, Laura H; Ward, Michael J; Wears, Robert L

    2018-02-01

    In 2017, Academic Emergency Medicine convened a consensus conference entitled, "Catalyzing System Change through Health Care Simulation: Systems, Competency, and Outcomes." This article, a product of the breakout session on "understanding complex interactions through systems modeling," explores the role that computer simulation modeling can and should play in research and development of emergency care delivery systems. This article discusses areas central to the use of computer simulation modeling in emergency care research. The four central approaches to computer simulation modeling are described (Monte Carlo simulation, system dynamics modeling, discrete-event simulation, and agent-based simulation), along with problems amenable to their use and relevant examples in emergency care. Also discussed are an introduction to available software modeling platforms, how to explore their use for research, and a research agenda for computer simulation modeling. Through this article, our goal is to enhance adoption of computer simulation, a set of methods that hold great promise in addressing emergency care organization and design challenges. © 2017 by the Society for Academic Emergency Medicine.
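    Of the four approaches listed, discrete-event simulation is the usual workhorse for patient-flow questions. As a minimal illustration (not taken from the article), the sketch below models a one-provider emergency department as an M/M/1 queue with exponential interarrival and service times; for the chosen rates the mean time in system has the known analytic value 1/(μ − λ) = 1 hour, which the simulation approximately reproduces.

```python
import random

def mm1_ed(arrival_rate=5.0, service_rate=6.0, horizon=1000.0, seed=1):
    """One-provider ED modeled as an M/M/1 FIFO queue; returns the mean
    patient time in system (waiting plus treatment), in hours."""
    random.seed(seed)
    t, arrivals = 0.0, []
    while t < horizon:
        t += random.expovariate(arrival_rate)    # exponential interarrivals
        arrivals.append(t)
    busy_until, times_in_system = 0.0, []
    for a in arrivals:
        start = max(a, busy_until)               # wait if the provider is busy
        busy_until = start + random.expovariate(service_rate)
        times_in_system.append(busy_until - a)
    return sum(times_in_system) / len(times_in_system)

print(f"mean time in system: {mm1_ed():.2f} h")  # theory: 1/(6-5) = 1 h
```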

  14. A portable high-quality random number generator for lattice field theory simulations

    International Nuclear Information System (INIS)

    Luescher, M.

    1993-09-01

    The theory underlying a proposed random number generator for numerical simulations in elementary particle physics and statistical mechanics is discussed. The generator is based on an algorithm introduced by Marsaglia and Zaman, with an important added feature leading to demonstrably good statistical properties. It can be implemented exactly on any computer complying with the IEEE-754 standard for single precision floating point arithmetic. (orig.)
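    The Marsaglia-Zaman recursion the generator builds on is a subtract-with-borrow sequence; Luescher's added feature is a "luxury" step that discards part of each batch of numbers to break up correlations, giving the demonstrably good statistical properties mentioned. Below is a sketch of just the base recursion with the standard (r, s) = (24, 10) lags; the discarding step and the exact IEEE-754 single-precision implementation are omitted.

```python
def swb24(seed_words, n):
    """Marsaglia-Zaman subtract-with-borrow: x[i] = x[i-10] - x[i-24] - carry
    (mod 2**24). Luescher's added feature -- discarding part of each batch
    ('luxury') to break up correlations -- is omitted from this sketch."""
    b, r, s = 2**24, 24, 10
    state = list(seed_words)          # 24 seed words in [1, 2**24)
    assert len(state) == r
    carry, out = 0, []
    for _ in range(n):
        x = state[-s] - state[-r] - carry
        carry = 1 if x < 0 else 0
        state.append(x + b if x < 0 else x)
        state.pop(0)
        out.append(state[-1] / b)     # uniform deviate in [0, 1)
    return out

print(swb24([69069 * k % 2**24 for k in range(1, 25)], 5))
```

    The base 2^24 is what makes an exact implementation possible in IEEE-754 single precision: every state word is an exactly representable float.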

  15. Measurement precision and efficiency of multidimensional computer adaptive testing of physical functioning using the pediatric evaluation of disability inventory.

    Science.gov (United States)

    Haley, Stephen M; Ni, Pengsheng; Ludlow, Larry H; Fragala-Pinkham, Maria A

    2006-09-01

    To compare the measurement efficiency and precision of a multidimensional computer adaptive testing (M-CAT) application with a unidimensional CAT (U-CAT) application, using item bank data from 2 of the functional skills scales of the Pediatric Evaluation of Disability Inventory (PEDI). Using existing PEDI mobility and self-care item banks, we compared the stability of item calibrations and model fit between unidimensional and multidimensional Rasch models and compared the efficiency and precision of the U-CAT- and M-CAT-simulated assessments to a random draw of items. Pediatric rehabilitation hospital and clinics. Clinical and normative samples. Not applicable. Not applicable. The M-CAT had greater levels of precision and efficiency than the separate mobility and self-care U-CAT versions when using a similar number of items for each PEDI subdomain. Equivalent estimation of mobility and self-care scores can be achieved with a 25% to 40% item reduction with the M-CAT compared with the U-CAT. M-CAT applications appear to have both precision and efficiency advantages compared with separate U-CAT assessments when content subdomains have a high correlation. Practitioners may also realize interpretive advantages of reporting test score information for each subdomain when separate clinical inferences are desired.

  16. Flexible single molecule simulation of reaction-diffusion processes

    International Nuclear Information System (INIS)

    Hellander, Stefan; Loetstedt, Per

    2011-01-01

    An algorithm is developed for simulation of the motion and reactions of single molecules at a microscopic level. The molecules diffuse in a solvent and react with each other or a polymer, and molecules can dissociate. Such simulations are of interest e.g. in molecular biology. The algorithm is similar to the Green's function reaction dynamics (GFRD) algorithm by van Zon and ten Wolde, where longer time steps can be taken by computing the probability density functions (PDFs) and then sampling from the distribution functions. Our computation of the PDFs is much less complicated than GFRD and more flexible. The solution of the partial differential equation for the PDF is split into two steps to simplify the calculations. The sampling is without splitting error in two of the coordinate directions for a pair of molecules and a molecule-polymer interaction, and is approximate in the third direction. The PDF is obtained either from an analytical solution or a numerical discretization. The errors due to the operator splitting, the partitioning of the system, and the numerical approximations are analyzed. The method is applied to three different systems involving up to four reactions. Comparisons with other mesoscopic and macroscopic models show excellent agreement.
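    The common core of GFRD-style methods is that, between reaction events, free diffusion can be advanced over a long step by sampling its Green's function exactly, rather than by taking many small Brownian steps. A minimal sketch of that idea alone (no reactions, polymers, or boundaries; the diffusion coefficient is an illustrative value for a protein in water):

```python
import numpy as np

# Free diffusion over a step dt is sampled exactly from its Green's
# function: a Gaussian with variance 2*D*dt per coordinate. No reactions
# or boundaries here; D is an illustrative value for a protein in water.
def diffuse(positions, D, dt, rng):
    return positions + rng.normal(0.0, np.sqrt(2.0 * D * dt), positions.shape)

rng = np.random.default_rng(42)
pos = np.zeros((100, 3))                        # 100 molecules at the origin
pos = diffuse(pos, D=1e-12, dt=1e-3, rng=rng)   # one long step; m^2/s and s
print(np.mean(np.sum(pos**2, axis=1)))          # ~ 6*D*dt = 6e-15 m^2
```

    The hard part, and the subject of the paper, is doing the analogous exact or near-exact sampling when reactive partners and a polymer are nearby.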

  17. Analyzing Robotic Kinematics Via Computed Simulations

    Science.gov (United States)

    Carnahan, Timothy M.

    1992-01-01

    Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.

  18. Computer Simulations, Disclosure and Duty of Care

    Directory of Open Access Journals (Sweden)

    John Barlow

    2006-05-01

    Full Text Available Computer simulations provide cost-effective methods for manipulating and modeling 'reality'. However they are not real. They are imitations of a system or event, real or fabricated, and as such mimic, duplicate or represent that system or event. The degree to which a computer simulation aligns with and reproduces the 'reality' of the system or event it attempts to mimic or duplicate depends upon many factors, including the efficiency of the simulation algorithm, the processing power of the computer hardware used to run the simulation model, and the expertise, assumptions and prejudices of those concerned with designing, implementing and interpreting the simulation output. Computer simulations in particular are increasingly replacing physical experimentation in many disciplines, and as a consequence are used to underpin quite significant decision-making which may impact on 'innocent' third parties. In this context, this paper examines two interrelated issues: Firstly, how much and what kind of information should a simulation builder be required to disclose to potential users of the simulation? Secondly, what are the implications for a decision-maker who acts on the basis of their interpretation of a simulation output without any reference to its veracity, which may in turn compromise the safety of other parties?

  19. Creating science simulations through Computational Thinking Patterns

    Science.gov (United States)

    Basawapatna, Ashok Ram

    Computational thinking aims to outline fundamental skills from computer science that everyone should learn. As currently defined, with help from the National Science Foundation (NSF), these skills include problem formulation, logically organizing data, automating solutions through algorithmic thinking, and representing data through abstraction. One aim of the NSF is to integrate these and other computational thinking concepts into the classroom. End-user programming tools offer a unique opportunity to accomplish this goal. An end-user programming tool that allows students with little or no prior experience the ability to create simulations based on phenomena they see in-class could be a first step towards meeting most, if not all, of the above computational thinking goals. This thesis describes the creation, implementation and initial testing of a programming tool, called the Simulation Creation Toolkit, with which users apply high-level agent interactions called Computational Thinking Patterns (CTPs) to create simulations. Employing Computational Thinking Patterns obviates lower behavior-level programming and allows users to directly create agent interactions in a simulation by making an analogy with real world phenomena they are trying to represent. Data collected from 21 sixth grade students with no prior programming experience and 45 seventh grade students with minimal programming experience indicates that this is an effective first step towards enabling students to create simulations in the classroom environment. Furthermore, an analogical reasoning study that looked at how users might apply patterns to create simulations from high- level descriptions with little guidance shows promising results. These initial results indicate that the high level strategy employed by the Simulation Creation Toolkit is a promising strategy towards incorporating Computational Thinking concepts in the classroom environment.

  20. Computationally efficient method for optical simulation of solar cells and their applications

    Science.gov (United States)

    Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.

    2013-01-01

    This paper presents two novel implementations of the Differential method for solving the Maxwell equations in nanostructured optoelectronic solid-state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability problem which arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms can handle structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
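    The instability being tackled is concrete: evanescent modes contribute factors like exp(±κd) to T-matrix products, and for deep structures these exceed the roughly exp(±709) range of IEEE double precision, so growing and decaying contributions can no longer be multiplied and cancelled numerically. A small demonstration of the failure and of the multiple-precision remedy, using Python's mpmath as a stand-in for whatever arbitrary-precision library the authors actually used:

```python
import math
import mpmath

kappa_d = 800.0  # decay constant x layer thickness for a deep evanescent mode

try:
    math.exp(kappa_d)                       # IEEE double overflows near exp(709)
except OverflowError:
    print("double precision overflows at exp(800)")

mpmath.mp.dps = 400                         # work with ~400 significant digits
big = mpmath.exp(kappa_d)                   # ~ 10**347, representable here
print(mpmath.nstr(big * mpmath.exp(-kappa_d), 10))  # growing x decaying -> 1.0
```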

  1. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    Microcontroller (AT89C51)-based electronics has been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform, and its frequency is multiplied by a factor of ten through a frequency multiplier circuit realized with a phase-locked loop. An octal buffer is used to store the calculated frequency, which in turn is fed to the AT89C51 microcontroller, interfaced with a liquid crystal display for the display of the frequency as well as the corresponding pressure in user-friendly units. The electronics is interfaced with a computer using RS232 for automatic data acquisition, computation and storage; the data acquisition software is programmed in Visual Basic 6.0, making it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz, and pressure up to 275 MPa with a resolution of 0.001 MPa, within a measurement uncertainty of 0.025%. The details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.

  2. Simulation of a small computer of the TRA-1001 type on the BESM computer

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    1975-01-01

    The purpose of simulating one computer on another, and possible ways of doing so, are considered. An emulator (simulation program) for a small computer of the TRA-1001 type on the BESM-6 computer is presented. The basic elements of the simulated computer are the following: memory (8 K words), central processor, program-controlled input-output channel, interrupt circuit, and computer panel. Operation of the input-output devices (the ASP-33 and FS-1500 teletypes) is also simulated. In actual operation, the emulator has been used for translating programs prepared on punched cards with the aid of the SLANG-1 translator on the BESM-6 computer. Debugging of the translator from the COPLAN language has also been carried out with the aid of the emulator

  3. Computer simulation of plastic deformation in irradiated metals

    International Nuclear Information System (INIS)

    Colak, U.

    1989-01-01

    A computer-based model is developed for the localized plastic deformation in irradiated metals by dislocation channeling, and it is applied to irradiated single crystals of niobium. In the model, the concentrated plastic deformation in the dislocation channels is postulated to occur by virtue of the motion of dislocations in a series of pile-ups on closely spaced parallel slip planes. The dynamics of this dislocation motion is governed by an experimentally determined dependence of dislocation velocity on shear stress. This leads to a set of coupled differential equations for the positions of the individual dislocations in the pile-up as a function of time. Shear displacement in the channel region is calculated from the total distance traveled by the dislocations. The macroscopic shape change in single-crystal metal sheet samples is determined by the axial displacement produced by the shear displacements in the dislocation channels. Computer simulations are performed for the plastic deformation up to 20% engineering strain at a constant strain rate. Results of the computer calculations are compared with experimental observations of the shear stress-engineering strain curve obtained in tensile tests described in the literature. Agreement between the calculated and experimental stress-strain curves is obtained for a shear displacement of 1.20-1.25 μm and 1000 active slip planes per channel, which is reasonable in view of experimental observations
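    The coupled equations described here are straightforward to set up: each dislocation i feels the applied shear stress plus the elastic stress of the others (proportional to 1/(x_i − x_j) for parallel dislocations on the same plane) and glides with an empirical velocity-stress law of the form v = v0(τ/τ0)^m. A minimal explicit-Euler sketch with illustrative numbers follows; the actual niobium velocity law, elastic constants, and exponent are those determined in the paper.

```python
import numpy as np

# Each dislocation feels the applied stress plus elastic interactions
# ~ A/(x_i - x_j) with the others, and glides with the empirical law
# v = v0 * (tau/tau0)**m. All numbers are illustrative, not the niobium
# parameters used in the paper.
def step(x, tau_app, A=1.0, v0=1.0, tau0=1.0, m=2.0, dt=1e-4):
    dx = x[:, None] - x[None, :]
    np.fill_diagonal(dx, np.inf)                 # no self-interaction
    tau = tau_app + A * np.sum(1.0 / dx, axis=1)
    tau = np.maximum(tau, 0.0)                   # glide only under forward stress
    return x + v0 * (tau / tau0) ** m * dt

x = -np.arange(10, 0, -1, dtype=float)           # a 10-dislocation pile-up
for _ in range(1000):
    x = step(x, tau_app=2.0)
print(x)                                         # lead dislocations run ahead
```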

  4. A computer-simulated liver phantom (virtual liver phantom) for multidetector computed tomography evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Funama, Yoshinori [Kumamoto University, Department of Radiological Sciences, School of Health Sciences, Kumamoto (Japan); Awai, Kazuo; Nakayama, Yoshiharu; Liu, Da; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Miyazaki, Osamu; Goto, Taiga [Hitachi Medical Corporation, Tokyo (Japan); Hori, Shinichi [Gate Tower Institute of Image Guided Therapy, Osaka (Japan)

    2006-04-15

    The purpose of this study was to develop a computer-simulated liver phantom for hepatic CT studies. A computer-simulated liver phantom was mathematically constructed on a computer workstation. The computer-simulated phantom was calibrated using real CT images acquired by an actual four-detector CT. We added an inhomogeneous texture to the simulated liver by referring to CT images of chronically damaged human livers. The mean CT number of the simulated liver was 60 HU, and we added numerous 5- to 10-mm structures with 60±10 HU. To mimic liver tumors we added nodules measuring 8, 10, and 12 mm in diameter with CT numbers of 60±10, 60±15, and 60±20 HU. Five radiologists visually evaluated the similarity of the texture of the computer-simulated liver phantom and a real human liver to confirm the appropriateness of the virtual liver images, using a five-point scale. The total score was 44 for two radiologists, and 42, 41, and 39 for one radiologist each. They judged the textures of the virtual liver to be comparable to those of a human liver. Our computer-simulated liver phantom is a promising tool for the evaluation of the image quality and diagnostic performance of hepatic CT imaging. (orig.)
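    A toy two-dimensional analogue makes the construction concrete: a uniform background at the quoted mean CT number, Gaussian texture at roughly the quoted spread, and an inserted low-contrast nodule. Everything below is a simplified illustration; the actual phantom is three-dimensional and calibrated against images from a real four-detector CT.

```python
import numpy as np

# Toy 2-D analogue of the phantom: 60 HU background, ~60+/-10 HU texture,
# and one inserted low-contrast nodule, per the figures quoted above.
rng = np.random.default_rng(7)
liver = np.full((256, 256), 60.0)
liver += rng.normal(0.0, 10.0, liver.shape)          # inhomogeneous texture

yy, xx = np.mgrid[:256, :256]
nodule = (xx - 128) ** 2 + (yy - 128) ** 2 <= 5**2   # ~10 px diameter nodule
liver[nodule] = rng.normal(60.0, 20.0, nodule.sum()) # 60+/-20 HU nodule

print(round(liver.mean(), 1), round(liver[nodule].std(), 1))
```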

  5. Precision Scaling of Neural Networks for Efficient Audio Processing

    OpenAIRE

    Ko, Jong Hwan; Fromm, Josh; Philipose, Matthai; Tashev, Ivan; Zarar, Shuayb

    2017-01-01

    While deep neural networks have shown powerful performance in many audio applications, their large computation and memory demand has been a challenge for real-time processing. In this paper, we study the impact of scaling the precision of neural networks on the performance of two common audio processing tasks, namely, voice-activity detection and single-channel speech enhancement. We determine the optimal pair of weight/neuron bit precision by exploring its impact on both the performance and ...
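    The precision scaling being studied amounts to re-representing weights and activations with fewer bits. Below is a minimal sketch of uniform symmetric weight quantization, a generic scheme shown only for illustration; the paper's actual method and the optimal weight/neuron bit widths it reports are its own contribution.

```python
import numpy as np

# Uniform symmetric per-tensor quantization of weights to k bits: map the
# weight range onto a (2**(k-1) - 1)-level grid and round to it.
def quantize(w, k):
    scale = np.max(np.abs(w)) / (2 ** (k - 1) - 1)
    return np.round(w / scale) * scale

w = np.random.default_rng(3).standard_normal(5).astype(np.float32)
for k in (8, 4, 2):
    print(k, "bits, max abs error:", np.max(np.abs(w - quantize(w, k))))
```

    Halving the bit width roughly doubles the quantization error while halving memory traffic, which is the performance/accuracy trade-off the paper explores for voice-activity detection and speech enhancement.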

  6. Computer Simulations of Lipid Bilayers and Proteins

    DEFF Research Database (Denmark)

    Sonne, Jacob

    2006-01-01

    The importance of computer simulations in lipid bilayer research has become more prominent over the last couple of decades, and as computers get even faster, simulations will play an increasingly important part in understanding the processes that take place in and across cell membranes. This thesis...... entitled Computer simulations of lipid bilayers and proteins describes two molecular dynamics (MD) simulation studies of pure lipid bilayers as well as a study of a transmembrane protein embedded in a lipid bilayer matrix. Below follows a brief overview of the thesis. Chapter 1. This chapter is a short...... in the succeeding chapters is presented. Details on system setups, simulation parameters and other technicalities can be found in the relevant chapters. Chapter 3, DPPC lipid parameters: The quality of MD simulations is intimately dependent on the empirical potential energy function and its parameters, i...

  7. Simulation of Si:P spin-based quantum computer architecture

    International Nuclear Information System (INIS)

    Chang Yiachung; Fang Angbo

    2008-01-01

    We present realistic simulations of single and double phosphorus donors in a silicon-based quantum computer design by solving a valley-orbit coupled effective-mass equation describing phosphorus donors in a strained silicon quantum well (QW). Using a generalized unrestricted Hartree-Fock method, we solve the two-electron effective-mass equation with quantum well confinement and realistic gate potentials. The effects of QW width, gate voltages, donor separation, and donor position shift on the lowest singlet and triplet energies and their charge distributions for a neighboring donor pair in the quantum computer (QC) architecture are analyzed. The gate tunability is defined and evaluated for a typical QC design. Estimates are obtained for the duration of the spin half-swap gate operation.

  8. Simulation of Sweep-Jet Flow Control, Single Jet and Full Vertical Tail

    Science.gov (United States)

    Childs, Robert E.; Stremel, Paul M.; Garcia, Joseph A.; Heineck, James T.; Kushner, Laura K.; Storms, Bruce L.

    2016-01-01

    This work is a simulation technology demonstration of sweep jet flow control, used to suppress boundary layer separation and increase the maximum achievable load coefficients. A sweep jet is a discrete Coanda jet that oscillates in the plane parallel to an aerodynamic surface. It injects mass and momentum in the approximate streamwise direction. It also generates turbulent eddies at the oscillation frequency, which are typically large relative to the scales of boundary layer turbulence, and which augment mixing across the boundary layer to attack flow separation. Simulations of a fluidic oscillator, the sweep jet emerging from a nozzle downstream of the oscillator, and an array of sweep jets which suppresses boundary layer separation are performed. Simulation results are compared to data from a dedicated validation experiment of a single oscillator and its sweep jet, and from a wind tunnel test of a full-scale Boeing 757 vertical tail augmented with an array of sweep jets. A critical step in the work is the development of realistic time-dependent sweep jet inflow boundary conditions, derived from the results of the single-oscillator simulations, which create the sweep jets in the full-tail simulations. Simulations were performed using the computational fluid dynamics (CFD) solver Overflow, with high-order spatial discretization and a range of turbulence modeling. Good results were obtained for all flows simulated, when suitable turbulence modeling was used.

  9. An ultra-precision tool nanoindentation instrument for replication of single point diamond tool cutting edges

    Science.gov (United States)

    Cai, Yindi; Chen, Yuan-Liu; Xu, Malu; Shimizu, Yuki; Ito, So; Matsukuma, Hiraku; Gao, Wei

    2018-05-01

    Precision replication of the diamond tool cutting edge is required for non-destructive tool metrology. This paper presents an ultra-precision tool nanoindentation instrument designed and constructed for replication of the cutting edge of a single point diamond tool onto a selected soft metal workpiece by precisely indenting the tool cutting edge into the workpiece surface. The instrument has the ability to control the indentation depth with a nanometric resolution, enabling the replication of tool cutting edges with high precision. The motion of the diamond tool along the indentation direction is controlled by the piezoelectric actuator of a fast tool servo (FTS). An integrated capacitive sensor of the FTS is employed to detect the displacement of the diamond tool. The soft metal workpiece is attached to an aluminum cantilever whose deflection is monitored by another capacitive sensor, referred to as an outside capacitive sensor. The indentation force and depth can be accurately evaluated from the diamond tool displacement, the cantilever deflection and the cantilever spring constant. Experiments were carried out by replicating the cutting edge of a single point diamond tool with a nose radius of 2.0 mm on a copper workpiece surface. The profile of the replicated tool cutting edge was measured using an atomic force microscope (AFM). The effectiveness of the instrument in precision replication of diamond tool cutting edges is well-verified by the experimental results.

  10. Dosimetry in radiotherapy and brachytherapy by Monte-Carlo GATE simulation on computing grid; Dosimetrie en radiotherapie et curietherapie par simulation Monte-Carlo GATE sur grille informatique

    Energy Technology Data Exchange (ETDEWEB)

    Thiam, Ch O

    2007-10-15

    Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.), which needs to be precise and fast. The G.A.T.E. platform for Monte-Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel a validation of the G.A.T.E. platform for the modelling of low-energy electron and photon sources and the optimized use of grid infrastructures to reduce simulation computing time. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte-Carlo studies. A detailed study was made of the energy deposited during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine 125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show a good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte-Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary to compute a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G

  11. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    Science.gov (United States)

    Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2017-10-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.

  12. Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera

    International Nuclear Information System (INIS)

    Uesaka, M.; Ueda, T.; Kozawa, T.; Kobayashi, T.

    1998-01-01

    Precise measurement of a subpicosecond electron single bunch by the femtosecond streak camera is presented. The subpicosecond electron single bunch of energy 35 MeV was generated by the achromatic magnetic pulse compressor at the S-band linear accelerator of the Nuclear Engineering Research Laboratory (NERL), University of Tokyo. The electric charge per bunch is 0.5 nC, and the horizontal and vertical beam sizes are 3.3 and 5.5 mm (full width at half maximum; FWHM), respectively. The pulse shape of the electron single bunch is measured via Cherenkov radiation emitted in air by the femtosecond streak camera. Optical parameters of the optical measurement system were optimized based on extensive experiments and numerical analyses in order to achieve a subpicosecond time resolution. Using the optimized optical measurement system, the subpicosecond pulse shape, its variation for different rf phases in the accelerating tube, the jitter of the total system and the correlation between measured streak images and calculated longitudinal phase space distributions were precisely evaluated. This measurement system is going to be utilized in several subpicosecond analyses for radiation physics and chemistry. (orig.)

  13. Radiotherapy Monte Carlo simulation using cloud computing technology.

    Science.gov (United States)

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principle.
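
    The cost observation can be made concrete with a little arithmetic. Assuming per-machine billing in whole-hour increments (one plausible reading of the abstract), each of n machines is charged ceil(T/n) hours for a T-hour serial workload, so the relative cost n*ceil(T/n)/T is exactly 1 whenever n divides T:

        import math

        T = 12  # total serial simulation time in hours (assumed value)
        for n in (1, 2, 3, 4, 5, 6, 8, 12):
            completion = T / n                          # ideal 1/n scaling
            rel_cost = n * math.ceil(T / n) / T         # whole-hour billing
            print(f"n={n:2d}  time={completion:5.2f} h  relative cost={rel_cost:.2f}")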

  14. Radiotherapy Monte Carlo simulation using cloud computing technology

    International Nuclear Information System (INIS)

    Poole, C.M.; Cornelius, I.; Trapp, J.V.; Langton, C.M.

    2012-01-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principle.

  15. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method, which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available; however, at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy, numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression, which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • The Newell, sparse integration and asymptotic methods are compared across all ranges.
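
    The large-distance failure mode is classic floating-point cancellation: the analytic tensor entries are differences of near-equal terms. The toy below is not the Newell formula itself, merely the same phenomenon in one dimension, with log1p as the numerically stable reference:

        import numpy as np

        for x in (1e3, 1e6, 1e8):
            x32 = np.float32(x)
            naive = np.log(x32 + np.float32(1.0)) - np.log(x32)  # float32 throughout
            stable = np.log1p(1.0 / x)                           # float64 reference
            print(f"x={x:.0e}  naive float32={float(naive):.3e}  stable={stable:.3e}")
        # At x=1e8 the float32 difference collapses to exactly zero, just as
        # the single precision tensor expression loses all accuracy far away.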

  16. The cellular environment in computer simulations of radiation-induced damage to DNA

    International Nuclear Information System (INIS)

    Moiseenko, V.V.; Waker, A.J.; Prestwich, W.V.

    1998-01-01

    Radiation-induced DNA single- and double-strand breaks were modeled for 660 keV photon radiation and scavenger capacity mimicking the cellular environment. Atomistic representation of DNA in B form with a first hydration shell was utilized to model direct and indirect damage. Monte Carlo generated electron tracks were used to model energy deposition in matter and to derive initial spatial distributions of species which appear in the medium following radiolysis. Diffusion of species was followed with time, and their reactions with DNA and each other were modeled in an encounter-controlled manner. Three methods to account for hydroxyl radical diffusion in a cellular environment were tested: assumed exponential survival, time-limited modeling and modeling of reactions between hydroxyl radicals and scavengers in an encounter-controlled manner. Although the method based on modeling scavenging in an encounter-controlled manner is more precise, it requires substantially more computer resources than either the exponential or time-limiting method. Scavenger concentrations of 0.5 and 0.15 M were considered using exponential and encounter-controlled methods with reaction rate set at 3 × 10⁹ dm³ mol⁻¹ s⁻¹. Diffusion length and strand break yields, predicted by these two methods for the same scavenger molarity, were different by 20%-30%. The method based on limiting time of chemistry follow-up to 10⁻⁹ s leads to DNA damage and radical diffusion estimates similar to 0.5 M scavenger concentration in the other two methods. The difference observed in predictions made by the methods considered could be tolerated in computer simulations of DNA damage. (orig.)
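
    The scavenging figures quoted above imply a characteristic hydroxyl radical lifetime tau = 1/(kC), which makes clear why a 1 ns chemistry cut-off behaves like the 0.5 M scavenger case:

        k = 3e9                    # reaction rate, dm^3 mol^-1 s^-1
        for C in (0.5, 0.15):      # scavenger concentration, mol dm^-3
            tau = 1.0 / (k * C)
            print(f"C = {C:4.2f} M: mean lifetime tau = {tau:.2e} s")
        # 0.5 M gives ~6.7e-10 s, close to the 1e-9 s follow-up limit.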

  17. The cellular environment in computer simulations of radiation-induced damage to DNA

    International Nuclear Information System (INIS)

    Moiseenko, V.V.; Hamm, R.N.; Waker, A.J.; Prestwich, W.V.

    1988-01-01

    Radiation-induced DNA single- and double-strand breaks were modeled for 660 keV photon radiation and scavenger capacity mimicking the cellular environment. Atomistic representation of DNA in B form with a first hydration shell was utilized to model direct and indirect damage. Monte Carlo generated electron tracks were used to model energy deposition in matter and to derive initial spatial distributions of species which appear in the medium following radiolysis. Diffusion of species was followed with time, and their reactions with DNA and each other were modeled in an encounter-controlled manner. Three methods to account for hydroxyl radical diffusion in a cellular environment were tested: assumed exponential survival, time-limited modeling and modeling of reactions between hydroxyl radicals and scavengers in an encounter-controlled manner. Although the method based on modeling scavenging in an encounter-controlled manner is more precise, it requires substantially more computer resources than either the exponential or time-limiting method. Scavenger concentrations of 0.5 and 0.15 M were considered using exponential and encounter-controlled methods with reaction rate set at 3 × 10⁹ dm³ mol⁻¹ s⁻¹. Diffusion length and strand break yields, predicted by these two methods for the same scavenger molarity, were different by 20%-30%. The method based on limiting time of chemistry follow-up to 10⁻⁹ s leads to DNA damage and radical diffusion estimates similar to 0.5 M scavenger concentration in the other two methods. The difference observed in predictions made by the methods considered could be tolerated in computer simulations of DNA damage. (author)

  18. FEL simulations using distributed computing

    NARCIS (Netherlands)

    Einstein, J.; Biedron, S.G.; Freund, H.P.; Milton, S.V.; Van Der Slot, P. J M; Bernabeu, G.

    2016-01-01

    While simulation tools are available and have been used regularly for simulating light sources, including Free-Electron Lasers, the increasing availability and lower cost of accelerated computing opens up new opportunities. This paper highlights a method for accelerating and parallelizing code

  19. Accelerator simulation using computers

    International Nuclear Information System (INIS)

    Lee, M.; Zambre, Y.; Corbett, W.

    1992-01-01

    Every accelerator or storage ring system consists of a charged particle beam propagating through a beam line. Although a number of computer programs exist that simulate the propagation of a beam in a given beam line, only a few provide the capabilities for designing, commissioning and operating the beam line. This paper shows how a "multi-track" simulation and analysis code can be used for these applications.

  20. Validation of the Gate simulation platform in single photon emission computed tomography and application to the development of a complete 3-dimensional reconstruction algorithm

    International Nuclear Information System (INIS)

    Lazaro, D.

    2003-10-01

    Monte Carlo simulations are currently considered in nuclear medical imaging as a powerful tool to design and optimize detection systems, and also to assess reconstruction algorithms and correction methods for degrading physical effects. Among the many simulators available, none is considered a standard in nuclear medical imaging: this fact motivated the development of a new generic Monte Carlo simulation platform (GATE), based on GEANT4 and dedicated to SPECT/PET (single photon emission computed tomography / positron emission tomography) applications. During this thesis we participated in the development of the GATE platform within an international collaboration. GATE was validated in SPECT by modelling two gamma cameras with different geometries, one dedicated to small-animal imaging and the other used in a clinical context (Philips AXIS), and by comparing the results obtained with GATE simulations with experimental data. The simulation results accurately reproduce the measured performances of both gamma cameras. The GATE platform was then used to develop a new 3-dimensional reconstruction method, F3DMC (fully 3-dimensional Monte Carlo), which consists of computing with Monte Carlo simulation the transition matrix used in an iterative reconstruction algorithm (in this case, ML-EM), including within the transition matrix the main physical effects degrading the image formation process. The results obtained with the F3DMC method were compared to the results obtained with three other more conventional methods (FBP, ML-EM, ML-EMC) for different phantoms. The results of this study show that F3DMC improves the reconstruction efficiency, the spatial resolution and the signal-to-noise ratio with a satisfactory quantification of the images. These results should be confirmed by clinical experiments, and they open the door to a unified reconstruction method which could be applied in SPECT but also in PET. (author)
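
    For readers unfamiliar with it, the ML-EM update at the heart of F3DMC is short. In F3DMC the system (transition) matrix is estimated by Monte Carlo simulation so that it folds in the degrading physics; the sketch below uses a random non-negative matrix instead, so it shows only the generic algorithm.

        import numpy as np

        def mlem(A, proj, n_iter=200):
            """ML-EM: x_j <- x_j / s_j * sum_i A_ij * y_i / (A x)_i."""
            x = np.ones(A.shape[1])              # uniform initial image
            sens = A.sum(axis=0)                 # sensitivity image
            for _ in range(n_iter):
                expected = A @ x                 # forward projection
                ratio = np.where(expected > 0, proj / expected, 0.0)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x

        rng = np.random.default_rng(1)
        A = rng.random((40, 16))                 # stand-in transition matrix
        x_true = rng.random(16)
        x_rec = mlem(A, A @ x_true)              # noiseless projections
        print(np.abs(x_rec - x_true).max())      # error shrinks with iterations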

  1. Numerical Simulation Analysis of High-precision Dispensing Needles for Solid-liquid Two-phase Grinding

    Science.gov (United States)

    Li, Junye; Hu, Jinglei; Wang, Binyu; Sheng, Liang; Zhang, Xinming

    2018-03-01

    To investigate the effect of abrasive flow polishing on variable-diameter pipe parts, high-precision dispensing needles were taken as the research object and the polishing process was simulated numerically. The distributions of the dynamic pressure and the turbulence viscosity of the abrasive flow field in the high-precision dispensing needle were analyzed under different volume-fraction conditions. The comparative analysis shows that abrasive grain polishing of high-precision dispensing needles is effective: controlling the volume fraction of silicon carbide changes the viscosity characteristics of the abrasive flow during the polishing process, so that the polishing quality of the abrasive grains can be controlled.

  2. Precision toxicology based on single cell sequencing: an evolving trend in toxicological evaluations and mechanism exploration.

    Science.gov (United States)

    Zhang, Boyang; Huang, Kunlun; Zhu, Liye; Luo, Yunbo; Xu, Wentao

    2017-07-01

    In this review, we introduce a new concept, precision toxicology: the mode of action of chemical- or drug-induced toxicity can be sensitively and specifically investigated by isolating a small group of cells, or even a single cell, with a typical phenotype of interest, followed by single cell sequencing-based analysis. Precision toxicology can contribute to better detection of subtle intracellular changes in response to exogenous substrates, and thus help researchers find solutions to control or relieve toxicological effects that are serious threats to human health. We give examples of single cell isolation and recommend laser capture microdissection for in vivo studies and flow cytometric sorting for in vitro studies. In addition, we introduce the procedures for single cell sequencing and describe the expected application of these techniques to toxicological evaluations and mechanism exploration, which we believe will become a trend in toxicology.

  3. Imaging by single photon emission computed tomography: interest in the pre surgical check up of epilepsy

    International Nuclear Information System (INIS)

    Biraben, A.; Bernard, A.M.

    1999-01-01

    Single photon emission computed tomography provides a more reliable technique, especially for spatially delimiting the epileptogenic zone before any surgical act. Precise determination of the area is necessary to make sure that the seizures really originate from it, and the functionality of this area is checked to be sure that its ablation will not lead to an unacceptable functional deficit. (N.C.)

  4. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Phillips, Julia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wampler, Cheryl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meisner, Robert [National Nuclear Security Administration (NNSA), Washington, DC (United States)

    2010-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality, and scientific details); to quantify critical margins and uncertainties; and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  5. Computer Simulation in Information and Communication Engineering

    CERN Multimedia

    Anton Topurov

    2005-01-01

    CSICE'05 Sofia, Bulgaria 20th - 22nd October, 2005 On behalf of the International Scientific Committee, we would like to invite you all to Sofia, the capital city of Bulgaria, to the International Conference in Computer Simulation in Information and Communication Engineering CSICE'05. The Conference is aimed at facilitating the exchange of experience in the field of computer simulation gained not only in traditional fields (Communications, Electronics, Physics...) but also in the areas of biomedical engineering, environment, industrial design, etc. The objective of the Conference is to bring together lecturers, researchers and practitioners from different countries, working in the fields of computer simulation in information engineering, in order to exchange information and bring new contributions to this important field of engineering design and education. The Conference will bring you the latest ideas and development of the tools for computer simulation directly from their inventors. Contribution describ...

  6. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the least dangerous one for the operator. It is therefore mandatory to have an efficient dosimetry simulation code with accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. In the case of 3D industrial scenes composed of meshes, the computation cost resides in the fast computation of all of the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800000 rays, 800000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst-case scenario, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with a very small loss of performance (less than 5% overhead), and without any need for scene-dependent tuning. (author)
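
    A sketch of what such an adaptive-precision scheme can look like: a Moller-Trumbore intersection test run in float32, and re-run in float64 only when a barycentric coordinate falls within a tolerance of a triangle boundary. The tolerance value and the CPU-only control flow are illustrative assumptions; the paper's implementation splits this work between GPU and CPU (and the ray-parameter t test is omitted for brevity).

        import numpy as np

        EPS_EDGE = 1e-4   # "ambiguity" tolerance; assumed, not from the paper

        def intersect(orig, direc, v0, v1, v2, dtype):
            """Moller-Trumbore test in the given floating-point type."""
            o, d, a, b, c = (np.asarray(p, dtype=dtype)
                             for p in (orig, direc, v0, v1, v2))
            e1, e2 = b - a, c - a
            pvec = np.cross(d, e2)
            det = e1 @ pvec
            if abs(det) < 100 * np.finfo(dtype).eps:
                return None                       # parallel or degenerate
            tvec = o - a
            u = (tvec @ pvec) / det
            v = (d @ np.cross(tvec, e1)) / det
            hit = (0.0 <= u <= 1.0) and (v >= 0.0) and (u + v <= 1.0)
            near_edge = min(abs(u), abs(1 - u), abs(v), abs(1 - u - v)) < EPS_EDGE
            return hit, near_edge

        def robust_intersect(orig, direc, v0, v1, v2):
            """Fast float32 test with a float64 fallback near edges."""
            res = intersect(orig, direc, v0, v1, v2, np.float32)
            if res is None or res[1]:             # degenerate or ambiguous
                res = intersect(orig, direc, v0, v1, v2, np.float64)
            return res is not None and bool(res[0])

        tri = ([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
        print(robust_intersect([0.2, 0.2, 1.0], [0.0, 0.0, -1.0], *tri))  # True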

  7. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  8. Computational simulation of concurrent engineering for aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1992-01-01

    Results are summarized of an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulations methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties - fundamental in developing such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering for propulsion systems and systems in general. Benefits and facets needing early attention in the development are outlined.

  9. Computational simulation for concurrent engineering of aerospace propulsion systems

    Science.gov (United States)

    Chamis, C. C.; Singhal, S. N.

    1993-01-01

    Results are summarized for an investigation to assess the infrastructure available and the technology readiness in order to develop computational simulation methods/software for concurrent engineering. These results demonstrate that development of computational simulation methods for concurrent engineering is timely. Extensive infrastructure, in terms of multi-discipline simulation, component-specific simulation, system simulators, fabrication process simulation, and simulation of uncertainties--fundamental to develop such methods, is available. An approach is recommended which can be used to develop computational simulation methods for concurrent engineering of propulsion systems and systems in general. Benefits and issues needing early attention in the development are outlined.

  10. Computer-Based Simulation Games in Public Administration Education

    OpenAIRE

    Kutergina Evgeniia

    2017-01-01

    Computer simulation, an active learning technique, is now one of the advanced pedagogical technologies. The use of simulation games in the educational process allows students to gain a firsthand understanding of the processes of real life. Public-administration, public-policy and political-science courses increasingly adopt simulation games in universities worldwide. Besides person-to-person simulation games, there are computer-based simulations in public-administration education. Currently...

  11. Inversion based on computational simulations

    International Nuclear Information System (INIS)

    Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.

    1998-01-01

    A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
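
    A minimal, self-contained instance of the adjoint idea: for a linear time-dependent simulation, one forward sweep plus one backward (adjoint) sweep yields the gradient of the misfit with respect to every component of the initial condition at once, however many there are. The periodic diffusion stencil below is symmetric, so its adjoint is the same stencil; this toy is not the paper's infrared light-diffusion model.

        import numpy as np

        def step(u, alpha=0.2):
            """One explicit diffusion step on a periodic 1D grid (symmetric)."""
            return u + alpha * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

        def objective_and_gradient(u0, data, n_steps=100):
            """J = 0.5||u_T - data||^2 and dJ/du0 by adjoint differentiation."""
            u = u0.copy()
            for _ in range(n_steps):
                u = step(u)                       # forward sweep
            residual = u - data
            lam = residual.copy()                 # adjoint "final condition"
            for _ in range(n_steps):
                lam = step(lam)                   # adjoint sweep (A is symmetric)
            return 0.5 * residual @ residual, lam

        rng = np.random.default_rng(0)
        u0, data = rng.normal(size=64), rng.normal(size=64)
        J, grad = objective_and_gradient(u0, data)
        eps = 1e-6
        e = np.zeros(64); e[3] = eps
        J_p, _ = objective_and_gradient(u0 + e, data)
        print((J_p - J) / eps, grad[3])           # the two should agree closely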

  12. Computer-Based Simulation Games in Public Administration Education

    Directory of Open Access Journals (Sweden)

    Kutergina Evgeniia

    2017-12-01

    Full Text Available Computer simulation, an active learning technique, is now one of the advanced pedagogical technologies. The use of simulation games in the educational process allows students to gain a firsthand understanding of the processes of real life. Public-administration, public-policy and political-science courses increasingly adopt simulation games in universities worldwide. Besides person-to-person simulation games, there are computer-based simulations in public-administration education. Currently in Russia the use of computer-based simulation games in Master of Public Administration (MPA) curricula is quite limited. This paper focuses on computer-based simulation games for students of MPA programmes. Our aim was to analyze outcomes of implementing such games in MPA curricula. We have done so by (1) developing three computer-based simulation games about allocating public finances, (2) testing the games in the learning process, and (3) conducting a post-test examination to evaluate the effect of simulation games on students' knowledge of municipal finances. This study was conducted in the National Research University Higher School of Economics (HSE) and in the Russian Presidential Academy of National Economy and Public Administration (RANEPA) during the period of September to December 2015, in Saint Petersburg, Russia. Two groups of students were randomly selected in each university and then randomly allocated either to the experimental or the control group. In control groups (n=12 in HSE, n=13 in RANEPA) students had traditional lectures. In experimental groups (n=12 in HSE, n=13 in RANEPA) students played three simulation games apart from traditional lectures. This exploratory research shows that the use of computer-based simulation games in MPA curricula can improve students' outcomes by 38%. In general, the experimental groups had better performances on the post-test examination (Figure 2). Students in the HSE experimental group had 27.5% better

  13. Advanced Simulation and Computing Fiscal Year 2011-2012 Implementation Plan, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, M; Phillips, J; Hpson, J; Meisner, R

    2010-04-22

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  14. A high precision dual feedback discrete control system designed for satellite trajectory simulator

    Science.gov (United States)

    Liu, Ximin; Liu, Liren; Sun, Jianfeng; Xu, Nan

    2005-08-01

    Cooperating with free-space laser communication terminals, the satellite trajectory simulator is used to test the acquisition, pointing, tracking and communicating performances of the terminals, so the satellite trajectory simulator plays an important role in terminal ground test and verification. Using a double prism, Sun et al. in our group designed a satellite trajectory simulator. In this paper, a high-precision dual feedback discrete control system designed for the simulator is given and a digital simulation of the simulator is made correspondingly. In the dual feedback discrete control system, a Proportional-Integral (PI) controller is used in the velocity feedback loop and a Proportional-Integral-Derivative (PID) controller is used in the position feedback loop. In the controller design, the simplex method is introduced and an improvement to the method is made. According to the transfer function of the control system in the Z domain, the digital simulation of the simulator is given when it is exposed to mechanism error and moment disturbance. Typically, when the mechanism error is 100 µrad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.49 µrad, 6.12 µrad, 4.56 µrad and 4.09 µrad, respectively. When the moment disturbance is 0.1 rad, the residual standard errors of the pitching angle, azimuth angle, x-coordinate position and y-coordinate position are 0.26 µrad, 0.22 µrad, 0.16 µrad and 0.15 µrad, respectively. The digital simulation results demonstrate that the dual feedback discrete control system designed for the simulator can achieve the anticipated high-precision performance.
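
    The loop structure described above, a PI velocity loop nested inside a PID position loop, is easy to sketch. The gains, sampling time and double-integrator plant below are illustrative assumptions, not the simulator's actual parameters.

        class DiscretePID:
            """Positional-form discrete PID; kd=0 gives the PI velocity loop."""
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral, self.prev_err = 0.0, 0.0

            def update(self, err):
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv

        dt = 1e-3
        pos_loop = DiscretePID(kp=0.8, ki=0.1, kd=0.01, dt=dt)   # outer PID
        vel_loop = DiscretePID(kp=5.0, ki=10.0, kd=0.0, dt=dt)   # inner PI

        pos, vel = 0.0, 0.0
        for _ in range(20000):                     # 20 s of simulated time
            vel_cmd = pos_loop.update(1.0 - pos)   # track a unit position step
            accel = vel_loop.update(vel_cmd - vel)
            vel += accel * dt                      # double-integrator plant
            pos += vel * dt
        print(f"position after 20 s: {pos:.4f}")   # should settle near 1.0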

  15. Provably unbounded memory advantage in stochastic simulation using quantum mechanics

    International Nuclear Information System (INIS)

    Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing

    2017-01-01

    Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)

  16. The Australian Computational Earth Systems Simulator

    Science.gov (United States)

    Mora, P.; Muhlhaus, H.; Lister, G.; Dyskin, A.; Place, D.; Appelbe, B.; Nimmervoll, N.; Abramson, D.

    2001-12-01

    Numerical simulation of the physics and dynamics of the entire earth system offers an outstanding opportunity for advancing earth system science and technology but represents a major challenge due to the range of scales and physical processes involved, as well as the magnitude of the software engineering effort required. However, new simulation and computer technologies are bringing this objective within reach. Under a special competitive national funding scheme to establish new Major National Research Facilities (MNRF), the Australian government together with a consortium of universities and research institutions has funded construction of the Australian Computational Earth Systems Simulator (ACcESS). The Simulator, or computational virtual earth, will provide the Australian earth systems science community with the research infrastructure required for simulations of dynamical earth processes at scales ranging from microscopic to global. It will consist of thematic supercomputer infrastructure and an earth systems simulation software system. The Simulator models and software will be constructed over a five year period by a multi-disciplinary team of computational scientists, mathematicians, earth scientists, civil engineers and software engineers. The construction team will integrate numerical simulation models (3D discrete element/lattice solid models, particle-in-cell large deformation finite-element methods, stress reconstruction models, multi-scale continuum models, etc.) with geophysical, geological and tectonic models, through advanced software engineering and visualization technologies. When fully constructed, the Simulator aims to provide the software and hardware infrastructure needed to model solid earth phenomena including global scale dynamics and mineralisation processes, crustal scale processes including plate tectonics, mountain building, interacting fault system dynamics, and micro-scale processes that control the geological, physical and dynamic

  17. Software Engineering for Scientific Computer Simulations

    Science.gov (United States)

    Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.

    2004-11-01

    Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.

  18. The precision of the NDVI derived from AVHRR observations

    International Nuclear Information System (INIS)

    Roderick, M.; Smith, R.; Cridland, S.

    1996-01-01

    Vegetation studies using NOAA-AVHRR data have tended to focus on the use of the normalized difference vegetation index (NDVI). This unitless index is computed using near-infrared and red reflectances, and thus has both an accuracy and a precision. This article reports on a formal statistical framework for assessing the precision of the NDVI derived from NOAA-AVHRR observations. The framework is based on the “best possible” precision concept, which assumes that signal quantization is the only source of observational error. While the radiance resolution of a spectral observation is essentially fixed by the instrument characteristics, the reflectance resolution is the radiance resolution divided by the cosine of the solar zenith angle. Using typical solar zenith angles for AVHRR image acquisitions over Australia, ±0.01 NDVI units is typically the “best possible” precision attainable in the NDVI, although this degrades significantly over dark targets and at large solar zenith angles. Transforming the computed NDVI into a single byte for disk storage results in little or no loss of precision. The framework developed in this article can be adapted to estimate the “best possible” precision of other vegetation indices derived using data from other remote sensing satellites. (author)
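
    The propagation behind these numbers is straightforward: quantization contributes a reflectance step of (radiance resolution)/cos(solar zenith angle) in each band, which the NDVI partial derivatives then scale. A sketch with an assumed radiance resolution, combining the two band errors in quadrature (one common convention, not necessarily the article's):

        import numpy as np

        def ndvi_quantization_error(nir, red, radiance_res, sza_deg):
            """Best-possible NDVI precision from signal quantization alone."""
            dr = radiance_res / np.cos(np.radians(sza_deg))  # reflectance step
            s = nir + red
            d_nir = 2.0 * red / s**2     # dNDVI/dNIR for NDVI=(nir-red)/(nir+red)
            d_red = 2.0 * nir / s**2     # |dNDVI/dRed|
            return np.hypot(d_nir * dr, d_red * dr)

        # Bright vegetated target vs a dark target at a large zenith angle
        print(ndvi_quantization_error(nir=0.40, red=0.08, radiance_res=0.002, sza_deg=30))
        print(ndvi_quantization_error(nir=0.05, red=0.02, radiance_res=0.002, sza_deg=70))
        # ~0.008 for the bright case, ~0.13 for the dark/low-sun case,
        # echoing the +-0.01 typical figure and its degradation.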

  19. Automatic temperature computation for realistic IR simulation

    Science.gov (United States)

    Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe

    2000-07-01

    Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package which accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, every half hour, during the 24 hours preceding the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes (conductivity, absorption, spectral emissivity, density, specific heat, thickness and convection coefficients) are associated with several layers and taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetation (woods). This model of thermal attributes yields a very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer, which computes the history (over 24 hours) of the shadowed parts of the 3D scene, and a library responsible for the thermal computations. The main originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. In this way, the library also supplies other thermal modules, such as a thermal shadow computation tool.

  20. Discrete Event Simulation Computers can be used to simulate the ...

    Indian Academy of Sciences (India)

    IAS Admin

    people who use computers every moment of their waking lives, others even ... How is discrete event simulation different from other kinds of simulation? ... time, energy consumption .... Schedule the CustomerDeparture event for this customer.
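
    The surviving fragment hints at the essence of discrete event simulation: a time-ordered event queue from which events are popped, and whose handlers schedule further events. A minimal single-server sketch, with the CustomerDeparture naming borrowed from the fragment above:

        import heapq

        def simulate_queue(arrivals, service_time):
            """Single-server queue driven by a heap of (time, kind, id) events."""
            events = [(t, "arrival", i) for i, t in enumerate(arrivals)]
            heapq.heapify(events)
            server_free_at = 0.0
            while events:
                t, kind, cust = heapq.heappop(events)
                if kind == "arrival":
                    start = max(t, server_free_at)
                    server_free_at = start + service_time
                    # Schedule the CustomerDeparture event for this customer.
                    heapq.heappush(events, (server_free_at, "departure", cust))
                else:
                    print(f"customer {cust} departs at t={t:.1f}")

        simulate_queue(arrivals=[0.0, 0.5, 2.5], service_time=1.0)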

  1. MUSIDH, multiple use of simulated demographic histories, a novel method to reduce computation time in microsimulation models of infectious diseases.

    Science.gov (United States)

    Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F

    2008-09-01

    Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionately with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death and all other demographic events that should be unrelated to the natural course of an infection, i.e. non-fatal infections. MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if they were the infection histories of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor-of-50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
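
    The mechanics are easy to sketch: sample the expensive demographic history once and run several cheap, independent infection histories against it, as if each were a separate person. The toy below ignores the interaction between individuals that a real transmission model would include, and all rates are invented.

        import numpy as np

        def sample_demographic_history(rng):
            """Expensive part (toy): just a lifespan here."""
            return {"lifespan": rng.exponential(70.0)}

        def run_infection_history(demo, rng, hazard=0.02):
            """Cheap part (toy): is this 'individual' ever infected?"""
            return rng.exponential(1.0 / hazard) < demo["lifespan"]

        def prevalence(n_demo, reuse, seed=0):
            """MUSIDH idea: demography sampled n_demo times, not n_demo*reuse."""
            rng = np.random.default_rng(seed)
            hits = 0
            for _ in range(n_demo):
                demo = sample_demographic_history(rng)
                hits += sum(run_infection_history(demo, rng) for _ in range(reuse))
            return hits / (n_demo * reuse)

        print(prevalence(n_demo=1000, reuse=50))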

  2. HIGHTEX: a computer program for the steady-state simulation of steam-methane reformers used in a nuclear process heat plant

    International Nuclear Information System (INIS)

    Tadokoro, Yoshihiro; Seya, Toko

    1977-08-01

    This report describes the computational model and the input procedure of HIGHTEX, a computer program for steady-state simulation of the steam-methane reformers used in a nuclear process heat plant. The HIGHTEX program rapidly simulates a single reformer tube and treats the reactants as a single phase in the two-dimensional catalyst bed. Outputs of the program are the radial distributions of temperature and reaction products in the catalyst-packed bed, the pressure loss of the packed bed, the stress in the reformer tube, the hydrogen permeation rate through the reformer tube, the heat rate of reaction, and the heat-transfer rate between helium and process gas. The running time (CPU) for a 9-m-long bayonet-type reformer tube is 12 min on a FACOM-230/75. (auth.)

  3. Launch Site Computer Simulation and its Application to Processes

    Science.gov (United States)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed-developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon-driven model that uses commercial off-the-shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  4. Quantum chemistry simulation on quantum computers: theories and experiments.

    Science.gov (United States)

    Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng

    2012-07-14

    It has been claimed that quantum computers can mimic quantum systems efficiently in the polynomial scale. Traditionally, those simulations are carried out numerically on classical computers, which are inevitably confronted with the exponential growth of required resources, with the increasing size of quantum systems. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theories and experiments. We then present a brief introduction to quantum chemistry evaluated via classical computers followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of the static molecular eigenenergy and the simulation of chemical reaction dynamics. Although the experimental development is still behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry over classical computations.

  5. Precision Light Flavor Physics from Lattice QCD

    Science.gov (United States)

    Murphy, David

    In this thesis we present three distinct contributions to the study of light flavor physics using the techniques of lattice QCD. These results are arranged into four self-contained papers. The first two papers concern global fits of the quark mass, lattice spacing, and finite volume dependence of the pseudoscalar meson masses and decay constants, computed in a series of lattice QCD simulations, to partially quenched SU(2) and SU(3) chiral perturbation theory (χPT). These fits determine a subset of the low energy constants of chiral perturbation theory -- in some cases with increased precision, and in other cases for the first time -- which, once determined, can be used to compute other observables and amplitudes in χPT. We also use our formalism to self-consistently probe the behavior of the (asymptotic) chiral expansion as a function of the quark masses by repeating the fits with different subsets of the data. The third paper concerns the first lattice QCD calculation of the semileptonic K0 → π−ℓ+νℓ (Kℓ3) form factor at vanishing momentum transfer, f+Kπ(0), with physical mass domain wall quarks. The value of this form factor can be combined with a Standard Model analysis of the experimentally measured K0 → π−ℓ+νℓ decay rate to extract a precise value of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element Vus, and to test unitarity of the CKM matrix. We also discuss lattice calculations of the pion and kaon decay constants, which can be used to extract Vud through an analogous Standard Model analysis of experimental constraints on leptonic pion and kaon decays. The final paper explores the recently proposed exact one flavor algorithm (EOFA). This algorithm has been shown to drastically reduce the memory footprint required to simulate single quark flavors on the lattice relative to the widely used rational hybrid Monte Carlo (RHMC) algorithm, while also offering modest O(20%) speed-ups. We independently derive the exact one flavor action, explore its

  6. The role of computer simulation in nuclear technologies development

    International Nuclear Information System (INIS)

    Tikhonchev, M.Yu.; Shimansky, G.A.; Lebedeva, E.E.; Lichadeev, V. V.; Ryazanov, D.K.; Tellin, A.I.

    2001-01-01

    In this report the role and purposes of computer simulation in the development of nuclear technologies are discussed. The authors consider such applications of computer simulation as nuclear safety research, optimization of technical and economic parameters of operating nuclear plants, planning and support of reactor experiments, research and design of new devices and technologies, and the design and development of simulators for training operating personnel. For these applications, the following aspects of computer simulation are discussed in the report: neutron-physical, thermal and hydrodynamic models; simulation of isotope composition change and damage dose accumulation for materials under irradiation; and simulation of reactor control structures. (authors)

  7. Computational steering of GEM based detector simulations

    Science.gov (United States)

    Sheharyar, Ali; Bouhali, Othmane

    2017-10-01

    Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as those involving high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. These long-running simulations usually run on high-performance computers in batch mode. If the results lead to unexpected behavior, then the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This may result in inefficient resource utilization and an increase in the turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable the exploration of the live data as they are produced by the simulation.

  8. Simulation-based artifact correction (SBAC) for metrological computed tomography

    Science.gov (United States)

    Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc

    2017-06-01

    Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To this end, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts, using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement, i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. The difference between these simulations is thus an estimate of the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
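
    The correction described above reduces, in essence, to a projection-domain subtraction. A minimal sketch follows, assuming hypothetical array names (proj_measured, proj_sim_real, proj_sim_ideal) for the acquired and the two simulated projection data sets; this illustrates the idea only and is not the authors' implementation.

        import numpy as np

        def sbac_correct(proj_measured, proj_sim_real, proj_sim_ideal):
            # The artifact estimate is the difference between a realistic
            # simulation (beam hardening, scatter, off-focal radiation, ...)
            # and an ideal, monochromatic, scatter-free parallel-beam
            # simulation of the same object model.
            artifact_estimate = proj_sim_real - proj_sim_ideal
            # Removing that estimate from the measurement yields corrected
            # projections, ready for reconstruction.
            return proj_measured - artifact_estimate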

  9. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e. a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and road side infrastructure element on each link. Average speed and traffic volume data are collected at user-specified loop detector locations. Further, as a measure of safety, the so-called Time To Collision (TTC) parameter is recorded.

  10. A comparative study of attenuation correction algorithms in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Murase, Kenya; Itoh, Hisao; Mogami, Hiroshi; Ishine, Masashiro; Kawamura, Masashi; Iio, Atsushi; Hamamoto, Ken

    1987-01-01

    A computer-based simulation method was developed to assess the relative effectiveness and availability of various attenuation compensation algorithms in single photon emission computed tomography (SPECT). The effects of the nonuniformity of the attenuation coefficient distribution in the body, of errors in determining the body contour, and of statistical noise on reconstruction accuracy, as well as the computation time of the algorithms, were studied. The algorithms were classified into three groups: pre-correction, post-correction and iterative correction methods. Furthermore, a hybrid method was devised by combining several methods. This study will be useful for understanding the characteristics, limitations and strengths of the algorithms and for searching for a practical correction method for photon attenuation in SPECT. (orig.)

  11. Alternative energy technologies an introduction with computer simulations

    CERN Document Server

    Buxton, Gavin

    2014-01-01

    Contents: Introduction to Alternative Energy Sources; Global Warming; Pollution; Solar Cells; Wind Power; Biofuels; Hydrogen Production and Fuel Cells; Introduction to Computer Modeling; Brief History of Computer Simulations; Motivation and Applications of Computer Models; Using Spreadsheets for Simulations; Typing Equations into Spreadsheets; Functions Available in Spreadsheets; Random Numbers; Plotting Data; Macros and Scripts; Interpolation and Extrapolation; Numerical Integration and Diffe

  12. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  13. Computer simulations of the activity of RND efflux pumps.

    Science.gov (United States)

    Vargiu, Attilio Vittorio; Ramaswamy, Venkata Krishnan; Malloci, Giuliano; Malvacio, Ivana; Atzori, Alessio; Ruggerone, Paolo

    2018-01-31

    The putative mechanism by which bacterial RND-type multidrug efflux pumps recognize and transport their substrates is a complex and fascinating enigma of structural biology. How a single protein can recognize a huge number of unrelated compounds and transport them through one or just a few mechanisms is an amazing feature not yet completely unveiled. The appearance of cooperativity further complicates the understanding of structure-dynamics-activity relationships in these complex machines. Experimental techniques may have limited access to the molecular determinants and to the energetics of the key processes regulating the activity of these pumps. Computer simulations are a complementary approach that can help unveil these features and inspire new experiments. Here we review recent computational studies that addressed the various molecular processes regulating the activity of RND efflux pumps. Copyright © 2018 The Authors. Published by Elsevier Masson SAS. All rights reserved.

  14. Single-polymer dynamics under constraints: scaling theory and computer experiment

    International Nuclear Information System (INIS)

    Milchev, Andrey

    2011-01-01

    The relaxation, diffusion and translocation dynamics of single linear polymer chains in confinement are briefly reviewed, with emphasis on the comparison between theoretical scaling predictions and observations from experiment or, most frequently, from computer simulations. Besides cylindrical, spherical and slit-like constraints, related problems such as the chain dynamics in a random medium and the translocation dynamics through a nanopore are also considered. Another particular kind of confinement is imposed by polymer adsorption on attractive surfaces or selective interfaces; a short overview of single-chain dynamics there is also contained in this survey. While both theory and numerical experiments consider predominantly coarse-grained models of self-avoiding linear chain molecules with typically Rouse dynamics, we also note some recent studies which examine the impact of hydrodynamic interactions on polymer dynamics in confinement. In all of the aforementioned cases we focus mainly on the consequences of imposed geometric restrictions on single-chain dynamics and try to check our degree of understanding by assessing the agreement between theoretical predictions and observations. (topical review)

  15. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    Science.gov (United States)

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
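
    As a rough illustration of the underlying idea, the sketch below estimates azimuth from the averaged acoustic intensity components of matched-filtered sensor channels. All names, the use of np.correlate as the matched filter, and the intensity-based angle estimate are assumptions for illustration; the paper's actual processing chain is more elaborate.

        import numpy as np

        def azimuth_estimate(p, vx, vy, replica):
            # Matched-filter pressure and particle-velocity channels against
            # the known active-sonar transmit waveform (replica).
            mf = lambda x: np.correlate(x, replica, mode="same")
            pf, vxf, vyf = mf(p), mf(vx), mf(vy)
            # Average acoustic intensity components; their ratio gives the
            # azimuth of the arrival.
            ix = np.mean(pf * vxf)
            iy = np.mean(pf * vyf)
            return np.degrees(np.arctan2(iy, ix))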

  16. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    Directory of Open Access Journals (Sweden)

    Anbang Zhao

    2017-02-01

    Full Text Available In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.

  17. The role of computer simulation in nuclear technology development

    International Nuclear Information System (INIS)

    Tikhonchev, M.Yu.; Shimansky, G.A.; Lebedeva, E.E.; Lichadeev, V.V.; Ryazanov, D.K.; Tellin, A.I.

    2000-01-01

    In the report, the role and purpose of computer simulation in nuclear technology development is discussed. The authors consider such applications of computer simulation as: (a) nuclear safety research; (b) optimization of technical and economic parameters of operating nuclear plants; (c) planning and support of reactor experiments; (d) research and design of new devices and technologies; (e) design and development of 'simulators' for operating personnel training. Among the marked applications, the following aspects of computer simulation are discussed in the report: (f) neutron-physical, thermal and hydrodynamics models; (g) simulation of isotope structure change and damage dose accumulation for materials under irradiation; (h) simulation of reactor control structures. (authors)

  18. Development of computational science in JAEA. R and D of simulation

    International Nuclear Information System (INIS)

    Nakajima, Norihiro; Araya, Fumimasa; Hirayama, Toshio

    2006-01-01

    R and D of computational science in JAEA (Japan Atomic Energy Agency) is described. The computing environment, the R and D system in CCSE (Center for Computational Science and e-Systems), joint computational science research in Japan and worldwide, development of computer technologies, some examples of simulation research, the 3-dimensional image vibrational platform system, simulation research on FBR cycle techniques, simulation of large-scale thermal stress for development of a steam generator, simulation research on fusion energy techniques, development of grid computing technology, simulation research on quantum beam techniques, and biological molecule simulation research are explained. The organization of JAEA, development of computational science in JAEA, the network of JAEA, international collaboration in computational science, and the environment of the ITBL (Information-Technology Based Laboratory) project are illustrated. (S.Y.)

  19. Atomically precise graphene nanoribbon heterojunctions from a single molecular precursor

    Science.gov (United States)

    Nguyen, Giang D.; Tsai, Hsin-Zon; Omrani, Arash A.; Marangoni, Tomas; Wu, Meng; Rizzo, Daniel J.; Rodgers, Griffin F.; Cloke, Ryan R.; Durr, Rebecca A.; Sakai, Yuki; Liou, Franklin; Aikawa, Andrew S.; Chelikowsky, James R.; Louie, Steven G.; Fischer, Felix R.; Crommie, Michael F.

    2017-11-01

    The rational bottom-up synthesis of atomically defined graphene nanoribbon (GNR) heterojunctions represents an enabling technology for the design of nanoscale electronic devices. Synthetic strategies used thus far have relied on the random copolymerization of two electronically distinct molecular precursors to yield GNR heterojunctions. Here we report the fabrication and electronic characterization of atomically precise GNR heterojunctions prepared through late-stage functionalization of chevron GNRs obtained from a single precursor. Post-growth excitation of fully cyclized GNRs induces cleavage of sacrificial carbonyl groups, resulting in atomically well-defined heterojunctions within a single GNR. The GNR heterojunction structure was characterized using bond-resolved scanning tunnelling microscopy, which enables chemical bond imaging at T = 4.5 K. Scanning tunnelling spectroscopy reveals that band alignment across the heterojunction interface yields a type II heterojunction, in agreement with first-principles calculations. GNR heterojunction band realignment proceeds over a distance less than 1 nm, leading to extremely large effective fields.

  20. Polymer Composites Corrosive Degradation: A Computational Simulation

    Science.gov (United States)

    Chamis, Christos C.; Minnetyan, Levon

    2007-01-01

    A computational simulation of the corrosive durability of polymer composites is presented. The corrosive environment is assumed to drive polymer composite degradation on a ply-by-ply basis. The degradation is correlated with a measured pH factor and is represented by voids, temperature and moisture, which vary parabolically for voids and linearly for temperature and moisture through the laminate thickness. The simulation is performed by a computational composite mechanics computer code which includes micro, macro, combined stress failure and laminate theories. This accounts for starting the simulation from constitutive material properties and proceeding up to the laminate scale, at which the laminate is exposed to the corrosive environment. Results obtained for one laminate indicate that the ply-by-ply degradation progresses until only the last ply, or the last several plies, remain. Results also demonstrate that the simulation is applicable to other polymer composite systems as well.

  1. Single-Chip Computers With Microelectromechanical Systems-Based Magnetic Memory

    NARCIS (Netherlands)

    Carley, L. Richard; Bain, James A.; Fedder, Gary K.; Greve, David W.; Guillou, David F.; Lu, Michael S.C.; Mukherjee, Tamal; Santhanam, Suresh; Abelmann, Leon; Min, Seungook

    This article describes an approach for implementing a complete computer system (CPU, RAM, I/O, and nonvolatile mass memory) on a single integrated-circuit substrate (a chip)—hence, the name "single-chip computer." The approach presented combines advances in the field of microelectromechanical

  2. Accelerating population balance-Monte Carlo simulation for coagulation dynamics from the Markov jump model, stochastic algorithm and GPU parallel computing

    International Nuclear Information System (INIS)

    Xu, Zuwei; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    This paper proposes a comprehensive framework for accelerating population balance-Monte Carlo (PBMC) simulation of particle coagulation dynamics. By combining a Markov jump model, a weighted majorant kernel and GPU (graphics processing unit) parallel computing, a significant gain in computational efficiency is achieved. The Markov jump model constructs a coagulation-rule matrix of differentially-weighted simulation particles, so as to capture the time evolution of the particle size distribution with low statistical noise over the full size range and, as far as possible, to reduce the number of time loops. Here three coagulation rules are highlighted, and it is found that constructing an appropriate coagulation rule provides a route to a compromise between the accuracy and cost of PBMC methods. Further, in order to avoid double looping over all simulation particles when considering two-particle events (typically, particle coagulation), the weighted majorant kernel is introduced to estimate the maximum coagulation rates used for acceptance-rejection processes by a single loop over all particles; meanwhile, the mean time-step of a coagulation event is estimated by summing the coagulation kernels of rejected and accepted particle pairs. The computational load of these fast differentially-weighted PBMC simulations (based on the Markov jump model) is greatly reduced, becoming proportional to the number of simulation particles in a zero-dimensional system (single cell). Finally, for a spatially inhomogeneous multi-dimensional (multi-cell) simulation, the proposed fast PBMC is performed in each cell, and multiple cells are processed in parallel by the multiple cores of a GPU, which can execute massively threaded data-parallel tasks to obtain a remarkable speedup ratio (compared with CPU computation, the speedup ratio of GPU parallel computing is as high as 200 in a case of 100 cells with 10 000 simulation particles per cell). These accelerating approaches of PBMC are
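
    A minimal sketch of the acceptance-rejection step built on a majorant kernel is shown below for equally-weighted particles. The paper uses differentially-weighted particles and a coagulation-rule matrix; the names, the constant-kernel example, and the simplification to equal weights are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def coagulation_event(v, kernel, k_max):
            # v: particle volumes; kernel(vi, vj): true coagulation kernel;
            # k_max: majorant, an upper bound of the kernel over the current
            # population, obtainable with a single loop over the particles.
            n = len(v)
            while True:
                i, j = rng.integers(0, n, size=2)
                if i == j:
                    continue
                # Accept the candidate pair with probability kernel / majorant.
                if rng.random() < kernel(v[i], v[j]) / k_max:
                    v[i] += v[j]              # merge particle j into particle i
                    return np.delete(v, j)    # remove the consumed particle

        # Example: constant kernel acting on 1000 unit-volume particles.
        v = np.ones(1000)
        v = coagulation_event(v, kernel=lambda a, b: 1.0, k_max=1.0)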

  3. An integrated computational tool for precipitation simulation

    Science.gov (United States)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer aided materials design is of increasing interest because the conventional approach solely relying on experimentation is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.

  4. Advanced Simulation and Computing FY08-09 Implementation Plan Volume 2 Revision 0

    International Nuclear Information System (INIS)

    McCoy, M; Kusnezov, D; Bikkel, T; Hopson, J

    2007-01-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future nonnuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  5. Advanced Simulation and Computing FY10-11 Implementation Plan Volume 2, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    Carnes, B

    2009-06-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  6. Uncertainty analysis of NDA waste measurements using computer simulations

    International Nuclear Information System (INIS)

    Blackwood, L.G.; Harker, Y.D.; Yoon, W.Y.; Meachum, T.R.

    2000-01-01

    Uncertainty assessments for nondestructive radioassay (NDA) systems for nuclear waste are complicated by factors extraneous to the measurement systems themselves. Most notably, characteristics of the waste matrix (e.g., homogeneity) and radioactive source material (e.g., particle size distribution) can have great effects on measured mass values. Under these circumstances, characterizing the waste population is as important as understanding the measurement system in obtaining realistic uncertainty values. When extraneous waste characteristics affect measurement results, the uncertainty results are waste-type specific. The goal then becomes to assess the expected bias and precision for the measurement of a randomly selected item from the waste population of interest. Standard propagation-of-errors methods for uncertainty analysis can be very difficult to implement in the presence of significant extraneous effects on the measurement system. An alternative approach that naturally includes the extraneous effects is as follows: (1) Draw a random sample of items from the population of interest; (2) Measure the items using the NDA system of interest; (3) Establish the true quantity being measured using a gold standard technique; and (4) Estimate bias by deriving a statistical regression model comparing the measurements on the system of interest to the gold standard values; similar regression techniques for modeling the standard deviation of the difference values give the estimated precision. Actual implementation of this method is often impractical. For example, a true gold standard confirmation measurement may not exist. A more tractable implementation is obtained by developing numerical models for both the waste material and the measurement system. A random sample of simulated waste containers generated by the waste population model serves as input to the measurement system model. This approach has been developed and successfully applied to assessing the quantity of
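
    Step (4) of the outlined procedure is a plain regression of system-of-interest measurements on gold-standard values. A toy sketch with synthetic data follows; all numbers and the simple linear bias model are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic stand-in for a random sample of waste items: "true" masses
        # from a population model, NDA results with multiplicative bias and
        # item-to-item noise.
        m_true = rng.uniform(1.0, 50.0, size=40)
        m_nda = 1.08 * m_true + rng.normal(0.0, 0.05 * m_true)

        # Bias model: linear regression of the NDA result on the gold standard.
        slope, intercept = np.polyfit(m_true, m_nda, 1)
        residuals = m_nda - (slope * m_true + intercept)
        print(f"estimated bias: slope={slope:.3f}, intercept={intercept:.3f}")
        print(f"estimated precision (sd of residuals): {residuals.std(ddof=2):.3f}")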

  7. Memristor-Based Analog Computation and Neural Network Classification with a Dot Product Engine.

    Science.gov (United States)

    Hu, Miao; Graves, Catherine E; Li, Can; Li, Yunning; Ge, Ning; Montgomery, Eric; Davila, Noraica; Jiang, Hao; Williams, R Stanley; Yang, J Joshua; Xia, Qiangfei; Strachan, John Paul

    2018-03-01

    Using memristor crossbar arrays to accelerate computations is a promising approach to efficiently implement algorithms in deep neural networks. Early demonstrations, however, are limited to simulations or small-scale problems, primarily due to materials and device challenges that limit the size of the memristor crossbar arrays that can be reliably programmed to stable and analog values, which is the focus of the current work. High-precision analog tuning and control of memristor cells across a 128 × 64 array is demonstrated, and the resulting vector matrix multiplication (VMM) computing precision is evaluated. Single-layer neural network inference is performed in these arrays, and the performance compared to a digital approach is assessed. The memristor computing system used here reaches a VMM accuracy equivalent to 6 bits, and an 89.9% recognition accuracy is achieved for the 10k MNIST handwritten digit test set. Forecasts show that with integrated (on chip) and scaled memristors, a computational efficiency greater than 100 trillion operations per second per Watt is possible. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
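
    The effect of limited analog precision on a crossbar vector-matrix multiply can be caricatured by quantizing the weight matrix before the product, as in the hedged sketch below. Uniform quantization and noise-free devices are simplifying assumptions, not the paper's device model.

        import numpy as np

        rng = np.random.default_rng(2)

        def vmm_quantized(w, x, bits=6):
            # Map weights onto 2**bits - 1 conductance levels, then apply the
            # idealized crossbar operation (Ohm's law plus current summation).
            levels = 2 ** bits - 1
            w_min, w_max = w.min(), w.max()
            g = np.round((w - w_min) / (w_max - w_min) * levels) / levels
            g = g * (w_max - w_min) + w_min
            return g.T @ x

        w = rng.standard_normal((128, 64))   # array size as in the demonstration
        x = rng.standard_normal(128)
        err = np.abs(vmm_quantized(w, x) - w.T @ x)
        print(f"max VMM error at 6-bit precision: {err.max():.4f}")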

  8. Fluid simulation for computer graphics

    CERN Document Server

    Bridson, Robert

    2008-01-01

    Animating fluids like water, smoke, and fire using physics-based simulation is increasingly important in visual effects, in particular in movies, like The Day After Tomorrow, and in computer games. This book provides a practical introduction to fluid simulation for graphics. The focus is on animating fully three-dimensional incompressible flow, from understanding the math and the algorithms to the actual implementation.

  9. Precision digital control systems

    Science.gov (United States)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  10. SU-F-T-368: Improved HPGe Detector Precise Efficiency Calibration with Monte Carlo Simulations and Radioactive Sources

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Y. John [Vanderbilt University, Vanderbilt-Ingram Cancer Center, Nashville, TN 37232 (United States)

    2016-06-15

    Purpose: To obtain an improved, precise gamma efficiency calibration curve of an HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency over the energy range of 0–8 MeV. The HPGe is a GMX coaxial 280 cm{sup 3} N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus {sup 24}Al was produced and separated. This nucleus undergoes positron decays followed by gamma transitions up to 8 MeV from {sup 24}Mg excited states, which are used for the HPGe efficiency calibration. Results: With the {sup 24}Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV gamma ray at 4.9 cm from the radioactive {sup 24}Al source was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, the peak summing effect, beta detector efficiency and the internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurement. Applying the same procedure to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the {sup 24}Al source was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source was obtained at a value of 0.0387(6)%. The MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation-in-flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
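
    At its core, a full-energy-peak efficiency point is a ratio of detected to emitted gammas with correction factors applied. A schematic sketch follows; the function name and the lumped correction factor are assumptions, and the numbers are illustrative rather than the measurement's raw data.

        def peak_efficiency(net_counts, n_decays, branching, corrections=1.0):
            # corrections lumps together the effects discussed above: peak
            # summing, beta-detector efficiency, internal conversion, and
            # annihilation(-in-flight) losses.
            return net_counts / (n_decays * branching * corrections)

        # Illustrative: 1940 net counts from 1e6 decays of a 100% branch
        # yields 0.194%, matching the scale of the quoted efficiency.
        print(peak_efficiency(net_counts=1.94e3, n_decays=1.0e6, branching=1.0))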

  11. DE-BLURRING SINGLE PHOTON EMISSION COMPUTED TOMOGRAPHY IMAGES USING WAVELET DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Neethu M. Sasi

    2016-02-01

    Full Text Available Single photon emission computed tomography imaging is a popular nuclear medicine imaging technique which generates images by detecting radiation emitted by radioactive isotopes injected into the human body. Scattering of this emitted radiation introduces blur in such images. This paper proposes an image processing technique to enhance cardiac single photon emission computed tomography images by reducing the blur in the image. The algorithm works in two main stages. In the first stage a maximum likelihood estimate of the point spread function and the true image is obtained. In the second stage the Lucy-Richardson algorithm is applied to the selected wavelet coefficients of the true image estimate. The significant contribution of this paper is that the processing of images is done in the wavelet domain. Pre-filtering is also done as a sub-stage to avoid unwanted ringing effects. Real cardiac images are used for the quantitative and qualitative evaluations of the algorithm. A blur metric, peak signal-to-noise ratio and the Tenengrad criterion are used as quantitative measures. Comparison against other existing de-blurring algorithms is also done. The simulation results indicate that the proposed method effectively reduces the blur present in the image.
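
    The core Lucy-Richardson update is compact; a standard spatial-domain version is sketched below. The paper applies the update to selected wavelet coefficients of a maximum-likelihood estimate, which this sketch omits; 2-D float arrays are assumed.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(observed, psf, n_iter=20):
            # Start from a flat estimate with the same mean as the data.
            estimate = np.full_like(observed, observed.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                # Guard against division by zero in dark regions.
                ratio = observed / np.maximum(blurred, 1e-12)
                estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
            return estimate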

  12. Computer simulations of the mouse spermatogenic cycle

    Directory of Open Access Journals (Sweden)

    Debjit Ray

    2014-12-01

    Full Text Available The spermatogenic cycle describes the periodic development of germ cells in the testicular tissue. The temporal–spatial dynamics of the cycle highlight the unique, complex, and interdependent interaction between germ and somatic cells, and are the key to continual sperm production. Although understanding the spermatogenic cycle has important clinical relevance for male fertility and contraception, there are a number of experimental obstacles. For example, the lengthy process cannot be visualized through dynamic imaging, and the precise action of germ cells that leads to the emergence of testicular morphology remains uncharacterized. Here, we report an agent-based model that simulates the mouse spermatogenic cycle on a cross-section of the seminiferous tubule over a time scale of hours to years, while considering feedback regulation, mitotic and meiotic division, differentiation, apoptosis, and movement. The computer model is able to elaborate the germ cell dynamics in a time-lapse movie format, allowing us to trace individual cells as they change state and location. More importantly, the model provides mechanistic understanding of the fundamentals of male fertility, namely how testicular morphology and sperm production are achieved. By manipulating cellular behaviors either individually or collectively in silico, the model predicts causal events for the altered arrangement of germ cells upon genetic or environmental perturbations. This in silico platform can serve as an interactive tool to perform long-term simulation and to identify optimal approaches for infertility treatment and contraceptive development.

  13. Advanced Simulation and Computing FY09-FY10 Implementation Plan Volume 2, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Kissel, L

    2009-04-01

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one that

  14. Experimental and computer simulation study of radionuclide yields in the ADT materials irradiated with intermediate energy protons

    Energy Technology Data Exchange (ETDEWEB)

    Titarenko, Yu.E.; Shvedov, O.V.; Batyaev, V.F. [Inst. for Theoretical and Experimental Physics, B. Cheremushkinskaya, Moscow (Russian Federation)] [and others]

    1998-11-01

    The results of measurements and computer simulations of the yields of residual product nuclei in {sup 209}Bi, {sup 208,207,206,nat}Pb, {sup 65,63}Cu, {sup 59}Co thin targets irradiated by 0.13, 1.2 and 1.5 GeV protons are presented. The yields were measured by direct high-precision {gamma}-spectrometry. The process was monitored by the {sup 27}Al(p,x){sup 24}Na reaction. 801 cross sections are presented and used in comparisons between the reaction yields obtained experimentally and simulated by the HETC, GNASH, LAHET, INUCL, CEM95, CASCADE, NUCLEUS, YIELDX, QMD and ALICE codes. (author)

  15. Simulation model of a single-stage lithium bromide-water absorption cooling unit

    Science.gov (United States)

    Miao, D.

    1978-01-01

    A computer model of a LiBr-H2O single-stage absorption machine was developed. The model, utilizing a given set of design data such as water flow rates and the inlet or outlet temperatures of these flows, but without knowledge of the interior characteristics of the machine (heat transfer rates and surface areas), can be used to predict or simulate off-design performance. Results from 130 off-design cases for a given commercial machine agree with the published data to within 2 percent.

  16. I - Detector Simulation for the LHC and beyond: how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings must also be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted, taking the calorimeter simulation as an example.

  17. II - Detector simulation for the LHC and beyond : how to match computing resources and physics requirements

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Detector simulation at the LHC is one of the most computing-intensive activities. In these lectures we will show how physics requirements were met for the LHC experiments and extrapolate to future experiments (the FCC-hh case). At the LHC, detectors are complex, very precise and ambitious: this implies modern modelling tools for geometry and response. Events are busy and characterised by an unprecedented energy scale, with hundreds of particles to be traced and high energy showers to be accurately simulated. Furthermore, high luminosities imply many events in a bunch crossing and many bunch crossings to be considered at the same time. In addition, backgrounds not directly correlated to bunch crossings must also be taken into account. Solutions chosen for ATLAS (a mixture of detailed simulation and fast simulation/parameterisation) will be described and CPU and memory figures will be given. An extrapolation to the FCC-hh case will be attempted, taking the calorimeter simulation as an example.

  18. Architectures for single-chip image computing

    Science.gov (United States)

    Gove, Robert J.

    1992-04-01

    This paper will focus on the architectures of VLSI programmable processing components for image computing applications. TI, the maker of industry-leading RISC, DSP, and graphics components, has developed an architecture for a new-generation of image processors capable of implementing a plurality of image, graphics, video, and audio computing functions. We will show that the use of a single-chip heterogeneous MIMD parallel architecture best suits this class of processors--those which will dominate the desktop multimedia, document imaging, computer graphics, and visualization systems of this decade.

  19. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core (TM) 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization device settings: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one CPU core, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
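
    The load-prediction idea can be reduced to splitting each time step in proportion to the throughput each device achieved on earlier steps. The sketch below is an assumption-laden caricature of such a scheduler; the function name, the proportional rule, and the rates are invented, and the paper's algorithm is more involved.

        def split_work(n_units, cpu_rate, gpu_rate):
            # cpu_rate / gpu_rate: measured throughputs (work units per second)
            # from previous time steps; the split aims for both devices to
            # finish their portions in roughly equal wall-clock time.
            gpu_share = gpu_rate / (cpu_rate + gpu_rate)
            n_gpu = round(n_units * gpu_share)
            return n_units - n_gpu, n_gpu   # (CPU portion, GPU portion)

        # Illustrative rates loosely mirroring the reported 3.9x / 16.8x speedups.
        print(split_work(100_000, cpu_rate=3.9e5, gpu_rate=1.68e6))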

  20. Sophistication of computational science and fundamental physics simulations

    International Nuclear Information System (INIS)

    Ishiguro, Seiji; Ito, Atsushi; Usami, Shunsuke; Ohtani, Hiroaki; Sakagami, Hitoshi; Toida, Mieko; Hasegawa, Hiroki; Horiuchi, Ritoku; Miura, Hideaki

    2016-01-01

    The numerical experimental reactor research project is composed of the following studies: (1) nuclear fusion simulation research with a focus on specific physical phenomena of specific equipment, (2) research on advanced simulation methods to increase predictability or expand the application range of simulation, (3) visualization as a foundation of simulation research, (4) research on advanced computational science such as parallel computing technology, and (5) research aimed at elucidating fundamental physical phenomena not limited to specific devices. Specifically, a wide range of research with medium- to long-term perspectives is being developed: (1) virtual reality visualization, (2) upgrading of computational science such as multilayer simulation methods, (3) kinetic behavior of plasma blobs, (4) extended MHD theory and simulation, (5) basic plasma processes such as particle acceleration due to wave-particle interaction, and (6) research related to laser plasma fusion. This paper reviews the following items: (1) simultaneous visualization in virtual reality space, (2) multilayer simulation of collisionless magnetic reconnection, (3) simulation of the microscopic dynamics of plasma coherent structures, (4) Hall MHD simulation of LHD, (5) numerical analysis for extension of MHD equilibrium and stability theory, (6) extended MHD simulation of the 2D RT instability, (7) simulation of laser plasma, (8) simulation of shock waves and particle acceleration, and (9) a study on simulation of homogeneous isotropic MHD turbulent flow. (A.O.)

  1. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It uses two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  2. A Review of Freely Available Quantum Computer Simulation Software

    OpenAIRE

    Brandhorst-Satzkorn, Johan

    2012-01-01

    A study has been made of a few different freely available Quantum Computer simulators. All the simulators tested are available online on their respective websites. A number of tests have been performed to compare the different simulators against each other. Some untested simulators of various programming languages are included to show the diversity of the quantum computer simulator applications. The conclusion of the review is that LibQuantum is the best of the simulators tested because of ea...

  3. Computer simulation of liquid crystals

    International Nuclear Information System (INIS)

    McBride, C.

    1999-01-01

    Molecular dynamics simulation performed on modern computer workstations provides a powerful tool for the investigation of the static and dynamic characteristics of liquid crystal phases. In this thesis molecular dynamics computer simulations have been performed for two model systems. Simulations of 4,4'-di-n-pentyl-bibicyclo[2.2.2]octane demonstrate the growth of a structurally ordered phase directly from an isotropic fluid. This is the first time that this has been achieved for an atomistic model. The results demonstrate a strong coupling between orientational ordering and molecular shape, but indicate that the coupling between molecular conformational changes and molecular reorientation is relatively weak. Simulations have also been performed for a hybrid Gay-Berne/Lennard-Jones model resulting in thermodynamically stable nematic and smectic phases. Frank elastic constants have been calculated for the nematic phase formed by the hybrid model through analysis of the fluctuations of the nematic director, giving results comparable with those found experimentally. Work presented in this thesis also describes the parameterization of the torsional potential of a fragment of a dimethyl siloxane polymer chain, disiloxane diol (HOMe 2 Si) 2 O, using ab initio quantum mechanical calculations. (author)

  4. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    Science.gov (United States)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between nanoscale lattice structures and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation, which automatically satisfies the higher-order continuity, to implement the numerical simulation and match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. In this research, it is pointed out that a critical radius exists for the evaluation of the fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.
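
    Whatever model assembles the stiffness and mass matrices, the free-vibration step itself is a generalized eigenproblem. A minimal sketch with toy 2-DOF matrices follows; the matrices are invented for illustration and are not a nanotube model.

        import numpy as np
        from scipy.linalg import eigh

        def natural_frequencies(K, M, n_modes=5):
            # Solve K x = w^2 M x; eigh returns eigenvalues in ascending order.
            w2, _ = eigh(K, M)
            return np.sqrt(np.abs(w2[:n_modes]))   # angular frequencies, rad/s

        K = np.array([[4.0, -2.0], [-2.0, 2.0]])   # illustrative stiffness
        M = np.eye(2)                               # illustrative mass
        print(natural_frequencies(K, M, n_modes=2))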

  5. HIGH-FIDELITY SIMULATION-DRIVEN MODEL DEVELOPMENT FOR COARSE-GRAINED COMPUTATIONAL FLUID DYNAMICS

    Energy Technology Data Exchange (ETDEWEB)

    Hanna, Botros N.; Dinh, Nam T.; Bolotnov, Igor A.

    2016-06-01

    Nuclear reactor safety analysis requires identifying various credible accident scenarios and determining their consequences. For full-scale nuclear power plant system behavior, it is impossible to obtain sufficient experimental data for a broad range of risk-significant accident scenarios. In single-phase flow convective problems, Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) can provide high-fidelity results when physical data are unavailable. However, these methods are computationally expensive and cannot be afforded for the simulation of long transient scenarios in nuclear accidents, despite extraordinary advances in high performance scientific computing over the past decades. The major issue is the inability to parallelize the transient computation, which makes the number of time steps required by high-fidelity methods unaffordable for long transients. In this work, we propose to apply a high-fidelity simulation-driven approach to model the sub-grid scale (SGS) effect in Coarse Grained Computational Fluid Dynamics (CG-CFD). This approach aims to develop a statistical surrogate model instead of a deterministic SGS model. We chose to start with a turbulent natural convection case with volumetric heating in a horizontal fluid layer with a rigid, insulated lower boundary and an isothermal (cold) upper boundary. This scenario of unstable stratification is relevant to turbulent natural convection in a molten corium pool during a severe nuclear reactor accident, as well as in containment mixing and passive cooling. The presented approach demonstrates how to create a correction for the CG-CFD solution by modifying the energy balance equation. A global correction for the temperature equation is shown to achieve a significant improvement in the prediction of the steady-state temperature distribution through the fluid layer.

  6. Biomes computed from simulated climatologies

    Energy Technology Data Exchange (ETDEWEB)

    Claussen, M.; Esch, M. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1994-01-01

    The biome model of Prentice et al. is used to predict global patterns of potential natural plant formations, or biomes, from climatologies simulated by ECHAM, a model used for climate simulations at the Max-Planck-Institut fuer Meteorologie. This study was undertaken in order to show the advantage of this biome model in diagnosing the performance of a climate model and in assessing the effects of past and future climate changes predicted by a climate model. Good overall agreement is found between global patterns of biomes computed from observed and simulated data for the present climate. But there are also major discrepancies, indicated by a difference in biomes in Australia, in the Kalahari Desert, and in the Middle West of North America. These discrepancies can be traced back to deficiencies in simulated rainfall as well as summer or winter temperatures. Global patterns of biomes computed from an ice age simulation reveal that North America, Europe, and Siberia should have been covered largely by tundra and taiga, whereas only small differences are seen for the tropical rain forests. A potential northeast shift of biomes is expected from a simulation with enhanced CO{sub 2} concentration according to the IPCC Scenario A. Little change is seen in the tropical rain forest and the Sahara. Since the biome model used is not capable of predicting changes in vegetation patterns due to a rapid climate change, the latter simulation is to be taken as a prediction of changes in conditions favourable for the existence of certain biomes, not as a prediction of a future distribution of biomes. 15 refs., 8 figs., 2 tabs.
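
    A biome model of this kind maps bioclimatic variables to plant formations through rule sets. The toy classifier below conveys only the flavor of such rules; the thresholds and variable choices are invented and are far simpler than the Prentice et al. scheme.

        def classify_biome(gdd, t_coldest, precip_mm):
            # gdd: growing degree days; t_coldest: coldest-month mean (deg C);
            # precip_mm: annual precipitation. Thresholds are illustrative only.
            if gdd < 350:
                return "tundra"
            if t_coldest < -15 and precip_mm > 300:
                return "taiga"
            if precip_mm < 200:
                return "desert"
            if t_coldest > 15 and precip_mm > 1500:
                return "tropical rain forest"
            return "temperate forest / grassland"

        print(classify_biome(gdd=300, t_coldest=-25, precip_mm=400))  # tundra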

  7. Integrating surrogate models into subsurface simulation framework allows computation of complex reactive transport scenarios

    Science.gov (United States)

    De Lucia, Marco; Kempka, Thomas; Jatnieks, Janis; Kühn, Michael

    2017-04-01

    Reactive transport simulations - where geochemical reactions are coupled with hydrodynamic transport of reactants - are extremely time consuming and suffer from significant numerical issues. Given the high uncertainties inherently associated with the geochemical models, which also constitute the major computational bottleneck, such requirements may seem inappropriate and probably constitute the main limitation for their wide application. A promising way to ease and speed up such coupled simulations is to employ statistical surrogates instead of "full-physics" geochemical models [1]. Data-driven surrogates are reduced models trained on a set of pre-calculated "full physics" simulations, capturing their principal features while being extremely fast to compute. Model reduction of course comes at the price of a precision loss; however, this appears justified in the presence of large uncertainties regarding the parametrization of geochemical processes. This contribution illustrates the integration of surrogates into the flexible simulation framework currently being developed by the authors' research group [2]. The high-level language of choice for obtaining and dealing with surrogate models is R, which profits from state-of-the-art methods for statistical analysis of large simulation ensembles. A stand-alone advective mass transport module was furthermore developed in order to add such capability to any multiphase finite volume hydrodynamic simulator within the simulation framework. We present 2D and 3D case studies benchmarking the performance of surrogates and "full physics" chemistry in scenarios pertaining to the assessment of geological subsurface utilization. [1] Jatnieks, J., De Lucia, M., Dransch, D., Sips, M.: "Data-driven surrogate model approach for improving the performance of reactive transport simulations.", Energy Procedia 97, 2016, p. 447-453. [2] Kempka, T., Nakaten, B., De Lucia, M., Nakaten, N., Otto, C., Pohl, M., Chabab [Tillner], E., Kühn, M
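
    The workflow of [1] can be mimicked with any off-the-shelf regressor: train on an ensemble of pre-calculated full-physics runs, then call the surrogate inside the coupled loop. The authors work in R; the hedged toy version below uses Python for consistency with the other sketches in this listing, and its placeholder response function and library choice are assumptions.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(3)

        # Stand-in for pre-calculated geochemistry: inputs (concentrations, T,
        # pH, ...) mapped to one reacted output by a placeholder function.
        X = rng.uniform(0.0, 1.0, size=(5000, 4))
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] ** 2

        surrogate = RandomForestRegressor(n_estimators=100).fit(X[:4000], y[:4000])

        # Inside the coupled simulation, surrogate.predict(cell_states) would
        # replace the call to the full geochemical solver between transport steps.
        rmse = np.sqrt(np.mean((surrogate.predict(X[4000:]) - y[4000:]) ** 2))
        print("holdout RMSE:", rmse)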

  8. Computer security simulation

    International Nuclear Information System (INIS)

    Schelonka, E.P.

    1979-01-01

    Development and application of a series of simulation codes used for computer security analysis and design are described. Boolean relationships for arrays of barriers within functional modules are used to generate composite effectiveness indices. The general case of multiple layers of protection with any specified barrier survival criteria is given. Generalized reduction algorithms provide numerical security indices in selected subcategories and for the system as a whole. 9 figures, 11 tables
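
    Composite effectiveness indices of this sort combine per-barrier survival probabilities with Boolean structure. A small sketch under invented assumptions (independent barriers; an adversary must defeat every layer on a path and picks the weakest path):

        from math import prod

        def layered_effectiveness(p_stop):
            # Series layers: the attack succeeds only if every layer is
            # defeated, so the system stops it with 1 - prod(1 - p_i).
            return 1.0 - prod(1.0 - p for p in p_stop)

        def composite_index(paths):
            # Conservative system-wide index: effectiveness of the weakest path.
            return min(layered_effectiveness(path) for path in paths)

        print(composite_index([[0.9, 0.8], [0.6, 0.7, 0.5]]))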

  9. Simulation of solvent extraction in reprocessing

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Shekhar; Koganti, S B [Reprocessing Group, Indira Gandhi Centre for Atomic Research, Kalpakkam (India)

    1994-06-01

    A SIMulation Program for Solvent EXtraction (SIMPSEX) has been developed for simulation of the PUREX process used in nuclear fuel reprocessing. This computer program is written in double-precision structured FORTRAN77 and at present is used in a DOS environment on a PC386. There is a plan to port it to ND supermini computers in the future. (author). 5 refs., 3 figs.

  10. Numerical simulation of the dynamic recrystallization behaviour in hot precision forging helical gears

    Directory of Open Access Journals (Sweden)

    Feng Wei

    2015-01-01

    During hot precision forging of helical gears, dynamic recrystallization occurs; it affects the microstructure of the formed part and in turn determines its mechanical properties. To investigate the effect of deformation temperature on dynamic recrystallization in hot precision forging of helical gears, a three-dimensional (3D) finite element (FE) model was created by coupling the thermo-mechanical model with a microstructure evolution model developed from hot compressive experimental data for 20CrMnTiH steel. The hot precision forging process was simulated and the effects of deformation temperature on the microstructure evolution of the formed part were investigated. The results show that the dynamically recrystallized volume fraction and the average grain size increased with increasing deformation temperature, and that higher deformation temperatures are beneficial to dynamic recrystallization and grain refinement.
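
    Dynamic recrystallization kinetics of this kind are commonly written in JMAK (Avrami) form with an Arrhenius temperature dependence. The sketch below uses that generic form with invented parameter values, not the fitted 20CrMnTiH model from the paper.

        import numpy as np

        R = 8.314      # gas constant, J/(mol*K)
        Q = 3.0e5      # assumed activation energy, J/mol
        n = 2.0        # assumed Avrami exponent
        k0 = 1.0e12    # assumed rate prefactor

        def drx_fraction(t, T):
            """JMAK-type dynamically recrystallized volume fraction at time t, temperature T."""
            k = k0 * np.exp(-Q / (R * T))
            return 1.0 - np.exp(-k * t**n)

        for T in (1123.0, 1223.0, 1323.0):   # forging temperatures in kelvin
            print(T, drx_fraction(0.1, T))   # fraction grows with temperature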

  11. Single-Sex Computer Classes: An Effective Alternative.

    Science.gov (United States)

    Swain, Sandra L.; Harvey, Douglas M.

    2002-01-01

    Advocates single-sex computer instruction as a temporary alternative educational program to provide middle school and secondary school girls with access to computers, to present girls with opportunities to develop positive attitudes towards technology, and to make available a learning environment conducive to girls gaining technological skills.…

  12. Molecular computing towards a novel computing architecture for complex problem solving

    CERN Document Server

    Chang, Weng-Long

    2014-01-01

    This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main differences between molecular circuits and traditional digital circuits manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation in molecular computing, molecular operations in molecular computing and number representation in molecular computing, and provides many molecular algorithms: to construct the parity generator and the parity checker of error-detection codes in digital communication, to encode integers of different formats and single-precision and double-precision floating-point numbers, to implement addition and subtraction of unsigned integers, and to construct logic operations...
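
    The parity generator and checker mentioned above are easy to state in conventional code, which makes a useful reference point before studying their molecular implementations; a minimal sketch:

        from functools import reduce
        from operator import xor

        def parity_bit(bits):
            """Even-parity generator: the bit that makes the XOR over the word zero."""
            return reduce(xor, bits, 0)

        def parity_ok(bits_with_parity):
            """Parity checker: valid iff the XOR over data plus parity bit is zero."""
            return reduce(xor, bits_with_parity, 0) == 0

        data = [1, 0, 1, 1]
        word = data + [parity_bit(data)]
        assert parity_ok(word)
        word[2] ^= 1                  # inject a single-bit error
        assert not parity_ok(word)    # the checker now flags the word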

  13. Computationally efficient simulation of unsteady aerodynamics using POD on the fly

    Energy Technology Data Exchange (ETDEWEB)

    Moreno-Ramos, Ruben [Gulfstream Aerospace Corporation, Savannah, GA 31408 (United States); Vega, José M; Varas, Fernando, E-mail: ruben.morenoramos@altran.com [E.T.S.I. Aeronáutica y del Espacio, Universidad Politécnica de Madrid, E-28040 Madrid (Spain)

    2016-12-15

    Modern industrial aircraft design requires a large number of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large number of simulations is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a more inexpensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330–65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844–68) on an adaptive reduced order model that combines 'on the fly' a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited number of points and fairly generic mode libraries. When applied to the complex Ginzburg–Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented, considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver on a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots. (paper)
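
    The POD step at the heart of the method is a singular value decomposition of a snapshot matrix, followed by projection onto the leading modes. A minimal sketch with synthetic snapshots (the Galerkin projection of the governing equations is omitted):

        import numpy as np

        # Snapshot matrix: each column is one flow-field snapshot from the full solver.
        rng = np.random.default_rng(1)
        n_dof, n_snap = 400, 25
        snapshots = rng.normal(size=(n_dof, 3)) @ rng.normal(size=(3, n_snap))  # rank-3 toy data

        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
        modes = U[:, :r]                              # retained POD modes

        # A state is then represented by r amplitudes instead of n_dof unknowns.
        state = snapshots[:, 0]
        amps = modes.T @ state
        print("rank:", r, "reconstruction error:", np.linalg.norm(modes @ amps - state))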

  14. Structure and dynamics of amorphous polymers: computer simulations compared to experiment and theory

    International Nuclear Information System (INIS)

    Paul, Wolfgang; Smith, Grant D

    2004-01-01

    This contribution considers recent developments in the computer modelling of amorphous polymeric materials. Progress in our capabilities to build models for the computer simulation of polymers, from the detailed atomistic scale up to coarse-grained mesoscopic models, together with the ever-improving performance of computers, has led to important insights from computer simulations into the structural and dynamic properties of amorphous polymers. Structurally, chain connectivity introduces a range of length scales, from that of the chemical bond to the radius of gyration of the polymer chain, covering 2-4 orders of magnitude. Dynamically, this range of length scales translates into an even larger range of time scales observable in relaxation processes in amorphous polymers, ranging from about 10⁻¹³ to 10⁻³ s, or even to 10³ s where glass dynamics is concerned. There is currently no single simulation technique that is able to describe all these length and time scales efficiently. On large length and time scales, basic topology and entropy become the governing properties, and this fact can be exploited using computer simulations of coarse-grained polymer models to study universal aspects of the structure and dynamics of amorphous polymers. On the largest length and time scales, chain connectivity is the dominating factor, leading to the strong increase in the longest relaxation times described within the reptation theory of polymer melt dynamics. Recently, many of the universal aspects of this behaviour have been further elucidated by computer simulations of coarse-grained polymer models. On short length scales the detailed chemistry and energetics of the polymer are important, and one has to be able to capture them correctly using chemically realistic modelling of specific polymers, even when the aim is to extract generic physical behaviour exhibited by the specific chemistry. Detailed studies of chemically realistic models highlight the central importance of torsional dynamics...

  15. Understanding Islamist political violence through computational social simulation

    Energy Technology Data Exchange (ETDEWEB)

    Watkins, Jennifer H [Los Alamos National Laboratory; Mackerrow, Edward P [Los Alamos National Laboratory; Patelli, Paolo G [Los Alamos National Laboratory; Eberhardt, Ariane [Los Alamos National Laboratory; Stradling, Seth G [Los Alamos National Laboratory

    2008-01-01

    Understanding the process that enables political violence is of great value in reducing the future demand for and support of violent opposition groups. Methods are needed that allow alternative scenarios and counterfactuals to be scientifically researched. Computational social simulation shows promise in developing 'computer experiments' that would be unfeasible or unethical in the real world. Additionally, the process of modeling and simulation reveals and challenges assumptions that may not be noted in theories, exposes areas where data is not available, and provides a rigorous, repeatable, and transparent framework for analyzing the complex dynamics of political violence. This paper demonstrates the computational modeling process using two simulation techniques: system dynamics and agent-based modeling. The benefits and drawbacks of both techniques are discussed. In developing these social simulations, we discovered that the social science concepts and theories needed to accurately simulate the associated psychological and social phenomena were lacking.
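
    A minimal system-dynamics sketch of the kind of stock-and-flow model described here, with entirely hypothetical stocks, rates, and parameter values:

        # Hypothetical two-stock system-dynamics model: a stock of grievance drives
        # recruitment into a stock of active supporters, which decays by attrition.
        dt, steps = 0.1, 500
        grievance, supporters = 1.0, 0.0
        provocation_rate, relief_rate = 0.05, 0.02
        recruit_rate, attrition_rate = 0.1, 0.03

        history = []
        for _ in range(steps):
            d_grievance = provocation_rate - relief_rate * grievance
            d_supporters = recruit_rate * grievance - attrition_rate * supporters
            grievance += dt * d_grievance      # explicit Euler integration
            supporters += dt * d_supporters
            history.append(supporters)
        print("supporters at end of run:", history[-1])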

  16. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  17. Multi-resolution simulation of focused ultrasound propagation through ovine skull from a single-element transducer

    Science.gov (United States)

    Yoon, Kyungho; Lee, Wonhye; Croce, Phillip; Cammalleri, Amanda; Yoo, Seung-Schik

    2018-05-01

    Transcranial focused ultrasound (tFUS) is emerging as a non-invasive brain stimulation modality. Complicated interactions between acoustic pressure waves and osseous tissue introduce many challenges in the accurate targeting of an acoustic focus through the cranium. Image-guidance accompanied by a numerical simulation is desired to predict the intracranial acoustic propagation through the skull; however, such simulations typically demand heavy computation, which warrants an expedited processing method to provide on-site feedback for the user in guiding the acoustic focus to a particular brain region. In this paper, we present a multi-resolution simulation method based on the finite-difference time-domain formulation to model the transcranial propagation of acoustic waves from a single-element transducer (250 kHz). The multi-resolution approach improved computational efficiency by providing the flexibility in adjusting the spatial resolution. The simulation was also accelerated by utilizing parallelized computation through the graphic processing unit. To evaluate the accuracy of the method, we measured the actual acoustic fields through ex vivo sheep skulls with different sonication incident angles. The measured acoustic fields were compared to the simulation results in terms of focal location, dimensions, and pressure levels. The computational efficiency of the presented method was also assessed by comparing simulation speeds at various combinations of resolution grid settings. The multi-resolution grids consisting of 0.5 and 1.0 mm resolutions gave acceptable accuracy (under 3 mm in terms of focal position and dimension, less than 5% difference in peak pressure ratio) with a speed compatible with semi real-time user feedback (within 30 s). The proposed multi-resolution approach may serve as a novel tool for simulation-based guidance for tFUS applications.
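
    The finite-difference time-domain formulation referred to above updates pressure and particle velocity on a staggered grid. A minimal 1D sketch at the paper's 0.5 mm grid spacing and 250 kHz drive frequency, assuming a homogeneous water-like medium (no skull, no multi-resolution logic, no GPU acceleration):

        import numpy as np

        # Minimal 1D staggered-grid FDTD for linear acoustics (pressure p, velocity v).
        c, rho = 1500.0, 1000.0      # sound speed (m/s) and density (kg/m^3)
        dx = 0.5e-3                  # 0.5 mm grid, as in the fine-grid region
        dt = 0.5 * dx / c            # CFL-stable time step
        n, steps = 400, 800
        f0 = 250e3                   # 250 kHz source, as in the paper

        p = np.zeros(n)
        v = np.zeros(n + 1)
        for it in range(steps):
            v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])      # momentum update
            p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])       # pressure update
            p[0] = np.sin(2 * np.pi * f0 * it * dt)            # driven boundary
        print("peak pressure in field:", np.abs(p).max())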

  18. REACTOR: a computer simulation for schools

    International Nuclear Information System (INIS)

    Squires, D.

    1985-01-01

    The paper concerns computer simulation of the operation of a nuclear reactor, for use in schools. The project was commissioned by UKAEA, and carried out by the Computers in the Curriculum Project, Chelsea College. The program, for an advanced gas cooled reactor, is briefly described. (U.K.)

  19. Learning and instruction with computer simulations

    NARCIS (Netherlands)

    de Jong, Anthonius J.M.

    1991-01-01

    The present volume presents the results of an inventory of elements of such a computer learning environment. This inventory was conducted within a DELTA project called SIMULATE, which aims at a learning environment that provides intelligent support to learners and has a simulation as its core.

  20. Precision platform for convex lens-induced confinement microscopy

    Science.gov (United States)

    Berard, Daniel; McFaul, Christopher M. J.; Leith, Jason S.; Arsenault, Adriel K. J.; Michaud, François; Leslie, Sabrina R.

    2013-10-01

    We present the conception, fabrication, and demonstration of a versatile, computer-controlled microscopy device which transforms a standard inverted fluorescence microscope into a precision single-molecule imaging station. The device uses the principle of convex lens-induced confinement [S. R. Leslie, A. P. Fields, and A. E. Cohen, Anal. Chem. 82, 6224 (2010)], which employs a tunable imaging chamber to enhance background rejection and extend diffusion-limited observation periods. Using nanopositioning stages, this device achieves repeatable and dynamic control over the geometry of the sample chamber on scales as small as the size of individual molecules, enabling regulation of their configurations and dynamics. Using microfluidics, this device enables serial insertion as well as sample recovery, facilitating temporally controlled, high-throughput measurements of multiple reagents. We report on the simulation and experimental characterization of this tunable chamber geometry, and its influence upon the diffusion and conformations of DNA molecules over extended observation periods. This new microscopy platform has the potential to capture, probe, and influence the configurations of single molecules, with dramatically improved imaging conditions in comparison to existing technologies. These capabilities are of immediate interest to a wide range of research and industry sectors in biotechnology, biophysics, materials, and chemistry.

  1. Computer simulation on molten ionic salts

    International Nuclear Information System (INIS)

    Kawamura, K.; Okada, I.

    1978-01-01

    The extensive advances in computer technology have since made it possible to apply computer simulation to the evaluation of the macroscopic and microscopic properties of molten salts. The evaluation of the potential energy in molten salts systems is complicated by the presence of long-range energy, i.e. Coulomb energy, in contrast to simple liquids where the potential energy is easily evaluated. It has been shown, however, that no difficulties are encountered when the Ewald method is applied to the evaluation of Coulomb energy. After a number of attempts had been made to approximate the pair potential, the Huggins-Mayer potential based on ionic crystals became the most often employed. Since it is thought that the only appreciable contribution to many-body potential, not included in Huggins-Mayer potential, arises from the internal electrostatic polarization of ions in molten ionic salts, computer simulation with a provision for ion polarization has been tried recently. The computations, which are employed mainly for molten alkali halides, can provide: (1) thermodynamic data such as internal energy, internal pressure and isothermal compressibility; (2) microscopic configurational data such as radial distribution functions; (3) transport data such as the diffusion coefficient and electrical conductivity; and (4) spectroscopic data such as the intensity of inelastic scattering and the stretching frequency of simple molecules. The computed results seem to agree well with the measured results. Computer simulation can also be used to test the effectiveness of a proposed pair potential and the adequacy of postulated models of molten salts, and to obtain experimentally inaccessible data. A further application of MD computation employing the pair potential based on an ionic model to BeF₂, ZnCl₂ and SiO₂ shows the possibility of quantitative interpretation of structures and glass transformation phenomena
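
    The Huggins-Mayer (Born-Mayer-Huggins) pair potential mentioned above combines Coulomb, exponential repulsion, and dispersion terms. A sketch with placeholder parameters, not a fitted set for any specific salt:

        import numpy as np

        E2 = 1.44   # e^2/(4*pi*eps0) in eV*nm

        def u_pair(r, qi, qj, A, rho, C, D):
            """Born-Mayer-Huggins pair energy: Coulomb + repulsion - dispersion.

            r in nm, charges in units of e, result in eV.
            """
            return qi * qj * E2 / r + A * np.exp(-r / rho) - C / r**6 - D / r**8

        r = np.linspace(0.2, 1.0, 5)
        print(u_pair(r, +1, -1, A=3000.0, rho=0.03, C=1e-4, D=1e-5))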

  2. Hybrid simulation of scatter intensity in industrial cone-beam computed tomography

    International Nuclear Information System (INIS)

    Thierry, R.; Miceli, A.; Hofmann, J.; Flisch, A.; Sennhauser, U.

    2009-01-01

    A cone-beam computed tomography (CT) system using a 450 kV X-ray tube has been developed to tackle the three-dimensional imaging of parts from the automotive industry in short acquisition times. Because the probability of detecting scattered photons is high for this energy range and area of detection, a scattering correction becomes mandatory for generating reliable images with enhanced contrast detectability. In this paper, we present a hybrid simulator for the fast and accurate calculation of the scattering intensity distribution. The full acquisition chain, from the generation of a polyenergetic photon beam, through its interaction with the scanned object, to the energy deposit in the detector, is simulated. Object phantoms can be spatially described in the form of voxels, mathematical primitives or CAD models. Uncollided radiation is treated with a ray-tracing method and scattered radiation is split into single and multiple scattering. The single scattering is calculated with a deterministic approach accelerated with a forced detection method. The residual noisy signal is subsequently deconvolved with the iterative Richardson-Lucy method. Finally, the multiple scattering is addressed with a coarse Monte Carlo (MC) simulation. The proposed hybrid method has been validated on aluminium phantoms of varying size and object-to-detector distance, and found to be in good agreement with the MC code Geant4. The acceleration achieved by the hybrid method over standard MC on a single projection is approximately three orders of magnitude.
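
    The Richardson-Lucy step used to deconvolve the noisy single-scatter estimate is a short multiplicative iteration; a 1D sketch with a synthetic signal and Gaussian kernel:

        import numpy as np

        def richardson_lucy(observed, psf, n_iter=50):
            """1D Richardson-Lucy deconvolution of a noisy scatter estimate."""
            psf = psf / psf.sum()
            psf_flipped = psf[::-1]
            estimate = np.full_like(observed, observed.mean())
            for _ in range(n_iter):
                blurred = np.convolve(estimate, psf, mode="same")
                ratio = observed / np.maximum(blurred, 1e-12)
                estimate *= np.convolve(ratio, psf_flipped, mode="same")
            return estimate

        true = np.zeros(64); true[20] = 1.0; true[40] = 0.5
        psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
        observed = np.convolve(true, psf / psf.sum(), mode="same")
        print(np.argmax(richardson_lucy(observed, psf)))  # recovers the peak near 20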

  3. New Pedagogies on Teaching Science with Computer Simulations

    Science.gov (United States)

    Khan, Samia

    2011-01-01

    Teaching science with computer simulations is a complex undertaking. This case study examines how an experienced science teacher taught chemistry using computer simulations and the impact of his teaching on his students. Classroom observations over 3 semesters, teacher interviews, and student surveys were collected. The data was analyzed for (1)…

  4. Computer simulation boosts automation in the stockyard

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-04-01

    Today's desktop computers and advanced software keep pace with handling equipment and have reached new heights of sophistication, with graphic simulation able to show precisely what is happening, and what could happen, in a coal terminal's stockyard. The article describes an innovative coal terminal nearing completion on the Pacific coast at Lazaro Cardenas in Mexico, the Petracalco terminal. Here coal is unloaded, stored and fed to the nearby power plant of Pdte Plutarco Elias Calles. The R&D department of the Italian company Techint, Italimpianti, has developed MHATIS, a sophisticated software system for marine terminal management, which allows analysis of performance with the use of graphical animation. Strategies can be tested before being put into practice and likely power station demand can be predicted. The design and operation of the MHATIS system are explained. Other integrated coal handling plants described in the article are the one developed by the then PWH (renamed Krupp Foerdertechnik) of Germany for the Israel Electric Corporation and the installation by the same company of a further bucketwheel for a redesigned coal stockyard at the Port of Hamburg operated by Hansaport. 1 fig., 4 photos.

  5. In Silico Dynamics: computer simulation in a Virtual Embryo ...

    Science.gov (United States)

    Utilizing cell biological information to predict higher order biological processes is a significant challenge in predictive toxicology. This is especially true for highly dynamical systems such as the embryo, where morphogenesis, growth and differentiation require precisely orchestrated interactions between diverse cell populations. In patterning the embryo, genetic signals set up spatial information that cells then translate into a coordinated biological response. This can be modeled as 'biowiring diagrams' representing genetic signals and responses. Because the hallmark of multicellular organization resides in the ability of cells to interact with one another via well-conserved signaling pathways, multiscale computational (in silico) models that enable these interactions provide a platform to translate cellular-molecular perturbations into higher order predictions. Just as 'the Cell' is the fundamental unit of biology, so too should it be the computational unit ('Agent') for modeling embryogenesis. As such, we constructed multicellular agent-based models (ABM) with 'CompuCell3D' (www.compucell3d.org) to simulate the kinematics of complex cell signaling networks and enable critical tissue events for use in predictive toxicology. Seeding the ABMs with HTS/HCS data from ToxCast demonstrated the potential to predict, quantitatively, the higher order impacts of chemical disruption at the cellular or biochemical level. This is demonstrated...

  6. Investigation on single carbon atom transporting through the single-walled carbon nanotube by MD simulation

    International Nuclear Information System (INIS)

    Ding Yinfeng; Zhang Zhibin; Ke Xuezhi; Zhu Zhiyuan; Zhu Dezhang; Wang Zhenxia; Xu Hongjie

    2005-01-01

    The transport of a single carbon atom through a single-walled carbon nanotube has been studied by molecular-dynamics (MD) simulation. We obtained different trajectories of the carbon atom by changing the input parameters. The simulation results indicate that a single carbon atom with low energy can transport through the carbon nanotube under certain input conditions, following straight-line, 'rosette' or circular trajectories. (authors)

  7. Equivalence of two models in single-phase multicomponent flow simulations

    KAUST Repository

    Wu, Yuanqing

    2016-02-28

    In this work, two models to simulate the single-phase multicomponent flow in reservoirs are introduced: single-phase multicomponent flow model and two-phase compositional flow model. Because the single-phase multicomponent flow is a special case of the two-phase compositional flow, the two-phase compositional flow model can also simulate the case. We compare and analyze the two models when simulating the single-phase multicomponent flow, and then demonstrate the equivalence of the two models mathematically. An experiment is also carried out to verify the equivalence of the two models.

  8. Interoceanic canal excavation scheduling via computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Baldonado, Orlino C [Holmes and Narver, Inc., Los Angeles, CA (United States)

    1970-05-15

    The computer simulation language GPSS/360 was used to simulate the schedule of several nuclear detonation programs for the interoceanic canal project. The effects of using different weather restriction categories due to air blast and fallout were investigated. The effect of increasing the number of emplacement and stemming crews and the effect of varying the reentry period after detonating a row charge or salvo were also studied. Detonation programs were simulated for the proposed Routes 17A and 25E. The study demonstrates the method of using computer simulation so that a schedule and its associated constraints can be assessed for feasibility. Since many simulation runs can be made for a given set of detonation program constraints, one readily obtains an average schedule for a range of conditions. This provides a method for analyzing time-sensitive operations so that time and cost-effective operational schedules can be established. A comparison of the simulated schedules with those that were published shows them to be similar. (author)
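
    A discrete-event scheduling model in the GPSS spirit can be sketched with an event heap; the crew count, weather-window probabilities, and durations below are invented for illustration only.

        import heapq, random

        # Toy event-driven schedule: each detonation needs an emplacement crew and a
        # weather window; all figures are hypothetical placeholders.
        random.seed(42)
        N_CHARGES, N_CREWS, REENTRY_DAYS = 20, 3, 2.0

        clock, done = 0.0, 0
        events = []                               # (time, kind) min-heap
        for _ in range(min(N_CREWS, N_CHARGES)):
            heapq.heappush(events, (random.uniform(1, 4), "ready"))
        started = min(N_CREWS, N_CHARGES)

        while done < N_CHARGES:
            clock, kind = heapq.heappop(events)
            if kind == "ready":                   # emplacement done, wait on weather
                delay = 0.0 if random.random() < 0.6 else random.uniform(1, 5)
                heapq.heappush(events, (clock + delay, "shot"))
            else:                                 # detonation, then reentry period
                done += 1
                if started < N_CHARGES:
                    started += 1
                    heapq.heappush(events,
                                   (clock + REENTRY_DAYS + random.uniform(1, 4), "ready"))
        print(f"{done} detonations complete after {clock:.1f} days")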

  9. Biomes computed from simulated climatologies

    Energy Technology Data Exchange (ETDEWEB)

    Claussen, W.; Esch, M.

    1992-09-01

    The biome model of Prentice et al. is used to predict global patterns of potential natural plant formations, or biomes, from climatologies simulated by ECHAM, a model used for climate simulations at the Max-Planck-Institut fuer Meteorologie. This study is undertaken in order to show the advantage of this biome model in comprehensively diagnosing the performance of a climate model and assessing the effects of past and future climate changes predicted by a climate model. Good overall agreement is found between global patterns of biomes computed from observed and simulated data for the present climate. But there are also major discrepancies, indicated by a difference in biomes in Australia, in the Kalahari Desert, and in the Middle West of North America. These discrepancies can be traced back to failures in simulated rainfall as well as summer or winter temperatures. Global patterns of biomes computed from an ice age simulation reveal that North America, Europe, and Siberia should have been covered largely by tundra and taiga, whereas only small differences are seen for the tropical rain forests. A potential north-east shift of biomes is expected from a simulation with enhanced CO₂ concentration according to the IPCC Scenario A. Little change is seen in the tropical rain forest and the Sahara. Since the biome model used is not capable of predicting changes in vegetation patterns due to a rapid climate change, the latter simulation has to be taken as a prediction of changes in conditions favorable for the existence of certain biomes, not as a prediction of a future distribution of biomes. (orig.).

  10. Computer graphics in heat-transfer simulations

    International Nuclear Information System (INIS)

    Hamlin, G.A. Jr.

    1980-01-01

    Computer graphics can be very useful in the setup of heat transfer simulations and in the display of the results of such simulations. The potential use of recently available low-cost graphics devices in the setup of such simulations has not been fully exploited. Several types of graphics devices and their potential usefulness are discussed, and some configurations of graphics equipment are presented in the low-, medium-, and high-price ranges

  11. Parallel Computing for Brain Simulation.

    Science.gov (United States)

    Pastur-Romay, L A; Porto-Pazos, A B; Cedron, F; Pazos, A

    2017-01-01

    The human brain is the most complex system in the known universe and therefore one of the greatest mysteries. It provides human beings with extraordinary abilities. However, it is not yet understood how and why most of these abilities are produced. For decades, researchers have been trying to make computers reproduce these abilities, focusing both on understanding the nervous system and on processing data in a more efficient way than before. Their aim is to make computers process information similarly to the brain. Important technological developments and vast multidisciplinary projects have allowed the creation of the first simulation with a number of neurons similar to that of a human brain. This paper presents an up-to-date review of the main research projects that are trying to simulate and/or emulate the human brain. They employ different types of computational models using parallel computing: digital models, analog models and hybrid models. This review includes the current applications of these works, as well as future trends. It focuses on works that seek progress in Neuroscience, and on others that seek new discoveries in Computer Science (neuromorphic hardware, machine learning techniques). Their most outstanding characteristics are summarized and the latest advances and future plans are presented. In addition, this review points out the importance of considering not only neurons: computational models of the brain should also include glial cells, given the proven importance of astrocytes in information processing.

  12. Computational Dehydration of Crystalline Hydrates Using Molecular Dynamics Simulations

    DEFF Research Database (Denmark)

    Larsen, Anders Støttrup; Rantanen, Jukka; Johansson, Kristoffer E

    2017-01-01

    Molecular dynamics (MD) simulations have evolved into an increasingly reliable and accessible technique and are today implemented in many areas of the biomedical sciences. We present a generally applicable method to study the dehydration of hydrates based on MD simulations and apply this approach to the dehydration of ampicillin trihydrate. The crystallographic unit cell of the trihydrate is used to construct the simulation cell containing 216 ampicillin and 648 water molecules. This system is dehydrated by removing water molecules during a 2200 ps simulation, and depending on the computational dehydration... The structural changes could be followed in real time, and in addition, an intermediate amorphous phase was identified. The computationally identified dehydrated structure (anhydrate) was slightly different from the experimentally known anhydrate structure, suggesting that the simulated computational structure...

  13. Practical integrated simulation systems for coupled numerical simulations in parallel

    Energy Technology Data Exchange (ETDEWEB)

    Osamu, Hazama; Zhihong, Guo [Japan Atomic Energy Research Inst., Centre for Promotion of Computational Science and Engineering, Tokyo (Japan)

    2003-07-01

    In order for numerical simulations to reflect 'real-world' phenomena and occurrences, the incorporation of multidisciplinary and multi-physics simulations considering various physical models and factors is becoming essential. However, there still exist many obstacles to such numerical simulations. For example, it is still difficult in many instances to develop satisfactory software packages which allow for such coupled simulations, and such simulations will require more computational resources. A precise multi-physics simulation today requires parallel processing, which again makes it a complicated process. Under the international cooperative efforts between CCSE/JAERI and Fraunhofer SCAI, a German institute, a library called MpCCI, or Mesh-based Parallel Code Coupling Interface, has been implemented together with a library called STAMPI to couple two existing codes into an 'integrated numerical simulation system' intended for meta-computing environments. (authors)

  14. Computer simulation of thermal plant operations

    CERN Document Server

    O'Kelly, Peter

    2012-01-01

    This book describes thermal plant simulation, that is, dynamic simulation of plants which produce, exchange and otherwise utilize heat as their working medium. Directed at chemical, mechanical and control engineers involved with operations, control and optimization and operator training, the book gives the mathematical formulation and use of simulation models of the equipment and systems typically found in these industries. The author has adopted a fundamental approach to the subject. The initial chapters provide an overview of simulation concepts and describe a suitable computer environment.

  15. Single cell adhesion assay using computer controlled micropipette.

    Directory of Open Access Journals (Sweden)

    Rita Salánki

    Cell adhesion is a fundamental phenomenon vital for all multicellular organisms. Recognition of and adhesion to specific macromolecules is a crucial task of leukocytes to initiate the immune response. To gain statistically reliable information on cell adhesion, large numbers of cells should be measured. However, direct measurement of the adhesion force of single cells is still challenging, and today's techniques typically have an extremely low throughput (5-10 cells per day). Here, we introduce a computer-controlled micropipette mounted onto a normal inverted microscope for probing single cell interactions with specific macromolecules. We calculated the estimated hydrodynamic lifting force acting on target cells by numerical simulation of the flow at the micropipette tip. The adhesion force of surface-attached cells could be accurately probed by repeating the pick-up process with increasing vacuum applied in the pipette positioned above the cell under investigation. Using the introduced methodology, hundreds of cells adhered to specific macromolecules were measured one by one in a relatively short period of time (∼30 min). We blocked nonspecific cell adhesion with the protein-repellent polymer PLL-g-PEG. We found that human primary monocytes are less adherent to fibrinogen than their in vitro differentiated descendants, macrophages and dendritic cells, the latter producing the highest average adhesion force. The method introduced here was validated against the hydrostatic step-pressure micropipette manipulation technique, and the result was additionally reinforced in standard microfluidic shear stress channels. Nevertheless, the automated micropipette gave higher sensitivity and fewer side effects than the shear stress channel. Using our technique, the probed single cells can be easily picked up and further investigated by other techniques, a definite advantage of the computer-controlled micropipette. Our experiments revealed the existence of a...
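
    The repeated pick-up protocol lends itself to a simple simulation sketch: step the applied vacuum upward and record the level at which each cell detaches. All numbers below are invented placeholders, not measured values from the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        true_adhesion = rng.normal(600.0, 150.0, size=300)   # per-cell threshold, Pa
        vacuum_steps = np.arange(100.0, 1200.0, 100.0)       # applied suction levels, Pa

        detach_pressure = np.full(true_adhesion.shape, np.nan)
        for p in vacuum_steps:                               # repeat pick-up attempts
            just_detached = np.isnan(detach_pressure) & (true_adhesion <= p)
            detach_pressure[just_detached] = p               # record first success

        still_adhered = np.isnan(detach_pressure)
        print("median detachment pressure:", np.nanmedian(detach_pressure),
              "cells still adhered at max vacuum:", still_adhered.sum())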

  16. A Computer-Based Simulation of an Acid-Base Titration

    Science.gov (United States)

    Boblick, John M.

    1971-01-01

    Reviews the advantages of computer simulated environments for experiments, referring in particular to acid-base titrations. Includes pre-lab instructions and a sample computer printout of a student's use of an acid-base simulation. Ten references. (PR)
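
    A computer-simulated strong acid-strong base titration reduces to evaluating the charge balance at each titrant volume; a minimal sketch (0.1 M HCl titrated with 0.1 M NaOH, activity effects ignored):

        import numpy as np

        Kw = 1e-14                # water autoionization constant
        Ca, Va = 0.1, 25.0        # 0.1 M strong acid, 25 mL sample
        Cb = 0.1                  # 0.1 M strong base titrant

        for Vb in (0.0, 12.5, 24.0, 25.0, 26.0, 40.0):   # mL of base added
            # excess strong acid over strong base after dilution, mol/L
            delta = (Ca * Va - Cb * Vb) / (Va + Vb)
            # charge balance [H+] - Kw/[H+] = delta, solved as a quadratic in [H+]
            h = (delta + np.sqrt(delta**2 + 4.0 * Kw)) / 2.0
            print(f"Vb = {Vb:5.1f} mL   pH = {-np.log10(h):5.2f}")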

  17. Advanced Simulation and Computing FY08-09 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Kusnezov, D; Bickel, T; McCoy, M; Hopson, J

    2007-09-13

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC)1 is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear-weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable Stockpile Life Extension Programs (SLEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining the support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from

  18. Single instruction computer architecture and its application in image processing

    Science.gov (United States)

    Laplante, Phillip A.

    1992-03-01

    A computer system whose processing uses only half-adder circuits is described. In addition, it is shown that only a single hard-wired instruction is needed in the control unit to obtain a complete instruction set for this general-purpose computer. Such a system has several advantages. First, it is intrinsically a RISC machine--in fact, the 'ultimate RISC' machine. Second, because only a single type of logic element is employed, the entire computer system can be easily realized on a single, highly integrated chip. Finally, due to the homogeneous nature of the computer's logic elements, the computer has possible implementations as an optical or chemical machine. This in turn suggests possible paradigms for neural computing and artificial intelligence. After showing how we can implement a full adder, min, max and other operations using the half-adder, we use an array of such full adders to implement the dilation operation for two black and white images. Next we implement the erosion operation of two black and white images using a relative complement function and the properties of erosion and dilation. This approach was inspired by papers by van der Poel, in which a single instruction is used to furnish a complete set of general-purpose instructions, and by Böhm and Jacopini, where it is shown that any problem can be solved using a Turing machine with one entry and one exit.
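
    Two of the constructions described - a full adder built from two half-adders, and binary image dilation - are easy to mirror in conventional code as a reference point; a sketch:

        def half_adder(a, b):
            return a ^ b, a & b                  # (sum, carry)

        def full_adder(a, b, cin):
            s1, c1 = half_adder(a, b)
            s2, c2 = half_adder(s1, cin)
            return s2, c1 | c2                   # two half-adders plus an OR

        # Binary image dilation by a 3x3 structuring element via shifted ORs,
        # the morphological operation the paper maps onto adder arrays.
        def dilate(img):
            rows, cols = len(img), len(img[0])
            out = [[0] * cols for _ in range(rows)]
            for r in range(rows):
                for c in range(cols):
                    for dr in (-1, 0, 1):
                        for dc in (-1, 0, 1):
                            rr, cc = r + dr, c + dc
                            if 0 <= rr < rows and 0 <= cc < cols:
                                out[r][c] |= img[rr][cc]
            return out

        print(full_adder(1, 1, 1))               # (1, 1)
        print(dilate([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))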

  1. Quantum simulations with noisy quantum computers

    Science.gov (United States)

    Gambetta, Jay

    Quantum computing is a new computational paradigm that is expected to lie beyond the standard model of computation. This implies a quantum computer can solve problems that can't be solved by a conventional computer with tractable overhead. To fully harness this power we need a universal fault-tolerant quantum computer. However, the overhead in building such a machine is high and a full solution appears to be many years away. Nevertheless, we believe that we can build machines in the near term that cannot be emulated by a conventional computer. It is then interesting to ask what these can be used for. In this talk we will present our advances in simulating complex quantum systems with noisy quantum computers. We will show experimental implementations of this on some small quantum computers.

  2. Salesperson Ethics: An Interactive Computer Simulation

    Science.gov (United States)

    Castleberry, Stephen

    2014-01-01

    A new interactive computer simulation designed to teach sales ethics is described. Simulation learner objectives include gaining a better understanding of legal issues in selling; realizing that ethical dilemmas do arise in selling; realizing the need to be honest when selling; seeing that there are conflicting demands from a salesperson's…

  3. Simulations of Probabilities for Quantum Computing

    Science.gov (United States)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular, a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.
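
    To illustrate the flavour of simulating probabilities with deterministic chaos (though not the paper's non-Lipschitz construction), one can threshold iterates of the fully chaotic logistic map:

        import numpy as np

        # Chaos-driven coin: iterate the logistic map at r = 4 and threshold the
        # state; no random number generator is involved, yet the empirical
        # frequency of "heads" approaches 1/2.
        x = 0.123456789
        flips = []
        for _ in range(10000):
            x = 4.0 * x * (1.0 - x)              # fully chaotic logistic map
            flips.append(1 if x > 0.5 else 0)
        print("empirical P(heads):", np.mean(flips))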

  4. Simulations of surface stress effects in nanoscale single crystals

    Science.gov (United States)

    Zadin, V.; Veske, M.; Vigonski, S.; Jansson, V.; Muszinsky, J.; Parviainen, S.; Aabloo, A.; Djurabekova, F.

    2018-04-01

    The onset of vacuum arcing near a metal surface is often associated with nanoscale asperities, which may dynamically appear due to different processes ongoing in the surface and subsurface layers in the presence of high electric fields. Thermally activated processes, as well as plastic deformation caused by tensile stress due to an applied electric field, are usually not accessible by atomistic simulations because of the long time needed for these processes to occur. On the other hand, finite element methods, able to describe the process of plastic deformation in materials at realistic stresses, do not include surface properties. The latter are particularly important for problems where the surface plays a crucial role in the studied process, as, for instance, in the case of plastic deformation at a nanovoid. In the current study, by means of molecular dynamics (MD) and finite element simulations, we analyse the stress distribution in single-crystal copper containing a nanovoid buried deep under the surface. We have developed a methodology to incorporate surface effects into the solid mechanics framework by utilizing elastic properties of crystals pre-calculated using MD simulations. The method leads to computationally efficient stress calculations and can be easily implemented in commercially available finite element software, making it an attractive analysis tool.

  5. Experimental and Computational Characterization of Biological Liquid Crystals: A Review of Single-Molecule Bioassays

    Directory of Open Access Journals (Sweden)

    Sungsoo Na

    2009-09-01

    Quantitative understanding of the mechanical behavior of biological liquid crystals such as proteins is essential for gaining insight into their biological functions, since some proteins perform notable mechanical functions. Recently, single-molecule experiments have allowed not only the quantitative characterization of the mechanical behavior of proteins, such as protein unfolding mechanics, but also the exploration of the free energy landscape for protein folding. In this work, we review the current state of the art in single-molecule bioassays that enable quantitative studies of protein unfolding mechanics and/or various molecular interactions. Specifically, single-molecule pulling experiments based on atomic force microscopy (AFM) are overviewed. In addition, computational simulations of single-molecule pulling experiments are reviewed. We also review the AFM cantilever-based bioassay that provides insight into various molecular interactions. Our review highlights the AFM-based single-molecule bioassay for quantitative characterization of biological liquid crystals such as proteins.

  6. Computer simulations of quench properties of thin, large superconducting solenoid magnets

    International Nuclear Information System (INIS)

    Kishimoto, Takeshi; Mori, Shigeki; Noguchi, Masaharu

    1983-01-01

    Measured quench data for a 1 m diameter x 1 m thin superconducting solenoid magnet with a single-layer aluminum-stabilized NbTi/Cu superconductor of 269 turns were fitted by computer simulations using a one-dimensional approximation. The parameters obtained were used to study the quench properties of a 3 m diameter x 5 m (1.5 Tesla) thin superconducting solenoid magnet with a stored magnetic energy of 30 × 10⁶ J. Conductor dimensions were optimized such that the solenoid could be built to be substantially safe against a full-field quench. (author)

  7. Computer Simulation of Reading.

    Science.gov (United States)

    Leton, Donald A.

    In recent years, coding and decoding have been claimed to be the processes for converting one language form to another. But there has been little effort to locate these processes in the human learner or to identify the nature of the internal codes. Computer simulation of reading is useful because the similarities in the human reception and…

  8. Evaluation of Computer Simulations for Teaching Apparel Merchandising Concepts.

    Science.gov (United States)

    Jolly, Laura D.; Sisler, Grovalynn

    1988-01-01

    The study developed and evaluated computer simulations for teaching apparel merchandising concepts. Evaluation results indicated that teaching method (computer simulation versus case study) does not significantly affect cognitive learning. Student attitudes varied, however, according to topic (profitable merchandising analysis versus retailing…

  9. A Perfect Match Genomic Landscape Provides a Unified Framework for the Precise Detection of Variation in Natural and Synthetic Haploid Genomes.

    Science.gov (United States)

    Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo

    2018-04-01

    We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. Copyright © 2018 by the Genetics Society of America.
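
    The landscape idea - an exact-match coverage profile along the reference whose dips signal variants - can be sketched with a toy k-mer index; the details of the published pipeline differ, and the sequences below are invented.

        # Sketch of a perfect-match landscape: for every reference position, count
        # query reads whose k-mer at that position matches the reference exactly.
        def pmgl(reference, reads, k=5):
            index = {}
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    index[kmer] = index.get(kmer, 0) + 1
            return [index.get(reference[i:i + k], 0)
                    for i in range(len(reference) - k + 1)]

        ref = "ACGTACGGTACGATCGATCG"
        reads = [ref[i:i + 8] for i in range(0, 13)]   # error-free tiling reads
        reads += ["ACGGTACC"]                          # a read carrying a variant
        print(pmgl(ref, reads))                        # dips mark mismatching regions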

  10. Simulation of the phenomenon of single-phase and two-phase natural circulation

    International Nuclear Information System (INIS)

    Castrillo, Lazara Silveira

    1998-02-01

    The natural convection phenomenon is often used to remove residual heat from surfaces where heat is generated, e.g. during accidents or transients of nuclear power plants. Experimental studies of natural circulation can be performed in small-scale experimental circuits and the results can be extrapolated to larger operational facilities. The numerical analysis of transients can be carried out using large computational codes that simulate the thermohydraulic behavior of such facilities. The computational code RELAP5/MOD2 (Reactor Excursion and Leak Analysis Program) was developed by the U.S. Nuclear Regulatory Commission's Division of Reactor Safety Research for the analysis of transients and postulated accidents in light water reactor (LWR) systems, including small and large ruptures with loss-of-coolant accidents (LOCAs). The results obtained by simulating single-phase and two-phase natural circulation using RELAP5/MOD2 are presented in this work. The study was carried out using the experimental circuit built at the 'Departamento de Engenharia Quimica da Escola Politecnica da Universidade de Sao Paulo'. In the circuit, two experiments were carried out with different conditions of power and mass flow, obtaining a single-phase regime at a power level of 4706 W and a flow of 5×10⁻⁵ m³/s (3 l/min), and a two-phase regime at a power level of 6536 W and a secondary flow of 2.33×10⁻⁵ m³/s (1.4 l/min). The study allowed the capacity of the code to represent such phenomena to be evaluated, and the transients obtained theoretically to be compared with the experimental results. The comparative analysis shows that the code represents the single-phase transient fairly well, but the results for the two-phase transient, starting from the nodalization and calibration used for the single-phase case, did not faithfully reproduce some experimental results. (author)

  11. Methodology of modeling and measuring computer architectures for plasma simulations

    Science.gov (United States)

    Wang, L. P. T.

    1977-01-01

    A brief introduction is given to plasma simulation using computers and the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology - SARA - the control flow and data flow of the particle simulation model REM2-1/2D are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  12. Computer Simulation of a Hardwood Processing Plant

    Science.gov (United States)

    D. Earl Kline; Philip A. Araman

    1990-01-01

    The overall purpose of this paper is to introduce computer simulation as a decision support tool that can be used to provide managers with timely information. A simulation/animation modeling procedure is demonstrated for wood products manufacturing systems. Simulation modeling techniques are used to assist in identifying and solving problems. Animation is used for...

  13. Computational Modeling of Photonic Crystal Microcavity Single-Photon Emitters

    Science.gov (United States)

    Saulnier, Nicole A.

    Conventional cryptography is based on algorithms that are mathematically complex and difficult to solve, such as factoring large numbers. The advent of a quantum computer would render these schemes useless. As scientists work to develop a quantum computer, cryptographers are developing new schemes for unconditionally secure cryptography. Quantum key distribution has emerged as one of the potential replacements of classical cryptography. It relies on the fact that measurement of a quantum bit changes the state of the bit and undetected eavesdropping is impossible. Single polarized photons can be used as the quantum bits, such that a quantum system would in some ways mirror the classical communication scheme. The quantum key distribution system would include components that create, transmit and detect single polarized photons. The focus of this work is on the development of an efficient single-photon source. This source comprises a single quantum dot inside a photonic crystal microcavity. To better understand the physics behind the device, a computational model is developed. The model uses Finite-Difference Time-Domain methods to analyze the electromagnetic field distribution in photonic crystal microcavities. It uses an 8-band k · p perturbation theory to compute the energy band structure of the epitaxially grown quantum dots. We discuss a method that combines the results of these two calculations for determining the spontaneous emission lifetime of a quantum dot in bulk material or in a microcavity. The computational models developed in this thesis are used to identify and characterize microcavities for potential use in a single-photon source. The computational tools developed are also used to investigate novel photonic crystal microcavities that incorporate 1D distributed Bragg reflectors for vertical confinement. It is found that the spontaneous emission enhancement in the quasi-3D cavities can be significantly greater than in traditional suspended slab...
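
    A standard way to combine the two calculations mentioned above is through the Purcell factor, which turns the cavity's quality factor and mode volume into a spontaneous-emission enhancement; a sketch with illustrative placeholder numbers, not values from the thesis:

        import numpy as np

        lam = 950e-9            # emission wavelength (m), typical for InAs dots
        n = 3.4                 # GaAs refractive index
        Q = 5000.0              # cavity quality factor from the field solver
        V = 0.7 * (lam / n)**3  # mode volume, assumed here

        # Purcell factor F_p = (3 / 4 pi^2) (lambda/n)^3 Q / V
        F_p = (3.0 / (4.0 * np.pi**2)) * (lam / n)**3 * Q / V
        tau_bulk = 1.0e-9       # assumed bulk spontaneous-emission lifetime, s
        print("Purcell factor %.0f -> cavity lifetime %.2e s" % (F_p, tau_bulk / F_p))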

  14. Precision Modeling Of Targets Using The VALUE Computer Program

    Science.gov (United States)

    Hoffman, George A.; Patton, Ronald; Akerman, Alexander

    1989-08-01

    The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include:
    • Shadows cast onto the ground
    • Shadows cast onto parts of the target
    • See-through transparencies (e.g., canopies)
    • Apparent images due both to atmospheric scattering and turbulence
    • Surfaces characterized by multiple bi-directional reflectance functions
    VALUE provides realistic target modeling through its precise and comprehensive representation of all target attributes; additionally, VALUE is very user friendly. Specifically, setup of runs is accomplished by screen-prompting menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.

  15. Modeling and simulation of single-event effect in CMOS circuit

    International Nuclear Information System (INIS)

    Yue Suge; Zhang Xiaolin; Zhao Yuanfu; Liu Lin; Wang Hanning

    2015-01-01

    This paper reviews the status of research in the modeling and simulation of single-event effects (SEE) in digital devices and integrated circuits. After a brief historical overview of SEE simulation, the different levels of SEE simulation approaches are detailed, including material-level physical simulation, where the two primary mechanisms by which ionizing radiation releases charge in a semiconductor device (direct ionization and indirect ionization) are introduced; device-level simulation, where the focus is on the main emerging physical phenomena affecting nanometer devices (bipolar transistor effect, charge sharing effect) and the methods envisaged for taking them into account; and circuit-level simulation, where methods for predicting the single-event response, covering the production and propagation of single-event transients (SETs) in sequential and combinatorial logic, are detailed, and soft error rate trends with scaling are particularly addressed. (review)

  16. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    Science.gov (United States)

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
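
    The demands described here can be made concrete with a minimal leaky integrate-and-fire neuron carrying an additive STDP trace at 0.1 ms resolution; the sketch below keeps only the potentiation term for brevity and uses invented parameter values.

        import numpy as np

        # Leaky integrate-and-fire neuron driven by Poisson input, with a
        # presynaptic STDP trace; the depression term is omitted for brevity.
        dt, tau_m, tau_pre = 0.1, 20.0, 20.0      # ms
        w, lr = 0.5, 0.01                         # synaptic weight, learning rate
        v_post, trace_pre = 0.0, 0.0
        rate_hz = 100.0

        rng = np.random.default_rng(3)
        for _ in range(int(1000.0 / dt)):         # one second of biological time
            pre_spike = rng.random() < rate_hz * dt / 1000.0
            trace_pre += -dt / tau_pre * trace_pre + (1.0 if pre_spike else 0.0)
            v_post += -dt / tau_m * v_post + (w if pre_spike else 0.0)
            if v_post > 1.0:                      # postsynaptic spike and reset
                v_post = 0.0
                w += lr * trace_pre               # pre-before-post potentiation
        print("final weight:", w)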

  17. Interferences and events: on epistemic shifts in physics through computer simulations

    CERN Document Server

    Warnke, Martin

    2017-01-01

    Computer simulations are omnipresent media in today's knowledge production. For scientific endeavors such as the detection of gravitational waves and the exploration of subatomic worlds, simulations are essential; however, the epistemic status of computer simulations is rather controversial as they are neither just theory nor just experiment. Therefore, computer simulations have challenged well-established insights and common scientific practices as well as our very understanding of knowledge. This volume contributes to the ongoing discussion on the epistemic position of computer simulations in a variety of physical disciplines, such as quantum optics, quantum mechanics, and computational physics. Originating from an interdisciplinary event, it shows that accounts of contemporary physics can constructively interfere with media theory, philosophy, and the history of science.

  18. Frontend electronics for high-precision single photo-electron timing using FPGA-TDCs

    Energy Technology Data Exchange (ETDEWEB)

    Cardinali, M., E-mail: cardinal@kph.uni-mainz.de [Institut für Kernphysik, Johannes Gutenberg-University Mainz, Mainz (Germany); Helmholtz Institut Mainz, Mainz (Germany); Dzyhgadlo, R.; Gerhardt, A.; Götzen, K.; Hohler, R.; Kalicy, G.; Kumawat, H.; Lehmann, D.; Lewandowski, B.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Ugur, C.; Zühlsdorf, M. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt (Germany); Dodokhov, V.Kh. [Joint Institute for Nuclear Research, Dubna (Russian Federation); Britting, A. [Friedrich Alexander-University of Erlangen-Nuremberg, Erlangen (Germany); and others

    2014-12-01

    The next generation of high-luminosity experiments requires excellent particle identification detectors, which calls for imaging Cherenkov counters with fast electronics to cope with the expected hit rates. A Barrel DIRC will be used in the central region of the Target Spectrometer of the planned PANDA experiment at FAIR. A single photo-electron timing resolution of better than 100 ps is required by the Barrel DIRC to disentangle the complicated patterns created on the image plane. R&D studies have been performed to provide a design based on the TRB3 readout using FPGA-TDCs with a precision better than 20 ps RMS and custom frontend electronics with high-bandwidth pre-amplifiers and fast discriminators. The discriminators also provide time-over-threshold information, thus enabling walk corrections to improve the timing resolution. Two types of frontend electronics cards optimised for reading out 64-channel PHOTONIS Planacon MCP-PMTs were tested: one based on the NINO ASIC and the other, called PADIWA, on FPGA discriminators. Promising results were obtained in a full characterisation using a fast laser setup and in a test experiment at MAMI, Mainz, with a small scale DIRC prototype. - Highlights:
    • Frontend electronics for Cherenkov detectors have been developed.
    • FPGA-TDCs have been used for high precision timing.
    • Time over threshold has been utilised for walk correction.
    • Single photo-electron timing resolution less than 100 ps has been achieved.
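
    The time-over-threshold (ToT) walk correction mentioned in the highlights can be illustrated with a short sketch (Python/NumPy). The linear dependence on 1/ToT and all numbers below are assumptions for illustration, not the PANDA calibration:

        # Hedged sketch: calibrate measured times against 1/ToT on reference
        # data, then subtract the fitted walk. Walk model and noise assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        tot = rng.uniform(2.0, 20.0, 5000)             # ToT in ns
        walk = 1.5 / tot                               # assumed: small pulses fire late
        t_meas = 100.0 + walk + rng.normal(0.0, 0.02, tot.size)  # ns

        coef = np.polyfit(1.0 / tot, t_meas, 1)        # linear walk calibration
        t_corr = t_meas - np.polyval(coef, 1.0 / tot) + t_meas.mean()
        print(f"RMS before: {t_meas.std()*1e3:.0f} ps, after: {t_corr.std()*1e3:.0f} ps")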

  19. Computed radiography simulation using the Monte Carlo code MCNPX

    International Nuclear Information System (INIS)

    Correa, S.C.A.; Souza, E.M.; Silva, A.X.; Lopes, R.T.

    2009-01-01

    Simulating x-ray images has been of great interest in recent years, as it makes it possible to analyze how x-ray images are affected by the relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data. (author)
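
    As a hedged illustration of the two detector ingredients named above (a sensitivity curve plus system noise feeding a 16-bit image), the sketch below maps simulated exposures to quantized pixel values; the log-linear response and noise level are placeholders, not the measured BaFBr behaviour:

        # Assumed log-linear plate response and Gaussian system noise,
        # quantized to the 16-bit range of a computed radiography system.
        import numpy as np

        rng = np.random.default_rng(2)
        exposure = np.logspace(-2, 2, 5)                  # relative exposure per pixel
        signal = 10000.0 * np.log10(exposure) + 30000.0   # assumed sensitivity curve
        signal += rng.normal(0.0, 50.0, signal.shape)     # assumed noise (ADU)
        pixels = np.clip(np.round(signal), 0, 65535).astype(np.uint16)
        print(pixels)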

  20. Computed radiography simulation using the Monte Carlo code MCNPX

    Energy Technology Data Exchange (ETDEWEB)

    Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)

    2010-09-15

    Simulating X-ray images has been of great interest in recent years, as it makes it possible to analyze how X-ray images are affected by the relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.

  1. Proceedings of the meeting on large scale computer simulation research

    International Nuclear Information System (INIS)

    2004-04-01

    The meeting summarizing the FY2003 collaboration activities on Large Scale Computer Simulation Research was held on January 15-16, 2004, at the Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)

  2. Thermal-Hydraulic Simulations of Single Pin and Assembly Sector for IVG-1M Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Garner, P. [Argonne National Lab. (ANL), Argonne, IL (United States); Hanan, N. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-15

    Thermal-hydraulic simulations have been performed using computational fluid dynamics (CFD) for the highly-enriched uranium (HEU) design of the IVG.1M reactor at the Institute of Atomic Energy (IAE) at the National Nuclear Center (NNC) in the Republic of Kazakhstan. Steady-state simulations were performed for both types of fuel assembly (FA), i.e. the FA in rows 1 & 2 and the FA in row 3, as well as for single pins in those FA (600 mm and 800 mm pins). Both single pin calculations and bundle sectors have been simulated for the most conservative operating conditions corresponding to the 10 MW output power, which corresponds to a pin unit cell Reynolds number of only about 7500. Simulations were performed using the commercial code STAR-CCM+ for the actual twisted pin geometry as well as a straight-pin approximation. Various Reynolds-Averaged Navier-Stokes (RANS) turbulence models gave different results, and so some validation runs with a higher-fidelity Large Eddy Simulation (LES) code were performed given the lack of experimental data. These singled out the Realizable Two-Layer k-ε as the most accurate turbulence model for estimating surface temperature. Single-pin results for the twisted case, based on the average flow rate per pin and peak pin power, were conservative for peak clad surface temperature compared to the bundle results. Also the straight-pin calculations were conservative as compared to the twisted pin simulations, as expected, but the single-pin straight case was not always conservative with regard to the straight-pin bundle. This was due to the straight-pin temperature distribution being strongly influenced by the pin orientation, particularly near the outer boundary. The straight-pin case also predicted the peak temperature to be in a different location than the twisted-pin case. This is a limitation of the straight-pin approach. The peak temperature pin was in a different location from the peak power pin in every case simulated, and occurred at an

  3. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  4. A computer code to simulate X-ray imaging techniques

    International Nuclear Information System (INIS)

    Duvauchelle, Philippe; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-01-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.
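
    The two computational ingredients named above, attenuation along a ray and a predicted contrast-to-noise ratio, reduce to a few lines. This is a hedged sketch in which the geometry, attenuation coefficient and Poisson noise model are illustrative assumptions, not values from the paper:

        # Beer-Lambert attenuation of a monochromatic ray and a CNR estimate
        # for a void; the deterministic image has no photon noise, but the
        # per-pixel variance can still be predicted (Poisson: var = counts).
        import numpy as np

        I0 = 1e5                       # photons per ray (assumed)
        mu = 0.8                       # 1/cm for steel at one energy (assumed)
        thickness, defect = 2.0, 0.3   # cm traversed; cm air-filled void

        I_sound = I0 * np.exp(-mu * thickness)
        I_defect = I0 * np.exp(-mu * (thickness - defect))
        cnr = (I_defect - I_sound) / np.sqrt(I_sound)
        print(f"predicted CNR for a {defect} cm void: {cnr:.1f}")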

  5. A computer code to simulate X-ray imaging techniques

    Energy Technology Data Exchange (ETDEWEB)

    Duvauchelle, Philippe E-mail: philippe.duvauchelle@insa-lyon.fr; Freud, Nicolas; Kaftandjian, Valerie; Babot, Daniel

    2000-09-01

    A computer code was developed to simulate the operation of radiographic, radioscopic or tomographic devices. The simulation is based on ray-tracing techniques and on the X-ray attenuation law. The use of computer-aided drawing (CAD) models enables simulations to be carried out with complex three-dimensional (3D) objects and the geometry of every component of the imaging chain, from the source to the detector, can be defined. Geometric unsharpness, for example, can be easily taken into account, even in complex configurations. Automatic translations or rotations of the object can be performed to simulate radioscopic or tomographic image acquisition. Simulations can be carried out with monochromatic or polychromatic beam spectra. This feature enables, for example, the beam hardening phenomenon to be dealt with or dual energy imaging techniques to be studied. The simulation principle is completely deterministic and consequently the computed images present no photon noise. Nevertheless, the variance of the signal associated with each pixel of the detector can be determined, which enables contrast-to-noise ratio (CNR) maps to be computed, in order to predict quantitatively the detectability of defects in the inspected object. The CNR is a relevant indicator for optimizing the experimental parameters. This paper provides several examples of simulated images that illustrate some of the rich possibilities offered by our software. Depending on the simulation type, the computation time order of magnitude can vary from 0.1 s (simple radiographic projection) up to several hours (3D tomography) on a PC, with a 400 MHz microprocessor. Our simulation tool proves to be useful in developing new specific applications, in choosing the most suitable components when designing a new testing chain, and in saving time by reducing the number of experimental tests.

  6. Innovation of the computer system for the WWER-440 simulator

    International Nuclear Information System (INIS)

    Schrumpf, L.

    1988-01-01

    The configuration of the WWER-440 simulator computer system consists of four SMEP computers. The basic data processing unit consists of two interlinked SM 52/11.M1 computers with 1 MB of main memory. This part of the simulator's computer system controls the operation of the entire simulator, processes the programs for simulating the behavior of the technology, the unit information system and other special systems, and guarantees program support and the operation of the instructor's console. An SM 52/11 computer with 256 kB of main memory is connected to each unit; it is used as a communication unit for data transmission over the DASIO 600 interface. Semigraphic color displays are based on the microprocessor modules of the SM 50/40 and SM 53/10 kit supplemented with a modified TESLA COLOR 110 ST TV receiver. (J.B.). 1 fig.

  7. Computer Based Modelling and Simulation

    Indian Academy of Sciences (India)

    Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article. Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.

  8. Quantum computers based on electron spins controlled by ultrafast off-resonant single optical pulses.

    Science.gov (United States)

    Clark, Susan M; Fu, Kai-Mei C; Ladd, Thaddeus D; Yamamoto, Yoshihisa

    2007-07-27

    We describe a fast quantum computer based on optically controlled electron spins in charged quantum dots that are coupled to microcavities. This scheme uses broadband optical pulses to rotate electron spins and provide the clock signal to the system. Nonlocal two-qubit gates are performed by phase shifts induced by electron spins on laser pulses propagating along a shared waveguide. Numerical simulations of this scheme demonstrate high-fidelity single-qubit and two-qubit gates with operation times comparable to the inverse Zeeman frequency.

  9. Computer Simulation (Microcultures): An Effective Model for Multicultural Education.

    Science.gov (United States)

    Nelson, Jorge O.

    This paper presents a rationale for using high-fidelity computer simulation in planning for and implementing effective multicultural education strategies. Using computer simulation, educators can begin to understand and plan for the concept of cultural sensitivity in delivering instruction. The model promises to emphasize teachers' understanding…

  10. Computer simulation in cell radiobiology

    International Nuclear Information System (INIS)

    Yakovlev, A.Y.; Zorin, A.V.

    1988-01-01

    This research monograph demonstrates the possible ways of using stochastic simulation for exploring cell kinetics, emphasizing the effects studied in cell radiobiology. The in vitro kinetics of normal and irradiated cells is the main subject, but some approaches to the simulation of controlled cell systems are considered as well, with the epithelium of the mouse small intestine taken as a case in point. Of particular interest is the evaluation of simulation modelling as a tool for gaining insight into biological processes and hence for drawing new inferences from concrete experimental data concerning regularities in the response of cell populations to irradiation. The book is intended to stimulate interest among computer science specialists in developing new, more efficient means for the simulation of cell systems and to help radiobiologists in interpreting the experimental data.

  11. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
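
    The article's examples are in MATLAB and R; the same embarrassingly parallel pattern in Python might look like the sketch below, where each worker runs independent Monte Carlo replications with its own random stream (the loss model and threshold are assumptions for illustration):

        # Each worker is an independent replication; results are averaged.
        import multiprocessing as mp
        import numpy as np

        def replicate(seed, n=100_000):
            """One worker: estimate a loss exceedance probability."""
            rng = np.random.default_rng(seed)
            losses = rng.lognormal(mean=1.0, sigma=0.8, size=n)  # assumed model
            return float(np.mean(losses > 10.0))

        if __name__ == "__main__":
            with mp.Pool() as pool:
                est = pool.map(replicate, range(8))   # 8 independent streams
            print(f"P(loss > 10) ~ {np.mean(est):.4f} +/- {np.std(est):.4f}")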

  12. Computer simulation of human motion in sports biomechanics.

    Science.gov (United States)

    Vaughan, C L

    1984-01-01

    This chapter has covered some important aspects of the computer simulation of human motion in sports biomechanics. First, the definition and the advantages and limitations of computer simulation were discussed; second, research on various sporting activities was reviewed. These activities included basic movements, aquatic sports, track and field athletics, winter sports, gymnastics, and striking sports. This list was not exhaustive and certain material has, of necessity, been omitted. However, it was felt that a sufficiently broad and interesting range of activities was chosen to illustrate both the advantages and the pitfalls of simulation. It is almost a decade since Miller [53] wrote a review chapter similar to this one. One might be tempted to say that things have changed radically since then--that computer simulation is now a widely accepted and readily applied research tool in sports biomechanics. This is simply not true, however. Biomechanics researchers still tend to emphasize the descriptive type of study, often unfortunately, when a little theoretical explanation would have been more helpful [29]. What will the next decade bring? Of one thing we can be certain: The power of computers, particularly the readily accessible and portable microcomputer, will expand beyond all recognition. The memory and storage capacities will increase dramatically on the hardware side, and on the software side the trend will be toward "user-friendliness." It is likely that a number of software simulation packages designed specifically for studying human motion [31, 96] will be extensively tested and could gain wide acceptance in the biomechanics research community. Nevertheless, a familiarity with Newtonian and Lagrangian mechanics, optimization theory, and computers in general, as well as practical biomechanical insight, will still be a prerequisite for successful simulation models of human motion. Above all, the biomechanics researcher will still have to bear in mind that

  13. Learning by computer simulation does not lead to better test performance than textbook study in the diagnosis and treatment of dysrhythmias.

    Science.gov (United States)

    Kim, Jong Hoon; Kim, Won Oak; Min, Kyeong Tae; Yang, Jong Yoon; Nam, Yong Taek

    2002-08-01

    To compare computer-based learning with traditional learning methods in studying advanced cardiac life support (ACLS). Prospective, randomized study. University hospital. Senior medical students were randomized to computer simulation or textbook study. Each group studied ACLS for 150 minutes. Tests were performed 1 week before, immediately after, and 1 week after the study period. Testing consisted of 20 questions. All questions were formulated in such a way that there was a single best answer. Each student also completed a questionnaire designed to assess computer skills, as well as satisfaction with and benefit from the study materials. Test scores improved after both textbook study and computer simulation study, although the improvement in scores was significantly higher for the textbook group only immediately after the study. There was no significant difference between the groups in computer skill or satisfaction with the study materials. The textbook group reported greater benefit from the study materials than did the computer simulation group. Studying ACLS with a hard-copy textbook may be more effective than computer simulation for acquiring simple information during a brief period. However, the difference in effectiveness is likely transient.

  14. Computer simulation of the topography evolution on ion bombarded surfaces

    CERN Document Server

    Zier, M

    2003-01-01

    The development of roughness (facets, ripples) on ion-bombarded surfaces of single-crystalline and amorphous homogeneous solids plays an important role, for example, in depth-profiling techniques. To verify a faceting mechanism based not only on sputtering by directly impinging ions but also on the contribution of reflected ions and the redeposition of sputtered material, a computer simulation has been carried out. The surface in this model is treated as a two-dimensional line-segment profile. The model describes the topography evolution on ion-bombarded surfaces, including the growth mechanism of a facetted surface, using only the interplay of reflected and primary ions and redeposited atoms.
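
    A much-reduced sketch of such a profile model is given below: a height profile eroded by a slope-dependent sputter yield. The reflected-ion and redeposition contributions that are the point of the cited work are deliberately omitted here, and the yield curve and step sizes are assumptions:

        # 1-D height profile eroded along the local surface normal with an
        # assumed Yamamura-like angular yield; direct ions only.
        import numpy as np

        nx, dx, dt, steps = 400, 1.0, 0.1, 2000
        h = 0.5 * np.random.default_rng(3).standard_normal(nx)  # initial roughness

        def sputter_yield(theta):
            return np.cos(theta) ** -1.0 * np.exp(-(1.0 / np.cos(theta) - 1.0))

        for _ in range(steps):
            slope = np.gradient(h, dx)
            theta = np.arctan(np.abs(slope))       # local incidence angle
            h -= dt * sputter_yield(theta) * np.cos(theta)
        print("rms roughness after erosion:", h.std())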

  15. Simulation of biological ion channels with technology computer-aided design.

    Science.gov (United States)

    Pandey, Santosh; Bortei-Doku, Akwete; White, Marvin H

    2007-01-01

    Computer simulations of realistic ion channel structures have always been challenging and a subject of rigorous study. Simulations based on continuum electrostatics have proven to be computationally cheap and reasonably accurate in predicting a channel's behavior. In this paper we discuss the use of a device simulator, SILVACO, to build a solid-state model for the KcsA channel and study its steady-state response. SILVACO is a well-established program, typically used by electrical engineers to simulate the process flow and electrical characteristics of solid-state devices. By employing this simulation program, we present an alternative computing platform for performing ion channel simulations, besides the known method of writing code in a programming language. With the ease of varying the different parameters in the channel's vestibule and the ability to incorporate surface charges, we show the wide-ranging possibilities of using a device simulator for ion channel simulations. Our simulated results closely agree with the experimental data, validating our model.

  16. A Computational Experiment on Single-Walled Carbon Nanotubes

    Science.gov (United States)

    Simpson, Scott; Lonie, David C.; Chen, Jiechen; Zurek, Eva

    2013-01-01

    A computational experiment that investigates single-walled carbon nanotubes (SWNTs) has been developed and employed in an upper-level undergraduate physical chemistry laboratory course. Computations were carried out to determine the electronic structure, radial breathing modes, and the influence of the nanotube's diameter on the…
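
    The standard chirality relations such an exercise builds on are compact enough to state directly: the diameter follows from the chiral indices (n, m) as d = a·sqrt(n² + nm + m²)/π with a ≈ 0.246 nm, and a tube is (nominally) metallic when (n − m) is divisible by 3. A worked version:

        # SWNT diameter and the simple band-folding metallicity rule.
        import math

        def swnt_diameter_nm(n, m, a=0.246):    # a: graphene lattice constant (nm)
            return a * math.sqrt(n * n + n * m + m * m) / math.pi

        def is_metallic(n, m):
            return (n - m) % 3 == 0

        for n, m in [(10, 10), (17, 0), (12, 8)]:
            kind = "metallic" if is_metallic(n, m) else "semiconducting"
            print(f"({n},{m}): d = {swnt_diameter_nm(n, m):.3f} nm, {kind}")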

  17. Computational algorithms for simulations in atmospheric optics.

    Science.gov (United States)

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
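
    The spectral-phase idea, filtering white noise in Fourier space with the square root of a target power spectrum and transforming back, can be sketched in a few lines. This is a generic illustration under an assumed Kolmogorov-like power law, not the authors' modified method or their CUDA/MKL implementation:

        # Generate one 2-D random screen by spectral filtering of white noise.
        import numpy as np

        n, d = 256, 0.01                          # grid points, spacing (m)
        k = 2 * np.pi * np.fft.fftfreq(n, d)
        kx, ky = np.meshgrid(k, k)
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                            # avoid division by zero at DC
        psd = k2 ** (-11.0 / 6.0)                 # |k|^(-11/3), assumed slope
        psd[0, 0] = 0.0                           # drop the piston term

        rng = np.random.default_rng(4)
        noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        screen = np.real(np.fft.ifft2(noise * np.sqrt(psd)))
        print("screen rms (arbitrary units):", screen.std())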

  18. Diagnosis of dementia with single photon emission computed tomography

    International Nuclear Information System (INIS)

    Jagust, W.J.; Budinger, T.F.; Reed, B.R.

    1987-01-01

    Single photon emission computed tomography is a practical modality for the study of physiologic cerebral activity in vivo. We utilized single photon emission computed tomography and N-isopropyl-p-iodoamphetamine iodine 123 to evaluate regional cerebral blood flow in nine patients with Alzheimer's disease (AD), five healthy elderly control subjects, and two patients with multi-infarct dementia. We found that all subjects with AD demonstrated flow deficits in temporoparietal cortex bilaterally, and that the ratio of activity in bilateral temporoparietal cortex to activity in the whole slice allowed the differentiation of all patients with AD from both the controls and from the patients with multi-infarct dementia. Furthermore, this ratio showed a strong correlation with disease severity in the AD group. Single photon emission computed tomography appears to be useful in the differential diagnosis of dementia and reflects clinical features of the disease

  19. Preservice Teachers' Computer Use in Single Computer Training Courses; Relationships and Predictions

    Science.gov (United States)

    Zogheib, Salah

    2015-01-01

    Single computer courses offered at colleges of education are expected to provide preservice teachers with the skills and expertise needed to adopt computer technology in their future classrooms. However, preservice teachers still find it difficult to adopt such technology. This research paper investigated relationships among preservice teachers'…

  20. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Tamaki, Hitoshi [Department of Safety Research Technical Support, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kanai, Shigeru [Fuji Research Institute Corporation, Tokyo (Japan)

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be grace time between an initiating event and the resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. First, the basic methods of the component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced; then basic performance characteristics, such as precision, convergence, and parallelization of the calculation, are shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red-oil explosion in a model of a German reprocessing plant is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show another aspect of its performance, and a new input-data format adapted to the component Monte Carlo simulation is proposed. The present paper describes the calculational method, performance, applicability to a real-scale plant, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method for obtaining a strict solution of the accident occurrence frequency is presented for comparison with the Monte Carlo method, which avoids such complex and laborious calculation. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)
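
    The grace-time logic at the heart of such a component Monte Carlo run can be caricatured in a few lines. This is a deliberately stripped-down sketch with assumed rates and probabilities, not the TITAN algorithm itself:

        # Count an accident only when an initiating event occurs and the
        # remedial action fails, so the sequence completes after the grace
        # time; all rates below are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(5)
        n_trials, horizon = 200_000, 8760.0    # trials; hours simulated per trial
        lam_init = 1e-3                        # initiating events per hour
        grace, p_fail_action = 4.0, 0.1        # hours; operator failure probability

        accidents = 0
        for _ in range(n_trials):
            t = rng.exponential(1.0 / lam_init)
            if t + grace < horizon and rng.random() < p_fail_action:
                accidents += 1
        print(f"accident frequency ~ {accidents / (n_trials * horizon):.3e} per hour")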

  1. TITAN: a computer program for accident occurrence frequency analyses by component Monte Carlo simulation

    International Nuclear Information System (INIS)

    Nomura, Yasushi; Tamaki, Hitoshi; Kanai, Shigeru

    2000-04-01

    In a plant system consisting of complex equipment and components, such as a reprocessing facility, there might be grace time between an initiating event and the resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such a complex reliability model, including the grace time, without difficulty and to obtain an accident occurrence frequency. First, the basic methods of the component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced; then basic performance characteristics, such as precision, convergence, and parallelization of the calculation, are shown through calculation of a prototype accident sequence model. As an example illustrating applicability to a real-scale plant model, a red-oil explosion in a model of a German reprocessing plant is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to show another aspect of its performance, and a new input-data format adapted to the component Monte Carlo simulation is proposed. The present paper describes the calculational method, performance, applicability to a real-scale plant, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method for obtaining a strict solution of the accident occurrence frequency is presented for comparison with the Monte Carlo method, which avoids such complex and laborious calculation. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate use of the TITAN computer program. (author)

  2. SiMon: Simulation Monitor for Computational Astrophysics

    Science.gov (United States)

    Xuran Qian, Penny; Cai, Maxwell Xu; Portegies Zwart, Simon; Zhu, Ming

    2017-09-01

    Scientific discovery via numerical simulations is important in modern astrophysics. This relatively new branch of astrophysics has become possible due to the development of reliable numerical algorithms and the high performance of modern computing technologies. These enable the analysis of large collections of observational data and the acquisition of new data via simulations at unprecedented accuracy and resolution. Ideally, simulations run until they reach some pre-determined termination condition, but often other factors cause extensive numerical approaches to break down at an earlier stage, with processes interrupted by unexpected events in the software or the hardware. The scientist then handles the interrupt manually, which is time-consuming and prone to errors. We present the Simulation Monitor (SiMon) to automate the farming of large and extensive simulation processes. Our method is light-weight, fully automates the entire workflow management, operates concurrently across multiple platforms and can be installed in user space. Inspired by the process of crop farming, we perceive each simulation as a crop in the field, and running a simulation becomes analogous to growing crops. With the development of SiMon we relax the technical burden of simulation management. The initial package was developed for extensive parameter searches in numerical simulations, but it turns out to work equally well for automating the computational processing and reduction of observational data.

  3. Numerical simulation of random stresses on an annular turbulent flow

    International Nuclear Information System (INIS)

    Marti-Moreno, Marta

    2000-01-01

    The flow along a circular cylinder may induce structural vibrations. For the predictive analysis of such vibrations, the turbulent forcing spectrum needs to be characterized. The aim of this work is to study the turbulent fluid forces acting on a single tube in axial flow. More precisely, we have performed numerical simulations of an annular flow. These simulations were carried out on a cylindrical staggered mesh by a finite difference method. We consider turbulent flow with Reynolds numbers up to 10⁶. The Large Eddy Simulation method has been used. A survey of existing experiments showed that the hydraulic diameter acts as an important parameter. We first showed the accuracy of the numerical code by reproducing the experiments of Mulcahy. The agreement between pressure spectra from computations and from experiments is good. Then, we applied this code to simulate new numerical experiments varying the hydraulic diameter and the flow velocity. (author)

  4. Filtration theory using computer simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Corey, I. [Lawrence Livermore National Lab., CA (United States)

    1997-08-01

    We have used commercially available fluid dynamics codes based on Navier-Stokes theory and the Langevin particle equation of motion to compute the particle capture efficiency and pressure drop through selected two- and three-dimensional fiber arrays. The approach we used was to first compute the air velocity vector field throughout a defined region containing the fiber matrix. The particle capture in the fiber matrix is then computed by superimposing the Langevin particle equation of motion over the flow velocity field. Using the Langevin equation combines the particle Brownian motion, inertia and interception mechanisms in a single equation. In contrast, most previous investigations treat the different capture mechanisms separately. We have computed the particle capture efficiency and the pressure drop through one 2-D and two 3-D fiber matrix elements. 5 refs., 11 figs.
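
    The single-equation treatment described above, drag toward the local fluid velocity (inertia), a Brownian noise term, and interception via a capture radius, can be sketched as an explicit Langevin step. The uniform flow field, particle constants and fiber geometry below are placeholders, not the paper's configuration:

        # One particle advanced by drag + Brownian kicks until it is
        # intercepted by a fiber or the step budget runs out.
        import numpy as np

        rng = np.random.default_rng(6)
        dt, tau_p, D = 1e-5, 5e-5, 1e-9        # step (s); response time (s); m^2/s
        fiber, r_cap = np.array([5e-4, 0.0]), 1.2e-5   # fiber center (m); radius (m)

        x, v = np.array([0.0, 1e-5]), np.zeros(2)
        captured = False
        for _ in range(20000):
            u = np.array([0.05, 0.0])                  # local fluid velocity (m/s)
            v += dt / tau_p * (u - v)                  # drag (inertia)
            x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)  # Brownian
            if np.linalg.norm(x - fiber) < r_cap:      # interception
                captured = True
                break
        print("captured:", captured)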

  5. Real-time GPS seismology using a single receiver: method comparison, error analysis and precision validation

    Science.gov (United States)

    Li, Xingxing

    2014-05-01

    Earthquake monitoring and early warning systems for hazard assessment and mitigation have traditionally been based on seismic instruments. However, for large seismic events, it is difficult for traditional seismic instruments to produce accurate and reliable displacements because of the saturation of broadband seismometers and the problematic integration of strong-motion data. Compared with traditional seismic instruments, GPS can measure arbitrarily large dynamic displacements without saturation, making it particularly valuable in the case of large earthquakes and tsunamis. The GPS relative positioning approach is usually adopted to estimate seismic displacements, since centimeter-level accuracy can be achieved in real time by processing double-differenced carrier-phase observables. However, the relative positioning method requires a local reference station, which might itself be displaced during a large seismic event, resulting in misleading GPS analysis results. Meanwhile, the relative/network approach is time-consuming and is particularly difficult for the simultaneous, real-time analysis of GPS data from hundreds or thousands of ground stations. In recent years, several single-receiver approaches for real-time GPS seismology, which can overcome the reference-station problem of the relative positioning approach, have been successfully developed and applied. One available method is real-time precise point positioning (PPP), which relies on precise satellite orbit and clock products. However, real-time PPP needs a long (re)convergence period, of about thirty minutes, to resolve integer phase ambiguities and achieve centimeter-level accuracy. In comparison with PPP, Colosimo et al. (2011) proposed a variometric approach that determines the change of position between two adjacent epochs; displacements are then obtained by a single integration of the delta positions. This approach does not suffer from a convergence process, but the single integration from delta positions to
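
    The variometric step described above amounts to a single numerical integration: epoch-to-epoch position changes are accumulated into displacement, with receiver noise accumulating as drift. A tiny sketch on synthetic data (the motion and noise level are assumptions, not real GPS observations):

        # Recover displacement by summing noisy per-epoch delta positions.
        import numpy as np

        t = np.arange(0.0, 60.0, 1.0)                    # 60 s of 1 Hz epochs
        true_disp = 0.05 * np.sin(2 * np.pi * t / 20.0)  # metres, assumed motion
        rng = np.random.default_rng(7)
        delta = np.diff(true_disp) + rng.normal(0.0, 1e-3, t.size - 1)

        disp = np.concatenate([[0.0], np.cumsum(delta)]) # the single integration
        print(f"displacement at t=60 s: {disp[-1]:.4f} m "
              f"(drift {disp[-1] - true_disp[-1]:+.4f} m)")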

  6. Computer Simulation of Diffraction Patterns.

    Science.gov (United States)

    Dodd, N. A.

    1983-01-01

    Describes an Apple computer program (listing available from the author) which simulates Fraunhofer and Fresnel diffraction using vector addition techniques (vector chaining) and allows the user to experiment with different shaped multiple apertures. Graphics output includes vector resultants, phase differences, diffraction patterns, and the Cornu spiral…
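
    The vector-chaining idea is simply phasor addition: each aperture element contributes a unit phasor whose phase depends on the viewing angle, and the squared magnitude of the chained resultant gives the intensity. A hedged single-slit sketch (slit geometry assumed, and in Python rather than on the original Apple hardware):

        # Fraunhofer pattern of a single slit by summing element phasors.
        import numpy as np

        wavelength, a, n_src = 500e-9, 50e-6, 200   # m; slit width (m); elements
        y = np.linspace(-a / 2, a / 2, n_src)
        angles = np.linspace(-0.05, 0.05, 501)      # radians

        intensity = np.array([
            abs(np.sum(np.exp(1j * 2 * np.pi / wavelength * y * np.sin(th)))) ** 2
            for th in angles
        ])
        intensity /= intensity.max()
        print("first minimum expected near sin(theta) =", wavelength / a)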

  7. 3-D electromagnetic plasma particle simulations on the Intel Delta parallel computer

    International Nuclear Information System (INIS)

    Wang, J.; Liewer, P.C.

    1994-01-01

    A three-dimensional electromagnetic PIC code has been developed on the 512-node Intel Touchstone Delta MIMD parallel computer. This code is based on the General Concurrent PIC algorithm, which uses a domain decomposition to divide the computation among the processors. The 3D simulation domain can be partitioned into 1-, 2-, or 3-dimensional sub-domains. Particles must be exchanged between processors as they move among the subdomains. The Intel Delta allows one to use this code for very-large-scale simulations (i.e. over 10⁸ particles and 10⁶ grid cells). The parallel efficiency of this code is measured, and the overall code performance on the Delta is compared with that on Cray supercomputers. It is shown that the code runs with a high parallel efficiency of ≥ 95% for large problems. The particle push time achieved is 115 nsecs/particle/time step for 162 million particles on 512 nodes. Compared with the performance on a single-processor Cray C90, this represents a factor of 58 speedup. The code uses a finite-difference leap-frog method for the field solve, which is significantly more efficient than fast Fourier transforms on parallel computers. The performance of this code on the 128-node Cray T3D will also be discussed.
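
    The field solve aside, the particle push in such a code is a leap-frog update with velocities staggered half a step from positions. A toy one-dimensional electrostatic version (the field profile and constants are assumptions; the gather/deposit steps and the parallel particle exchange are omitted):

        # Leap-frog push: half-step-staggered kick then drift, periodic box.
        import numpy as np

        q_m, dt = -1.0, 0.01                 # charge-to-mass ratio; time step
        x = np.linspace(0.0, 1.0, 8, endpoint=False)   # particle positions
        v = np.zeros(8)                       # velocities at t - dt/2

        def E(x):                             # assumed electric field profile
            return np.sin(2 * np.pi * x)

        for _ in range(100):
            v += q_m * E(x) * dt              # kick: v(t-dt/2) -> v(t+dt/2)
            x = (x + v * dt) % 1.0            # drift with periodic boundaries
        print("mean kinetic energy:", 0.5 * np.mean(v**2))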

  8. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z [Reading Hospital, West Reading, PA (United States); Gao, M [ProCure Treatment Centers, Warrenville, IL (United States)

    2014-06-01

    Purpose: Monte Carlo simulation plays an important role for proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement and 40 instances were used to form a Linux cluster. To minimize cost, master node was created with on-demand instance and worker nodes were created with spot-instance. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuing of PBS MC studies, especially for newly established proton centers or individual researchers.

  9. SU-E-T-314: The Application of Cloud Computing in Pencil Beam Scanning Proton Therapy Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Wang, Z; Gao, M

    2014-01-01

    Purpose: Monte Carlo simulation plays an important role for proton Pencil Beam Scanning (PBS) technique. However, MC simulation demands high computing power and is limited to few large proton centers that can afford a computer cluster. We study the feasibility of utilizing cloud computing in the MC simulation of PBS beams. Methods: A GATE/GEANT4 based MC simulation software was installed on a commercial cloud computing virtual machine (Linux 64-bits, Amazon EC2). Single spot Integral Depth Dose (IDD) curves and in-air transverse profiles were used to tune the source parameters to simulate an IBA machine. With the use of StarCluster software developed at MIT, a Linux cluster with 2–100 nodes can be conveniently launched in the cloud. A proton PBS plan was then exported to the cloud where the MC simulation was run. Results: The simulated PBS plan has a field size of 10×10 cm², 20 cm range, 10 cm modulation, and contains over 10,000 beam spots. EC2 instance type m1.medium was selected considering the CPU/memory requirement and 40 instances were used to form a Linux cluster. To minimize cost, master node was created with on-demand instance and worker nodes were created with spot-instance. The hourly cost for the 40-node cluster was $0.63 and the projected cost for a 100-node cluster was $1.41. Ten million events were simulated to plot PDD and profile, with each job containing 500k events. The simulation completed within 1 hour and an overall statistical uncertainty of < 2% was achieved. Good agreement between MC simulation and measurement was observed. Conclusion: Cloud computing is a cost-effective and easy to maintain platform to run proton PBS MC simulation. When proton MC packages such as GATE and TOPAS are combined with cloud computing, it will greatly facilitate the pursuing of PBS MC studies, especially for newly established proton centers or individual researchers.

  10. Characterisation of surface roughness for ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    Li Huifen; Cheung, C F; Lee, W B; To, S; Jiang, X Q

    2005-01-01

    Ultra-precision freeform surfaces are widely used in many advanced optics applications, which demand surface roughness down to the nanometer range. Although a lot of research work has been reported on surface generation, reconstruction and surface characterization, such as MOTIF and fractal analysis, most of it has focused on axially symmetric surfaces such as aspheric surfaces. Relatively little research work has been found on the characterization of surface roughness in ultra-precision freeform surfaces. In this paper, a novel Robust Gaussian Filtering (RGF) method is proposed for the characterisation of surface roughness of ultra-precision freeform surfaces with a known mathematical model or a cloud of discrete points. A series of computer simulations and measurement experiments were conducted to verify the capability of the proposed method. The experimental results were found to agree well with the theoretical results.
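
    The Gaussian-filtering step that the proposed RGF makes robust can be illustrated in its plain (non-robust) form: a Gaussian weighting function separates the waviness (mean surface) from the roughness. The profile, cutoff and kernel below are an illustrative sketch using the standard Gaussian filter constant; the robust reweighting of the actual RGF is omitted:

        # Plain Gaussian profile filter: waviness = convolution, roughness =
        # residual. Kernel uses alpha = sqrt(ln 2 / pi); profile and cutoff
        # are assumed test values.
        import numpy as np

        n, dx, cutoff = 1000, 1e-6, 80e-6      # points; spacing (m); cutoff (m)
        rng = np.random.default_rng(8)
        x = np.arange(n) * dx
        z = 5e-9 * np.sin(2 * np.pi * x / 200e-6) + 1e-9 * rng.standard_normal(n)

        alpha = np.sqrt(np.log(2.0) / np.pi)
        half = 4 * int(cutoff / dx)
        k = np.arange(-half, half + 1) * dx
        w = np.exp(-np.pi * (k / (alpha * cutoff)) ** 2)
        w /= w.sum()

        waviness = np.convolve(z, w, mode="same")
        roughness = z - waviness
        print("Ra =", np.mean(np.abs(roughness)))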

  11. Advanced Simulation and Computing FY10-FY11 Implementation Plan Volume 2, Rev. 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Peery, J; McCoy, M; Hopson, J

    2009-09-08

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering (D&E) programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model

  12. Advanced Simulation and Computing FY09-FY10 Implementation Plan, Volume 2, Revision 0.5

    Energy Technology Data Exchange (ETDEWEB)

    Meisner, R; Hopson, J; Peery, J; McCoy, M

    2008-10-07

    The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the surety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with current and future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapons design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balanced resource, including technical staff, hardware, simulation software, and computer science solutions. In its first decade, the ASC strategy focused on demonstrating simulation capabilities of unprecedented scale in three spatial dimensions. In its second decade, ASC is focused on increasing its predictive capabilities in a three-dimensional simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (focused on sufficient resolution, dimensionality and scientific details); to quantify critical margins and uncertainties (QMU); and to resolve increasingly difficult analyses needed for the SSP. Moreover, ASC has restructured its business model from one

  13. Learning by Computer Simulation Does Not Lead to Better Test Performance on Advanced Cardiac Life Support Than Textbook Study.

    Science.gov (United States)

    Kim, Jong Hoon; Kim, Won Oak; Min, Kyeong Tae; Yang, Jong Yoon; Nam, Yong Taek

    2002-01-01

    For an effective acquisition and the practical application of rapidly increasing amounts of information, computer-based learning has already been introduced in medical education. However, there have been few studies that compare this innovative method to traditional learning methods in studying advanced cardiac life support (ACLS). Senior medical students were randomized to computer simulation and a textbook study. Each group studied ACLS for 150 minutes. Tests were done one week before, immediately after, and one week after the study period. Testing consisted of 20 questions. All questions were formulated in such a way that there was a single best answer. Each student also completed a questionnaire designed to assess computer skills as well as satisfaction with and benefit from the study materials. Test scores improved after both textbook study and computer simulation study in both groups but the improvement in scores was significantly higher for the textbook group only immediately after the study. There was no significant difference between groups in their computer skill and satisfaction with the study materials. The textbook group reported greater benefit from study materials than did the computer simulation group. Studying ACLS with a hard copy textbook may be more effective than computer simulation for the acquisition of simple information during a brief period. However, the difference in effectiveness is likely transient.

  14. Effects of Boron and Graphite Uncertainty in Fuel for TREAT Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Vaughn, Kyle; Mausolff, Zander; Gonzalez, Esteban; DeHart, Mark; Goluoglu, Sedat

    2017-03-01

    Advanced modeling techniques and current computational capacity make full-core TREAT simulations possible, the goal of such simulations being to understand the pre-test core and minimize the number of required calibrations. However, in order to simulate TREAT with a high degree of precision, the reactor materials and geometry must also be modeled with a high degree of precision. This paper examines how uncertainty in the reported boron and graphite values affects simulations of TREAT.

  15. Computer-determined assay time based on preset precision

    International Nuclear Information System (INIS)

    Foster, L.A.; Hagan, R.; Martin, E.R.; Wachter, J.R.; Bonner, C.A.; Malcom, J.E.

    1994-01-01

    Most current assay systems for special nuclear materials (SNM) operate on the principle of a fixed assay time, which provides acceptable measurement precision without sacrificing the required throughput of the instrument. Waste items to be assayed for SNM content can contain a wide range of nuclear material. Counting all items for the same preset assay time results in a wide range of measurement precision and wastes time at the upper end of the calibration range. A short time sample taken at the beginning of the assay could optimize the analysis time on the basis of the required measurement precision. To illustrate the technique of automatically determining the assay time, measurements were made with a segmented gamma scanner at the Plutonium Facility of Los Alamos National Laboratory, with the assay time for each segment determined by the counting statistics in that segment. Segments with very little SNM were quickly determined to be below the lower limit of the measurement range and the measurement was stopped. Segments with significant SNM were optimally assayed to the preset precision. With this method the total assay time for each item is determined by the desired preset precision. This report describes the precision-based algorithm and presents the results of measurements made to test its validity.
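
    For Poisson counting the rule is simple: the relative precision after counting N events is 1/sqrt(N), so a pre-count that estimates the rate R fixes the time needed for a target precision p as t = 1/(p²·R). A worked sketch with illustrative numbers (not the LANL system's values):

        # Assay time from a short pre-count and a preset relative precision.
        import math

        def required_time(rate_cps, rel_precision):
            """Seconds until 1/sqrt(rate*t) <= rel_precision."""
            return 1.0 / (rel_precision ** 2 * rate_cps)

        pre_counts, pre_time = 180, 10.0     # short sample: 180 counts in 10 s
        rate = pre_counts / pre_time         # ~18 counts per second
        t = required_time(rate, 0.01)        # target: 1 % relative precision
        print(f"rate {rate:.1f} cps -> assay for {t:.0f} s ({t/60:.1f} min)")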

  16. [Animal experimentation, computer simulation and surgical research].

    Science.gov (United States)

    Carpentier, Alain

    2009-11-01

    We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.

  17. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  18. 3D Surgical Simulation

    OpenAIRE

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2010-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive ...

  19. Numerical simulation of mechatronic sensors and actuators

    CERN Document Server

    Kaltenbacher, Manfred

    2007-01-01

    This book focuses on the physical modeling of mechatronic sensors and actuators and their precise numerical simulation using the finite element method (FEM). It discusses physical modeling as well as numerical computation, and gives a comprehensive introduction to finite elements, including their computer implementation.

  20. Detection of simulated pulmonary nodules by single-exposure dual-energy computed radiography of the chest: effect of a computer-aided diagnosis system (Part 2)

    International Nuclear Information System (INIS)

    Kido, Shoji; Kuriyama, Keiko; Kuroda, Chikazumi; Nakamura, Hironobu; Ito, Wataru; Shimura, Kazuo; Kato, Hisatoyo

    2002-01-01

    Objective: To evaluate the performance of the computer-aided diagnosis (CAD) scheme on the detection of pulmonary nodules (PNs) in single-exposure dual-energy subtraction computed radiography (CR) images of the chest, and to evaluate the effect of this CAD scheme on radiologists' detectabilities. Methods and material: We compared the detectability by the CAD scheme with the detectability by 12 observers by using conventional CR (C-CR) and bone-subtracted CR (BS-CR) images of 25 chest phantoms with a low-contrast nylon nodule. Results: Both in the CAD scheme and for the observers, the detectability of BS-CR images was superior to that of C-CR images (P<0.005). The detection performance of the CAD scheme was equal to that of the observers. The nodules detected by the CAD did not necessarily coincide with those by the observers. Thus, if observers can use the results of the CAD system as a 'second opinion', their detectabilities increase. Conclusion: The CAD system for detection of PNs in the single-exposure dual-energy subtraction method is promising for improving radiologists' detectabilities of PNs.

  1. A Computational Framework for Efficient Low Temperature Plasma Simulations

    Science.gov (United States)

    Verma, Abhishek Kumar; Venkattraman, Ayyaswamy

    2016-10-01

    Over the past years, scientific computing has emerged as an essential tool for the investigation and prediction of low temperature plasma (LTP) applications, which include electronics, nanomaterial synthesis, metamaterials, etc. To further explore LTP behavior with greater fidelity, we present a computational toolbox developed to perform LTP simulations. This framework will allow us to enhance our understanding of multiscale plasma phenomena using high performance computing tools, mainly based on the OpenFOAM FVM distribution. Although aimed at microplasma simulations, the modular framework is able to perform multiscale, multiphysics simulations of physical systems involving LTPs. Salient features include the capability to perform parallel, 3D simulations of LTP applications on unstructured meshes. Performance of the solver is tested with numerical benchmarks assessing accuracy and efficiency for problems in microdischarge devices. Numerical simulation of a microplasma reactor at atmospheric pressure with hemispherical dielectric-coated electrodes will be discussed, providing an overview of the applicability and future scope of this framework.

  2. Use of computer graphics simulation for teaching of flexible sigmoidoscopy.

    Science.gov (United States)

    Baillie, J; Jowell, P; Evangelou, H; Bickel, W; Cotton, P

    1991-05-01

    The concept of simulation training in endoscopy is now well-established. The systems currently under development employ either computer graphics simulation or interactive video technology; each has its strengths and weaknesses. A flexible sigmoidoscopy training device has been designed which uses graphic routines--such as object oriented programming and double buffering--in entirely new ways. These programming techniques compensate for the limitations of currently available desk-top microcomputers. By boosting existing computer 'horsepower' with next generation coprocessors and sophisticated graphics tools such as intensity interpolation (Gouraud shading), the realism of computer simulation of flexible sigmoidoscopy is being greatly enhanced. The computer program has teaching and scoring capabilities, making it a truly interactive system. Use has been made of this ability to record, grade and store each trainee encounter in computer memory as part of a multi-center, prospective trial of simulation training being conducted currently in the USA. A new input device, a dummy endoscope, has been designed that allows application of variable resistance to the insertion tube. This greatly enhances tactile feedback, such as resistance during looping. If carefully designed trials show that computer simulation is an attractive and effective training tool, it is expected that this technology will evolve rapidly and be made widely available to trainee endoscopists.

  3. Effect of computer game playing on baseline laparoscopic simulator skills.

    Science.gov (United States)

    Halvorsen, Fredrik H; Cvancarova, Milada; Fosse, Erik; Mjåland, Odd

    2013-08-01

    Studies examining the possible association between computer game playing and laparoscopic performance in general have yielded conflicting results, and a relationship between computer game playing and baseline performance on laparoscopic simulators has not been established. The aim of this study was to examine the possible association between previous and present computer game playing and baseline performance on a virtual reality laparoscopic simulator in a sample of potential future medical students. The participating students completed a questionnaire covering the weekly amount and type of computer game playing activity during the previous year and 3 years ago. They then performed 2 repetitions of 2 tasks ("gallbladder dissection" and "traverse tube") on a virtual reality laparoscopic simulator. Performance on the simulator was then analyzed for associations with their computer game experience. Local high school, Norway. Forty-eight students from 2 high school classes volunteered to participate in the study. No association between prior and present computer game playing and baseline performance was found. The results were similar both for prior and present action game playing and for prior and present computer game playing in general. Our results indicate that prior and present computer game playing may not affect baseline performance in a virtual reality simulator.

  4. Noise simulation in cone beam CT imaging with parallel computing

    International Nuclear Information System (INIS)

    Tu, S.-J.; Shaw, Chris C; Chen, Lingyun

    2006-01-01

    We developed a computer noise simulation model for cone beam computed tomography imaging using a general purpose PC cluster. This model uses a mono-energetic x-ray approximation and allows us to investigate three primary performance components, specifically quantum noise, detector blurring and additive system noise. A parallel random number generator based on the Weyl sequence was implemented in the noise simulation, and a visualization technique was developed to validate the quality of the parallel random number generator. In our computer simulation model, three-dimensional (3D) phantoms were mathematically modelled and used to create 450 analytical projections, which were then sampled into digital image data. Quantum noise was simulated and added to the analytical projection image data, which were then filtered to incorporate flat panel detector blurring. Additive system noise was generated and added to form the final projection images. The Feldkamp algorithm was implemented and used to reconstruct the 3D images of the phantoms. A cluster of 24 dual-Xeon PCs was used to compute the projections and reconstructed images in parallel, with each CPU processing 10 projection views for a total of 450 views. Based on this computer simulation system, simulated cone beam CT images were generated for various phantoms and technique settings. Noise power spectra for the flat panel x-ray detector and reconstructed images were then computed to characterize the noise properties. As an example among the potential applications of our noise simulation model, we showed that images of low contrast objects can be produced and used for image quality evaluation.
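
    As a rough illustration of the parallel random-number idea, the sketch below draws from a Weyl sequence x_n = frac(n*alpha) and hands each worker its own leapfrogged subsequence; the irrational multiplier and the four-worker split are assumptions for the example, not the authors' implementation (which, among other things, would need integer arithmetic to avoid floating-point loss of uniformity at large n).

        import math

        def weyl_stream(alpha, start, stride):
            """Generate x_n = frac(n * alpha) for n = start, start+stride, ...
            Distinct (start, stride) pairs give each CPU a non-overlapping
            subsequence of one global Weyl sequence (leapfrog partitioning)."""
            n = start
            while True:
                yield (n * alpha) % 1.0
                n += stride

        alpha = math.sqrt(2) - 1.0  # any irrational multiplier works
        workers = [weyl_stream(alpha, start=i + 1, stride=4) for i in range(4)]
        first_draws = [next(w) for w in workers]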

  5. Single-Frame Cinema. Three Dimensional Computer-Generated Imaging.

    Science.gov (United States)

    Cheetham, Edward Joseph, II

    This master's thesis provides a description of the proposed art form called single-frame cinema, which is a category of computer imagery that takes the temporal polarities of photography and cinema and unites them into a single visual vignette of time. Following introductory comments, individual chapters discuss (1) the essential physical…

  6. The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method

    Directory of Open Access Journals (Sweden)

    Dipakkumar Gohil

    2012-06-01

    The objective of this research work is to study the variation of various parameters such as stress, strain, temperature and force during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm which can be used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps, and the output values have been computed at each deformation step. The results of the simulation have been represented graphically, and suitable corrective measures are recommended where the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve productivity and reduce the energy consumption of the overall process for components manufactured by closed die forging, contributing towards the efforts to reduce global warming.

  7. Prototyping and Simulating Parallel, Distributed Computations with VISA

    National Research Council Canada - National Science Library

    Demeure, Isabelle M; Nutt, Gary J

    1989-01-01

    ...] to support the design, prototyping, and simulation of parallel, distributed computations. In particular, VISA is meant to guide the choice of partitioning and communication strategies for such computations, based on their performance...

  8. Slab cooling system design using computer simulation

    NARCIS (Netherlands)

    Lain, M.; Zmrhal, V.; Drkal, F.; Hensen, J.L.M.

    2007-01-01

    For a new technical library building in Prague, computer simulations were carried out to support the design of the slab cooling system and to optimize the capacity of the chillers. The paper presents the concept of the new technical library's HVAC system, the model of the building, and results of the energy simulations for

  9. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling in the function bodies of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in a soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
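
    The "fill in the function bodies" style of interface described above can be sketched as follows; the routine names and signatures here are hypothetical stand-ins for illustration (Biocellion itself is a C++ framework with its own interface), intended only to show how a modeler supplies per-agent rules while the framework handles parallel execution.

        class ModelRoutines:
            """Pre-defined hooks a modeler fills in; names are hypothetical."""
            def init_agents(self):
                """Return the initial list of cells (agents)."""
                raise NotImplementedError
            def update_agent(self, cell, neighbors, env):
                """Advance one cell by a single time step, given its
                neighbors and the local extracellular environment."""
                raise NotImplementedError
            def update_environment(self, env):
                """Advance the extracellular grid by a single time step."""
                raise NotImplementedError

        class CellSortingModel(ModelRoutines):
            def init_agents(self):
                return [{"type": t % 2, "pos": (t, 0)} for t in range(10)]
            def update_agent(self, cell, neighbors, env):
                # toy adhesion rule: drift toward a like-typed neighbor
                like = [n for n in neighbors if n["type"] == cell["type"]]
                if like:
                    cell["pos"] = like[0]["pos"]
                return cell
            def update_environment(self, env):
                return env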

  10. How Many Times Should One Run a Computational Simulation?

    DEFF Research Database (Denmark)

    Seri, Raffaello; Secchi, Davide

    2017-01-01

    This chapter is an attempt to answer the question “how many runs of a computational simulation should one do,” and it gives an answer by means of statistical analysis. After defining the nature of the problem and which types of simulation are mostly affected by it, the article introduces the relevant statistical analysis.
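
    One classical version of the chapter's question can be written down directly: if a pilot set of runs has sample standard deviation s and the mean output is to be estimated within a half-width E at a given confidence level, roughly n >= (z*s/E)^2 runs are needed. The sketch below applies this textbook rule with made-up pilot numbers; the chapter's own statistical treatment is more elaborate.

        from math import ceil
        from statistics import stdev

        def runs_needed(pilot_outputs, half_width, z=1.96):
            """Textbook sample-size rule n >= (z * s / E)**2, seeded with a
            pilot sample; z = 1.96 corresponds to 95% confidence."""
            s = stdev(pilot_outputs)
            return max(len(pilot_outputs), ceil((z * s / half_width) ** 2))

        pilot = [12.1, 11.7, 12.9, 12.3, 11.5, 12.8, 12.0, 12.4]  # made-up
        print(runs_needed(pilot, half_width=0.1))  # runs for +/-0.1 precision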

  11. Computer simulation of local atomic displacements in alloys. Application to Guinier-Preston zones in Al-Cu

    International Nuclear Information System (INIS)

    Kyobu, J.; Murata, Y.; Morinaga, M.

    1994-01-01

    A new computer program has been developed for the simulation of local atomic displacements in alloys with face-centered-cubic and body-centered-cubic lattices. The combined use of this program with the Gehlen-Cohen program for the simulation of chemical short-range order completely describes atomic fluctuations in alloys. The method has been applied to the structural simulation of Guinier-Preston (GP) zones in an Al-Cu alloy, using the experimental data of Matsubara and Cohen. Characteristic displacements of atoms have been observed around the GP zones and new structural models including local displacements have been proposed for a single-layer zone and several multilayer zones. (orig.)

  12. Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment

    Science.gov (United States)

    Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.

    2013-12-01

    Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust) and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and our daily life. However, dust storm simulation is a data- and computing-intensive process. Normally, a simulation for a single dust storm event may take hours or even days to run. This seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in a parallel fashion, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with other geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communications among computing nodes. Therefore, the task allocation method is the key factor which may impact the feasibility of the parallelization. The allocation algorithm needs to carefully weigh the computing cost and communication cost for each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method. Specifically, 1) In order to get optimized solutions, a
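
    The record is cut off above, but the allocation problem it poses can be illustrated with a simple baseline: greedily assign the costliest subdomains first to the least-loaded node. This sketch balances compute cost only; the algorithms in the presentation additionally weigh the communication between geographically adjacent subdomains, which this toy version ignores.

        import heapq

        def allocate(subdomain_costs, n_nodes):
            """Longest-processing-time greedy allocation: repeatedly give the
            most expensive unassigned subdomain to the least-loaded node."""
            heap = [(0.0, node, []) for node in range(n_nodes)]
            heapq.heapify(heap)
            for name, cost in sorted(subdomain_costs.items(),
                                     key=lambda kv: -kv[1]):
                load, node, assigned = heapq.heappop(heap)
                assigned.append(name)
                heapq.heappush(heap, (load + cost, node, assigned))
            return {node: (load, assigned) for load, node, assigned in heap}

        # e.g. six subdomains with estimated compute costs, over two nodes:
        print(allocate({"d1": 9, "d2": 7, "d3": 5, "d4": 4, "d5": 3, "d6": 2}, 2))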

  13. Computer simulation of gear tooth manufacturing processes

    Science.gov (United States)

    Mavriplis, Dimitri; Huston, Ronald L.

    1990-01-01

    The use of computer graphics to simulate gear tooth manufacturing procedures is discussed. An analytical basis for the simulation is established for spur gears. The simulation itself, however, is developed not only for spur gears, but for straight bevel gears as well. The applications of the developed procedure extend from the development of finite element models of heretofore intractable geometrical forms, to exploring the fabrication of nonstandard tooth forms.

  14. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the extra qubits added to improve reliability go to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.

  15. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    Science.gov (United States)

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, making this a comprehensively data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward covering the programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4× speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy improves performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large scale computing to achieve higher acceleration.
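
    The MapReduce formulation can be pictured as follows: mappers emit each scatterer's echo keyed by the raw-data cell it falls in, and the reducer performs the irregular accumulation by key. Everything below (the geometry in range_bin, the echo model, the data) is a deliberately trivial placeholder; the point is only the keyed-accumulation shape of the computation, not SAR physics or the Hadoop deployment itself.

        from collections import defaultdict

        def range_bin(pulse, scatterer):
            # placeholder geometry: bin by distance along track
            return int(abs(pulse - scatterer[0]))

        def echo(pulse, scatterer):
            # placeholder echo model: constant complex amplitude
            return complex(scatterer[1], 0.0)

        def map_phase(pulses, scatterers):
            for p in pulses:
                for s in scatterers:
                    yield (p, range_bin(p, s)), echo(p, s)

        def reduce_phase(keyed_values):
            raw = defaultdict(complex)
            for key, value in keyed_values:  # irregular accumulation is
                raw[key] += value            # handled naturally by the key
            return raw

        scatterers = [(0.0, 1.0), (2.5, 0.5), (7.0, 2.0)]  # (position, amp)
        raw_data = reduce_phase(map_phase(range(10), scatterers))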

  16. Methodologies for the measurement of bone density and their precision and accuracy

    International Nuclear Information System (INIS)

    Goodwin, P.N.

    1987-01-01

    Radiographic methods of determining bone density have been available for many years, but recently most of the efforts in this field have focused on the development of instruments which would accurately and automatically measure bone density by absorption, or by the use of x-ray computed tomography (CT). Single energy absorptiometers using I-125 have been available for some years, and have been used primarily for measurements on the radius, although recently equipment for measuring the os calcis has become available. Accuracy of single energy measurements is about 3% to 5%; precision, which has been poor because of the difficulty of exact repositioning, has recently been improved by automatic methods so that it now approaches 1% or better. Dual energy sources offer the advantages of greater accuracy and the ability to measure the spine and other large bones. A number of dual energy scanners are now on the market, mostly using gadolinium-153 as a source. Dual energy scanning is capable of an accuracy of a few percent, but the precision when scanning patients can vary widely, due to the difficulty of comparing exactly the same areas; 2 to 4% would appear to be typical. Quantitative computed tomography (QCT) can be used to directly measure the trabecular bone within the vertebral body. The accuracy of single-energy QCT is affected by the amount of marrow fat present, which can lead to underestimations of 10% or more. An increase in marrow fat would cause an apparent decrease in bone mineral. However, the precision can be quite good, 1% or 2% on phantoms, and nearly as good on patients when four vertebrae are averaged. Dual energy scanning can correct for the presence of fat, but is less precise, and not available on all CT units. 52 references

  17. Precision ESR Measurements of Transverse Anisotropy in the Single-molecule Magnet Ni4

    Science.gov (United States)

    Friedman, Jonathan; Collett, Charles; Allao Cassaro, Rafael

    We present a method to precisely determine the transverse anisotropy in a single-molecule magnet (SMM) through electron-spin resonance measurements of a tunnel splitting that arises from the anisotropy via first-order perturbation theory. We demonstrate the technique using the SMM Ni4 diluted via co-crystallization in a diamagnetic isostructural analogue. At 5% dilution, we find markedly narrower resonance peaks than are observed in undiluted samples. Ni4 has a zero-field tunnel splitting of 4 GHz, and we measure that transition at several nearby frequencies using custom loop-gap resonators, allowing a precise determination of the tunnel splitting. Because the transition under investigation arises due to a first-order perturbation from the transverse anisotropy, and lies at zero field, we can relate the splitting to the transverse anisotropy independent of any other Hamiltonian parameters. This method can be applied to other SMMs with zero-field tunnel splittings arising from first-order transverse anisotropy perturbations. NSF Grant No. DMR-1310135.

  18. Simulation and analysis of single-ribosome translation

    International Nuclear Information System (INIS)

    Tinoco, Ignacio Jr; Wen, Jin-Der

    2009-01-01

    In the cell, proteins are synthesized by ribosomes in a multi-step process called translation. The ribosome translocates along the messenger RNA to read the codons that encode the amino acid sequence of a protein. Elongation factors, including EF-G and EF-Tu, are used to catalyze the process. Recently, we have shown that translation can be followed at the single-molecule level using optical tweezers; this technique allows us to study the kinetics of translation by measuring the lifetime the ribosome spends at each codon. Here, we analyze the data from single-molecule experiments and fit the data with simple kinetic models. We also simulate the translation kinetics based on a multi-step mechanism from ensemble kinetic measurements. The mean lifetimes from the simulation were consistent with our experimental single-molecule measurements. We found that the calculated lifetime distributions were fit in general by equations with up to five rate-determining steps. Two rate-determining steps were only obtained at low concentrations of elongation factors. These analyses can be used to design new single-molecule experiments to better understand the kinetics and mechanism of translation.
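
    A standard way to express the "n rate-determining steps" idea behind such fits: if a codon's translocation passes through n sequential exponential steps of equal rate k, the dwell-time density is the Erlang (gamma) distribution, whose mean n/k matches the measured mean lifetime. The sketch assumes equal rates purely for brevity; the authors' multi-step mechanism uses rates taken from ensemble kinetics.

        from math import exp, factorial

        def erlang_pdf(t, n, k):
            """Dwell-time density for n sequential exponential steps, rate k:
            f(t) = k**n * t**(n-1) * exp(-k*t) / (n-1)!   (mean = n / k)"""
            return (k ** n) * (t ** (n - 1)) * exp(-k * t) / factorial(n - 1)

        # Two rate-determining steps at k = 2 /s: the density peaks away from
        # t = 0, unlike the single-step (n = 1) pure exponential.
        curve = [(t / 10, erlang_pdf(t / 10, n=2, k=2.0)) for t in range(1, 40)]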

  19. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development on a single personal computer is proposed. Its key characteristic is the construction of a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. First, all the Manufacturing Grid physical resource nodes are built on an abstraction layer of a single personal computer with virtual machine technology. Second, the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each node. The result is a prototype Manufacturing Grid application system running on a single personal computer, on which experiments can be carried out. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and yields trustworthy experimental results easily. A Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.

  20. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where high-resolution two-phase flow databases that can inform practical model development are still sparse due to the extreme reactor operating conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying a renewed interest in investigating the related flow problems. A combination of DNS and an interface tracking method provides a unique opportunity to study two-phase flows based on first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. Progress in large-scale bubbly flow DNS has focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of the DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. Such simulations can be carried out only on world-leading HPC platforms. These simulations allow more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.

  1. Internal Clock Drift Estimation in Computer Clusters

    Directory of Open Access Journals (Sweden)

    Hicham Marouani

    2008-01-01

    Most computers have several high-resolution timing sources, from the programmable interrupt timer to the cycle counter. Yet, even at a precision of one cycle in ten million, clocks may drift significantly in a single second at a clock frequency of several GHz. When tracing low-level system events in computer clusters, such as packet sending or reception, each computer system records its own events using an internal clock. In order to properly understand the global system behavior and performance, as reported by the events recorded on each computer, it is important to estimate precisely the clock differences and drift between the different computers in the system. This article studies the clock precision and stability of several computer systems with different architectures. It also studies the typical network delay characteristics, since time synchronization algorithms rely on the exchange of network packets and are dependent on the symmetry of the delays. A very precise clock, based on the atomic time provided by the GPS satellite network, was used as a reference to measure clock drifts and network delays. The results obtained are of immediate use to all applications which depend on computer clocks or network time synchronization accuracy.
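
    The core estimation task can be reduced to fitting a line through paired timestamps: if the local clock reads y when a reference clock (here, the GPS-disciplined clock) reads x, then y ≈ offset + (1 + drift)·x. The least-squares sketch below illustrates that idea only; the article's actual procedure must also contend with asymmetric network delays, which a plain regression does not handle.

        def fit_clock(reference_ts, local_ts):
            """Least-squares fit local = offset + slope * reference.
            Returns (offset_seconds, drift), where drift = slope - 1 is the
            fractional frequency error of the local clock."""
            n = len(reference_ts)
            mx = sum(reference_ts) / n
            my = sum(local_ts) / n
            sxx = sum((x - mx) ** 2 for x in reference_ts)
            sxy = sum((x - mx) * (y - my)
                      for x, y in zip(reference_ts, local_ts))
            slope = sxy / sxx
            return my - slope * mx, slope - 1.0

        # Synthetic check: a clock 50 ppm fast with a 2 ms initial offset.
        reference = list(range(0, 100, 10))
        local = [t * (1 + 50e-6) + 0.002 for t in reference]
        offset, drift = fit_clock(reference, local)  # ~0.002 s, ~5e-5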

  2. Label-free, single-object sensing with a microring resonator: FDTD simulation.

    Science.gov (United States)

    Nguyen, Dan T; Norwood, Robert A

    2013-01-14

    Label-free, single-object sensing with a microring resonator is investigated numerically using the finite difference time-domain (FDTD) method. A pulse with ultra-wide bandwidth that spans several resonant modes of the ring and of the sensing object is used, enabling a single-shot simulation of the microring sensing. The FDTD simulation can describe not only the circulation of the light in a whispering-gallery-mode (WGM) microring and the multiple interactions between the light and the sensing object, but also other important factors of the sensing system, such as scattering and radiation losses. The FDTD results show that the simulation can yield the resonant shift of the WGM cavity modes. Furthermore, it can also extract eigenmodes of the sensing object, and therefore information from deep inside the object. The simulation method is not only suitable for a single object (a single molecule, or a nano- or micro-scale particle) but can be extended to the problem of multiple objects as well.

  3. Three-dimensional simulation of the motion of a single particle under a simulated turbulent velocity field

    Science.gov (United States)

    Moreno-Casas, P. A.; Bombardelli, F. A.

    2015-12-01

    A 3D Lagrangian particle tracking model is coupled to a 3D channel velocity field to simulate the motion of a single sediment particle moving in saltation mode. The turbulent field is a high-resolution three-dimensional velocity field that reproduces a bypass transition to turbulence on a flat plate due to free-stream turbulence passing above the plate. In order to reduce computational costs, a decoupled approach is used, i.e., the turbulent flow is simulated independently from the tracking model, and then used to feed the 3D Lagrangian particle model. The simulations are carried out using the point-particle approach. The particle tracking model contains three sub-models, namely, particle free-flight, post-collision velocity and bed representation sub-models. The free-flight sub-model considers the action of the following forces: submerged weight, non-linear drag, lift, virtual mass, Magnus and Basset forces. The model also includes the effect of particle angular velocity. The post-collision velocities are obtained by applying conservation of angular and linear momentum. The complete model was validated with experimental results from the literature within the sand range. Results for particle velocity time series and the distribution of particle turbulent intensities are presented.

  4. Flexible single-layer ionic organic-inorganic frameworks towards precise nano-size separation

    Science.gov (United States)

    Yue, Liang; Wang, Shan; Zhou, Ding; Zhang, Hao; Li, Bao; Wu, Lixin

    2016-02-01

    Consecutive two-dimensional frameworks composed of molecular or cluster building blocks over large areas represent ideal candidates for membranes sieving molecules and nano-objects, but challenges remain in methodology and practical preparation. Here we exploit a new strategy to build soft single-layer ionic organic-inorganic frameworks via electrostatic interaction without preferential binding direction in water. Upon consideration of steric effects and additional interactions, polyanionic clusters as connection nodes and cationic pseudorotaxanes acting as bridging monomers connect with each other to form a single-layer ionic self-assembled framework with 1.4 nm layer thickness. Such soft supramolecular polymer frameworks possess a uniform and adjustable ortho-tetragonal nanoporous structure with pore sizes of 3.4-4.1 nm and exhibit greatly convenient solution processability. The stable membranes maintaining uniform porous structure demonstrate precisely size-selective separation of semiconductor quantum dots within 0.1 nm of accuracy and may hold promise for practical applications in selective transport, molecular separation and dialysis systems.

  5. The visual simulators for architecture and computer organization learning

    OpenAIRE

    Nikolić Boško; Grbanović Nenad; Đorđević Jovan

    2009-01-01

    The paper proposes a method for effective distance learning of computer architecture and organization. The proposed method is based on a software system that can be applied in any course in this field. Within this system students can observe simulations of previously created computer systems. The system also provides for the creation and simulation of switch systems.

  6. Programme for the simulation of the TPA-i 1001 computer on the CDC-1604-A computer

    International Nuclear Information System (INIS)

    Belyaev, A.V.

    1976-01-01

    The basic features and capabilities of a program simulating the TPA-i 1001 computer on the CDC-1604-A are described. The program is essentially aimed at the translation of programs in the SLAHG language for TPA-type computers. The main part of the program simulates the work of the central TPA processor; this subprogram sequentially performs the actions that change the registers and memory state of the TPA computer in the necessary manner. The simulated TPA computer has subprograms acting as analogues of the external devices, i.e. the ASR-33 teletype, the FS 1501 tape reader, and the FACIT perforator. Work under the program takes 1.65-2 times less time than work with a TPA with the minimum set of external equipment [ru

  7. Large scale particle simulations in a virtual memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.

    1983-01-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time. (orig.)

  8. Large-scale particle simulations in a virtual-memory computer

    International Nuclear Information System (INIS)

    Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.

    1982-08-01

    Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence, reduce the required computing time
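
    Both records above turn on the same trick: periodically reorder the particle arrays so that particles in the same spatial cell occupy neighboring indices, which keeps the charge-accumulation and push loops walking through memory pages sequentially instead of at random. A minimal sketch of such a nominal sort, with a made-up 2D cell decomposition:

        def sort_particles_by_cell(xs, ys, cell_size, nx):
            """Reorder particle coordinate arrays by grid-cell index so that
            physically adjacent particles get neighboring array indices.
            The sort need not be exact or frequent; a nominal pass suffices
            to cut random accesses to slow (paged-out) memory."""
            def cell(i):
                return int(ys[i] // cell_size) * nx + int(xs[i] // cell_size)
            order = sorted(range(len(xs)), key=cell)
            return [xs[i] for i in order], [ys[i] for i in order]

        xs, ys = [0.9, 5.1, 1.1, 5.2], [0.5, 3.9, 0.4, 4.1]
        xs, ys = sort_particles_by_cell(xs, ys, cell_size=2.0, nx=4)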

  9. The identification of spinal pathology in chronic low back pain using single photon emission computed tomography

    International Nuclear Information System (INIS)

    Ryan, R.J.; Gibson, T.; Fogelman, I.

    1992-01-01

    Single photon emission computed tomography (SPECT) findings were investigated in 80 consecutive patients (aged 18-70 years, median 44) referred to a rheumatology outpatient clinic with low back pain persisting for more than 3 months. Lesions of the lumbar spine were demonstrated in 60% of patients using SPECT but in only 35% with planar imaging. Fifty-one per cent of all lesions were detected only by SPECT, and lesions visualized on SPECT could be precisely localized to the vertebral body or different parts of the posterior elements. Fifty per cent of lesions involved the facetal joints, of which almost 60% were identified on SPECT alone. X-rays of the lumbar spine, with posterior oblique views, failed to demonstrate abnormalities corresponding to almost all SPECT posterior element lesions, although they identified abnormalities corresponding to over 60% of anterior SPECT lesions. Computed tomography (CT) was performed in 30 patients with a SPECT lesion and sites of facetal joint activity corresponded to facetal osteoarthritis in 82%. (author)

  10. Uses of Computer Simulation Models in Ag-Research and Everyday Life

    Science.gov (United States)

    When the news media talk about models, they could be talking about role models, fashion models, conceptual models like those the auto industry uses, or computer simulation models. A computer simulation model is a computer code that attempts to imitate the processes and functions of certain systems. There ...

  11. Advanced Simulation and Computing FY17 Implementation Plan, Version 0

    Energy Technology Data Exchange (ETDEWEB)

    McCoy, Michel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Archer, Bill [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hendrickson, Bruce [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wade, Doug [National Nuclear Security Administration (NNSA), Washington, DC (United States). Office of Advanced Simulation and Computing and Institutional Research and Development; Hoang, Thuc [National Nuclear Security Administration (NNSA), Washington, DC (United States). Computational Systems and Software Environment

    2016-08-29

    The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, study advanced nuclear weapons design and manufacturing processes, analyze accident scenarios and weapons aging, and provide the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (sufficient resolution, dimensionality, and scientific details), and quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

  12. Using computer simulations to facilitate conceptual understanding of electromagnetic induction

    Science.gov (United States)

    Lee, Yu-Fen

    This study investigated the use of computer simulations to facilitate conceptual understanding in physics. The use of computer simulations in the present study was grounded in a conceptual framework drawn from findings related to the use of computer simulations in physics education. To achieve the goal of effective utilization of computers for physics education, I first reviewed studies pertaining to computer simulations in physics education, categorized by three different learning frameworks, and studies comparing the effects of different simulation environments. My intent was to identify the learning context and factors for successful use of computer simulations in past studies and to learn from the studies which did not obtain a significant result. Based on the analysis of the reviewed literature, I proposed effective approaches to integrating computer simulations in physics education. These approaches are consistent with well-established education principles such as those suggested by How People Learn (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). The research-based approaches to integrating computer simulations in physics education form a learning framework called Concept Learning with Computer Simulations (CLCS) in the current study. The second component of this study was to examine the CLCS learning framework empirically. The participants were recruited from a public high school in Beijing, China. All participating students were randomly assigned to two groups, the experimental (CLCS) group and the control (TRAD) group. Research-based computer simulations developed by the physics education research group at the University of Colorado at Boulder were used to tackle common conceptual difficulties in learning electromagnetic induction. While interacting with computer simulations, CLCS students were asked to answer reflective questions designed to stimulate qualitative reasoning and explanation. After receiving model reasoning online, students were asked to submit

  13. A Computer Simulation of Community Pharmacy Practice for Educational Use.

    Science.gov (United States)

    Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert

    2014-11-15

    To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.

  14. A hybrid parallel architecture for electrostatic interactions in the simulation of dissipative particle dynamics

    Science.gov (United States)

    Yang, Sheng-Chun; Lu, Zhong-Yuan; Qian, Hu-Jun; Wang, Yong-Lei; Han, Jie-Ping

    2017-11-01

    In this work, we upgraded the electrostatic interaction method of CU-ENUF (Yang, et al., 2016), which first applied CUNFFT (nonequispaced Fourier transforms based on CUDA) to the reciprocal-space electrostatic computation, so that the electrostatic interaction is now computed entirely on the GPU. The upgraded edition of CU-ENUF runs in a hybrid parallel fashion: the computation is first parallelized across multiple computer nodes and then across the GPUs installed in each node. With this parallel strategy, the size of the simulation system is no longer restricted by the throughput of a single CPU or GPU. The most critical technical problem, parallelizing CUNFFT within this strategy, was solved through careful analysis of the underlying principles together with some algorithmic techniques. Furthermore, the upgraded method is capable of computing electrostatic interactions for both atomistic molecular dynamics (MD) and dissipative particle dynamics (DPD). Finally, the benchmarks conducted for validation and performance indicate that the upgraded method not only delivers good precision with suitable parameter settings, but also provides an efficient way to compute electrostatic interactions for huge simulation systems. Program Files doi:http://dx.doi.org/10.17632/zncf24fhpv.1 Licensing provisions: GNU General Public License 3 (GPL) Programming language: C, C++, and CUDA C Supplementary material: The program is designed for effective electrostatic interactions of large-scale simulation systems, and runs on computers equipped with NVIDIA GPUs. It has been tested on (a) a single computer node with an Intel(R) Core(TM) i7-3770 @ 3.40 GHz (CPU) and a GTX 980 Ti (GPU), and (b) MPI parallel computer nodes with the same configurations. Nature of problem: For molecular dynamics simulation, the electrostatic interaction is the most time-consuming computation because of its long-range feature and slow convergence in simulation space.

  15. Seventeenth Workshop on Computer Simulation Studies in Condensed-Matter Physics

    CERN Document Server

    Landau, David P; Schütler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVI

    2006-01-01

    This status report features the most recent developments in the field, spanning a wide range of topical areas in the computer simulation of condensed matter/materials physics. Both established and new topics are included, ranging from the statistical mechanics of classical magnetic spin models to electronic structure calculations, quantum simulations, and simulations of soft condensed matter. The book presents new physical results as well as novel methods of simulation and data analysis. Highlights of this volume include various aspects of non-equilibrium statistical mechanics, studies of properties of real materials using both classical model simulations and electronic structure calculations, and the use of computer simulations in teaching.

  16. Parallel Monte Carlo simulations on an ARC-enabled computing grid

    International Nuclear Information System (INIS)

    Nilsen, Jon K; Samset, Bjørn H

    2011-01-01

    Grid computing opens new possibilities for running heavy Monte Carlo simulations of physical systems in parallel. The presentation gives an overview of GaMPI, a system for running an MPI-based random walker simulation on grid resources. Integrating the ARC middleware and the new storage system Chelonia with the Ganga grid job submission and control system, we show that MPI jobs can be run on a world-wide computing grid with good performance and promising scaling properties. Results for relatively communication-heavy Monte Carlo simulations run on multiple heterogeneous, ARC-enabled computing clusters in several countries are presented.

  17. Computer simulation in nuclear science and engineering

    International Nuclear Information System (INIS)

    Akiyama, Mamoru; Miya, Kenzo; Iwata, Shuichi; Yagawa, Genki; Kondo, Shusuke; Hoshino, Tsutomu; Shimizu, Akinao; Takahashi, Hiroshi; Nakagawa, Masatoshi.

    1992-01-01

    The numerical simulation technology used for the design of nuclear reactors spans a wide range of scientific fields, and it has matured through steady efforts to achieve high calculation accuracy, supported by safety examinations, reliability verification tests, the assessment of operating experience, and so on. On the occasion of numerical simulation being put to practical use in wide fields, the numerical simulation of five basic equations which describe the natural world and the progress of the related technologies are reviewed. Numerical simulation technology is expected to contribute not only as a means of design study but also to the progress of science and technology, for example through the construction of new innovative concepts and the exploration of new mechanisms and substances for which no models exist in the natural world. The development of atomic energy and the progress of computers, Boltzmann's transport equation and its periphery, the Navier-Stokes equation and its periphery, Maxwell's electromagnetic field equation and its periphery, the Schroedinger wave equation and its periphery, computational solid mechanics and its periphery, and probabilistic risk assessment and its periphery are described. (K.I.)

  18. COMPUTER LEARNING SIMULATOR WITH VIRTUAL REALITY FOR OPHTHALMOLOGY

    Directory of Open Access Journals (Sweden)

    Valeria V. Gribova

    2013-01-01

    A toolset for a medical computer learning simulator for ophthalmology with virtual reality, and its implementation, are considered in the paper. The simulator is oriented towards professional skills training for students of medical universities.

  19. Simulation in computer forensics teaching: the student experience

    OpenAIRE

    Crellin, Jonathan; Adda, Mo; Duke-Williams, Emma; Chandler, Jane

    2011-01-01

    The use of simulation in teaching computing is well established, with digital forensic investigation being a subject area where the range of simulation required is both wide and varied, demanding a corresponding breadth of fidelity. Each type of simulation can be complex and expensive to set up, resulting in students having only limited opportunities to participate in and learn from the simulation. For example, students' participation in mock trials in the University mock courtroom or in simulation...

  20. Molecular dynamics simulations and applications in computational toxicology and nanotoxicology.

    Science.gov (United States)

    Selvaraj, Chandrabose; Sakkiah, Sugunadevi; Tong, Weida; Hong, Huixiao

    2018-02-01

    Nanotoxicology studies the toxicity of nanomaterials and has been widely applied in biomedical research to explore the toxicity of various biological systems. Investigating biological systems through in vivo and in vitro methods is expensive and time-consuming. Therefore, computational toxicology, a multi-discipline field that utilizes computational power and algorithms to examine the toxicology of biological systems, has gained attention from scientists. Molecular dynamics (MD) simulations of biomolecules such as proteins and DNA are popular for understanding interactions between biological systems and chemicals in computational toxicology. In this paper, we review MD simulation methods, protocols for running MD simulations and their applications in studies of toxicity and nanotechnology. We also briefly summarize some popular software tools for the execution of MD simulations. Published by Elsevier Ltd.
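
    For readers new to the method, the engine of every MD simulation is a time-stepping integrator for Newton's equations; velocity Verlet is the common choice. The sketch below is a bare one-dimensional illustration with a hypothetical forces callback; production MD packages add neighbor lists, thermostats, constraints and long-range electrostatics on top of this loop.

        def velocity_verlet(pos, vel, forces, mass, dt, n_steps):
            """Advance positions/velocities with the velocity-Verlet scheme:
            half-kick, drift, recompute forces, half-kick."""
            f = forces(pos)
            for _ in range(n_steps):
                vel = [v + 0.5 * dt * fi / mass for v, fi in zip(vel, f)]
                pos = [p + dt * v for p, v in zip(pos, vel)]
                f = forces(pos)
                vel = [v + 0.5 * dt * fi / mass for v, fi in zip(vel, f)]
            return pos, vel

        # Toy harmonic well, one particle: force = -k * x with k = 1
        pos, vel = velocity_verlet([1.0], [0.0],
                                   lambda p: [-1.0 * x for x in p],
                                   mass=1.0, dt=0.01, n_steps=1000)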

  1. A single-chip computer analysis system for liquid fluorescence

    International Nuclear Information System (INIS)

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytic instrument based on the principle that liquids containing hydrocarbons give out several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics and electronics into one unit, and is small, light and practical, so it can be used for surface water sample analysis in oil fields and impurity analysis of other materials.

  2. Precision of dosimetry-related measurements obtained on current multidetector computed tomography scanners

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; McNitt-Gray, Michael F.; Zhang, Di; Kim, Hyun J.; Cody, Dianna D.

    2010-01-01

    Purpose: Computed tomography (CT) intrascanner and interscanner variability has not been well characterized. Thus, the purpose of this study was to examine the within-run, between-run, and between-scanner precision of physical dosimetry-related measurements collected over the course of 1 yr on three different makes and models of multidetector row CT (MDCT) scanners. Methods: Physical measurements were collected using nine CT scanners (three scanners each of GE VCT, GE LightSpeed 16, and Siemens Sensation 64 CT). Measurements were made using various combinations of technical factors, including kVp, type of bowtie filter, and x-ray beam collimation, for several dosimetry-related quantities, including (a) free-in-air CT dose index (CTDI100,air); (b) calculated half-value layers and quarter-value layers; and (c) weighted CT dose index (CTDIw) calculated from exposure measurements collected in both a 16 and 32 cm diameter CTDI phantom. Data collection was repeated at several different time intervals, ranging from seconds (for CTDI100,air values) to weekly for 3 weeks and then quarterly or triannually for 1 yr. Precision of the data was quantified by the percent coefficient of variation (%CV). Results: The maximum relative precision error (maximum %CV value) across all dosimetry metrics, time periods, and scanners included in this study was 4.33%. The median observed %CV values for CTDI100,air ranged from 0.05% to 0.19% over several seconds, 0.12%-0.52% over 1 week, and 0.58%-2.31% over 3-4 months. For CTDIw for a 16 and 32 cm CTDI phantom, respectively, the range of median %CVs was 0.38%-1.14% and 0.62%-1.23% in data gathered weekly for 3 weeks and 1.32%-2.79% and 0.84%-2.47% in data gathered quarterly or triannually for 1 yr. Conclusions: From a dosimetry perspective, the MDCT scanners tested in this study demonstrated a high degree of within-run, between-run, and between-scanner precision (with relative precision errors typically well under 5%).
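
    The study's figure of merit is simple enough to state exactly: the percent coefficient of variation, %CV = 100·s/mean, computed over repeated measurements. A one-liner with made-up repeat readings:

        from statistics import mean, stdev

        def percent_cv(values):
            """Relative precision as used in the study: 100 * SD / mean."""
            return 100.0 * stdev(values) / mean(values)

        # e.g. five hypothetical repeated CTDI readings from one scanner:
        print(round(percent_cv([14.20, 14.31, 14.12, 14.25, 14.18]), 2))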

  3. Computer simulation as representation of knowledge in education

    International Nuclear Information System (INIS)

    Krekic, Valerija Pinter; Namestovski, Zolt

    2009-01-01

    According to Aebli's operative method (1963) and Bruner's (1974) theory of representation, the development of the process of thinking in teaching has the following phases, or levels of abstraction: manipulation of specific things (concrete phase), iconic representation (figural phase), and symbolic representation (symbolic phase). Modern information technology has contributed to the enrichment of teaching and learning processes, especially in the fields of natural sciences and mathematics and those of production and technology. Simulation appears as a new possibility for the representation of knowledge. According to Guetzkow (1972), simulation is an operative representation of reality from a relevant aspect: a model of an objective system that is itself dynamic. If that model is material, it is a simple simulation; if it is abstract, it is a reflective experiment, that is, a computer simulation. The present work deals with the systematization and classification of simulation methods in the teaching of natural sciences and mathematics and of production and technology, with a special focus on computer simulations and an exemplary account of the place and role of this modern method of cognition. Key words: representation of knowledge, modeling, simulation, education

  4. Verification of a hybrid adjoint methodology in Titan for single photon emission computed tomography - 316

    International Nuclear Information System (INIS)

    Royston, K.; Haghighat, A.; Yi, C.

    2010-01-01

    The hybrid deterministic transport code TITAN is being applied to a Single Photon Emission Computed Tomography (SPECT) simulation of a myocardial perfusion study. The TITAN code's hybrid methodology allows the use of a discrete ordinates solver in the phantom region and a characteristics method solver in the collimator region. Currently we seek to validate the adjoint methodology in TITAN for this application using a SPECT model that has been created in the MCNP5 Monte Carlo code. The TITAN methodology was examined based on the response of a single voxel detector placed in front of the heart with and without collimation. For the case without collimation, the TITAN response for a single voxel-sized detector had a -9.96% difference relative to the MCNP5 response. To simulate collimation, the adjoint source was specified in directions located within the collimator acceptance angle. For a single collimator hole with a diameter matching the voxel dimension, a difference of -0.22% was observed. Comparisons to groupings of smaller collimator holes of two different sizes resulted in relative differences of 0.60% and 0.12%. The number of adjoint source directions within an acceptance angle was increased and showed no significant change in accuracy. Our results indicate that the hybrid adjoint methodology of TITAN yields accurate solutions more than a factor of two faster than MCNP5. (authors)

  5. Computer simulations of shear thickening of concentrated dispersions

    NARCIS (Netherlands)

    Boersma, W.H.; Laven, J.; Stein, H.N.

    1995-01-01

    Stokesian dynamics computer simulations were performed on monolayers of equally sized spheres. The influence of repulsive and attractive forces on the rheological behavior and on the microstructure were studied. Under specific conditions shear thickening could be observed in the simulations, usually

  6. Computational fluid dynamics simulations and validations of results

    CSIR Research Space (South Africa)

    Sitek, MA

    2013-09-01

    Full Text Available The influence of wind flow on a high-rise building is analyzed. The research covers full-scale tests, wind-tunnel experiments and numerical simulations. In the present paper the computational model used in the simulations is described and the results, which were...

  7. Augmented Reality Simulations on Handheld Computers

    Science.gov (United States)

    Squire, Kurt; Klopfer, Eric

    2007-01-01

    Advancements in handheld computing, particularly its portability, social interactivity, context sensitivity, connectivity, and individuality, open new opportunities for immersive learning environments. This article articulates the pedagogical potential of augmented reality simulations in environmental engineering education by immersing students in…

  8. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Science.gov (United States)

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state the dependencies of each constituent part, algorithms only need to be described at a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
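
    As a toy illustration of the dataflow idea described above, the sketch below runs a set of tasks whose dependencies are stated explicitly, launching every task whose inputs are ready in parallel. It is a minimal Python sketch; the task names and executor structure are illustrative assumptions, not the Copernicus API.

        # Minimal dataflow sketch: tasks declare their inputs, and any task
        # whose inputs are all available may run; ready tasks run in parallel.
        from concurrent.futures import ThreadPoolExecutor

        def run_dataflow(tasks, deps):
            """tasks: name -> callable(list of inputs); deps: name -> input names."""
            results, remaining = {}, dict(deps)
            with ThreadPoolExecutor() as pool:
                while remaining:
                    ready = [t for t, d in remaining.items()
                             if all(x in results for x in d)]
                    if not ready:
                        raise ValueError("cyclic dependencies")
                    futures = {t: pool.submit(tasks[t],
                                              [results[x] for x in remaining[t]])
                               for t in ready}
                    for t, f in futures.items():
                        results[t] = f.result()
                        del remaining[t]
            return results

        # Three independent "simulations" feed one combining "analysis" step.
        tasks = {"sim_a": lambda _: 1.0, "sim_b": lambda _: 2.0,
                 "sim_c": lambda _: 3.0,
                 "combine": lambda xs: sum(xs) / len(xs)}
        deps = {"sim_a": [], "sim_b": [], "sim_c": [],
                "combine": ["sim_a", "sim_b", "sim_c"]}
        print(run_dataflow(tasks, deps)["combine"])  # 2.0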

  9. Computer Simulation of the Circulation Subsystem of a Library

    Science.gov (United States)

    Shaw, W. M., Jr.

    1975-01-01

    When circulation data are used as input parameters for a computer simulation of a library's circulation subsystem, the results of the simulation provide information on book availability and delays. The model may be used to simulate alternative loan policies. (Author/LS)

  10. Validating precision--how many measurements do we need?

    Science.gov (United States)

    Åsberg, Arne; Solem, Kristine Bodal; Mikkelsen, Gustav

    2015-10-01

    A quantitative analytical method should be sufficiently precise, i.e. the imprecision measured as a standard deviation should be less than the numerical definition of the acceptable standard deviation. We propose that the entire 90% confidence interval for the true standard deviation shall lie below the numerical definition of the acceptable standard deviation in order to assure that the analytical method is sufficiently precise. We also present power function curves to ease the decision on the number of measurements to make. Computer simulation was used to calculate the probability that the upper limit of the 90% confidence interval for the true standard deviation was equal to or exceeded the acceptable standard deviation. Power function curves were constructed for different scenarios. The probability of failure to assure that the method is sufficiently precise increases with decreasing number of measurements and with increasing standard deviation when the true standard deviation is well below the acceptable standard deviation. For instance, the probability of failure is 42% for a precision experiment of 40 repeated measurements in one analytical run and 7% for 100 repeated measurements, when the true standard deviation is 80% of the acceptable standard deviation. Compared to the CLSI guidelines, validating precision according to the proposed principle is more reliable, but demands considerably more measurements. Using power function curves may help when planning studies to validate precision.
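
    The criterion above is easy to reproduce numerically. The Python sketch below estimates, by simulation, the probability that the upper limit of the two-sided 90% chi-square confidence interval for the standard deviation reaches the acceptable value; the normal data and single-run design are illustrative assumptions, not the paper's exact protocol.

        # Monte Carlo estimate of the failure probability: the method "fails"
        # the proposed criterion when the upper 90% CI limit for the true SD
        # reaches or exceeds the acceptable SD.
        import numpy as np
        from scipy.stats import chi2

        def failure_probability(n, true_sd, acceptable_sd,
                                trials=100_000, seed=1):
            rng = np.random.default_rng(seed)
            x = rng.normal(0.0, true_sd, size=(trials, n))
            s = x.std(axis=1, ddof=1)
            # Upper limit of the two-sided 90% chi-square CI for the SD.
            upper = s * np.sqrt((n - 1) / chi2.ppf(0.05, n - 1))
            return np.mean(upper >= acceptable_sd)

        # True SD at 80% of the acceptable SD, as in the example quoted above.
        print(failure_probability(40, 0.8, 1.0))   # close to 0.42
        print(failure_probability(100, 0.8, 1.0))  # close to 0.07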

  11. Using EDUCache Simulator for the Computer Architecture and Organization Course

    Directory of Open Access Journals (Sweden)

    Sasko Ristov

    2013-07-01

    Full Text Available The computer architecture and organization course is essential in all computer science and engineering programs, and it is among the most frequently selected and best liked elective courses in related engineering disciplines. However, this attractiveness brings a new challenge: it requires a lot of effort by the instructor to explain rather complicated concepts to beginners or to those who study related disciplines. The usage of visual simulators can improve both the teaching and learning processes. The overall goal is twofold: (1) to enable a visual environment to explain the basic concepts and (2) to increase the student's willingness and ability to learn the material. A lot of visual simulators have been used for the computer architecture and organization course. However, due to the lack of visual simulators for simulation of cache memory concepts, we have developed a new visual simulator, the EDUCache simulator. In this paper we present that it can be effectively and efficiently used as a supporting tool in the learning process of modern multi-layer, multi-cache and multi-core multi-processors. EDUCache's features enable an environment for performance evaluation and engineering of software systems, i.e. the students will also understand the importance of computer architecture building parts and, hopefully, will increase their curiosity for hardware courses in general.

  12. Feedback controlled electrical nerve stimulation: a computer simulation.

    Science.gov (United States)

    Doruk, R Ozgur

    2010-07-01

    The role of repetitive firing in neurophysiologic or neuropsychiatric disorders, such as Parkinson's disease, epilepsy and bipolar-type disorders, has always been a topic of medical research, as therapies target either the cessation of firing or a decrease in its frequency. In electrotherapy, one of the mechanisms used to achieve this goal is to apply a low density electric current to the nervous system. In this study, a computer simulation is presented of a treatment in which the stimulation current is computed from nerve fiber cell membrane potential feedback, so that the current level is adjusted automatically instead of manually. The behavior of the nerve cell is represented by the Hodgkin-Huxley (HH) model, which is slightly modified into a linear model with state dependent coefficients. Owing to this modification, the algebraic and differential Riccati equations can be applied, which allows an optimal controller minimizing a quadratic performance index given by the user. Using a controlled current injection can decrease unnecessarily long current injection times that may be harmful to the neuronal network. This study introduces a prototype for a possible future application to a network of neurons, which is more realistic than a single neuron. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
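
    A minimal sketch of the control principle follows: for a linear(ized) state-space model, the feedback gain minimizing a quadratic performance index is obtained from the algebraic Riccati equation. The system matrices below are toy placeholders, not the modified Hodgkin-Huxley coefficients used in the paper.

        # Linear-quadratic feedback sketch: dx/dt = A x + B u with cost
        # integral(x'Qx + u'Ru) dt is minimized by u = -K x, where K comes
        # from the continuous algebraic Riccati equation.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[-0.5,  1.0],
                      [ 0.0, -1.0]])        # toy state matrix (placeholder)
        B = np.array([[0.0], [1.0]])        # stimulation current input
        Q = np.eye(2)                       # state penalty
        R = np.array([[10.0]])              # penalize large injected currents

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)     # optimal feedback gain, u = -K x
        print("feedback gain K =", K)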

  13. Deterministic and robust generation of single photons from a single quantum dot with 99.5% indistinguishability using adiabatic rapid passage.

    Science.gov (United States)

    Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2014-11-12

    Single photons are attractive candidates for quantum bits (qubits) in quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.

  14. NeuroManager: A workflow analysis based simulation management engine for computational neuroscience

    Directory of Open Access Journals (Sweden)

    David Bruce Stockton

    2015-10-01

    Full Text Available We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to supercomputer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in Matlab, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in twenty-two stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to Matlab's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project.

  15. Simulating the 2012 High Plains Drought Using Three Single Column Models (SCM)

    Science.gov (United States)

    Medina, I. D.; Baker, I. T.; Denning, S.; Dazlich, D. A.

    2015-12-01

    The impact of changes in the frequency and severity of drought on fresh water sustainability is a great concern for many regions of the world. One such location is the High Plains, where the local economy is primarily driven by fresh water withdrawals from the Ogallala Aquifer, which accounts for approximately 30% of total irrigation withdrawals from all U.S. aquifers combined. Modeling studies that focus on the feedback mechanisms that control the climate and eco-hydrology during times of drought are limited, and have used conventional General Circulation Models (GCMs) with grid length scales ranging from one hundred to several hundred kilometers. Additionally, these models utilize crude statistical parameterizations of cloud processes for estimating sub-grid fluxes of heat and moisture and have a poor representation of land surface heterogeneity. For this research, we focus on the 2012 High Plains drought and perform numerical simulations using three single column model (SCM) versions of BUGS5 (Colorado State University (CSU) GCM coupled to the Simple Biosphere Model (SiB3)). In the first version of BUGS5, the model is used in its standard bulk setting (single atmospheric column coupled to a single instance of SiB3), secondly, the Super-Parameterized Community Atmospheric Model (SP-CAM), a cloud resolving model (CRM) (CRM consists of 32 atmospheric columns), replaces the single CSU GCM atmospheric parameterization and is coupled to a single instance of SiB3, and for the third version of BUGS5, an instance of SiB3 is coupled to each CRM column of the SP-CAM (32 CRM columns coupled to 32 instances of SiB3). To assess the physical realism of the land-atmosphere feedbacks simulated by all three versions of BUGS5, differences in simulated energy and moisture fluxes are computed between the 2011 and 2012 period and are compared to those calculated using observational data from the AmeriFlux Tower Network for the same period at the ARM Site in Lamont, OK. This research

  16. Prospective randomized study of contrast reaction management curricula: Computer-based interactive simulation versus high-fidelity hands-on simulation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Carolyn L., E-mail: wangcl@uw.edu [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Schopp, Jennifer G.; Kani, Kimia [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Petscavage-Thomas, Jonelle M. [Penn State Hershey Medical Center, Department of Radiology, 500 University Drive, Hershey, PA 17033 (United States); Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H. [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States)

    2013-12-01

    Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB approved HIPAA compliant prospective study of 44 radiology residents, fellows and faculty who were randomized into either the high-fidelity hands-on simulation group or computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups’ written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups’ scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees that underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test as those trained with hands-on high-fidelity simulation.

  17. Prospective randomized study of contrast reaction management curricula: Computer-based interactive simulation versus high-fidelity hands-on simulation

    International Nuclear Information System (INIS)

    Wang, Carolyn L.; Schopp, Jennifer G.; Kani, Kimia; Petscavage-Thomas, Jonelle M.; Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H.

    2013-01-01

    Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB approved HIPAA compliant prospective study of 44 radiology residents, fellows and faculty who were randomized into either the high-fidelity hands-on simulation group or computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups’ written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups’ scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees that underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test as those trained with hands-on high-fidelity simulation

  18. The use of filtering methods to compensate for constant attenuation in single-photon emission computed tomography

    International Nuclear Information System (INIS)

    Gullberg, G.T.; Budinger, T.F.

    1981-01-01

    A back projection of filtered projection (BKFIL) reconstruction algorithm is presented that is applicable to single-photon emission computed tomography (ECT) in the presence of a constant attenuating medium such as the brain. The filters used in transmission computed tomography (TCT), comprised of a ramp multiplied by window functions, are modified so that the single-photon ECT filter is a function of the constant attenuation coefficient. The filters give good reconstruction results with sufficient angular and lateral sampling. With continuous samples the BKFIL algorithm has a point spread function that is the Hankel transform of the window function. The resolution and statistical properties of the filters are demonstrated by various simulations which assume an ideal detector response. Statistical formulas for the reconstructed image show that the square of the percent-root-mean-square (percent-rms) uncertainty of the reconstruction is inversely proportional to the total measured counts. The results indicate that constant attenuation can be compensated for by using an attenuation-dependent filter that reconstructs the transverse section reliably. Computer time requirements are two times those of conventional TCT or positron ECT and there is no increase in memory requirements
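
    For readers unfamiliar with the filter family discussed here, the sketch below constructs an ordinary windowed ramp filter in the frequency domain and applies it to one parallel-beam projection. The attenuation-dependent modification that defines the BKFIL filters is specific to the paper and is not reproduced; the Hann window and cutoff are illustrative choices.

        # A windowed ramp filter: the ideal ramp |f| multiplied by a window
        # function, applied to a projection in the frequency domain.
        import numpy as np

        def windowed_ramp(n_det, cutoff=0.5):
            freqs = np.fft.fftfreq(n_det)            # cycles per sample
            ramp = np.abs(freqs)                     # ideal ramp filter
            window = 0.5 * (1 + np.cos(np.pi * freqs / cutoff))  # Hann
            window[np.abs(freqs) > cutoff] = 0.0
            return ramp * window

        def filter_projection(projection):
            H = windowed_ramp(len(projection))
            return np.real(np.fft.ifft(np.fft.fft(projection) * H))

        print(filter_projection(np.ones(16)))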

  19. Hybrid quantum computation

    International Nuclear Information System (INIS)

    Sehrawat, Arun; Englert, Berthold-Georg; Zemann, Daniel

    2011-01-01

    We present a hybrid model of the unitary-evolution-based quantum computation model and the measurement-based quantum computation model. In the hybrid model, part of a quantum circuit is simulated by unitary evolution and the rest by measurements on star graph states, thereby combining the advantages of the two standard quantum computation models. In the hybrid model, a complicated unitary gate under simulation is decomposed into a sequence of single-qubit operations, controlled-z gates, and multiqubit rotations around the z axis. Every single-qubit gate and controlled-z gate is realized by a respective unitary evolution, and every multiqubit rotation is executed by a single measurement on a required star graph state. The classical information processing in our model requires only an information flow vector and propagation matrices. We provide the implementation of multicontrol gates in the hybrid model. They are very useful for implementing Grover's search algorithm, which is studied as an illustrative example.
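
    The unitary-evolution half of such a decomposition is straightforward to simulate numerically. The sketch below applies a single-qubit gate, a controlled-z gate, and a z-rotation to a two-qubit state vector; the measurement-based star-graph part of the hybrid model is not shown, and the gate sequence is an arbitrary illustration.

        # Simulating a small circuit by unitary evolution: single-qubit gates,
        # a controlled-z gate, and a rotation about the z axis.
        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard
        CZ = np.diag([1, 1, 1, -1]).astype(complex)        # controlled-z

        def rz(theta):                                     # z rotation
            return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

        state = np.zeros(4, dtype=complex)
        state[0] = 1.0                                     # |00>
        state = np.kron(H, np.eye(2)) @ state              # H on qubit 0
        state = CZ @ state                                 # entangling gate
        state = np.kron(np.eye(2), rz(np.pi / 4)) @ state  # Rz on qubit 1
        print(np.round(state, 3))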

  20. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations describe an idealized case in which the expectation value of the number of atoms that decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is random, but this random behaviour is such that, for a single species, the ensemble of disintegration times follows a geometric distribution. Numbers with a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon 222 and the emission of alpha particles from polonium 218 and polonium 214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measuring exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium 218 is not independent of the time of decay of the subsequent polonium 214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure
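
    The geometric-distribution idea translates directly into code. In the Python sketch below, each atom decays with probability p = 1 - exp(-lambda*dt) per time step, so the step index at which it decays is geometrically distributed; the half-life value is an approximate literature figure used for illustration.

        # Step-by-step decay simulation: the step at which each atom decays
        # follows a geometric distribution with p = 1 - exp(-lambda * dt).
        import numpy as np

        rng = np.random.default_rng(0)

        def decay_steps(n_atoms, half_life_s, dt_s):
            lam = np.log(2) / half_life_s
            p = 1.0 - np.exp(-lam * dt_s)
            return rng.geometric(p, size=n_atoms)   # decay step per atom

        # 1000 polonium 218 atoms (half-life roughly 186 s), 1 s steps:
        # the sample mean should be close to the theoretical mean 1/p.
        steps = decay_steps(1000, 186.0, 1.0)
        p = 1.0 - np.exp(-np.log(2) / 186.0)
        print(steps.mean(), 1.0 / p)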

  1. Phase-coexistence simulations of fluid mixtures by the Markov Chain Monte Carlo method using single-particle models

    KAUST Repository

    Li, Jun

    2013-09-01

    We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models predict gas-phase properties with accuracy similar to that of more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has accuracy in the gas phase similar to that of traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data is not available. © 2013 Elsevier Inc.
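
    Two of the building blocks mentioned above, the Lennard-Jones pair energy and the Metropolis acceptance test used for Monte Carlo trial moves, are sketched below in Python. The epsilon and sigma values are placeholders, not the fitted single-particle parameters for CO2 or N2.

        # Lennard-Jones pair potential and the Metropolis acceptance rule,
        # the elementary ingredients of a Gibbs-ensemble Monte Carlo step.
        import numpy as np

        def lj_energy(r, epsilon=1.0, sigma=1.0):
            """U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
            sr6 = (sigma / r) ** 6
            return 4.0 * epsilon * (sr6 ** 2 - sr6)

        def metropolis_accept(delta_e, kT, rng):
            """Accept a trial move with probability min(1, exp(-dE/kT))."""
            return delta_e <= 0.0 or rng.random() < np.exp(-delta_e / kT)

        rng = np.random.default_rng(42)
        dE = lj_energy(1.5) - lj_energy(1.2)
        print(lj_energy(1.2), metropolis_accept(dE, kT=1.0, rng=rng))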

  2. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for simulations that require a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  3. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of modern computers, it has become possible to treat a patient more precisely. However, the simulation time must be increased to reduce the statistical uncertainty. When particles are generated from the cobalt source in a simulation, many of them are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurements with the simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in simulated results.

  4. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing ability of modern computers, it has become possible to treat a patient more precisely. However, the simulation time must be increased to reduce the statistical uncertainty. When particles are generated from the cobalt source in a simulation, many of them are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurements with the simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in simulated results

  5. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  6. Using computer simulations to probe the structure and dynamics of biopolymers

    International Nuclear Information System (INIS)

    Levy, R.M.; Hirata, F.; Kim, K.; Zhang, P.

    1987-01-01

    The use of computer simulations to study internal motions and thermodynamic properties is receiving increased attention. One important use of the method is to provide a more fundamental understanding of the molecular information contained in various kinds of experiments on these complex systems. In the first part of this paper the authors review recent work in their laboratory concerned with the use of computer simulations for the interpretation of experimental probes of molecular structure and dynamics of proteins and nucleic acids. The interplay between computer simulations and three experimental techniques is emphasized: (1) nuclear magnetic resonance relaxation spectroscopy, (2) refinement of macro-molecular x-ray structures, and (3) vibrational spectroscopy. The treatment of solvent effects in biopolymer simulations is a difficult problem. It is not possible to study systematically the effect of solvent conditions, e.g. added salt concentration, on biopolymer properties by means of simulations alone. In the last part of the paper the authors review a more analytical approach they developed to study polyelectrolyte properties of solvated biopolymers. The results are compared with computer simulations

  7. Digital control computer upgrade at the Cernavoda NPP simulator

    International Nuclear Information System (INIS)

    Ionescu, T.

    2006-01-01

    The Plant Process Computer equips some nuclear power plants, such as CANDU-600, with centralized control performed by an assembly of two computers, known as Digital Control Computers (DCCs), working in parallel to drive the plant safely at steady state and during normal maneuvers, but also during abnormal transients, when the plant is automatically steered to a safe state. The centralized control comprises both hardware and software, whose presence is obligatory in the full scope simulator and whose configuration is subject to change, with specific requirements, over the life of the plant and the simulator; these aspects are covered by this subsection

  8. Computer based training simulator for Hunterston Nuclear Power Station

    International Nuclear Information System (INIS)

    Bowden, R.S.M.; Hacking, D.

    1978-01-01

    For reasons which are stated, the Hunterston-B nuclear power station automatic control system includes a manual over-ride facility. It is therefore essential for the station engineers to be trained to recognise and control all feasible modes of plant and logic malfunction. A training simulator has been built which consists of a replica of the shutdown monitoring panel in the Central Control Room and is controlled by a mini-computer. This paper highlights the computer aspects of the simulator and relevant derived experience, under the following headings: engineering background; shutdown sequence equipment; simulator equipment; features; software; testing; maintenance. (U.K.)

  9. Matchgate circuits and compressed quantum computation

    International Nuclear Information System (INIS)

    Boyajian, W.L.

    2015-01-01

    exact diagonalization. In Part II, we deal with the compressed way of quantum computation mentioned above, used to simulate physically interesting behaviours of large systems. To give an example, consider an experimental set-up where up to 8 qubits can be well controlled. Such a set-up can be used to simulate certain interactions of 2^8 = 256 qubits. In [Boyajian et al. (2013)], we generalised the results from [Kraus (2011)] and demonstrated how the adiabatic evolution of the 1D XY-model can be simulated via an exponentially smaller quantum system. More precisely, it is shown there how the phase transition of such a model of a spin chain consisting of n qubits can be observed via a compressed algorithm processing only log(n) qubits. The feasibility of such a compressed quantum simulation is due to the fact that the adiabatic evolution and the measurement of the magnetization employed to observe the phase transition can be described by a matchgate circuit. Remarkably, the number of elementary gates, i.e. the number of single- and two-qubit gates required to implement the compressed simulation, can be even smaller than that required to implement the original matchgate circuit. This compressed algorithm has already been realized experimentally using NMR quantum computing [Li et al. (2014)]. In [Boyajian et al. (2013)] we showed that not only the quantum phase transition can be observed in this way, but that various other interesting processes, such as quantum quenching, where the evolution is non-adiabatic, and general time evolutions can be simulated with an exponentially smaller system. In Part II, we also recall the results from [Boyajian and Kraus (2015)], where we extend the notion of compressed quantum simulation even further. We consider the XY-model and derive compressed circuits to simulate the behavior of the thermal and any excited state of the system. To this end, we use the diagonalization of the XY-Hamiltonian presented in [Verstraete et al

  10. Linear optical quantum computing in a single spatial mode.

    Science.gov (United States)

    Humphreys, Peter C; Metcalf, Benjamin J; Spring, Justin B; Moore, Merritt; Jin, Xian-Min; Barbieri, Marco; Kolthammer, W Steven; Walmsley, Ian A

    2013-10-11

    We present a scheme for linear optical quantum computing using time-bin-encoded qubits in a single spatial mode. We show methods for single-qubit operations and heralded controlled-phase (cphase) gates, providing a sufficient set of operations for universal quantum computing with the Knill-Laflamme-Milburn [Nature (London) 409, 46 (2001)] scheme. Our protocol is suited to currently available photonic devices and ideally allows arbitrary numbers of qubits to be encoded in the same spatial mode, demonstrating the potential for time-frequency modes to dramatically increase the quantum information capacity of fixed spatial resources. As a test of our scheme, we demonstrate the first entirely single spatial mode implementation of a two-qubit quantum gate and show its operation with an average fidelity of 0.84±0.07.

  11. Computer simulation games in population and education.

    Science.gov (United States)

    Moreland, R S

    1988-01-01

    Computer-based simulation games are effective training tools that have several advantages. They enable players to learn in a nonthreatening manner and develop strategies to achieve goals in a dynamic environment. They also provide visual feedback on the effects of players' decisions, encourage players to explore and experiment with options before making final decisions, and develop players' skills in analysis, decision making, and cooperation. Two games have been developed by the Research Triangle Institute for public-sector planning agencies interested in or dealing with developing countries. The UN Population and Development Game teaches players about the interaction between population variables and the national economy and how population policies complement other national policies, such as education. The BRIDGES Education Planning Game focuses on the effects education has on national policies. In both games, the computer simulates the reactions of a fictional country's socioeconomic system to players' decisions. Players can change decisions after seeing their effects on a computer screen and thus can improve their performance in achieving goals.

  12. Single-step reinitialization and extending algorithms for level-set based multi-phase flow simulations

    Science.gov (United States)

    Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-12-01

    We propose efficient single-step formulations for reinitialization and extending algorithms, which are critical components of level-set based interface-tracking methods. The level-set field is reinitialized with a single-step (non iterative) "forward tracing" algorithm. A minimum set of cells is defined that describes the interface, and reinitialization employs only data from these cells. Fluid states are extrapolated or extended across the interface by a single-step "backward tracing" algorithm. Both algorithms, which are motivated by analogy to ray-tracing, avoid multiple block-boundary data exchanges that are inevitable for iterative reinitialization and extending approaches within a parallel-computing environment. The single-step algorithms are combined with a multi-resolution conservative sharp-interface method and validated by a wide range of benchmark test cases. We demonstrate that the proposed reinitialization method achieves second-order accuracy in conserving the volume of each phase. The interface location is invariant to reapplication of the single-step reinitialization. Generally, we observe smaller absolute errors than for standard iterative reinitialization on the same grid. The computational efficiency is higher than for the standard and typical high-order iterative reinitialization methods. We observe a 2- to 6-times efficiency improvement over the standard method for serial execution. The proposed single-step extending algorithm, which is commonly employed for assigning data to ghost cells with ghost-fluid or conservative interface interaction methods, shows about 10-times efficiency improvement over the standard method while maintaining same accuracy. Despite their simplicity, the proposed algorithms offer an efficient and robust alternative to iterative reinitialization and extending methods for level-set based multi-phase simulations.

  13. A note on simulated annealing to computer laboratory scheduling ...

    African Journals Online (AJOL)

    The concepts, principles and implementation of Simulated Annealing as a modern heuristic technique are presented. The Simulated Annealing algorithm is used in solving a real-life problem of computer laboratory scheduling in order to maximize the use of scarce and insufficient resources. KEY WORDS: Simulated Annealing ...
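
    For readers new to the technique, a minimal simulated annealing loop is sketched below: worse solutions are accepted with probability exp(-delta/T) and the temperature is cooled geometrically. The toy objective stands in for the laboratory-scheduling cost; it is not the formulation used in the article.

        # Generic simulated annealing: accept uphill moves with probability
        # exp(-delta/T), cool T geometrically, and remember the best state.
        import math, random

        def anneal(cost, neighbour, x0, T0=1.0, alpha=0.995,
                   steps=10_000, seed=0):
            rng = random.Random(seed)
            x, T = x0, T0
            best, best_c = x0, cost(x0)
            for _ in range(steps):
                y = neighbour(x, rng)
                delta = cost(y) - cost(x)
                if delta <= 0 or rng.random() < math.exp(-delta / T):
                    x = y
                    if cost(x) < best_c:
                        best, best_c = x, cost(x)
                T *= alpha
            return best, best_c

        # Toy example: minimize (x - 3)^2 over the integers.
        print(anneal(lambda x: (x - 3) ** 2,
                     lambda x, r: x + r.choice([-1, 1]), x0=50))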

  14. Time reversibility, computer simulation, algorithms, chaos

    CERN Document Server

    Hoover, William Graham

    2012-01-01

    A small army of physicists, chemists, mathematicians, and engineers has joined forces to attack a classic problem, the "reversibility paradox", with modern tools. This book describes their work from the perspective of computer simulation, emphasizing the author's approach to the problem of understanding the compatibility, and even inevitability, of the irreversible second law of thermodynamics with an underlying time-reversible mechanics. Computer simulation has made it possible to probe reversibility from a variety of directions and "chaos theory" or "nonlinear dynamics" has supplied a useful vocabulary and a set of concepts, which allow a fuller explanation of irreversibility than that available to Boltzmann or to Green, Kubo and Onsager. Clear illustration of concepts is emphasized throughout, and reinforced with a glossary of technical terms from the specialized fields which have been combined here to focus on a common theme. The book begins with a discussion, contrasting the idealized reversibility of ba...

  15. Simulation of Robot Kinematics Using Interactive Computer Graphics.

    Science.gov (United States)

    Leu, M. C.; Mahajan, R.

    1984-01-01

    Development of a robot simulation program based on geometric transformation software available in most computer graphics systems is described, along with the program's features. The program can be extended to simulate robots coordinating with external devices (such as tools, fixtures, conveyors) using geometric transformations to describe the…
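
    The geometric-transformation approach lends itself to a compact sketch: each joint and link contributes a homogeneous transform, and chaining the transforms yields the end-effector pose. The planar two-link arm below is an illustrative stand-in for the robots simulated in the program.

        # Forward kinematics by chained homogeneous transforms (2D form):
        # rotation about each joint followed by translation along each link.
        import numpy as np

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        def trans_x(d):
            return np.array([[1.0, 0.0, d], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

        def end_effector(theta1, theta2, l1, l2):
            T = rot_z(theta1) @ trans_x(l1) @ rot_z(theta2) @ trans_x(l2)
            return T[:2, 2]   # x, y position of the end effector

        print(end_effector(np.pi / 4, np.pi / 4, 1.0, 1.0))  # ~(0.707, 1.707)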

  16. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
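
    The skeleton of such a discrete event simulation is small: a priority queue of timestamped events drives a shared resource, and waiting times fall out of the bookkeeping. The single-server queue and the exponential arrival and service rates below are generic illustrative assumptions, not the study's workload model.

        # Minimal discrete event simulation: exponential arrivals to a single
        # shared resource; report the mean time requests wait for service.
        import heapq, random

        def simulate(n_requests=1000, arrival_rate=1.0,
                     service_rate=1.2, seed=0):
            rng = random.Random(seed)
            t, free_at, waits = 0.0, 0.0, []
            events = []                        # (arrival time, request id)
            for i in range(n_requests):
                t += rng.expovariate(arrival_rate)
                heapq.heappush(events, (t, i))
            while events:
                arrive, _ = heapq.heappop(events)
                start = max(arrive, free_at)   # wait if the resource is busy
                free_at = start + rng.expovariate(service_rate)
                waits.append(start - arrive)
            return sum(waits) / len(waits)

        print("mean wait:", simulate())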

  17. Numerical precision control and GRACE

    International Nuclear Information System (INIS)

    Fujimoto, J.; Hamaguchi, N.; Ishikawa, T.; Kaneko, T.; Morita, H.; Perret-Gallix, D.; Tokura, A.; Shimizu, Y.

    2006-01-01

    The control of the numerical precision of large-scale computations like those generated by the GRACE system for automatic Feynman diagram calculations has become an intrinsic part of those packages. Recently, Hitachi Ltd. has developed a new FORTRAN library, HMLIB, for quadruple- and octuple-precision arithmetic in which the number of lost bits is made available. This library has been tested successfully on the 1-loop radiative correction to e⁺e⁻ → e⁺e⁻τ⁺τ⁻. It is shown that the approach followed by HMLIB provides an efficient way to track down the source of numerical significance losses and to deliver high-precision results while minimizing computing time
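
    The kind of significance loss that such libraries track is easy to demonstrate. The Python sketch below evaluates a catastrophically cancelling expression in IEEE double precision and again with 50 working digits via mpmath; HMLIB itself is a FORTRAN library, and this example only illustrates the underlying numerical phenomenon.

        # Catastrophic cancellation: (1 - cos x)/x^2 -> 1/2 as x -> 0, but in
        # double precision cos(1e-8) rounds to exactly 1, losing every digit.
        import math
        from mpmath import mp, mpf

        def f_double(x):
            return (1.0 - math.cos(x)) / x**2    # IEEE double precision

        mp.dps = 50                              # 50 working decimal digits
        def f_mp(x):
            x = mpf(x)
            return (1 - mp.cos(x)) / x**2

        print(f_double(1e-8))    # 0.0 -- all significant bits lost
        print(f_mp("1e-8"))      # ~0.5, correct to many digits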

  18. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    Science.gov (United States)

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95% CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis. Accurate and precise migration measurement with model-based RSA can be achieved and is not biased by using a single representative RE model. At least for implants similar in shape to the investigated short stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016.

  19. Towards a precise measurement of atomic parity violation in a single Ra+ ion

    International Nuclear Information System (INIS)

    Nuñez Portela, M.; Berg, J. E. van den; Bekker, H.; Böll, O.; Dijck, E. A.; Giri, G. S.; Hoekstra, S.; Jungmann, K.; Mohanty, A.; Onderwater, C. J. G.; Santra, B.; Schlesser, S.; Timmermans, R. G. E.; Versolato, O. O.; Wansbeek, L. W.; Willmann, L.; Wilschut, H. W.

    2013-01-01

    A single trapped Ra⁺ (Z = 88) ion provides a very promising route towards a most precise measurement of Atomic Parity Violation (APV), since APV effects grow faster than Z³. This experiment promises the best determination of the electroweak coupling constant at the lowest accessible energies. Such a measurement provides a sensitive test of the Standard Model in particle physics. At the present stage of the experiment, we focus on trapping and laser cooling stable Ba⁺ ions as a precursor for radioactive Ra⁺. Online laser spectroscopy of the isotopes 209−214Ra⁺ in a linear Paul trap has provided information on transition wavelengths, fine and hyperfine structures and excited state lifetimes as tests of atomic structure calculations. Additionally, a single trapped Ra⁺ ion could function as a very stable clock.

  20. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  1. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    Science.gov (United States)

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the

  2. The advanced computational testing and simulation toolkit (ACTS)

    International Nuclear Information System (INIS)

    Drummond, L.A.; Marques, O.

    2002-01-01

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  3. The advanced computational testing and simulation toolkit (ACTS)

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, L.A.; Marques, O.

    2002-05-21

    During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, reduction of duplicate efforts

  4. The Use of Computer Simulation Gaming in Teaching Broadcast Economics.

    Science.gov (United States)

    Mancuso, Louis C.

    The purpose of this study was to develop a broadcast economics computer simulation and to ascertain how a lecture plus computer-simulation game compared, as a teaching method, with more traditional lecture and case-study instructional methods. In each of three sections of a broadcast economics course, a different teaching methodology was employed: (1)…

  5. SPINET: A Parallel Computing Approach to Spine Simulations

    Directory of Open Access Journals (Sweden)

    Peter G. Kropf

    1996-01-01

    Research in scientific programming enables us to realize more and more complex applications, while application-driven demands on computing methods and power are continuously growing. Interdisciplinary approaches therefore become more widely used. The interdisciplinary SPINET project presented in this article applies modern scientific computing tools to biomechanical simulations: parallel computing and symbolic and modern functional programming. The target application is the human spine. Simulations of the spine help us to investigate and better understand the mechanisms of back pain and spinal injury. Two approaches have been used: the first uses the finite element method for high-performance simulations of static biomechanical models, and the second generates a simulation development tool for experimenting with different dynamic models. A finite element program for static analysis has been parallelized for the MUSIC machine. To solve the sparse system of linear equations, a conjugate gradient solver (iterative method) and a frontal solver (direct method) have been implemented. The preprocessor required for the frontal solver is written in the modern functional programming language SML, the solver itself in C, thus exploiting the characteristic advantages of both functional and imperative programming. The speedup analysis of both solvers shows very satisfactory results for this irregular problem. A mixed symbolic-numeric environment for rigid body system simulations is also presented; it automatically generates C code from a problem specification expressed in the Lagrange formalism using Maple.
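
    As a rough illustration of the conjugate gradient iteration mentioned above, a bare-bones NumPy version for a symmetric positive-definite system is sketched below; this is a generic textbook form, not the SPINET/MUSIC implementation, and for the large sparse systems of the abstract the matrix-vector product would be applied matrix-free.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

# Tiny SPD test system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]
```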

  6. Computer simulation of two-phase flow in nuclear reactors

    International Nuclear Information System (INIS)

    Wulff, W.

    1993-01-01

    Two-phase flow models dominate the requirements of economic resources for the development and use of computer codes which serve to analyze thermohydraulic transients in nuclear power plants. An attempt is made to reduce the effort of analyzing reactor transients by combining purpose-oriented modelling with advanced computing techniques. Six principles are presented on mathematical modeling and the selection of numerical methods, along with suggestions on programming and machine selection, all aimed at reducing the cost of analysis. Computer simulation is contrasted with traditional computer calculation. The advantages of run-time interactive access in a simulation environment are demonstrated. It is explained that the drift-flux model is better suited than the two-fluid model for the analysis of two-phase flow in nuclear reactors, because of the latter's closure problems. The advantage of analytical over numerical integration is demonstrated. Modeling and programming techniques are presented which minimize the number of arithmetical and logical operations needed and thereby increase the simulation speed, while decreasing the cost. (orig.)
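
    The drift-flux model mentioned above closes the two-phase problem with an algebraic slip relation rather than a second momentum equation. A minimal illustration, using the standard Zuber-Findlay form with invented parameter values (not those of the paper), is sketched below.

```python
# Drift-flux (Zuber-Findlay) closure: u_g = C0 * j + V_gj,
# from which the void fraction follows as alpha = j_g / u_g.
def void_fraction(j_g, j_l, C0=1.13, V_gj=0.25):
    """j_g, j_l: superficial gas/liquid velocities [m/s];
    C0: distribution parameter; V_gj: drift velocity [m/s] (illustrative)."""
    j = j_g + j_l                  # total volumetric flux
    u_g = C0 * j + V_gj            # mean gas velocity
    return j_g / u_g

print(void_fraction(j_g=0.5, j_l=1.0))  # ~0.26
```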

  7. Factors cost effectively improved using computer simulations of ...

    African Journals Online (AJOL)

    LPhidza

    effectively managed using computer simulations in semi-arid conditions pertinent to much of sub-Saharan Africa. ... small scale farmers to obtain optimal crop yields thus ensuring their food security and livelihood is ... those that simultaneously incorporate and simulate processes involved throughout the course of crop ...

  8. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task parallelism and data parallelism, and many problems need both. Multi-SIMD computers allow a hierarchical approach to these parallelisms. The T++ language, based on C++, is dedicated to exploiting Multi-SIMD computers using a programming paradigm that extends array programming to task management. Our language introduces arrays of independent tasks that execute separately (MIMD) on subsets of processors with identical behaviour (SIMD), in order to express the hierarchical inclusion of data parallelism within task parallelism. To manipulate tasks and data symmetrically, we propose meta-operations that behave identically on task arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit from the locally shared memory, the hardware virtualization, and the multiple communication networks. We simultaneously analyse a typical application of such an architecture. Finite element schemes for fluid mechanics need powerful parallel computers and substantial floating-point capability. Lattice gases are an alternative to such simulations. Boolean lattice gases are simple, stable, and modular, and need no floating-point computation, but they exhibit numerical noise. Boltzmann lattice gases offer high numerical precision but need floating-point arithmetic and are only locally stable. We propose a new scheme, called multi-bit, which keeps the advantages of each Boolean model to which it is applied, with high numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction, and spurious invariants are shown, and implementation techniques for parallel Multi-SIMD computers are detailed. (author) [fr

  9. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as an eco-toxicity test for assessing environmental responses to environmental impacts. To take into account effects on the interactions between species and the environment, one option is to select a keystone species on the basis of ecological knowledge and to use it in a single-species toxicity test. Another proposed option is to frame the eco-toxicity tests as an experimental micro-ecosystem study combined with a theoretical model ecosystem analysis. With these tests, stressors that are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of ecosystems should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  10. Program computes single-point failures in critical system designs

    Science.gov (United States)

    Brown, W. R.

    1967-01-01

    Computer program analyzes the designs of critical systems that will either prove the design is free of single-point failures or detect each member of the population of single-point failures inherent in a system design. This program should find application in the checkout of redundant circuits and digital systems.
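
    One way to picture what such a program does: treat the system as a graph from a source to a sink, remove one component at a time, and check whether function survives. The sketch below is a generic illustration of this exhaustive single-failure enumeration, not the program described in the record; the component names are hypothetical.

```python
from collections import deque

def reachable(edges, removed, start, goal):
    """Breadth-first search from start to goal, skipping the removed component."""
    adj = {}
    for u, v in edges:
        if removed not in (u, v):
            adj.setdefault(u, []).append(v)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical system: two redundant paths that share component 'C'.
edges = [("SRC", "A"), ("A", "C"), ("SRC", "B"), ("B", "C"), ("C", "SINK")]
components = {"A", "B", "C"}
spofs = [c for c in sorted(components)
         if not reachable(edges, c, "SRC", "SINK")]
print("single-point failures:", spofs)   # ['C']
```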

  11. A real-time computer simulation of nuclear simulator software using standard PC hardware and linux environments

    International Nuclear Information System (INIS)

    Cha, K. H.; Kweon, K. C.

    2001-01-01

    A feasibility study in which standard PC hardware and Real-Time Linux are applied to the real-time computer simulation of software for a nuclear simulator is presented in this paper. The feasibility prototype was established with the existing software in the Compact Nuclear Simulator (CNS). Through the real-time implementation in the feasibility prototype, we have identified that the approach enables computer-based predictive simulation, owing to both the remarkable improvement in real-time performance and the reduced effort needed for real-time implementation under standard PC hardware and Real-Time Linux environments

  12. SAFSIM theory manual: A computer program for the engineering simulation of flow systems

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.

    1993-12-01

    SAFSIM (System Analysis Flow SIMulator) is a FORTRAN computer program for simulating the integrated performance of complex flow systems. SAFSIM provides sufficient versatility to allow the engineering simulation of almost any system, from a backyard sprinkler system to a clustered nuclear reactor propulsion system. In addition to versatility, speed and robustness are primary SAFSIM development goals. SAFSIM contains three basic physics modules: (1) a fluid mechanics module with flow network capability; (2) a structure heat transfer module with multiple convection and radiation exchange surface capability; and (3) a point reactor dynamics module with reactivity feedback and decay heat capability. Any or all of the physics modules can be implemented, as the problem dictates. SAFSIM can be used for compressible and incompressible, single-phase, multicomponent flow systems. Both the fluid mechanics and structure heat transfer modules employ a one-dimensional finite element modeling approach. This document contains a description of the theory incorporated in SAFSIM, including the governing equations, the numerical methods, and the overall system solution strategies.
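
    To give a flavor of the point reactor dynamics module mentioned above, the one-delayed-group point kinetics equations can be marched with a simple explicit step, as in the sketch below. The constants and the forward-Euler scheme are illustrative, not SAFSIM's numerics.

```python
# One-delayed-group point kinetics:
#   dn/dt = ((rho - beta) / Lambda) * n + lam * C
#   dC/dt = (beta / Lambda) * n - lam * C
beta, Lambda, lam = 0.0065, 1e-4, 0.08   # illustrative kinetics parameters
rho = 0.001                              # constant reactivity insertion
dt, t_end = 1e-5, 0.1

n = 1.0                                  # relative power
C = beta / (Lambda * lam)                # equilibrium precursor concentration
for _ in range(int(t_end / dt)):
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    n, C = n + dt * dn, C + dt * dC

print(f"relative power after {t_end} s: {n:.3f}")   # prompt jump to ~1.2
```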

  13. Brain receptor single-photon emission computer tomography with 123I Datscan in Parkinson's disease

    International Nuclear Information System (INIS)

    Minchev, D.; Peshev, N.; Kostadinova, I.; Grigorova, O.; Trindev, P.; Shotekov, P.

    2005-01-01

    Clinical signs alone are not sufficient for the early diagnosis of Parkinson's disease. Positron emission tomography and receptor single-photon emission tomography can be used to image the functional integrity of nigrostriatal dopaminergic structures. 24 patients (17 men and 7 women) were investigated; 20 had Parkinson's disease and 4 had essential tremor. The radiopharmaceutical, 123I-Datscan (ioflupane labelled with 123I), is a cocaine analogue with selective affinity for the dopamine transporters located in the dopaminergic nigrostriatal terminals in the striatum. Single-photon emission computer tomography was performed with a SPECT gamma camera (ADAC, SH Epic detector). The scintigraphic study was made 3 to 6 hours after intravenous injection of 123I-Datscan at a dose of 185 MBq. 120 frames were registered, each lasting 22 seconds, with a gamma camera rotation of 360°. After generating transversal slices, we produced two composite images: the first images the striatum, the second the occipital region. Two ratios were calculated representing the uptake of the radiopharmaceutical in the left and right striatum. Qualitative and quantitative criteria were elaborated for evaluating the scintigraphic patterns. Decreased, nonhomogeneous and asymmetric uptake of the radiopharmaceutical, coupled with low quantitative parameters in the range from 1.44 to 2.87, represents the characteristic scintigraphic pattern for Parkinson's disease with a clear clinical picture. Homogeneous, high-intensity and symmetric uptake of the radiopharmaceutical in the striatum, with a clear boundary and quantitative parameters up to 4.40, represents the scintigraphic pattern seen in two patients with essential tremor. Receptor single-photon emission computer tomography with 123I-Datscan is an accurate nuclear-medicine method for the precise diagnosis of Parkinson's disease and for its differentiation from
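
    The quantitative ratios described are, in essence, striatal uptake referenced to a non-specific (occipital) region. The abstract does not give the exact formula, so the sketch below uses a common specific-binding-ratio form with made-up counts, purely as an illustrative assumption.

```python
def specific_binding_ratio(striatum_counts, occipital_counts):
    """(target - background) / background, computed per striatum."""
    return (striatum_counts - occipital_counts) / occipital_counts

# Hypothetical mean counts from the two composite images.
left, right, occipital = 310.0, 240.0, 100.0
print(specific_binding_ratio(left, occipital),   # 2.1
      specific_binding_ratio(right, occipital))  # 1.4
```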

  14. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has necessitated high-performance machines. The computing power of a single processor has been steadily increasing but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers, with the proper software, can multiply the performance of a single computer. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and occupy more space than rack-mount servers. Specialty computers designed for clusters meet high-availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster from desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster can be built from off-the-shelf components that multiplies the performance of a single desktop machine, while minimizing occupied space and remaining cost effective.

  15. Formal Analysis of Dynamics Within Philosophy of Mind by Computer Simulation

    NARCIS (Netherlands)

    Bosse, T.; Schut, M.C.; Treur, J.

    2009-01-01

    Computer simulations can be useful tools to support philosophers in validating their theories, especially when these theories concern phenomena showing nontrivial dynamics. Such theories are usually informal, whilst for computer simulation a formally described model is needed. In this paper, a

  16. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue in large-scale flood simulation for real-time response to disaster prevention and mitigation. Even today, most large-scale flood simulations are run on supercomputers because of the massive amounts of data and computation involved. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  17. A compositional reservoir simulator on distributed memory parallel computers

    International Nuclear Information System (INIS)

    Rame, M.; Delshad, M.

    1995-01-01

    This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general-purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) as well as on a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results on the distributed-memory computing performance of the parallel simulator are presented for field-scale applications such as tracer floods and polymer floods. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented

  18. Computer simulation of ultrasonic waves in solids

    International Nuclear Information System (INIS)

    Thibault, G.A.; Chaplin, K.

    1992-01-01

    A computer model that simulates the propagation of ultrasonic waves has been developed at AECL Research, Chalk River Laboratories. This program is called EWE, short for Elastic Wave Equations, the mathematics governing the propagation of ultrasonic waves. This report contains a brief summary of the use of ultrasonic waves in non-destructive testing techniques, a discussion of the EWE simulation code explaining the implementation of the equations and the types of output received from the model, and an example simulation showing the abilities of the model. (author). 2 refs., 2 figs
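
    For a minimal flavor of what a code like EWE integrates, the sketch below marches the 1D scalar wave equation u_tt = c² u_xx on a leapfrog stencil. The real code solves the full elastic wave equations in solids; this is only the simplest analogue, with arbitrary grid and pulse parameters.

```python
import numpy as np

# Leapfrog scheme for u_tt = c^2 u_xx with fixed ends.
nx, c, dx = 200, 1.0, 1.0
dt = 0.5 * dx / c                          # CFL-stable time step
r2 = (c * dt / dx) ** 2

x = np.arange(nx) * dx
u_prev = np.exp(-0.01 * (x - 50.0) ** 2)   # initial Gaussian pulse
u = u_prev.copy()                          # zero initial velocity

for _ in range(80):                        # advance to t = 40
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# The pulse splits into two half-amplitude pulses near x = 10 and x = 90.
print("a pulse peak is near x =", x[np.argmax(u)])
```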

  19. COMPUTER MODEL AND SIMULATION OF A GLOVE BOX PROCESS

    International Nuclear Information System (INIS)

    Foster, C.

    2001-01-01

    The development of facilities to deal with the disposition of nuclear materials at an acceptable level of Occupational Radiation Exposure (ORE) is a significant issue facing the nuclear community. One solution is to minimize the worker's exposure though the use of automated systems. However, the adoption of automated systems for these tasks is hampered by the challenging requirements that these systems must meet in order to be cost effective solutions in the hazardous nuclear materials processing environment. Retrofitting current glove box technologies with automation systems represents potential near-term technology that can be applied to reduce worker ORE associated with work in nuclear materials processing facilities. Successful deployment of automation systems for these applications requires the development of testing and deployment strategies to ensure the highest level of safety and effectiveness. Historically, safety tests are conducted with glove box mock-ups around the finished design. This late detection of problems leads to expensive redesigns and costly deployment delays. With wide spread availability of computers and cost effective simulation software it is possible to discover and fix problems early in the design stages. Computer simulators can easily create a complete model of the system allowing a safe medium for testing potential failures and design shortcomings. The majority of design specification is now done on computer and moving that information to a model is relatively straightforward. With a complete model and results from a Failure Mode Effect Analysis (FMEA), redesigns can be worked early. Additional issues such as user accessibility, component replacement, and alignment problems can be tackled early in the virtual environment provided by computer simulation. In this case, a commercial simulation package is used to simulate a lathe process operation at the Los Alamos National Laboratory (LANL). The Lathe process operation is indicative of

  20. Thermodynamic and transport properties of nitrogen fluid: Molecular theory and computer simulations

    Science.gov (United States)

    Eskandari Nasrabad, A.; Laghaei, R.

    2018-04-01

    Computer simulations and various theories are applied to compute the thermodynamic and transport properties of nitrogen fluid. To model the nitrogen interaction, an existing potential in the literature is modified to obtain a close agreement between the simulation results and experimental data for the orthobaric densities. We use the Generic van der Waals theory to calculate the mean free volume and apply the results within the modified Cohen-Turnbull relation to obtain the self-diffusion coefficient. Compared to experimental data, excellent results are obtained via computer simulations for the orthobaric densities, the vapor pressure, the equation of state, and the shear viscosity. We analyze the results of the theory and computer simulations for the various thermophysical properties.
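
    The modified Cohen-Turnbull step described above ties the self-diffusion coefficient to the mean free volume. A schematic version is sketched below; the functional form is the generic Cohen-Turnbull one with a thermal prefactor, and all constants are invented placeholders, since the authors' actual prefactor and overlap parameter are in the paper, not the abstract.

```python
import numpy as np

def self_diffusion(v_free, T, A=1.0e-7, gamma=0.6, v_star=3.0e-29):
    """Cohen-Turnbull-type relation: D = A * sqrt(T) * exp(-gamma * v* / v_f).
    v_free: mean free volume per molecule [m^3]; T: temperature [K].
    A, gamma, v_star are illustrative constants, not fitted values."""
    return A * np.sqrt(T) * np.exp(-gamma * v_star / v_free)

# Free volume would come from a mean-field estimate such as the
# Generic van der Waals theory; here it is simply assumed.
print(self_diffusion(v_free=1.2e-29, T=100.0))
```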

  1. Computer simulation for sodium-concrete reactions

    International Nuclear Information System (INIS)

    Zhang Bin; Zhu Jizhou

    2006-01-01

    In liquid metal cooled fast breeder reactors (LMFBRs), direct contact between sodium and concrete is unavoidable. Because of sodium's high chemical reactivity, sodium can react violently with concrete, releasing large amounts of hydrogen gas and heat, which would threaten the integrity of the containment. This paper develops a program to simulate sodium-concrete reactions comprehensively. It can give the reaction zone temperature, pool temperature, penetration depth, penetration rate, hydrogen flux, reaction heat, and so on. Concrete is considered to be composed of silica and water only in this paper. A variable for the quotient of sodium hydroxide is introduced in the continuity equation to simulate the chemical reactions more realistically. The product of the net gas flux and boundary depth is suitably transformed into that of penetration rate and boundary depth. The complex chemical kinetics equations are simplified under certain hypotheses. The techniques applied above simplify the computer simulation considerably; in other words, they make it feasible. The theoretical models applied in the program and the calculation procedure are explained in detail. Good agreement in overall transient behavior was obtained in the analysis of a series of sodium-concrete reaction experiments. The comparison between analytical and experimental results shows that the program presented in this paper is credible and reasonable for simulating sodium-concrete reactions. This program can be used for nuclear safety judgement. (authors)

  2. A review of computer-based simulators for ultrasound training.

    Science.gov (United States)

    Blum, Tobias; Rieger, Andreas; Navab, Nassir; Friess, Helmut; Martignoni, Marc

    2013-04-01

    Computer-based simulators for ultrasound training are a topic of recent interest. During the last 15 years, many different systems and methods have been proposed. This article provides an overview and classification of systems in this domain and a discussion of their advantages. Systems are classified and discussed according to the image simulation method, user interactions and medical applications. Computer simulation of ultrasound has one key advantage over traditional training. It enables novel training concepts, for example, through advanced visualization, case databases, and automatically generated feedback. Qualitative evaluations have mainly shown positive learning effects. However, few quantitative evaluations have been performed and long-term effects have to be examined.

  3. Computer Graphics Simulations of Sampling Distributions.

    Science.gov (United States)

    Gordon, Florence S.; Gordon, Sheldon P.

    1989-01-01

    Describes the use of computer graphics simulations to enhance student understanding of sampling distributions that arise in introductory statistics. Highlights include the distribution of sample proportions, the distribution of the difference of sample means, the distribution of the difference of sample proportions, and the distribution of sample…
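
    The kind of simulation described translates directly into a few lines of code. The sketch below (NumPy rather than the authors' graphics environment, with arbitrary parameters) builds the sampling distribution of a sample proportion and compares it with theory.

```python
import numpy as np

rng = np.random.default_rng(1)

p, n, reps = 0.3, 50, 10_000                   # true proportion, sample size, repetitions
p_hat = rng.binomial(n, p, size=reps) / n      # one sample proportion per repetition

print("mean of p-hat:", p_hat.mean())              # ~ p
print("std  of p-hat:", p_hat.std())               # ~ sqrt(p*(1-p)/n)
print("theory       :", np.sqrt(p * (1 - p) / n))  # 0.0648
```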

  4. Numerical simulations of single and multi-staged injection of H2 in a supersonic scramjet combustor

    Directory of Open Access Journals (Sweden)

    L. Abu-Farah

    2014-12-01

    Computational fluid dynamics (CFD) simulations of single-staged injection of H2 through a central wedge-shaped strut and multi-staged injection through wall injectors are carried out using the Ansys CFX-12 code. Unstructured tetrahedral grids for narrow-channel and quarter geometries of the combustor are generated using ICEM CFD. Steady three-dimensional (3D) Reynolds-averaged Navier-Stokes (RANS) simulations are carried out for the case of no H2 injection and compared with simulations of single-staged pilot and/or main H2 injection and multi-staged injection. The shear stress transport (SST) k-ω turbulence model is adopted. The flow field (complex shock-wave interactions) and the static pressure distribution along the combustor wall are predicted and compared with experimental schlieren images and measured wall static pressures for validation. Good agreement is found between the CFD predictions and the measured data. The narrow and quarter geometries of the combustor give similar results with very small differences. Multi-staged injection of H2 enhances turbulent H2/air mixing by forming vortices and additional shock waves (bow shocks).

  5. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  6. Computer simulation of nonequilibrium processes

    International Nuclear Information System (INIS)

    Wallace, D.C.

    1985-07-01

    The underlying concepts of nonequilibrium statistical mechanics and of irreversible thermodynamics will be described. The question at hand is then: how are these concepts to be realized in computer simulations of many-particle systems? The answer will be given for dissipative deformation processes in solids, on three hierarchical levels: heterogeneous plastic flow, dislocation dynamics, and molecular dynamics. Application to the shock process will be discussed

  7. Building an adiabatic quantum computer simulation in the classroom

    Science.gov (United States)

    Rodríguez-Laguna, Javier; Santalla, Silvia N.

    2018-05-01

    We present a didactic introduction to adiabatic quantum computation (AQC) via the explicit construction of a classical simulator of quantum computers. This constitutes a suitable route to introduce several important concepts for advanced undergraduates in physics: quantum many-body systems, quantum phase transitions, disordered systems, spin-glasses, and computational complexity theory.
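
    In the spirit of the classroom simulator described above, one can diagonalize the interpolating Hamiltonian H(s) = (1 − s)H_B + sH_P and watch the spectral gap shrink and reopen. The sketch below does this for a 2-qubit toy problem of our own devising, not the authors' code.

```python
import numpy as np

# Adiabatic interpolation H(s) = (1 - s) * H_B + s * H_P for 2 qubits.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
id2 = np.eye(2)

# Driver: transverse field on both qubits (ground state = uniform superposition).
H_B = -(np.kron(sx, id2) + np.kron(id2, sx))
# Problem: diagonal cost function whose minimum encodes the answer |11>.
H_P = np.diag([3.0, 2.0, 2.0, 0.0])

for s in np.linspace(0.0, 1.0, 6):
    H = (1 - s) * H_B + s * H_P
    evals = np.linalg.eigvalsh(H)          # sorted ascending
    print(f"s = {s:.1f}  gap = {evals[1] - evals[0]:.3f}")
```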

  8. Finite state projection based bounds to compare chemical master equation models using single-cell data

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Zachary [School of Biomedical Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States); Neuert, Gregor [Department of Molecular Physiology and Biophysics, Vanderbilt University School of Medicine, Nashville, Tennessee 37232 (United States); Department of Pharmacology, School of Medicine, Vanderbilt University, Nashville, Tennessee 37232 (United States); Department of Biomedical Engineering, Vanderbilt University School of Engineering, Nashville, Tennessee 37232 (United States); Munsky, Brian [School of Biomedical Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States); Department of Chemical and Biological Engineering, Colorado State University, Fort Collins, Colorado 80523 (United States)

    2016-08-21

    Emerging techniques now allow for precise quantification of distributions of biological molecules in single cells. These rapidly advancing experimental methods have created a need for more rigorous and efficient modeling tools. Here, we derive new bounds on the likelihood that observations of single-cell, single-molecule responses come from a discrete stochastic model, posed in the form of the chemical master equation. These strict upper and lower bounds are based on a finite state projection approach, and they converge monotonically to the exact likelihood value. These bounds allow one to discriminate rigorously between models and with a minimum level of computational effort. In practice, these bounds can be incorporated into stochastic model identification and parameter inference routines, which improve the accuracy and efficiency of endeavors to analyze and predict single-cell behavior. We demonstrate the applicability of our approach using simulated data for three example models as well as for experimental measurements of a time-varying stochastic transcriptional response in yeast.
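
    The finite state projection idea behind these bounds can be shown on a toy birth-death process: restrict the chemical master equation to a finite set of states, evolve the truncated system, and read the leaked probability as the truncation error. The sketch below (SciPy, generic rate constants) illustrates the principle only; it is not the likelihood bounds derived in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Birth-death CME on states 0..N (finite state projection of an infinite chain).
k_birth, k_death, N, t = 5.0, 1.0, 15, 2.0

A = np.zeros((N + 1, N + 1)))if False else np.zeros((N + 1, N + 1))
for n in range(N + 1):
    A[n, n] -= k_birth                  # birth propensity always leaves state n...
    if n < N:
        A[n + 1, n] += k_birth          # ...and lands in n+1 if still inside the projection
    if n > 0:
        A[n - 1, n] += k_death * n      # death n -> n-1
        A[n, n] -= k_death * n

p0 = np.zeros(N + 1)
p0[0] = 1.0                             # start in state 0
p = expm(A * t) @ p0                    # truncated master equation solution

print("P(state = n):", np.round(p[:8], 3))
print("FSP error bound (probability lost to truncation):", 1.0 - p.sum())
```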

  9. Quantum computer gate simulations | Dada | Journal of the Nigerian ...

    African Journals Online (AJOL)

    A new interactive simulator for Quantum Computation has been developed for simulation of the universal set of quantum gates and for construction of new gates of up to 3 qubits. The simulator also automatically generates an equivalent quantum circuit for any arbitrary unitary transformation on a qubit. Available quantum ...
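
    The core of any such gate-level simulator is small: represent the state as a complex amplitude vector and build multi-qubit unitaries with Kronecker products. A minimal 2-qubit sketch (not the simulator from the article) follows.

```python
import numpy as np

# Building blocks from a universal gate set.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)            # control = first qubit

# |00> -> (H on qubit 0) -> CNOT gives the Bell state (|00> + |11>) / sqrt(2).
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I) @ state
state = CNOT @ state
print(np.round(state, 3))   # [0.707 0.    0.    0.707]
```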

  10. Communication: A method to compute the transport coefficient of pure fluids diffusing through planar interfaces from equilibrium molecular dynamics simulations.

    Science.gov (United States)

    Vermorel, Romain; Oulebsir, Fouad; Galliero, Guillaume

    2017-09-14

    The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable, because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contributions of the different physical mechanisms that impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit the interpretation of results obtained on single-layer membranes by means of complex non-equilibrium methods.
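
    For contrast with the interface coefficient discussed above, the standard equilibrium route to a bulk diffusion coefficient is the Einstein relation, estimated from the slope of the mean-squared displacement. The sketch below is generic, using a synthetic random walk in place of a real MD trajectory; it is not the authors' interface method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3D trajectory: a random walk standing in for an MD trajectory.
dt, n_steps, D_true = 1e-3, 20_000, 1.0
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 3))
traj = np.cumsum(steps, axis=0)

# Mean-squared displacement over a range of lag times.
lags = np.arange(1, 200)
msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                for lag in lags])

# Einstein relation in 3D: MSD(t) = 6 D t  ->  D = slope / 6.
slope = np.polyfit(lags * dt, msd, 1)[0]
print("estimated D:", slope / 6)   # ~ 1.0
```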

  11. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    Science.gov (United States)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  12. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  13. Advanced computational simulations of water waves interacting with wave energy converters

    Science.gov (United States)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
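
    The Froude-scaling check mentioned at the end has a simple arithmetic core: with geometric scale λ, the same fluid, and gravity fixed, velocities and times scale as √λ and forces as λ³. The sketch below applies these factors to hypothetical model-test numbers (the scale and values are invented, not taken from the paper).

```python
import math

# Froude scaling from a 1:25 model to full scale (same fluid, same g).
lam = 25.0                       # geometric scale factor

model = {"wave_period_s": 1.2, "velocity_m_s": 0.4, "force_N": 18.0}
full = {
    "wave_period_s": model["wave_period_s"] * math.sqrt(lam),  # x sqrt(lambda)
    "velocity_m_s": model["velocity_m_s"] * math.sqrt(lam),    # x sqrt(lambda)
    "force_N": model["force_N"] * lam ** 3,                    # x lambda^3
}
print(full)   # {'wave_period_s': 6.0, 'velocity_m_s': 2.0, 'force_N': 281250.0}
```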

  14. Precision of Carbon-14 analysis in a single laboratory

    International Nuclear Information System (INIS)

    Nashriyah Mat; Misman Sumin; Holland, P.T.

    2009-01-01

    In a single laboratory, one operator used a Biological Material Oxidizer (BMO) unit to prepare (combust) solid samples before analyzing (counting) the radioactivity using various liquid scintillation counters (LSCs). Different batches of a commercially available solid Certified Reference Material (CRM, Amersham, UK) standard were analyzed, depending on the time of analysis, over a period of seven years. The certified radioactivity of the C-14 standards, supplied as cellulose tabs, was 5000 DPM ± 3%. Each analysis was carried out using triplicate tabs. The counting medium was a commercially available cocktail containing the sorbent solution for the oxidizer gases, although different batches were used depending on the date of analysis. The mean DPM of the solutions was measured after correcting for quenching by the LSC internal-standard procedure and subtracting the mean DPM of the control. The precision of the standard and control counts, and of the recovery percentage for the CRM, was expressed as the coefficient of variation (CV) for the C-14 determinations over the seven-year period. Results from a recently acquired sample oxidizer unit were also included for comparison. (Author)
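
    The two figures of merit described, percent recovery against the certified value and the coefficient of variation of replicate counts, are simple to compute. The sketch below uses made-up triplicate DPM values and a made-up blank; only the 5000 DPM certified activity comes from the record.

```python
import numpy as np

certified_dpm = 5000.0                       # certified C-14 activity of the CRM tab
control_dpm = 35.0                           # hypothetical mean background (blank)
tabs = np.array([4870.0, 4915.0, 4852.0])    # hypothetical triplicate results

net = tabs - control_dpm
recovery_pct = 100.0 * net.mean() / certified_dpm
cv_pct = 100.0 * net.std(ddof=1) / net.mean()  # coefficient of variation

print(f"recovery = {recovery_pct:.1f}%  CV = {cv_pct:.2f}%")
```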

  15. Significant decimal digits for energy representation on short-word computers

    International Nuclear Information System (INIS)

    Sartori, E.

    1989-01-01

    The general belief that single-precision floating-point numbers always carry at least seven significant decimal digits on short-word computers such as IBM machines is erroneous. Seven significant digits are, however, required for representing the energy variable in nuclear cross-section data sets containing sharp p-wave resonances at 0 Kelvin. It is suggested either that the energy variable be stored in double precision or that cross-section resonances be reconstructed to room temperature or higher on short-word computers
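
    The limitation is easy to demonstrate: near 1.2 MeV, the spacing of representable single-precision values exceeds the energy step needed to resolve a sharp resonance, so the step vanishes. The sketch below uses NumPy float32 as a stand-in for a short-word machine's single precision; the specific energies are illustrative.

```python
import numpy as np

e = np.float32(1234567.0)       # energy grid point near a sharp resonance [eV]
de = np.float32(0.05)           # step needed to resolve the resonance [eV]

# In float32 the spacing of representable values near 1.23e6 is 0.125 eV,
# so adding 0.05 eV changes nothing.
print(e + de == e)              # True: the step is lost in single precision

# Double precision keeps the two energies distinct.
print(np.float64(1234567.0) + 0.05 == 1234567.0)   # False
```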

  16. Cluster computing for lattice QCD simulations

    International Nuclear Information System (INIS)

    Coddington, P.D.; Williams, A.G.

    2000-01-01

    Simulations of lattice quantum chromodynamics (QCD) require enormous amounts of compute power. In the past, this has usually involved sharing time on large, expensive machines at supercomputing centres. Over the past few years, clusters of networked computers have become very popular as a low-cost alternative to traditional supercomputers. The dramatic improvements in performance (and, more importantly, in the ratio of price to performance) of commodity PCs, workstations, and networks have made clusters of off-the-shelf computers an attractive option for low-cost, high-performance computing. A major advantage of clusters is that, since they can have any number of processors, they can be purchased with any sized budget, allowing research groups to install a cluster for their own dedicated use and to scale up to more processors if additional funds become available. Clusters are now being built for high-energy physics simulations. Wuppertal has recently installed ALiCE, a cluster of 128 Alpha workstations running Linux, with a peak performance of 158 Gflops. The Jefferson Laboratory in the US has a 16-node Alpha cluster and plans to upgrade to a 256-processor machine. In Australia, several large clusters have recently been installed. Swinburne University of Technology has a cluster of 64 Compaq Alpha workstations used for astrophysics simulations. Early this year our DHPC group constructed a cluster of 116 dual Pentium PCs (i.e. 232 processors) connected by a Fast Ethernet network, which is used by chemists at Adelaide University and Flinders University to run computational chemistry codes. The Australian National University has recently installed a similar PC cluster with 192 processors. The Centre for the Subatomic Structure of Matter (CSSM) undertakes large-scale high-energy physics calculations, mainly lattice QCD simulations. The choice of the computer and network hardware for a cluster depends on the particular applications to be run on the machine. Our

  17. Computer Networks E-learning Based on Interactive Simulations and SCORM

    Directory of Open Access Journals (Sweden)

    Francisco Andrés Candelas

    2011-05-01

    This paper introduces a new set of compact interactive simulations developed for the constructive learning of computer networks concepts. These simulations, which compose a virtual laboratory implemented as portable Java applets, have been created by combining EJS (Easy Java Simulations) with the KivaNS API. Furthermore, in this work, the skills and motivation level acquired by the students are evaluated and measured when these simulations are combined with Moodle and SCORM (Sharable Content Object Reference Model) documents. This study has been developed to improve and stimulate autonomous constructive learning, in addition to providing timetable flexibility for a Computer Networks subject.

  18. Computer Simulation of Angle-measuring System of Photoelectric Theodolite

    International Nuclear Information System (INIS)

    Zeng, L; Zhao, Z W; Song, S L; Wang, L T

    2006-01-01

    In this paper, a virtual test platform based on malfunction phenomena is designed using the methods of computer simulation and numerical masking. It is used in simulation training for the angle-measuring system of a photoelectric theodolite. Practical application shows that this platform provides good conditions for in-depth simulation training by technicians and offers a useful approach for building simulation platforms for other large equipment

  19. Validation and computing and performance studies for the ATLAS simulation

    CERN Document Server

    Marshall, Z; The ATLAS collaboration

    2009-01-01

    We present the validation of the ATLAS simulation software project. Software development is controlled by nightly builds and several levels of automatic tests to ensure stability. Computing validation, including CPU time, memory, and disk space required per event, is benchmarked for all software releases. Several different physics processes and event types are checked to thoroughly test all aspects of the detector simulation. The robustness of the simulation software is demonstrated by the production of 500 million events on the Worldwide LHC Computing Grid in the last year.

  20. On efficiency of fire simulation realization: parallelization with greater number of computational meshes

    Science.gov (United States)

    Valasek, Lukas; Glasa, Jan

    2017-12-01

    Current fire simulation systems are capable of exploiting the advantages of available high-performance computing (HPC) platforms and of modelling fires efficiently in parallel. In this paper, the efficiency of a corridor fire simulation on an HPC cluster is discussed. The parallel MPI version of the Fire Dynamics Simulator is used to test the efficiency of selected strategies for allocating the cluster's computational resources when a greater number of computational cores is used. Simulation results indicate that, when the number of cores used is not a multiple of the total number of cores per cluster node, certain allocation strategies provide more efficient calculations.