WorldWideScience

Sample records for high performance simulation

  1. High performance electromagnetic simulation tools

    Gedney, Stephen D.; Whites, Keith W.

    1994-10-01

    Army Research Office Grant #DAAH04-93-G-0453 has supported the purchase of 24 additional compute nodes that were installed in the Intel iPSC/860 hypercube at the University of Kentucky (UK), rendering a 32-node multiprocessor. This facility has allowed the investigators to explore and extend the boundaries of electromagnetic simulation for important areas of defense concern, including microwave monolithic integrated circuit (MMIC) design/analysis and electromagnetic materials research and development. The iPSC/860 has provided an ideal platform for MMIC circuit simulations. A number of parallel methods based on direct time-domain solutions of Maxwell's equations have been developed on the iPSC/860, including a parallel finite-difference time-domain (FDTD) algorithm and a parallel planar generalized Yee-algorithm (PGY). The iPSC/860 has also provided an ideal platform on which to develop a 'virtual laboratory' to numerically analyze, scientifically study and develop new types of materials with beneficial electromagnetic properties. These materials simulations are capable of assembling hundreds of microscopic inclusions from which an electromagnetic full-wave solution is obtained in toto. This powerful simulation tool has enabled full-wave analysis of complex multicomponent MMIC devices and of the electromagnetic properties of many types of materials to be performed numerically rather than strictly in the laboratory.
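
    The time-domain approach this record builds on can be illustrated compactly. Below is a minimal one-dimensional FDTD (Yee) update loop in Python/NumPy; it is a toy serial sketch of the general method, not the parallel FDTD or PGY codes described above, and the grid sizes and source term are illustrative.

      import numpy as np

      nx, nt = 200, 500
      ez = np.zeros(nx)        # E field on integer grid points
      hy = np.zeros(nx - 1)    # H field on staggered half-points
      c = 0.5                  # Courant number (stability requires c <= 1 in 1D)

      for n in range(nt):
          hy += c * (ez[1:] - ez[:-1])                    # update H from curl of E
          ez[1:-1] += c * (hy[1:] - hy[:-1])              # update E from curl of H
          ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source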

  2. High-Performance Beam Simulator for the LANSCE Linac

    Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.

    2012-01-01

    A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes in the real-time linac parameters are tracked and the simulation results are updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.
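
    The core data-parallel idea (pushing every particle through the same map at once) can be sketched without any GPU code. The NumPy stand-in below tracks an ensemble through a drift/thin-quadrupole sequence; it is a hypothetical illustration only, since the actual simulator implements PARMILA beam dynamics in CUDA and receives its set points from EPICS.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 1e-3, 100_000)    # transverse positions
      xp = rng.normal(0.0, 1e-4, 100_000)   # transverse angles

      def drift(x, xp, L):
          return x + L * xp, xp             # field-free drift of length L

      def quad_kick(x, xp, k1L):
          return x, xp - k1L * x            # thin-lens quadrupole kick

      for cell in range(48):                # a FODO-like sequence of cells
          x, xp = drift(x, xp, 0.5)
          x, xp = quad_kick(x, xp, +0.8)
          x, xp = drift(x, xp, 0.5)
          x, xp = quad_kick(x, xp, -0.8)
      print("rms x after tracking:", x.std())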

  3. High performance real-time flight simulation at NASA Langley

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  4. MUMAX: A new high-performance micromagnetic simulation tool

    Vansteenkiste, A.; Van de Wiele, B.

    2011-01-01

    We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations, in which case speedups of over a factor of 100 can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software. It can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. - Highlights: → Novel, open-source micromagnetic simulator on GPU hardware. → Speedup of ≈100× compared to other widely used tools. → Extensively validated against standard problems. → Makes previously infeasible simulations accessible.
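
    The equation MUMAX integrates can be written down in a few lines for a single magnetization cell. The explicit Landau-Lifshitz step below (Python/NumPy, normalized units) is a toy sketch under a fixed applied field; a real micromagnetic solver instead assembles the effective field from the magnetostatic, exchange and anisotropy terms on the GPU.

      import numpy as np

      def ll_step(m, h_eff, alpha=0.1, dt=1e-3):
          mxh = np.cross(m, h_eff)
          dmdt = -mxh - alpha * np.cross(m, mxh)  # precession + damping terms
          m = m + dt * dmdt
          return m / np.linalg.norm(m)            # keep |m| = 1

      m = np.array([1.0, 0.0, 0.0])               # initial magnetization along x
      h = np.array([0.0, 0.0, 1.0])               # applied field along z
      for _ in range(100_000):
          m = ll_step(m, h)
      print(m)                                    # m has relaxed toward +z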

  5. High performance ultrasonic field simulation on complex geometries

    Chouh, H.; Rougeron, G.; Chatillon, S.; Iehl, J. C.; Farrugia, J. P.; Ostromoukhov, V.

    2016-02-01

    Ultrasonic field simulation is a key ingredient for the design of new testing methods as well as a crucial step for NDT inspection simulation. As presented in a previous paper [1], CEA-LIST has worked on the acceleration of these simulations focusing on simple geometries (planar interfaces, isotropic materials). In this context, significant accelerations were achieved on multicore processors and GPUs (Graphics Processing Units), bringing the execution time of realistic computations into the 0.1 s range. In this paper, we present recent work that aims at similar performance on a wider range of configurations. We adapted the physical model used by the CIVA platform to design and implement a new algorithm providing fast ultrasonic field simulation that yields nearly interactive results for complex cases. The improvements over the CIVA pencil-tracing method include adaptive strategies for pencil subdivision to achieve a good refinement of the sensor geometry while keeping a reasonable number of ray-tracing operations. Also, interpolation of the times of flight was used to avoid time-consuming computations in the impulse response reconstruction stage. To achieve the best performance, our algorithm runs on multi-core superscalar CPUs and uses high performance specialized libraries such as Intel Embree for ray-tracing, Intel MKL for signal processing and Intel TBB for parallelization. We validated the simulation results by comparing them to those produced by CIVA on identical test configurations, including mono-element and multiple-element transducers, homogeneous and meshed 3D CAD specimens, isotropic and anisotropic materials, and wave paths that can involve several interactions with interfaces. We show performance results on complete simulations that achieve computation times in the 1 s range.

  6. Simulation model of a twin-tail, high performance airplane

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

    The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner loop augmentation in the up and away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.
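
    The table look-up with linear interpolation that feeds the aerodynamic model is a simple, self-contained piece of machinery. The sketch below does a bilinear look-up over angle of attack and sideslip in Python/NumPy; the grid spans match the quoted database ranges, but the coefficient table itself is a made-up placeholder.

      import numpy as np

      alpha_grid = np.linspace(-10.0, 90.0, 21)   # angle of attack, deg
      beta_grid = np.linspace(-20.0, 20.0, 9)     # sideslip, deg
      # placeholder lift-coefficient table, shape (n_alpha, n_beta)
      cl_table = np.outer(np.sin(np.radians(alpha_grid)),
                          np.cos(np.radians(beta_grid)))

      def lookup(alpha, beta):
          i = np.clip(np.searchsorted(alpha_grid, alpha) - 1, 0, len(alpha_grid) - 2)
          j = np.clip(np.searchsorted(beta_grid, beta) - 1, 0, len(beta_grid) - 2)
          ta = (alpha - alpha_grid[i]) / (alpha_grid[i + 1] - alpha_grid[i])
          tb = (beta - beta_grid[j]) / (beta_grid[j + 1] - beta_grid[j])
          return ((1 - ta) * (1 - tb) * cl_table[i, j]
                  + ta * (1 - tb) * cl_table[i + 1, j]
                  + (1 - ta) * tb * cl_table[i, j + 1]
                  + ta * tb * cl_table[i + 1, j + 1])

      print(lookup(12.3, -4.5))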

  7. Crystal and molecular simulation of high-performance polymers.

    Colquhoun, H M; Williams, D J

    2000-03-01

    Single-crystal X-ray analyses of oligomeric models for high-performance aromatic polymers, interfaced to computer-based molecular modeling and diffraction simulation, have enabled the determination of a range of previously unknown polymer crystal structures from X-ray powder data. Materials which have been successfully analyzed using this approach include aromatic polyesters, polyetherketones, polythioetherketones, polyphenylenes, and polycarboranes. Pure macrocyclic homologues of noncrystalline polyethersulfones afford high-quality single crystals, even at very large ring sizes, and have provided the first examples of a "protein crystallographic" approach to the structures of conventionally amorphous synthetic polymers.

  8. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation, Phase I

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S) ... IllinoisRocstar sets up the infrastructure for...

  9. Simulations of KSTAR high performance steady state operation scenarios

    Na, Yong-Su; Kessel, C.E.; Park, J.M.; Yi, Sumin; Kim, J.Y.; Becoulet, A.; Sips, A.C.C.

    2009-01-01

    We report the results of predictive modelling of high performance steady state operation scenarios in KSTAR. Firstly, the capabilities of steady state operation are investigated with time-dependent simulations using a free-boundary plasma equilibrium evolution code coupled with transport calculations. Secondly, the reproducibility of high performance steady state operation scenarios developed in the DIII-D tokamak, of similar size to that of KSTAR, is investigated using the experimental data taken from DIII-D. Finally, the capability of ITER-relevant steady state operation is investigated in KSTAR. It is found that KSTAR is able to establish high performance steady state operation scenarios: β_N above 3, H_98(y,2) up to 2.0, f_BS up to 0.76 and f_NI equal to 1.0. In this work, a realistic density profile is newly introduced for predictive simulations by employing the scaling law of the density peaking factor. The influence of the current ramp-up scenario and the transport model is discussed with respect to the fusion performance and non-inductive current drive fraction in the transport simulations. As observed in the experiments, both the heating and the plasma current waveforms in the current ramp-up phase produce a strong effect on the q-profile, the fusion performance and also on the non-inductive current drive fraction in the current flattop phase. A criterion in terms of q_min is found to establish ITER-relevant steady state operation scenarios. This will provide a guideline for designing the current ramp-up phase in KSTAR. It is observed that the transport model also affects the predicted values of fusion performance as well as the non-inductive current drive fraction: the Weiland transport model predicts the highest fusion performance and non-inductive current drive fraction in KSTAR, whereas the GLF23 model exhibits the lowest. ITER-relevant advanced scenarios cannot be obtained with the GLF23 model under the conditions given in this work.

  10. High performance stream computing for particle beam transport simulations

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
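
    Why particle transport maps so well onto stream hardware is easy to see: every particle undergoes the same linear map, so a whole ensemble can be transported as one matrix product. The sketch below shows this in Python/NumPy for a 2x2 (x, x') phase space; the element strengths are arbitrary, and this illustrates the data-parallel pattern rather than the MAD comparison from the record.

      import numpy as np

      def drift(L):
          return np.array([[1.0, L], [0.0, 1.0]])

      def thin_quad(f):
          return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

      # compose a short transfer line into a single map (rightmost acts first)
      line = drift(1.0) @ thin_quad(2.5) @ drift(1.0)

      rng = np.random.default_rng(1)
      coords = rng.normal(0.0, 1e-3, size=(2, 1_000_000))  # particles as columns
      coords = line @ coords                               # one data-parallel pass
      print(coords.std(axis=1))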

  11. High performance thermal stress analysis on the earth simulator

    Noriyuki, Kushida; Hiroshi, Okuda; Genki, Yagawa

    2003-01-01

    In this study, a thermal stress finite element analysis code optimized for the Earth Simulator was developed. Each processor node of the Earth Simulator is an 8-way vector processor, and the processors communicate using the Message Passing Interface. There are thus two ways to parallelize the finite element method on the Earth Simulator: the first is to assign one processor to each sub-domain, and the second is to assign one node (=8 processors) to each sub-domain, exploiting shared memory parallelization within the node. Considering that the preconditioned conjugate gradient (PCG) method, one of the linear equation solvers suited to large-scale parallel finite element methods, shows better convergence behavior when the number of domains is smaller, we chose PCG together with hybrid parallelization, which combines shared and distributed memory parallelization. It is known to be hard to obtain good parallel or vector performance with the finite element method, since it is based on unstructured grids; in such situations, reordering is essential to improve the computational performance [2]. In this study, we used three reordering methods: Reverse Cuthill-McKee (RCM), cyclic multicolor (CM) and diagonal jagged descending storage (DJDS) [3]. RCM provides good convergence of the incomplete lower-upper (ILU) PCG but causes load imbalance. Conversely, CM provides good load balance but worsens the convergence of ILU PCG when long vector lengths are used. Therefore, we used a combined RCM/CM method. DJDS stores the sparse matrices such that longer vector lengths can be obtained. For efficient inter-node parallelization, partitioning methods such as recursive coordinate bisection (RCB) or MeTIS have been used. Computational performance on practical large-scale engineering problems will be shown at the meeting. (author)
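
    The effect of a bandwidth-reducing reordering on an iterative solve can be reproduced in a few lines with SciPy, which provides Reverse Cuthill-McKee directly. The sketch below reorders a Poisson-type matrix (a stand-in for an FE stiffness matrix) before a conjugate-gradient solve; it is serial and unpreconditioned, so it only illustrates the reordering step, not the ILU and vector-length trade-offs discussed above.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.csgraph import reverse_cuthill_mckee
      from scipy.sparse.linalg import cg

      # 2D Poisson matrix as a stand-in for an FE stiffness matrix
      n = 40
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], (n, n))
      A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsr()
      b = np.ones(A.shape[0])

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # bandwidth-reducing order
      A_p = A[perm][:, perm]
      x_p, info = cg(A_p, b[perm])
      x = np.empty_like(x_p)
      x[perm] = x_p                                         # undo the permutation
      print(info, np.linalg.norm(A @ x - b))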

  12. Mixed-Language High-Performance Computing for Plasma Simulations

    Quanming Lu

    2003-01-01

    Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
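
    The "time-consuming component" of a PIC code is the per-step deposit/solve/gather/push cycle that the paper pushes into Fortran. A compact all-NumPy stand-in for one 1D electrostatic version of that cycle is sketched below (normalized units, periodic domain); it illustrates the algorithm, not the paper's JNI plumbing.

      import numpy as np

      ng, npart, L = 64, 10_000, 2 * np.pi
      dx, dt = L / ng, 0.1
      rng = np.random.default_rng(2)
      x = rng.uniform(0, L, npart)
      v = rng.normal(0.0, 1.0, npart)

      for step in range(100):
          # cloud-in-cell charge deposition onto the periodic grid
          g = (x / dx).astype(int) % ng
          f = x / dx - np.floor(x / dx)
          rho = (np.bincount(g, 1 - f, ng) + np.bincount((g + 1) % ng, f, ng)) / npart
          # spectral field solve: k^2 phi_k = rho_k, then E = -dphi/dx
          k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
          rho_k = np.fft.fft(rho - rho.mean())
          e_k = np.zeros(ng, dtype=complex)
          nz = k != 0
          e_k[nz] = -1j * rho_k[nz] / k[nz]
          E = np.fft.ifft(e_k).real
          # gather the field at particle positions and push (leapfrog)
          Ep = (1 - f) * E[g] + f * E[(g + 1) % ng]
          v -= dt * Ep
          x = (x + dt * v) % L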

  13. High performance computer code for molecular dynamics simulations

    Levay, I.; Toekesi, K.

    2007-01-01

    Molecular dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) three-dimensional (3D) visualization of the particle motion. We mimic both the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for computationally intensive problems. There are several ways to use this extreme processing performance, and programming these devices has never been easier. The CUDA (Compute Unified Device Architecture) platform introduced by NVIDIA Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576 GFLOPS, about ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, and the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure, which means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and the shared and constant on-chip memories instead of global memory, which is roughly 100 times slower. It is possible to implement the total algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions.
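
    The "critical calculation code segment" in an MD code is the force evaluation and time-step loop. A velocity-Verlet step with a Lennard-Jones pair force, vectorized over all pairs, is sketched below in Python/NumPy as a toy stand-in for the C++/CUDA kernel described above (normalized LJ units, no cutoff or neighbor lists).

      import numpy as np

      n, dt = 64, 5e-3
      g = np.arange(4) * 1.6                          # 4x4x4 lattice, LJ units
      pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T.astype(float)
      vel = np.zeros((n, 3))

      def forces(pos):
          d = pos[:, None, :] - pos[None, :, :]       # all pair displacement vectors
          r2 = (d ** 2).sum(-1) + np.eye(n)           # pad diagonal to avoid 0/0
          inv6 = r2 ** -3
          mag = 24.0 * (2.0 * inv6 ** 2 - inv6) / r2  # (-dU/dr)/r for Lennard-Jones
          np.fill_diagonal(mag, 0.0)
          return (mag[:, :, None] * d).sum(axis=1)

      f = forces(pos)
      for step in range(2000):                        # velocity-Verlet integration
          vel += 0.5 * dt * f
          pos += dt * vel
          f = forces(pos)
          vel += 0.5 * dt * f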

  14. Performance simulation in high altitude platforms (HAPs) communications systems

    Ulloa-Vásquez, Fernando; Delgado-Penin, J. A.

    2002-07-01

    This paper considers the analysis by simulation of a digital narrowband communication system for a scenario consisting of a High-Altitude aeronautical Platform (HAP) and fixed/mobile terrestrial transceivers. The aeronautical channel is modelled considering geometrical (angle of elevation vs. horizontal distance of the terrestrial reflectors) and statistical arguments, and under these circumstances a serially concatenated coded digital transmission is analysed for several hypotheses related to radio-electric coverage areas. The results indicate good feasibility for the proposed communication system.

  15. A high performance scientific cloud computing environment for materials simulations

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  16. A high performance scientific cloud computing environment for materials simulations

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  17. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai; Yan, Mi; Allen, Rebecca; Salama, Amgad; Lu, Ligang; Jordan, Kirk E.; Sun, Shuyu; Keyes, David E.

    2015-01-01

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems

  18. An Advanced, Interactive, High-Performance Liquid Chromatography Simulator and Instructor Resources

    Boswell, Paul G.; Stoll, Dwight R.; Carr, Peter W.; Nagel, Megan L.; Vitha, Mark F.; Mabbott, Gary A.

    2013-01-01

    High-performance liquid chromatography (HPLC) simulation software has long been recognized as an effective educational tool, yet many of the existing HPLC simulators are either too expensive, outdated, or lack many important features necessary to make them widely useful for educational purposes. Here, a free, open-source HPLC simulator is…

  19. High performance simulation of lattice physics using enhanced transputer arrays

    Hey, A.J.G.; Jesshope, C.R.; Nicole, D.A.

    1986-01-01

    The authors describe an architecture under construction at Southampton using arrays of communicating transputers with enhanced floating-point capabilities. Performance in the Gigaflop range is expected. Algorithms for taking explicit advantage of this MIMD architecture are discussed using the Occam programming paradigm. (Auth.)

  20. High Performance Electrical Modeling and Simulation Verification Test Suite - Tier I; TOPICAL

    SCHELLS, REGINA L.; BOGDAN, CAROLYN W.; WIX, STEVEN D.

    2001-01-01

    This document describes the High Performance Electrical Modeling and Simulation (HPEMS) Global Verification Test Suite (VERTS). The VERTS is a regression test suite used for verification of the electrical circuit simulation codes currently being developed by the HPEMS code development team. This document contains descriptions of the Tier I test cases.

  1. Aging analysis of high performance FinFET flip-flop under Dynamic NBTI simulation configuration

    Zainudin, M. F.; Hussin, H.; Halim, A. K.; Karim, J.

    2018-03-01

    A mechanism known as negative-bias temperature instability (NBTI) degrades the main electrical parameters of a circuit, especially its performance. So far, circuit designs have focused only on high performance without considering reliability and robustness. In this paper, the main performance metrics of a high performance FinFET flip-flop, such as delay time and power, were studied in the presence of NBTI degradation. The aging analysis was verified using a 16 nm High Performance Predictive Technology Model (PTM) with different commands available in Synopsys HSPICE. The results show that longer dynamic NBTI simulation produces the largest increase in gate delay and reduction in average power between the fresh simulation and the aged stress time under nominal conditions. In addition, circuit performance under varied stress conditions, such as temperature and negative gate stress bias, was also studied.

  2. High correlation between performance on a virtual-reality simulator and real-life cataract surgery

    Thomsen, Ann Sofia Skou; Smith, Phillip; Subhi, Yousif

    2017-01-01

    PURPOSE: To investigate the correlation in performance of cataract surgery between a virtual-reality simulator and real-life surgery using two objective assessment tools with evidence of validity. METHODS: Cataract surgeons with varying levels of experience were included in the study. All ... antitremor training, forceps training, bimanual training, capsulorhexis and phaco divide and conquer. RESULTS: Eleven surgeons were enrolled. After a designated warm-up period, the proficiency-based test on the EyeSi simulator was strongly correlated to real-life performance measured by motion-tracking software of cataract surgical videos with a Pearson correlation coefficient of -0.70 (p = 0.017). CONCLUSION: Performance on the EyeSi simulator is significantly and highly correlated to real-life surgical performance. However, it is recommended that performance assessments are made using multiple data ...

  3. Performance of space charge simulations using High Performance Computing (HPC) cluster

    Bartosik, Hannes; CERN. Geneva. ATS Department

    2017-01-01

    In 2016 a collaboration agreement between CERN and Istituto Nazionale di Fisica Nucleare (INFN) through its Centro Nazionale Analisi Fotogrammi (CNAF, Bologna) was signed [1], which foresaw the purchase and installation of a cluster of 20 nodes with 32 cores each, connected with InfiniBand, at CNAF for the use of CERN members to develop parallelized codes as well as conduct massive simulation campaigns with the already available parallelized tools. As outlined in [1], after the installation and the set up of the first 12 nodes, the green light to proceed with the procurement and installation of the next 8 nodes can be given only after successfully passing an acceptance test based on two specific benchmark runs. This condition is necessary to consider the first batch of the cluster operational and complying with the desired performance specifications. In this brief note, we report the results of the above-mentioned acceptance test.

  4. High performance cellular level agent-based simulation with FLAME for the GPU.

    Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela

    2010-05-01

    Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.

  5. High-Fidelity Simulation in Occupational Therapy Curriculum: Impact on Level II Fieldwork Performance

    Rebecca Ozelie

    2016-10-01

    Simulation experiences provide experiential learning opportunities during artificially produced real-life medical situations in a safe environment. Evidence supports using simulation in health care education, yet limited quantitative evidence exists in occupational therapy. This study aimed to evaluate the differences in scores on the AOTA Fieldwork Performance Evaluation for the Occupational Therapy Student of Level II occupational therapy students who received high-fidelity simulation training and students who did not. A retrospective analysis of 180 students from a private university was used. Independent samples nonparametric t tests examined mean differences between Fieldwork Performance Evaluation scores of those who did and did not receive simulation experiences in the curriculum. Mean ranks were also analyzed for subsection scores and practice settings. Results of this study found no significant difference in overall Fieldwork Performance Evaluation scores between the two groups. The students who completed simulation and had fieldwork in inpatient rehabilitation had the greatest increase in mean rank scores and increases in several subsections. The outcome measure used in this study was found to have limited discriminatory capability and may have affected the results; however, this study finds that using simulation may be a beneficial supplement to didactic coursework in occupational therapy curriculums.

  6. High Performance Wideband CMOS CCI and its Application in Inductance Simulator Design

    ARSLAN, E.

    2012-08-01

    In this paper, a new, differential pair based, low-voltage, high performance and wideband CMOS first generation current conveyor (CCI) is proposed. The proposed CCI has high voltage swings on ports X and Y and very low equivalent impedance on port X due to its super source follower configuration. It also has high voltage swings (close to the supply voltages) on its input and output ports and wideband current and voltage transfer ratios. Furthermore, two novel grounded inductance simulator circuits are proposed as application examples. Using HSpice, it is shown that the simulation results of the proposed CCI and of the presented inductance simulators are in very good agreement with the expected ones.

  7. Correlations between the simulated military tasks performance and physical fitness tests at high altitude

    Eduardo Borba Neves

    2017-11-01

    The aim of this study was to investigate the correlations between simulated military task performance and physical fitness tests at high altitude. This research is part of a project to modernize the physical fitness test of the Colombian Army. Data collection was performed at the 13th Battalion of Instruction and Training, located 30 km south of Bogota D.C. at 3100 m above sea level, with temperatures ranging from 1ºC to 23ºC during the study period. The sample was composed of 60 volunteers from three different platoons. The volunteers started the data collection protocol after 2 weeks of acclimation at this altitude. The main results were the identification of a high positive correlation between the 3-assault-walls-in-succession drill and simulated military task performance (r = 0.764, p < 0.001), and a moderate negative correlation between pull-ups and simulated military task performance (r = -0.535, p < 0.001). The 20-consecutive-overtaking version of the 3-assault-walls-in-succession drill can be recommended as a good way to estimate performance in operational tasks at high altitude involving assault walls, networks of wires, military climbing nets and the Tarzan jump, among others.

  8. Performance of high-rate TRD prototypes for the CBM experiment in test beam and simulation

    Klein-Boesing, Melanie [Institut fuer Kernphysik, Muenster (Germany)

    2008-07-01

    The goal of the future Compressed Baryonic Matter (CBM) experiment is to explore the QCD phase diagram in the region of high baryon densities not covered by other experiments. Among other detectors, it will employ a Transition Radiation Detector (TRD) for tracking of charged particles and electron identification. To meet the demands for tracking and for electron identification at large particle densities and very high interaction rates, high efficiency TRD prototypes have been developed. These prototypes with double-sided pad plane electrodes based on Multiwire Proportional Chambers (MWPC) have been tested at GSI and implemented in the simulation framework of CBM. Results of the performance in a test beam and in simulations are shown. In addition, we present a study of the performance of CBM for electron identification and dilepton reconstruction with this new detector layout.

  9. OpenMM 4: A Reusable, Extensible, Hardware Independent Library for High Performance Molecular Simulation.

    Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S

    2013-01-08

    OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
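
    For orientation, here is what driving OpenMM through its application layer looks like in Python; the record describes the version 4 C++ library, so treat this as a sketch against the modern API, with the input file and force-field choices as placeholders.

      import openmm as mm
      from openmm import app, unit

      pdb = app.PDBFile("input.pdb")                    # placeholder structure file
      ff = app.ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
      system = ff.createSystem(pdb.topology, nonbondedMethod=app.PME,
                               constraints=app.HBonds)
      integrator = mm.LangevinMiddleIntegrator(300 * unit.kelvin,
                                               1.0 / unit.picosecond,
                                               0.002 * unit.picoseconds)
      platform = mm.Platform.getPlatformByName("CUDA")  # or "OpenCL", "CPU"
      sim = app.Simulation(pdb.topology, system, integrator, platform)
      sim.context.setPositions(pdb.positions)
      sim.minimizeEnergy()
      sim.step(1000)                                    # advance 1000 MD steps

    Swapping the platform string is the only change needed to move between hardware back ends, which is exactly the layering the record describes.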

  10. High-Performance Modeling of Carbon Dioxide Sequestration by Coupling Reservoir Simulation and Molecular Dynamics

    Bao, Kai

    2015-10-26

    The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation by coupling reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas the MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example for coupling reservoir simulation and MD, although the framework can be extended naturally to the full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed with the massively parallel HPC systems. The performance and capacity of the proposed framework are well-demonstrated with several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of

  11. Simulating Effects of High Angle of Attack on Turbofan Engine Performance

    Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei

    2013-01-01

    A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.

  12. High performance MRI simulations of motion on multi-GPU systems.

    Xanthis, Christos G; Venetis, Ioannis E; Aletras, Anthony H

    2014-07-04

    MRI physics simulators have been developed in the past for optimizing imaging protocols and for training purposes. However, these simulators have only addressed motion within a limited scope. The purpose of this study was the incorporation of realistic motion, such as cardiac motion, respiratory motion and flow, within MRI simulations in a high performance multi-GPU environment. Three different motion models were introduced in the Magnetic Resonance Imaging SIMULator (MRISIMUL) of this study: cardiac motion, respiratory motion and flow. Simulation of a simple Gradient Echo pulse sequence and a CINE pulse sequence on the corresponding anatomical model was performed. Myocardial tagging was also investigated. In pulse sequence design, software crushers were introduced to accommodate the long execution times in order to avoid spurious echoes formation. The displacement of the anatomical model isochromats was calculated within the Graphics Processing Unit (GPU) kernel for every timestep of the pulse sequence. Experiments that would allow simulation of custom anatomical and motion models were also performed. Last, simulations of motion with MRISIMUL on single-node and multi-node multi-GPU systems were examined. Gradient Echo and CINE images of the three motion models were produced and motion-related artifacts were demonstrated. The temporal evolution of the contractility of the heart was presented through the application of myocardial tagging. Better simulation performance and image quality were presented through the introduction of software crushers without the need to further increase the computational load and GPU resources. Last, MRISIMUL demonstrated an almost linear scalable performance with the increasing number of available GPU cards, in both single-node and multi-node multi-GPU computer systems. MRISIMUL is the first MR physics simulator to have implemented motion with a 3D large computational load on a single computer multi-GPU configuration. The incorporation
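
    The per-isochromat bookkeeping that such a simulator parallelizes is simple to state: rotate the transverse magnetization by the local off-resonance phase and apply relaxation each time step, then sum the complex transverse components into a signal. The NumPy sketch below shows free precession for a bag of static isochromats (a motion model would additionally update positions, and hence local fields, inside the loop); all parameters are illustrative.

      import numpy as np

      n = 10_000
      rng = np.random.default_rng(4)
      off = rng.normal(0.0, 2 * np.pi * 20, n)  # off-resonance per isochromat, rad/s
      m = np.tile([1.0, 0.0, 0.0], (n, 1))      # magnetization after a 90-degree pulse
      t1, t2, dt = 0.8, 0.08, 1e-4              # seconds

      for step in range(1000):
          phi = off * dt                        # precession angle this step
          mx = m[:, 0] * np.cos(phi) - m[:, 1] * np.sin(phi)
          my = m[:, 0] * np.sin(phi) + m[:, 1] * np.cos(phi)
          e2 = np.exp(-dt / t2)
          m[:, 0], m[:, 1] = mx * e2, my * e2   # T2 decay of transverse components
          m[:, 2] = 1.0 + (m[:, 2] - 1.0) * np.exp(-dt / t1)  # T1 recovery
      signal = (m[:, 0] + 1j * m[:, 1]).sum()   # acquired complex signal
      print(abs(signal))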

  13. Investigating the Mobility of Light Autonomous Tracked Vehicles using a High Performance Computing Simulation Capability

    Negrut, Dan; Mazhar, Hammad; Melanz, Daniel; Lamb, David; Jayakumar, Paramsothy; Letherwood, Michael; Jain, Abhinandan; Quadrelli, Marco

    2012-01-01

    This paper is concerned with the physics-based simulation of light tracked vehicles operating on rough deformable terrain. The focus is on small autonomous vehicles, which weigh less than 100 lb and move on deformable and rough terrain that is feature rich and no longer representable using a continuum approach. A scenario of interest is, for instance, the simulation of a reconnaissance mission for a high mobility lightweight robot where objects such as a boulder or a ditch that could otherwise be considered small for a truck or tank, become major obstacles that can impede the mobility of the light autonomous vehicle and negatively impact the success of its mission. Analyzing and gauging the mobility and performance of these light vehicles is accomplished through a modeling and simulation capability called Chrono::Engine. Chrono::Engine relies on parallel execution on Graphics Processing Unit (GPU) cards.

  14. High performance simulation for the Silva project using the tera computer

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant-scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues in optimizing the parallelization of the PRODIGE code on TERA and discuss advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it has been found fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI-IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments and present performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  16. High-resolution 3D simulations of NIF ignition targets performed on Sequoia with HYDRA

    Marinak, M. M.; Clark, D. S.; Jones, O. S.; Kerbel, G. D.; Sepke, S.; Patel, M. V.; Koning, J. M.; Schroeder, C. R.

    2015-11-01

    Developments in the multiphysics ICF code HYDRA enable it to perform large-scale simulations on the Sequoia machine at LLNL. With an aggregate computing power of 20 Petaflops, Sequoia offers an unprecedented capability to resolve the physical processes in NIF ignition targets for a more complete, consistent treatment of the sources of asymmetry. We describe modifications to HYDRA that enable it to scale to over one million processes on Sequoia. These include new options for replicating parts of the mesh over a subset of the processes, to avoid strong scaling limits. We consider results from a 3D full ignition capsule-only simulation performed using over one billion zones run on 262,000 processors which resolves surface perturbations through modes l = 200. We also report progress towards a high-resolution 3D integrated hohlraum simulation performed using 262,000 processors which resolves surface perturbations on the ignition capsule through modes l = 70. These aim for the most complete calculations yet of the interactions and overall impact of the various sources of asymmetry for NIF ignition targets. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  17. Optimized Parallel Discrete Event Simulation (PDES) for High Performance Computing (HPC) Clusters

    Abu-Ghazaleh, Nael

    2005-01-01

    The aim of this project was to study the communication subsystem performance of state of the art optimistic simulator Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES...

  18. Direct numerical simulation of reactor two-phase flows enabled by high-performance computing

    Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.

    2018-04-01

    Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow database which can inform practical model development is still sparse due to the extreme reactor operation conditions and measurement difficulties. Owing to the rapid growth of computing power, the direct numerical simulation (DNS) is enjoying a renewed interest in investigating the related flow problems. A combination between DNS and an interface tracking method can provide a unique opportunity to study two-phase flows based on first principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has been focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling, bubble coalescence, as well as the advanced post-processing toolkit for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of DNS databases in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within the pressurized water reactor (PWR) core geometries. The related simulations can be carried out only on world-leading HPC platforms. These simulations are allowing more complex turbulence model development and validation for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
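
    The spectral analysis step mentioned above reduces, in its simplest form, to an FFT of a velocity record and a slope fit in log-log space. The sketch below does this for a synthetic 1D signal standing in for DNS data; the window, wavenumber range and signal itself are all illustrative.

      import numpy as np

      n, dx = 4096, 1e-3
      rng = np.random.default_rng(6)
      u = np.cumsum(rng.normal(size=n))      # synthetic correlated "velocity" signal
      u -= u.mean()

      uk = np.fft.rfft(u * np.hanning(n))    # windowed transform
      k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
      E = np.abs(uk) ** 2 / n                # energy per wavenumber bin
      slope = np.polyfit(np.log(k[10:400]), np.log(E[10:400]), 1)[0]
      print("spectrum slope ~", slope)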

  19. The computer program LIAR for the simulation and modeling of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.O.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-07-01

    High performance linear accelerators are the central components of the proposed next generation of linear colliders. They must provide acceleration of up to 750 GeV per beam while maintaining small normalized emittances. Standard simulation programs, mainly developed for storage rings, did not meet the specific requirements for high performance linacs with high bunch charges and strong wakefields. The authors present the program LIAR (LInear Accelerator Research code), which includes single- and multi-bunch wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. LIAR has been applied to and checked against the existing Stanford Linear Collider (SLC), the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS) at SLAC. Its modular structure allows easy extension for different purposes. The program is available for UNIX workstations and Windows PCs.

  20. Simulation of the High Performance Time to Digital Converter for the ATLAS Muon Spectrometer trigger upgrade

    Meng, X.T.; Levin, D.S.; Chapman, J.W.; Zhou, B.

    2016-01-01

    The ATLAS Muon Spectrometer endcap thin-Resistive Plate Chamber trigger project complements the New Small Wheel endcap Phase-1 upgrade for higher luminosity LHC operation. These new trigger chambers, located in a high rate region of ATLAS, will improve overall trigger acceptance and reduce the fake muon trigger incidence. These chambers must generate a low level muon trigger to be delivered to a remote high level processor within a stringent latency requirement of 43 bunch crossings (1075 ns). To help meet this requirement the High Performance Time to Digital Converter (HPTDC), a multi-channel ASIC designed by the CERN Microelectronics group, has been proposed for the digitization of the fast front end detector signals. This paper investigates the HPTDC performance in the context of the overall muon trigger latency, employing detailed behavioral Verilog simulations in which the latency in triggerless mode is measured for a range of configurations and under realistic hit-rate conditions. The simulation results show that various HPTDC operational configurations, including leading edge and pair measurement modes, can provide high efficiency (>98%) to capture and digitize hits within a time interval satisfying the Phase-1 latency tolerance.

  1. COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena

    Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge

    2012-01-01

    Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful Finite Element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh. COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre and post processors makes COMSOL a numerical platform well known and extensively used in many branches of sciences and engineering. On the other hand, PHREEQC is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions with rate equations that are user-specified in a very flexible way by means of Basic statements directly written in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on Sequential Non Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (iPhreeqc) that is called by the interface to solve the geochemical system at every element of the finite element mesh of COMSOL. The numerical tool has been extensively verified by comparison with computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized so that CPU time can be highly optimized in multi-core processors or clusters. Then, fully 3D detailed reactive transport problems can be readily simulated by means of
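
    The SNIA coupling named above has a very simple control flow: each time step, advance transport for all species, then equilibrate the chemistry independently at every node. The Python sketch below shows that loop; `phreeqc_equilibrate` is a hypothetical placeholder for the call into the iPhreeqc library, and the 1D upwind transport is a stand-in for COMSOL's finite element step.

      import numpy as np

      def transport_step(c, v=1.0, dx=0.1, dt=0.01):
          # explicit upwind advection of every species along a 1D column
          c[:, 1:] -= v * dt / dx * (c[:, 1:] - c[:, :-1])
          return c

      def phreeqc_equilibrate(concs):
          # placeholder: a real implementation hands `concs` to iPhreeqc
          # and receives the equilibrated concentrations back
          return np.clip(concs, 0.0, None)

      nspecies, nnodes = 4, 100
      c = np.zeros((nspecies, nnodes))
      c[:, 0] = 1.0                                   # inflow boundary condition
      for step in range(500):
          c = transport_step(c)                       # step 1: transport
          for node in range(nnodes):                  # step 2: chemistry, node by node
              c[:, node] = phreeqc_equilibrate(c[:, node])

    The per-node chemistry loop is embarrassingly parallel, which is what makes the SNIA split straightforward to distribute over cores, as the record's parallelization note suggests.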

  2. High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications

    Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad

    2012-01-01

    NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making simulation ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction was carried out, after which anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other through contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU (central processing unit)-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time.

  3. The Fuel Accident Condition Simulator (FACS) furnace system for high temperature performance testing of VHTR fuel

    Demkowicz, Paul A., E-mail: paul.demkowicz@inl.gov [Idaho National Laboratory, 2525 Fremont Avenue, MS 3860, Idaho Falls, ID 83415-3860 (United States); Laug, David V.; Scates, Dawn M.; Reber, Edward L.; Roybal, Lyle G.; Walter, John B.; Harp, Jason M. [Idaho National Laboratory, 2525 Fremont Avenue, MS 3860, Idaho Falls, ID 83415-3860 (United States); Morris, Robert N. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States)

    2012-10-15

    Highlights: → A system has been developed for safety testing of irradiated coated particle fuel. → FACS system is designed to facilitate remote operation in a shielded hot cell. → System will measure release of fission gases and condensable fission products. → Fuel performance can be evaluated at temperatures as high as 2000 °C in flowing helium. - Abstract: The AGR-1 irradiation of TRISO-coated particle fuel specimens was recently completed and represents the most successful such irradiation in US history, reaching peak burnups of greater than 19% FIMA with zero failures out of 300,000 particles. An extensive post-irradiation examination (PIE) campaign will be conducted on the AGR-1 fuel in order to characterize the irradiated fuel properties, assess the in-pile fuel performance in terms of coating integrity and fission metals release, and determine the fission product retention behavior during high temperature safety testing. A new furnace system has been designed, built, and tested to perform high temperature accident tests. The Fuel Accident Condition Simulator furnace system is designed to heat fuel specimens at temperatures up to 2000 °C in helium while monitoring the release of volatile fission metals (e.g. Cs, Ag, Sr, and Eu), iodine, and fission gases (Kr, Xe). Fission gases released from the fuel to the sweep gas are monitored in real time using dual cryogenic traps fitted with high purity germanium detectors. Condensable fission products are collected on a plate attached to a water-cooled cold finger that can be exchanged periodically without interrupting the test. Analysis of fission products on the condensation plates involves dry gamma counting followed by chemical analysis of selected isotopes. This paper will describe design and operational details of the Fuel Accident Condition Simulator furnace system and the associated

  4. Progress on H5Part: A Portable High Performance Parallel Data Interface for Electromagnetics Simulations

    Adelmann, Andreas; Gsell, Achim; Oswald, Benedikt; Schietinger, Thomas; Bethel, Wes; Shalf, John; Siegerist, Cristina; Stockinger, Kurt

    2007-01-01

    Significant problems facing all experimental and computational sciences arise from growing data size and complexity. Common to all these problems is the need to perform efficient data I/O on diverse computer architectures. In our scientific application, the largest parallel particle simulations generate vast quantities of six-dimensional data. Such a simulation run produces an aggregate data size of up to several TB. Motivated by the need to address data I/O and access challenges, we have implemented H5Part, an open source data I/O API that simplifies the use of the Hierarchical Data Format v5 library (HDF5). HDF5 is an industry standard for high performance, cross-platform data storage and retrieval that runs on all contemporary architectures, from large parallel supercomputers to laptops. H5Part, which is oriented to the needs of the particle physics and cosmology communities, provides support for parallel storage and retrieval of particles and of structured and, in the future, unstructured meshes. In this paper, we describe recent work focusing on I/O support for particles and structured meshes and provide data showing performance on modern supercomputer architectures like the IBM POWER 5.
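
    For readers unfamiliar with the schema, the hedged Python sketch below (using h5py rather than the H5Part API itself) writes particle data in the simple one-group-per-time-step layout that H5Part defines on top of HDF5; the "Step#n" group naming follows the H5Part convention, but the snippet is illustrative and not part of the H5Part distribution.

      import h5py
      import numpy as np

      n = 1000                                     # particles per step
      with h5py.File("particles.h5", "w") as f:
          for step in range(3):                    # a few time steps
              g = f.create_group(f"Step#{step}")   # H5Part-style per-step group
              for name in ("x", "y", "z", "px", "py", "pz"):
                  g.create_dataset(name, data=np.random.rand(n))

      # Reading back one particle property at one step:
      with h5py.File("particles.h5", "r") as f:
          x0 = f["Step#0/x"][...]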

  5. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Andrea Lani

    2006-01-01

    Full Text Available Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.

  6. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet these increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved by THC-MP on parallel computing facilities.
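
    The record does not show the data-exchange layer. As a hedged sketch of the domain-decomposition pattern it describes, the mpi4py fragment below exchanges one-cell ghost (halo) layers between neighbouring subdomains of a 1-D decomposition; all names and sizes are illustrative, not THC-MP code.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                        # interior cells per subdomain
      u = np.zeros(n_local + 2)            # +2 ghost cells at the ends
      u[1:-1] = rank                       # dummy field data

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

      # Exchange ghost layers with both neighbours (paired Sendrecv avoids deadlock).
      comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
      comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)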

  7. Prediction of SFL Interruption Performance from the Results of Arc Simulation during High-Current Phase

    Lee, Jong-Chul; Lee, Won-Ho; Kim, Woun-Jea

    2015-09-01

    The design and development procedures of SF6 gas circuit breakers are still largely based on trial and error through testing, although development costs rise every year. Computation alone cannot yet replace testing satisfactorily because not all of the real physical processes are taken into account. However, knowledge of the arc behavior and prediction of the thermal flow inside interrupters by numerical simulation are more accessible than by experiment, owing to the difficulty of obtaining physical quantities experimentally and to the reduction of computational costs in recent years. In this paper, in order to gain further insight into the interruption process of an SF6 self-blast interrupter, which is based on a combination of thermal expansion and the arc rotation principle, gas flow simulations with CFD-arc modeling are performed over the whole switching process: the high-current period, the pre-current-zero period, and the current-zero period. From this work, the pressure rise and the rate of pressure rise inside the chamber before current zero, as well as the post-arc current after current zero, prove to be good criteria for predicting the short-line fault interruption performance of interrupters.

  8. Simulated Performances of a Very High Energy Tomograph for Non-Destructive Characterization of large objects

    Kistler, Marc; Estre, Nicolas; Merle, Elsa

    2018-01-01

    As part of its R&D activities on high-energy X-ray imaging for non-destructive characterization, the Nuclear Measurement Laboratory has started an upgrade of its imaging system currently implemented at the CEA-Cadarache center. The goals are to achieve a sub-millimeter spatial resolution and the ability to perform tomographies of very large objects (more than 100 cm of standard concrete or 40 cm of steel). This paper presents results on the detection part of the imaging system. The upgrade of the detection part requires a thorough study of the performance of two detectors: a series of CdTe semiconductor sensors and two arrays of segmented CdWO4 scintillators with different pixel sizes. This study consists of a Quantum Accounting Diagram (QAD) analysis coupled with Monte-Carlo simulations. The scintillator arrays are able to detect millimeter details through 140 cm of concrete, but are limited to 120 cm for smaller ones. The CdTe sensors have lower but more stable performance, with a 0.5 mm resolution for 90 cm of concrete. The choice of detector then depends on the preferred characteristic: spatial resolution or use on large volumes. Combining the features of the source with the studies on the detectors gives the expected performance of the whole system in terms of signal-to-noise ratio (SNR), spatial resolution, and acquisition time.
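
    To make the quoted penetration depths concrete: transmission through thick objects follows the Beer-Lambert law I/I0 = exp(-(mu/rho) * rho * x). The short sketch below estimates transmission through concrete using an assumed mass attenuation coefficient for multi-MeV photons; the coefficient and density are illustrative values, not taken from the record.

      import math

      mu_rho = 0.023   # cm^2/g, assumed mass attenuation of concrete at ~9 MeV
      rho = 2.3        # g/cm^3, ordinary concrete
      for x in (90, 120, 140):                # cm of concrete
          t = math.exp(-mu_rho * rho * x)     # Beer-Lambert transmission I/I0
          print(f"{x:3d} cm: I/I0 = {t:.1e}")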

  9. Cognitive load, emotion, and performance in high-fidelity simulation among beginning nursing students: a pilot study.

    Schlairet, Maura C; Schlairet, Timothy James; Sauls, Denise H; Bellflowers, Lois

    2015-03-01

    Establishing the impact of the high-fidelity simulation environment on student performance, as well as identifying factors that could predict learning, would refine simulation outcome expectations among educators. The purpose of this quasi-experimental pilot study was to explore the impact of simulation on emotion and cognitive load among beginning nursing students. Forty baccalaureate nursing students participated in teaching simulations, rated their emotional state and cognitive load, and completed evaluation simulations. Two principal components of emotion were identified, representing the pleasant activation and pleasant deactivation components of affect. The mean rating of cognitive load following simulation was high. Linear regression identified slight but statistically nonsignificant positive associations between principal components of emotion and cognitive load. Logistic regression identified a negative but statistically nonsignificant effect of cognitive load on assessment performance. Among lower-ability students, a more pronounced effect of cognitive load on assessment performance was observed; this also was statistically nonsignificant. Copyright 2015, SLACK Incorporated.

  10. GPU-based high performance Monte Carlo simulation in neutron transport

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computation, their application has been extended to fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
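
    The record gives no code. As a hedged illustration of the kind of embarrassingly parallel kernel that maps well to GPUs, the numpy sketch below samples exponential free-flight distances for a batch of neutrons in a one-dimensional, forward-streaming toy slab and tallies transmission; cross-section values and the geometry are made up, and this is an analogue of the method, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(42)
      sigma_t = 0.5        # 1/cm, assumed total macroscopic cross-section
      absorb_frac = 0.3    # assumed absorption probability per collision
      thickness = 10.0     # cm, slab thickness

      n = 1_000_000
      x = np.zeros(n)                    # particle positions
      alive = np.ones(n, dtype=bool)
      transmitted = 0

      while alive.any():
          # Sample free-flight distances (same operation on every particle,
          # which is why this kernel vectorizes/SIMT-parallelizes so well).
          x[alive] += rng.exponential(1.0 / sigma_t, alive.sum())
          out = alive & (x >= thickness)
          transmitted += out.sum()
          alive &= ~out
          # Absorption at collision sites for the remaining particles.
          absorbed = alive & (rng.random(n) < absorb_frac)
          alive &= ~absorbed

      print("transmission fraction:", transmitted / n)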

  12. Measurement and simulation of the performance of high energy physics data grids

    Crosby, Paul Andrew

    This thesis describes a study of resource brokering in a computational Grid for high energy physics. Such systems are being devised in order to manage the unprecedented workload of the next generation particle physics experiments such as those at the Large Hadron Collider. A simulation of the European Data Grid has been constructed, and calibrated using logging data from a real Grid testbed. This model is then used to explore the Grid's middleware configuration, and suggest improvements to its scheduling policy. The expansion of the simulation to include data analysis of the type conducted by particle physicists is then described. A variety of job and data management policies are explored, in order to determine how well they meet the needs of physicists, as well as how efficiently they make use of CPU and network resources. Appropriate performance indicators are introduced in order to measure how well jobs and resources are managed from different perspectives. The effects of inefficiencies in Grid middleware are explored, as are methods of compensating for them. It is demonstrated that a scheduling algorithm should alter its weighting on load balancing and data distribution, depending on whether data transfer or CPU requirements dominate, and also on the level of job loading. It is also shown that an economic model for data management and replication can improve the efficiency of network use and job processing.
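
    The thesis' scheduling result can be made concrete with a small sketch: a broker that ranks candidate sites by a weighted sum of estimated queue wait and data-transfer time, shifting the weight toward load balancing when CPU time dominates and toward data locality when transfers dominate. Everything here (field names, weights, the cost form) is hypothetical, not the EDG middleware.

      def rank_sites(sites, job, w_cpu):
          """Score candidate sites; lower is better. w_cpu in [0, 1] weights
          load balancing vs. data locality, and should rise when CPU time
          dominates and fall when data transfer dominates."""
          def cost(site):
              wait = site["queued_jobs"] / site["cpus"]   # crude queue-wait proxy
              transfer = (0.0 if job["dataset"] in site["replicas"]
                          else job["input_gb"] * 8.0 / site["bandwidth_gbps"])
              return w_cpu * wait + (1.0 - w_cpu) * transfer
          return sorted(sites, key=cost)

      sites = [
          {"name": "A", "cpus": 100, "queued_jobs": 250,
           "bandwidth_gbps": 1.0, "replicas": {"dst1"}},
          {"name": "B", "cpus": 40, "queued_jobs": 20,
           "bandwidth_gbps": 0.1, "replicas": set()},
      ]
      job = {"dataset": "dst1", "input_gb": 50}
      print([s["name"] for s in rank_sites(sites, job, w_cpu=0.7)])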

  13. Use of high performance networks and supercomputers for real-time flight simulation

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  14. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit; it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the …
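
    As a hedged illustration of why FLOPS alone can mislead, the sketch below times a compute-bound matrix multiply against a memory-bound STREAM-style triad; on multicore machines the two scale very differently, because one is limited by arithmetic throughput and the other by memory bandwidth. Sizes and the FLOP/byte bookkeeping are illustrative.

      import time
      import numpy as np

      n = 2048
      a, b = np.random.rand(n, n), np.random.rand(n, n)
      x, y = np.random.rand(50_000_000), np.random.rand(50_000_000)

      t0 = time.perf_counter(); c = a @ b; t1 = time.perf_counter()
      flops = 2 * n**3 / (t1 - t0)            # compute-bound: ~2*n^3 flops
      print(f"matmul: {flops / 1e9:.1f} GFLOP/s")

      t0 = time.perf_counter(); z = x + 3.0 * y; t1 = time.perf_counter()
      bw = 3 * x.nbytes / (t1 - t0)           # memory-bound: 2 reads + 1 write
      print(f"triad : {bw / 1e9:.1f} GB/s")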

  15. LIAR -- A computer program for the modeling and simulation of high performance linacs

    Assmann, R.; Adolphsen, C.; Bane, K.; Emma, P.; Raubenheimer, T.; Siemann, R.; Thompson, K.; Zimmermann, F.

    1997-04-01

    The computer program LIAR (LInear Accelerator Research Code) is a numerical modeling and simulation tool for high performance linacs. Among other applications, it addresses the needs of state-of-the-art linear colliders, where low-emittance, high-intensity beams must be accelerated to energies in the 0.05-1 TeV range. LIAR is designed to be used for a variety of different projects. LIAR allows the study of single- and multi-particle beam dynamics in linear accelerators. It calculates emittance dilutions due to wakefield deflections, linear and non-linear dispersion and chromatic effects in the presence of multiple accelerator imperfections. Both single-bunch and multi-bunch beams can be simulated. Several basic and advanced optimization schemes are implemented. Present limitations arise from the incomplete treatment of bending magnets and sextupoles. A major objective of the LIAR project is to provide an open programming platform for the accelerator physics community. Due to its design, LIAR allows straightforward access to its internal FORTRAN data structures. The program can easily be extended, and its interactive command language ensures maximum ease of use. Presently, versions of LIAR are compiled for UNIX and MS Windows operating systems. An interface for the graphical visualization of results is provided. Scientific graphs can be saved in the PS and EPS file formats. In addition, a Mathematica interface has been developed. LIAR now contains more than 40,000 lines of source code in more than 130 subroutines. This report describes the theoretical basis of the program, provides a reference for existing features and explains how to add further commands. The LIAR home page and the ONLINE version of this manual can be accessed under: http://www.slac.stanford.edu/grp/arb/rwa/liar.htm

  16. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an …

  18. High-performance modeling of CO2 sequestration by coupling reservoir simulation and molecular dynamics

    Bao, Kai

    2013-01-01

    The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, the Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions for reducing CO2 emissions and addressing the global-warming threat. To solve such problems effectively, fine grids and accurate prediction of the properties of fluid mixtures are essential. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase, multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are demonstrated with several experiments involving hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Due to the complexity of subsurface systems …

  19. High-Fidelity Contrast Reaction Simulation Training: Performance Comparison of Faculty, Fellows, and Residents.

    Pfeifer, Kyle; Staib, Lawrence; Arango, Jennifer; Kirsch, John; Arici, Mel; Kappus, Liana; Pahade, Jay

    2016-01-01

    Reactions to contrast material are uncommon in diagnostic radiology, and vary in clinical presentation from urticaria to life-threatening anaphylaxis. Prior studies have demonstrated a high error rate in contrast reaction management, and smaller simulation-based studies have reported variable data on effectiveness. We sought to assess the effectiveness of high-fidelity simulation in teaching contrast reaction management to residents, fellows, and attendings. A 20-question multiple-choice test assessing contrast reaction knowledge, with Likert-scale questions assessing subjective comfort levels in managing contrast reactions, was created. Three simulation scenarios, representing a moderate reaction, a severe reaction, and a contrast reaction mimic, were completed in a one-hour period in a simulation laboratory. All participants completed a pretest and a posttest at one month. A six-month delayed posttest was given but was optional for all participants. A total of 150 radiologists participated (residents = 52; fellows = 24; faculty = 74) in the pretest and posttest; 105 participants completed the delayed posttest (residents = 31; fellows = 17; faculty = 57). A statistically significant increase was found in the one-month posttest (P < .00001) and six-month posttest scores (P < .00001), and in Likert scores (P < .001) assessing comfort level in managing all contrast reactions, compared with the pretest. Test scores and comfort level for moderate and severe reactions significantly decreased at six months, compared with the one-month posttest (P < .05). High-fidelity simulation is an effective learning tool, allowing practice of "high-acuity" situation management in a nonthreatening environment; the simulation training resulted in significant improvement in test scores, as well as an increase in subjective comfort in management of reactions, across all levels of training. A six-month refresher course is suggested, to maintain knowledge and comfort level in …

  20. Assessing Technical Performance and Determining the Learning Curve in Cleft Palate Surgery Using a High-Fidelity Cleft Palate Simulator.

    Podolsky, Dale J; Fisher, David M; Wong Riff, Karen W; Szasz, Peter; Looi, Thomas; Drake, James M; Forrest, Christopher R

    2018-06-01

    This study assessed technical performance in cleft palate repair using a newly developed assessment tool and a high-fidelity cleft palate simulator in a longitudinal simulation training exercise. Three residents performed five, and one resident performed nine, consecutive endoscopically recorded cleft palate repairs using a cleft palate simulator. Two fellows in pediatric plastic surgery and two expert cleft surgeons also performed recorded simulated repairs. The Cleft Palate Objective Structured Assessment of Technical Skill (CLOSATS) and end-product scales were developed to assess performance. Two blinded cleft surgeons assessed the recordings and the final repairs using the CLOSATS, the end-product scale, and a previously developed global rating scale. The average procedure-specific (CLOSATS), global rating, and end-product scores increased logarithmically after each successive simulation session for the residents. Reliability of the CLOSATS (average item intraclass correlation coefficient (ICC), 0.85 ± 0.093) and the global ratings (average item ICC, 0.91 ± 0.02) among the raters was high. Reliability of the end-product assessments was lower (average item ICC, 0.66 ± 0.15). Standard-setting linear regression using an overall cutoff score of 7 of 10 yielded pass scores of 44 (maximum, 60) for the CLOSATS and 23 (maximum, 30) for the global score. Using logarithmic best-fit curves, 6.3 simulation sessions are required to reach the minimum standard. A high-fidelity cleft palate simulator has been developed that improves technical performance in cleft palate repair. The simulator and technical assessment scores can be used to determine performance before operating on patients.
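
    The study's "logarithmic best-fit" estimate can be reproduced in outline: fit score = a + b*ln(session) to per-session scores and invert it at the pass threshold. The scores below are made-up illustrations, not the study's data; only the cutoff of 44 comes from the record.

      import numpy as np
      from scipy.optimize import curve_fit

      def log_curve(session, a, b):
          return a + b * np.log(session)

      sessions = np.arange(1, 10)                  # consecutive simulator repairs
      scores = np.array([28, 33, 36, 38, 40, 41, 42, 43, 44])  # illustrative

      (a, b), _ = curve_fit(log_curve, sessions, scores)
      cutoff = 44                                  # CLOSATS pass score from the study
      n_required = np.exp((cutoff - a) / b)        # invert a + b*ln(n) = cutoff
      print(f"sessions to reach standard: {n_required:.1f}")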

  1. Simulation on following Performance of High-Speed Railway In Situ Testing System

    Fei-Long Zheng

    2013-01-01

    Full Text Available Subgrade bears both the weight of superstructures and the impacts of running trains. Its stability directly affects line smoothness, but in situ testing methods for it are inadequate. This paper presents a railway roadbed in situ testing device, the key component of which is an excitation hydraulic servo cylinder that can output static and dynamic pressure simultaneously to simulate the force of trains on the subgrade. The principle of the excitation system is briefly introduced, and the transfer function of the closed-loop force control system is derived and simulated; the results show that, without a control algorithm, the dynamic response is slow and the following performance is quite poor. Therefore, an improved adaptive model-following control (AMFC) algorithm based on the direct state method is adopted. A control block diagram is then built and simulated with inputs of different waveforms and frequencies. The simulation results show that the system is greatly improved: the output waveform follows the input signal much better, except for a little distortion when the signal varies severely. The following performance becomes even better as the load stiffness increases.
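
    The record does not give the derived transfer function. As a hedged sketch of the kind of analysis it describes, the fragment below steps a hypothetical second-order closed-loop force model whose natural frequency rises with load stiffness, mirroring the abstract's observed trend; the stiffness-to-bandwidth link and all numbers are assumptions for illustration only.

      import numpy as np
      from scipy import signal

      # Hypothetical closed-loop force model: second-order lag whose natural
      # frequency rises with load stiffness (illustrative assumption).
      for k_load, label in ((40.0, "soft subgrade"), (120.0, "stiff subgrade")):
          wn = np.sqrt(k_load)            # assumed stiffness -> bandwidth link
          zeta = 0.7
          sys = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])
          t, y = signal.step(sys)
          # First time the step response exceeds 98% of the setpoint.
          print(f"{label}: reaches 98% after ~{t[np.argmax(y > 0.98)]:.2f} s")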

  2. A high-performance model for shallow-water simulations in distributed and heterogeneous architectures

    Conde, Daniel; Canelas, Ricardo B.; Ferreira, Rui M. L.

    2017-04-01

    … unstructured nature of the mesh topology, with the corresponding solution, based on space-filling curves, analyzed and discussed. Intra-node parallelism is achieved through OpenMP for CPUs and CUDA for GPUs, depending on which kind of device the process is running on. Here the main difficulty is associated with the Object-Oriented approach, where the presence of complex data structures can degrade model performance considerably. STAV-2D now supports fully distributed and heterogeneous simulations in which multiple different devices can be used to accelerate computation time. The advantages, shortcomings and specific solutions of the employed unified Object-Oriented approach, where the source code for CPU and GPU shares the same compilation units (no device-specific branches, as seen in other available models), are discussed and quantified with a thorough scalability and performance analysis. The assembled parallel model is expected to achieve faster-than-real-time simulations at high resolutions (from meters to sub-meter) in large-scale problems (from cities to watersheds), effectively bridging the gap between detailed and timely simulation results. Acknowledgements: This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 and Doctoral Grant SFRH/BD/97933/2013 granted by the National Foundation for Science and Technology (FCT). References: Canelas, R.; Murillo, J. & Ferreira, R.M.L. (2013), Two-dimensional depth-averaged modelling of dam-break flows over mobile beds. Journal of Hydraulic Research, 51(4), 392-407. Conde, D. A. S.; Baptista, M. A. V.; Sousa Oliveira, C. & Ferreira, R. M. L. (2013), A shallow-flow model for the propagation of tsunamis over complex geometries and mobile beds, Nat. Hazards and Earth Syst. Sci., 13, 2533-2542. Conde, D. A. S.; Telhado, M. J.; Viana Baptista, M. A. & Ferreira, R. M. L. (2015) Severity and exposure associated with tsunami actions in …
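
    The record mentions space-filling curves for handling the unstructured mesh; a common concrete choice is the Morton (Z-order) curve, sketched below: interleave the bits of quantized cell-centre coordinates, sort cells by the resulting key, and cut the sorted list into contiguous chunks per process. The helper names and the tiny cell list are hypothetical, not STAV-2D code.

      def interleave_bits(ix, iy, bits=16):
          """Morton (Z-order) key: interleave the bits of two integer coords."""
          key = 0
          for i in range(bits):
              key |= ((ix >> i) & 1) << (2 * i)
              key |= ((iy >> i) & 1) << (2 * i + 1)
          return key

      def partition(cells, nprocs):
          """Sort cell centres along the Z-order curve, then split contiguously."""
          scale = (1 << 16) - 1
          keyed = sorted(cells, key=lambda c: interleave_bits(
              int(c[0] * scale), int(c[1] * scale)))  # coords assumed in [0, 1)
          chunk = -(-len(keyed) // nprocs)            # ceiling division
          return [keyed[i * chunk:(i + 1) * chunk] for i in range(nprocs)]

      cells = [(0.1, 0.2), (0.9, 0.8), (0.15, 0.22), (0.88, 0.79)]
      print(partition(cells, 2))   # nearby cells land on the same process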

  3. libRoadRunner: a high performance SBML simulation and analysis library.

    Somogyi, Endre T; Bouteiller, Jean-Marie; Glazier, James A; König, Matthias; Medley, J Kyle; Swat, Maciej H; Sauro, Herbert M

    2015-10-15

    This article presents libRoadRunner, an extensible, high-performance, cross-platform, open-source software library for the simulation and analysis of models expressed using the Systems Biology Markup Language (SBML). SBML is the most widely used standard for representing dynamic networks, especially biochemical networks. libRoadRunner is fast enough to support large-scale problems such as tissue models, studies that require large numbers of repeated runs, and interactive simulations. libRoadRunner is a self-contained library, able to run both as a component inside other tools via its C++ and C bindings, and interactively through its Python interface. Its Python Application Programming Interface (API) is similar to the APIs of MATLAB (www.mathworks.com) and SciPy (http://www.scipy.org/), making it fast and easy to learn. libRoadRunner uses a custom Just-In-Time (JIT) compiler built on the widely used LLVM JIT compiler framework. It compiles SBML-specified models directly into native machine code for a variety of processors, making it appropriate for solving extremely large models or repeated runs. libRoadRunner is flexible, supporting the bulk of the SBML specification (except for delay and non-linear algebraic equations) including several SBML extensions (composition and distributions). It offers multiple deterministic and stochastic integrators, as well as tools for steady-state analysis, stability analysis and structural analysis of the stoichiometric matrix. libRoadRunner binary distributions are available for Mac OS X, Linux and Windows. The library is licensed under Apache License Version 2.0. libRoadRunner is also available for ARM-based computers such as the Raspberry Pi. http://www.libroadrunner.org provides online documentation, full build instructions, binaries and a git source repository. hsauro@u.washington.edu or somogyie@indiana.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2015. …
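
    A minimal usage sketch of the library's Python interface described in the record; the model path is a placeholder, and the calls shown (loading an SBML file, running a time course, requesting a steady state) are the basic entry points rather than a full workflow.

      import roadrunner

      # Load an SBML model (placeholder path) and JIT-compile it to machine code.
      rr = roadrunner.RoadRunner("model.xml")

      # Deterministic time course: t = 0..50 with 200 output points.
      result = rr.simulate(0, 50, 200)
      print(result[:5])        # structured array: time plus species columns

      # Steady-state analysis is exposed on the same object.
      rr.steadyState()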

  4. H5Part A Portable High Performance Parallel Data Interface for Particle Simulations

    Adelmann, Andreas; Shalf, John M; Siegerist, Cristina

    2005-01-01

    The largest parallel particle simulations, in six-dimensional phase space, generate vast amounts of data. It is also desirable to share data and data analysis tools such as ParViT (Particle Visualization Toolkit) among other groups who are working on particle-based accelerator simulations. We define a very simple file schema built on top of HDF5 (Hierarchical Data Format version 5), as well as an API that simplifies the reading/writing of data in the HDF5 file format. HDF5 offers a self-describing, machine-independent binary file format that supports scalable parallel I/O performance for MPI codes on a variety of supercomputing systems and works equally well on laptop computers. The API is available for C, C++, and Fortran codes. The file format will enable disparate research groups with very different simulation implementations to share data transparently and share data analysis tools. For instance, the common file format will enable groups that depend on completely different simulation implementations to share c...

  5. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    2016-11-01

    The goal of this work is to incorporate evaporation models into future simulations of turbulent jet sprays and to develop a predictive theory for comparison with laboratory measurements of turbulent diesel sprays.

  7. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory-intensive STEM image simulations can be carried out within a reasonable time. - Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  9. LIAR: A COMPUTER PROGRAM FOR THE SIMULATION AND MODELING OF HIGH PERFORMANCE LINACS

    Adolphsen, Chris

    2003-01-01

    The computer program LIAR ("LInear Accelerator Research code") is a numerical simulation and tracking program for linear colliders. The LIAR project was started at SLAC in August 1995 in order to provide a computing and simulation tool that specifically addresses the needs of high energy linear colliders. LIAR is designed to be used for a variety of different linear accelerators. It has been applied to, and checked against, the existing Stanford Linear Collider (SLC) as well as the linacs of the proposed Next Linear Collider (NLC) and the proposed Linac Coherent Light Source (LCLS). The program includes wakefield effects, a 4D coupled beam description, specific optimization algorithms and other advanced features. We describe the most important concepts and highlights of the program. Having presented the LIAR program at the LINAC96 and PAC97 conferences, we now introduce it to the European particle accelerator community.

  10. Driving Simulator Development and Performance Study

    Juto, Erik

    2010-01-01

    The driving simulator is a vital tool for much of the research performed at the Swedish National Road and Transport Research Institute (VTI). Currently VTI possesses three driving simulators: two high-fidelity simulators developed and constructed by VTI, and a medium-fidelity simulator from the German company Dr.-Ing. Reiner Foerst GmbH. The two high-fidelity simulators run the same simulation software, developed at VTI. The medium-fidelity simulator runs proprietary simulation software. At VTI there is...

  11. Effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations

    Renden, Peter G.; Savelsbergh, Geert J. P.; Oudejans, Raoul R. D.

    2017-01-01

    We investigated the effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations. Police officers received this training as well as a regular police arrest and self-defence skills training (control training) in a crossover design. Officers’ …

  12. A lattice-particle approach for the simulation of fracture processes in fiber-reinforced high-performance concrete

    Montero-Chacón, F.; Schlangen, H.E.J.G.; Medina, F.

    2013-01-01

    The use of fiber-reinforced high-performance concrete (FRHPC) is becoming more widespread; it is therefore necessary to develop tools to simulate and better understand its behavior. In this work, a discrete model for the analysis of fracture mechanics in FRHPC is presented. The plain concrete matrix, …

  13. Development of three-dimensional neoclassical transport simulation code with high performance Fortran on a vector-parallel computer

    Satake, Shinsuke; Okamoto, Masao; Nakajima, Noriyoshi; Takamaru, Hisanori

    2005-11-01

    A neoclassical transport simulation code (FORTEC-3D) applicable to three-dimensional configurations has been developed using High Performance Fortran (HPF). The adoption of parallelization techniques and of a hybrid simulation model for the δf Monte Carlo transport simulation, including non-local transport effects in three-dimensional configurations, makes it possible to simulate the dynamics of global, non-local transport phenomena with a self-consistent radial electric field within a reasonable computation time. In this paper, development of the transport code using HPF is reported. Optimization techniques for achieving both high vectorization and parallelization efficiency, the adoption of a parallel random number generator, and benchmark results are shown. (author)

  14. A High Performance Chemical Simulation Preprocessor and Source Code Generator, Phase I

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are a critical component of aerospace research, Earth systems research, and energy research. These simulations enable a...

  15. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, and preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights through several new and enhanced parallelization algorithms. These work on every level: SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  16. Optical Characterization and Energy Simulation of Glazing for High-Performance Windows

    Jonsson, Andreas

    2010-01-01

    This thesis focuses on one important component of the energy system - the window. Windows are installed in buildings mainly to create visual contact with the surroundings and to let in daylight, and should also be heat- and sound-insulating. This thesis covers four important aspects of windows: antireflection coatings, switchable coatings, energy simulations, and optical measurements. Energy simulations have been used to compare different windows and also to estimate the performance of smart (switchable) windows, whose transmittance can be regulated. The results from this thesis show the potential of the emerging technology of smart windows, not only from a daylight and an energy perspective, but also for comfort and well-being. The importance of a well functioning control system for such windows is pointed out. To fulfill all these requirements, modern windows often have two or more panes. Each glass surface reflects light, and therefore less daylight is transmitted; it is thus of interest to find ways to increase the transmittance. In this thesis antireflection coatings, similar to those found on eye-glasses and LCD screens, have been investigated. For large-area applications such as windows, it is necessary to use techniques that can easily be adapted to large-scale manufacturing at low cost. One such technique is dip-coating in a sol-gel of porous silica. Antireflection coatings have been deposited on glass and plastic materials to study both visual and energy performance, and it has been shown that antireflection coatings increase the transmittance of windows without negatively affecting thermal insulation or energy efficiency. Optical measurements are important for quantifying product properties for comparisons and evaluations. It is important that new measurement routines are simple and applicable to standard commercial instruments. Different systematic error sources for optical measurements of patterned light-diffusing samples using …

  17. High Performance Computation of a Jet in Crossflow by Lattice Boltzmann Based Parallel Direct Numerical Simulation

    Jiang Lei

    2015-01-01

    Full Text Available Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction, multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M cards), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10⁸ nodes is adopted, and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set at 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures, namely the CRVP, shear-layer vortices, and horseshoe vortices, are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of Reynolds stress are also displayed. Coherent structures are revealed at very fine resolution based on the second invariant of the velocity gradients.
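
    The record does not reproduce the kernel. As a hedged, minimal illustration of why LBM maps so well to SIMT hardware, the numpy sketch below performs one BGK collision-and-streaming update of a D2Q9 lattice: the same arithmetic at every node, with only nearest-neighbour data movement. Grid size and relaxation time are illustrative, not the paper's settings.

      import numpy as np

      # D2Q9 lattice: 9 discrete velocities and their weights.
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      tau = 0.6                                     # BGK relaxation time

      nx, ny = 128, 64
      f = np.ones((9, nx, ny)) * w[:, None, None]   # start at rest equilibrium

      def step(f):
          rho = f.sum(axis=0)                        # macroscopic density
          u = np.einsum('qi,qxy->ixy', c, f) / rho   # macroscopic velocity
          # Equilibrium distribution (second-order expansion in u).
          cu = np.einsum('qi,ixy->qxy', c, u)
          usq = (u**2).sum(axis=0)
          feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
          f = f - (f - feq) / tau                    # collision: node-local
          # Streaming: shift each population along its lattice velocity.
          for q in range(9):
              f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
          return f

      f = step(f)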

  18. Performance Modeling and Optimization of a High Energy CollidingBeam Simulation Code

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems errors are higher, at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  20. High Performance Electrical Modeling and Simulation Software Normal Environment Verification and Validation Plan, Version 1.0; TOPICAL

    WIX, STEVEN D.; BOGDAN, CAROLYN W.; MARCHIONDO JR., JULIO P.; DEVENEY, MICHAEL F.; NUNEZ, ALBERT V.

    2002-01-01

    The requirements in modeling and simulation are driven by two fundamental changes in the nuclear weapons landscape: (1) The Comprehensive Test Ban Treaty and (2) The Stockpile Life Extension Program which extends weapon lifetimes well beyond their originally anticipated field lifetimes. The move from confidence based on nuclear testing to confidence based on predictive simulation forces a profound change in the performance asked of codes. The scope of this document is to improve the confidence in the computational results by demonstration and documentation of the predictive capability of electrical circuit codes and the underlying conceptual, mathematical and numerical models as applied to a specific stockpile driver. This document describes the High Performance Electrical Modeling and Simulation software normal environment Verification and Validation Plan

  1. Design and Simulation of a High Performance Emergency Data Delivery Protocol

    Swartz, Kevin; Wang, Di

    2007-01-01

    The purpose of this project was to design a high performance data delivery protocol, capable of delivering data as quickly as possible to a base station or target node. This protocol was designed particularly for wireless network topologies, but could also be applied to a wired system. An emergency is defined as any event with high priority that needs to be handled immediately. It is assumed that this emergency event is important enough that energy efficiency is not a factor in the protocol. The desired effect is delivery to the base station as fast as possible, for rapid event handling.

  2. Numerical simulation of aerodynamic performance of a couple multiple units high-speed train

    Niu, Ji-qiang; Zhou, Dan; Liu, Tang-hong; Liang, Xi-feng

    2017-05-01

    In order to determine the effect of the coupling region on train aerodynamic performance, and how it affects coupled multiple-unit trains when they run and pass each other in the open air, enter a tunnel, and pass each other in the tunnel, these scenarios were simulated in Fluent 14.0. The numerical algorithm employed in this study was verified against data from scaled and full-scale train tests, and the differences lie within an acceptable range. The results demonstrate that the distribution of aerodynamic forces on the train cars is altered by the coupling region; however, the coupling region has a marginal effect on the drag and lateral force on the whole train under crosswind, and the lateral force on the train cars is more sensitive to the coupling than the other two force coefficients. The coupling region also increases the fluctuation of the aerodynamic coefficients for each train car under crosswind. Affected by the coupling region, a positive pressure pulse is introduced into the alternating pressure produced by trains passing each other in the open air, and the amplitude of the alternating pressure is decreased by the coupling region. The amplitude of the alternating pressure on the train or on the tunnel is significantly decreased by the coupling region of the train. This phenomenon does not alter the distribution law of pressure on the train and tunnel; moreover, the effect of the coupling region on trains passing each other in the tunnel is stronger than that on a single train passing through the tunnel.

  4. Multi-scale high-performance fluid flow: Simulations through porous media

    Perović, Nevena; Frisch, Jérôme; Salama, Amgad; Sun, Shuyu; Rank, Ernst; Mundani, Ralf Peter

    2016-01-01

    Computational fluid dynamic (CFD) calculations on geometrically complex domains such as porous media require high geometric discretisation for accurately capturing the tested physical phenomena. Moreover, when considering a large area and analysing local effects, it is necessary to deploy a multi-scale approach that is both memory-intensive and time-consuming. Hence, this type of analysis must be conducted on a high-performance parallel computing infrastructure. In this paper, the coupling of two different scales based on the Navier–Stokes equations and Darcy's law is described followed by the generation of complex geometries, and their discretisation and numerical treatment. Subsequently, the necessary parallelisation techniques and a rather specific tool, which is capable of retrieving data from the supercomputing servers and visualising them during the computation runtime (i.e. in situ) are described. All advantages and possible drawbacks of this approach, together with the preliminary results and sensitivity analyses are discussed in detail.

  5. A Grid-Based Cyber Infrastructure for High Performance Chemical Dynamics Simulations

    Khadka Prashant

    2008-10-01

    Full Text Available Chemical dynamics simulation is an effective means to study atomic-level motions of molecules, collections of molecules, liquids, surfaces, interfaces of materials, and chemical reactions. To make chemical dynamics simulations globally accessible to a broad range of users, a cyber infrastructure was recently developed that provides an online portal to VENUS, a popular chemical dynamics simulation program package, allowing people to submit simulation jobs to be executed on the web server machine. In this paper, we report new developments of the cyber infrastructure that improve its quality of service: dispatching submitted simulation jobs from the web server machine onto a cluster of workstations for execution, and adding an animation tool optimized for animating simulation results. The separation of the server machine from the simulation-running machines improves service quality by increasing the capacity to serve more requests simultaneously with reduced web response time, and allows the execution of large-scale, time-consuming simulation jobs on the powerful workstation cluster. With the addition of the animation tool, the cyber infrastructure automatically converts, upon the user's selection, some simulation results into an animation file that can be viewed in standard web browsers without installing any special software on the user's computer. Since animation is essential for understanding the results of chemical dynamics simulations, this animation capability provides a better way to understand the details of the chemical dynamics. By combining computing resources at locations under different administrative controls, this cyber infrastructure constitutes a grid environment providing physically and administratively distributed functionalities through a single easy-to-use online portal.

  6. Computer simulation for prediction of performance and thermodynamic parameters of high energy materials

    Muthurajan, H.; Sivabalan, R.; Talawar, M.B.; Asthana, S.N.

    2004-01-01

    A new code, the Linear Output Thermodynamic User-friendly Software for Energetic Systems (LOTUSES), developed during this work predicts theoretical performance parameters such as density, detonation factor, velocity of detonation and detonation pressure, and thermodynamic properties such as heat of detonation, heat of explosion and volume of gaseous explosion products. The same code also assists in the prediction of the possible explosive decomposition products after explosion and the power index. The developed code has been validated by calculating the parameters of standard explosives such as TNT, PETN, RDX, and HMX. Theoretically predicted parameters are accurate to within about ±5%. To the best of our knowledge, no other code is reported in the literature that can predict such a wide range of characteristics of known or unknown explosives with a minimum of input parameters. The code can be used to obtain thermochemical and performance parameters of high energy materials (HEMs) with reasonable accuracy. The code has been developed in Visual Basic, which provides an enhanced Windows environment and thereby advantages over conventional codes written in Fortran. The theoretically predicted HEM performance can be printed directly as well as stored on disk in text (.txt), HTML (.htm), Microsoft Word (.doc) or Adobe Acrobat (.pdf) format. The output can also be copied to the clipboard and pasted into other software, as in the case of other codes.
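
    LOTUSES' internal correlations are not given in the abstract; the widely used Kamlet-Jacobs relations shown below illustrate the kind of detonation velocity and pressure estimate involved (the RDX inputs are typical literature values; this is not necessarily the method LOTUSES implements).

        # Hedged illustration: the Kamlet-Jacobs correlations, a standard way
        # to estimate detonation velocity and pressure from composition.
        from math import sqrt

        def kamlet_jacobs(n_gas, m_bar, q_cal, rho0):
            """n_gas: mol gaseous products per g explosive
               m_bar: mean molecular weight of those gases [g/mol]
               q_cal: heat of detonation [cal/g]
               rho0 : loading density [g/cm^3]"""
            phi = n_gas * sqrt(m_bar) * sqrt(q_cal)
            d_km_s = 1.01 * sqrt(phi) * (1 + 1.30 * rho0)   # detonation velocity
            p_kbar = 15.58 * rho0**2 * phi                  # detonation pressure
            return d_km_s, p_kbar

        # RDX (C3H6N6O6) with typical literature inputs:
        d, p = kamlet_jacobs(n_gas=0.0338, m_bar=27.2, q_cal=1500.0, rho0=1.80)
        print(f"D = {d:.2f} km/s, P = {p:.0f} kbar")   # ~8.8 km/s, ~340 kbar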

  7. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-10-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
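
    The HELICS API itself is not reproduced here; the toy loop below only illustrates the co-iteration idea named in the abstract: two federates exchange values within each time step until a fixed point is reached (the voltage and load models are invented placeholders).

        # Generic sketch of per-time-step co-iteration between two federates
        # (not the HELICS API): a transmission model supplies a voltage, a
        # distribution model responds with a load, iterated to convergence.
        def transmission(load_mw):
            return 1.05 - 0.01 * load_mw          # toy voltage [p.u.] vs. load

        def distribution(voltage_pu):
            return 4.0 / voltage_pu               # toy constant-power load [MW]

        for step in range(3):                     # outer co-simulation steps
            v, p = 1.0, 4.0                       # initial guesses
            for it in range(50):                  # co-iteration within the step
                v_new = transmission(p)
                p_new = distribution(v_new)
                if abs(v_new - v) < 1e-9 and abs(p_new - p) < 1e-9:
                    break
                v, p = v_new, p_new
            print(f"t={step}: {it} iterations, V={v:.4f} p.u., P={p:.3f} MW")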

  8. Inductively coupled plasma emission spectrometric detection of simulated high performance liquid chromatographic peaks

    Fraley, D.M.; Yates, D.; Manahan, S.E.

    1979-01-01

    Because of its multielement capability, element specificity, and low detection limits, inductively coupled plasma optical emission spectrometry (ICP) is a very promising technique for the detection of specific elemental species separated by high performance liquid chromatography (HPLC). This paper evaluates ICP as a detector for HPLC peaks containing specific elements. Detection limits for a number of elements have been evaluated in terms of the minimum detectable concentration of the element at the chromatographic peak maximum. The elements studied were Al, As, B, Ba, Ca, Cd, Co, Cr, Cu, Fe, K, Li, Mg, Mn, Mo, Na, Ni, P, Pb, Sb, Se, Sr, Ti, V, and Zn. In addition, ICP was compared with atomic absorption spectrometry for the detection of HPLC peaks composed of EDTA and NTA chelates of copper. Furthermore, ICP was compared to UV solution absorption for the detection of copper chelates. 6 figures, 4 tables

  9. The Effect of High and Low Antiepileptic Drug Dosage on Simulated Driving Performance in Person's with Seizures: A Pilot Study

    Alexander M. Crizzle

    2015-10-01

    Background: Prior studies examining driving performance have not examined the effects of antiepileptic drugs (AEDs) or their dosages in persons with epilepsy. AEDs are the primary form of treatment to control seizures, but they are known to affect cognition, attention, and vision, all of which may impair driving. The purpose of this study was to describe the effects of high and low AED dosages on simulated driving performance in persons with seizures. Method: Patients (N = 11; mean age 42.1 ± 6.3; 55% female; 100% Caucasian) were recruited from the Epilepsy Monitoring Unit and had their driving assessed on a simulator. Results: No differences emerged in total or specific types of driving errors between high and low AED dosages. However, high AED dosage was significantly associated with errors of lane maintenance (r = .67, p < .05) and gap acceptance (r = .66, p < .05). The findings suggest that higher AED dosages may adversely affect driving performance, irrespective of a diagnosis of epilepsy, conversion disorder, or other medical conditions. Conclusion: Future studies with larger samples are required to examine whether AED dosage or seizure focus alone can impair driving performance in persons with and without seizures.

  10. Development and testing of high performance pseudo random number generator for Monte Carlo simulation

    Chakraborty, Brahmananda

    2009-01-01

    Random numbers play an important role in any Monte Carlo simulation. The accuracy of the results depends on the quality of the sequence of random numbers employed in the simulation: randomness, uniformity of distribution, absence of correlation and a long period. In a typical Monte Carlo simulation of particle transport in a nuclear reactor core, the history of a particle from its birth in a fission event until its death by an absorption or leakage event is tracked. The geometry of the core and the surrounding materials are exactly modeled in the simulation. To track a neutron history one needs random numbers for determining the inter-collision distance, the nature of the collision, the direction of the scattered neutron, etc. Neutrons are tracked in batches; in one batch approximately 2000-5000 neutrons are tracked. The statistical accuracy of the results of the simulation depends on the total number of particles tracked (the number of particles in one batch multiplied by the number of batches). The number of histories to be generated is usually large for a typical radiation transport problem. To track a very large number of histories one needs to generate a long sequence of independent random numbers. In other words, the cycle length of the random number generator (RNG) should exceed the total number of random numbers required for simulating the given transport problem. The number of bits of the machine generally limits the cycle length: for a binary machine of p bits the maximum cycle length is 2^p. To achieve a longer cycle length on the same machine one has to use either register arithmetic or bit manipulation techniques.
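
    The generator developed in the paper is not specified in the abstract; as a generic illustration of obtaining a long cycle length through bit manipulation on 64-bit words, the sketch below uses Marsaglia's classic xorshift64, whose period is 2^64 - 1.

        # Illustrative only: Marsaglia's xorshift64 generator, whose
        # bit-manipulation update gives a period of 2**64 - 1; the generator
        # developed in the paper is not reproduced here.
        import math

        MASK64 = (1 << 64) - 1

        def xorshift64(state):
            state ^= (state << 13) & MASK64
            state ^= state >> 7
            state ^= (state << 17) & MASK64
            return state & MASK64

        def uniform_stream(seed, n):
            """Yield n floats in [0, 1) from successive generator states."""
            s = seed & MASK64 or 1      # the state must never be zero
            for _ in range(n):
                s = xorshift64(s)
                yield s / 2.0**64

        # e.g. sampling exponential inter-collision distances in a toy tracker:
        sigma_t = 0.5                   # macroscopic cross section [1/cm]
        for u in uniform_stream(seed=123456789, n=5):
            print(-math.log(1.0 - u) / sigma_t)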

  11. Simulation-Driven Development and Optimization of a High-Performance Six-Dimensional Wrist Force/Torque Sensor

    Qiaokang LIANG

    2010-05-01

    This paper describes the Simulation-Driven Development and Optimization (SDDO) of a high-performance six-dimensional force/torque sensor. Through SDDO, the developed sensor simultaneously achieves high sensitivity, linearity, stiffness and repeatability, which is difficult for traditional force/torque sensors. The integrated approach provided by the ANSYS software was used to streamline and speed up the process chain and thereby deliver results significantly faster than traditional approaches. The calibration experiments show impressive characteristics; the developed force/torque sensor can therefore be used effectively in industry, and the design methods can also be applied to the development of industrial products.
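
    The paper's ANSYS-driven workflow cannot be reproduced here; the hedged sketch below shows one routine step in developing such a sensor, fitting a 6 x n calibration matrix to known loads by least squares, on synthetic data (the gauge count, load count and noise level are assumptions).

        # Hedged sketch: fit a calibration matrix C mapping gauge voltages to
        # the six wrench components by least squares over known loads.
        # Synthetic data stand in for the paper's calibration measurements.
        import numpy as np

        rng = np.random.default_rng(0)
        n_gauges, n_loads = 8, 200
        C_true = rng.normal(size=(6, n_gauges))      # unknown "true" sensor map
        V = rng.normal(size=(n_gauges, n_loads))     # gauge readings per load
        W = C_true @ V + 0.01 * rng.normal(size=(6, n_loads))  # applied wrenches

        # Solve W ~= C V for C in the least-squares sense.
        C_fit, *_ = np.linalg.lstsq(V.T, W.T, rcond=None)
        C_fit = C_fit.T
        print("max |C_fit - C_true| =", np.abs(C_fit - C_true).max())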

  12. Performance of a wiped film evaporator with simulated high level waste slurries

    Dierks, R.D.; Bonner, W.F.

    1975-01-01

    The horizontal, reverse-taper, wiped film evaporator that was evaluated demonstrated a number of positive characteristics with respect to its applicability to the solidification of nuclear fuel recovery process wastes. Foremost among these is its ability to remove the bulk (80 to 90 percent) of the liquid associated with any of the Purex-type high level, intermediate level, or mixed waste slurries. The major disadvantage of the evaporator is its current inability to discharge a product low enough in liquid content to avoid sticking to the evaporator discharge nozzle. Also, while the indirect indications of the torque required to turn the rotor and the power drawn by the drive motor are indicative of the liquid content of the discharged product, no reliable correlation has been found to cover all of the possible flow rates and feedstock compositions that the evaporator may be required to handle. In addition, no reliable means has been found to indicate the presence or absence of product flow through the discharge nozzle. The lack of a positive means of moving the product concentrate out of the evaporator and into a high temperature receiver is an undesirable feature of the evaporator. Pulverized glass former, or frit, was added to the evaporator feedstock at a frit-to-metal-oxide ratio of 2 to 1, and the resulting mixture was successfully evaporated to a concentrate containing about 50 percent solids. In general, the performance of the wiped film evaporator evaluated was favorable for its use in a nuclear waste fixation process; however, further development of the rotor design, power input, and operating techniques will be required to produce a free-flowing solid product.

  13. Applying GIS and high performance agent-based simulation for managing an Old World Screwworm fly invasion of Australia.

    Welch, M C; Kwan, P W; Sajeev, A S M

    2014-10-01

    Agent-based modelling has proven to be a promising approach for developing rich simulations of complex phenomena that provide decision support functions across a broad range of areas including the biological, social and agricultural sciences. This paper demonstrates how high performance computing technologies, namely General-Purpose Computing on Graphics Processing Units (GPGPU), and commercial Geographic Information Systems (GIS) can be applied to develop a national-scale, agent-based simulation of an incursion of the Old World Screwworm fly (OWS fly) into the Australian mainland. The development of this simulation model leverages the combination of the massively data-parallel processing capabilities supported by NVidia's Compute Unified Device Architecture (CUDA) and the advanced spatial visualisation capabilities of GIS. These technologies have enabled the implementation of an individual-based, stochastic lifecycle and dispersal algorithm for the OWS fly invasion. The simulation model draws upon a wide range of biological data as input to stochastically determine the reproduction and survival of the OWS fly through the different stages of its lifecycle and the dispersal of gravid females. Through this model, a highly efficient computational platform has been developed for studying the effectiveness of control and mitigation strategies and their associated economic impact on livestock industries. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
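
    The CUDA kernels are not part of the record; the numpy sketch below is a toy stand-in for one day-step of an individual-based stochastic lifecycle and dispersal algorithm of the kind described (the survival probability and dispersal scale are invented, not the paper's calibrated biological inputs).

        # Toy, vectorised stand-in for a GPU agent kernel: each agent survives
        # a daily Bernoulli trial and disperses by a random 2D jump.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        pos = rng.uniform(0, 100.0, size=(n, 2))   # positions on a 100 km square
        alive = np.ones(n, dtype=bool)

        p_survive = 0.9                            # daily survival probability
        sigma_km = 2.0                             # dispersal scale per day

        for day in range(30):
            alive &= rng.random(n) < p_survive     # stochastic mortality
            step = rng.normal(0.0, sigma_km, size=(n, 2))
            pos[alive] += step[alive]              # gravid females disperse
            np.clip(pos, 0.0, 100.0, out=pos)      # stay on the study region

        print("survivors after 30 days:", alive.sum())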

  14. Finite element simulations and experiments of ballistic impacts on high performance PE composite material

    Herlaar, K.; Jagt-Deutekom, M.J. van der; Jacobs, M.J.N.

    2005-01-01

    The use of lightweight composite armour concepts is essential for the protection of future combat systems, both vehicles and personal. The design of such armour systems is challenging due to the complex material behaviour. Finite element simulations can be used to help understand the important

  15. High performance discrete event simulations to evaluate complex industrial systems, the case of automatic

    Hoekstra, A.G.; Dorst, L.; Bergman, M.; Lagerberg, J.; Visser, A.; Yakali, H.; Groen, F.; Hertzberger, L.O.

    1997-01-01

    We have developed a Modelling and Simulation platform for technical evaluation of Electronic Toll Collection on Motor Highways. This platform is used in a project of the Dutch government to assess the technical feasibility of Toll Collection systems proposed by industry. Motivated by this work we
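
    The abstract breaks off, but the platform described is a discrete event simulation; a minimal time-ordered event-queue core of the kind such platforms are built on looks like the following (the toll-gantry arrival model is an invented placeholder).

        # Minimal discrete event simulation core: a time-ordered event queue
        # processed in order. The arrival rate and horizon are made up.
        import heapq
        import random

        random.seed(1)
        events = []                                 # (time, sequence, action)
        seq = 0

        def schedule(t, action):
            global seq
            heapq.heappush(events, (t, seq, action))
            seq += 1

        served = 0
        def arrival(t):
            global served
            served += 1                             # vehicle passes the gantry
            schedule(t + random.expovariate(0.5), arrival)  # next arrival

        schedule(0.0, arrival)
        while events:
            now, _, action = heapq.heappop(events)
            if now > 100.0:                         # stop after 100 time units
                break
            action(now)

        print("vehicles processed:", served)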

  16. Time Step Considerations when Simulating Dynamic Behavior of High Performance Homes

    Tabares-Velasco, Paulo Cesar

    2016-09-01

    Building energy simulations, especially those concerning pre-cooling strategies and cooling/heating peak demand management, require careful analysis and detailed understanding of building characteristics. Accurate modeling of the building thermal response and of material properties for thermally massive walls or advanced materials like phase change materials (PCMs) is critically important.
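
    One concrete example of why the time step matters for thermally massive constructions: an explicit finite-difference conduction update is stable only for dt <= dx^2 / (2*alpha). The sketch below (generic masonry-like properties, not values from the report) contrasts a stable and an unstable choice.

        # Demonstrates the explicit stability limit dt <= dx**2 / (2*alpha)
        # for 1D wall conduction; material values are generic assumptions.
        import numpy as np

        alpha = 5e-7                 # thermal diffusivity [m^2/s]
        dx = 0.02                    # 2 cm grid through a 20 cm wall
        dt_limit = dx**2 / (2 * alpha)
        print(f"stability limit: dt <= {dt_limit:.0f} s")

        def simulate(dt, hours=24):
            T = np.full(11, 20.0)            # wall initially at 20 C
            for _ in range(int(hours * 3600 / dt)):
                T[0], T[-1] = 30.0, 20.0     # hot outside, conditioned inside
                T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
            return T

        print("stable run   :", np.round(simulate(0.5 * dt_limit), 1))
        print("unstable run :", np.round(simulate(4.0 * dt_limit), 1))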

  17. C-STARS Baltimore Simulation Center Military Trauma Training Program: Training for High Performance Trauma Teams

    2013-09-19

    …simulation room and intermittent access to conference and debriefing space. While the C-STARS program had priority for access to this space, it had to…

  18. Simulation and high performance computing-Building a predictive capability for fusion

    Strand, P.I.; Coelho, R.; Coster, D.; Eriksson, L.-G.; Imbeaux, F.; Guillerminet, Bernard

    2010-01-01

    The Integrated Tokamak Modelling Task Force (ITM-TF) is developing an infrastructure in which the validation needs, formulated in terms of multi-device data access and detailed physics comparisons aiming at the inclusion of synthetic diagnostics in the simulation chain, are key components. As the activity and the modelling tools are aimed at general use, although focused on ITER plasmas, a device-independent approach to data transport and a standardized approach to data management (data structures, naming, and access) are being developed in order to allow cross-validation between different fusion devices using a single toolset. Extensive work has already gone into, and continues to go into, the development of standardized descriptions of the data (Consistent Physical Objects). The longer-term aim is a complete simulation platform which is expected to last, and be extended in different ways, for the coming 30 years. The technical underpinning is therefore of vital importance. In particular, the platform needs to be extensible and open-ended to be able to take full advantage not only of today's most advanced technologies but also of future developments. As a fully comprehensive prediction of ITER physics rapidly becomes expensive in terms of computing resources, the simulation framework needs to be able to use both grid and HPC computing facilities. Hence data access and code coupling technologies are required to be available for a heterogeneous, possibly distributed, environment. The developments in this area are pursued in a separate project, EUFORIA (EU Fusion for ITER Applications), which provides about 15 professional person-years (ppy) per annum from 14 different institutes. The range and size of the activity is not only technically challenging but also presents some unique management challenges, in that a large and geographically distributed team (a truly pan-European set of researchers) needs to be coordinated on a fairly detailed

  19. A high performance computing framework for physics-based modeling and simulation of military ground vehicles

    Negrut, Dan; Lamb, David; Gorsich, David

    2011-06-01

    This paper describes a software infrastructure made up of tools and libraries designed to assist developers in implementing computational dynamics applications running on heterogeneous and distributed computing environments. Together, these tools and libraries compose a so-called Heterogeneous Computing Template (HCT). The heterogeneous and distributed computing hardware infrastructure is assumed herein to be made up of a combination of CPUs and Graphics Processing Units (GPUs). The computational dynamics applications targeted to execute on such a hardware topology include many-body dynamics, smoothed-particle hydrodynamics (SPH) fluid simulation, and fluid-solid interaction analysis. The underlying theme of the solution approach embraced by HCT is that of partitioning the domain of interest into a number of subdomains that are each managed by a separate core/accelerator (CPU/GPU) pair. Five components at the core of HCT enable the envisioned distributed computing approach to large-scale dynamical system simulation: (a) the ability to partition the problem according to the one-to-one mapping, i.e., the spatial subdivision discussed above (pre-processing); (b) a protocol for passing data between any two co-processors; (c) algorithms for element proximity computation; and (d) the ability to carry out post-processing in a distributed fashion. In this contribution, components (a) and (b) of the HCT are demonstrated via the example of the Discrete Element Method (DEM) for rigid body dynamics with friction and contact. The collision detection task required in frictional-contact dynamics (task (c) above) is shown to benefit from a two-order-of-magnitude efficiency gain on the GPU when compared to traditional sequential implementations. Note: Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not imply its endorsement, recommendation, or favoring by the United States Army. The views and
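
    Task (c), element proximity computation, is commonly implemented with a uniform-grid cell list; the pure-Python sketch below illustrates the idea that the paper's GPU kernels accelerate (the body count and radius are arbitrary).

        # Uniform-grid ("cell list") proximity sketch: bin bodies into cells of
        # one diameter, then only test pairs in neighbouring cells instead of
        # all O(N^2) pairs. Pure-Python stand-in for the GPU kernels.
        from collections import defaultdict
        from itertools import product
        import random

        random.seed(0)
        radius = 0.01
        bodies = [(random.random(), random.random()) for _ in range(2000)]

        def cell(p):
            return (int(p[0] // (2 * radius)), int(p[1] // (2 * radius)))

        grid = defaultdict(list)
        for i, p in enumerate(bodies):
            grid[cell(p)].append(i)

        contacts = set()
        for (cx, cy), members in grid.items():
            for dx, dy in product((-1, 0, 1), repeat=2):   # 3x3 neighbourhood
                for i in members:
                    for j in grid.get((cx + dx, cy + dy), ()):
                        if i < j:
                            (x1, y1), (x2, y2) = bodies[i], bodies[j]
                            if (x1 - x2) ** 2 + (y1 - y2) ** 2 < (2 * radius) ** 2:
                                contacts.add((i, j))

        print("contact pairs:", len(contacts))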

  20. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer the possibility of extensive improvement in efficiency compared to a fully featured workstation.
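
    WavePy's own API is not reproduced here; the numpy sketch below shows the FFT-bound propagation step (a Fresnel transfer-function multiply between a forward and an inverse FFT) that such MKL/OpenCV/GPU substitutions accelerate, with invented grid parameters.

        # Minimal Fresnel transfer-function propagation step with numpy FFTs;
        # illustrates the hot loop of wave-optics codes, not WavePy's API.
        import time
        import numpy as np

        n, dx = 1024, 10e-6          # grid points and sample spacing [m]
        wvl, z = 1e-6, 0.1           # wavelength and propagation distance [m]

        fx = np.fft.fftfreq(n, dx)
        FX, FY = np.meshgrid(fx, fx)
        H = np.exp(-1j * np.pi * wvl * z * (FX**2 + FY**2))  # transfer function

        u0 = np.zeros((n, n), dtype=complex)
        u0[n//2 - 32 : n//2 + 32, n//2 - 32 : n//2 + 32] = 1.0  # square aperture

        t0 = time.perf_counter()
        u1 = np.fft.ifft2(np.fft.fft2(u0) * H)               # one propagation
        print(f"one 1024x1024 step: {time.perf_counter() - t0:.3f} s")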

  1. Machine vision-based high-resolution weed mapping and patch-sprayer performance simulation

    Tang, L.; Tian, L.F.; Steward, B.L.

    1999-01-01

    An experimental machine vision-based patch-sprayer was developed. This sprayer was primarily designed to do real-time weed density estimation and variable herbicide application rate control. However, the sprayer also had the capability to do high-resolution weed mapping if proper mapping techniques

  2. Lasertron performance simulation

    Dubrovin, A.; Coulon, J.P.

    1987-05-01

    This report presents a comparative simulation study of the Lasertron under different frequency and emission conditions, with a view to establishing selection criteria for future experiments. The RING program used for these simulations is an improved version of the one presented in another report. The self-consistent treatment of the RF extraction zone has been added to it, together with the possibility of varying the initial conditions to better describe the laser illumination and the electron extraction from the cathode. Plane or curved cathodes are used.

  3. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the High Luminosity LHC will face a fivefold increase in the number of interactions per bunch crossing relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware based first trigger level of the experiment. This article will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out using data from the strip subsystem only or both strip and pixel subsystems.

  4. Development and verification of a high performance multi-group SP3 transport capability in the ARTEMIS core simulator

    Van Geemert, Rene

    2008-01-01

    For the satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA® next generation core simulation software package. ARCADIA® will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA®. To enable high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA®. With the entire numerical workload being highly parallelizable through domain decomposition techniques, the associated CPU-time requirements that adhere to the efficiency needs of the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCart reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)

  5. Design and performance simulation of a segmented-absorber based muon detection system for high energy heavy ion collision experiments

    Ahmad, S.; Bhaduri, P.P.; Jahan, H.; Senger, A.; Adak, R.; Samanta, S.; Prakash, A.; Dey, K.; Lebedev, A.; Kryshen, E.; Chattopadhyay, S.; Senger, P.; Bhattacharjee, B.; Ghosh, S.K.; Raha, S.; Irfan, M.; Ahmad, N.; Farooq, M.; Singh, B.

    2015-01-01

    A muon detection system (MUCH) based on a novel concept using a segmented and instrumented absorber has been designed for high-energy heavy-ion collision experiments. The system consists of 6 hadron absorber blocks and 6 tracking detector triplets. Behind each absorber block a detector triplet is located which measures the tracks of charged particles traversing the absorber. The performance of such a system has been simulated for the CBM experiment at FAIR (Germany), which is scheduled to start taking data in heavy ion collisions in the beam energy range of 6–45 A GeV from 2019. The muon detection system is mounted downstream of a Silicon Tracking System (STS) that is located in a large-aperture dipole magnet, which provides momentum information for the charged particle tracks. The reconstructed tracks from the STS are matched to the hits measured by the muon detector triplets behind the absorber segments. This method allows the identification of muon tracks over a broad range of momenta, including tracks of soft muons which do not pass through all the absorber layers. Pairs of oppositely charged muons identified by MUCH can therefore be combined to measure invariant masses in a wide range, from low mass vector mesons (LMVM) up to charmonia. The properties of the absorber (material, thickness, position) and of the tracking chambers (granularity, geometry) have been varied in simulations of heavy-ion collision events generated with the UrQMD generator and propagated through the setup using the GEANT3 particle transport code. The tracks are reconstructed by a Cellular Automaton algorithm followed by a Kalman filter. The simulations demonstrate that low mass vector mesons and charmonia can be clearly identified in central Au+Au collisions at beam energies provided by the international Facility for Antiproton and Ion Research (FAIR).

  6. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  7. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  8. High-performance simulation-based algorithms for an alpine ski racer’s trajectory optimization in heterogeneous computer systems

    Dębski Roman

    2014-09-01

    Effective, simulation-based trajectory optimization algorithms adapted to heterogeneous computers are studied with reference to a problem taken from alpine ski racing (the presented solution is probably the most general one published so far). The key idea behind these algorithms is to use a grid-based discretization scheme to transform the continuous optimization problem into a search problem over a specially constructed finite graph, and then to apply dynamic programming to find an approximation of the global solution. In the analyzed example this is the minimum-time ski line, represented as a piecewise-linear function (a method for the elimination of unfeasible solutions is proposed). Serial and parallel versions of the basic optimization algorithm are presented in detail (pseudo-code, time and memory complexity). Possible extensions of the basic algorithm are also described. The implementation of these algorithms is based on OpenCL. The included experimental results show that contemporary heterogeneous computers can be treated as μ-HPC platforms: they offer high performance (the best speedup was equal to 128) while remaining energy- and cost-efficient (which is crucial in embedded systems, e.g., trajectory planners of autonomous robots). The presented algorithms can be applied to many trajectory optimization problems, including those having a black-box represented performance measure.
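
    A minimal serial version of the core idea, discretising the course into layers of candidate positions and applying dynamic programming over the resulting graph, might look as follows (the constant-speed cost model is a placeholder for the paper's skier dynamics).

        # Layered-graph dynamic programming for a minimum-time trajectory;
        # with this placeholder cost (time ~ distance) the optimum is straight.
        import math

        n_layers, n_nodes, layer_gap = 20, 15, 10.0   # course discretisation [m]
        lateral = [j * 1.0 for j in range(n_nodes)]   # candidate offsets [m]

        def edge_time(y0, y1):
            return math.hypot(layer_gap, y1 - y0)     # placeholder cost model

        best = [0.0] * n_nodes                        # cost-to-reach, layer 0
        back = []
        for _ in range(1, n_layers):
            nxt, arg = [], []
            for j in range(n_nodes):
                costs = [best[i] + edge_time(lateral[i], lateral[j])
                         for i in range(n_nodes)]
                i_min = min(range(n_nodes), key=costs.__getitem__)
                nxt.append(costs[i_min])
                arg.append(i_min)
            best, back = nxt, back + [arg]

        # Backtrack the optimal piecewise-linear line from the cheapest node.
        j = min(range(n_nodes), key=best.__getitem__)
        path = [j]
        for arg in reversed(back):
            j = arg[j]
            path.append(j)
        print("min time:", round(min(best), 2), "path:", path[::-1])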

  9. Cellulose Nanocrystal Templated Graphene Nanoscrolls for High Performance Supercapacitors and Hydrogen Storage: An Experimental and Molecular Simulation Study.

    Dhar, Prodyut; Gaur, Surendra Singh; Kumar, Amit; Katiyar, Vimal

    2018-03-01

    Graphene nanoscrolls (GNS), due to their remarkably interesting properties, have attracted significant interest, with applications in various engineering sectors. However, the uncontrolled morphologies, poor yield and low quality of GNS produced through traditional routes are the major associated challenges. We demonstrate a sustainable approach that utilizes bio-derived cellulose nanocrystals (CNCs) as templates for the fabrication of GNS with tunable morphological dimensions ranging from the micron to the nanoscale (controlled length 1 μm), along with encapsulation of catalytically active metallic species in the scroll interlayers. The surface-modified magnetic CNCs act as structure-directing agents which provide enough momentum to initiate the self-scrolling of graphene through van der Waals forces and π-π interactions, the mechanism of which is demonstrated through experimental and molecular simulation studies. The proposed approach to GNS fabrication provides the flexibility to tune the physico-chemical properties of GNS by simply varying the interlayer spacing, scrolling density and fraction of encapsulated metallic nanoparticles. The hybrid GNS with confined palladium or platinum nanoparticles (at a low loading of ~1 wt.%) show enhanced hydrogen storage capacity (~0.2 wt.% at ~20 bar and ~273 K) and excellent supercapacitance behavior (~223-357 F/g) over prolonged cycling (retention ~93.5-96.4% at ~10000 cycles). The current strategy of utilizing bio-based templates can be further extended to incorporate complex architectures or nanomaterials in the GNS core or interlayers, which will potentially broaden its applications in the fabrication of high-performance devices.

  10. A Mesoscopic Simulation for the Early-Age Shrinkage Cracking Process of High Performance Concrete in Bridge Engineering

    Guodong Li

    2017-01-01

    On a mesoscopic level, high performance concrete (HPC) was assumed to be a heterogeneous composite material consisting of aggregates, mortar, and pores. A concrete mesoscopic structure model was established based on CT image reconstruction. By combining this model with continuum mechanics, damage mechanics, and fracture mechanics, a relatively complete system for concrete mesoscopic mechanics analysis was established to simulate the process of early-age shrinkage cracking in HPC. This process was based on the dispersion crack model. The results indicated that the interface between the aggregate and the mortar was the point at which shrinkage cracking in HPC initiated. The locations of early-age shrinkage cracks in HPC were associated with the spacing and size of the aggregate particles. However, the shrinkage deformation of the mortar was related to the extent of concrete cracking and was independent of the crack position. Whereas lower water-to-cement ratios can improve the early strength of concrete, they cannot control early-age shrinkage cracks in HPC.

  11. The design and simulated performance of a fast Level 1 track trigger for the ATLAS High Luminosity Upgrade

    Martensson, Mikael; The ATLAS collaboration

    2017-01-01

    The ATLAS experiment at the high-luminosity LHC will face a five-fold increase in the number of interactions per collision relative to the ongoing Run 2. This will require a proportional improvement in rejection power at the earliest levels of the detector trigger system, while preserving good signal efficiency. One critical aspect of this improvement will be the implementation of precise track reconstruction, through which sharper trigger turn-on curves can be achieved, and b-tagging and tau-tagging techniques can in principle be implemented. The challenge of such a project comes in the development of a fast, custom electronic device integrated in the hardware-based first trigger level of the experiment, with repercussions propagating as far as the detector read-out philosophy. This talk will discuss the requirements, architecture and projected performance of the system in terms of tracking, timing and physics, based on detailed simulations. Studies are carried out comparing two detector geometries and using...

  12. High Fidelity BWR Fuel Simulations

    Yoon, Su Jong [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-08-01

    This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High fidelity computational fluid dynamics (CFD) simulation of a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented. The performance of the multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality in the CFD results for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result. The stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary for improving the accuracy of the CFD simulation.

  13. Development and testing of high-performance fuel pin simulators for boiling experiments in liquid metal flow

    Casal, V.

    1976-01-01

    Local and integral boiling events in the core of sodium-cooled fast breeder reactors involve phenomena that are still not fully understood. Therefore, out-of-pile boiling experiments have been performed at GfK using electrically heated dummies of fuel element bundles. The success of these tests and the amount of information derived from them depend exclusively on the successful simulation of the fuel pins by electrically heated rods with regard to the essential physical properties. The report deals with the development and testing of heater rods for sodium boiling experiments in bundles including up to 91 heated pins.

  14. High performance shallow water kernels for parallel overland flow simulations based on FullSWOF2D

    Wittmann, Roland

    2017-01-25

    We describe code optimization and parallelization procedures applied to the sequential overland flow solver FullSWOF2D. Major difficulties when simulating overland flows include dealing with high-resolution datasets of large-scale areas which cannot be computed on a single node, either due to the limited amount of memory or due to the excessive number of (time step) iterations resulting from the CFL condition. We address these issues in terms of two major contributions. First, we demonstrate a generic step-by-step transformation of the second order finite volume scheme in FullSWOF2D towards MPI parallelization. Second, the computational kernels are optimized by the use of templates and a portable vectorization approach. We discuss the load imbalance of the flux computation due to dry and wet cells and propose a solution using an efficient cell counting approach. Finally, scalability results are shown for different test scenarios along with a flood simulation benchmark using the Shaheen II supercomputer.
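
    The paper's cell counting scheme itself is not part of the record; the sketch below illustrates the underlying idea under stated assumptions: partition the domain's rows so that each rank receives a roughly equal number of wet cells, where the flux work happens, rather than an equal number of rows.

        # Wet-cell-balanced row partitioning: ranks get roughly equal numbers
        # of wet cells rather than equal row counts. Terrain is synthetic.
        import numpy as np

        rng = np.random.default_rng(3)
        depth = rng.random((1000, 1000))
        depth[depth < 0.7] = 0.0                 # mostly dry, unevenly spread
        depth[:300] = 0.0                        # a large fully dry region

        wet_per_row = (depth > 0).sum(axis=1)
        cum = np.cumsum(wet_per_row)

        def row_partition(n_ranks):
            """Return a (start, end) row range per rank balancing wet cells."""
            targets = np.linspace(0, cum[-1], n_ranks + 1)
            cuts = np.searchsorted(cum, targets[1:-1])
            bounds = [0, *cuts.tolist(), depth.shape[0]]
            return list(zip(bounds[:-1], bounds[1:]))

        for r, (a, b) in enumerate(row_partition(4)):
            print(f"rank {r}: rows {a}-{b}, wet cells {wet_per_row[a:b].sum()}")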

  15. Achieving high milk production performance at grass with minimal concentrate supplementation with spring-calving dairy cows: actual performance compared to simulated performance

    O'Donovan, M.; Ruelle, Elodie; Coughlan, F.; Delaby, Luc

    2015-01-01

    The aim of high-profitability grazing systems is to produce milk efficiently from grazed pasture. There is very limited information available on the milk production capacity of dairy cows offered a grass-only diet for the main part of their lactation. In this study, spring-calving dairy cows were managed to achieve high milk production levels throughout the grazing season without supplementation. The calving date of the herd was 12 April; the herd had access to grass as they calved a...

  16. The effects of anticipating a high-stress task on sleep and performance during simulated on-call work.

    Sprajcer, Madeline; Jay, Sarah M; Vincent, Grace E; Vakulin, Andrew; Lack, Leon; Ferguson, Sally A

    2018-04-22

    On-call work is used to manage around the clock working requirements in a variety of industries. Often, tasks that must be performed while on-call are highly important, difficult and/or stressful by nature and, as such, may impact the level of anxiety that is experienced by on-call workers. Heightened anxiety is associated with poor sleep, which affects next-day cognitive performance. Twenty-four male participants (20-35 years old) spent an adaptation, a control and two counterbalanced on-call nights in a time-isolated sleep laboratory. On one of the on-call nights they were told that they would be required to do a speech upon waking (high-stress condition), whereas on the other night they were instructed that they would be required to read to themselves (low-stress condition). Pre-bed anxiety was measured by the State Trait Anxiety Inventory form x-1, and polysomnography and quantitative electroencephalogram analyses were used to investigate sleep. Performance was assessed across each day using the 10-min psychomotor vigilance task (09:30 hours, 12:00 hours, 14:30 hours, 17:00 hours). The results indicated that participants experienced no significant changes in pre-bed anxiety or sleep between conditions. However, performance on the psychomotor vigilance task was best in the high-stress condition, possibly as a result of heightened physiological arousal caused by performing the stressful task that morning. This suggests that performing a high-stress task may be protective of cognitive performance to some degree when sleep is not disrupted. © 2018 European Sleep Research Society.

  17. Fabrication of Cu2O-TiO2 Nano-composite with High Photocatalytic Performance under Simulated Solar Light

    Yi Wentao

    2016-01-01

    Cu2O-P25 (TiO2) nano-heterostructures with different mass ratios were synthesized via a wet chemical precipitation and hydrothermal method, and were characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), UV-vis diffuse reflectance spectroscopy (DRS), and X-ray photoelectron spectroscopy (XPS). DRS results showed that the light absorption of P25 extended into the visible light region with the loading of Cu2O. XPS results showed that Cu existed in the Cu+ state in the presence of hydroxylamine hydrochloride, confirming the formation of Cu2O. The obtained products exhibited efficient photocatalytic performance in the degradation of methyl orange (MO) and methylene blue (MB) under simulated solar light. The 5% Cu2O-P25 sample exhibited the highest photocatalytic activity among all as-prepared samples, and the photocatalysts could be recycled without obvious loss of photocatalytic activity.

  18. Implementation of a Monte Carlo simulation environment for fully 3D PET on a high-performance parallel platform

    Zaidi, H; Morel, Christian

    1998-01-01

    This paper describes the implementation of the Eidolon Monte Carlo program, designed to simulate fully three-dimensional (3D) cylindrical positron tomographs, on a MIMD parallel architecture. The original code was written in Objective-C and developed under the NeXTSTEP development environment. The different steps involved in porting the software to a parallel architecture based on PowerPC 604 processors running under AIX 4.1 are presented. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are described. A linear decrease of the computing time with the number of computing nodes was achieved. The improved time performance resulting from the parallelisation of the Monte Carlo calculations makes it an attractive tool for modelling photon transport in 3D positron tomography. The parallelisation paradigm used in this work is independent of the chosen parallel architecture.
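
    Eidolon's code is not part of this record; the toy below illustrates the general parallelisation paradigm for Monte Carlo photon transport, independent histories farmed out in batches with the partial tallies summed, using a placeholder physics model.

        # Toy version of the parallelisation paradigm: photon histories are
        # independent, so batches run in separate processes and tallies sum.
        import random
        from multiprocessing import Pool

        def track_batch(args):
            seed, n = args
            rng = random.Random(seed)
            detected = 0
            for _ in range(n):
                # placeholder "photon": absorbed with probability 0.4
                if rng.random() > 0.4:
                    detected += 1
            return detected

        if __name__ == "__main__":
            n_nodes, per_node = 8, 250_000
            batches = [(s, per_node) for s in range(n_nodes)]
            with Pool(n_nodes) as pool:
                tallies = pool.map(track_batch, batches)
            total = sum(tallies)
            print(f"detected fraction: {total / (n_nodes * per_node):.4f}")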

  19. Arx: a toolset for the efficient simulation and direct synthesis of high-performance signal processing algorithms

    Hofstra, K.L.; Gerez, Sabih H.

    2007-01-01

    This paper addresses the efficient implementation of high-performance signal-processing algorithms. In the early stages of such designs many computation-intensive simulations may be necessary. This calls for hardware description formalisms targeted for efficient simulation (such as the programming

  20. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. However, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround times compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  1. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. However, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround times compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
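
    As a hedged illustration of the staggered-grid velocity-pressure formulation mentioned above, the 1D toy below interleaves pressure and velocity grids and updates them leapfrog-fashion (the parameters are illustrative and far from the 5-billion-cell production model).

        # 1D staggered-grid velocity-pressure acoustodynamics toy: pressure at
        # cell centres, velocity at faces, leapfrog time stepping under CFL.
        import numpy as np

        n, dx, c, rho = 1000, 10.0, 1500.0, 1000.0   # grid, [m], water-like
        dt = 0.5 * dx / c                            # satisfies CFL condition
        p = np.zeros(n)                              # pressure, cell centres
        v = np.zeros(n - 1)                          # velocity, cell faces

        src_steps = int(0.1 / dt)                    # source active ~0.1 s
        for it in range(2000):
            v += -(dt / (rho * dx)) * (p[1:] - p[:-1])
            p[1:-1] += -(rho * c**2 * dt / dx) * (v[1:] - v[:-1])
            if it < src_steps:                       # inject a source pulse
                p[n // 2] += np.sin(2 * np.pi * 50 * it * dt) * dt

        print("peak |p| after 2000 steps:", np.abs(p).max())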

  2. The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection

    Lievens, Filip; Patterson, Fiona

    2011-01-01

    In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…

  3. Evaluating the performance of coupled snow-soil models in SURFEXv8 to simulate the permafrost thermal regime at a high Arctic site

    Barrere, Mathieu; Domine, Florent; Decharme, Bertrand; Morin, Samuel; Vionnet, Vincent; Lafaysse, Matthieu

    2017-09-01

    Climate change projections still suffer from a limited representation of the permafrost-carbon feedback. Predicting the response of permafrost temperature to climate change requires accurate simulations of Arctic snow and soil properties. This study assesses the capacity of the coupled land surface and snow models ISBA-Crocus and ISBA-ES to simulate snow and soil properties at Bylot Island, a high Arctic site. Field measurements complemented with ERA-Interim reanalyses were used to drive the models and to evaluate simulation outputs. Snow height, density, temperature, thermal conductivity and thermal insulance are examined to determine the critical variables involved in the soil and snow thermal regime. Simulated soil properties are compared to measurements of thermal conductivity, temperature and water content. The simulated snow density profiles are unrealistic, which is most likely caused by the lack of representation in snow models of the upward water vapor fluxes generated by the strong temperature gradients within the snowpack. The resulting vertical profiles of thermal conductivity are inverted compared to observations, with high simulated values at the bottom of the snowpack. Still, ISBA-Crocus manages to successfully simulate the soil temperature in winter. Results are satisfactory in summer, but the temperature of the top soil could be better reproduced by adequately representing surface organic layers, i.e., mosses and litter, and in particular their water retention capacity. Transition periods (soil freezing and thawing) are the least well reproduced because the high basal snow thermal conductivity induces an excessively rapid heat transfer between the soil and the snow in simulations. Hence, global climate models should carefully consider Arctic snow thermal properties, and especially the thermal conductivity of the basal snow layer, to perform accurate predictions of the permafrost evolution under climate change.

  4. Comparison of turbulence measurements from DIII-D low-mode and high-performance plasmas to turbulence simulations and models

    Rhodes, T.L.; Leboeuf, J.-N.; Sydora, R.D.; Groebner, R.J.; Doyle, E.J.; McKee, G.R.; Peebles, W.A.; Rettig, C.L.; Zeng, L.; Wang, G.

    2002-01-01

    Measured turbulence characteristics (correlation lengths, spectra, etc.) in low-confinement (L-mode) and high-performance plasmas in the DIII-D tokamak [Luxon et al., Proceedings Plasma Physics and Controlled Nuclear Fusion Research 1986 (International Atomic Energy Agency, Vienna, 1987), Vol. I, p. 159] show many similarities with the characteristics determined from turbulence simulations. Radial correlation lengths Δr of density fluctuations from L-mode discharges are found to be numerically similar to the ion poloidal gyroradius ρθ,s, or 5-10 times the ion gyroradius ρs, over a radial region extending outward from 0.2 of the normalized radius. To determine whether Δr scales as ρθ,s or as 5-10 times ρs, an experiment was performed which modified ρθ,s while keeping other plasma parameters approximately fixed. It was found that the experimental Δr did not scale as ρθ,s, which was similar to low-resolution UCAN simulations. Finally, both experimental measurements and gyrokinetic simulations indicate a significant reduction in the radial correlation length in high-performance quiescent double barrier discharges, as compared to normal L-mode, consistent with reduced transport in these high-performance plasmas.
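
    As an illustration of how a radial correlation length Δr is typically extracted, the sketch below correlates synthetic, exponentially correlated channel signals against their radial separation and fits the e-folding scale (the channel count, spacing and correlation model are assumptions, not DIII-D data).

        # Estimate a radial correlation length from channel-pair correlations;
        # synthetic signals with a known exponential correlation stand in for
        # reflectometer data.
        import numpy as np

        rng = np.random.default_rng(7)
        n_ch, n_t, dr, true_lr = 16, 20000, 0.2, 1.0   # channels, samples [cm]

        idx = np.arange(n_ch)
        r_sep = np.abs(np.subtract.outer(idx, idx)) * dr
        L = np.linalg.cholesky(np.exp(-r_sep / true_lr))  # impose correlation
        sig = L @ rng.normal(size=(n_ch, n_t))

        sep, corr = [], []
        for i in range(n_ch):
            for j in range(i + 1, n_ch):
                sep.append((j - i) * dr)
                corr.append(np.corrcoef(sig[i], sig[j])[0, 1])

        sep, corr = np.array(sep), np.array(corr)
        mask = corr > 0.05                             # fit the reliable part
        slope = np.polyfit(sep[mask], np.log(corr[mask]), 1)[0]
        print(f"estimated length ~ {-1 / slope:.2f} cm (input {true_lr} cm)")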

  5. Computer simulation analysis on the machinability of alumina dispersion enforced copper alloy for high performance compact heat exchanger

    Ishiyama, Shintaro; Muto, Yasushi

    2001-01-01

    A feasibility study of an HTGR-GT (High Temperature Gas cooled Reactor-Gas Turbine) system is examining the application of the high-strength, high-thermal-conductivity alumina-dispersed copper alloy AL-25 to the ultra-fine rectangular plate fins of the system's recuperator. However, it is very difficult to manufacture ultra-fine fins from the hard and brittle AL-25 foil by large-scale plastic deformation. Therefore, in the present study, to establish a fine-fin manufacturing technology for AL-25 foil, the fin-forming process was first simulated by large-scale elasto-plastic finite element analysis (FEM) and the forming limit was estimated. Next, manufacturing equipment implementing the new process suggested by these analytical results was built, and manufacturing experiments were carried out on AL-25 foil. The following conclusions were obtained. (1) The processing simulation for manufacturing a fine rectangular fin (fin height x pitch x thickness, 3 mm x 4 mm x 0.156 mm) from AL-25 foil (thickness = 0.156 mm) by large-scale elasto-plastic FEM showed that such a fin can be manufactured with the double-action processing method. Experiments with the purpose-built double-action processing equipment established that 0.8 mm and 0.25 mm were the best values for the R part and the clearance between dies, respectively. (2) A fine-fin manufacturing experiment on AL-25 foil succeeded in producing fins with height x pitch x thickness of 3 mm x 4 mm x (0.156 mm ± 0.001 mm). (3) The evolution of the deformation and the thickness changes during processing of the AL-25 foil predicted by the large-scale elasto-plastic FEM showed good agreement with the results of the processing experiments.

  6. High Order Large Eddy Simulation (LES) of Gliding Snake Aerodynamics: Effect of 3D Flow on Gliding Performance

    Delorme, Yann; Hassan, Syed Harris; Socha, Jake; Vlachos, Pavlos; Frankel, Steven

    2014-11-01

    Chrysopelea paradisi are snakes that are able to glide over long distances by morphing the cross section of their bodies from circular to a triangular airfoil and undulating through the air. Snake glide is characterized by a relatively low Reynolds number and a high angle of attack, as well as three-dimensional and unsteady flow. Here we study the 3D dynamics of the flow using an in-house high-order large eddy simulation code. The code features a novel multi-block immersed boundary method to accurately and efficiently represent the complex snake geometry. We investigate the steady-state three-dimensionality of the flow, especially the wake flow induced by the presence of the snake's body, as well as the vortex-body interaction thought to be responsible for part of the lift enhancement. Numerical predictions of global lift and drag are compared to experimental measurements, as well as the lift distribution along the body of the snake due to cross-sectional variations. Comparisons with previously published 2D results are made to highlight the importance of three-dimensional effects. Additional efforts are made to quantify the properties of the vortex shedding, and Dynamic Mode Decomposition (DMD) is used to analyse the main modes responsible for the lift and drag forces.

  7. Design of the HELICS High-Performance Transmission-Distribution-Communication-Market Co-Simulation Framework: Preprint

    Palmintier, Bryan S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnamurthy, Dheepak [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Top, Philip [Lawrence Livermore National Laboratories; Smith, Steve [Lawrence Livermore National Laboratories; Daily, Jeff [Pacific Northwest National Laboratory; Fuller, Jason [Pacific Northwest National Laboratory

    2017-09-12

    This paper describes the design rationale for a new cyber-physical-energy co-simulation framework for electric power systems. This new framework will support very large-scale (100,000+ federates) co-simulations with off-the-shelf power-systems, communication, and end-use models. Other key features include cross-platform operating system support, integration of both event-driven (e.g. packetized communication) and time-series (e.g. power flow) simulation, and the ability to co-iterate among federates to ensure model convergence at each time step. After describing requirements, we begin by evaluating existing co-simulation frameworks, including HLA and FMI, and conclude that none provide the required features. Then we describe the design for the new layered co-simulation architecture.
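
    The co-iteration idea, repeating the exchange between federates within a time step until the shared boundary variables converge, can be sketched in a few lines. This is a generic illustration under assumed toy models, not the HELICS API; the transmission and distribution callables are hypothetical stand-ins for federates.

```python
def cosimulate(transmission, distribution, t_end, dt, tol=1e-6, max_iter=20):
    """Time-stepped co-simulation with fixed-point co-iteration."""
    t, voltage = 0.0, 1.0                       # initial boundary voltage (p.u.)
    while t < t_end:
        # Co-iterate: exchange boundary values until they stop changing,
        # so both federates agree before the clock advances.
        for _ in range(max_iter):
            load = distribution(voltage, t)     # distribution federate solves load
            new_voltage = transmission(load, t) # transmission federate solves voltage
            if abs(new_voltage - voltage) < tol:
                break
            voltage = new_voltage
        t += dt
    return voltage

# Hypothetical federate models: a source with voltage droop under load,
# and a voltage-sensitive load.
v = cosimulate(transmission=lambda load, t: 1.05 - 0.02 * load,
               distribution=lambda v, t: 0.8 * v ** 2,
               t_end=1.0, dt=0.1)
print(f"converged boundary voltage: {v:.4f} p.u.")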

  8. Approaching Sentient Building Performance Simulation Systems

    Negendahl, Kristoffer; Perkov, Thomas; Heller, Alfred

    2014-01-01

    Sentient BPS systems can combine one or more high precision BPS and provide near instantaneous performance feedback directly in the design tool, thus providing speed and precision of building performance in the early design stages. Sentient BPS systems are essentially combining: 1) design tools, 2) parametric tools, 3) BPS tools, 4) dynamic databases, 5) interpolation techniques and 6) prediction techniques as a fast and valid simulation system in the early design stage....
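
    The combination of a dynamic database of precomputed simulations with interpolation is what makes near-instantaneous feedback possible. A minimal sketch of that idea, assuming a hypothetical database of (window-to-wall ratio, insulation R-value) design points with simulated annual energy use, using Gaussian radial-basis-function interpolation:

```python
import numpy as np

def rbf_interpolate(X, y, x_new, eps=1.0):
    """Gaussian RBF interpolation over stored simulation results."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    weights = np.linalg.solve(kernel(X, X), y)   # fit once on the database
    return kernel(x_new, X) @ weights            # near-instant query

# Hypothetical database: (window-to-wall ratio, R-value) -> annual kWh/m2.
X = np.array([[0.2, 2.0], [0.2, 5.0], [0.5, 2.0], [0.5, 5.0], [0.8, 3.5]])
y = np.array([95.0, 78.0, 120.0, 90.0, 115.0])

print(rbf_interpolate(X, y, np.array([[0.4, 4.0]])))
```

    A prediction technique would extend the same pattern with a model that also reports uncertainty, so the design tool can tell the user when a full simulation run is still warranted.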

  9. Impact of High-Fidelity Simulation and Pharmacist-Specific Didactic Lectures in Addition to ACLS Provider Certification on Pharmacy Resident ACLS Performance.

    Bartel, Billie J

    2014-08-01

    This pilot study explored the use of multidisciplinary high-fidelity simulation and additional pharmacist-focused training methods in training postgraduate year 1 (PGY1) pharmacy residents to provide Advanced Cardiovascular Life Support (ACLS) care. Pharmacy resident confidence and comfort level were assessed after completing these training requirements. The ACLS training requirements for pharmacy residents were revised to include didactic instruction on ACLS pharmacology and rhythm recognition and participation in multidisciplinary high-fidelity simulation ACLS experiences in addition to ACLS provider certification. Surveys were administered to participating residents to assess the impact of this additional education on resident confidence and comfort level in cardiopulmonary arrest situations. The new ACLS didactic and simulation training requirements resulted in increased resident confidence and comfort level in all assessed functions. Residents felt more confident in all areas except providing recommendations for dosing and administration of medications and rhythm recognition after completing the simulation scenarios than with ACLS certification training and the didactic components alone. All residents felt the addition of lectures and simulation experiences better prepared them to function as a pharmacist in the ACLS team. Additional ACLS training requirements for pharmacy residents increased overall awareness of pharmacist roles and responsibilities and greatly improved resident confidence and comfort level in performing most essential pharmacist functions during ACLS situations.

  10. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    Can prefabrication contribute to the development of high performance homes? To answer this question, this chapter defines high performance in more broadly inclusive terms, acknowledging the technical, architectural, social and economic conditions under which energy consumption and production occur....... Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  11. Acoustic Performance of Novel Fan Noise Reduction Technologies for a High Bypass Model Turbofan at Simulated Flight Conditions

    Elliott, David M.; Woodward, Richard P.; Podboy, Gary G.

    2010-01-01

    Two novel fan noise reduction technologies, over the rotor acoustic treatment and soft stator vane technologies, were tested in an ultra-high bypass ratio turbofan model in the NASA Glenn Research Center's 9- by 15-Foot Low-Speed Wind Tunnel. The performance of these technologies was compared to that of the baseline fan configuration, which did not have these technologies. Sideline acoustic data and hot film flow data were acquired and are used to determine the effectiveness of the various treatments. The material used for the over the rotor treatment was foam metal and two different types were used. The soft stator vanes had several internal cavities tuned to target certain frequencies. In order to accommodate the cavities it was necessary to use a cut-on stator to demonstrate the soft vane concept.

  12. Undergraduate nursing students' performance in recognising and responding to sudden patient deterioration in high psychological fidelity simulated environments: an Australian multi-centre study.

    Bogossian, Fiona; Cooper, Simon; Cant, Robyn; Beauchamp, Alison; Porter, Joanne; Kain, Victoria; Bucknall, Tracey; Phillips, Nicole M

    2014-05-01

    Early recognition and situation awareness of sudden patient deterioration, a timely appropriate clinical response, and teamwork are critical to patient outcomes. High fidelity simulated environments provide the opportunity for undergraduate nursing students to develop and refine recognition and response skills. This paper reports the quantitative findings of the first phase of a larger program of ongoing research: Feedback Incorporating Review and Simulation Techniques to Act on Clinical Trends (FIRST2ACT™). It specifically aims to identify the characteristics that may predict primary outcome measures of clinical performance, teamwork and situation awareness in the management of deteriorating patients. Mixed-method multi-centre study. High fidelity simulated acute clinical environment in three Australian universities. A convenience sample of 97 final year nursing students enrolled in an undergraduate Bachelor of Nursing or combined Bachelor of Nursing degree were included in the study. In groups of three, participants proceeded through three phases: (i) pre-briefing and completion of a multi-choice question test, (ii) three video-recorded simulated clinical scenarios where actors substituted real patients with deteriorating conditions, and (iii) post-scenario debriefing. Clinical performance, teamwork and situation awareness were evaluated, using a validated standard checklist (OSCE), Team Emergency Assessment Measure (TEAM) score sheet and Situation Awareness Global Assessment Technique (SAGAT). A Modified Angoff technique was used to establish cut points for clinical performance. Student teams engaged in 97 simulation experiences across the three scenarios and achieved a level of clinical performance consistent with the experts' identified pass level point in only 9 (9%) of the simulation experiences. Knowledge was significantly associated with overall teamwork (p=.034), overall situation awareness (p=.05) and clinical performance in two of the three scenarios

  13. Hydrological simulation approaches for BMPs and LID practices in highly urbanized area and development of hydrological performance indicator system

    Yan-wei Sun

    2014-04-01

    Urbanization causes hydrological change and increases stormwater runoff volumes, leading to flooding, erosion, and the degradation of instream ecosystem health. Best management practices (BMPs), like detention ponds and infiltration trenches, have been widely used to control flood runoff events for the past decade. However, low impact development (LID) options have been proposed as an alternative approach to better mimic the natural flow regime by using decentralized designs to control stormwater runoff at the source, rather than at a centralized location in the watershed. For highly urbanized areas, LID stormwater management practices such as bioretention cells and porous pavements can be used to retrofit existing infrastructure and reduce runoff volumes and peak flows. This paper describes a modeling approach to incorporate these LID practices and the two BMPs of detention ponds and infiltration trenches in an existing hydrological model to estimate the impacts of BMPs and LID practices on the surface runoff. The modeling approach has been used in a parking lot located in Lenexa, Kansas, USA, to predict hydrological performance of BMPs and LID practices. A performance indicator system including the flow duration curve, peak flow frequency exceedance curve, and runoff coefficient have been developed in an attempt to represent impacts of BMPs and LID practices on the entire spectrum of the runoff regime. Results demonstrate that use of these BMPs and LID practices leads to significant stormwater control for small rainfall events and less control for flood events.
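
    Of the indicators named above, the flow duration curve and the runoff coefficient are simple to compute from a runoff time series. A minimal sketch, with a hypothetical hourly rainfall-runoff series standing in for model output:

```python
import numpy as np

def flow_duration_curve(flows):
    """Return (exceedance probability, flow) pairs for a runoff series."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]   # descending flows
    p = np.arange(1, len(q) + 1) / (len(q) + 1.0)       # Weibull plotting position
    return p, q

def runoff_coefficient(runoff, rainfall):
    """Fraction of rainfall leaving the site as surface runoff."""
    return np.sum(runoff) / np.sum(rainfall)

# Hypothetical hourly depths (mm) for a retrofitted parking lot.
rain = np.array([0.0, 2.0, 6.0, 10.0, 4.0, 1.0, 0.0, 0.0])
runoff = np.array([0.0, 0.2, 1.5, 4.0, 1.8, 0.4, 0.1, 0.0])

p, q = flow_duration_curve(runoff)
print(f"runoff coefficient: {runoff_coefficient(runoff, rain):.2f}")
print(f"flow exceeded 25% of the time: {np.interp(0.25, p, q):.2f} mm")
```

    Comparing such curves before and after adding BMPs or LID practices shows the effect across the whole runoff regime rather than at a single design storm.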

  14. High performance computing applied to simulation of the flow in pipes; Computacao de alto desempenho aplicada a simulacao de escoamento em dutos

    Cozin, Cristiane; Lueders, Ricardo; Morales, Rigoberto E.M. [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil). Dept. de Engenharia Mecanica

    2008-07-01

    In recent years, computer clusters have emerged as a real alternative for solving problems which require high performance computing, driving the development of new applications. Among them, flow simulation represents a real computational burden, especially for large systems. This work presents a study of the use of parallel computing for numerical fluid flow simulation in pipelines. A mathematical flow model is numerically solved; in general, this procedure leads to a tridiagonal system of equations suitable to be solved by a parallel algorithm. In this work, this is accomplished by a parallel odd-even reduction method found in the literature, implemented in the Fortran programming language. A computational platform composed of twelve processors was used. Measurements of CPU time for different tridiagonal system sizes and numbers of processors were obtained, highlighting the communication time between processors as an important issue to be considered when evaluating the performance of parallel applications. (author)
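
    For reference, odd-even (cyclic) reduction eliminates the odd-indexed unknowns of a tridiagonal system level by level; the updates within each level are mutually independent, which is exactly what a parallel implementation distributes across processors. A serial sketch of the algorithm (assuming a system size of 2**m - 1 and zero entries a[0] and c[-1] on the unused corners):

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Serial reference version of odd-even (cyclic) reduction.

    a: sub-diagonal (a[0] == 0), b: diagonal, c: super-diagonal
    (c[-1] == 0), d: right-hand side. Assumes n = 2**m - 1.
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    m = int(np.log2(n + 1))
    assert 2 ** m - 1 == n, "this sketch assumes n = 2**m - 1"

    # Forward phase: each level folds the neighbours at distance s into
    # the surviving equations, halving the active system.
    s = 1
    for _ in range(m - 1):
        for i in range(2 * s - 1, n, 2 * s):      # independent updates
            lo, hi = i - s, i + s
            alpha, beta = a[i] / b[lo], c[i] / b[hi]
            b[i] -= alpha * c[lo] + beta * a[hi]
            d[i] -= alpha * d[lo] + beta * d[hi]
            a[i], c[i] = -alpha * a[lo], -beta * c[hi]
        s *= 2

    # Back substitution from the single remaining equation downwards.
    x = np.zeros(n)
    for level in range(m - 1, -1, -1):
        h = 2 ** level
        for i in range(h - 1, n, 2 * h):          # independent solves
            left = x[i - h] if i - h >= 0 else 0.0
            right = x[i + h] if i + h < n else 0.0
            x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x

# Quick check against a dense solve on a diagonally dominant system.
n = 15
rng = np.random.default_rng(0)
a = np.r_[0.0, rng.uniform(-1, 0, n - 1)]
c = np.r_[rng.uniform(-1, 0, n - 1), 0.0]
b = 4.0 + np.zeros(n)
d = rng.uniform(size=n)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
assert np.allclose(cyclic_reduction(a, b, c, d), np.linalg.solve(A, d))
```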

  15. An exploration of the relationship between knowledge and performance-related variables in high-fidelity simulation: designing instruction that promotes expertise in practice.

    Hauber, Roxanne P; Cormier, Eileen; Whyte, James

    2010-01-01

    Increasingly, high-fidelity patient simulation (HFPS) is becoming essential to nursing education. Much remains unknown about how classroom learning is connected to student decision-making in simulation scenarios and the degree to which transference takes place between the classroom setting and actual practice. The present study was part of a larger pilot study aimed at determining the relationship between nursing students' clinical ability to prioritize their actions and the associated cognitions and physiologic outcomes of care using HFPS. In an effort to better explain the knowledge base being used by nursing students in HFPS, the investigators explored the relationship between common measures of knowledge and performance-related variables. Findings are discussed within the context of the expert performance approach and concepts from cognitive psychology, such as cognitive architecture, cognitive load, memory, and transference.

  16. Molecular simulation workflows as parallel algorithms: the execution engine of Copernicus, a distributed high-performance computing platform.

    Pronk, Sander; Pouya, Iman; Lundborg, Magnus; Rotskoff, Grant; Wesén, Björn; Kasson, Peter M; Lindahl, Erik

    2015-06-09

    Computational chemistry and other simulation fields are critically dependent on computing resources, but few problems scale efficiently to the hundreds of thousands of processors available in current supercomputers, particularly for molecular dynamics. This has turned into a bottleneck as new hardware generations primarily provide more processing units rather than making individual units much faster, which simulation applications are addressing by increasingly focusing on sampling with algorithms such as free-energy perturbation, Markov state modeling, metadynamics, or milestoning. All these rely on combining results from multiple simulations into a single observation. They are potentially powerful approaches that aim to predict experimental observables directly, but this comes at the expense of added complexity in selecting sampling strategies and keeping track of dozens to thousands of simulations and their dependencies. Here, we describe how the distributed execution framework Copernicus allows the expression of such algorithms in generic workflows: dataflow programs. Because dataflow algorithms explicitly state dependencies of each constituent part, algorithms only need to be described on a conceptual level, after which the execution is maximally parallel. The fully automated execution facilitates the optimization of these algorithms with adaptive sampling, where undersampled regions are automatically detected and targeted without user intervention. We show how several such algorithms can be formulated for computational chemistry problems, and how they are executed efficiently with many loosely coupled simulations using either distributed or parallel resources with Copernicus.
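
    The key property of such dataflow programs, that every task's dependencies are explicit so anything runnable can run at once, can be illustrated generically. The sketch below is not the Copernicus API; the task table and worker pool are hypothetical.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_dataflow(tasks, max_workers=4):
    """Execute {name: (fn, [dependency names])} with maximal parallelism.

    Assumes the task graph is acyclic; fn receives its dependencies'
    results as positional arguments.
    """
    results, pending, running = {}, dict(tasks), {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while pending or running:
            # Launch every task whose inputs are already available.
            ready = [n for n, (fn, deps) in pending.items()
                     if all(d in results for d in deps)]
            for name in ready:
                fn, deps = pending.pop(name)
                running[pool.submit(fn, *[results[d] for d in deps])] = name
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                results[running.pop(fut)] = fut.result()
    return results

# Toy sampling workflow: three independent "simulations", then a combine
# step that only fires once all of them have finished.
flow = {
    "sim_a": (lambda: 1.2, []),
    "sim_b": (lambda: 0.9, []),
    "sim_c": (lambda: 1.1, []),
    "combine": (lambda *e: sum(e) / len(e), ["sim_a", "sim_b", "sim_c"]),
}
print(run_dataflow(flow)["combine"])
```

    Adaptive sampling fits the same mold conceptually: a monitoring task can generate further simulation tasks for undersampled regions as results arrive.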

  17. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  18. High Performance Marine Vessels

    Yun, Liang

    2012-01-01

    High Performance Marine Vessels (HPMVs) range from the Fast Ferries to the latest high speed Navy Craft, including competition power boats and hydroplanes, hydrofoils, hovercraft, catamarans and other multi-hull craft. High Performance Marine Vessels covers the main concepts of HPMVs and discusses historical background, design features, services that have been successful and not so successful, and some sample data of the range of HPMVs to date. Included is a comparison of all HPMVs craft and the differences between them and descriptions of performance (hydrodynamics and aerodynamics). Readers will find a comprehensive overview of the design, development and building of HPMVs. In summary, this book: Focuses on technology at the aero-marine interface Covers the full range of high performance marine vessel concepts Explains the historical development of various HPMVs Discusses ferries, racing and pleasure craft, as well as utility and military missions High Performance Marine Vessels is an ideal book for student...

  19. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
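
    The record is truncated, but the kernel at its core is the all-pairs force evaluation, which GPU implementations typically assign one body per thread. A vectorized CPU analogue for illustration (G = 1 units; the particle data and Plummer softening value are hypothetical):

```python
import numpy as np

def accelerations(pos, mass, eps=1e-2):
    """Direct O(N^2) gravitational accelerations with Plummer softening.

    pos: (N, 3) positions, mass: (N,) masses. The softening eps removes
    the singularity for close encounters, as in typical N-body kernels.
    """
    r = pos[None, :, :] - pos[:, None, :]          # displacement i -> j
    inv_d3 = ((r ** 2).sum(-1) + eps ** 2) ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                  # drop self-interaction
    return (r * (mass[None, :, None] * inv_d3[..., None])).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.normal(size=(256, 3))
mass = np.full(256, 1.0 / 256)                     # equal-mass system
print(accelerations(pos, mass).shape)              # (256, 3)
```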

  20. High performance systems

    Vigil, M.B. [comp.]

    1995-03-01

    This document provides a written compilation of the presentations and viewgraphs from the 1994 Conference on High Speed Computing, "High Performance Systems," held at Gleneden Beach, Oregon, on April 18 through 21, 1994.

  1. Simulating and stimulating performance: Introducing distributed simulation to enhance musical learning and performance

    Aaron Williamon

    2014-02-01

    Musicians typically rehearse far away from their audiences and in practice rooms that differ significantly from the concert venues in which they aspire to perform. Due to the high costs and inaccessibility of such venues, much current international music training lacks repeated exposure to realistic performance situations, with students learning all too late (or not at all) how to manage performance stress and the demands of their audiences. Virtual environments have been shown to be an effective training tool in the fields of medicine and sport, offering practitioners access to real-life performance scenarios but with lower risk of negative evaluation and outcomes. The aim of this research was to design and test the efficacy of simulated performance environments in which conditions of "real" performance could be recreated. Advanced violin students (n=11) were recruited to perform in two simulations: a solo recital with a small virtual audience and an audition situation with three "expert" virtual judges. Each simulation contained back-stage and on-stage areas, life-sized interactive virtual observers, and pre- and post-performance protocols designed to match those found at leading international performance venues. Participants completed a questionnaire on their experiences of using the simulations. Results show that both simulated environments offered realistic experience of performance contexts and were rated particularly useful for developing performance skills. For a subset of 7 violinists, state anxiety and electrocardiographic data were collected during the simulated audition and an actual audition with real judges. Results display comparable levels of reported state anxiety and patterns of heart rate variability in both situations, suggesting that responses to the simulated audition closely approximate those of a real audition. The findings are discussed in relation to their implications, both generalizable and individual-specific, for performance training.

  2. Simulating and stimulating performance: introducing distributed simulation to enhance musical learning and performance.

    Williamon, Aaron; Aufegger, Lisa; Eiholzer, Hubert

    2014-01-01

    Musicians typically rehearse far away from their audiences and in practice rooms that differ significantly from the concert venues in which they aspire to perform. Due to the high costs and inaccessibility of such venues, much current international music training lacks repeated exposure to realistic performance situations, with students learning all too late (or not at all) how to manage performance stress and the demands of their audiences. Virtual environments have been shown to be an effective training tool in the fields of medicine and sport, offering practitioners access to real-life performance scenarios but with lower risk of negative evaluation and outcomes. The aim of this research was to design and test the efficacy of simulated performance environments in which conditions of "real" performance could be recreated. Advanced violin students (n = 11) were recruited to perform in two simulations: a solo recital with a small virtual audience and an audition situation with three "expert" virtual judges. Each simulation contained back-stage and on-stage areas, life-sized interactive virtual observers, and pre- and post-performance protocols designed to match those found at leading international performance venues. Participants completed a questionnaire on their experiences of using the simulations. Results show that both simulated environments offered realistic experience of performance contexts and were rated particularly useful for developing performance skills. For a subset of 7 violinists, state anxiety and electrocardiographic data were collected during the simulated audition and an actual audition with real judges. Results display comparable levels of reported state anxiety and patterns of heart rate variability in both situations, suggesting that responses to the simulated audition closely approximate those of a real audition. The findings are discussed in relation to their implications, both generalizable and individual-specific, for performance training.

  3. Effects of Dietary Nitrate Supplementation on Physiological Responses, Cognitive Function, and Exercise Performance at Moderate and Very-High Simulated Altitude

    Oliver M. Shannon

    2017-06-01

    Purpose: Nitric oxide (NO) bioavailability is reduced during acute altitude exposure, contributing toward the decline in physiological and cognitive function in this environment. This study evaluated the effects of nitrate (NO3−) supplementation on NO bioavailability, physiological and cognitive function, and exercise performance at moderate and very-high simulated altitude. Methods: Ten males (mean (SD) V˙O2max: 60.9 (10.1) ml·kg−1·min−1) rested and performed exercise twice at moderate (~14.0% O2; ~3,000 m) and twice at very-high (~11.7% O2; ~4,300 m) simulated altitude. Participants ingested either 140 ml concentrated NO3−-rich (BRJ; ~12.5 mmol NO3−) or NO3−-deplete (PLA; 0.01 mmol NO3−) beetroot juice 2 h before each trial. Participants rested for 45 min in normobaric hypoxia prior to completing an exercise task. Exercise comprised a 45 min walk at 30% V˙O2max and a 3 km time-trial (TT), both conducted on a treadmill at a 10% gradient whilst carrying a 10 kg backpack to simulate altitude hiking. Plasma nitrite concentration ([NO2−]), peripheral oxygen saturation (SpO2), pulmonary oxygen uptake (V˙O2), muscle and cerebral oxygenation, and cognitive function were measured throughout. Results: Pre-exercise plasma [NO2−] was significantly elevated in BRJ compared with PLA (p = 0.001). Pulmonary V˙O2 was reduced (p = 0.020), and SpO2 was elevated (p = 0.005) during steady-state exercise in BRJ compared with PLA, with similar effects at both altitudes. BRJ supplementation enhanced 3 km TT performance relative to PLA by 3.8% [1,653.9 (261.3) vs. 1,718.7 (213.0) s] and 4.2% [1,809.8 (262.0) vs. 1,889.1 (203.9) s] at 3,000 and 4,300 m, respectively (p = 0.019). Oxygenation of the gastrocnemius was elevated during the TT consequent to BRJ (p = 0.011). The number of false alarms during the Rapid Visual Information Processing Task tended to be lower with BRJ compared with PLA prior to altitude exposure (p = 0.056). Performance in all other cognitive tasks

  4. Responsive design high performance

    Els, Dewald

    2015-01-01

    This book is ideal for developers who have experience in developing websites or possess minor knowledge of how responsive websites work. No experience of high-level website development or performance tweaking is required.

  5. High Performance Macromolecular Material

    Forest, M

    2002-01-01

    .... In essence, most commercial high-performance polymers are processed through fiber spinning, following Nature and spider silk, which is still pound-for-pound the toughest liquid crystalline polymer...

  6. High performance pseudo-analytical simulation of multi-object adaptive optics over multi-GPU systems

    Abdelfattah, Ahmad; Gendron, É ric; Gratadour, Damien; Keyes, David E.; Ltaief, Hatem; Sevin, Arnaud; Vidal, Fabrice

    2014-01-01

    Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique dedicated to the special case of wide-field multi-object spectrographs (MOS). It applies dedicated wavefront corrections to numerous independent tiny patches spread over a large field of view (FOV). The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. The output of this study helps the design of a new instrument called MOSAIC, a multi-object spectrograph proposed for the European Extremely Large Telescope (E-ELT). We have developed a novel hybrid pseudo-analytical simulation scheme that allows us to accurately simulate in detail the tomographic problem. The main challenge resides in the computation of the tomographic reconstructor, which involves pseudo-inversion of a large dense symmetric matrix. The pseudo-inverse is computed using an eigenvalue decomposition, based on the divide and conquer algorithm, on multicore systems with multi-GPUs. Thanks to a new symmetric matrix-vector product (SYMV) multi-GPU kernel, our overall implementation scores significant speedups over standard numerical libraries on multicore, such as Intel MKL, and up to 60% speedups over the standard MAGMA implementation on 8 Kepler K20c GPUs. At 40,000 unknowns, this appears to be, to our knowledge, the largest-scale tomographic AO matrix solver computed to date, and it opens new research directions for extreme scale AO simulations.
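
    The solver step described here, pseudo-inversion of a dense symmetric matrix through an eigenvalue decomposition, looks as follows at small scale. This is a generic numpy sketch of the numerical idea only; the divide-and-conquer, multi-GPU machinery of the paper is beyond it, and the test matrix is hypothetical.

```python
import numpy as np

def symmetric_pinv(M, rcond=1e-12):
    """Pseudo-inverse of a symmetric matrix via eigendecomposition.

    Eigenvalues below rcond * max|eigenvalue| are treated as zero, as is
    usual when the matrix is singular or nearly so.
    """
    w, V = np.linalg.eigh(M)          # LAPACK divide-and-conquer routine
    keep = np.abs(w) > rcond * np.abs(w).max()
    w_inv = np.where(keep, 1.0 / np.where(keep, w, 1.0), 0.0)
    return (V * w_inv) @ V.T          # V diag(w^+) V^T

# Rank-deficient covariance-like stand-in for a tomographic reconstructor.
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))
M = A @ A.T                           # symmetric, rank 4
assert np.allclose(symmetric_pinv(M), np.linalg.pinv(M))
```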

  7. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the Local Discontinuous Galerkin Method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest stage. The Local Discontinuous Galerkin Method is one of the mixed finite element methods, which are more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)

  8. ICP-MS nebulizer performance for analysis of SRS high salt simulated radioactive waste tank solutions (number-sign 3053)

    Jones, V.D.

    1997-01-01

    High Level Radioactive Waste Tanks at the Savannah River Site are high in salt content. The cross-flow nebulizer provided the most stable signal for all salt matrices with the smallest signal loss/suppression due to this matrix. The DIN exhibited a serious lack of tolerance for TDS; possibly due to physical de-tuning of the nebulizer efficiency

  9. A predictive analytic model for high-performance tunneling field-effect transistors approaching non-equilibrium Green's function simulations

    Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard

    2015-01-01

    A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This allows quantifying the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that allows gaining deep insights into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact the TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach

  10. On Interlayer Stability and High-Cycle Simulator Performance of Diamond-Like Carbon Layers for Articulating Joint Replacements

    Kerstin Thorwarth

    2014-06-01

    Diamond-like carbon (DLC) coatings have been proven to be an excellent choice for wear reduction in many technical applications. However, for successful adaptation to the orthopaedic field, layer performance, stability and adhesion in physiologically relevant setups are crucial and not consistently investigated. In vitro wear testing as well as adequate corrosion tests of interfaces and interlayers are of great importance to verify the long term stability of DLC coated load bearing implants in the human body. DLC coatings were deposited on articulating lumbar spinal disks made of CoCr28Mo6 biomedical implant alloy using a plasma-activated chemical vapor deposition (PACVD) process. As an adhesion promoting interlayer, tantalum films were deposited by magnetron sputtering. Wear tests of coated and uncoated implants were performed in physiological solution up to a maximum of 101 million articulation cycles with an amplitude of ±2° and −3/+6° in successive intervals at a preload of 1200 N. The implants were characterized by gravimetry, inductively coupled plasma optical emission spectrometry (ICP-OES) and cross section scanning electron microscopy (SEM) analysis. It is shown that DLC coated surfaces with uncontaminated tantalum interlayers perform very well and no corrosive or mechanical failure could be observed. This also holds true in tests featuring overload and third-body wear by cortical bone chips present in the bearing pairs. Regarding the interlayer tolerance towards interlayer contamination (oxygen), limits for initiation of potential failure modes were established. It was found that mechanical failure is the most critical aspect and this mode is hypothetically linked to the α-β tantalum phase switch induced by increasing oxygen levels as observed by X-ray diffraction (XRD). It is concluded that DLC coatings are a feasible candidate for near zero wear articulations on implants, potentially even surpassing the performance of ceramic vs

  11. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
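
    Both key features, dynamic task assignment and a wide vectorizable inner loop, can be mimicked generically. In the sketch below, Python threads pulling from a shared queue stand in for the dynamic scheduling (illustrating the pattern, not the performance), and a numpy operation over energy groups stands in for the SIMD lanes; the track data and attenuation update are hypothetical, not the actual MOC kernels.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from queue import Empty, Queue

N_GROUPS = 64                                   # SIMD-style vector width
rng = np.random.default_rng(3)
tracks = [rng.uniform(0.1, 1.0, size=(rng.integers(50, 200), N_GROUPS))
          for _ in range(1000)]                 # per-segment optical lengths

def sweep(track):
    """Toy transport sweep along one track, vectorized over groups."""
    flux = np.ones(N_GROUPS)
    for tau in track:                           # segments along the track
        att = np.exp(-tau)
        flux = flux * att + (1.0 - att)         # attenuate toward the source
    return flux

work = Queue()
for t in tracks:
    work.put(t)

def worker(_):
    """Pull tracks dynamically until the queue is empty (load balancing)."""
    total = np.zeros(N_GROUPS)
    while True:
        try:
            track = work.get_nowait()           # no static schedule
        except Empty:
            return total
        total += sweep(track)

with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(worker, range(8)))
print(sum(partials)[:4] / len(tracks))          # mean flux, first 4 groups
```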

  12. Clojure high performance programming

    Kumar, Shantanu

    2013-01-01

    This is a short, practical guide that will teach you everything you need to know to start writing high performance Clojure code.This book is ideal for intermediate Clojure developers who are looking to get a good grip on how to achieve optimum performance. You should already have some experience with Clojure and it would help if you already know a little bit of Java. Knowledge of performance analysis and engineering is not required. For hands-on practice, you should have access to Clojure REPL with Leiningen.

  13. High Performance Concrete

    Traian Oneţ

    2009-01-01

    The paper presents recent studies and research carried out in Cluj-Napoca related to high performance concrete, high strength concrete and self compacting concrete. The purpose of this paper is to review the advantages and disadvantages when a particular concrete type is used. Two concrete recipes are presented, namely one for the concrete used in rigid pavements for roads and another one for self-compacting concrete.

  14. High performance polymeric foams

    Gargiulo, M.; Sorrentino, L.; Iannace, S.

    2008-01-01

    The aim of this work was to investigate the foamability of high-performance polymers (polyethersulfone, polyphenylsulfone, polyetherimide and polyethylene naphthalate). Two different methods have been used to prepare the foam samples: high temperature expansion and two-stage batch process. The effects of processing parameters (saturation time and pressure, foaming temperature) on the densities and microcellular structures of these foams were analyzed by using scanning electron microscopy

  15. Viscoelastic Waves Simulation in a Blocky Medium with Fluid-Saturated Interlayers Using High-Performance Computing

    Sadovskii, Vladimir; Sadovskaya, Oxana

    2017-04-01

    A thermodynamically consistent approach to the description of linear and nonlinear wave processes in a blocky medium, which consists of a large number of elastic blocks interacting with each other via pliant interlayers, is proposed. The mechanical properties of interlayers are defined by means of the rheological schemes of different levels of complexity. Elastic interaction between the blocks is considered in the framework of the linear elasticity theory [1]. The effects of viscoelastic shear in the interblock interlayers are taken into consideration using the Poynting-Thomson rheological scheme. The model of an elastic porous material is used in the interlayers, where the pores collapse if an abrupt compressive stress is applied. On the basis of the Biot equations for a fluid-saturated porous medium, a new mathematical model of a blocky medium is worked out, in which the interlayers provide a convective fluid motion due to the external perturbations. The collapse of pores is modeled within the generalized rheological approach, wherein the mechanical properties of a material are simulated using four rheological elements. Three of them are the traditional elastic, viscous and plastic elements; the fourth element is the so-called rigid contact [2], which is used to describe the behavior of materials with different resistance to tension and compression. Thermodynamic consistency of the equations in interlayers with the equations in blocks guarantees fulfillment of the energy conservation law for a blocky medium as a whole, i.e. the kinetic and potential energy of the system is the sum of the kinetic and potential energies of the blocks and interlayers. As a result of discretization of the equations of the model, a robust computational algorithm is constructed, which is stable because of the thermodynamic consistency of the finite difference equations at a discrete level. The splitting method by the spatial variables and the Godunov gap decay scheme are used in the blocks, the

  16. High performance germanium MOSFETs

    Saraswat, Krishna [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)]. E-mail: saraswat@stanford.edu; Chui, Chi On [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Krishnamohan, Tejas [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Kim, Donghyun [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Nayfeh, Ammar [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Pethe, Abhijit [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States)

    2006-12-15

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (~2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices.

  17. High performance germanium MOSFETs

    Saraswat, Krishna; Chui, Chi On; Krishnamohan, Tejas; Kim, Donghyun; Nayfeh, Ammar; Pethe, Abhijit

    2006-01-01

    Ge is a very promising material as future channel materials for nanoscale MOSFETs due to its high mobility and thus a higher source injection velocity, which translates into higher drive current and smaller gate delay. However, for Ge to become main-stream, surface passivation and heterogeneous integration of crystalline Ge layers on Si must be achieved. We have demonstrated growth of fully relaxed smooth single crystal Ge layers on Si using a novel multi-step growth and hydrogen anneal process without any graded buffer SiGe layer. Surface passivation of Ge has been achieved with its native oxynitride (GeOxNy) and high-permittivity (high-k) metal oxides of Al, Zr and Hf. High mobility MOSFETs have been demonstrated in bulk Ge with high-k gate dielectrics and metal gates. However, due to their smaller bandgap and higher dielectric constant, most high mobility materials suffer from large band-to-band tunneling (BTBT) leakage currents and worse short channel effects. We present novel, Si and Ge based heterostructure MOSFETs, which can significantly reduce the BTBT leakage currents while retaining high channel mobility, making them suitable for scaling into the sub-15 nm regime. Through full band Monte-Carlo, Poisson-Schrodinger and detailed BTBT simulations we show a dramatic reduction in BTBT and excellent electrostatic control of the channel, while maintaining very high drive currents in these highly scaled heterostructure DGFETs. Heterostructure MOSFETs with varying strained-Ge or SiGe thickness, Si cap thickness and Ge percentage were fabricated on bulk Si and SOI substrates. The ultra-thin (∼2 nm) strained-Ge channel heterostructure MOSFETs exhibited >4x mobility enhancements over bulk Si devices and >10x BTBT reduction over surface channel strained SiGe devices

  18. High performance conductometry

    Saha, B.

    2000-01-01

    Inexpensive but high performance systems have emerged progressively for basic and applied measurements in physical and analytical chemistry on one hand, and for on-line monitoring and leak detection in plants and facilities on the other. Salient features of the developments will be presented with specific examples

  19. Danish High Performance Concretes

    Nielsen, M. P.; Christoffersen, J.; Frederiksen, J.

    1994-01-01

    In this paper the main results obtained in the research program High Performance Concretes in the 90's are presented. This program was financed by the Danish government and was carried out in cooperation between The Technical University of Denmark, several private companies, and Aalborg University...... concretes, workability, ductility, and confinement problems....

  20. High performance homes

    Beim, Anne; Vibæk, Kasper Sánchez

    2014-01-01

    . Consideration of all these factors is a precondition for a truly integrated practice and as this chapter demonstrates, innovative project delivery methods founded on the manufacturing of prefabricated buildings contribute to the production of high performance homes that are cost effective to construct, energy...

  1. Investigating the performance of 0-D and 3-D combustion simulation software for modelling an HCCI engine with high air excess ratios

    Gökhan Coşkun

    2017-10-01

    In this study, the performance of zero- and three-dimensional simulation codes used to simulate a homogeneous charge compression ignition (HCCI) engine fueled with Primary Reference Fuel (PRF, 85% iso-octane and 15% n-heptane) was investigated. The 0-D code, SRM Suite (Stochastic Reactor Model), which simulates engine combustion using a stochastic reactor model technique, was used. Ansys-Fluent, a computational fluid dynamics (CFD) code, was used for the 3-D engine combustion simulations. The simulations were evaluated for both commercial codes in terms of combustion, heat transfer and emissions in an HCCI engine. A chemical kinetic mechanism developed by Tsurushima, including 33 species and 38 reactions for the surrogate PRF fuel, was used for the combustion simulations. The analysis showed that each code has advantages over the other.

  2. arXiv Analytical methods for vacuum simulations in high energy accelerators for future machines based on LHC performances

    Aichinger, Ida; Chiggiato, Paolo

    The Future Circular Collider (FCC), currently in the design phase, will address many outstanding questions in particle physics. The technology to succeed in this 100 km circumference collider goes beyond present limits. Ultra-high vacuum conditions in the beam pipe are one essential requirement for smooth operation. Different physical phenomena, such as photon-, ion- and electron-induced desorption and thermal outgassing of the chamber walls, challenge this requirement. This paper presents an analytical model and a computer code PyVASCO that supports the design of a stable vacuum system by providing an overview of all the gas dynamics happening inside the beam pipes. A mass balance equation system describes the density distribution of the four dominating gas species $\text{H}_2, \text{CH}_4$, $\text{CO}$ and $\text{CO}_2$. An appropriate solving algorithm is discussed in detail and a validation of the model including a comparison of the output to the readings of LHC gauges is presented. This enables the eval...

  3. The effects of laboratory inquiry-based experiments and computer simulations on high school students' performance and cognitive load in physics teaching

    Radulović Branka

    2016-01-01

    The main goal of this study was to examine the extent to which different teaching instructions focused on the application of laboratory inquiry-based experiments (LIBEs) and interactive computer-based simulations (ICBSs) improved understanding of physics content in high school students, compared to a traditional teaching approach. Additionally, the study examined how the applied instructions influenced students' assessment of invested cognitive load. A convenience sample for this research included 187 high school students. A multiple-choice test of knowledge was used as a measuring instrument for the students' performance. Each task in the test was followed by a five-point Likert-type scale for the evaluation of invested cognitive load. In addition to descriptive statistics, one-factor analysis of variance and Tukey's post-hoc test were computed to determine significant differences in performance and cognitive load and to calculate the instructional efficiency of the applied instructional designs. The findings indicate that teaching instructions based on the use of LIBEs and ICBSs contribute equally to an increase in students' performance and to the reduction of cognitive load, unlike traditional teaching of physics. The results obtained by the students from the LIBEs and ICBSs groups for the calculated instructional efficiency suggest that the applied teaching strategies represent effective teaching instructions. [Project of the Ministry of Science of the Republic of Serbia, No. 179010: The Quality of the Education System in Serbia from a European Perspective]
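
    The instructional efficiency computed here is presumably the standard measure from the cognitive-load literature (Paas and Van Merriënboer); the attribution is an assumption, since the abstract does not name the formula. It combines standardized performance and effort scores:

```latex
% Instructional efficiency from standardized (z-score) test performance P
% and invested mental effort (cognitive load) E; positive values mean
% above-average performance achieved with below-average effort.
E_{\mathrm{instr}} = \frac{z_{P} - z_{E}}{\sqrt{2}}
```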

  4. A new high-performance 3D multiphase flow code to simulate volcanic blasts and pyroclastic density currents: example from the Boxing Day event, Montserrat

    Ongaro, T. E.; Clarke, A.; Neri, A.; Voight, B.; Widiwijayanti, C.

    2005-12-01

    For the first time the dynamics of directed blasts from explosive lava-dome decompression have been investigated by means of transient, multiphase flow simulations in 2D and 3D. Multiphase flow models developed for the analysis of pyroclastic dispersal from explosive eruptions have been so far limited to 2D axisymmetric or Cartesian formulations which cannot properly account for important 3D features of the volcanic system such as complex morphology and fluid turbulence. Here we use a new parallel multiphase flow code, named PDAC (Pyroclastic Dispersal Analysis Code) (Esposti Ongaro et al., 2005), able to simulate the transient and 3D thermofluid-dynamics of pyroclastic dispersal produced by collapsing columns and volcanic blasts. The code solves the equations of the multiparticle flow model of Neri et al. (2003) on 3D domains extending up to several kilometres and includes a new description of the boundary conditions over topography which is automatically acquired from a DEM. The initial conditions are represented by a compact volume of gas and pyroclasts, with clasts of different sizes and densities, at high temperature and pressure. Different dome porosities and pressurization models were tested in 2D to assess the sensitivity of the results to the distribution of initial gas pressure, and to the total mass and energy stored in the dome, prior to 3D modeling. The simulations have used topographies appropriate for the 1997 Boxing Day directed blast on Montserrat, which eradicated the village of St. Patricks. Some simulations tested the runout of pyroclastic density currents over the ocean surface, corresponding to observations of over-water surges to several km distances at both locations. The PDAC code was used to perform 3D simulations of the explosive event on the actual volcano topography. The results highlight the strong topographic control on the propagation of the dense pyroclastic flows, the triggering of thermal instabilities, and the elutriation

  5. Aircraft Performance for Open Air Traffic Simulations

    Metz, I.C.; Hoekstra, J.M.; Ellerbroek, J.; Kugler, D.

    2016-01-01

    The BlueSky Open Air Traffic Simulator developed by the Control & Simulation section of TU Delft aims at supporting research for analysing Air Traffic Management concepts by providing an open source simulation platform. The goal of this study was to complement BlueSky with aircraft performance

  6. High performance in software development

    CERN. Geneva; Haapio, Petri; Liukkonen, Juha-Matti

    2015-01-01

    What are the ingredients of high-performing software? Software development, especially for large high-performance systems, is one of the most complex tasks mankind has ever attempted. Technological change leads to huge opportunities but challenges our old ways of working. Processing large data sets, possibly in real time or with other tight computational constraints, requires an efficient solution architecture. Efficiency requirements span from the distributed storage and large-scale organization of computation and data onto the lowest level of processor and data bus behavior. Integrating performance behavior over these levels is especially important when the computation is resource-bounded, as it is in numerics: physical simulation, machine learning, estimation of statistical models, etc. For example, memory locality and utilization of vector processing are essential for harnessing the computing power of modern processor architectures due to the deep memory hierarchies of modern general-purpose computers. As a r...

  7. High-Performance Networking

    CERN. Geneva

    2003-01-01

    The series will start with a historical introduction about what people saw as high performance message communication in their time and how that developed into what is today known as standard computer network communication. It will be followed by a far more technical part that uses the High Performance Computer Network standards of the 90's, with 1 Gbit/sec systems, as an introduction to an in-depth explanation of the three new 10 Gbit/s network and interconnect technology standards that already exist or are emerging. Where necessary for a good understanding, some sidesteps will be included to explain important protocols as well as some details of the relevant Wide Area Network (WAN) standards, including some basics of wavelength multiplexing (DWDM). Some remarks will be made concerning the rapidly expanding applications of networked storage.

  8. The Effect of Natural or Simulated Altitude Training on High-Intensity Intermittent Running Performance in Team-Sport Athletes: A Meta-Analysis.

    Hamlin, Michael J; Lizamore, Catherine A; Hopkins, Will G

    2018-02-01

    While adaptation to hypoxia at natural or simulated altitude has long been used with endurance athletes, it has only recently gained popularity for team-sport athletes. To analyse the effect of hypoxic interventions on high-intensity intermittent running performance in team-sport athletes. A systematic literature search of five journal databases was performed. Percent change in performance (distance covered) in the Yo-Yo intermittent recovery test (level 1 and level 2 were used without differentiation) in hypoxic (natural or simulated altitude) and control (sea level or normoxic placebo) groups was meta-analyzed with a mixed model. The modifying effects of study characteristics (type and dose of hypoxic exposure, training duration, post-altitude duration) were estimated with fixed effects, random effects allowed for repeated measurement within studies and residual real differences between studies, and the standard-error weighting factors were derived or imputed via standard deviations of change scores. Effects and their uncertainty were assessed with magnitude-based inference, with a smallest important improvement of 4% estimated via between-athlete standard deviations of performance at baseline. Ten studies qualified for inclusion, but two were excluded owing to small sample size and risk of publication bias. Hypoxic interventions occurred over a period of 7-28 days, and the range of total hypoxic exposure (in effective altitude-hours) was 4.5-33 km·h in the intermittent-hypoxia studies and 180-710 km·h in the live-high studies. There were 11 control and 15 experimental study-estimates in the final meta-analysis. Training effects were moderate and very likely beneficial in the control groups at 1 week (20 ± 14%, percent estimate, ± 90% confidence limits) and 4-week post-intervention (25 ± 23%). The intermittent and live-high hypoxic groups experienced additional likely beneficial gains at 1 week (13 ± 16%; 13 ± 15%) and 4-week post

  9. Hydraulic performance numerical simulation of high specific speed mixed-flow pump based on quasi three-dimensional hydraulic design method

    Zhang, Y X; Su, M; Hou, H C; Song, P F

    2013-01-01

    This research adopts the quasi three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump in order to verify the hydraulic design method and improve hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iterative solution of the continuity and momentum equations of the fluid. The inverse problem is completed by using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the impeller shape and flow field information are finally obtained when the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated by using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades and the reflux phenomenon are analyzed. The numerical results show that the quasi three-dimensional hydraulic design method for the high specific speed mixed-flow pump improves the hydraulic performance, reveals the main characteristics of the internal flow of the mixed-flow pump, and provides a basis for judging the rationality of the hydraulic design and for the improvement and optimization of the hydraulic model.

  10. High performance data transfer

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, between clusters we have achieved almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  11. Virtual Design Studio (VDS) - Development of an Integrated Computer Simulation Environment for Performance Based Design of Very-Low Energy and High IEQ Buildings

    Chen, Yixing [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Zhang, Jianshun [Syracuse Univ., NY (United States); Pelken, Michael [Syracuse Univ., NY (United States); Gu, Lixing [Univ. of Central Florida, Orlando, FL (United States); Rice, Danial [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Meng, Zhaozhou [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Semahegn, Shewangizaw [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Feng, Wei [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Ling, Francesca [Syracuse Univ., NY (United States); Shi, Jun [Building Energy and Environmental Systems Lab. (BEESL), Syracuse, NY (United States); Henderson, Hugh [CDH Energy, Cazenovia, NY (United States)

    2013-09-01

    Executive Summary: The objective of this study was to develop a “Virtual Design Studio (VDS)”: a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and a high level of sustainability. The VDS is intended to assist collaborating architects, engineers and project management team members throughout the process, from the early phases to the detailed building design stages. It can be used to plan design tasks and workflow, and to evaluate the potential impacts of various green building strategies on building performance, using state-of-the-art simulation tools as well as industrial/professional standards and guidelines for green building system design. A multi-disciplinary research team that included architects, engineers, and software developers was engaged in the development of the VDS. Based on a review and analysis of existing professional practices in building systems design, particularly those used in the U.S., Germany and the UK, a generic process for performance-based building design, construction and operation was proposed. It divides the whole process into five distinct stages: Assess, Define, Design, Apply, and Monitoring (ADDAM). The current VDS is focused on the first three stages. The VDS considers building design as a multi-dimensional process, involving multiple design teams, design factors, and design stages. The intersection among these three dimensions defines a specific design task in terms of “who”, “what” and “when”. It also considers building design as a multi-objective process that aims to enhance five aspects of performance for green building systems: site sustainability, materials and resource efficiency, water utilization efficiency, energy efficiency and impacts to the atmospheric environment, and IEQ. The current VDS development has been limited to energy efficiency and IEQ performance, with particular focus

  12. Photovoltaic array performance simulation models

    Menicucci, D. F.

    1986-09-15

    The experience of the solar industry confirms that, despite recent cost reductions, the profitability of photovoltaic (PV) systems is often marginal and the configuration and sizing of a system is a critical problem for the design engineer. Construction and evaluation of experimental systems are expensive and seldom justifiable. A mathematical model or computer-simulation program is a desirable alternative, provided reliable results can be obtained. Sandia National Laboratories, Albuquerque (SNLA), has been studying PV-system modeling techniques in an effort to develop an effective tool to be used by engineers and architects in the design of cost-effective PV systems. This paper reviews two of the sources of error found in previous PV modeling programs, presents the remedies developed to correct these errors, and describes a new program that incorporates these improvements.

  13. Feasibility of performing high resolution cloud-resolving simulations of historic extreme events: The San Fruttuoso (Liguria, Italy) case of 1915.

    Parodi, Antonio; Boni, Giorgio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco

    2017-04-01

    Recent studies show that highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events, as increased atmospheric temperatures generally support increases in water vapor content. Analyses of available historical records do not provide a univocal answer, since these records are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations, with 1 km horizontal grid spacing, of a historic extreme event that occurred over Liguria (Italy): the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the Reanalysis members may not adequately represent the amount of cool air over the Po Plain outflowing into the Ligurian Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to Reanalysis products, unconventional data such as historical meteorological bulletins, newspapers and even photographs can be very valuable sources of knowledge in the reconstruction of past extreme events.

  14. High performance sapphire windows

    Bates, Stephen C.; Liou, Larry

    1993-02-01

    High-quality, wide-aperture optical access is usually required for the advanced laser diagnostics that can now make a wide variety of non-intrusive measurements of combustion processes. Specially processed and mounted sapphire windows are proposed to provide this optical access to extreme environments. Through surface treatments and proper thermal stress design, single-crystal sapphire can be a mechanically equivalent replacement for high-strength steel. A prototype sapphire window and mounting system have been developed in a successful NASA SBIR Phase 1 project. A large and reliable increase in sapphire design strength (as much as 10x) has been achieved, and the initial specifications necessary for these gains have been defined. Failure testing of small windows has conclusively demonstrated the increased sapphire strength, indicating that a nearly flawless surface polish is the primary cause of strengthening, while an unusual mounting arrangement also contributes significantly to a larger effective strength. Phase 2 work will complete the specification and demonstration of these windows, and will fabricate a set for use at NASA. The enhanced capabilities of these high performance sapphire windows will enable many diagnostic capabilities not previously possible, as well as new applications for sapphire.

  15. Key performance indicators for successful simulation projects

    Jahangirian, M; Taylor, SJE; Young, T; Robinson, S

    2016-01-01

    There are many factors that may contribute to the successful delivery of a simulation project. To provide a structured approach to assessing the impact various factors have on project success, we propose a top-down framework whereby 15 Key Performance Indicators (KPI) are developed that represent the level of successfulness of simulation projects from various perspectives. They are linked to a set of Critical Success Factors (CSF) as reported in the simulation literature. A single measure cal...

  16. Performance evaluation of granular activated carbon system at Pantex: Rapid small-scale column tests to simulate removal of high explosives from contaminated groundwater

    Henke, J.L.; Speitel, G.E. [Univ. of Texas, Austin, TX (United States). Dept. of Civil Engineering

    1998-08-01

    A granular activated carbon (GAC) system is now in operation at Pantex to treat groundwater from the perched aquifer that is contaminated with high explosives. The main chemicals of concern are RDX and HMX. The system consists of two GAC columns in series. Each column is charged with 10,000 pounds of Northwestern LB-830 GAC. At the design flow rate of 325 gpm, the hydraulic loading is 6.47 gpm/ft², and the empty bed contact time is 8.2 minutes per column. Currently, the system is operating at less than 10% of its design flow rate, although flow rate increases are expected in the relatively near future. This study had several objectives: Estimate the service life of the GAC now in use at Pantex; Screen several GACs to provide a recommendation on the best GAC for use at Pantex when the current GAC is exhausted and is replaced; Determine the extent to which natural organic matter in the Pantex groundwater fouls GAC adsorption sites, thereby decreasing the adsorption capacity for high explosives; and Determine if computer simulation models could match the experimental results, thereby providing another tool to follow system performance.
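
    The reported hydraulics are internally consistent and can be checked with a little arithmetic; the bed geometry below is inferred from the stated numbers, not taken from the report:

      # Consistency check of the reported GAC column hydraulics.
      flow_gpm = 325.0                # design flow rate (gpm)
      loading_gpm_ft2 = 6.47          # hydraulic loading (gpm/ft^2)
      ebct_min = 8.2                  # empty bed contact time per column (min)

      area_ft2 = flow_gpm / loading_gpm_ft2      # cross-section: ~50.2 ft^2
      bed_volume_gal = flow_gpm * ebct_min       # empty-bed volume: ~2,665 gal
      bed_volume_ft3 = bed_volume_gal / 7.48     # 1 ft^3 = 7.48 gal: ~356 ft^3
      bed_depth_ft = bed_volume_ft3 / area_ft2   # implied bed depth: ~7.1 ft

      print(area_ft2, bed_volume_ft3, bed_depth_ft)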

  18. SLC positron source: Simulation and performance

    Pitthan, R.; Braun, H.; Clendenin, J.E.; Ecklund, S.D.; Helm, R.H.; Kulikov, A.V.; Odian, A.C.; Pei, G.X.; Ross, M.C.; Woodley, M.D.

    1991-06-01

    Performance of the source was found to be in good general agreement with computer simulations with S-band acceleration, and, where it was not, the simulations led to the identification of problems, in particular the underestimated impact of linac misalignments caused by the 1989 Loma Prieta earthquake. 13 refs., 7 figs.

  19. Team Culture and Business Strategy Simulation Performance

    Ritchie, William J.; Fornaciari, Charles J.; Drew, Stephen A. W.; Marlin, Dan

    2013-01-01

    Many capstone strategic management courses use computer-based simulations as core pedagogical tools. Simulations are touted as assisting students in developing much-valued skills in strategy formation, implementation, and team management in the pursuit of superior strategic performance. However, despite their rich nature, little is known regarding…

  20. Building performance simulation for sustainable buildings

    Hensen, J.L.M.

    2010-01-01

    This paper aims to provide a general view of the background and current state of building performance simulation, which has the potential to deliver, directly or indirectly, substantial benefits to building stakeholders and to the environment. However the building simulation community faces many

  1. High performance computing on vector systems

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  2. 24th & 25th Joint Workshop on Sustained Simulation Performance

    Bez, Wolfgang; Focht, Erich; Gienger, Michael; Kobayashi, Hiroaki

    2017-01-01

    This book presents the state of the art in High Performance Computing on modern supercomputer architectures. It addresses trends in hardware and software development in general, as well as the future of High Performance Computing systems and heterogeneous architectures. The contributions cover a broad range of topics, from improved system management to Computational Fluid Dynamics, High Performance Data Analytics, and novel mathematical approaches for large-scale systems. In addition, they explore innovative fields like coupled multi-physics and multi-scale simulations. All contributions are based on selected papers presented at the 24th Workshop on Sustained Simulation Performance, held at the University of Stuttgart’s High Performance Computing Center in Stuttgart, Germany in December 2016 and the subsequent Workshop on Sustained Simulation Performance, held at the Cyberscience Center, Tohoku University, Japan in March 2017.

  3. Hazard-to-Risk: High-Performance Computing Simulations of Large Earthquake Ground Motions and Building Damage in the Near-Fault Region

    Miah, M.; Rodgers, A. J.; McCallen, D.; Petersson, N. A.; Pitarka, A.

    2017-12-01

    We are running high-performance computing (HPC) simulations of ground motions for large (magnitude, M=6.5-7.0) earthquakes in the near-fault region, together with the response of steel moment frame buildings throughout the near-fault domain. For ground motions, we are using SW4, a fourth order summation-by-parts finite difference time-domain code running on 10,000-100,000's of cores. Earthquake ruptures are generated using the Graves and Pitarka (2017) method. We validated ground motion intensity measurements against Ground Motion Prediction Equations. We considered two events (M=6.5 and 7.0) for vertical strike-slip ruptures with three-dimensional (3D) basin structures, including stochastic heterogeneity. We have also considered M7.0 scenarios for a Hayward Fault rupture, which affects the San Francisco Bay Area and northern California, using both 1D and 3D earth structure. Dynamic, inelastic response of canonical buildings is computed with NEVADA, a nonlinear, finite-deformation finite element code. Canonical buildings include 3-, 9-, 20- and 40-story steel moment frame buildings. Damage potential is tracked by the peak inter-story drift (PID) ratio, which measures the maximum displacement between adjacent floors of the building and is strongly correlated with damage. PID ratios greater than 1.0% generally indicate non-linear response and permanent deformation of the structure. We also track roof displacement to identify permanent deformation. PID (damage) for a given earthquake scenario (M, slip distribution, hypocenter) is spatially mapped throughout the SW4 domain with 1-2 km resolution. Results show that in the near fault region building damage is correlated with peak ground velocity (PGV), while farther away (> 20 km) it is better correlated with peak ground acceleration (PGA). We also show how simulated ground motions have peaks in the response spectra that shift to longer periods for larger magnitude events and for locations of forward directivity, as has been reported by
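
    The damage metric is simple to state precisely: the peak inter-story drift ratio is the maximum, over time and stories, of the relative displacement of adjacent floors divided by the story height. A minimal sketch (array shapes and names are assumed, not from the paper):

      import numpy as np

      def peak_interstory_drift_ratio(displacements, story_heights):
          """Peak inter-story drift (PID) ratio, in percent.

          displacements: (n_steps, n_floors) lateral floor displacement histories
          story_heights: (n_floors - 1,) height of each story
          """
          drift = np.diff(displacements, axis=1)        # between adjacent floors
          ratio = np.abs(drift) / np.asarray(story_heights)
          return 100.0 * ratio.max()                    # peak over time and stories

      disp = np.array([[0.00, 0.02, 0.05],
                       [0.00, 0.04, 0.07]])             # meters
      heights = np.array([3.0, 3.0])                    # meters
      print(peak_interstory_drift_ratio(disp, heights)) # ~1.33 (%): likely damage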

  4. R high performance programming

    Lim, Aloysius

    2015-01-01

    This book is for programmers and developers who want to improve the performance of their R programs by making them run faster with large data sets or who are trying to solve a pesky performance problem.

  5. The COD Model: Simulating Workgroup Performance

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of resting on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.
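
    The claimed complexity ordering of the three interdependence modes can be illustrated with a toy count of cross-agent coordination links; this little formalization is ours, for illustration, and is not the COD model itself:

      # Coordination links among n agents under the three interdependence modes.
      def coordination_links(n, mode):
          if mode == "parallel":       # independent tasks: no cross-agent links
              return 0
          if mode == "sequential":     # each agent waits on one predecessor
              return n - 1
          if mode == "reciprocal":     # mutual adjustment between every pair
              return n * (n - 1)
          raise ValueError(mode)

      for mode in ("parallel", "sequential", "reciprocal"):
          print(mode, coordination_links(6, mode))   # 0, 5, 30: same ordering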

  6. Improving the Performance of the Extreme-scale Simulator

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2014-01-01

    Investigating the performance of parallel applications at scale on future high-performance computing (HPC) architectures, and the performance impact of different architecture choices, is an important component of HPC hardware/software co-design. The Extreme-scale Simulator (xSim) is a simulation-based toolkit for investigating the performance of parallel applications at scale. xSim scales to millions of simulated Message Passing Interface (MPI) processes. The overhead introduced by a simulation tool is an important performance and productivity aspect. This paper documents two improvements to xSim: (1) a new deadlock resolution protocol to reduce the parallel discrete event simulation management overhead and (2) a new simulated MPI message matching algorithm to reduce the oversubscription management overhead. The results clearly show a significant performance improvement, such as reducing the simulation overhead for running the NAS Parallel Benchmark suite inside the simulator from 1,020% to 238% for the conjugate gradient (CG) benchmark and from 102% to 0% for the embarrassingly parallel (EP) benchmark, as well as from 37,511% to 13,808% for CG and from 3,332% to 204% for EP with accurate process failure simulation.
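
    The quoted percentages follow the usual definition of simulation overhead relative to native execution time; a worked example (this definition is assumed, as the abstract does not state it explicitly):

      def overhead_percent(t_simulated, t_native):
          # Relative slowdown of running the application inside the simulator.
          return 100.0 * (t_simulated - t_native) / t_native

      # A CG run taking 11.2x native time corresponds to 1,020% overhead,
      # and 3.38x native time to 238%:
      print(overhead_percent(11.2, 1.0))   # 1020.0
      print(overhead_percent(3.38, 1.0))   # ~238.0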

  7. Maintenance Personnel Performance Simulation (MAPPS) model

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.; Haas, P.M.

    1984-01-01

    A stochastic computer model for simulating the actions and behavior of nuclear power plant maintenance personnel is described. The model considers personnel, environmental, and motivational variables to yield predictions of maintenance performance quality and time to perform. The model has been fully developed and sensitivity tested. Additional evaluation of the model is now taking place.

  8. High performance work practices, innovation and performance

    Jørgensen, Frances; Newton, Cameron; Johnston, Kim

    2013-01-01

    Research spanning nearly 20 years has provided considerable empirical evidence for relationships between High Performance Work Practices (HPWPs) and various measures of performance including increased productivity, improved customer service, and reduced turnover. What stands out from…, and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a practice that has been identified in HPWP literature and potential variables that can facilitate or hinder the effects of these practices of innovation- and performance…

  9. 20th Joint Workshop on Sustained Simulation Performance

    Bez, Wolfgang; Focht, Erich; Patel, Nisarg; Kobayashi, Hiroaki

    2016-01-01

    The book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It explores general trends in hardware and software development, and then focuses specifically on the future of high-performance systems and heterogeneous architectures. It also covers applications such as computational fluid dynamics, material science, medical applications and climate research and discusses innovative fields like coupled multi-physics or multi-scale simulations. The papers included were selected from the presentations given at the 20th Workshop on Sustained Simulation Performance at the HLRS, University of Stuttgart, Germany in December 2015, and the subsequent Workshop on Sustained Simulation Performance at Tohoku University in February 2016.

  10. Python high performance programming

    Lanaro, Gabriele

    2013-01-01

    An exciting, easy-to-follow guide illustrating techniques to boost the performance of Python code, and their applications, with plenty of hands-on examples. If you are a programmer who likes the power and simplicity of Python and would like to use this language for performance-critical applications, this book is ideal for you. All that is required is a basic knowledge of the Python programming language. The book covers basic and advanced topics, so it will be useful whether you are a new or a seasoned Python developer.

  11. High Performance Computing Multicast

    2012-02-01


  12. NGINX high performance

    Sharma, Rahul

    2015-01-01

    System administrators, developers, and engineers looking for ways to achieve maximum performance from NGINX will find this book beneficial. If you are looking for solutions such as how to handle more users from the same system or load your website pages faster, then this is the book for you.

  13. Enhancement of High-Intensity Actions and Physical Performance During a Simulated Brazilian Jiu-Jitsu Competition With a Moderate Dose of Caffeine.

    Diaz-Lara, Francisco Javier; Del Coso, Juan; Portillo, Javier; Areces, Francisco; García, Jose Manuel; Abián-Vicén, Javier

    2016-10-01

    Although caffeine is one of the most commonly used substances in combat sports, information about its ergogenic effects on these disciplines is very limited. The aim was to determine the effectiveness of ingesting a moderate dose of caffeine to enhance overall performance during a simulated Brazilian jiu-jitsu (BJJ) competition. Fourteen elite BJJ athletes participated in a double-blind, placebo-controlled experimental design. In random order, the athletes ingested either 3 mg/kg body mass of caffeine or a placebo (cellulose, 0 mg/kg) and performed 2 simulated BJJ combats (with 20 min rest between them), following official BJJ rules. Specific physical tests such as maximal handgrip dynamometry, maximal height during a countermovement jump, permanence during a maximal static-lift test, peak power in a bench-press exercise, and blood lactate concentration were measured at 3 specific times: before the first combat and immediately after the first and second combats. The combats were video-recorded to analyze fight actions. After the caffeine ingestion, participants spent more time in offensive actions in both combats and revealed higher blood lactate values (P < 0.05). Performance in all physical tests carried out before the first combat was enhanced with caffeine (P < 0.05), although some of these differences between caffeine and placebo did not persist after the combats. Caffeine might be an effective ergogenic aid for improving intensity and physical performance during successive elite BJJ combats.

  14. Simulating Performance Risk for Lighting Retrofit Decisions

    Jia Hu

    2015-05-01

    In building retrofit projects, dynamic simulations are performed to simulate building performance. Uncertainty may negatively affect model calibration and predicted lighting energy savings, which increases the chance of default on performance-based contracts. Therefore, the aim of this paper is to develop a simulation-based method that can analyze lighting performance risk in lighting retrofit decisions. The method uses a surrogate model, constructed by adaptively selecting sample points and generating approximation surfaces with fast computing time, as a replacement for the computation-intensive process. A statistical method is developed to generate extreme weather profiles based on 20 years of historical weather data. A stochastic occupancy model was created using actual occupancy data to generate realistic occupancy patterns. Energy usage of lighting and of heating, ventilation, and air conditioning (HVAC) is simulated using EnergyPlus. The method can evaluate the influence of different risk factors (e.g., variation of luminaire input wattage, varying weather conditions) on lighting and HVAC energy consumption and lighting electricity demand. Probability distributions are generated to quantify the risk values. A case study was conducted to demonstrate and validate the methods. The surrogate model is a good solution for quantifying the risk factors and the probability distribution of building performance.
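
    A minimal sketch of the risk-quantification idea: sample the risk factors, evaluate a cheap surrogate in place of a full EnergyPlus run, and read risk off the resulting probability distribution. The surrogate and all coefficients below are invented stand-ins for the paper's adaptively sampled approximation surfaces:

      import numpy as np

      rng = np.random.default_rng(0)

      def surrogate_kwh(lighting_w, outdoor_temp_c, occupancy):
          # Stand-in surrogate for the EnergyPlus model (made-up coefficients).
          return (120.0 + 0.8 * lighting_w
                  + 2.5 * max(outdoor_temp_c - 18.0, 0.0)   # cooling-load proxy
                  + 40.0 * occupancy)

      n = 10_000
      lighting = rng.normal(100.0, 5.0, n)   # luminaire input wattage variation
      temp = rng.normal(24.0, 6.0, n)        # weather variation
      occ = rng.beta(2.0, 5.0, n)            # stochastic occupancy fraction

      energy = [surrogate_kwh(w, t, o) for w, t, o in zip(lighting, temp, occ)]
      print(np.percentile(energy, [5, 50, 95]))   # risk quantiles of energy use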

  15. Performance Optimization of the ATLAS Detector Simulation

    AUTHOR|(CDS)2091018

    In the thesis at hand the current performance of the ATLAS detector simulation, part of the Athena framework, is analyzed and possible optimizations are examined. For this purpose the event-based sampling profiler VTune Amplifier by Intel is utilized. As the most important metric to measure improvements, the total execution time of the simulation of $t\bar{t}$ events is also considered. All efforts are focused on structural changes which do not influence the simulation output and can be attributed to CPU-specific issues, especially front-end stalls and vectorization. The most promising change is the activation of profile-guided optimization for Geant4, which is a critical external dependency of the simulation. Profile-guided optimization gives an average improvement of 8.9% and 10.0% for the two considered cases, at the cost of one additional compilation (instrumented binaries) and execution (training to obtain profiling data) at build time.

  16. CIGS Solar Cells for Space Applications: Numerical Simulation of the Effect of Traps Created by High-Energy Electron and Proton Irradiation on the Performance of Solar Cells

    Dabbabi, Samar; Ben Nasr, Tarek; Turki Kamoun, Najoua

    2018-02-01

    Numerical simulation is carried out using the Silvaco ATLAS software to predict the effect of 1-MeV electron and 4-MeV proton irradiation on the performance of a Cu(In, Ga)Se2 (CIGS) solar cell operating under the air mass zero (AM0) spectrum. As a consequence of irradiation, two types of traps are induced: donor-type and acceptor-type. Only the donor-type trap is found responsible for the degradation of the open-circuit voltage (Voc), fill factor (FF) and efficiency (η), while the short-circuit current (Jsc) remains essentially unaffected. The validity of the simulation model is verified by comparison with experimental data. This article shows that CIGS solar cells are suited for space applications.

  17. Performance comparison of low and high temperature polymer electrolyte membrane fuel cells. Experimental examinations, modelling and numerical simulation; Leistungsvergleich von Nieder- und Hochtemperatur-Polymerelektrolytmembran-Brennstoffzellen. Experimentelle Untersuchungen, Modellierung und numerische Simulation

    Loehn, Helmut

    2010-11-03

    danger of washing out of the phosphoric acid. In an additional test series the Celtec-P-1000 HT-MEA was subjected to temperature change cycles (40-160 °C), which led to irreversible voltage losses. In a final test series, performance tests were carried out with a HT-PEM fuel cell stack (16 cells / 1 kW), developed at the fuel cell research centre of Volkswagen with a special gas diffusion electrode intended to avoid degradation at low temperatures. In these examinations no irreversible voltage losses could be detected, but the tests had to be aborted because of leakage problems. The insight gained from the experimental examinations into the superior operating behaviour and the further advantages of the HT-PEMFC compared with the LT-PEMFC was decisive for the construction of a simulation model of a single HT-PEM fuel cell in the theoretical part of this thesis; the model is also intended to serve as a process simulation model for the computer-based development of a virtual fuel cell within the interdisciplinary project “Virtual Fuel Cell” at TU Darmstadt. The model is a numerical 2D “along the channel” model built with the finite element software COMSOL Multiphysics (version 3.5a). The stationary, single-phase model comprises ten dependent variables in seven application modules in a highly complex, coupled nonlinear system of equations with 33713 degrees of freedom (1675 rectangular elements with 1768 nodes). The simulation model describes the mass transport processes and the electrochemical reactions in a HT-PEM fuel cell with good accuracy, and the model was validated by comparing its results with experimental data. The 2D model is thus suitable in principle as a process simulation model for the design of a virtual HT-PEM fuel cell. (orig.)

  18. High performance proton accelerators

    Favale, A.J.

    1989-01-01

    In concert with this theme, this paper briefly outlines how Grumman, over the past 4 years, has evolved from a company that designed and fabricated a Radio Frequency Quadrupole (RFQ) accelerator from Los Alamos National Laboratory (LANL) physics designs and specifications to a company that, as prime contractor, is designing, fabricating, assembling and commissioning the US Army Strategic Defense Command's (USA SDC) Continuous Wave Deuterium Demonstrator (CWDD) accelerator as a turn-key operation. In the case of the RFQ, LANL scientists performed the physics analysis, established the specifications, supported Grumman on the mechanical design, conducted the RFQ tuning and tested the RFQ at their laboratory. For the CWDD program, Grumman has responsibility for the physics and engineering designs, assembly, testing and commissioning, albeit with the support of consultants from LANL, Lawrence Berkeley Laboratory (LBL) and Brookhaven National Laboratory. In addition, Culham Laboratory and LANL are team members on CWDD. LANL scientists have reviewed the physics design, as has a USA SDC review board. 9 figs

  19. High-performance computing using FPGAs

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  20. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability covering the range of wave propagation required for nuclear explosion monitoring (NEM), from the buried nuclear device to the seismic sensor, and thereby to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM, including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of a UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNEs in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  1. A Simulation Approach for Performance Validation during Embedded Systems Design

    Wang, Zhonglei; Haberl, Wolfgang; Herkersdorf, Andreas; Wechs, Martin

    Due to the time-to-market pressure, it is highly desirable to design hardware and software of embedded systems in parallel. However, hardware and software are developed mostly using very different methods, so that performance evaluation and validation of the whole system is not an easy task. In this paper, we propose a simulation approach to bridge the gap between model-driven software development and simulation based hardware design, by merging hardware and software models into a SystemC based simulation environment. An automated procedure has been established to generate software simulation models from formal models, while the hardware design is originally modeled in SystemC. As the simulation models are annotated with timing information, performance issues are tackled in the same pass as system functionality, rather than in a dedicated approach.
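
    The timing-annotation idea can be caricatured in a few lines: software tasks advance a simulated clock by their annotated execution times, so function and timing are evaluated in one pass. Python stands in for the SystemC kernel here, and all names and times are invented:

      import heapq

      class Simulator:
          # Minimal discrete-event kernel: events are (time, seq, action).
          def __init__(self):
              self.now, self._seq, self._queue = 0.0, 0, []

          def schedule(self, delay, action):
              heapq.heappush(self._queue, (self.now + delay, self._seq, action))
              self._seq += 1

          def run(self):
              while self._queue:
                  self.now, _, action = heapq.heappop(self._queue)
                  action()

      sim = Simulator()

      def sw_task(name, annotated_time_us, then=None):
          # The task "costs" annotated_time_us of simulated time, mimicking
          # timing-annotated software models running on a SystemC kernel.
          def body():
              print(f"{sim.now:8.1f} us  start {name}")
              if then is not None:
                  sim.schedule(annotated_time_us, then)
          return body

      done = lambda: print(f"{sim.now:8.1f} us  done")
      sim.schedule(0.0, sw_task("decode", 12.5, then=sw_task("filter", 40.0, then=done)))
      sim.run()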

  2. Equipment and performance upgrade of compact nuclear simulator

    Park, J. C.; Kwon, K. C.; Lee, D. Y.; Hwang, I. K.; Park, W. M.; Cha, K. H.; Song, S. J.; Lee, J. W.; Kim, B. G.; Kim, H. J.

    1999-01-01

    The simulator at the Nuclear Training Center in KAERI became old and was no longer used effectively for nuclear-related training and research owing to problems such as aging of the equipment, the difficulty and high cost of obtaining consumables, and a shortage of personnel able to handle the old equipment. To solve these problems, this study recovered the functions of the simulator through the technical design and replacement of components with new ones. Our tests after the replacement showed the same simulation status as before, and the new graphic displays added to the simulator were effective for training and easy to maintain. This study is meaningful in demonstrating how to upgrade nuclear training simulators that have lost functionality owing to obsolescence and the unavailability of components.

  3. Virtual reality simulation training of mastoidectomy - studies on novice performance.

    Andersen, Steven Arild Wuyts

    2016-08-01

    Virtual reality (VR) simulation-based training is increasingly used in surgical technical skills training including in temporal bone surgery. The potential of VR simulation in enabling high-quality surgical training is great and VR simulation allows high-stakes and complex procedures such as mastoidectomy to be trained repeatedly, independent of patients and surgical tutors, outside traditional learning environments such as the OR or the temporal bone lab, and with fewer of the constraints of traditional training. This thesis aims to increase the evidence-base of VR simulation training of mastoidectomy and, by studying the final-product performances of novices, investigates the transfer of skills to the current gold-standard training modality of cadaveric dissection, the effect of different practice conditions and simulator-integrated tutoring on performance and retention of skills, and the role of directed, self-regulated learning. Technical skills in mastoidectomy were transferable from the VR simulation environment to cadaveric dissection with significant improvement in performance after directed, self-regulated training in the VR temporal bone simulator. Distributed practice led to a better learning outcome and more consolidated skills than massed practice and also resulted in a more consistent performance after three months of non-practice. Simulator-integrated tutoring accelerated the initial learning curve but also caused over-reliance on tutoring, which resulted in a drop in performance when the simulator-integrated tutor-function was discontinued. The learning curves were highly individual but often plateaued early and at an inadequate level, which related to issues concerning both the procedure and the VR simulator, over-reliance on the tutor function and poor self-assessment skills. Future simulator-integrated automated assessment could potentially resolve some of these issues and provide trainees with both feedback during the procedure and immediate

  4. Designing and simulation smart multifunctional continuous logic device as a basic cell of advanced high-performance sensor systems with MIMO-structure

    Krasilenko, Vladimir G.; Nikolskyy, Aleksandr I.; Lazarev, Alexander A.

    2015-01-01

    We have proposed a design and simulation of hardware realizations of smart multifunctional continuous logic devices (SMCLD) as advanced basic cells of sensor systems with MIMO structure for image processing and interconnection. The SMCLD realize functions of two-valued, multi-valued and continuous logic with current inputs and current outputs. Such advanced basic cells also realize nonlinear time-pulse transformation, analog-to-digital conversion and neural logic. We showed the advantages of such elements: high speed and reliability, simplicity, small power consumption, and a high integration level. The design of the SMCLD is based on current mirrors realized with 1.5 μm CMOS technology transistors. With 50-70 transistors, one PD and one LED, the offered circuits are quite compact. The simulation results for NOT, MIN, MAX, equivalence (EQ), normalized summation, averaging and other functions implemented by the SMCLD showed that the level of logical variables can range from 0.1 μA to 10 μA for low-power-consumption variants. The SMCLD have low power consumption (<1 mW) and a processing time of about 1-11 μs at a supply voltage of 2.4-3.3 V.
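
    The logic functions named above have simple closed forms on normalized signal levels; a sketch with currents normalized to [0, 1] (the hardware operates on 0.1-10 μA currents, per the abstract; the Łukasiewicz-style definitions below are assumed):

      import numpy as np

      # Continuous-logic primitives on signals normalized to [0, 1].
      def c_not(a):        # negation
          return 1.0 - a

      def c_min(a, b):     # conjunction (MIN)
          return np.minimum(a, b)

      def c_max(a, b):     # disjunction (MAX)
          return np.maximum(a, b)

      def c_equiv(a, b):   # equivalence (EQ): high where a and b agree
          return 1.0 - np.abs(a - b)

      def c_mean(a, b):    # normalized summation / averaging
          return 0.5 * (a + b)

      a, b = 0.3, 0.8
      print(c_not(a), c_min(a, b), c_max(a, b), c_equiv(a, b), c_mean(a, b))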

  5. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time-scale of the transfer process. We investigate the impact of resonantly

  6. Simulations

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  7. Simulator experiments: effects of NPP operator experience on performance

    Beare, A.N.; Gray, L.H.

    1984-01-01

    During the FY83 research, a simulator experiment was conducted at the control room simulator for a GE Boiling Water Reactor (BWR) NPP. The research subjects were licensed operators undergoing requalification training and shift technical advisors (STAs). This experiment was designed to investigate the effects of senior reactor operator (SRO) experience, operating crew augmentation with an STA and practice, as a crew, upon crew and individual operator performance, in response to anticipated plant transients. Sixteen two-man crews of licensed operators were employed in a 2 x 2 factorial design. The SROs leading the crews were split into high and low experience groups on the basis of their years of experience as an SRO. One half of the high- and low-SRO experience groups were assisted by an STA. The crews responded to four simulated plant casualties. A five-variable set of content-referenced performance measures was derived from task analyses of the procedurally correct responses to the four casualties. System parameters and control manipulations were recorded by the computer controlling the simulator. Data on communications and procedure use were obtained from analysis of videotapes of the exercises. Questionnaires were used to collect subject biographical information and data on subjective workload during each simulated casualty. For four of the five performance measures, no significant differences were found between groups led by high (25 to 114 months) and low (1 to 17 months as an SRO) experience SROs. However, crews led by low experience SROs tended to have significantly shorter task performance times than crews led by high experience SROs. The presence of the STA had no significant effect on overall team performance in responding to the four simulated casualties. The FY84 experiments are a partial replication and extension of the FY83 experiment, but with PWR operators and simulator

  8. 18th and 19th Workshop on Sustained Simulation Performance

    Bez, Wolfgang; Focht, Erich; Kobayashi, Hiroaki; Patel, Nisarg

    2015-01-01

    This book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general and the future of high-performance systems and heterogeneous architectures in particular. The application-related contributions cover computational fluid dynamics, material science, medical applications and climate research; innovative fields such as coupled multi-physics and multi-scale simulations are highlighted. All papers were chosen from presentations given at the 18th Workshop on Sustained Simulation Performance held at the HLRS, University of Stuttgart, Germany in October 2013 and subsequent Workshop of the same name held at Tohoku University in March 2014.  

  9. Computer Simulation Performed for Columbia Project Cooling System

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,240 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can easily be extended to provide a general air flow analysis capability for any modern computer room.

  10. Status report on high fidelity reactor simulation

    Palmiotti, G.; Smith, M.; Rabiti, C.; Lewis, E.; Yang, W.; Leclere, M.; Siegel, A.; Fischer, P.; Kaushik, D.; Ragusa, J.; Lottes, J.; Smith, B.

    2006-01-01

    This report presents the effort under way at Argonne National Laboratory toward a comprehensive, integrated computational tool intended mainly for the high-fidelity simulation of sodium-cooled fast reactors. The main activities carried out involved neutronics, thermal hydraulics, coupling strategies, software architecture, and high-performance computing. A new neutronics code, UNIC, is being developed. The first phase involves the application of a spherical harmonics method to a general, unstructured three-dimensional mesh. The method also has been interfaced with a method of characteristics. The spherical harmonics equations were implemented in a stand-alone code that was then used to solve several benchmark problems. For thermal hydraulics, a computational fluid dynamics code called Nek5000, developed in the Mathematics and Computer Science Division for coupled hydrodynamics and heat transfer, has been applied to a single-pin, periodic cell in the wire-wrap geometry typical of advanced burner reactors. Numerical strategies for multiphysics coupling have been considered, and more accurate, efficient methods have been proposed for finely simulating coupled neutronic/thermal-hydraulic reactor transients. Initial steps have been taken to couple UNIC and Nek5000, and simplified problems have been defined and solved for testing. Furthermore, we have begun developing a lightweight computational framework, based in part on carefully selected open source tools, to nonobtrusively and efficiently integrate the individual physics modules into a unified simulation tool.
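
    One standard strategy consistent with the coupling described above is Picard (fixed-point) iteration between the neutronics and thermal-hydraulics solvers. The sketch below uses scalar stand-ins for the exchanged fields; the actual UNIC/Nek5000 coupling is far richer:

      def neutronics_power(fuel_temp_k):
          # UNIC stand-in: Doppler feedback lowers power as the fuel heats up.
          return 1000.0 / (1.0 + 2e-4 * (fuel_temp_k - 900.0))

      def thermal_hydraulics_temp(power_mw):
          # Nek5000 stand-in: fuel temperature rises with power.
          return 600.0 + 0.35 * power_mw

      power, temp = 1000.0, 900.0
      for it in range(100):               # Picard iteration to a consistent state
          new_temp = thermal_hydraulics_temp(power)
          new_power = neutronics_power(new_temp)
          if abs(new_power - power) < 1e-6 and abs(new_temp - temp) < 1e-6:
              break
          power, temp = new_power, new_temp
      print(f"P = {power:.2f} MW, T = {temp:.1f} K after {it} iterations")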

  11. High Performance Networks for High Impact Science

    Scott, Mary A.; Bair, Raymond A.

    2003-02-13

    This workshop was the first major activity in developing a strategic plan for high-performance networking in the Office of Science. Held August 13 through 15, 2002, it brought together a selection of end users, especially representing the emerging, high-visibility initiatives, and network visionaries to identify opportunities and begin defining the path forward.

  12. CASTOR detector. Model, objectives and simulated performance

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D.; Aslanoglou, X.; Nicolis, N.; Lobanov, M.; Erine, S.; Kharlov, Y. V.; Bogolyubsky, M. Y.; Kurepin, A. B.; Chileev, K.; Wlodarczyk, Z.

    2001-01-01

    A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and the imbalance of electromagnetic and hadronic content characterizing a Centauro event, as well as the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described: a subdetector of the ALICE experiment dedicated to the search for Centauros in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented.

  13. Problem reporting management system performance simulation

    Vannatta, David S.

    1993-01-01

    This paper proposes the Problem Reporting Management System (PRMS) model as an effective discrete simulation tool that determines the risks involved during the development phase of a Trouble Tracking Reporting Data Base replacement system. The model considers the type of equipment and networks to be used in the replacement system, as well as varying user loads, the size of the database, and expected operational availability. The paper discusses the dynamics, stability, and application of the PRMS and suggests concepts to enhance service performance.

  14. RavenDB high performance

    Ritchie, Brian

    2013-01-01

    RavenDB High Performance is a comprehensive yet concise tutorial that developers can use to get the most out of RavenDB. This book is for developers and software architects who are designing systems in order to achieve high performance right from the start. A basic understanding of RavenDB is recommended, but not required. While the book focuses on advanced topics, it does not assume that the reader has a great deal of prior knowledge of working with RavenDB.

  15. High-Performance Operating Systems

    Sharp, Robin

    1999-01-01

    Notes prepared for the DTU course 49421 "High Performance Operating Systems". The notes deal with quantitative and qualitative techniques for use in the design and evaluation of operating systems in computer systems for which performance is an important parameter, such as real-time applications, communication systems and multimedia systems.

  16. High Performance Parallel Processing (HPPP) Finite Element Simulation of Fluid Structure Interactions Final Report CRADA No. TC-0824-94-A

    Couch, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ziegler, D. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2018-01-24

    This project was a multi-partner CRADA, a partnership between Alcoa and LLNL. Alcoa developed a system of numerical simulation modules that provided accurate and efficient three-dimensional modeling of combined fluid dynamics and structural response.

  17. Spent fuel and high level waste: Chemical durability and performance under simulated repository conditions. Results of a coordinated research project 1998-2004

    2007-10-01

    This publication contains the results of an IAEA Coordinated Research Project (CRP). It provides a basis for understanding the potential interactions of waste form and repository environment, which is necessary for the development of the design and safety case for deep disposal. The types of high level waste matrices investigated include spent fuel, glasses and ceramics. Of particular interest are the experimental results pertaining to ceramic forms such as SYNROC. This publication also outlines important areas for future work, namely standardized, collaborative experimental protocols for package-release studies, structured development and calibration of predictive models linking the performance of packaged waste and the repository environment, and studies of the long term behaviour of the wastes, including active waste samples.

  18. Simulations of High Speed Fragment Trajectories

    Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis

    2017-11-01

    Flying shrapnel from an explosion can travel at supersonic speeds and to distances much farther than expected because of aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range-safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid body dynamics solver to integrate trajectories in time. Results of these high-fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single-drag-coefficient model.
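
    A stripped-down sketch of the two-step approach: aerodynamic coefficients precomputed against orientation (step 1, from CFD in the paper; here an invented 1-D table), then time integration of the motion (step 2; a point mass with a prescribed tumble replaces the full six-degree-of-freedom solver):

      import numpy as np

      # Step 1: drag coefficient vs. pitch angle (values invented for illustration).
      angles = np.linspace(0.0, np.pi, 19)
      cd_table = 0.8 + 0.7 * np.sin(angles) ** 2

      def drag_coefficient(pitch):
          return np.interp(pitch % np.pi, angles, cd_table)

      # Step 2: integrate a tumbling fragment's trajectory with that table.
      rho, area, mass, g = 1.225, 1e-3, 0.05, 9.81   # SI units, illustrative
      pos = np.array([0.0, 100.0])                   # x, z (m)
      vel = np.array([800.0, 0.0])                   # supersonic launch (m/s)
      pitch, pitch_rate, dt = 0.0, 25.0, 1e-4        # prescribed tumble (rad/s)

      while pos[1] > 0.0:
          speed = np.linalg.norm(vel)
          drag = 0.5 * rho * speed**2 * area * drag_coefficient(pitch)
          acc = -drag * vel / (mass * speed) + np.array([0.0, -g])
          vel = vel + acc * dt
          pos = pos + vel * dt
          pitch += pitch_rate * dt

      print(f"range = {pos[0]:.1f} m")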

  19. Effects of Low- Versus High-Fidelity Simulations on the Cognitive Burden and Performance of Entry-Level Paramedicine Students: A Mixed-Methods Comparison Trial Using Eye-Tracking, Continuous Heart Rate, Difficulty Rating Scales, Video Observation and Interviews.

    Mills, Brennen W; Carter, Owen B-J; Rudd, Cobie J; Claxton, Louise A; Ross, Nathan P; Strobel, Natalie A

    2016-02-01

    High-fidelity simulation-based training is often avoided for early-stage students because of the assumption that, while practicing newly learned skills, they are ill suited to processing multiple demands, which can lead to "cognitive overload" and poorer learning outcomes. We tested this assumption using a mixed-methods experimental design manipulating psychological immersion. Thirty-nine randomly assigned first-year paramedicine students completed low- or high-environmental fidelity simulations [low-environmental fidelity simulation (LF(en)S) vs. high-environmental fidelity simulation (HF(en)S)] involving a manikin with an obstructed airway (SimMan3G). Psychological immersion and cognitive burden were determined via continuous heart rate, eye tracking, self-report questionnaire (National Aeronautics and Space Administration Task Load Index), independent observation, and postsimulation interviews. Performance was assessed by successful location of the obstruction and time-to-termination. Eye tracking confirmed that students attended to multiple, concurrent stimuli in HF(en)S, and interviews consistently suggested that they experienced greater psychological immersion and cognitive burden than their LF(en)S counterparts. This was confirmed by significantly higher mean heart rate (P < 0.05) in the HF(en)S group. High-fidelity simulation thus increased cognitive burden, but this has considerable educational merit.

  20. INEX simulations of the optical performance of the AFEL

    Goldstein, J.C.; Wang, T.S.F.; Sheffield, R.L.

    1991-01-01

    The AFEL (Advanced Free-Electron Laser) Project at Los Alamos National Laboratory is presently under construction. The project's goal is to produce a very high-brightness electron beam which will be generated by a photocathode injector and a 20 MeV rf-linac. Initial laser experiments will be performed with a 1-cm-period permanent magnet wiggler which will generate intense optical radiation near a wavelength of 3.7 μm. Future experiments will operate with "slotted-tube" electromagnetic wigglers (formerly called "pulsed-wire" wigglers). Experiments at both fundamental and higher-harmonic wavelengths are planned. This paper presents results of INEX (Integrated Numerical EXperiment) simulations of the optical performance of the AFEL. These simulations use the electron micropulse produced by the accelerator/beam transport code PARMELA in the 3-D FEL simulation code FELEX. 9 refs., 4 figs., 6 tabs

  1. Fast and accurate methods for the performance testing of highly-efficient c-Si photovoltaic modules using a 10 ms single-pulse solar simulator and customized voltage profiles

    Virtuani, A; Rigamonti, G; Friesen, G; Chianese, D; Beljean, P

    2012-01-01

    Performance testing of highly efficient, highly capacitive c-Si modules with pulsed solar simulators requires particular care. These devices in fact usually require a steady-state solar simulator or pulse durations longer than 100–200 ms in order to avoid measurement artifacts. The aim of this work was to validate an alternative method for the testing of highly capacitive c-Si modules using a 10 ms single pulse solar simulator. Our approach attempts to reconstruct a quasi-steady-state I–V (current–voltage) curve of a highly capacitive device during one single 10 ms flash by applying customized voltage profiles (in place of a conventional V ramp) to the terminals of the device under test. The most promising results were obtained by using V profiles which we name 'dragon-back' (DB) profiles. When compared to the reference I–V measurement (obtained by using a multi-flash approach with approximately 20 flashes), the DB V profile method provides excellent results with differences in the estimation of Pmax (as well as of Isc, Voc and FF) below ±0.5%. For the testing of highly capacitive devices the method is accurate, fast (two flashes—possibly one—required), cost-effective and has proven its validity with several technologies making it particularly interesting for in-line testing. (paper)
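    As a sketch of the comparison step, the curve parameters quoted above (Pmax, Isc, Voc and FF) can be extracted from any reconstructed I–V sweep and compared against the multi-flash reference. The code below is a minimal illustration assuming numpy and a current that decreases monotonically with voltage; it is not the authors' implementation.

```python
import numpy as np

def iv_parameters(v, i):
    """Isc, Voc, Pmax and fill factor from an I-V sweep.
    v must be increasing; i is assumed to decrease with v (generation)."""
    isc = np.interp(0.0, v, i)            # current at V = 0
    voc = np.interp(0.0, -i, v)           # voltage where current crosses 0
    pmax = (v * i).max()
    ff = pmax / (isc * voc)
    return isc, voc, pmax, ff

def percent_diff(test, ref):
    """Deviation of a single-flash result from the multi-flash reference."""
    return 100.0 * (test - ref) / ref
```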

  2. On the performance simulation of inter-stage turbine reheat

    Pellegrini, Alvise; Nikolaidis, Theoklis; Pachidis, Vassilios; Köhler, Stephan

    2017-01-01

    Highlights: • An innovative gas turbine performance simulation methodology is proposed. • It allows DP and OD performance calculations for complex engine layouts. • It is essential for inter-turbine reheat (ITR) engine performance calculation. • A detailed description is provided for fast and flexible implementation. • The methodology is successfully verified against a commercial closed-source software. - Abstract: Several authors have suggested the implementation of reheat in high By-Pass Ratio (BPR) aero engines to improve engine performance. In contrast to military afterburning, civil aero engines would aim at reducing Specific Fuel Consumption (SFC) by introducing 'Inter-stage Turbine Reheat' (ITR). To maximise benefits, the second combustor should be placed at an early stage of the expansion process, e.g. between the first and second High-Pressure Turbine (HPT) stages. The aforementioned cycle design requires the accurate simulation of two or more turbine stages on the same shaft. The Design Point (DP) performance can be easily evaluated by defining a Turbine Work Split (TWS) ratio between the turbine stages. However, the performance simulation of Off-Design (OD) operating points requires the calculation of the TWS parameter for every OD step, taking into account the thermodynamic behaviour of each turbine stage, represented by their respective maps. No analytical solution to this problem is currently available in the public domain. This paper presents an analytical methodology by which ITR can be simulated at DP and OD. Results show excellent agreement with a commercial, closed-source performance code; discrepancies range from 0% to 3.48% and are ascribed to the different gas models implemented in the codes.
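    A minimal sketch of the off-design matching idea: choose the inter-stage pressure so that the flow delivered by the first stage equals the flow swallowed by the second, then read the work split off the stage characteristics. The map functions below are hypothetical stand-ins for real stage maps, and the reheat combustor's effect on stage-2 corrected flow is ignored for brevity; this illustrates the matching principle only and is not the paper's algorithm.

```python
from scipy.optimize import brentq

def stage_map(pr):
    """Hypothetical stage characteristic: corrected mass flow and
    specific work as functions of stage pressure ratio (made-up shapes)."""
    wc = 25.0 * (1.0 - pr ** -1.6) ** 0.5     # flow saturates at high PR
    dh = 1100.0 * (1.0 - pr ** -0.25)         # specific work [kJ/kg]
    return wc, dh

def match(p_in, p_out):
    """Find the inter-stage pressure by flow continuity; return it and TWS."""
    def residual(p_mid):
        wc1, _ = stage_map(p_in / p_mid)
        wc2, _ = stage_map(p_mid / p_out)
        return wc1 - wc2                      # continuity between the stages
    p_mid = brentq(residual, 1.01 * p_out, 0.99 * p_in)
    _, dh1 = stage_map(p_in / p_mid)
    _, dh2 = stage_map(p_mid / p_out)
    return p_mid, dh1 / (dh1 + dh2)           # turbine work split ratio

p_mid, tws = match(p_in=40.0, p_out=1.0)      # illustrative pressures [bar]
```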

  3. Identifying High Performance ERP Projects

    Stensrud, Erik; Myrtveit, Ingunn

    2002-01-01

    Learning from high performance projects is crucial for software process improvement. Therefore, we need to identify outstanding projects that may serve as role models. It is common to measure productivity as an indicator of performance. It is vital that productivity measurements deal correctly with variable returns to scale and multivariate data. Software projects generally exhibit variable returns to scale, and the output from ERP projects is multivariate. We propose to use Data Envelopment Analysis (DEA) ...
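    For illustration, one standard DEA variant that handles variable returns to scale is the input-oriented BCC model, solvable as a small linear program per project. The sketch below assumes numpy/scipy and is generic textbook DEA, not necessarily the exact formulation the authors propose.

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, k):
    """Input-oriented BCC (variable returns to scale) efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1];
    theta == 1 means the project lies on the efficient frontier."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0                  # minimize theta
    A_ub = np.zeros((m + s, 1 + n)); b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[k]; A_ub[:m, 1:] = X.T          # sum lam*x_j <= theta*x_k
    A_ub[m:, 1:] = -Y.T; b_ub[m:] = -Y[k]            # sum lam*y_j >= y_k
    A_eq = np.zeros((1, 1 + n)); A_eq[0, 1:] = 1.0   # convexity (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.x[0]

# e.g. inputs = effort and duration, outputs = multivariate ERP deliverables;
# projects with efficiency 1.0 are the candidate role models.
```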

  4. INL High Performance Building Strategy

    Jennifer D. Morton

    2010-02-01

    High performance buildings, also known as sustainable buildings and green buildings, are resource efficient structures that minimize the impact on the environment by using less energy and water, reduce solid waste and pollutants, and limit the depletion of natural resources while also providing a thermally and visually comfortable working environment that increases productivity for building occupants. As Idaho National Laboratory (INL) becomes the nation's premier nuclear energy research laboratory, the physical infrastructure will be established to help accomplish this mission. This infrastructure, particularly the buildings, should incorporate high performance sustainable design features in order to be environmentally responsible and reflect an image of progressiveness and innovation to the public and prospective employees. Additionally, INL is a large consumer of energy that contributes to both carbon emissions and resource inefficiency. In the current climate of rising energy prices and political pressure for carbon reduction, this guide will help new construction project teams to design facilities that are sustainable and reduce energy costs, thereby reducing carbon emissions. With these concerns in mind, the recommendations described in the INL High Performance Building Strategy (previously called the INL Green Building Strategy) are intended to form the INL foundation for high performance building standards. This revised strategy incorporates the latest federal and DOE orders (Executive Order [EO] 13514, "Federal Leadership in Environmental, Energy, and Economic Performance" [2009], EO 13423, "Strengthening Federal Environmental, Energy, and Transportation Management" [2007], and DOE Order 430.2B, "Departmental Energy, Renewable Energy, and Transportation Management" [2008]), the latest guidelines, trends, and observations in high performance building construction, and the latest changes to the Leadership in Energy and Environmental Design (LEED) rating system.

  5. Human Performance in Simulated Reduced Gravity Environments

    Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    NASA is currently designing a new space suit capable of working in deep space and on Mars. Designing a suit is very difficult and often requires trade-offs between performance, cost, mass, and system complexity. Our current understanding of human performance in reduced gravity in a planetary environment (the moon or Mars) is limited to lunar observations, studies from the Apollo program, and recent suit tests conducted at JSC using reduced gravity simulators. This study will look at our most recent reduced gravity simulations performed on the new Active Response Gravity Offload System (ARGOS) compared to the C-9 reduced gravity plane. Methods: Subjects ambulated in reduced gravity analogs to obtain a baseline for human performance. Subjects were tested in lunar gravity (1.6 m/sq s) and Earth gravity (9.8 m/sq s) in shirt-sleeves. Subjects ambulated over ground at prescribed speeds on the ARGOS, but ambulated at a self-selected speed on the C-9 due to time limitations. Subjects on the ARGOS were given over 3 minutes to acclimate to the different conditions before data was collected. Nine healthy subjects were tested in the ARGOS (6 males, 3 females, 79.5 +/- 15.7 kg), while six subjects were tested on the C-9 (6 males, 78.8 +/- 11.2 kg). Data was collected with an optical motion capture system (Vicon, Oxford, UK) and was analyzed using customized analysis scripts in BodyBuilder (Vicon, Oxford, UK) and MATLAB (MathWorks, Natick, MA, USA). Results: In all offloaded conditions, variation between subjects increased compared to 1-g. Kinematics in the ARGOS at lunar gravity resembled earth gravity ambulation more closely than the C-9 ambulation. Toe-off occurred 10% earlier in both reduced gravity environments compared to earth gravity, shortening the stance phase. Likewise, ankle, knee, and hip angles remained consistently flexed and had reduced peaks compared to earth gravity. Ground reaction forces in lunar gravity (normalized to Earth body weight) were 0.4 +/- 0.2 on

  6. Computer simulation at high pressure

    Alder, B.J.

    1977-11-01

    The use of either the Monte Carlo or molecular dynamics method to generate equation-of-state data for various materials at high pressure is discussed. Particular emphasis is given to phase diagrams, such as the generation of various types of critical lines for mixtures, melting, structural and electronic transitions in solids, and two-phase ionic fluid systems of astrophysical interest, as well as a brief aside on possible eutectic behavior in the interior of the earth. Then the application of the molecular dynamics method to predict transport coefficients and the neutron scattering function is discussed, with a view to what special features high pressure brings out. Lastly, an analysis by these computational methods of the measured intensity and frequency spectrum of depolarized light, and also of the deviation of dielectric measurements from the constancy of the Clausius-Mossotti function, is given that leads to predictions of how the electronic structure of an atom distorts with pressure
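    As a concrete illustration of the Monte Carlo route to equation-of-state data, the sketch below runs Metropolis sampling of a small Lennard-Jones fluid and estimates the pressure from the virial. It uses reduced units, a toy system size and an O(N^2) energy evaluation; it is a teaching sketch, not a production high-pressure code.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_virial(pos, box):
    """Total Lennard-Jones energy and virial sum with minimum-image
    periodic boundaries, reduced units (epsilon = sigma = 1). O(N^2)."""
    e = w = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum image convention
        r2 = (d * d).sum(axis=1)
        inv6 = r2 ** -3.0
        e += (4.0 * (inv6 * inv6 - inv6)).sum()
        w += (24.0 * (2.0 * inv6 * inv6 - inv6)).sum()
    return e, w

def mc_pressure(n=64, rho=0.8, T=1.5, sweeps=200, dmax=0.1):
    box = (n / rho) ** (1.0 / 3.0)
    k = int(np.ceil(n ** (1.0 / 3.0)))
    sites = [(i, j, l) for i in range(k) for j in range(k) for l in range(k)]
    pos = (np.array(sites[:n], dtype=float) + 0.5) * box / k  # lattice start
    e, w = energy_virial(pos, box)
    wsum = count = 0
    for _ in range(sweeps * n):
        i = rng.integers(n)
        old = pos[i].copy()
        pos[i] = (pos[i] + rng.uniform(-dmax, dmax, 3)) % box  # trial move
        e_new, w_new = energy_virial(pos, box)
        if rng.random() < np.exp(min(0.0, -(e_new - e) / T)):  # Metropolis
            e, w = e_new, w_new
        else:
            pos[i] = old
        wsum += w
        count += 1
    return rho * T + wsum / count / (3.0 * box ** 3)  # P = rho*T + <W>/(3V)

print(mc_pressure())      # pressure at one (rho, T) state point
```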

  7. Water desalination price from recent performances: Modelling, simulation and analysis

    Metaiche, M.; Kettab, A.

    2005-01-01

    The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. Its objectives are an up-to-date evaluation of the cost price based on recent membrane and permeator performances, the use of new means of simulation and modelling of desalination parameters, and identification of the main parameters influencing the cost price. We have taken as the simulation example the seawater desalting centre of Djannet (Boumerdes, Algeria). Present performances allow water desalting at a price of 0.5 $/m3, an interesting and promising price, with very acceptable product water quality on the order of 269 ppm. It is important to run desalting systems by reverse osmosis under high pressure, resulting in a further decrease of the desalting cost and the production of good quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased by relaxing the requirements on product quality. The seawater temperature has an effect on the cost price and quality. The installation of large desalting centres contributes to the decrease in prices. The calculations involved are long and tedious, and impossible to conduct without programming and informatics tools. The use of the simulation model has been very efficient in the design of desalination centres that can perform at much improved prices. (author)

  8. Integrated plasma control for high performance tokamaks

    Humphreys, D.A.; Deranian, R.D.; Ferron, J.R.; Johnson, R.D.; LaHaye, R.J.; Leuer, J.A.; Penaflor, B.G.; Walker, M.L.; Welander, A.S.; Jayakumar, R.J.; Makowski, M.A.; Khayrutdinov, R.R.

    2005-01-01

    Sustaining high performance in a tokamak requires controlling many equilibrium shape and profile characteristics simultaneously with high accuracy and reliability, while suppressing a variety of MHD instabilities. Integrated plasma control, the process of designing high-performance tokamak controllers based on validated system response models and confirming their performance in detailed simulations, provides a systematic method for achieving and ensuring good control performance. For present-day devices, this approach can greatly reduce the need for machine time traditionally dedicated to control optimization, and can allow determination of high-reliability controllers prior to ever producing the target equilibrium experimentally. A full set of tools needed for this approach has recently been completed and applied to present-day devices including DIII-D, NSTX and MAST. This approach has proven essential in the design of several next-generation devices including KSTAR, EAST, JT-60SC, and ITER. We describe the method, results of design and simulation tool development, and recent research producing novel approaches to equilibrium and MHD control in DIII-D. (author)

  9. High performance fuel technology development

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    • Development of High Plasticity and Annular Pellet - Development of strong candidates of ultra high burn-up fuel pellets for a PCI remedy - Development of fabrication technology of annular fuel pellet • Development of High Performance Cladding Materials - Irradiation test of HANA claddings in Halden research reactor and the evaluation of the in-pile performance - Development of the final candidates for the next generation cladding materials - Development of the manufacturing technology for the dual-cooled fuel cladding tubes. • Irradiated Fuel Performance Evaluation Technology Development - Development of performance analysis code system for the dual-cooled fuel - Development of fuel performance-proving technology • Feasibility Studies on Dual-Cooled Annular Fuel Core - Analysis on the property of a reactor core with dual-cooled fuel - Feasibility evaluation on the dual-cooled fuel core • Development of Design Technology for Dual-Cooled Fuel Structure - Definition of technical issues and invention of concept for dual-cooled fuel structure - Basic design and development of main structure components for dual-cooled fuel - Basic design of a dual-cooled fuel rod.

  10. High Performance Bulk Thermoelectric Materials

    Ren, Zhifeng [Boston College, Chestnut Hill, MA (United States)

    2013-03-31

    Over 13-plus years, we have carried out research on the electron pairing symmetry of superconductors; the growth and field emission properties of carbon nanotubes and semiconducting nanowires; high performance thermoelectric materials; and other interesting materials. As a result of this research, we have published 104 papers and have educated six undergraduate students, twenty graduate students, nine postdocs, nine visitors, and one technician.

  11. High-Performance Data Converters

    Steensgaard-Madsen, Jesper

    … high-resolution internal D/A converters are required. Unit-element mismatch-shaping D/A converters are analyzed, and the concept of mismatch-shaping is generalized to include scaled-element D/A converters. Several types of scaled-element mismatch-shaping D/A converters are proposed. Simulations show that, when implemented in a standard CMOS technology, they can be designed to yield 100 dB performance at 10 times oversampling. The proposed scaled-element mismatch-shaping D/A converters are well suited for use as the feedback stage in oversampled delta-sigma quantizers. It is, however, not easy to make full use of their potential … order difference of the output signal from the loop filter's first integrator stage. This technique avoids the need for accurate matching of analog and digital filters that characterizes the MASH topology, and it preserves the signal-band suppression of quantization errors. Simulations show that quantizers …
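    A common concrete instance of unit-element mismatch shaping is data-weighted averaging (DWA), in which the unit elements are used in strict rotation so that each element's error is first-order noise-shaped. The sketch below, with hypothetical element mismatch, is a generic illustration and not the specific converters proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                           # unit elements in the DAC
elements = 1.0 + 0.005 * rng.standard_normal(N)  # hypothetical 0.5% mismatch

class DWA:
    """Rotate through the element bank so mismatch is first-order shaped."""
    def __init__(self):
        self.ptr = 0
    def convert(self, code):
        idx = (self.ptr + np.arange(code)) % N   # next `code` elements in order
        self.ptr = (self.ptr + code) % N
        return elements[idx].sum()               # analog output with mismatch

dac = DWA()
codes = (N / 2 * (1 + np.sin(2 * np.pi * 0.01 * np.arange(1000)))).astype(int)
out = np.array([dac.convert(c) for c in codes])
err = out - codes * elements.mean()              # mismatch-induced error
# An FFT of `err` shows the mismatch energy pushed out of the signal band.
```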

  12. The effects of fatigue on performance in simulated nursing work.

    Barker, Linsey M; Nussbaum, Maury A

    2011-09-01

    Fatigue is associated with increased rates of medical errors and healthcare worker injuries, yet existing research in this sector has not considered multiple dimensions of fatigue simultaneously. This study evaluated hypothesised causal relationships between mental and physical fatigue and performance. High and low levels of mental and physical fatigue were induced in 16 participants during simulated nursing work tasks in a laboratory setting. Task-induced changes in fatigue dimensions were quantified using both subjective and objective measures, as were changes in performance on physical and mental tasks. Completing the simulated work tasks increased total fatigue, mental fatigue and physical fatigue in all experimental conditions. Higher physical fatigue adversely affected measures of physical and mental performance, whereas higher mental fatigue had a positive effect on one measure of mental performance. Overall, these results suggest causal effects between manipulated levels of mental and physical fatigue and task-induced changes in mental and physical performance. STATEMENT OF RELEVANCE: Nurse fatigue and performance has implications for patient and provider safety. Results from this study demonstrate the importance of a multidimensional view of fatigue in understanding the causal relationships between fatigue and performance. The findings can guide future work aimed at predicting fatigue-related performance decrements and designing interventions.

  13. Performance evaluation of sea surface simulation methods for target detection

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

    With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key to achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the two methods, which can serve as a reference for synthesizing sea surface target images under different ocean conditions.
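    The two spectrum-based generators compared here differ mainly in how the per-frequency amplitudes are drawn: fixed amplitudes with random phases (linear superposition) versus Gaussian-filtered white noise (linear filter). A minimal 1-D time-series sketch using a Pierson-Moskowitz spectrum is given below; the spectrum choice and parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def pierson_moskowitz(omega, U=10.0, g=9.81):
    """One-sided PM frequency spectrum for wind speed U [m/s]; S(0) = 0."""
    s = np.zeros_like(omega)
    nz = omega > 0
    s[nz] = (8.1e-3 * g**2 / omega[nz]**5) * np.exp(-0.74 * (g / (U * omega[nz]))**4)
    return s

def elevation(n=8192, dt=0.1, U=10.0, filtered=True):
    """Sea-surface elevation time series at a point.
    filtered=True : linear filter method (Gaussian random amplitudes)
    filtered=False: linear superposition (fixed amplitudes, random phases)"""
    omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
    domega = omega[1] - omega[0]
    target = np.sqrt(2.0 * pierson_moskowitz(omega, U) * domega)  # per-bin amplitude
    if filtered:
        w = (rng.standard_normal(omega.size)
             + 1j * rng.standard_normal(omega.size)) / np.sqrt(2.0)
    else:
        w = np.exp(2j * np.pi * rng.random(omega.size))
    return np.fft.irfft(0.5 * n * target * w, n)

# Comparing sample spectra of elevation(filtered=True) and (filtered=False)
# against the target spectrum mirrors the statistical evaluation in the paper.
```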

  14. Performance simulation of a MRPC-based PET imaging system

    Roy, A.; Banerjee, A.; Biswas, S.; Chattopadhyay, S.; Das, G.; Saha, S.

    2014-10-01

    The less expensive, high-resolution Multi-gap Resistive Plate Chamber (MRPC) opens up a new possibility of finding an efficient alternative detector for Time of Flight (TOF) based Positron Emission Tomography, where the sensitivity of the system depends largely on the time resolution of the detector. In a layered structure, suitable converters can be used to increase the photon detection efficiency. In this work, we perform a detailed GEANT4 simulation to optimize the converter thickness towards improving the efficiency of photon conversion. A Monte Carlo based procedure has been developed to simulate the time resolution of the MRPC-based system, making it possible to simulate its response for PET imaging applications. The results of the test of a six-gap MRPC, operating in avalanche mode, with a 22Na source are discussed.

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. Neo4j high performance

    Raj, Sonal

    2015-01-01

    If you are a professional or enthusiast who has a basic understanding of graphs or has basic knowledge of Neo4j operations, this is the book for you. Although it is targeted at an advanced user base, this book can be used by beginners as it touches upon the basics. So, if you are passionate about taming complex data with the help of graphs and building high performance applications, you will be able to get valuable insights from this book.

  17. Virtual Learning Simulations in High School

    Thisgaard, Malene Warming; Makransky, Guido

    2017-01-01

    The present study compared the value of a virtual learning simulation with traditional lessons on the topic of evolution, and investigated if the virtual learning simulation could serve as a catalyst for STEM academic and career development, based on social cognitive career theory. The investigation was conducted using a crossover repeated measures design based on a sample of 128 high school biology/biotech students. The results showed that the virtual learning simulation increased knowledge of evolution significantly, compared to the traditional lesson. No significant differences between the simulation and lesson were found in their ability to increase the non-cognitive measures. Both interventions increased self-efficacy significantly, and neither had a significant effect on motivation. In addition, the results showed that the simulation increased interest in biology-related tasks …

  18. Development of an Integrated Process, Modeling and Simulation Platform for Performance-Based Design of Low-Energy and High IEQ Buildings

    Chen, Yixing

    2013-01-01

    The objective of this study was to develop a "Virtual Design Studio (VDS)": a software platform for integrated, coordinated and optimized design of green building systems with low energy consumption, high indoor environmental quality (IEQ), and high level of sustainability. The VDS is intended to assist collaborating architects,…

  19. Validation of High-resolution Climate Simulations over Northern Europe.

    Muna, R. A.

    2005-12-01

    Two AMIP2-type (Gates 1992) experiments have been performed with climate versions of the ARPEGE/IFS model for the North Atlantic, Northern Europe, and Norwegian regions to analyze the effect of increasing resolution on the simulated biases. The ECMWF reanalysis (ERA-15) has been used to validate the simulations. Each simulation is an integration over the period 1979 to 1996. The global simulations used observed monthly mean sea surface temperatures (SST) as the lower boundary condition. All aspects but the horizontal resolution are identical in the two simulations. The first simulation has a uniform horizontal resolution of T63. The second has a variable resolution (T106c3) with the highest resolution over the Norwegian Sea. Both simulations have 31 vertical layers at the same locations. For each simulation the results were divided into two seasons: winter (DJF) and summer (JJA). The parameters investigated were mean sea level pressure and geopotential and temperature at 850 hPa and 500 hPa. To find the causes of the temperature bias during summer, latent and sensible heat flux, total cloud cover and total precipitation were analyzed. The high-resolution simulation exhibits a broadly realistic climate over the Nordic, Arctic and European regions. The overall performance of the simulations shows improvement in generally all fields investigated with increasing resolution over the target area, both in winter (DJF) and summer (JJA).

  20. RELAP5: Applications to high fidelity simulation

    Johnsen, G.W.; Chen, Y.S.

    1988-01-01

    RELAP5 is a pressurized water reactor system transient simulation code for use in nuclear power plant safety analysis. The latest version, MOD2, may be used to simulate and study a wide variety of abnormal events, including loss-of-coolant accidents, operational transients, and transients in which the entire secondary system must be modeled. In this paper, a basic overview of the code is given, its assessment and application illustrated, and progress toward its use as a high fidelity simulator described. 7 refs., 7 figs

  1. Performance measurement system for training simulators. Interim report

    Bockhold, G. Jr.; Roth, D.R.

    1978-05-01

    In the first project phase, the project team designed, installed, and test-ran on the Browns Ferry nuclear power plant training simulator a performance measurement system capable of automatically recording statistical information on operator actions and plant response. Key plant variables and operator actions were monitored and analyzed by the simulator computer for a selected set of four operating and casualty drills. The project has the following objectives: (1) to provide an empirical data base for statistical analysis of operator reliability and for allocation of safety and control functions between operators and automated controls; (2) to develop a method for evaluating the effectiveness of control room designs and operating procedures; and (3) to develop a system for scoring aspects of operator performance to assist in training evaluations and to support operator selection research. The performance measurement system has shown potential for meeting the research objectives. However, the cost of training simulator time is high; to keep research program costs reasonable, the measurement system is being designed to be an integral part of operator training programs. In the pilot implementation, participating instructors judged the measurement system to be a valuable and objective extension of their abilities to monitor trainee performance

  2. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issue of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and can solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
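    As a small illustration of the SVD-based handling of ill-posedness mentioned above, the sketch below computes a truncated-SVD quasi-solution of a discretized ill-posed problem. The smoothing operator is a hypothetical stand-in for the actual tsunami-source forward operator.

```python
import numpy as np

def tsvd_quasi_solution(A, b, tol=1e-3):
    """Truncated-SVD quasi-solution of A x = b: singular values below
    tol * s_max are discarded to suppress noise amplification; the decay
    of s itself indicates the degree of ill-posedness."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    x = Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
    return x, s

# e.g. a Gaussian smoothing operator is severely ill-posed:
n = 100
A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0) ** 2)
x_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
b = A @ x_true + 1e-6 * np.random.default_rng(5).standard_normal(n)
x_rec, s = tsvd_quasi_solution(A, b)
```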

  3. High performance MEAs. Final report

    NONE

    2012-07-15

    The aim of the present project is, through modeling, material and process development, to obtain significantly better MEA performance and to attain the technology necessary to fabricate stable catalyst materials, thereby providing a viable alternative to the current industry standard. This project primarily focused on the development and characterization of novel catalyst materials for use in high temperature (HT) and low temperature (LT) proton-exchange membrane fuel cells (PEMFC). New catalysts are needed in order to improve fuel cell performance and reduce the cost of fuel cell systems. Additional tasks were the development of new, durable sealing materials to be used in PEMFC as well as computational modeling of heat and mass transfer processes, predominantly in LT PEMFC, in order to improve fundamental understanding of the multi-phase flow issues and liquid water management in fuel cells. An improved fundamental understanding of these processes will lead to improved fuel cell performance and hence will also result in a reduced catalyst loading to achieve the same performance. The consortium has obtained significant research results and progress on new catalyst materials and substrates with promising enhanced performance, and on fabrication of the materials using novel methods. However, the new materials and synthesis methods explored are still in the early research and development phase. The project has contributed to improved MEA performance using less precious metal, demonstrated for LT-PEM, DMFC and HT-PEM applications. The novel approach and progress of the modelling activities have been extremely satisfactory, with numerous conference and journal publications along with two potential inventions concerning the catalyst layer. (LN)

  4. Visualization and Analysis of Climate Simulation Performance Data

    Röber, Niklas; Adamidis, Panagiotis; Behrens, Jörg

    2015-04-01

    Visualization is the key process of transforming abstract (scientific) data into a graphical representation, to aid in the understanding of the information hidden within the data. Climate simulation data sets are typically quite large, time varying, and consist of many different variables sampled on an underlying grid. A large variety of climate models - and sub-models - exist to simulate various aspects of the climate system. Generally, one is mainly interested in the physical variables produced by the simulation runs, but model developers are also interested in performance data measured along with these simulations. Climate simulation models are carefully developed complex software systems, designed to run in parallel on large HPC systems. An important goal thereby is to utilize the entire hardware as efficiently as possible, that is, to distribute the workload as evenly as possible among the individual components. This is a very challenging task, and detailed performance data, such as timings, cache misses etc., have to be used to locate and understand performance problems in order to optimize the model implementation. Furthermore, the correlation of performance data to the processes of the application and the sub-domains of the decomposed underlying grid is vital when addressing communication and load imbalance issues. High resolution climate simulations are carried out on tens to hundreds of thousands of cores, thus yielding a vast amount of profiling data, which cannot be analyzed without appropriate visualization techniques. This PICO presentation displays and discusses the ICON simulation model, which is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service and in partnership with DKRZ. The visualization and analysis of the model's performance data allow us to optimize and fine-tune the model, as well as to understand its execution on the HPC system. We show and discuss our workflow, as well as present new ideas and

  5. Simulant Basis for the Standard High Solids Vessel Design

    Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-09-30

    The Waste Treatment and Immobilization Plant (WTP) is working to develop a Standard High Solids Vessel Design (SHSVD) process vessel. To support testing of this new design, WTP engineering staff requested that a Newtonian simulant and a non-Newtonian simulant be developed that would represent the Most Adverse Design Conditions (in development) with respect to mixing performance as specified by WTP. The majority of the simulant requirements are specified in 24590-PTF-RPT-PE-16-001, Rev. 0. The first step in this process is to develop the basis for these simulants. This document describes the basis for the properties of these two simulant types. The simulant recipes that meet this basis will be provided in a subsequent document.

  6. Comparison of driving simulator performance and neuropsychological testing in narcolepsy.

    Kotterba, Sylvia; Mueller, Nicole; Leidag, Markus; Widdig, Walter; Rasche, Kurt; Malin, Jean-Pierre; Schultze-Werninghaus, Gerhard; Orth, Maritta

    2004-09-01

    Daytime sleepiness and cataplexy can increase automobile accident rates in narcolepsy. Several countries have produced guidelines for issuing a driving license. The aim of the study was to compare driving simulator performance and neuropsychological test results in narcolepsy in order to evaluate their predictive value regarding driving ability. Thirteen patients with narcolepsy (age: 41.5+/-12.9 years) and 10 healthy controls (age: 55.1+/-7.8 years) were investigated. By computer-assisted neuropsychological testing, vigilance, alertness and divided attention were assessed. In a driving simulator, patients and controls had to drive on a highway for 60 min (mean speed of 100 km/h). Different weather and daytime conditions and obstacles were presented. Epworth Sleepiness Scale scores were significantly raised (narcolepsy patients: 16.7+/-5.1, controls: 6.6+/-3.6). Scores for divided attention (56.9+/-25.4) and vigilance (58.7+/-26.8) were in a normal range. There was, however, a high inter-individual difference. There was no correlation between driving performance and neuropsychological test results or ESS score. Neuropsychological test results did not significantly change in the follow-up. The difficulties encountered by narcolepsy patients in remaining alert may account for sleep-related motor vehicle accidents. Driving simulator investigations are more closely related to real traffic situations than isolated neuropsychological tests. At the present time the driving simulator seems to be a useful instrument for judging driving ability, especially in cases with ambiguous neuropsychological results.

  7. High Performance Proactive Digital Forensics

    Alharbi, Soltan; Traore, Issa; Moa, Belaid; Weber-Jahnke, Jens

    2012-01-01

    With the increase in the number of digital crimes and in their sophistication, High Performance Computing (HPC) is becoming a must in Digital Forensics (DF). According to the FBI annual report, the size of data processed during the 2010 fiscal year reached 3,086 TB (compared to 2,334 TB in 2009), and the number of agencies that requested Regional Computer Forensics Laboratory assistance increased from 689 in 2009 to 722 in 2010. Since most investigation tools are both I/O and CPU bound, next-generation DF tools are required to be distributed and offer HPC capabilities. The need for HPC is even more evident in investigating crimes on clouds or when proactive DF analysis and on-site investigation, requiring semi-real-time processing, are performed. Although overcoming the performance challenge is a major goal in DF, as far as we know, there is almost no research on HPC-DF except for a few papers. As such, in this work, we extend our work on the need for a proactive system and present a high performance automated proactive digital forensic system. The most expensive phase of the system, namely proactive analysis and detection, uses a parallel extension of the iterative z algorithm. It also implements new parallel information-based outlier detection algorithms to proactively and forensically handle suspicious activities. To analyse a large number of targets and events and continuously do so (to capture the dynamics of the system), we rely on a multi-resolution approach to explore the digital forensic space. The data set from the Honeynet Forensic Challenge in 2001 is used to evaluate the system from DF and HPC perspectives.

  8. 20th and 21st Joint Workshop on Sustained Simulation Performance

    Bez, Wolfgang; Focht, Erich; Kobayashi, Hiroaki; Qi, Jiaxing; Roller, Sabine

    2015-01-01

    The book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general, and the future of high-performance systems and heterogeneous architectures specifically. The application contributions cover computational fluid dynamics, material science, medical applications and climate research. Innovative fields like coupled multi-physics or multi-scale simulations are also discussed. All papers were chosen from presentations given at the 20th Workshop on Sustained Simulation Performance in December 2014 at the HLRS, University of Stuttgart, Germany, and the subsequent Workshop on Sustained Simulation Performance at Tohoku University in February 2015.  .

  9. Simulation of plasma loading of high-pressure RF cavities

    Yu, K.; Samulyak, R.; Yonehara, K.; Freemire, B.

    2018-01-01

    Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have been performed in the range of parameters typical for practical muon cooling channels.
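    The processes listed here map naturally onto a zero-dimensional rate-equation model for the electron and ion densities. The sketch below is such a toy model with placeholder coefficients; it is not the SPACE code, which resolves these processes self-consistently with the electromagnetic fields.

```python
from scipy.integrate import solve_ivp

S = 1e16        # beam ionization rate [cm^-3 s^-1] while beam is on (assumed)
eta = 5e6       # electron attachment to dopant [s^-1] (assumed)
beta = 1e-7     # electron-ion recombination [cm^3 s^-1] (assumed)
gamma = 1e-7    # ion-ion recombination [cm^3 s^-1] (assumed)

def rates(t, y):
    ne, npos, nneg = y                      # electrons, positive/negative ions
    src = S if t < 1e-6 else 0.0            # beam on for the first microsecond
    dne = src - eta * ne - beta * ne * npos
    dnp = src - beta * ne * npos - gamma * npos * nneg
    dnn = eta * ne - gamma * npos * nneg
    return [dne, dnp, dnn]

sol = solve_ivp(rates, (0.0, 1e-5), [0.0, 0.0, 0.0], method="LSODA",
                max_step=1e-8)              # stiff system; LSODA handles it
```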

  10. Simulation of plasma loading of high-pressure RF cavities

    Yu, K. [Brookhaven National Lab. (BNL), Upton, NY (United States). Computational Science Initiative; Samulyak, R. [Brookhaven National Lab. (BNL), Upton, NY (United States). Computational Science Initiative; Stony Brook Univ., NY (United States). Dept. of Applied Mathematics and Statistics; Yonehara, K. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Freemire, B. [Northern Illinois Univ., DeKalb, IL (United States)

    2018-01-11

    Muon beam-induced plasma loading of radio-frequency (RF) cavities filled with high-pressure hydrogen gas with a 1% dry air dopant has been studied via numerical simulations. The electromagnetic code SPACE, which resolves relevant atomic physics processes, including ionization by the muon beam, electron attachment to dopant molecules, and electron-ion and ion-ion recombination, has been used. Simulation studies have also been performed in the range of parameters typical for practical muon cooling channels.

  11. High performance light water reactor

    Squarer, D.; Schulenberg, T.; Struwe, D.; Oka, Y.; Bittermann, D.; Aksan, N.; Maraczy, C.; Kyrki-Rajamaeki, R.; Souyri, A.; Dumaz, P.

    2003-01-01

    The objective of the high performance light water reactor (HPLWR) project is to assess the merit and economic feasibility of a high efficiency LWR operating in the thermodynamically supercritical regime. An efficiency of approximately 44% is expected. To accomplish this objective, a highly qualified team of European research institutes and industrial partners, together with the University of Tokyo, is assessing the major issues pertaining to a new reactor concept, under the co-sponsorship of the European Commission. The assessment has emphasized the recent advancement achieved in this area by Japan. Additionally, it accounts for advanced European reactor design requirements, recent improvements, practical design aspects, availability of plant components and the availability of high temperature materials. The final objective of this project is to reach a conclusion on the potential of the HPLWR to help sustain the nuclear option, by supplying competitively priced electricity, as well as to continue the nuclear competence in LWR technology. The following is a brief summary of the main project achievements: - A state-of-the-art review of supercritical water-cooled reactors has been performed for the HPLWR project. - Extensive studies have been performed in the last 10 years by the University of Tokyo. Therefore, a 'reference design', developed by the University of Tokyo, was selected in order to assess the available technological tools (i.e. computer codes, analyses, advanced materials, water chemistry, etc.). Design data and results of the analysis were supplied by the University of Tokyo. A benchmark problem, based on the 'reference design', was defined for neutronics calculations and several partners of the HPLWR project carried out independent analyses. The results of these analyses, which in addition helped to 'calibrate' the codes, have guided the assessment of the core and the design of an improved HPLWR fuel assembly. Preliminary selection was made for the HPLWR scale

  12. Equivalent drawbead performance in deep drawing simulations

    Meinders, Vincent T.; Geijselaers, Hubertus J.M.; Huetink, Han

    1999-01-01

    Drawbeads are applied in the deep drawing process to improve the control of the material flow during the forming operation. In simulations of the deep drawing process these drawbeads can be replaced by an equivalent drawbead model. In this paper the usage of an equivalent drawbead model in the

  13. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  14. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. Micromagnetics on high-performance workstation and mobile computational platforms

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.
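    At its core, a micromagnetic solver integrates the Landau-Lifshitz-Gilbert equation over a mesh; the macrospin sketch below shows that dynamics for a single moment in a constant field. Field and damping values are illustrative, and this is of course not FastMag's finite-element machinery.

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma, alpha = 1.76e11, 0.02            # gyromagnetic ratio [rad/(s T)], damping
H = np.array([0.0, 0.0, 0.1])           # applied field [T] (assumed)

def llg(t, m):
    """Landau-Lifshitz form of the LLG equation for a unit moment."""
    mxh = np.cross(m, H)
    return -gamma / (1 + alpha ** 2) * (mxh + alpha * np.cross(m, mxh))

m0 = np.array([1.0, 0.0, 0.01])
m0 /= np.linalg.norm(m0)                # unit magnetization
sol = solve_ivp(llg, (0.0, 2e-9), m0, max_step=1e-12)   # ~2 ns of damped precession
```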

  16. Manufacturing plant performance evaluation by discrete event simulation

    Rosli Darmawan; Mohd Rasid Osman; Rosnah Mohd Yusuff; Napsiah Ismail; Zulkiflie Leman

    2002-01-01

    A case study was conducted to evaluate the performance of a manufacturing plant using the discrete event simulation technique. The study was carried out on an animal feed production plant, the Sterifeed plant at the Malaysian Institute for Nuclear Technology Research (MINT), Selangor, Malaysia. The plant was modelled based on the actual manufacturing activities recorded by the operators. The simulation was carried out using discrete event simulation software. The model was validated by comparing the simulation results with the actual operational data of the plant. The simulation results show some weaknesses in the current plant design, and proposals were made to improve the plant performance. (Author)
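    A minimal discrete-event sketch in the spirit of the study: jobs arrive, queue for a single machine, are processed, and throughput is collected over a shift. It uses the third-party simpy package, and the arrival and processing times are hypothetical, not the Sterifeed plant's recorded data.

```python
import random
import simpy

random.seed(42)
completed = 0

def job(env, machine):
    global completed
    with machine.request() as req:
        yield req                                        # queue for the machine
        yield env.timeout(random.expovariate(1 / 8.0))   # processing time [min]
        completed += 1

def arrivals(env, machine):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))  # inter-arrival [min]
        env.process(job(env, machine))

env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)
env.process(arrivals(env, machine))
env.run(until=8 * 60)                                    # one 8-hour shift
print(f"jobs completed: {completed}")
```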

  17. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Peter Smolek

    2018-06-01

    In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  18. Modelling and simulating fire tube boiler performance

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel), and two models for the water/steam zone (the boiling) and the steam, respectively. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To verify the simulated results, experiments have been carried out on a full-scale boiler plant.
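    A lumped-parameter sketch of the sub-model idea: one energy balance for the steel and one for the water/steam volume, coupled by heat-transfer terms and integrated with SciPy in place of MatLab/Simulink. All coefficients are illustrative placeholders, and steam outflow is not modelled.

```python
from scipy.integrate import solve_ivp

UA_gs = 2.0e4      # gas-to-steel heat transfer [W/K] (assumed)
UA_sw = 8.0e4      # steel-to-water heat transfer [W/K] (assumed)
C_steel = 5.0e6    # steel heat capacity [J/K] (assumed)
C_water = 2.0e7    # water/steam heat capacity [J/K] (assumed)
T_gas = 1200.0     # effective furnace gas temperature [K] (assumed)

def boiler(t, y):
    T_s, T_w = y                         # steel and water temperatures
    q_in = UA_gs * (T_gas - T_s)         # furnace/convection zone -> steel
    q_out = UA_sw * (T_s - T_w)          # steel -> water/steam side
    return [(q_in - q_out) / C_steel, q_out / C_water]

sol = solve_ivp(boiler, (0.0, 3600.0), [400.0, 400.0], max_step=10.0)
```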

  19. Development of high performance cladding

    Kiuchi, Kiyoshi

    2003-01-01

    Development of a superior next-generation light water reactor is called for from general viewpoints such as improvement of safety and economics, reduction of radioactive waste, and effective utilization of plutonium, by 2030, when conventional reactor plants will need to be replaced. Japan Atomic Energy Research Institute is carrying out improvements of stainless steel cladding for conventional high burn-up reactors to more than 100 GWd/t, development of manufacturing technology for the reduced moderation light water reactor (RMWR) with a breeding ratio beyond 1.0, and research on water-material interactions in the supercritical pressure water cooled reactor. Stable austenitic stainless steel has been selected for the fuel element cladding of the advanced boiling water reactor (ABWR). Austenitic stainless steel is superior in irradiation resistance, corrosion resistance and mechanical strength. A hard neutron spectrum, with energies above 0.1 MeV, occurs in the core of the reduced moderation light water reactor, as in the liquid metal fast breeder reactor (LMFBR). High performance cladding for the RMWR fuel elements is likewise required to provide irradiation resistance, corrosion resistance and mechanical strength. Slow strain rate tests (SSRT) of SUS 304 and SUS 316 are carried out to study stress corrosion cracking (SCC). Irradiation tests in an LMFBR are intended to obtain data on irradiation damage in the cladding materials. (M. Suetake)

  20. Computer simulation of high energy displacement cascades

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)
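    The annealing stage lends itself to a compact toy model: vacancies and interstitials left by the collision stage random-walk and annihilate when they approach within a capture radius, leaving the residual defect count. The sketch below is a generic illustration with made-up geometry and rates, not the ALSOME code.

```python
import numpy as np

rng = np.random.default_rng(4)

def anneal(n_pairs=200, steps=2000, spread=20.0, r_capture=1.5):
    vac = rng.normal(0.0, spread / 4.0, (n_pairs, 3))   # clustered cascade core
    inter = rng.normal(0.0, spread, (n_pairs, 3))       # interstitials farther out
    for _ in range(steps):
        for arr in (vac, inter):                        # unit-length random hops
            hop = rng.standard_normal(arr.shape)
            arr += hop / np.linalg.norm(hop, axis=1, keepdims=True)
        d = np.linalg.norm(vac[:, None, :] - inter[None, :, :], axis=2)
        used_v, used_i = set(), set()
        for a, b in np.argwhere(d < r_capture):         # greedy 1-to-1 pairing
            if a not in used_v and b not in used_i:
                used_v.add(a); used_i.add(b)
        if used_v:                                      # annihilate matched pairs
            vac = np.delete(vac, list(used_v), axis=0)
            inter = np.delete(inter, list(used_i), axis=0)
    return len(vac), len(inter)                         # residual defects

print(anneal())
```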

  1. Hand ultrasound: a high-fidelity simulation of lung sliding.

    Shokoohi, Hamid; Boniface, Keith

    2012-09-01

    Simulation training has been effectively used to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model of simulating lung ultrasound and the features of lung sliding and pneumothorax by performing a hand ultrasound. The simulation model involves scanning the palmar aspect of the hand to create normal lung sliding in varying modes of scanning and to mimic ultrasound features of pneumothorax, including "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available simulation model we describe demonstrates a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.

  2. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics

    Fero, Laura J.; O’Donnell, John M.; Zullo, Thomas G.; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T.; Hoffman, Leslie A.

    2018-01-01

    Aim: This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Background: Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. Methods: In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as ‘meeting’ or ‘not meeting’ overall expectations. Test scores were categorized as strong, average, or weak. Results: Most (75·0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0·277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0·001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer’s V = 0·444, P = 0·029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer’s V = 0·413, P = 0·047). Conclusion: Students’ performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting.

  3. Critical thinking skills in nursing students: comparison of simulation-based performance with metrics.

    Fero, Laura J; O'Donnell, John M; Zullo, Thomas G; Dabbs, Annette DeVito; Kitutu, Julius; Samosky, Joseph T; Hoffman, Leslie A

    2010-10-01

    This paper is a report of an examination of the relationship between metrics of critical thinking skills and performance in simulated clinical scenarios. Paper and pencil assessments are commonly used to assess critical thinking but may not reflect simulated performance. In 2007, a convenience sample of 36 nursing students participated in measurement of critical thinking skills and simulation-based performance using videotaped vignettes, high-fidelity human simulation, the California Critical Thinking Disposition Inventory and California Critical Thinking Skills Test. Simulation-based performance was rated as 'meeting' or 'not meeting' overall expectations. Test scores were categorized as strong, average, or weak. Most (75.0%) students did not meet overall performance expectations using videotaped vignettes or high-fidelity human simulation; most difficulty related to problem recognition and reporting findings to the physician. There was no difference between overall performance based on method of assessment (P = 0.277). More students met subcategory expectations for initiating nursing interventions (P ≤ 0.001) using high-fidelity human simulation. The relationship between videotaped vignette performance and critical thinking disposition or skills scores was not statistically significant, except for problem recognition and overall critical thinking skills scores (Cramer's V = 0.444, P = 0.029). There was a statistically significant relationship between overall high-fidelity human simulation performance and overall critical thinking disposition scores (Cramer's V = 0.413, P = 0.047). Students' performance reflected difficulty meeting expectations in simulated clinical scenarios. High-fidelity human simulation performance appeared to approximate scores on metrics of critical thinking best. Further research is needed to determine if simulation-based performance correlates with critical thinking skills in the clinical setting.
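    For reference, the Cramér's V effect sizes quoted in these two records come from a chi-square test on a contingency table of performance rating against score category. The sketch below shows the computation with illustrative counts (not the study's data), assuming numpy and scipy.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: met / did not meet expectations; columns: strong / average / weak
table = np.array([[3, 4, 2],
                  [5, 14, 8]])          # hypothetical counts
chi2, p, dof, expected = chi2_contingency(table)
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))   # Cramer's V
print(f"V = {v:.3f}, P = {p:.3f}")
```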

  4. Spent fuel and high level waste: Chemical durability and performance under simulated repository conditions. Results of a coordinated research project 1998-2004. Part 2: Results of a previously unpublished CRP: Performance of high level waste forms and packages under repository conditions. Results of a co-ordinated research project 1991-1998

    2007-07-01

    The objective of the CRP (Coordinated Research Project) on the 'Performance of High Level Waste Forms and Packages under Repository Conditions' was to contribute to the development and implementation of proper and sound technologies for HLW and spent fuel management. Special emphasis was given to the identification of various waste form properties and the study of their long term durability under simulated repository conditions. Another objective was to promote co-operation and the exchange of information between Member States on experimental work concerning the behaviour of waste forms. The CRP was composed of research contracts and agreements with Argentina, Australia, Belgium, Canada, China, Czech Republic, Finland, France, Germany, India, Japan, Russia, and the United States of America. The publication includes 14 individual contributions of the participants to the CRP, which are indexed separately.

  5. Simulating Radar Signals for Detection Performance Evaluation.

    1981-02-01

    ...incurring the computation costs usually associated with such simulations. With importance sampling one can modify the probability distribution of the... (remainder of the abstract lost to OCR)
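
    Although the abstract breaks off, the importance-sampling idea it introduces is standard: draw samples from a biased distribution that makes the rare detection event common, then reweight by the likelihood ratio. A minimal sketch for a threshold detector on Gaussian noise; all values are hypothetical, and this is not the report's simulation program:

        # Toy importance-sampling estimate of a rare false-alarm probability
        # P(X > t) for Gaussian noise X ~ N(0, 1) and a high threshold t.
        # Sampling from a mean-shifted density concentrates samples near the
        # threshold; the likelihood ratio corrects the bias.
        import numpy as np

        rng = np.random.default_rng(0)
        t = 5.0                  # detection threshold (rare under N(0,1))
        n = 100_000
        mu = t                   # tilt the sampling density to N(t, 1)

        x = rng.normal(mu, 1.0, n)            # draws from the biased density
        # likelihood ratio w = f(x)/g(x) for f = N(0,1), g = N(mu,1)
        w = np.exp(-x * mu + 0.5 * mu**2)
        est = np.mean((x > t) * w)

        print(f"IS estimate of P(X > {t}) = {est:.3e}")   # exact: ~2.87e-7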

  6. Simulations of depleted CMOS sensors for high-radiation environments

    Liu, J.; Bhat, S.; Breugnon, P.; Caicedo, I.; Chen, Z.; Degerli, Y.; Godiot-Basolo, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Hügging, F.; Krüger, H.; Moustakas, K.; Pangaud, P.; Rozanov, A.; Rymaszewski, P.; Schwemling, P.; Wang, M.; Wang, T.; Wermes, N.; Zhang, L.

    2017-01-01

    After the Phase II upgrade of the Large Hadron Collider (LHC), the increased luminosity requires a new upgraded Inner Tracker (ITk) for the ATLAS experiment. As a possible option for the ATLAS ITk, a new pixel detector based on High Voltage/High Resistivity CMOS (HV/HR CMOS) technology is under study. Meanwhile, a new CMOS pixel sensor is also under development for the tracker of the Circular Electron Positron Collider (CEPC). In order to explore the sensor electric properties, such as the breakdown voltage and charge collection efficiency, 2D/3D Technology Computer Aided Design (TCAD) simulations have been performed carefully for both of the above-mentioned prototypes. In this paper, the guard-ring simulation for a HV/HR CMOS sensor developed for the ATLAS ITk and the charge collection efficiency simulation for a CMOS sensor explored for the CEPC tracker will be discussed in detail. Some comparisons between the simulations and the latest measurements will also be addressed.

  7. High performance visual display for HENP detectors

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive control, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations.

  8. High performance visual display for HENP detectors

    McGuigan, M; Spiletic, J; Fine, V; Nevski, P

    2001-01-01

    A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphic development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of the detector and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactiv...

  9. High Fidelity In Situ Shoulder Dystocia Simulation

    Andrew Pelikan, MD

    2018-04-01

    Full Text Available Audience: Resident physicians, emergency department (ED) staff. Introduction: Precipitous deliveries are high acuity, low occurrence in most emergency departments. Shoulder dystocia is a rare but potentially fatal complication of labor that can be relieved by specific maneuvers that must be implemented in a timely manner. This simulation is designed to educate resident learners on the critical management steps in a shoulder dystocia presenting to the emergency department. A special aspect of this simulation is the unique utilization of the "Noelle" model with an instructing physician at the bedside maneuvering the fetus through the stations of labor and providing subtle adjustments to fetal positioning not possible through a mechanized model. A literature search of "shoulder dystocia simulation" consists primarily of obstetrics and midwifery journals, many of which utilize various mannequin models. None of the reviewed articles utilized a bedside provider maneuvering the fetus with the Noelle model, making this method unique. While the Noelle model is equipped with a remote-controlled motor that automatically rotates and delivers the baby either to the head or to the shoulders, can produce a turtle sign, and will prevent delivery of the baby until signaled to do so by the instructor, using the bedside-instructor method allows this simulation to be reproduced with less mechanistically advanced and lower cost models.1-5 Objectives: At the end of this simulation, learners will: (1) recognize impending delivery and mobilize appropriate resources (i.e., both obstetrics [OB] and NICU/pediatrics); (2) identify risk factors for shoulder dystocia based on history and physical; (3) recognize shoulder dystocia during delivery; (4) demonstrate maneuvers to relieve shoulder dystocia; (5) communicate with team members and nursing staff during resuscitation of a critically ill patient. Method: High-fidelity simulation. Topics: High fidelity, in situ, Noelle model.

  10. Alcohol consumption for simulated driving performance: A systematic review.

    Rezaee-Zavareh, Mohammad Saeid; Salamati, Payman; Ramezani-Binabaj, Mahdi; Saeidnejad, Mina; Rousta, Mansoureh; Shokraneh, Farhad; Rahimi-Movaghar, Vafa

    2017-06-01

    Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare simulated driving performance between two groups of drivers, one that had consumed alcohol and one that had not, using a systematic review. In this systematic review, electronic resources and databases including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between the two groups were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossings (LC) were considered the main outcome parameters. After title and abstract screening, the articles were enrolled for data extraction and evaluated for risk of bias. Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review had troublesome heterogeneity. Alcohol consumption may decrease simulated driving performance, compared with no alcohol consumption, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended. Copyright © 2017. Production and hosting by Elsevier B.V.

  11. Alcohol consumption for simulated driving performance: A systematic review

    Mohammad Saeid Rezaee-Zavareh; Payman Salamati; Mahdi Ramezani-Binabaj; Mina Saeidnejad; Mansoureh Rousta; Farhad Shokraneh; Vafa Rahimi-Movaghar

    2017-01-01

    Purpose: Alcohol consumption can lead to risky driving and increase the frequency of traffic accidents, injuries and mortalities. The main purpose of our study was to compare simulated driving performance between two groups of drivers, one that had consumed alcohol and one that had not, using a systematic review. Methods: In this systematic review, electronic resources and databases including Medline via Ovid SP, EMBASE via Ovid SP, PsycINFO via Ovid SP, PubMed, Scopus, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) via EBSCOhost were comprehensively and systematically searched. Randomized controlled clinical trials that compared simulated driving performance between the two groups were included. Lane position standard deviation (LPSD), mean of lane position deviation (MLPD), speed, mean of speed deviation (MSD), standard deviation of speed deviation (SDSD), number of accidents (NA) and line crossings (LC) were considered the main outcome parameters. After title and abstract screening, the articles were enrolled for data extraction and evaluated for risk of bias. Results: Thirteen papers were included in our qualitative synthesis. All included papers were classified as having a high risk of bias. Alcohol consumption mostly deteriorated the following performance outcomes, in descending order: SDSD, LPSD, speed, MLPD, LC and NA. Our systematic review had troublesome heterogeneity. Conclusion: Alcohol consumption may decrease simulated driving performance, compared with no alcohol consumption, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended.

  12. SEAscan 3.5: A simulator performance analyzer

    Dennis, T.; Eisenmann, S.

    1990-01-01

    SEAscan 3.5 is a personal computer based tool developed to analyze the dynamic performance of nuclear power plant training simulators. The system includes integrated features that support human-factored evaluation of simulator performance. In this paper, the program is described as a tool for the analysis of training simulator performance. The structure and operating characteristics of SEAscan 3.5 are described. The hardcopy documents it produces aid in verifying conformance to ANSI/ANS-3.5-1985.

  13. Behavioral Simulation and Performance Evaluation of Multi-Processor Architectures

    Ausif Mahmood

    1996-01-01

    Full Text Available The development of multi-processor architectures requires extensive behavioral simulations to verify the correctness of design and to evaluate its performance. A high level language can provide maximum flexibility in this respect if the constructs for handling concurrent processes and a time mapping mechanism are added. This paper describes a novel technique for emulating hardware processes involved in a parallel architecture such that an object-oriented description of the design is maintained. The communication and synchronization between hardware processes is handled by splitting the processes into their equivalent subprograms at the entry points. The proper scheduling of these subprograms is coordinated by a timing wheel which provides a time mapping mechanism. Finally, a high level language pre-processor is proposed so that the timing wheel and the process emulation details can be made transparent to the user.
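
    The timing wheel at the heart of the scheme can be sketched compactly: events are hashed into wheel slots by their due time, and each tick fires the events whose round counter has expired. A minimal Python sketch of the idea, with hypothetical process callbacks (the paper's own implementation targets a high level language pre-processor, not this code):

        # Minimal timing-wheel scheduler: callbacks are hashed into slots by
        # due time modulo the wheel size; each tick runs the events whose
        # remaining rotations have reached zero. Names and sizing illustrative.
        from collections import defaultdict

        class TimingWheel:
            def __init__(self, slots=8):
                self.slots = slots
                self.wheel = defaultdict(list)  # slot -> [(rounds, callback)]
                self.now = 0

            def schedule(self, delay, callback):
                target = self.now + delay
                self.wheel[target % self.slots].append(
                    [target // self.slots, callback])

            def tick(self):
                due = self.wheel[self.now % self.slots]
                ready = [e for e in due if e[0] == self.now // self.slots]
                for e in ready:
                    due.remove(e)
                    e[1](self.now)          # resume the emulated process
                self.now += 1

        w = TimingWheel()
        w.schedule(3, lambda t: print(f"process A resumes at t={t}"))
        w.schedule(10, lambda t: print(f"process B resumes at t={t}"))
        for _ in range(12):
            w.tick()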

  14. Simulation of High Quality Ultrasound Imaging

    Hemmsen, Martin Christian; Kortbek, Jacob; Nikolov, Svetoslav Ivanov

    2010-01-01

    ), and at Full Width at One-Hundredth Maximum (FWOHM) of 9 point spread functions resulting from evenly distributed point targets at depths ranging from 10 mm to 90 mm. The results are documented for a 64 channel system, using a 192 element linear array transducer model. A physical BK Medical 8804 transducer...... amplitude and phase compensation, the LR at FWOHM improves from 6.3 mm to 4.7 mm and is a factor of 2.2 better than DRF. This study has shown that individual element impulse response, phase, and amplitude deviations are important to include in simulated system performance evaluations. Furthermore...

  15. The path toward HEP High Performance Computing

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
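
    The basket/vector scheduling idea can be illustrated without any of the Geant-V machinery: gather particles that need the same treatment and advance them with one vectorized operation instead of a per-particle loop. A toy numpy sketch, with hypothetical "volumes" and step lengths, not the Geant-V transport code:

        # Toy basket/vector processing: particles sharing the same "geometry"
        # are gathered into a basket and advanced with a single vectorized
        # operation, so the hardware's vector units stay busy.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 1_000_000
        pos = rng.uniform(0.0, 1.0, n)       # particle positions
        vol = (pos > 0.5).astype(np.int32)   # which of two "volumes" each is in
        step = np.array([0.01, 0.02])        # step length per volume type

        # Gather particles into per-volume baskets; advance each basket at once.
        for v in (0, 1):
            basket = np.flatnonzero(vol == v)   # indices of particles in volume v
            pos[basket] += step[v]              # one vector operation per basket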

  16. Thermomechanical simulations and experimental validation for high speed incremental forming

    Ambrogio, Giuseppina; Gagliardi, Francesco; Filice, Luigino; Romero, Natalia

    2016-10-01

    Incremental sheet forming (ISF) consists in deforming only a small region of the workpiece through a punch driven by a NC machine. The drawback of this process is its slowness. In this study, a high speed variant has been investigated from both numerical and experimental points of view. The aim has been to design a thermomechanical FEM model able to reproduce the material behavior during the high speed process. An experimental campaign has been performed on a high speed CNC lathe to test process feasibility. The first results have shown that the material presents the same performance as in conventional speed ISF and, in some cases, better material behavior due to the temperature increment. An accurate numerical simulation has been performed to investigate the material behavior during the high speed process, substantially confirming the experimental evidence.

  17. Artificial neural network simulation of battery performance

    O'Gorman, C.C.; Ingersoll, D.; Jungst, R.G.; Paez, T.L.

    1998-12-31

    Although they appear deceptively simple, batteries embody a complex set of interacting physical and chemical processes. While the discrete engineering characteristics of a battery, such as the physical dimensions of the individual components, are relatively straightforward to define explicitly, the myriad chemical and physical processes, including their interactions, are much more difficult to represent accurately. Within this category are the diffusive and solubility characteristics of individual species, reaction kinetics and mechanisms of primary chemical species as well as intermediates, and growth and morphology characteristics of reaction products as influenced by environmental and operational use profiles. For this reason, development of analytical models that can consistently predict the performance of a battery has been only partially successful, even though significant resources have been applied to this problem. As an alternative approach, the authors have begun development of a non-phenomenological model for battery systems based on artificial neural networks. Both recurrent and non-recurrent forms of these networks have been successfully used to develop accurate representations of battery behavior. The connectionist normalized linear spline (CNLS) network has been implemented with a self-organizing layer to model a battery system with the generalized radial basis function net. Concurrently, efforts are under way to use the feedforward back propagation network to map the 'state' of a battery system. Because of the complexity of battery systems, accurate representation of the input and output parameters has proven to be very important. This paper describes these initial feasibility studies as well as the current models and makes comparisons between predicted and actual performance.
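
    As a flavor of the approach (not the authors' CNLS or radial basis function networks), a tiny feedforward network can be trained to map an operating condition to a battery output; everything below, including the synthetic discharge curve, is illustrative:

        # Minimal feedforward-net sketch: map depth of discharge to terminal
        # voltage. Synthetic data, one hidden layer, batch gradient descent.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 1.0, 64)[:, None]      # depth of discharge
        y = 2.1 - 0.3 * x - 0.5 * x**8              # synthetic discharge curve

        W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
        W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
        lr = 0.1

        for _ in range(5000):
            h = np.tanh(x @ W1 + b1)                # hidden layer
            pred = h @ W2 + b2
            err = pred - y
            # backpropagation for the squared-error loss
            gW2 = h.T @ err / len(x);  gb2 = err.mean(0)
            dh = (err @ W2.T) * (1 - h**2)
            gW1 = x.T @ dh / len(x);   gb1 = dh.mean(0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2

        print("max abs fit error:", np.abs(pred - y).max())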

  18. Comparison of performance of simulation models for floor heating

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  19. Building Performance Simulation for Sustainable Energy Use in Buildings

    Hensen, J.L.M.

    2010-01-01

    This paper aims to provide a general view of the background and current state of building performance simulation, which has the potential to deliver, directly or indirectly, substantial benefits to building stakeholders and to the environment. However the building simulation community faces many

  20. Building performance simulation for sustainable building design and operation

    Hensen, J.L.M.

    2011-01-01

    This paper aims to provide a general view of the background and current state of building performance simulation, which has the potential to deliver, directly or indirectly, substantial benefits to building stakeholders and to the environment. However the building simulation community faces many

  1. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work is aimed at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed that is slower than hardware execution by no more than a factor of 35 on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.

  2. Highly immersive virtual reality laparoscopy simulation: development and future aspects.

    Huber, Tobias; Wunderling, Tom; Paschold, Markus; Lang, Hauke; Kneist, Werner; Hansen, Christian

    2018-02-01

    Virtual reality (VR) applications with head-mounted displays (HMDs) have had an impact on information and multimedia technologies. The current work aimed to describe the process of developing a highly immersive VR simulation for laparoscopic surgery. We combined a VR laparoscopy simulator (LapSim) and a VR-HMD to create a user-friendly VR simulation scenario. Continuous clinical feedback was an essential aspect of the development process. We created an artificial VR (AVR) scenario by integrating the simulator video output with VR game components of figures and equipment in an operating room. We also created a highly immersive VR surrounding (IVR) by integrating the simulator video output with a 360° video of a standard laparoscopy scenario in the department's operating room. Clinical feedback led to optimization of the visualization, synchronization, and resolution of the virtual operating rooms (in both the IVR and the AVR). Preliminary testing results revealed that individuals experienced a high degree of exhilaration and presence, with rare events of motion sickness. The technical performance showed no significant difference compared to that achieved with the standard LapSim. Our results provided a proof of concept for the technical feasibility of a custom highly immersive VR-HMD setup. Future technical research is needed to improve the visualization, immersion, and capability of interacting within the virtual scenario.

  3. A New Model to Simulate Energy Performance of VRF Systems

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

    This paper presents a new model to simulate energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperature in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with different specifications of each indoor unit so that the modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in Fresno climate followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. Actual performance of the VRF systems for real
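
    Capacity modifier functions of this kind are commonly expressed in EnergyPlus as biquadratic performance curves; a sketch evaluating one, where the coefficients and operating temperatures are hypothetical placeholders rather than values from the paper:

        # EnergyPlus-style biquadratic performance curve, as used for VRF
        # capacity modifiers. In the new model the independent variables can
        # include the evaporating/condensing temperature rather than only
        # room and outdoor air temperatures. Coefficients are placeholders.
        def biquadratic(c, x, y):
            """c0 + c1*x + c2*x^2 + c3*y + c4*y^2 + c5*x*y"""
            c0, c1, c2, c3, c4, c5 = c
            return c0 + c1*x + c2*x*x + c3*y + c4*y*y + c5*x*y

        coeffs = (0.50, 0.02, 1e-4, 0.01, -2e-4, 5e-5)  # placeholder values

        t_room_wb = 19.4    # room wet-bulb temperature [C]
        t_evap = 7.0        # evaporating temperature [C] (new model input)
        cap_mod = biquadratic(coeffs, t_room_wb, t_evap)
        rated_capacity_kw = 28.0
        print(f"available cooling capacity: {rated_capacity_kw * cap_mod:.1f} kW")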

  4. Alcohol consumption for simulated driving performance: A systematic review

    Mohammad Saeid Rezaee-Zavareh

    2017-06-01

    Conclusion: Alcohol consumption may decrease simulated driving performance, compared with no alcohol consumption, via changes in SDSD, LPSD, speed, MLPD, LC and NA. More well-designed randomized controlled clinical trials are recommended.

  5. Learning Apache Solr high performance

    Mohan, Surendra

    2014-01-01

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to deployment of ZooKeeper applications. This book is ideal for Apache Solr developers who want to learn different techniques to optimize Solr performance with utmost efficiency, along with effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.

  6. High-performance composite chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  7. EDITORIAL: High performance under pressure High performance under pressure

    Demming, Anna

    2011-11-01

    nanoelectromechanical systems. Researchers in China exploit the coupling between piezoelectric and semiconducting properties of ZnO in an optimised diode device design [6]. They used a Schottky rather than an ohmic contact to depress the off current. In addition they used ZnO nanobelts that have dominantly polar surfaces instead of [0001] ZnO nanowires to enhance the on current under the small applied forces obtained by using an atomic force microscopy tip. The nanobelts have potential for use in random access memory devices. Much of the success in applying piezoresistivity in device applications stems from a deepening understanding of the mechanisms behind the process. A collaboration of researchers in the USA and China have proposed a new criterion for identifying the carrier type of individual ZnO nanowires based on the piezoelectric output of a nanowire when it is mechanically deformed by a conductive atomic force microscopy tip in contact mode [7]. The p-type/n-type shell/core nanowires give positive piezoelectric outputs, while the n-type nanowires produce negative piezoelectric outputs. In this issue Zhong Lin Wang and colleagues in Italy and the US report theoretical investigations into the piezoresistive behaviour of ZnO nanowires for energy harvesting. The work develops previous research on the ability of vertically aligned ZnO nanowires under uniaxial compression to power a nanodevice, in particular a pH sensor [8]. Now the authors have used finite element simulations to study the system. Among their conclusions they find that, for typical geometries and donor concentrations, the length of the nanowire does not significantly influence the maximum output piezopotential because the potential mainly drops across the tip. This has important implications for low-cost, CMOS- and microelectromechanical-systems-compatible fabrication of nanogenerators. The simulations also reveal the influence of the dielectric surrounding the nanowire on the output piezopotential, especially for

  8. Macrofilament simulation of high current beam transport

    Hayden, R.J.; Jakobson, M.J.

    1985-01-01

    Macrofilament simulation of high current beam transport through a series of solenoids has been used to investigate the sensitivity of such calculations to the initial beam distribution and to the number of filaments used in the simulation. The transport line was tuned to approximately 105° phase advance per cell at zero current with a tune depression of 65° due to the space charge. Input distributions with the filaments randomly uniform throughout a four dimensional ellipsoid and K-V input distributions have been studied. The behavior of the emittance is similar to that published for quadrupoles with like tune depression. The emittance demonstrated little growth in the first twelve solenoids, a rapid rate of growth for the next twenty, and a subsequent slow rate of growth. A few hundred filaments were sufficient to show the character of the instability. The number of filaments utilized is an order of magnitude fewer than has been utilized previously for similar instabilities. The previously published curves for simulations with less than a thousand particles show a rather constant emittance growth. If the solenoid transport line magnetic field is increased a few percent, emittance growth curves are obtained not unlike those curves. Collision growth effects are less important than indicated in the previously published results for quadrupoles

  9. High-Performance Composite Chocolate

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-01-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with…

  10. Toward High-Performance Organizations.

    Lawler, Edward E., III

    2002-01-01

    Reviews management changes that companies have made over time in adopting or adapting four approaches to organizational performance: employee involvement, total quality management, re-engineering, and knowledge management. Considers future possibilities and defines a new view of what constitutes effective organizational design in management.…

  11. Simulation and performance of brushless DC motor actuators

    Gerba, Alex

    1985-01-01

    The simulation model for a Brushless D.C. Motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed and sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made on both time response waveforms and average performance characteristics. These preliminary results indicate good ...

  12. Analysis of TIMS performance subjected to simulated wind blast

    Jaggi, S.; Kuo, S.

    1992-01-01

    The results of the performance of the Thermal Infrared Multispectral Scanner (TIMS) when it is subjected to various wind conditions in the laboratory are described. Various wind conditions were simulated using a 24 inch fan or combinations of air jet streams blowing toward either or both of the blackbody surfaces. The fan was used to simulate a large volume of air flow at moderate speeds (up to 30 mph). The small diameter air jets were used to probe TIMS system response in reaction to localized wind perturbations. The maximum nozzle speed of the air jet was 60 mph. A range of wind directions and speeds was set up in the laboratory during the test. The majority of the wind tests were conducted under ambient conditions with the room temperature fluctuating no more than 2 C. The temperature of the high speed air jet was determined to be within 1 C of the room temperature. TIMS response was recorded on analog tape. Additional thermistor readouts of the blackbody temperatures and thermocouple readout of the ambient temperature were recorded manually to be compared with the housekeeping data recorded on the tape. Additional tests were conducted under conditions of elevated and cooled room temperatures. The room temperature was varied between 19.5 and 25.5 C in these tests. The calibration parameters needed for quantitative analysis of TIMS data were first plotted on a scanline-by-scanline basis. These parameters are the low and high blackbody temperature readings as recorded by the TIMS and their corresponding digitized count values. Using these values, the system transfer equation was calculated. This equation allows us to compute the flux for any video count by computing the slope and intercept of the straight line that relates the flux to the digital count. The actual video of the target (the lab floor in this case) was then compared with a simulated target. This simulated target was assumed to be a blackbody with an emissivity of 0.95, and the temperature was
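
    The two-point transfer equation described above is easy to make concrete: the two blackbody references define the slope and intercept of the line relating flux to digital count. A minimal sketch with placeholder numbers, not TIMS housekeeping values:

        # Two-point radiometric calibration: the low and high blackbody
        # references define a straight line relating flux to digital count;
        # any video count can then be converted to flux.
        def transfer(count_lo, flux_lo, count_hi, flux_hi):
            slope = (flux_hi - flux_lo) / (count_hi - count_lo)
            intercept = flux_lo - slope * count_lo
            return lambda count: slope * count + intercept

        flux_of = transfer(count_lo=40, flux_lo=8.5, count_hi=210, flux_hi=12.3)
        print(flux_of(128))   # flux for an arbitrary video count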

  13. Functional High Performance Financial IT

    Berthold, Jost; Filinski, Andrzej; Henglein, Fritz

    2011-01-01

    The world of finance faces the computational performance challenge of massively expanding data volumes, extreme response time requirements, and compute-intensive complex (risk) analyses. Simultaneously, new international regulatory rules require considerably more transparency and external auditability of financial institutions, including their software systems. To top it off, increased product variety and customisation necessitates shorter software development cycles and higher development productivity. In this paper, we report about HIPERFIT, a recently established strategic research center at the University of Copenhagen that attacks this triple challenge of increased performance, transparency and productivity in the financial sector by a novel integration of financial mathematics, domain-specific language technology, parallel functional programming, and emerging massively parallel hardware.

  14. Alternative High-Performance Ceramic Waste Forms

    Sundaram, S. K. [Alfred Univ., NY (United States)

    2017-02-01

    This final report (M5NU-12-NY-AU # 0202-0410) summarizes the results of the project titled "Alternative High-Performance Ceramic Waste Forms," funded in FY12 by the Nuclear Energy University Program (NEUP Project # 12-3809) and led by Alfred University in collaboration with Savannah River National Laboratory (SRNL). The overall focus of the project is to advance fundamental understanding of crystalline ceramic waste forms and to demonstrate their viability as alternative waste forms to borosilicate glasses. We processed single- and multiphase hollandite waste forms based on simulated waste stream compositions provided by SRNL based on the advanced fuel cycle initiative (AFCI) aqueous separation process developed in the Fuel Cycle Research and Development (FCR&D) program. For multiphase simulated waste forms, oxide and carbonate precursors were mixed together via ball milling with deionized water using zirconia media in a polyethylene jar for 2 h. The slurry was dried overnight and then separated from the media. The blended powders were then subjected to melting or spark plasma sintering (SPS) processes. Microstructural evolution and phase assemblages of these samples were studied using x-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive analysis of x-rays (EDAX), wavelength dispersive spectrometry (WDS), transmission electron microscopy (TEM), selected area x-ray diffraction (SAXD), and electron backscatter diffraction (EBSD). These results showed that the processing methods have a significant effect on the microstructure and thus the performance of these waste forms. The Ce substitution into zirconolite and pyrochlore materials was investigated using a combination of experimental (in situ XRD and x-ray absorption near edge structure (XANES)) and modeling techniques to study these single phases independently. In zirconolite materials, a transition from the 2M to the 4M polymorph was observed with increasing Ce content. The resulting

  15. High performance Mo adsorbent PZC

    Anon,

    1998-10-01

    We have developed Mo adsorbents for natural Mo(n,γ)99Mo-99mTc generators. Among them, we called the highest performance adsorbent PZC, which could adsorb about 250 mg-Mo/g. In this report, we will show the structure, the adsorption mechanism of Mo, and the other useful properties of PZC relevant to examinations of Mo adsorption and elution of 99mTc. (author)

  16. Improving the performance of a filling line based on simulation

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes a method of improving the performance of a filling line based on simulation. This study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
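
    The financial step is conventional discounted cash flow arithmetic; a minimal sketch of the NPV and simple ROI of an improvement scenario, with all monetary figures and the discount rate as hypothetical placeholders:

        # NPV/ROI check of an improvement scenario, in the spirit of the
        # paper's financial analysis: discount yearly net cash flows
        # (after depreciation and CIT) at an inflation-adjusted rate.
        def npv(rate, cash_flows):
            """cash_flows[0] is the (negative) investment at t=0."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

        investment = -120_000.0
        yearly_net = [45_000.0, 45_000.0, 45_000.0, 45_000.0]  # after-tax gains
        rate = 0.08                                            # discount rate

        value = npv(rate, [investment] + yearly_net)
        roi = (sum(yearly_net) + investment) / -investment
        print(f"NPV = {value:,.0f}, simple ROI = {roi:.1%}")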

  17. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).

  18. Indoor Air Quality in High Performance Schools

    High performance schools are facilities that improve the learning environment while saving energy, resources, and money. The key is understanding the lifetime value of high performance schools and effectively managing priorities, time, and budget.

  19. A High-Fidelity Batch Simulation Environment for Integrated Batch and Piloted Air Combat Simulation Analysis

    Goodrich, Kenneth H.; McManus, John W.; Chappell, Alan R.

    1992-01-01

    A batch air combat simulation environment known as the Tactical Maneuvering Simulator (TMS) is presented. The TMS serves as a tool for developing and evaluating tactical maneuvering logics. The environment can also be used to evaluate the tactical implications of perturbations to aircraft performance or supporting systems. The TMS is capable of simulating air combat between any number of engagement participants, with practical limits imposed by computer memory and processing power. Aircraft are modeled using equations of motion, control laws, aerodynamics and propulsive characteristics equivalent to those used in high-fidelity piloted simulation. Databases representative of a modern high-performance aircraft with and without thrust-vectoring capability are included. To simplify the task of developing and implementing maneuvering logics in the TMS, an outer-loop control system known as the Tactical Autopilot (TA) is implemented in the aircraft simulation model. The TA converts guidance commands issued by computerized maneuvering logics in the form of desired angle-of-attack and wind axis-bank angle into inputs to the inner-loop control augmentation system of the aircraft. This report describes the capabilities and operation of the TMS.

  20. Predictors of laparoscopic simulation performance among practicing obstetrician gynecologists.

    Mathews, Shyama; Brodman, Michael; D'Angelo, Debra; Chudnoff, Scott; McGovern, Peter; Kolev, Tamara; Bensinger, Giti; Mudiraj, Santosh; Nemes, Andreea; Feldman, David; Kischak, Patricia; Ascher-Walsh, Charles

    2017-11-01

    While simulation training has been established as an effective method for improving laparoscopic surgical performance in surgical residents, few studies have focused on its use for attending surgeons, particularly in obstetrics and gynecology. Surgical simulation may have a role in improving and maintaining proficiency in the operating room for practicing obstetrician gynecologists. We sought to determine whether parameters of performance on validated laparoscopic virtual simulation tasks correlate with surgical volume and characteristics of practicing obstetricians and gynecologists. All gynecologists with laparoscopic privileges (n = 347) from 5 academic medical centers in New York City were required to complete a laparoscopic surgery simulation assessment. The physicians took a presimulation survey gathering self-reported characteristics and then performed 3 basic skills tasks (enforced peg transfer, lifting/grasping, and cutting) on the LapSim virtual reality laparoscopic simulator (Surgical Science Ltd, Gothenburg, Sweden). The associations between simulation outcome scores (time, efficiency, and errors) and self-rated clinical skills measures (self-rated laparoscopic skill score or surgical volume category) were examined with regression models. The average number of laparoscopic procedures per month was a significant predictor of total time on all 3 tasks (P = .001 for peg transfer; P = .041 for lifting and grasping; P ... for cutting). Given simulation performance as it correlates to active physician practice, further studies may help assess skill and individualize training to maintain skill levels as case volumes fluctuate. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1977-01-01

    Inertial confinement fusion (ICF) designs are considered which may have very high gains (approximately 1000) and low power requirements (<100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  2. High performance inertial fusion targets

    Nuckolls, J.H.; Bangerter, R.O.; Lindl, J.D.; Mead, W.C.; Pan, Y.L.

    1978-01-01

    Inertial confinement fusion (ICF) target designs are considered which may have very high gains (approximately 1000) and low power requirements (< 100 TW) for input energies of approximately one megajoule. These include targets having very low density shells, ultra thin shells, central ignitors, magnetic insulation, and non-ablative acceleration

  3. High performance nuclear fuel element

    Mordarski, W.J.; Zegler, S.T.

    1980-01-01

    A fuel-pellet composition is disclosed for use in fast breeder reactors. Uranium carbide particles are mixed with a powder of uranium-plutonium carbides having a stable microstructure. The resulting mixture is formed into fuel pellets. The pellets thus produced exhibit a relatively low propensity to swell while maintaining a high density

  4. High Performance JavaScript

    Zakas, Nicholas

    2010-01-01

    If you're like most developers, you rely heavily on JavaScript to build interactive and quick-responding web applications. The problem is that all of those lines of JavaScript code can slow down your apps. This book reveals techniques and strategies to help you eliminate performance bottlenecks during development. You'll learn how to improve execution time, downloading, interaction with the DOM, page life cycle, and more. Yahoo! frontend engineer Nicholas C. Zakas and five other JavaScript experts -- Ross Harmes, Julien Lecomte, Steven Levithan, Stoyan Stefanov, and Matt Sweeney -- demonstra

  5. High-Fidelity Roadway Modeling and Simulation

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

    Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals, and as a result, the generated roads can support not only general applications such as games and simulations in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates in real time 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shape files into 3D roads and the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
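
    One core step the paper describes, turning a 2D centerline into 3D road geometry under design rules, can be sketched simply: extrude a crowned cross-section along the polyline and drape it on the terrain. The terrain function, cross slope, and lane width below are hypothetical placeholders, not the paper's pipeline:

        # Extrude a simple crowned cross-section (a civil-engineering design
        # rule) along a 2D centerline draped on a terrain height function.
        import numpy as np

        def terrain_height(x, y):                 # placeholder terrain model
            return 0.05 * np.sin(0.1 * x) * np.cos(0.1 * y)

        centerline = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, 5.0]])
        lane_width, crown = 3.5, 0.02             # 2% cross-slope design rule

        vertices = []
        for i in range(len(centerline) - 1):
            p, q = centerline[i], centerline[i + 1]
            d = (q - p) / np.linalg.norm(q - p)   # unit tangent
            n = np.array([-d[1], d[0]])           # unit normal (left)
            for s in (-lane_width, 0.0, lane_width):  # edge, crown, edge
                x, y = p + s * n
                z = terrain_height(x, y) + crown * (lane_width - abs(s))
                vertices.append((x, y, z))
        print(len(vertices), "cross-section vertices")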

  6. Carpet Aids Learning in High Performance Schools

    Hurd, Frank

    2009-01-01

    The Healthy and High Performance Schools Act of 2002 has set specific federal guidelines for school design, and developed a federal/state partnership program to assist local districts in their school planning. According to the Collaborative for High Performance Schools (CHPS), high-performance schools are, among other things, healthy, comfortable,…

  7. Simulation of Oscillations in High Power Klystrons

    Ko, K

    2003-01-01

    Spurious oscillations can seriously limit a klystron's performance, preventing it from reaching its design specifications. These are modes with frequencies different from the drive frequency, and have been found to be localized in various regions of the tube. If left unsuppressed, such oscillations can be driven to large amplitudes by the beam. As a result, the main output signal may suffer from amplitude and phase instabilities which lead to pulse shortening or reduction in power generation efficiency, as observed during the testing of the first 150 MW S-band klystron, which was designed and built at SLAC as part of an international collaboration with DESY. We present efficient methods to identify suspicious modes and then test their possibility of oscillation. In contrast to [3], where each beam-loaded quality factor Qbl was calculated by time-consuming PIC simulations, only tracking simulations, with much reduced CPU time and less sensitivity to noise, are now applied. This enables the determination of Qbl for larg...

  8. High order dark wavefront sensing simulations

    Ragazzoni, Roberto; Arcidiacono, Carmelo; Farinato, Jacopo; Viotto, Valentina; Bergomi, Maria; Dima, Marco; Magrin, Demetrio; Marafatto, Luca; Greggio, Davide; Carolo, Elena; Vassallo, Daniele

    2016-07-01

    Dark wavefront sensing takes shape following quantum mechanics concepts in which one is able to "see" an object in one path of a two-arm interferometer using an as low as desired amount of light actually "hitting" the occulting object. A theoretical way to achieve such a goal, but in the realm of wavefront sensing, is represented by a combination of two unequal-beam interferometers sharing the same incoming light, whose difference in path length is continuously adjusted in order to show different signals for different signs of the incoming perturbation. Furthermore, in order to obtain this in white light, the path difference should be properly adjusted versus the wavelength used. While we incidentally describe how this could be achieved in a true optomechanical setup, we focus our attention on the simulation of a hypothetical "perfect" dark wavefront sensor of this kind in which white light compensation is accomplished in a perfect manner and the gain is selectable in a numerical fashion. Although this would represent a sort of idealized dark wavefront sensor that would probably be hard to match in real glass and metal, it would also give a firm indication of the maximum achievable gain or, in other words, of the prize for achieving such a device. Details of how the simulation code works and first numerical results are outlined, along with the perspective for an in-depth analysis of the performance and its extension to more realistic situations, including various sources of additional noise.

  9. Application of Nuclear Power Plant Simulator for High School Student Training

    Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2014-10-15

    In this context, two lectures on a nuclear power plant simulator and practical training were provided to high school students in 2014. The education contents were composed of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but considers the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were performed at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education had helped them become interested in nuclear power plants.

  10. Application of Nuclear Power Plant Simulator for High School Student Training

    Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung

    2014-01-01

    In this context, two lectures on a nuclear power plant simulator and practical training were provided to high school students in 2014. The education contents were composed of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but considers the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were performed at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education had helped them become interested in nuclear power plants.

  11. Generating performance portable geoscientific simulation code with Firedrake (Invited)

    Ham, D. A.; Bercea, G.; Cotter, C. J.; Kelly, P. H.; Loriant, N.; Luporini, F.; McRae, A. T.; Mitchell, L.; Rathgeber, F.

    2013-12-01

    This presentation will demonstrate how a change in simulation programming paradigm can be exploited to deliver sophisticated simulation capability which is far easier to programme than are conventional models, is capable of exploiting different emerging parallel hardware, and is tailored to the specific needs of geoscientific simulation. Geoscientific simulation represents a grand challenge computational task: many of the largest computers in the world are tasked with this field, and the requirements of resolution and complexity of scientists in this field are far from being sated. However, single thread performance has stalled, even sometimes decreased, over the last decade, and has been replaced by ever more parallel systems: both as conventional multicore CPUs and in the emerging world of accelerators. At the same time, the needs of scientists to couple ever-more complex dynamics and parametrisations into their models makes the model development task vastly more complex. The conventional approach of writing code in low level languages such as Fortran or C/C++ and then hand-coding parallelism for different platforms by adding library calls and directives forces the intermingling of the numerical code with its implementation. This results in an almost impossible set of skill requirements for developers, who must simultaneously be domain science experts, numericists, software engineers and parallelisation specialists. Even more critically, it requires code to be essentially rewritten for each emerging hardware platform. Since new platforms are emerging constantly, and since code owners do not usually control the procurement of the supercomputers on which they must run, this represents an unsustainable development load. The Firedrake system, conversely, offers the developer the opportunity to write PDE discretisations in the high-level mathematical language UFL from the FEniCS project (http://fenicsproject.org). Non-PDE model components, such as parametrisations
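
    For context, this is what the programming model looks like: a Poisson problem stated as its weak form in UFL, with Firedrake generating and executing the low-level parallel code. A standard minimal example of Firedrake usage, not taken from the presentation itself:

        # Minimal Firedrake/UFL example: a Poisson problem written at the
        # level of its weak form; Firedrake compiles it to parallel code.
        from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                               TestFunction, Function, Constant, DirichletBC,
                               inner, grad, dx, solve)

        mesh = UnitSquareMesh(16, 16)
        V = FunctionSpace(mesh, "CG", 1)

        u, v = TrialFunction(V), TestFunction(V)
        f = Constant(1.0)
        a = inner(grad(u), grad(v)) * dx      # bilinear form in UFL
        L = f * v * dx                        # linear form

        uh = Function(V)
        bc = DirichletBC(V, 0.0, "on_boundary")
        solve(a == L, uh, bcs=[bc])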

  12. Performance Simulations for a Spaceborne Methane Lidar Mission

    Kiemle, C.; Kawa, Stephan Randolph; Quatrevalet, Mathieu; Browell, Edward V.

    2014-01-01

    Future spaceborne lidar measurements of key anthropogenic greenhouse gases are expected to close current observational gaps, particularly over remote, polar, and aerosol-contaminated regions, where current in situ and passive remote sensing techniques have difficulties. For methane, a "Methane Remote Lidar Mission" was proposed by Deutsches Zentrum fuer Luft- und Raumfahrt and Centre National d'Etudes Spatiales in the frame of a German-French climate monitoring initiative. Simulations assess the performance of this mission with the help of Moderate Resolution Imaging Spectroradiometer and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations data on the Earth's surface albedo and atmospheric optical depth. These are key environmental parameters for integrated path differential absorption lidar, which uses the surface backscatter to measure the total atmospheric methane column. Results show that a lidar with an average optical power of 0.45 W at 1.6 µm wavelength and a telescope diameter of 0.55 m, installed on a low Earth orbit platform (506 km), will measure methane columns at precisions of 1.2%, 1.7%, and 2.1% over land, water, and snow or ice surfaces, respectively, for monthly aggregated measurement samples within areas of 50 × 50 km². Globally, the mean precision for the simulated year 2007 is 1.6%, with a standard deviation of 0.7%. At high latitudes, the lower reflectance of snow and ice is compensated by denser measurements, owing to the orbital pattern. Over key methane source regions such as densely populated areas, boreal and tropical wetlands, or permafrost, our simulations show that the measurement precision will be between 1 and 2%.
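    For context, integrated path differential absorption (IPDA) lidar derives the column from the ratio of surface echoes at an on-line and an off-line wavelength. In the standard formulation (stated here as background, not taken from the paper itself), the differential absorption optical depth is

        $$ \tau_{\mathrm{diff}} \;=\; \frac{1}{2}\,\ln\!\left(\frac{P_{\mathrm{off}}\,E_{\mathrm{on}}}{P_{\mathrm{on}}\,E_{\mathrm{off}}}\right), $$

    where P_on, P_off are the received surface-echo powers and E_on, E_off the transmitted pulse energies. The column-averaged methane mixing ratio is proportional to this quantity, which is why surface albedo (through the echo power) and atmospheric optical depth are the key environmental parameters in the simulations.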

  13. High Power Flex-Propellant Arcjet Performance

    Litchford, Ron J.

    2011-01-01

    A MW-class electrothermal arcjet based on a water-cooled, wall-stabilized, constricted arc discharge configuration was subjected to extensive performance testing using hydrogen and simulated ammonia propellants with the deliberate aim of advancing the technology readiness level for potential space propulsion applications. The breadboard design incorporates alternating conductor/insulator wafers to form a discharge barrel enclosure with a 2.5-cm internal bore diameter and an overall length of approximately 1 meter. Swirling propellant flow is introduced into the barrel, and a DC arc discharge mode is established between a backplate tungsten cathode button and a downstream ring-anode/spin-coil assembly. The arc-heated propellant then enters a short mixing plenum and is accelerated through a converging-diverging graphite nozzle. This innovative design configuration differs substantially from conventional arcjet thrusters, in which the throat functions as constrictor and the expansion nozzle serves as the anode, and permits the attainment of an equilibrium sonic throat (EST) condition. During the test program, applied electrical input power was varied between 0.5 and 1 MW, with hydrogen and simulated ammonia flow rates in the ranges of 4-12 g/s and 15-35 g/s, respectively. The ranges of investigated specific input energy therefore fell between 50-250 MJ/kg for hydrogen and 10-60 MJ/kg for ammonia. In both cases, observed arc efficiencies were between 40 and 60 percent, as determined via a simple heat balance method based on electrical input power and coolant water calorimeter measurements. These experimental results were found to be in excellent agreement with theoretical chemical equilibrium predictions, thereby validating the EST assumption and enabling the utilization of standard TDK nozzle expansion analyses to reliably infer baseline thruster performance characteristics. Inferred specific impulse performance accounting for recombination kinetics during the expansion process
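    The quoted specific input energy ranges follow directly from power divided by propellant mass flow rate; as a quick check of the hydrogen numbers,

        $$ E_{\mathrm{spec}} = \frac{P}{\dot m}, \qquad \frac{1\,\mathrm{MW}}{4\,\mathrm{g/s}} = 250\,\mathrm{MJ/kg}, \qquad \frac{0.5\,\mathrm{MW}}{12\,\mathrm{g/s}} \approx 42\,\mathrm{MJ/kg}, $$

    which brackets the stated 50-250 MJ/kg hydrogen range.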

  14. High performance soft magnetic materials

    2017-01-01

    This book provides comprehensive coverage of the current state-of-the-art in soft magnetic materials and related applications, with particular focus on amorphous and nanocrystalline magnetic wires and ribbons and sensor applications. Expert chapters cover preparation, processing, tuning of magnetic properties, modeling, and applications. Cost-effective soft magnetic materials are required in a range of industrial sectors, such as magnetic sensors and actuators, microelectronics, cell phones, security, automobiles, medicine, health monitoring, aerospace, informatics, and electrical engineering. This book presents both fundamentals and applications to enable academic and industry researchers to pursue further developments of these key materials. This highly interdisciplinary volume represents essential reading for researchers in materials science, magnetism, electrodynamics, and modeling who are interested in working with soft magnets. Covers magnetic microwires, sensor applications, amorphous and nanocrystalli...

  15. High performance polyethylene nanocomposite fibers

    A. Dorigato

    2012-12-01

    A high-density polyethylene (HDPE) matrix was melt compounded with 2 vol% of dimethyldichlorosilane-treated fumed silica nanoparticles. Nanocomposite fibers were prepared by melt spinning through a co-rotating twin screw extruder and drawing at 125°C in air. The thermo-mechanical and morphological properties of the resulting fibers were then investigated. The introduction of nanosilica improved the drawability of the fibers, allowing the achievement of higher draw ratios with respect to the neat matrix. The elastic modulus and creep stability of the fibers were remarkably improved upon nanofiller addition, with retention of the pristine tensile properties at break. Transmission electron microscope (TEM) images evidenced that the original morphology of the silica aggregates was disrupted by the applied drawing.

  16. Computational Fluid Dynamics and Building Energy Performance Simulation

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  17. HIGH-PERFORMANCE COATING MATERIALS

    SUGAMA,T.

    2007-01-01

    Corrosion, erosion, oxidation, and fouling by scale deposits pose critical issues in selecting the metal components used at geothermal power plants operating at brine temperatures up to 300 C. Replacing these components is very costly and time consuming. Currently, components made of titanium alloy and stainless steel are commonly employed for dealing with these problems. However, another major consideration in using these metals is not only that they are considerably more expensive than carbon steel, but also that the corrosion-preventing passive oxide layers that develop on their outermost surfaces are susceptible to reactions with brine-induced scales, such as silicate, silica, and calcite. Such reactions lead to the formation of strong interfacial bonds between the scales and oxide layers, causing the accumulation of multiple layers of scales and the impairment of the plant components' function and efficacy; furthermore, removing them entails a substantial amount of time. This cleaning operation, essential for reusing the components, is one of the factors increasing the plant's maintenance costs. If inexpensive carbon steel components could be coated and lined with cost-effective materials that are stable at high hydrothermal temperatures and resist corrosion, oxidation, and fouling, this would improve the power plant's economics by engendering a considerable reduction in capital investment and a decrease in the costs of operations and maintenance through optimized maintenance schedules.

  18. SLC injector simulation and tuning for high charge transport

    Yeremian, A.D.; Miller, R.H.; Clendenin, J.E.; Early, R.A.; Ross, M.C.; Turner, J.L.; Wang, J.W.

    1992-08-01

    We have simulated the SLC injector from the thermionic gun through the first accelerating section and used the resulting parameters to tune the injector for optimum performance and high charge transport. Simulations are conducted using PARMELA, a three-dimensional ray-trace code with a two-dimensional space-charge model. The magnetic field profile due to the existing magnetic optics is calculated using POISSON, while SUPERFISH is used to calculate the space harmonics of the various bunchers and the accelerator cavities. The initial beam conditions in the PARMELA code are derived from the EGUN model of the gun. The resulting injector parameters from the PARMELA simulation are used to prescribe experimental settings of the injector components. The experimental results are in agreement with the results of the integrated injector model

  19. SLC injector simulation and tuning for high charge transport

    Yeremian, A.D.; Miller, R.H.; Clendenin, J.E.; Early, R.A.; Ross, M.C.; Turner, J.L.; Wang, J.W.

    1992-01-01

    We have simulated the SLC injector from the thermionic gun through the first accelerating section and used the resulting parameters to tune the injector for optimum performance and high charge transport. Simulations are conducted using PARMELA, a three-dimensional ray-trace code with a two-dimensional space-charge model. The magnetic field profile due to the existing magnetic optics is calculated using POISSON, while SUPERFISH is used to calculate the space harmonics of the various bunchers and the accelerator cavities. The initial beam conditions in the PARMELA code are derived from the EGUN model of the gun. The resulting injector parameters from the PARMELA simulation are used to prescribe experimental settings of the injector components. The experimental results are in agreement with the results of the integrated injector model. (Author) 5 figs., 7 refs

  20. MDT Performance in a High Rate Background Environment

    Aleksa, Martin; Hessey, N P; Riegler, W

    1998-01-01

    A Cs137 gamma source with different lead filters in the SPS beam-line X5 has been used to simulate the ATLAS background radiation. This note shows the impact of high background rates on the MDT efficiency and resolution for three kinds of pulse shaping and compares the results with GARFIELD simulations. Furthermore, it explains how the performance can be improved by time-slewing corrections and double-track separation.

  1. Impact of reactive settler models on simulated WWTP performance

    Gernaey, Krist; Jeppsson, Ulf; Batstone, Damien J.

    2006-01-01

    The impact of reactive settler models on simulated WWTP performance is evaluated for an ASM1 case study. Simulations with a whole-plant model including the non-reactive Takacs settler model are used as a reference and are compared to simulation results for two reactive settler models. The first is a return-sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takacs settler. The second is a fully reactive ASM1 Takacs settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance for constant (steady-state) and dynamic influent conditions, respectively. The oxygen/nitrate return-sludge model block predicts a 10% improvement of N removal performance under dynamic conditions and might be the better modelling option for ASM1 plants: it is computationally more efficient and does not overrate the importance of decay processes in the settler.

  2. Distributed dynamic simulations of networked control and building performance applications.

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum possible energy consumption; this approach is generally referred to as a Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and to improve the functions of BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling of two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.

  3. MODELING, SIMULATION AND PERFORMANCE STUDY OF GRID-CONNECTED PHOTOVOLTAIC ENERGY SYSTEM

    Nagendra K; Karthik J; Keerthi Rao C; Kumar Raja Pemmadi

    2017-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic energy system and a performance study using MATLAB/Simulink. The photovoltaic energy system is considered in three main parts: the PV model, the power conditioning system, and the grid interface. The photovoltaic model is interconnected with the grid through full-scale power electronic devices. The simulation is conducted on the PV energy system at normal temperature and at constant load using MATLAB.
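    The abstract does not specify how the PV block is modelled; the usual choice in MATLAB/Simulink studies of this kind is the single-diode model, shown here for orientation only:

        $$ I \;=\; I_{ph} \;-\; I_{0}\left[\exp\!\left(\frac{V + I R_{s}}{n V_{t}}\right) - 1\right] \;-\; \frac{V + I R_{s}}{R_{sh}}, $$

    where I_ph is the photocurrent, I_0 the diode saturation current, R_s and R_sh the series and shunt resistances, n the diode ideality factor, and V_t the thermal voltage.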

  4. Delivering high performance BWR fuel reliably

    Schardt, J.F.

    1998-01-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  5. The effects of bedrest on crew performance during simulated shuttle reentry. Volume 2: Control task performance

    Jex, H. R.; Peters, R. A.; Dimarco, R. J.; Allen, R. W.

    1974-01-01

    A simplified space shuttle reentry simulation performed on the NASA Ames Research Center centrifuge is described. Anticipating potentially deleterious effects of physiological deconditioning from orbital living (simulated here by 10 days of enforced bedrest) upon a shuttle pilot's ability to manually control his aircraft (should that be necessary in an emergency), a comprehensive battery of measurements was made roughly every half minute on eight military pilot subjects over two 20-minute reentry Gz-versus-time profiles, one peaking at 2 Gz and the other at 3 Gz. Alternate runs were made without and with g-suits to test the help or interference offered by such protective devices to manual control performance. A very demanding two-axis control task was employed, with a subcritical instability in the pitch axis to force a high attentional demand and a severe loss-of-control penalty. The results show that pilots experienced in high-Gz flying can easily handle the shuttle manual control task during 2 Gz or 3 Gz reentry profiles, provided the degree of physiological deconditioning is no more than that induced by these 10 days of enforced bedrest.

  6. ATES/heat pump simulations performed with ATESSS code

    Vail, L. W.

    1989-01-01

    Modifications to the Aquifer Thermal Energy Storage System Simulator (ATESSS) allow simulation of aquifer thermal energy storage (ATES)/heat pump systems. The heat pump algorithm requires a coefficient of performance (COP) relationship of the form COP = COP_base + alpha (T_ref - T_base). Initial applications of the modified ATESSS code to synthetic building load data for two sizes of buildings in two U.S. cities showed an insignificant performance advantage of a series ATES heat pump system over a conventional groundwater heat pump system. The addition of algorithms for a cooling tower and solar array improved performance slightly. Small values of alpha in the COP relationship are the principal reason for the limited improvement in system performance. Future studies at Pacific Northwest Laboratory (PNL) are planned to investigate methods to increase system performance using alternative system configurations and operation scenarios.
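    A minimal sketch of that linear COP model makes the stated limitation easy to see; the parameter values below are illustrative assumptions, not figures from the study.

        # Linear heat-pump model from the text: COP = COP_base + alpha * (T_ref - T_base).
        def heat_pump_cop(t_ref, cop_base=3.0, t_base=10.0, alpha=0.01):
            """Return the COP for a source temperature t_ref (degrees C)."""
            return cop_base + alpha * (t_ref - t_base)

        # With a small alpha, even a 20-degree rise in source temperature from the
        # ATES store barely moves the COP -- the effect the study identifies.
        for t_ref in (10.0, 20.0, 30.0):
            print(f"T_ref = {t_ref:4.1f} C  ->  COP = {heat_pump_cop(t_ref):.2f}")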

  7. Simulation and performance of brushless dc motor actuators

    Gerba, A., Jr.

    1985-12-01

    The simulation model for a brushless DC motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made, both on time-response waveforms and on average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and for testing of a motor-driven positioning device for model evaluation are outlined.

  8. Improving UV Resistance of High Performance Fibers

    Hassanin, Ahmed

    % rutile TiO2 nanoparticles showed excellent protection of the braid from PBO; only 7.5% strength loss was observed. To optimize the degree of protection of the sheath loaded with UV-blocker particles, computational models were developed to optimize the protective layer thickness/weight and the amount of UV particles that provide the maximum protection with the lightest protective layer and the minimum amount of UV particles. The simulated results were found to be higher than the experimental results due to the tendency of nanoparticles to agglomerate in real experiments. The third approach to achieving maximum protection with minimum added weight is constructing a sleeve of woven fabric from SpectraRTM (ultra-high molecular weight polyethylene (UHMWPE) high performance fiber), which is known to resist UV. Covering the PBO-fiber braid with SpectraRTM woven fabric provides a hybrid structure with two compatible components that can share the load and thus maintain the high strength-to-weight ratio. Although the SpectraRTM fabric had maximum cover factor, 20% of visible light and about 15% of UV were able to penetrate the fabric. This transmittance of UV-VIS light negatively affected the protective performance of the SpectraRTM woven fabric layer. It is suggested that the SpectraRTM fabric be coated with a thin layer (mentioned earlier) containing UV blocker for additional protection while maintaining its strength contribution to the hybrid structure. To maximize the strength-to-weight ratio of the hybrid structure (with a core of PBO braid and a sheath of SpectraRTM woven fabric), an established finite element model was utilized. The theoretical results using finite element theory indicated that, by controlling the bending rigidity of the filling yarn of the SpectraRTM fabric, the extension at peak load of the woven fabric in the warp direction (loading direction) could be controlled to match the braid extension at peak load. The match in the extension at peak load of the two

  9. Cognitive load predicts point-of-care ultrasound simulator performance.

    Aldekhyl, Sara; Cavalcanti, Rodrigo B; Naismith, Laura M

    2018-02-01

    The ability to maintain good performance with low cognitive load is an important marker of expertise. Incorporating cognitive load measurements in the context of simulation training may help to inform judgements of competence. This exploratory study investigated relationships between demographic markers of expertise, cognitive load measures, and simulator performance in the context of point-of-care ultrasonography. Twenty-nine medical trainees and clinicians at the University of Toronto with a range of clinical ultrasound experience were recruited. Participants answered a demographic questionnaire then used an ultrasound simulator to perform targeted scanning tasks based on clinical vignettes. Participants were scored on their ability to both acquire and interpret ultrasound images. Cognitive load measures included participant self-report, eye-based physiological indices, and behavioural measures. Data were analyzed using a multilevel linear modelling approach, wherein observations were clustered by participants. Experienced participants outperformed novice participants on ultrasound image acquisition. Ultrasound image interpretation was comparable between the two groups. Ultrasound image acquisition performance was predicted by level of training, prior ultrasound training, and cognitive load. There was significant convergence between cognitive load measurement techniques. A marginal model of ultrasound image acquisition performance including prior ultrasound training and cognitive load as fixed effects provided the best overall fit for the observed data. In this proof-of-principle study, the combination of demographic and cognitive load measures provided more sensitive metrics to predict ultrasound simulator performance. Performance assessments which include cognitive load can help differentiate between levels of expertise in simulation environments, and may serve as better predictors of skill transfer to clinical practice.

  10. Noise Simulations of the High-Lift Common Research Model

    Lockard, David P.; Choudhari, Meelan M.; Vatsa, Veer N.; O'Connell, Matthew D.; Duda, Benjamin; Fares, Ehab

    2017-01-01

    The PowerFLOW™ code has been used to perform numerical simulations of the high-lift version of the Common Research Model (HL-CRM) that will be used for experimental testing of airframe noise. Time-averaged surface pressure results from PowerFLOW™ are found to be in reasonable agreement with those from steady-state computations using FUN3D. Surface pressure fluctuations are highest around the slat break and nacelle/pylon region, and synthetic array beamforming results also indicate that this region is the dominant noise source on the model. The gap between the slat and pylon on the HL-CRM is not realistic for modern aircraft, and most nacelles include a chine that is absent in the baseline model. To account for those effects, additional simulations were completed with a chine and with the slat extended into the pylon. The case with the chine was nearly identical to the baseline, and the slat extension resulted in higher surface pressure fluctuations but slightly reduced radiated noise. The full-span slat geometry without the nacelle/pylon was also simulated and found to be around 10 dB quieter than the baseline over almost the entire frequency range. The current simulations are still considered preliminary as changes in the radiated acoustics are still being observed with grid refinement, and additional simulations with finer grids are planned.

  11. High thermoelectric performance of graphite nanofibers.

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2018-02-22

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications thanks to the weak interlayer van der Waals interaction, which induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, necessary ingredients for achieving high thermoelectric performance. This study unveils that the platelet form of GNFs, in which graphite layers are perpendicular to the fiber axis, can exhibit outstanding thermoelectric properties, with a figure of merit ZT reaching 3.55 in a 0.5 nm diameter fiber and 1.1 in a 1.1 nm diameter one. Interestingly, by introducing 14C isotope doping, ZT can even be enhanced to more than 5, and to more than 8 if we include the effect of finite phonon mean free path, which demonstrates the remarkable thermoelectric potential of GNFs.
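    For reference, the dimensionless figure of merit quoted here is

        $$ ZT \;=\; \frac{S^{2}\,\sigma\,T}{\kappa_{e} + \kappa_{ph}}, $$

    where S is the Seebeck coefficient, σ the electrical conductivity, T the absolute temperature, and κ_e, κ_ph the electronic and phononic thermal conductivities. The weak interlayer van der Waals coupling suppresses κ_ph, which is how the platelet GNFs reach the large ZT values reported.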

  12. High performance carbon nanocomposites for ultracapacitors

    Lu, Wen

    2012-10-02

    The present invention relates to composite electrodes for electrochemical devices, particularly to carbon nanotube composite electrodes for high performance electrochemical devices, such as ultracapacitors.

  13. Strategies and Experiences Using High Performance Fortran

    Shires, Dale

    2001-01-01

    .... High Performance Fortran (HPF) is a relatively new addition to the Fortran dialect. It is an attempt to provide an efficient, high-level Fortran parallel programming language for the latest generation of parallel computers, though its success to date has been debatable...

  14. Simulations of dimensionally reduced effective theories of high temperature QCD

    Hietanen, Ari

    Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons such as protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion, where some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c dependence of the coefficient. The quark number susceptibility is studied by perf...

  15. High-performance computing; Informatica del alto rendimiento

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of innovation, drastically reducing waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, and the simulation of complex processes in a wide variety of industries. (Author)

  16. High Performance Grinding and Advanced Cutting Tools

    Jackson, Mark J

    2013-01-01

    High Performance Grinding and Advanced Cutting Tools discusses the fundamentals and advances in high performance grinding processes, and provides a complete overview of newly-developing areas in the field. Topics covered are grinding tool formulation and structure, grinding wheel design and conditioning and applications using high performance grinding wheels. Also included are heat treatment strategies for grinding tools, using grinding tools for high speed applications, laser-based and diamond dressing techniques, high-efficiency deep grinding, VIPER grinding, and new grinding wheels.

  17. Strategy Guideline: High Performance Residential Lighting

    Holton, J.

    2012-02-01

    The Strategy Guideline: High Performance Residential Lighting has been developed to provide a tool for the understanding and application of high performance lighting in the home. The high performance lighting strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner's expectations for high quality lighting.

  18. Simulated astigmatism impairs academic-related performance in children.

    Narayanasamy, Sumithira; Vincent, Stephen J; Sampson, Geoff P; Wood, Joanne M

    2015-01-01

    Astigmatism is an important refractive condition in children. However, the functional impact of uncorrected astigmatism in this population is not well established, particularly with regard to academic performance. This study investigated the impact of simulated bilateral astigmatism on academic-related tasks before and after sustained near work in children. Twenty visually normal children (mean age: 10.8 ± 0.7 years; six males and 14 females) completed a range of standardised academic-related tests with and without 1.50 D of simulated bilateral astigmatism (with both the academic-related tests and the visual condition administered in a randomised order). The simulated astigmatism was induced using a positive cylindrical lens while maintaining a plano spherical equivalent. Performance was assessed before and after 20 min of sustained near work, during two separate testing sessions. Academic-related measures included a standardised reading test (the Neale Analysis of Reading Ability), visual information processing tests (Coding and Symbol Search subtests from the Wechsler Intelligence Scale for Children) and a reading-related eye movement test (the Developmental Eye Movement test). Each participant was systematically assigned either with-the-rule (WTR, axis 180°) or against-the-rule (ATR, axis 90°) simulated astigmatism to evaluate the influence of axis orientation on any decrements in performance. Reading, visual information processing and reading-related eye movement performance were all significantly impaired by simulated bilateral astigmatism (p < 0.05). Simulated astigmatism led to a reduction of between 5% and 12% in performance across the academic-related outcome measures, but there was no significant effect of the axis (WTR or ATR) of astigmatism (p > 0.05). Simulated bilateral astigmatism impaired children's performance on a range of academic-related outcome measures irrespective of the orientation of the astigmatism. These findings have

  19. Carbon nanomaterials for high-performance supercapacitors

    Tao Chen; Liming Dai

    2013-01-01

    Owing to their high energy density and power density, supercapacitors exhibit great potential as high-performance energy sources for advanced technologies. Recently, carbon nanomaterials (especially carbon nanotubes and graphene) have been widely investigated as effective electrodes in supercapacitors due to their high specific surface area and excellent electrical and mechanical properties. This article summarizes recent progress on the development of high-performance supercapacitors bas...

  20. High Fidelity Simulation of Primary Atomization in Diesel Engine Sprays

    Ivey, Christopher; Bravo, Luis; Kim, Dokyun

    2014-11-01

    A high-fidelity numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at ambient conditions has been performed. A full understanding of the primary atomization process in diesel fuel injection has not been achieved, for several reasons including the difficulty of accessing the optically dense region. Due to recent advances in numerical methods and computing resources, high-fidelity simulations of atomizing flows are becoming available to provide new insights into the process. In the present study, an unstructured, un-split volume-of-fluid (VoF) method coupled to a stochastic Lagrangian spray model is employed to simulate the atomization process. A common-rail fuel injector is simulated using a nozzle geometry available through the Engine Combustion Network. The working conditions correspond to a single-orifice (90 μm) JP-8 fueled injector operating at an injection pressure of 90 bar, with ambient conditions of 29 bar and 300 K in 100% nitrogen, and Re_l = 16,071 and We_l = 75,334, setting the spray in the full atomization mode. The experimental dataset from the Army Research Lab is used for validation in terms of global spray parameters and local droplet distributions. The quantitative comparison will be presented and discussed. Supported by Oak Ridge Associated Universities and the Army Research Laboratory.

  1. Measuring cognitive load: performance, mental effort and simulation task complexity.

    Haji, Faizal A; Rojas, David; Childs, Ruth; de Ribaupierre, Sandrine; Dubrowski, Adam

    2015-08-01

    Interest in applying cognitive load theory in health care simulation is growing. This line of inquiry requires measures that are sensitive to changes in cognitive load arising from different instructional designs. Recently, mental effort ratings and secondary task performance have shown promise as measures of cognitive load in health care simulation. We investigate the sensitivity of these measures to predicted differences in intrinsic load arising from variations in task complexity and learner expertise during simulation-based surgical skills training. We randomly assigned 28 novice medical students to simulation training on a simple or complex surgical knot-tying task. Participants completed 13 practice trials, interspersed with computer-based video instruction. On trials 1, 5, 9 and 13, knot-tying performance was assessed using time and movement efficiency measures, and cognitive load was assessed using subjective rating of mental effort (SRME) and simple reaction time (SRT) on a vibrotactile stimulus-monitoring secondary task. Significant improvements in knot-tying performance (F(1.04,24.95) = 41.1, p < 0.001) and significant reductions in cognitive load (F(2.3,58.5) = 57.7, p < 0.001) were observed over the practice trials, indicating that both measures are sensitive to changes in intrinsic load among novices engaged in simulation-based learning. These measures can be used to track cognitive load during skills training. Mental effort ratings are also sensitive to small differences in intrinsic load arising from variations in the physical complexity of a simulation task. The complementary nature of these subjective and objective measures suggests their combined use is advantageous in simulation instructional design research. © 2015 John Wiley & Sons Ltd.

  2. Computational Fluid Dynamics and Building Energy Performance Simulation

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance simulation program requires a detailed description of the energy flow in the air movement, which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three zones connected by open areas with pressure- and buoyancy-driven air flow. The two programs are interconnected in an iterative procedure. The paper also shows an evaluation of the air quality in the main area of the buildings based on CFD predictions. It is shown that an interconnection between a CFD...

  3. Numerical Simulation and Performance Analysis of Twin Screw Air Compressors

    W. S. Lee

    2001-01-01

    A theoretical model is proposed in this paper to study the performance of oil-less and oil-injected twin screw air compressors. Based on this model, a computer simulation program is developed, and the effects of different design parameters, including rotor profile, geometric clearance, oil-injection angle, oil temperature, oil flow rate, built-in volume ratio, and other operating conditions, on the performance of twin screw air compressors are investigated. The simulation program outputs variables such as specific power, compression ratio, compression efficiency, volumetric efficiency, and discharge temperature. Some of these results are then compared with experimentally measured data, and good agreement is found between the simulation results and the measured data.
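    The abstract does not define its output metrics; under the usual conventions for positive-displacement compressors, they are

        $$ \eta_{v} \;=\; \frac{\dot m_{\mathrm{actual}}}{\rho_{\mathrm{in}}\,V_{\mathrm{disp}}\,N}, \qquad \text{specific power} \;=\; \frac{P_{\mathrm{shaft}}}{\dot V_{\mathrm{delivered}}}, $$

    with ρ_in the inlet gas density, V_disp the displaced volume per revolution, and N the rotational speed. These are the standard textbook definitions, not necessarily the paper's own.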

  4. MAPPS (Maintenance Personnel Performance Simulation): a computer simulation model for human reliability analysis

    Knee, H.E.; Haas, P.M.

    1985-01-01

    A computer model has been developed, sensitivity tested, and evaluated that is capable of generating reliable estimates of human performance measures in the nuclear power plant (NPP) maintenance context. The model, entitled MAPPS (Maintenance Personnel Performance Simulation), is of the simulation type and is task-oriented. It addresses a number of person-machine, person-environment, and person-person variables and is capable of providing the user with a rich spectrum of important performance measures, including the mean time for successful task performance by a maintenance team and the maintenance team's probability of task success. These two measures are particularly important as input to probabilistic risk assessment (PRA) studies, which were the primary impetus for the development of MAPPS. The simulation nature of the model, along with its generous input parameters and output variables, allows its usefulness to extend beyond PRA input.

  5. Visuospatial ability factors and performance variables in laparoscopic simulator training

    Luursema, J.M.; Verwey, Willem B.; Burie, Remke

    2012-01-01

    Visuospatial ability has been shown to be important to several aspects of laparoscopic performance, including simulator training. Only a limited subset of visuospatial ability factors, however, has been investigated in such studies. Tests for different visuospatial ability factors differ in stimulus

  6. Atomic scale simulations for improved CRUD and fuel performance modeling

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  7. Importance of debriefing in high-fidelity simulations

    Igor Karnjuš

    2014-04-01

    Debriefing has been identified as one of the most important parts of a high-fidelity simulation learning process. During debriefing, the mentor invites learners to critically assess the knowledge and skills used during the execution of a scenario. Despite the abundance of studies that have examined simulation-based education, debriefing is still poorly defined. The present article examines the essential features of debriefing, its phases, techniques and methods through a systematic review of recent publications. It emphasizes the mentor's role, since the effectiveness of debriefing largely depends on the mentor's skill in conducting it. Guidelines that allow mentors to evaluate their own performance in conducting debriefing are also presented. We underline the importance of debriefing in clinical settings as part of a continuous learning process. Debriefing allows medical teams to assess their performance and develop new strategies to achieve higher competencies. Although debriefing is the cornerstone of the high-fidelity simulation learning process, it also represents an important learning strategy in the clinical setting. Many important aspects of debriefing are still poorly explored and understood; therefore, this part of the learning process should be given greater attention in the future.

  8. Simulation study of the high intensity S-Band photoinjector

    Zhu, Xiongwei; Nakajima, Kazuhisa [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan)

    2001-10-01

    In this paper, we report the results of a simulation study of the high-intensity S-band photoinjector. The aim of the simulation study is to transport a high bunch charge with low emittance growth. The simulation result shows that a 7 nC bunch with an rms emittance of 22.3 π mm·mrad can be obtained at the exit of the photoinjector. (author)

  9. Simulation study of the high intensity S-Band photoinjector

    Zhu, Xiongwei; Nakajima, Kazuhisa

    2001-01-01

    In this paper, we report the results of a simulation study of the high-intensity S-band photoinjector. The aim of the simulation study is to transport a high bunch charge with low emittance growth. The simulation result shows that a 7 nC bunch with an rms emittance of 22.3 π mm·mrad can be obtained at the exit of the photoinjector. (author)

  10. A Simulation-Based Investigation of High-Latency Space Systems Operations

    Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael

    2017-01-01

    NASA's human space program has developed considerable experience with near Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve missions with durations measured in months and communication latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these types of mission durations and communication latencies on vehicle design, mission design, and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission, between July 18 and August 3, 2016. The NEEMO
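    The core mechanism of the mission-control interface described above, a transport delay on every message, can be illustrated with a small sketch (hypothetical code, not NASA's simulation tool; the 15-minute delay matches the maximum quoted above).

        import heapq

        class DelayedChannel:
            """One-way message channel with a fixed transport delay."""
            def __init__(self, delay_s):
                self.delay_s = delay_s
                self._queue = []  # min-heap of (arrival_time, message)

            def send(self, t_now, message):
                heapq.heappush(self._queue, (t_now + self.delay_s, message))

            def receive(self, t_now):
                """Return every message whose transport delay has elapsed."""
                out = []
                while self._queue and self._queue[0][0] <= t_now:
                    out.append(heapq.heappop(self._queue)[1])
                return out

        downlink = DelayedChannel(delay_s=15 * 60)   # 15-minute one-way delay
        downlink.send(0.0, "cabin pressure nominal")
        print(downlink.receive(10 * 60))             # [] -- still in transit
        print(downlink.receive(15 * 60))             # ['cabin pressure nominal']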

  11. Team Development for High Performance Management.

    Schermerhorn, John R., Jr.

    1986-01-01

    The author examines a team development approach to management that creates shared commitments to performance improvement by focusing the attention of managers on individual workers and their task accomplishments. It uses the "high-performance equation" to help managers confront shared beliefs and concerns about performance and develop realistic…

  12. High Fidelity Ion Beam Simulation of High Dose Neutron Irradiation

    Was, Gary; Wirth, Brian; Motta, Athur; Morgan, Dane; Kaoumi, Djamel; Hosemann, Peter; Odette, Robert

    2018-04-30

    Project Objective: The objective of this proposal is to demonstrate the capability to predict the evolution of microstructure and properties of structural materials in-reactor and at high doses, using ion irradiation as a surrogate for reactor irradiations. “Properties” includes both physical properties (irradiated microstructure) and the mechanical properties of the material. Demonstration of the capability to predict properties has two components. One is ion irradiation of a set of alloys to yield an irradiated microstructure and corresponding mechanical behavior that are substantially the same as results from neutron exposure in the appropriate reactor environment. Second is the capability to predict the irradiated microstructure and corresponding mechanical behavior on the basis of improved models, validated against both ion and reactor irradiations and verified against ion irradiations. Taken together, achievement of these objectives will yield an enhanced capability for simulating the behavior of materials in reactor irradiations

  13. Enabling high performance computational science through combinatorial algorithms

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  14. Enabling high performance computational science through combinatorial algorithms

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  15. Delivering high performance BWR fuel reliably

    Schardt, J.F. [GE Nuclear Energy, Wilmington, NC (United States)

    1998-07-01

    Utilities are under intense pressure to reduce their production costs in order to compete in the increasingly deregulated marketplace. They need fuel, which can deliver high performance to meet demanding operating strategies. GE's latest BWR fuel design, GE14, provides that high performance capability. GE's product introduction process assures that this performance will be delivered reliably, with little risk to the utility. (author)

  16. HPTA: High-Performance Text Analytics

    Vandierendonck, Hans; Murphy, Karen; Arif, Mahwish; Nikolopoulos, Dimitrios S.

    2017-01-01

    One of the main targets of data analytics is unstructured data, which primarily involves textual data. High-performance processing of textual data is non-trivial. We present the HPTA library for high-performance text analytics. The library helps programmers to map textual data to a dense numeric representation, which can be handled more efficiently. HPTA encapsulates three performance optimizations: (i) efficient memory management for textual data, (ii) parallel computation on associative dat...
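    As a toy illustration of the mapping the library performs (a hypothetical sketch, not the HPTA API), a collection of documents can be turned into dense term-frequency vectors:

        from collections import Counter

        docs = ["high performance text analytics",
                "high performance computing"]

        # Build a fixed vocabulary, then one dense count vector per document.
        vocab = sorted({word for doc in docs for word in doc.split()})
        matrix = [[Counter(doc.split()).get(word, 0) for word in vocab]
                  for doc in docs]

        # Each row is a dense numeric vector that a downstream kernel can
        # process far more efficiently than raw strings.
        for row in matrix:
            print(row)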

  17. High-performance liquid chromatography - Ultraviolet method for the determination of total specific migration of nine ultraviolet absorbers in food simulants based on 1,1,3,3-Tetramethylguanidine and organic phase anion exchange solid phase extraction to remove glyceride.

    Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun

    2016-06-17

    The glyceride in oil food simulant usually causes serious interference with target analytes and impairs the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for the determination of the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride in olive oil simulant. In contrast to normal ion exchange carried out in an aqueous solution or aqueous phase environment, the OPAE SPE was performed in an organic phase environment, and the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil with aqueous solution could be readily omitted. The method was shown to have good linearity (r ≥ 0.99992), precision (intra-day RSD ≤ 3.3%), and accuracy (91.0% ≤ recoveries ≤ 107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers both in aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples, and UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in PE samples in olive oil simulant. In addition, the OPAE SPE procedure has also been applied to efficiently enrich or purify seven antioxidants in olive oil simulant. Results indicate that this procedure will have more extensive applications in the enrichment or purification of extremely weak acidic compounds with phenol hydroxyl groups that are relatively stable in TMG n-hexane solution and that can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Simulations of backgate sandwich nanowire MOSFETs with improved device performance

    Zhao Hengliang; Zhu Huilong; Zhong Jian; Ma Xiaolong; Wei Xing; Zhao Chao; Chen Dapeng; Ye Tianchun

    2014-01-01

    We propose a novel backgate sandwich nanowire MOSFET (SNFET), which offers the advantages of ETSOI (dynamic backgate voltage controllability) and of nanowire FETs (good short-channel behavior). A backgate is used for threshold voltage (Vt) control of the SNFET. Compared with a backgate FinFET with a punch-through stop layer (PTSL), the SNFET possesses improved device performance. 3D device simulations indicate that the SNFET has a three times larger overdrive current, a ∼75% smaller off-state leakage current, and a smaller subthreshold swing (SS) and DIBL than a backgate FinFET when the nanowire (NW) and the fin are of equal width. A new process flow to fabricate the backgate SNFET is also proposed in this work. Our analytical model suggests that Vt control by the backgate can be attributed to the capacitances formed by the frontgate, NW, and backgate. The SNFET devices are compatible with the latest state-of-the-art high-k/metal gate CMOS technology, with the unique capability of independent backgate control for nFETs and pFETs, which is promising for sub-22 nm scaling. (semiconductor devices)

  19. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of performing simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
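    As a toy instance of the Monte Carlo class of methods listed above (illustrative only, not code from the paper), an expectation value is estimated by averaging over random samples, with a statistical error that shrinks as 1/sqrt(N):

        import math
        import random

        def mc_estimate(n_samples, seed=42):
            """Monte Carlo estimate of the integral of exp(-x^2) over [0, 1]."""
            rng = random.Random(seed)
            total = sum(math.exp(-rng.uniform(0.0, 1.0) ** 2)
                        for _ in range(n_samples))
            return total / n_samples

        for n in (10**3, 10**5):
            print(n, mc_estimate(n))   # converges to ~0.7468 as n grows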

  20. Facility/equipment performance evaluation using microcomputer simulation analysis

    Chockie, A.D.; Hostick, C.J.

    1985-08-01

    A computer simulation analysis model was developed at the Pacific Northwest Laboratory to assist in assuring the adequacy of the Monitored Retrievable Storage facility design to meet the specified spent nuclear fuel throughput requirements. The microcomputer-based model was applied to the analysis of material flow, equipment capability and facility layout. The simulation analysis evaluated uncertainties concerning both facility throughput requirements and process duration times as part of the development of a comprehensive estimate of facility performance. The evaluations provided feedback into the design review task to identify areas where design modifications should be considered

  1. Imaging Performance Analysis of Simbol-X with Simulations

    Chauvin, M.; Roques, J. P.

    2009-05-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on the ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using the simulation tool we have developed, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  2. Imaging Performance Analysis of Simbol-X with Simulations

    Chauvin, M.; Roques, J. P.

    2009-01-01

    Simbol-X is an X-ray telescope operating in formation flight. This means that its optical performance will strongly depend on the drift of the two spacecraft and on the ability to measure these drifts for image reconstruction. We built a dynamical ray-tracing code to study the impact of these parameters on the optical performance of Simbol-X (see Chauvin et al., these proceedings). Using the simulation tool we have developed, we have conducted detailed analyses of the impact of different parameters on the imaging performance of the Simbol-X telescope.

  3. Computer simulations of high pressure systems

    Wilkins, M.L.

    1977-01-01

    Numerical methods are capable of solving very difficult problems in solid mechanics and gas dynamics. In the design of engineering structures, critical decisions are possible if the behavior of materials is correctly described in the calculation. Problems of current interest require accurate analysis of stress-strain fields that range from very small elastic displacement to very large plastic deformation. A finite difference program is described that solves problems over this range and in two and three space-dimensions and time. A series of experiments and calculations serve to establish confidence in the plasticity formulation. The program can be used to design high pressure systems where plastic flow occurs. The purpose is to identify material properties, strength and elongation, that meet the operating requirements. An objective is to be able to perform destructive testing on a computer rather than on the engineering structure. Examples of topical interest are given
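    The explicit time-stepping idea behind such finite-difference programs can be shown in one space dimension; the toy linear wave equation below is an illustration under simple assumptions (fixed boundaries, CFL-stable step), not the program described.

        import math

        nx, nt = 101, 300
        dx, c = 1.0, 1.0
        dt = 0.5 * dx / c                  # CFL-stable time step (Courant number 0.5)

        # Gaussian displacement pulse, initially at rest.
        u_prev = [math.exp(-0.01 * (i - 50) ** 2) for i in range(nx)]
        u_curr = u_prev[:]

        for _ in range(nt):
            u_next = u_curr[:]             # endpoints stay fixed (clamped boundary)
            for i in range(1, nx - 1):
                u_next[i] = (2 * u_curr[i] - u_prev[i]
                             + (c * dt / dx) ** 2
                             * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
            u_prev, u_curr = u_curr, u_next

        print(max(u_curr))                 # the pulse splits and propagates outward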

  4. High fidelity simulation effectiveness in nursing students' transfer of learning.

    Kirkman, Tera R

    2013-07-13

    Members of nursing faculty are utilizing interactive teaching tools to improve nursing students' clinical judgment; one method that has been found to be potentially effective is high-fidelity simulation (HFS). The purpose of this time-series design study was to determine whether undergraduate nursing students were able to transfer knowledge and skills learned from classroom lecture and an HFS clinical to the traditional clinical setting. Students (n = 42) were observed and rated on their ability to perform a respiratory assessment. The observations and ratings took place at the bedside prior to a respiratory lecture, following the respiratory lecture, and following the simulation clinical. The findings indicated a significant difference (p < 0.001) in transfer of learning over time. Transfer of learning was demonstrated, and the use of HFS was found to be an effective learning and teaching method. Implications of the results are discussed.

  4. Strategy Guideline. Partnering for High Performance Homes

    Prahl, Duncan [IBACOS, Inc., Pittsburgh, PA (United States)]

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to mature to the next level and expand to all members of the project team, including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. This guide is intended for use by all parties associated with the design and construction of high performance homes. It serves as a starting point and features initial tools and resources for teams to collaborate to continually improve the energy efficiency and durability of new houses.

  5. Neurocognitive Correlates of Young Drivers' Performance in a Driving Simulator.

    Guinosso, Stephanie A; Johnson, Sara B; Schultheis, Maria T; Graefe, Anna C; Bishai, David M

    2016-04-01

    Differences in neurocognitive functioning may contribute to driving performance among young drivers. However, few studies have examined this relation. This pilot study investigated whether common neurocognitive measures were associated with driving performance among young drivers in a driving simulator. Young drivers (mean age 19.8 years, SD = 1.9; N = 74) completed a battery of neurocognitive assessments measuring general intellectual capacity (Full-Scale Intelligence Quotient, FSIQ) and executive functioning, including the Stroop Color-Word Test (cognitive inhibition), Wisconsin Card Sort Test-64 (cognitive flexibility), and Attention Network Task (alerting, orienting, and executive attention). Participants then drove in a simulated vehicle under two conditions: a baseline drive and a driving challenge. During the driving challenge, participants completed a verbal working memory task to increase demand on executive attention. Multiple regression models were used to evaluate the relations between the neurocognitive measures and driving performance under the two conditions. FSIQ, cognitive inhibition, and alerting were associated with better driving performance at baseline. FSIQ and cognitive inhibition were also associated with better driving performance during the verbal challenge. Measures of cognitive flexibility, orienting, and conflict executive control were not associated with driving performance under either condition. FSIQ and, to some extent, measures of executive function are associated with driving performance in a driving simulator. Further research is needed to determine whether executive function is associated with more advanced driving performance under conditions that demand greater cognitive load.
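
    The analysis pattern described, regressing a driving-performance score on several neurocognitive predictors, looks roughly like the sketch below. The data are synthetic stand-ins generated for illustration; only the column names echo the measures above.

      import numpy as np

      # Synthetic stand-in data; coefficients below are invented, not the study's.
      rng = np.random.default_rng(0)
      n = 74
      fsiq   = rng.normal(100, 15, n)          # Full-Scale IQ
      stroop = rng.normal(50, 10, n)           # cognitive inhibition score
      alert  = rng.normal(30, 8, n)            # ANT alerting score
      perf   = 0.02 * fsiq + 0.03 * stroop + 0.01 * alert + rng.normal(0, 1, n)

      # Ordinary least squares with an intercept column.
      X = np.column_stack([np.ones(n), fsiq, stroop, alert])
      beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
      pred = X @ beta
      r2 = 1 - ((perf - pred) ** 2).sum() / ((perf - perf.mean()) ** 2).sum()
      print("coefficients (intercept, FSIQ, Stroop, alerting):", np.round(beta, 3))
      print(f"R^2 = {r2:.2f}")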

  6. Numerical simulation of realistic high-temperature superconductors

    1997-01-01

    One of the main obstacles in the development of practical high-temperature superconducting (HTS) materials is dissipation, caused by the motion of magnetic flux quanta called vortices. Numerical simulations provide a promising new approach for studying these vortices. By exploiting the extraordinary memory and speed of massively parallel computers, researchers can obtain the extremely fine temporal and spatial resolution needed to model complex vortex behavior. The results may help identify new mechanisms to increase the current-carrying capability of HTS materials and to predict the performance characteristics of materials intended for industrial applications.
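
    A minimal sketch of vortex-dynamics simulation, assuming overdamped Langevin motion of point vortices in two dimensions: a driving force from the transport current competes with randomly placed pinning wells, and the mean drift velocity is a proxy for dissipation. All parameters are in reduced units and invented for illustration; vortex-vortex repulsion is omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(3)
      nv, npin, L = 40, 60, 20.0
      pos  = rng.uniform(0, L, (nv, 2))          # vortex positions
      pins = rng.uniform(0, L, (npin, 2))        # pinning sites
      f_drive, f_pin, r_pin, dt, temp = 0.10, 0.30, 0.4, 0.05, 0.02

      drift = 0.0
      for _ in range(2000):
          f = np.zeros_like(pos)
          f[:, 0] += f_drive                     # Lorentz-like force along x
          d = pos[:, None, :] - pins[None, :, :] # vortex-to-pin vectors
          r = np.linalg.norm(d, axis=2)
          near = r < r_pin                       # inside a pinning well?
          # attractive pinning force toward each nearby pin
          f -= f_pin * (d * near[:, :, None]
                        / np.maximum(r, 1e-9)[:, :, None]).sum(1)
          # thermal noise for an overdamped Langevin step
          f += rng.normal(0, np.sqrt(2 * temp / dt), pos.shape)
          pos = (pos + dt * f) % L               # periodic box
          drift += dt * f[:, 0].mean()

      print(f"mean vortex drift velocity along drive: "
            f"{drift / (2000 * dt):.3f} (reduced units)")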

  7. High Sodium Simulant Testing To Support SB8 Sludge Preparation

    Newell, J. D.

    2012-01-01

    Scoping studies were completed for high sodium simulant SRAT/SME cycles to determine any impact on CPC processing. Two SRAT/SME cycles were performed with a simulant having a sodium supernate concentration of 1.9 M at 130% and 100% of the Koopman Minimum Acid (KMA) requirement. Both of these failed to meet DWPF processing objectives related to nitrite destruction and hydrogen generation. Another set of SRAT/SME cycles was performed with a simulant having a sodium supernate concentration of 1.6 M at 130%, 125%, 110%, and 100% of the Koopman Minimum Acid requirement. Only the run at 110% met DWPF processing objectives. Neither simulant had a stoichiometric factor window of 30% between nitrite destruction and excessive hydrogen generation. Based on the 2M-110 results, it was anticipated that the 2.5M stoichiometric window for processing would likely be smaller than 110-130%, since it appeared that the KMA factor would need to be increased by at least 10% above the minimum calculated requirement to achieve nitrite destruction, due to the high oxalate content. The 2.5M-130 run exceeded the DWPF hydrogen limits in both the SRAT and SME cycles, so testing of this wash endpoint was halted. This wash endpoint, with this minimum acid requirement and mercury-noble metal concentration profile, appears to be something DWPF should not process, due to an overly narrow window of stoichiometry. The 2M case was potentially processable in DWPF, but modifications would likely be needed, such as occasionally accepting SRAT batches with undestroyed nitrite for further acid addition and reprocessing, running near the bottom of the as yet ill-defined window of allowable stoichiometric factors, potentially extending the SRAT cycle to burn off unreacted formic acid before transferring to the SME cycle, and eliminating formic acid additions in the frit slurry.

  8. Software life cycle dynamic simulation model: The organizational performance submodel

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The focus here is on the organizational response submodel, which simulates the performance characteristics of a software development effort subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel, which exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.
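
    As a flavor of what parameterized differential equations governing product, staffing, and funding levels can look like, the toy system below integrates three coupled rate equations with a forward Euler step. The rate constants and the staffing policy are invented for illustration and are not the paper's parameterization.

      # Toy system-dynamics model: product completed, staff level, funds remaining.
      def step(state, dt, productivity=0.02, hire_rate=0.05, burn=1.0e3):
          product, staff, funds = state
          target_staff = 20.0 if funds > 0 else 0.0
          d_product = productivity * staff              # product flow from staffing
          d_staff = hire_rate * (target_staff - staff)  # relaxation toward target
          d_funds = -burn * staff                       # funding consumed by staff
          return (product + dt * d_product,
                  staff + dt * d_staff,
                  max(funds + dt * d_funds, 0.0))

      state = (0.0, 2.0, 1.0e6)   # initial product, staff, funding
      t, dt = 0.0, 1.0            # time in days
      while state[2] > 0 and state[0] < 10.0:
          state = step(state, dt)
          t += dt
      print(f"t = {t:.0f} days: product = {state[0]:.1f} units, "
            f"staff = {state[1]:.1f}, funds = ${state[2]:,.0f}")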

  9. Numerical simulation investigation on centrifugal compressor performance of turbocharger

    Li, Jie; Yin, Yuting; Li, Shuqi; Zhang, Jizhong

    2013-01-01

    In this paper, a mathematical model of the flow field in the centrifugal compressor of a turbocharger was studied. Based on the theory of computational fluid dynamics (CFD), performance curves and parameter distributions of the compressor were obtained from 3-D numerical simulation using CFX. Meanwhile, the influences of grid number and grid distribution on compressor performance were investigated, and the numerical calculation method was analyzed and validated against test data. The results show that increasing the grid number has little influence on compressor performance once the single-passage grid exceeds 300,000 cells. The results also show that the calculated mass flow rate at compressor choke agrees well with test results, and that the maximum difference in diffuser exit pressure between simulation and experiment decreases to 3.5% under the assumption of a 6 kPa additional total pressure loss at the compressor inlet. The numerical simulation method in this paper can be used to predict compressor performance; the difference in total pressure ratio between calculation and test is less than 7%, and the total-to-total efficiency also agrees well with the test data.

  10. Adaptive Performance-Constrained in Situ Visualization of Atmospheric Simulations

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard; Peterka, Tom; Orf, Leigh; Rahmani, Lokman; Antoniu, Gabriel; Bouge, Luc

    2016-09-12

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5x speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
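
    One plausible reading of scoring blocks of data with information theory is the sketch below: each block of a gridded field is scored by the Shannon entropy of its value histogram, and only the highest-scoring fraction is forwarded to the visualization pipeline. The block size, histogram bins, and 25% budget are illustrative assumptions, not the paper's settings.

      import numpy as np

      def block_entropy(block, bins=32):
          # Shannon entropy of the block's value histogram, in bits.
          hist, _ = np.histogram(block, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return -(p * np.log2(p)).sum()

      rng = np.random.default_rng(7)
      field = rng.normal(size=(64, 64, 64))
      field[8:24, 8:24, 8:24] += 5.0          # an embedded "feature"

      blocks, scores = [], []
      for i in range(0, 64, 16):
          for j in range(0, 64, 16):
              for k in range(0, 64, 16):
                  blocks.append((i, j, k))
                  scores.append(block_entropy(field[i:i+16, j:j+16, k:k+16]))

      budget = 0.25                            # render only 25% of the blocks
      keep = np.argsort(scores)[::-1][: int(budget * len(blocks))]
      print(f"forwarding {len(keep)} of {len(blocks)} blocks; "
            f"highest-entropy block at origin {blocks[keep[0]]}")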

  11. Numerical simulation investigation on centrifugal compressor performance of turbocharger

    Li, Jie [China Iron and Steel Research Institute Group, Beijing (China); Yin, Yuting [China North Engine Research Institute, Datong (China); Li, Shuqi; Zhang, Jizhong [Science and Technology Diesel Engine Turbocharging Laboratory, Datong (China)]

    2013-06-15

    In this paper, a mathematical model of the flow field in the centrifugal compressor of a turbocharger was studied. Based on the theory of computational fluid dynamics (CFD), performance curves and parameter distributions of the compressor were obtained from 3-D numerical simulation using CFX. Meanwhile, the influences of grid number and grid distribution on compressor performance were investigated, and the numerical calculation method was analyzed and validated against test data. The results show that increasing the grid number has little influence on compressor performance once the single-passage grid exceeds 300,000 cells. The results also show that the calculated mass flow rate at compressor choke agrees well with test results, and that the maximum difference in diffuser exit pressure between simulation and experiment decreases to 3.5% under the assumption of a 6 kPa additional total pressure loss at the compressor inlet. The numerical simulation method in this paper can be used to predict compressor performance; the difference in total pressure ratio between calculation and test is less than 7%, and the total-to-total efficiency also agrees well with the test data.

  12. Multi-Bunch Simulations of the ILC for Luminosity Performance Studies

    White, Glen; Walker, Nicholas J

    2005-01-01

    To study the luminosity performance of the International Linear Collider (ILC) with different design parameters, a simulation was constructed that tracks a multi-bunch representation of the beam from the Damping Ring extraction through to the Interaction Point. The simulation code PLACET is used to simulate the LINAC, MatMerlin is used to track through the Beam Delivery System and GUINEA-PIG for the beam-beam interaction. Included in the simulation are ground motion and wakefield effects, intra-train fast feedback and luminosity-based feedback systems. To efficiently study multiple parameters/multiple seeds, the simulation is deployed on the Queen Mary High-Throughput computing cluster at Queen Mary, University of London, where 100 simultaneous simulation seeds can be run.

  13. High-performance ceramics. Fabrication, structure, properties

    Petzow, G.; Tobolski, J.; Telle, R.

    1996-01-01

    The program "Ceramic High-Performance Materials" pursued the objective of understanding the chain of cause and effect in the development of high-performance ceramics. This chain of problems begins with the chemical reactions for the production of powders; comprises the characterization, processing, shaping and compacting of powders, structural optimization, heat treatment, production and finishing; and leads to issues of materials testing and of design appropriate to the material. The program "Ceramic High-Performance Materials" has resulted in contributions to the understanding of fundamental interrelationships in materials science, which are summarized in the present volume, broken down into eight special aspects. (orig./RHM)

  14. High Burnup Fuel Performance and Safety Research

    Bang, Je Keun; Lee, Chan Bok; Kim, Dae Ho (and others)

    2007-03-15

    The worldwide trend in nuclear fuel development is toward high burnup, high performance fuel with high economy and safety. Because the fuel performance evaluation code INFRA is patented, and because its superior prediction of fuel performance was proven through the IAEA CRP FUMEX-II program, the code can be utilized commercially in industry. The INFRA code has been provided to and usefully applied by domestic universities and relevant institutes, and it has been used as a reference code in industry for the development of an in-house fuel rod design code.

  15. Fully Coupled Simulation of Lithium Ion Battery Cell Performance

    Trembacki, Bradley L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Murthy, Jayathi Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roberts, Scott Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Lithium-ion battery particle-scale (non-porous electrode) simulations applied to resolved electrode geometries predict localized phenomena and can lead to better informed decisions on electrode design and manufacturing. This work develops and implements a fully-coupled finite volume methodology for the simulation of the electrochemical equations in a lithium-ion battery cell. The model implementation is used to investigate 3D battery electrode architectures that offer potential energy density and power density improvements over traditional layer-by-layer particle bed battery geometries. Advancement of micro-scale additive manufacturing techniques has made it possible to fabricate these 3D electrode microarchitectures. A variety of 3D battery electrode geometries are simulated and compared across various battery discharge rates and length scales in order to quantify performance trends and investigate geometrical factors that improve battery performance. The energy density and power density of the 3D battery microstructures are compared in several ways, including a uniform surface area to volume ratio comparison as well as a comparison requiring a minimum manufacturable feature size. Significant performance improvements over traditional particle bed electrode designs are observed, and electrode microarchitectures derived from minimal surfaces are shown to be superior. A reduced-order volume-averaged porous electrode theory formulation for these unique 3D batteries is also developed, allowing simulations on the full-battery scale. Electrode concentration gradients are modeled using the diffusion length method, and results for plate and cylinder electrode geometries are compared to particle-scale simulation results. Additionally, effective diffusion lengths that minimize error with respect to particle-scale results for gyroid and Schwarz P electrode microstructures are determined.
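
    At the heart of both the particle-scale and the volume-averaged models sits solid-phase lithium diffusion. The sketch below is a minimal finite-volume solve of diffusion in a single spherical particle under a constant discharge flux; the diffusivity, radius, flux, and initial concentration are round illustrative numbers, not values from this work.

      import numpy as np

      D = 1e-14          # solid diffusivity [m^2/s] (assumed)
      R = 5e-6           # particle radius [m]
      j = 1e-6           # surface flux [mol/m^2/s], constant-current discharge
      c0 = 20000.0       # initial concentration [mol/m^3]

      n = 50
      r = np.linspace(0, R, n + 1)           # shell faces
      rc = 0.5 * (r[1:] + r[:-1])            # shell centers
      vol = (r[1:]**3 - r[:-1]**3) / 3.0     # shell volumes / (4*pi)
      c = np.full(n, c0)
      dt = 0.2 * (R / n) ** 2 / D            # stable explicit step

      for _ in range(20000):
          flux = np.zeros(n + 1)             # (diffusive flux) * r^2 at faces
          flux[1:-1] = -D * r[1:-1]**2 * np.diff(c) / np.diff(rc)
          flux[-1] = j * R**2                # lithium leaving at the surface
          c -= dt * np.diff(flux) / vol      # spherical finite-volume update

      print(f"surface/center concentration after {20000 * dt:.0f} s: "
            f"{c[-1]:.0f} / {c[0]:.0f} mol/m^3")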

  16. Advanced High Performance Solid Wall Blanket Concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  17. Simulator experiments: effects of NPP operator experience on performance

    Beare, A.N.; Gray, L.H.

    1985-01-01

    Experiments are being conducted on nuclear power plant (NPP) control room training simulators by the Oak Ridge National Laboratory, its subcontractor General Physics Corporation, and participating utilities. The experiments are sponsored by the Nuclear Regulatory Commission's (NRC) Human Factors and Safeguards Branch, Division of Risk Analysis and Operations, and are a continuation of prior research using simulators, supported by field data collection, to provide a technical basis for NRC human factors regulatory issues concerned with the operational safety of nuclear power plants. During the FY83 research, a simulator experiment was conducted on the control room simulator for a GE boiling water reactor (BWR) NPP. The research subjects were licensed operators undergoing requalification training and shift technical advisors (STAs). The experiment was designed to investigate the effects of (a) senior reactor operator (SRO) experience, (b) operating crew augmentation with an STA, and (c) practice as a crew on crew and individual operator performance in response to anticipated plant transients. The FY84 experiments are a partial replication and extension of the FY83 experiment, but with PWR operators and a PWR simulator. Methodology and results to date are reported.

  18. PERFORMANCE EVALUATION OF SOLAR COLLECTORS USING A SOLAR SIMULATOR

    M. Norhafana

    2015-11-01

    Solar water heating systems are one of the applications of solar energy. One of the components of a solar water heating system is a solar collector, which contains an absorber. The performance of the solar water heating system depends on the absorber in the solar collector. In countries with unsuitable weather conditions, indoor testing of solar collectors with a solar simulator is preferred. Thus, this study uses a multilayered absorber in the solar collector of a solar water heating system and evaluates the performance of the solar collector in terms of the useful heat of the multilayered absorber, using the multidirectional capability of a solar simulator at several values of solar radiation. The system was operated at three levels of solar radiation (400 W/m2, 550 W/m2 and 700 W/m2) and at three angles (0°, 45° and 90°). The results show that the multilayered absorber in the solar collector adapts best at the 45° position of the solar simulator across the different radiation intensities; at this angle the maximum values of useful heat and temperature difference are achieved. Keywords: solar water heating system; solar collector; multilayered absorber; solar simulator; solar radiation

  19. Simulation studies on high-gradient experiments

    Yamaguchi, S.

    1992-12-01

    A computer simulation of the characteristics of the dark current emitted from a 0.6 m long S-band accelerating structure has been performed. The energy spectra and the dependence of the dark current on the structure length were simulated. By adjusting the secondary electron emission (SEE) coefficient, the simulation qualitatively reproduced the observed energy spectra. It was shown that the dark current increases exponentially with the structure length, and the measured multiplication factor of the dark current per unit cell can be explained if the SEE coefficient is set to 1.2. The critical gradient for dark current capture, E_cri, has been calculated for two structures of 180 cells: E_cri [MV/m] = 13.1 f and 8.75 f for a/λ = 0.089 and 0.16, respectively, where f is the frequency in GHz, a the iris diameter and λ the wavelength.
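
    The exponential scaling quoted above follows directly from a constant per-cell multiplication factor: a seed current multiplied by m in each of n cells grows as m**n. The short sketch below just evaluates that relation with the paper's m = 1.2; capture efficiency and losses are ignored, so the numbers are purely illustrative.

      # Dark current growing geometrically along a multicell structure: each cell
      # multiplies the captured current by a constant factor m (the abstract's
      # per-cell multiplication factor of 1.2).
      m = 1.2
      i0 = 1.0                    # seed current, arbitrary units
      for n_cells in (10, 50, 100, 180):
          print(f"{n_cells:4d} cells: relative dark current {i0 * m ** n_cells:.3e}")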

  1. High thermoelectric performance of graphite nanofibers

    Tran, Van-Truong; Saint-Martin, Jérôme; Dollfus, Philippe; Volz, Sebastian

    2017-01-01

    Graphite nanofibers (GNFs) have been demonstrated to be a promising material for hydrogen storage and heat management in electronic devices. Here, by means of first-principles and transport simulations, we show that GNFs can also be an excellent material for thermoelectric applications, thanks to the weak interlayer van der Waals interaction that induces low thermal conductance and a step-like shape in the electronic transmission with mini-gaps, which are necessary ingredients to achieve high ...

  2. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is at a disadvantage compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
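
    A discrete event simulation of the kind described can be sketched in a few dozen lines: requests of several types arrive at random, queue for a fixed pool of servers, and the waiting times quantify the effect of the resource constraint. The request types, rates, and service times below are invented for illustration.

      import heapq, random

      rng = random.Random(0)
      SERVERS = 4
      SERVICE = {"web": 1.0, "batch": 8.0, "db": 3.0}   # mean service times

      events = []                                        # (time, kind, req_type)
      t = 0.0
      for _ in range(200):                               # Poisson-like arrivals
          t += rng.expovariate(1.0)
          heapq.heappush(events, (t, "arrive", rng.choice(list(SERVICE))))

      busy, queue, waits = 0, [], []
      while events:
          now, kind, rtype = heapq.heappop(events)
          if kind == "arrive":
              queue.append((now, rtype))                 # request joins the queue
          else:
              busy -= 1                                  # a server frees up
          while queue and busy < SERVERS:                # start waiting requests
              arrived, q_type = queue.pop(0)
              waits.append(now - arrived)
              busy += 1
              done = now + rng.expovariate(1.0 / SERVICE[q_type])
              heapq.heappush(events, (done, "depart", q_type))

      print(f"mean wait = {sum(waits) / len(waits):.2f} time units "
            f"({SERVERS} servers, {len(waits)} requests served)")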

  3. The new rosetta targets observations, simulations and instrument performances

    Epifani, Elena; Palumbo, Pasquale

    2004-01-01

    The Rosetta mission was successfully launched on March 2nd, 2004 for a rendezvous with the short-period comet 67P/Churyumov-Gerasimenko in 2014. The new baseline mission also foresees a double fly-by of the asteroids 21 Lutetia and 2867 Steins on the way towards the primary target. This volume collects papers presented at the workshop on "The NEW Rosetta targets: Observations, simulations and instrument performances", held in Capri on October 13-15, 2003. The papers cover the fields of observations of the new Rosetta targets, laboratory experiments and theoretical simulation of cometary processes, and the expected performance of the Rosetta experiments. Until real operations around 67P/Churyumov-Gerasimenko start, 10 years from now, new astronomical observations, laboratory experiments and theoretical models are required. The goals are to increase knowledge about the physics and chemistry of comets and to prepare to make the best use of Rosetta data.

  4. High performance liquid chromatographic determination of ...

    STORAGESEVER

    2010-02-08

    ... high performance liquid chromatography (HPLC) grade ... applications. These are important requirements if the reagent is to be applicable to on-line pre- or post-column derivatisation in a possible automation of the analytical ...

  5. Analog circuit design designing high performance amplifiers

    Feucht, Dennis

    2010-01-01

    The third volume Designing High Performance Amplifiers applies the concepts from the first two volumes. It is an advanced treatment of amplifier design/analysis emphasizing both wideband and precision amplification.

  6. Embedded High Performance Scalable Computing Systems

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  7. Gradient High Performance Liquid Chromatography Method ...

    Purpose: To develop a gradient high performance liquid chromatography (HPLC) method for the simultaneous determination of phenylephrine (PHE) and ibuprofen (IBU) in solid ... nimesulide, phenylephrine hydrochloride, chlorpheniramine maleate and caffeine anhydrous in pharmaceutical dosage form.

  8. Simulant Basis for the Standard High Solids Vessel Design

    Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-09-01

    This document provides the requirements for a test simulant suitable for demonstrating the mixing requirements of the Standard High Solids Vessel Design (SHSVD). This simulant has not been evaluated for other purposes such as gas retention and release or erosion. The objective of this work is to provide an underpinning for the simulant properties based on actual waste characterization.

  9. Highlighting High Performance: Whitman Hanson Regional High School; Whitman, Massachusetts

    2006-06-01

    This brochure describes the key high-performance building features of the Whitman-Hanson Regional High School. The brochure was paid for by the Massachusetts Technology Collaborative as part of their Green Schools Initiative. High-performance features described are daylighting and energy-efficient lighting, indoor air quality, solar and wind energy, building envelope, heating and cooling systems, water conservation, and acoustics. Energy cost savings are also discussed.

  10. PERFORMANCE EVALUATION OF SOLAR COLLECTORS USING A SOLAR SIMULATOR

    M. Norhafana; Ahmad Faris Ismail; Z. A. A. Majid

    2015-01-01

    Solar water heating systems are one of the applications of solar energy. One of the components of a solar water heating system is a solar collector that consists of an absorber. The performance of the solar water heating system depends on the absorber in the solar collector. In countries with unsuitable weather conditions, the indoor testing of solar collectors with the use of a solar simulator is preferred. Thus, this study is conducted to use a multilayered absorber in the solar collector of...

  11. Sustainable construction building performance simulation and asset and maintenance management

    2016-01-01

    This book presents a collection of recent research works that highlight best practice solutions, case studies and practical advice on the implementation of sustainable construction techniques. It includes a set of new developments in the field of building performance simulation, building sustainability assessment, sustainable management, asset and maintenance management and service-life prediction. Accordingly, the book will appeal to a broad readership of professionals, scientists, students, practitioners, lecturers and other interested parties.

  12. Use of advanced simulations in fuel performance codes

    Van Uffelen, P.

    2015-01-01

    The simulation of cylindrical fuel rod behaviour in a reactor or a storage pool for spent fuel requires a fuel performance code. Such a tool solves the equations for the heat transfer, the stresses and strains in fuel and cladding, the evolution of several isotopes, and the behaviour of various fission products in the fuel rod. The main equations, along with their limitations, are briefly described. The current approaches adopted for overcoming these limitations and the perspectives are also outlined. (author)
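
    The thermal part of such a code reduces, in the simplest steady-state case, to radial conduction in a cylindrical pellet with uniform volumetric heating, T(r) = T_s + q'''(R^2 - r^2)/(4k). The sketch below evaluates that profile with typical order-of-magnitude values (assumed, not taken from any specific code) and shows the classic result that the centre-to-surface temperature rise depends only on the linear heat rate and conductivity.

      import numpy as np

      q_lin = 20e3            # linear heat rate [W/m] (typical order of magnitude)
      R = 4.5e-3              # pellet radius [m]
      k = 3.0                 # UO2 thermal conductivity [W/m/K], taken constant
      T_s = 700.0             # pellet surface temperature [K]

      q_vol = q_lin / (np.pi * R**2)          # volumetric heat rate [W/m^3]
      r = np.linspace(0.0, R, 6)
      T = T_s + q_vol * (R**2 - r**2) / (4.0 * k)
      for ri, Ti in zip(r, T):
          print(f"r = {ri * 1e3:4.2f} mm  T = {Ti:6.0f} K")
      # The centre-to-surface rise reduces to q_lin/(4*pi*k), independent of R.
      print(f"delta T centre-surface = {q_lin / (4 * np.pi * k):.0f} K")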

  13. Quantum Simulations of Low Temperature High Energy Density Matter

    Voth, Gregory

    2004-01-01

    .... Using classical molecular dynamics simulations to evaluate these equilibrium properties would predict qualitatively incorrect results for low temperature solid hydrogen, because of the highly quantum...

  14. High performance computing in Windows Azure cloud

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can be implemented efficiently in a cloud as well. There are many advantages but also some disadvantages of cloud computing, some ...

  15. High-performance computing — an overview

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  16. Governance among Malaysian high performing companies

    Asri Marsidi

    2016-07-01

    Well-performing companies have always been linked with effective governance, which is generally reflected through an effective board of directors. However, many issues concerning the attributes of an effective board of directors remain unresolved. Nowadays, diversity is perceived as able to influence corporate performance, due to the likelihood of meeting the variety of needs and demands of diverse customers and clients. The study therefore aims to provide a fundamental understanding of governance among high performing companies in Malaysia.

  17. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J.

    2006-01-01

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  18. High-performance OPCPA laser system

    Zuegel, J.D.; Bagnoud, V.; Bromage, J.; Begishev, I.A.; Puth, J. [Rochester Univ., Lab. for Laser Energetics, NY (United States)

    2006-06-15

    Optical parametric chirped-pulse amplification (OPCPA) is ideally suited for amplifying ultra-fast laser pulses since it provides broadband gain across a wide range of wavelengths without many of the disadvantages of regenerative amplification. A high-performance OPCPA system has been demonstrated as a prototype for the front end of the OMEGA Extended Performance (EP) Laser System. (authors)

  19. Comparing Dutch and British high performing managers

    Waal, A.A. de; Heijden, B.I.J.M. van der; Selvarajah, C.; Meyer, D.

    2016-01-01

    National cultures have a strong influence on the performance of organizations and should be taken into account when studying the traits of high performing managers. At the same time, many studies that focus upon the attributes of successful managers show that there are attributes that are similar ...

  20. Hot and Hypoxic Environments Inhibit Simulated Soccer Performance and Exacerbate Performance Decrements When Combined

    Aldous, Jeffrey W. F.; Chrismas, Bryna C. R.; Akubat, Ibrahim; Dascombe, Ben; Abt, Grant; Taylor, Lee

    2016-01-01

    The effects of heat and/or hypoxia have been well documented in match-play data. However, large match-to-match variation for key physical performance measures makes environmental inferences difficult to ascertain from soccer match-play. Therefore, the present study investigates the hot (HOT), hypoxic (HYP), and hot-hypoxic (HH) mediated decrements during a non-motorized treadmill based soccer-specific simulation. Twelve male university soccer players completed three familiarization sessions and four randomized crossover experimental trials of the intermittent Soccer Performance Test (iSPT) in normoxic-temperate (CON: 18°C, 50% rH), HOT (30°C; 50% rH), HYP (1000 m; 18°C, 50% rH), and HH (1000 m; 30°C; 50% rH) conditions. Physical performance and its decrements, body temperatures (rectal, skin, and estimated muscle temperature), heart rate (HR), arterial blood oxygen saturation (SaO2), perceived exertion, thermal sensation (TS), body mass changes, blood lactate, and plasma volume were all measured. Performance decrements were similar in HOT and HYP [total distance (−4%), high-speed distance (~−8%), and variable run distance (~−12%) covered] and were exacerbated in HH [total distance (−9%), high-speed distance (−15%), and variable run distance (−15%)] compared to CON. Peak sprint speed was 4% greater in HOT compared with CON and HYP and 7% greater in HH. Sprint distance covered was unchanged (p > 0.05) in HOT and HYP and decreased only in HH (−8%) compared with CON. Body mass (−2%), temperatures (+2–5%), and TS (+18%) were altered in HOT. Furthermore, SaO2 (−8%) and HR (+3%) were changed in HYP. Similar changes in body mass and temperatures, HR, TS, and SaO2 were evident in HH as in HOT and HYP; however, blood lactate ... physical performance during iSPT. Future interventions should address the increases in TS and body temperatures to attenuate these decrements in soccer performance. PMID:26793122

  1. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool intended to meet industry's future demand for expedience and reduced cost is presented. This unique software, GENOA, is dedicated to parallel and high-speed analysis for the probabilistic evaluation of the high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives achieved in the development were: (1) utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of the structure, material and processing of high temperature composites affordable; (2) computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism and increase convergence rates through high- and low-level processor assignment; (4) creation of a framework for a portable parallel architecture for machine-independent multiple-instruction multiple-data (MIMD), single-instruction multiple-data (SIMD), hybrid and distributed-workstation types of computers; and (5) market evaluation. The results of the Phase 2 effort provide a good basis for continuation and warrant a Phase 3 government and industry partnership.
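
    The basic pattern of parallel probabilistic structural analysis, independent Monte Carlo samples of uncertain strength and load farmed out to multiple processors, can be sketched as below. The limit-state function and the normal distributions are stand-ins for illustration, not GENOA's micromechanics models.

      import numpy as np
      from multiprocessing import Pool

      def failure_count(args):
          # One worker's batch: count samples where applied stress exceeds strength.
          seed, n = args
          rng = np.random.default_rng(seed)
          strength = rng.normal(600.0, 60.0, n)    # MPa, uncertain material strength
          stress = rng.normal(450.0, 40.0, n)      # MPa, uncertain applied stress
          return int((stress > strength).sum())

      if __name__ == "__main__":
          n_workers, n_per = 4, 250_000
          with Pool(n_workers) as pool:
              fails = pool.map(failure_count, [(s, n_per) for s in range(n_workers)])
          pf = sum(fails) / (n_workers * n_per)
          print(f"estimated probability of failure: {pf:.2e}")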

  2. Robust Multivariable Optimization and Performance Simulation for ASIC Design

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application-specific integrated circuit (ASIC) design for space applications involves the multiple challenges of maximizing performance, minimizing power, and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem, which must be solved early in the development cycle of a system because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates, and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost-function computation.
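
    The automation described, sweeping design and environmental parameters, scoring each corner with a cost function, and hunting for the worst case, reduces to a pattern like the one below. Here evaluate() is a stand-in for a circuit simulation, and the corner values, specs, and cost weighting are all invented for illustration.

      import itertools

      PROCESS = ["slow", "typ", "fast"]
      VDD     = [1.62, 1.80, 1.98]       # supply corners [V]
      TEMP    = [-55.0, 27.0, 125.0]     # temperature corners [C]

      def evaluate(process, vdd, temp):
          # Stand-in performance model returning (gain_db, power_mw).
          speed = {"slow": 0.8, "typ": 1.0, "fast": 1.2}[process]
          gain = 40.0 * speed * (vdd / 1.8) - 0.02 * max(temp, 0.0)
          power = 2.0 * speed * (vdd / 1.8) ** 2
          return gain, power

      def cost(gain, power, gain_spec=35.0):
          # Heavily penalize spec misses, then minimize power.
          return max(gain_spec - gain, 0.0) * 10.0 + power

      worst = max(itertools.product(PROCESS, VDD, TEMP),
                  key=lambda corner: cost(*evaluate(*corner)))
      g, p = evaluate(*worst)
      print(f"worst corner {worst}: gain = {g:.1f} dB, power = {p:.2f} mW")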

  3. Simulating the Performance of Ground-Based Optical Asteroid Surveys

    Christensen, Eric J.; Shelly, Frank C.; Gibbs, Alex R.; Grauer, Albert D.; Hill, Richard E.; Johnson, Jess A.; Kowalski, Richard A.; Larson, Stephen M.

    2014-11-01

    We are developing a set of asteroid survey simulation tools in order to estimate the capability of existing and planned ground-based optical surveys, and to test a variety of possible survey cadences and strategies. The survey simulator is composed of several layers, including a model population of solar system objects and an orbital integrator, a site-specific atmospheric model (including inputs for seeing, haze and seasonal cloud cover), a model telescope (with a complete optical path to estimate throughput), a model camera (including FOV, pixel scale, and focal plane fill factor) and model source extraction and moving object detection layers with tunable detection requirements. We have also developed a flexible survey cadence planning tool to automatically generate nightly survey plans. Inputs to the cadence planner include camera properties (FOV, readout time), telescope limits (horizon, declination, hour angle, lunar and zenithal avoidance), preferred and restricted survey regions in RA/Dec, ecliptic, and Galactic coordinate systems, and recent coverage by other asteroid surveys. Simulated surveys are created for a subset of current and previous NEO surveys (LINEAR, Pan-STARRS and the three Catalina Sky Survey telescopes), and compared against the actual performance of these surveys in order to validate the model’s performance. The simulator tracks objects within the FOV of any pointing that were not discovered (e.g. too few observations, too trailed, focal plane array gaps, too fast or slow), thus dividing the population into “discoverable” and “discovered” subsets, to inform possible survey design changes. Ongoing and future work includes generating a realistic “known” subset of the model NEO population, running multiple independent simulated surveys in coordinated and uncoordinated modes, and testing various cadences to find optimal strategies for detecting NEO sub-populations. These tools can also assist in quantifying the efficiency of novel

  4. Spent fuel and high level waste: Chemical durability and performance under simulated repository conditions. Results of a coordinated research project 1998-2004. Part 1: Contributions by participants in the co-ordinated research project on chemical durability and performance assessment under simulated repository conditions

    2007-07-01

    This publication contains the results of an IAEA Coordinated Research Project (CRP). It provides a basis for understanding the potential interactions of waste form and repository environment, which is necessary for the development of the design and safety case for deep disposal. The types of high level waste matrices investigated include spent fuel, glasses and ceramics. Of particular interest are the experimental results pertaining to ceramic forms such as SYNROC. This publication also outlines important areas for future work, namely standardized, collaborative experimental protocols for package-release studies; structured development and calibration of predictive models linking the performance of packaged waste and the repository environment; and studies of the long term behaviour of the wastes, including active waste samples. It comprises 15 contributions from the participants in the Coordinated Research Project, which are indexed individually.

  5. Predictive neuromechanical simulations indicate why walking performance declines with ageing.

    Song, Seungmoon; Geyer, Hartmut

    2018-04-01

    Although the natural decline in walking performance with ageing affects the quality of life of a growing elderly population, its physiological origins remain unknown. By using predictive neuromechanical simulations of human walking with age-related neuro-musculo-skeletal changes, we find evidence that the loss of muscle strength and muscle contraction speed dominantly contribute to the reduced walking economy and speed. The findings imply that focusing on recovering these muscular changes may be the only effective way to improve performance in elderly walking. More generally, the work is of interest for investigating the physiological causes of altered gait due to age, injury and disorders. Healthy elderly people walk slower and energetically less efficiently than young adults. This decline in walking performance lowers the quality of life for a growing ageing population, and understanding its physiological origin is critical for devising interventions that can delay or revert it. However, the origin of the decline in walking performance remains unknown, as ageing produces a range of physiological changes whose individual effects on gait are difficult to separate in experiments with human subjects. Here we use a predictive neuromechanical model to separately address the effects of common age-related changes to the skeletal, muscular and nervous systems. We find in computer simulations of this model that the combined changes produce gait consistent with elderly walking and that mainly the loss of muscle strength and mass reduces energy efficiency. In addition, we find that the slower preferred walking speed of elderly people emerges in the simulations when adapting to muscle fatigue, again mainly caused by muscle-related changes. The results suggest that a focus on recovering these muscular changes may be the only effective way to improve performance in elderly walking.

  6. Power efficient and high performance VLSI architecture for AES algorithm

    K. Kalaiselvi

    2015-09-01

    The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design, and an FPGA chip has been used for the hardware implementation. Experimental results reveal that the proposed AES architecture offers superior performance to existing VLSI architectures in terms of power, throughput and critical path delay.

  7. High Performance Work Systems for Online Education

    Contacos-Sawyer, Jonna; Revels, Mark; Ciampa, Mark

    2010-01-01

    The purpose of this paper is to identify the key elements of a High Performance Work System (HPWS) and explore the possibility of implementation in an online institution of higher learning. With the projected rapid growth of the demand for online education and its importance in post-secondary education, providing high quality curriculum, excellent…

  8. Teacher Accountability at High Performing Charter Schools

    Aguirre, Moises G.

    2016-01-01

    This study will examine the teacher accountability and evaluation policies and practices at three high performing charter schools located in San Diego County, California. Charter schools are exempted from many laws, rules, and regulations that apply to traditional school systems. By examining the teacher accountability systems at high performing…

  9. Advanced high performance solid wall blanket concepts

    Wong, C.P.C.; Malang, S.; Nishio, S.; Raffray, R.; Sagara, A.

    2002-01-01

    First wall and blanket (FW/blanket) design is a crucial element in the performance and acceptance of a fusion power plant. High temperature structural and breeding materials are needed for high thermal performance. A suitable combination of structural design with the selected materials is necessary for D-T fuel sufficiency. Whenever possible, low afterheat, low chemical reactivity and low activation materials are desired to achieve passive safety and minimize the amount of high-level waste. Of course, the selected fusion FW/blanket design will have to match the operational scenarios of high performance plasma. The key characteristics of eight advanced high performance FW/blanket concepts are presented in this paper. Design configurations, performance characteristics, unique advantages and issues are summarized. All reviewed designs can satisfy most of the necessary design goals. For further development, in concert with the advancement in plasma control and scrape-off layer physics, additional emphasis will be needed in the areas of first wall coating material selection, design of plasma stabilization coils, and consideration of reactor startup and transient events. To validate the projected performance of the advanced FW/blanket concepts, the critical element is the need for 14 MeV neutron irradiation facilities for the generation of necessary engineering design data and the prediction of FW/blanket component lifetime and availability.

  10. Highly automated driving, secondary task performance, and driver state.

    Merat, Natasha; Jamson, A Hamish; Lai, Frank C H; Carsten, Oliver

    2012-10-01

    A driving simulator study compared the effect of changes in workload on performance in manual and highly automated driving. Changes in driver state were also observed by examining variations in blink patterns. With the addition of a greater number of advanced driver assistance systems in vehicles, the driver's role is likely to alter in the future from an operator in manual driving to a supervisor of highly automated cars. Understanding the implications of such advancements on drivers and road safety is important. A total of 50 participants were recruited for this study and drove the simulator in both manual and highly automated mode. As well as comparing the effect of adjustments in driving-related workload on performance, the effect of a secondary Twenty Questions Task was also investigated. In the absence of the secondary task, drivers' response to critical incidents was similar in manual and highly automated driving conditions. The worst performance was observed when drivers were required to regain control of driving in the automated mode while distracted by the secondary task. Blink frequency patterns were more consistent for manual than automated driving but were generally suppressed during conditions of high workload. Highly automated driving did not have a deleterious effect on driver performance, when attention was not diverted to the distracting secondary task. As the number of systems implemented in cars increases, an understanding of the implications of such automation on drivers' situation awareness, workload, and ability to remain engaged with the driving task is important.

  11. High Performance Computing in Science and Engineering '14

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state of the art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and from chemistry to computer science, with a special emphasis on industrially relevant applications. Presenting the findings of one of Europe's leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  12. High performance computations using dynamical nucleation theory

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular-level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.
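
    The 'master-slave' pattern mentioned above can be sketched with a process pool: a master distributes independent Monte Carlo walkers and aggregates their statistics. The toy harmonic energy function below stands in for DNTMC's molecular models; everything here is illustrative, not NWChem code.

      import math
      import random
      from multiprocessing import Pool

      def walker(seed, steps=100_000):
          # Independent Metropolis walker on a toy 1-D "energy" E(x) = x**2 at
          # temperature T = 1; returns its acceptance rate and <x^2> (exact: 0.5).
          rng = random.Random(seed)
          x, x2_sum, accepted = 0.0, 0.0, 0
          for _ in range(steps):
              trial = x + rng.uniform(-0.5, 0.5)
              de = trial * trial - x * x
              if de < 0 or rng.random() < math.exp(-de):
                  x, accepted = trial, accepted + 1
              x2_sum += x * x
          return accepted / steps, x2_sum / steps

      if __name__ == "__main__":
          with Pool(8) as pool:                  # the worker ("slave") processes
              results = pool.map(walker, range(8))
          acc = sum(a for a, _ in results) / len(results)
          x2 = sum(v for _, v in results) / len(results)
          print(f"acceptance = {acc:.2f}, <x^2> = {x2:.3f} (exact 0.5)")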

  13. Doctors' stress responses and poor communication performance in simulated bad-news consultations.

    Brown, Rhonda; Dunn, Stewart; Byrnes, Karen; Morris, Richard; Heinrich, Paul; Shaw, Joanne

    2009-11-01

    No studies have previously evaluated factors associated with high stress levels and poor communication performance in breaking bad news (BBN) consultations. This study determined factors that were most strongly related to doctors' stress responses and poor communication performance during a simulated BBN task. In 2007, the authors recruited 24 doctors comprising 12 novices (i.e., interns/residents with 1-3 years' experience) and 12 experts (i.e., registrars, medical/radiation oncologists, or cancer surgeons, with more than 4 years' experience). Doctors participated in simulated BBN consultations and a number of control tasks. Five-minute-epoch heart rate (HR), HR variability, and communication performance were assessed in all participants. Subjects also completed a short questionnaire asking about their prior experience BBN, perceived stress, psychological distress (i.e., anxiety, depression), fatigue, and burnout. High stress responses were related to inexperience with BBN, fatigue, and giving bad versus good news. Poor communication performance in the consultation was related to high burnout and fatigue scores. These results suggest that BBN was a stressful experience for doctors even in a simulated encounter, especially for those who were inexperienced and/or fatigued. Poor communication performance was related to burnout and fatigue, but not inexperience with BBN. These results likely indicate that burnout and fatigue contributed to stress and poor work performance in some doctors during the simulated BBN task.

  14. High Dynamic Performance Nonlinear Source Emulator

    Nguyen-Duy, Khiem; Knott, Arnold; Andersen, Michael A. E.

    2016-01-01

    As research and development of systems based on renewable and clean energy advances rapidly, the nonlinear source emulator (NSE) is becoming essential for the testing of maximum power point trackers or downstream converters. Renewable and clean energy sources play important roles in both terrestrial and nonterrestrial applications. However, most existing NSEs have been concerned only with simulating energy sources in terrestrial applications, which may not be fast enough for testing of nonterrestrial applications. In this paper, a high-bandwidth NSE is developed that is able to respond not only to a fast change in the input source but also to a load step between nominal and open circuit. Moreover, all of these operation modes have a very fast settling time of only 10 μs, which is hundreds of times faster than that of existing works. This attribute allows for higher speed and a more efficient maximum ...

  15. Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations

    Rino, C. L.; Carrano, C. S.; Yokoyama, T.

    2017-12-01

    In a recent paper, under review, equatorial plasma bubble (EPB) simulations were used to conduct a comparative analysis of EPB spectral characteristics against high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to the magnetic field lines provided well-defined EPB structure at altitudes penetrating both high- and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics to the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path; correlated structure would be interpreted as stochastic total electron content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently ...
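
    As a concrete piece of the phase-screen machinery, the sketch below synthesizes a 1-D phase screen with a two-component inverse-power-law spectrum by FFT filtering of white noise. The spectral indices, break scale, and sampling are illustrative assumptions, not fitted IPE parameters.

      import numpy as np

      n, dx = 4096, 100.0                 # samples and spacing [m]
      p1, p2 = 2.5, 3.5                   # spectral indices below/above the break
      q_break = 2 * np.pi / 10_000.0      # spectral break at a 10 km scale

      q = np.fft.rfftfreq(n, dx) * 2 * np.pi
      q[0] = q[1]                         # avoid division by zero at DC
      # Two-component power law, continuous at the break wavenumber.
      power = np.where(q < q_break,
                       q ** (-p1),
                       q_break ** (p2 - p1) * q ** (-p2))

      rng = np.random.default_rng(11)
      spec = np.fft.rfft(rng.normal(size=n)) * np.sqrt(power)
      phase = np.fft.irfft(spec, n)
      phase -= phase.mean()
      print(f"phase screen rms = {phase.std():.2f} rad (arbitrary scaling)")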

  16. Burnout among pilots: psychosocial factors related to happiness and performance at simulator training.

    Demerouti, Evangelia; Veldhuis, Wouter; Coombes, Claire; Hunter, Rob

    2018-06-18

    In this study among airline pilots, we aim to uncover the work characteristics (job demands and resources) and the outcomes (job crafting, happiness and simulator training performance) that are related to burnout for this occupational group. Using a large sample of airline pilots, we showed that 40% of the participating pilots experience high burnout. In line with Job Demands-Resources theory, job demands were detrimental for simulator training performance because they made pilots more exhausted and less able to craft their job, whereas job resources had a favourable effect because they reduced feelings of disengagement and increased job crafting. Moreover, burnout was negatively related to pilots' happiness with life. These findings highlight the importance of psychosocial factors and health for valuable outcomes for both pilots and airlines. Practitioner Summary: Using an online survey among the members of a European pilots' professional association, we examined the relationship between psychosocial factors (work characteristics, burnout) and outcomes (simulator training performance, happiness). Forty per cent of the participating pilots experience high burnout. Job demands were detrimental, whereas job resources were favourable for simulator training performance/happiness. Twitter text: 40% of airline pilots experience burnout and psychosocial work factors and burnout relate to performance at pilots' simulator training.

  17. An approach to high speed ship ride quality simulation

    Malone, W. L.; Vickery, J. M.

    1975-01-01

    The high speeds attained by certain advanced surface ships result in a spectrum of motion that is higher in frequency than that of conventional ships. This fact, along with the inclusion of advanced ride control features in the design of these ships, has resulted in an increased awareness of the need for ride criteria. Such criteria can be developed using data from actual ship operations in varied sea states or from clinical laboratory experiments. A third approach is to simulate ship conditions using measured or calculated ship motion data. Recent simulations have used data derived from a math model of Surface Effect Ship (SES) motion. The model in turn is based on equations of motion which have been refined with data from scale models and SES of up to 101 600-kg (100-ton) displacement. Employment of broad-band motion emphasizes the use of the simulators as a design tool to evaluate a given ship configuration in several operational situations and also serves to provide data as to the overall effect of a given motion on crew performance and physiological status.

  18. Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion

    Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul

    2015-01-01

    A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, as the flame surface area is larger. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.

  20. GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit.

    Pronk, Sander; Páll, Szilárd; Schulz, Roland; Larsson, Per; Bjelkmar, Pär; Apostolov, Rossen; Shirts, Michael R; Smith, Jeremy C; Kasson, Peter M; van der Spoel, David; Hess, Berk; Lindahl, Erik

    2013-04-01

    Molecular simulation has historically been a low-throughput technique, but faster computers and increasing amounts of genomic and structural data are changing this by enabling large-scale automated simulation of, for instance, many conformers or mutants of biomolecules with or without a range of ligands. At the same time, advances in performance and scaling now make it possible to model complex biomolecular interaction and function in a manner directly testable by experiment. These applications share a need for fast and efficient software that can be deployed on massive scale in clusters, web servers, distributed computing or cloud resources. Here, we present a range of new simulation algorithms and features developed during the past 4 years, leading up to the GROMACS 4.5 software package. The software now automatically handles wide classes of biomolecules, such as proteins, nucleic acids and lipids, and comes with all commonly used force fields for these molecules built-in. GROMACS supports several implicit solvent models, as well as new free-energy algorithms, and the software now uses multithreading for efficient parallelization even on low-end systems, including Windows-based workstations. Together with hand-tuned assembly kernels and state-of-the-art parallelization, this provides extremely high performance and cost efficiency for high-throughput as well as massively parallel simulations. GROMACS is an open source and free software available from http://www.gromacs.org. Supplementary data are available at Bioinformatics online.

  1. Building Performance Simulation tools for planning of energy efficiency retrofits

    Mondrup, Thomas Fænø; Karlshøj, Jan; Vestergaard, Flemming

    2014-01-01

    Designing energy efficiency retrofits for existing buildings will bring environmental, economic, social, and health benefits. However, selecting specific retrofit strategies is complex and requires careful planning. In this study, we describe a methodology for adopting Building Performance...... to energy efficiency retrofits in social housing. To generate energy savings, we focus on optimizing the building envelope. We evaluate alternative building envelope actions using procedural solar radiation and daylight simulations. In addition, we identify the digital information flow and the information...... Simulation (BPS) tools as energy and environmentally conscious decision-making aids. The methodology has been developed to screen buildings for potential improvements and to support the development of retrofit strategies. We present a case study of a Danish renovation project, implementing BPS approaches...

  2. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Jeon Soohong

    2014-12-01

    This paper investigates the noise transmission performance of industrial mufflers widely used in ships, based on CAE modeling and simulation. Since industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. CAE modeling and simulation are therefore required, incorporating the commercial software packages CATIA for geometry modeling, MSC/PATRAN for FE meshing, and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls, and absorption materials. The reactive elements and absorbent materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  4. Management of Industrial Performance Indicators: Regression Analysis and Simulation

    Walter Roberto Hernandez Vergara

    2017-11-01

    Stochastic methods can be used in problem solving and in explaining natural phenomena through the application of statistical procedures. This article aims to combine regression analysis and systems simulation in order to facilitate the practical understanding of data analysis. The algorithms were developed in Microsoft Office Excel, using statistical techniques such as regression theory, ANOVA, and Cholesky factorization, which made it possible to create models of single and multiple systems with up to five independent variables. For the analysis of these models, Monte Carlo simulation and analysis of industrial performance indicators were used, resulting in numerical indices intended to improve the management of compliance targets by identifying system instability, correlations, and anomalies. The analytical models presented in the survey showed satisfactory results, with numerous possibilities for industrial and academic application as well as potential for deployment in new analytical techniques.
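
    A minimal sketch of the workflow described above, in Python rather than the article's Excel implementation: correlated inputs are generated via Cholesky factorization and propagated through a regression model by Monte Carlo. The correlation matrix, regression coefficients, and compliance target below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical correlation between two process indicators.
        corr = np.array([[1.0, 0.6],
                         [0.6, 1.0]])
        L = np.linalg.cholesky(corr)        # corr = L @ L.T

        n = 100_000
        z = rng.standard_normal((n, 2))     # independent standard normals
        x = z @ L.T                         # correlated inputs

        # Hypothetical fitted regression model with noise.
        b0, b1, b2 = 5.0, 1.2, -0.8
        y = b0 + b1 * x[:, 0] + b2 * x[:, 1] + 0.5 * rng.standard_normal(n)

        # Monte Carlo estimate of the probability of meeting a compliance target.
        target = 6.0
        print("P(y >= target) ≈", np.mean(y >= target))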

  5. High Performance Networks From Supercomputing to Cloud Computing

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications.

  6. Wavy channel transistor for area efficient high performance operation

    Fahad, Hossain M.

    2013-04-05

    We report a wavy channel FinFET-like transistor in which the channel is wavy to increase its width without any area penalty, thereby increasing its drive current. Through simulation and experiments, we show that such a device architecture is capable of high performance operation compared to conventional FinFETs, with comparatively higher area efficiency, lower chip latency, and lower power consumption.

  7. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High performance bio-integrated devices

    Kim, Dae-Hyeong; Lee, Jongha; Park, Minjoon

    2014-06-01

    In recent years, personalized electronics for medical applications have attracted much attention with the rise of smartphones, because the coupling of such devices with smartphones enables continuous health monitoring in patients' daily lives. In particular, high performance biomedical electronics integrated with the human body are expected to open new opportunities in ubiquitous healthcare. However, the mechanical and geometrical constraints inherent in all standard forms of high performance rigid wafer-based electronics raise unique integration challenges with biotic entities. Here, we describe materials and design constructs for high performance skin-mountable bio-integrated electronic devices, which incorporate arrays of single crystalline inorganic nanomembranes. The resulting electronic devices include flexible and stretchable electrophysiology electrodes and sensors coupled with active electronic components. These advances in bio-integrated systems create new directions in personalized health monitoring and/or human-machine interfaces.

  9. Designing a High Performance Parallel Personal Cluster

    Kapanova, K. G.; Sellier, J. M.

    2016-01-01

    Today, many scientific and engineering areas require high performance computing to perform computationally intensive experiments. For example, many advances in transport phenomena, thermodynamics, material properties, computational chemistry and physics are possible only because of the availability of such large scale computing infrastructures. Yet many challenges are still open. The cost of energy consumption, cooling, competition for resources have been some of the reasons why the scientifi...

  10. vSphere high performance cookbook

    Sarkar, Prasenjit

    2013-01-01

    vSphere High Performance Cookbook is written in a practical, helpful style with numerous recipes focusing on answering and providing solutions to common, and not-so common, performance issues and problems.The book is primarily written for technical professionals with system administration skills and some VMware experience who wish to learn about advanced optimization and the configuration features and functions for vSphere 5.1.

  11. Comparative Performance of Four Single Extreme Outlier Discordancy Tests from Monte Carlo Simulations

    Surendra P. Verma

    2014-01-01

    Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contamination of a single observation was simulated using parameters called δ, from ±0.1 up to ±20, to model slippage of central tendency, or ε, from ±1.1 up to ±200, for slippage of dispersion, as well as no contamination (δ=0 and ε=±1). Because of the use of precise and accurate random, normally distributed simulated data, very large numbers of replications, and a large number of independent experiments, this paper presents a novel approach to precise and accurate estimation of the power functions of four popular discordancy tests and should therefore not be considered a simple simulation exercise unrelated to probability and statistics. From both the Power of Test criterion proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests can be summarized as N2≅N15>N14>N8.
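
    To make the power-function idea concrete, here is a small Python sketch (with far fewer replications than the study's 20,000,000) that estimates the power of the single-outlier Grubbs test N2 when one observation is slipped in location by δ; the two-sided critical value uses the standard t-distribution formula.

        import numpy as np
        from scipy import stats

        def grubbs_critical(n, alpha=0.05):
            """Two-sided critical value of the single-outlier Grubbs statistic."""
            t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
            return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

        def grubbs_power(n=10, delta=4.0, alpha=0.05, reps=20_000, seed=1):
            """Monte Carlo power: one observation per sample is shifted by delta."""
            rng = np.random.default_rng(seed)
            x = rng.standard_normal((reps, n))
            x[:, 0] += delta                      # contaminate a single observation
            g = np.max(np.abs(x - x.mean(axis=1, keepdims=True)), axis=1)
            g /= x.std(axis=1, ddof=1)
            return np.mean(g > grubbs_critical(n, alpha))

        print(grubbs_power())   # fraction of contaminated samples flagged discordant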

  12. Performance demonstration program plan for analysis of simulated headspace gases

    1995-06-01

    The Performance Demonstration Program (PDP) for analysis of headspace gases will consist of regular distribution and analyses of test standards to evaluate the capability for analyzing VOCs, hydrogen, and methane in the headspace of transuranic (TRU) waste throughout the Department of Energy (DOE) complex. Each distribution is termed a PDP cycle. These evaluation cycles will provide an objective measure of the reliability of measurements performed for TRU waste characterization. Laboratory performance will be demonstrated by the successful analysis of blind audit samples of simulated TRU waste drum headspace gases according to the criteria set within the text of this Program Plan. Blind audit samples (hereinafter referred to as PDP samples) will be used as an independent means to assess laboratory performance regarding compliance with the QAPP QAOs. The concentration of analytes in the PDP samples will encompass the range of concentrations anticipated in actual waste characterization gas samples. Analyses which are required by the WIPP to demonstrate compliance with various regulatory requirements and which are included in the PDP must be performed by laboratories which have demonstrated acceptable performance in the PDP

  13. High performance parallel I/O

    Prabhat

    2014-01-01

    Gain critical insight into the parallel I/O ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O hardware.

  14. Assessing performance and validating finite element simulations using probabilistic knowledge

    Dolin, Ronald M.; Rodriguez, E. A. (Edward A.)

    2002-01-01

    Two probabilistic approaches for assessing performance are presented. The first approach assesses the probability of failure by simultaneously modeling all likely events. The probability that each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure over all the events. The Likelihood of Occurrence simulation suggests failure does not occur, while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate the finite element predictions.
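
    The Latin hypercube sampling step can be sketched in a few lines of Python; the load/strength failure criterion and the input ranges below are invented for illustration and are not taken from the report.

        import numpy as np

        def latin_hypercube(n_samples, n_dims, seed=None):
            """LHS on the unit hypercube: one point per equal-probability stratum
            in every dimension, with strata paired randomly across dimensions."""
            rng = np.random.default_rng(seed)
            u = (rng.random((n_samples, n_dims))
                 + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_dims):
                rng.shuffle(u[:, j])   # decouple the strata between dimensions
            return u

        # Hypothetical failure criterion: load exceeds strength.
        u = latin_hypercube(1000, 2, seed=0)
        load = 80.0 + 40.0 * u[:, 0]        # assumed uniform on [80, 120]
        strength = 100.0 + 30.0 * u[:, 1]   # assumed uniform on [100, 130]
        print("P(failure) ≈", np.mean(load > strength))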

  15. Eddy current NDE performance demonstrations using simulation tools

    Maurice, L.; Costan, V.; Guillot, E.; Thomas, P.

    2013-01-01

    To carry out performance demonstrations of the eddy-current NDE processes applied in French nuclear power plants, EDF is studying the possibility of using simulation tools as an alternative to measurements on steam generator tube mock-ups. This paper focuses on the strategy adopted by EDF to assess and use the codes Carmel3D and Civa for the case of eddy-current NDE of wear problems that may appear in the U-shaped region of steam generator tubes due to the rubbing of anti-vibration bars.

  16. Quantum Accelerators for High-performance Computing Systems

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  17. Integrating Soft Set Theory and Fuzzy Linguistic Model to Evaluate the Performance of Training Simulation Systems.

    Chang, Kuei-Hu; Chang, Yung-Chia; Chain, Kai; Chung, Hsiang-Yu

    2016-01-01

    The advancement of high technologies and the arrival of the information age have caused changes to modern warfare. The military forces of many countries have partially replaced real training drills with training simulation systems to achieve combat readiness. However, many different types of training simulation systems are used in military settings. In addition, differences in system set-up time, functions, the environment, and the competency of system operators, as well as incomplete information, have made it difficult to evaluate the performance of training simulation systems. To address the aforementioned problems, this study integrated the analytic hierarchy process, soft set theory, and the fuzzy linguistic representation model to evaluate the performance of various training simulation systems. Furthermore, importance-performance analysis was adopted to examine the influence of cost savings and training safety of training simulation systems. The findings of this study are expected to facilitate the application of military training simulation systems, avoid the wasting of resources (e.g., low utilization and idle time), and provide data for subsequent applications and analysis. To verify the method proposed in this study, numerical examples of the performance evaluation of training simulation systems were adopted and compared with the numerical results of an AHP and a novel AHP-based ranking technique. The results verified that not only could expert-provided questionnaire information be fully considered to lower the repetition rate of performance rankings, but a two-dimensional graph could also be used to help administrators allocate limited resources, thereby enhancing the investment benefits and training effectiveness of a training simulation system.
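
    As a sketch of the analytic-hierarchy-process ingredient of the method (the soft set and fuzzy linguistic layers are omitted), the following Python code derives criterion weights from a pairwise-comparison matrix via its principal eigenvector; the three criteria and the judgment values are hypothetical, not taken from the study.

        import numpy as np

        def ahp_weights(pairwise):
            """Criterion weights from an AHP pairwise matrix: the principal
            eigenvector, normalized to sum to one; also return lambda_max."""
            vals, vecs = np.linalg.eig(pairwise)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            return w / w.sum(), vals.real[k]

        # Hypothetical Saaty-scale comparisons of three simulator criteria:
        # cost saving, training safety, fidelity (reciprocal matrix).
        A = np.array([[1.0, 3.0, 5.0],
                      [1 / 3, 1.0, 2.0],
                      [1 / 5, 1 / 2, 1.0]])
        w, lam = ahp_weights(A)
        ci = (lam - 3) / (3 - 1)   # consistency index; ~0 means consistent judgments
        print("weights:", w.round(3), " CI:", round(ci, 4))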

  18. Fracture modelling of a high performance armour steel

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulation the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments performed under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, on-going work will further characterise the fracture behaviour of Armox 500T.

  19. High Performance Computing Software Applications for Space Situational Awareness

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  20. Performance of technology-driven simulators for medical students--a systematic review.

    Michael, Michael; Abboudi, Hamid; Ker, Jean; Shamim Khan, Mohammed; Dasgupta, Prokar; Ahmed, Kamran

    2014-12-01

    Simulation-based education has evolved as a key training tool in high-risk industries such as aviation and the military. In parallel with these industries, the benefits of incorporating specialty-oriented simulation training within medical schools are vast. Adoption of simulators into medical school education programs has shown great promise and has the potential to revolutionize modern undergraduate education. An English-language literature search was carried out using the MEDLINE, EMBASE, and PsycINFO databases to identify all randomized controlled studies pertaining to "technology-driven" simulators used in undergraduate medical education. A validity framework incorporating the "framework for technology enhanced learning" report by the Department of Health, United Kingdom, was used to evaluate the capabilities of each technology-driven simulator. Information was collected regarding the simulator type, characteristics, and brand name. Where possible, we extracted information from the studies on the simulators' performance with respect to validity status, reliability, feasibility, educational impact, acceptability, and cost effectiveness. We identified 19 studies, analyzing simulators for medical students across a variety of procedure-based specialties, including cardiovascular (n = 2), endoscopy (n = 3), laparoscopic surgery (n = 8), vascular access (n = 2), ophthalmology (n = 1), obstetrics and gynecology (n = 1), anesthesia (n = 1), and pediatrics (n = 1). Incorporation of simulators has so far been on an institutional level; no national or international trends have yet emerged. Simulators are capable of providing a highly educational and realistic experience for medical students within a variety of specialty-oriented teaching sessions. Further research is needed to establish how best to incorporate simulators into a more primary stage of medical education: preclinical and clinical undergraduate medicine. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. High performance APCS conceptual design and evaluation scoping study

    Soelberg, N.; Liekhus, K.; Chambers, A.; Anderson, G.

    1998-02-01

    This Air Pollution Control System (APCS) Conceptual Design and Evaluation study was conducted to evaluate a high-performance APC system for minimizing air emissions from mixed waste thermal treatment systems. Seven variations of high-performance APCS designs were conceptualized using several design objectives. One of the system designs was selected for detailed process simulation using ASPEN PLUS to determine material and energy balances and evaluate performance. Installed system capital costs were also estimated. Sensitivity studies were conducted to evaluate the incremental cost and benefit of added carbon adsorber beds for mercury control, selective catalytic reduction for NOx control, and offgas retention tanks for holding the offgas until sample analysis is conducted to verify that the offgas meets emission limits. Results show that the high-performance dry-wet APCS can easily meet all expected emission limits except possibly for mercury. The capability to achieve high levels of mercury control (potentially necessary for thermally treating some DOE mixed streams) could not be validated using current performance data for mercury control technologies. The engineering approach and the ASPEN PLUS modeling tool developed and used in this study identified APC equipment and system performance, size, cost, and other issues that are not yet resolved. These issues need to be addressed in feasibility studies and conceptual designs for new facilities, or for determining how to modify existing facilities to meet expected emission limits. The ASPEN PLUS process simulation, with current and refined input assumptions and calculations, can be used to provide system performance information for decision-making, identifying the best options, estimating costs, reducing the potential for emission violations, providing information needed for waste flow analysis, incorporating new APCS technologies in existing designs, or performing facility design and permitting activities.

  2. Performance simulation of an absorption heat transformer operating with partially miscible mixtures

    Alonso, D.; Cachot, T.; Hornut, J.M. [LSGC-CNRS-ENSIC, Nancy (France); Univ. Henri Poincare, Nancy (France). IUT

    2002-07-08

    This paper studies the thermodynamic performance of a new absorption heat-transformer cycle in which the separation step is achieved by the cooling and settling of a partially miscible mixture at low temperature. This new cycle has been called an absorption-demixing heat transformer (ADHT) cycle. A numerical simulation code has been written and used to evaluate the temperature lift and thermal yield of two working pairs. High qualitative and quantitative performance was obtained, demonstrating the feasibility of, and industrial interest in, such a cycle. Moreover, a comparison of the simulation results with the performance actually obtained on an experimental ADHT has confirmed the pertinence of the simulation code. (author)

  3. Strategy Guideline: Partnering for High Performance Homes

    Prahl, D.

    2013-01-01

    High performance houses require a high degree of coordination and have significant interdependencies between various systems in order to perform properly, meet customer expectations, and minimize risks for the builder. Responsibility for the key performance attributes is shared across the project team and can be well coordinated through advanced partnering strategies. For high performance homes, traditional partnerships need to be matured to the next level and be expanded to all members of the project team including trades, suppliers, manufacturers, HERS raters, designers, architects, and building officials as appropriate. In an environment where the builder is the only source of communication between trades and consultants and where relationships are, in general, adversarial as opposed to cooperative, the chances of any one building system failing are greater. Furthermore, it is much harder for the builder to identify and capitalize on synergistic opportunities. Partnering can help bridge the cross-functional aspects of the systems approach and achieve performance-based criteria. Critical success factors for partnering include support from top management, mutual trust, effective and open communication, effective coordination around common goals, team building, appropriate use of an outside facilitator, a partnering charter, progress toward common goals, an effective problem-solving process, long-term commitment, continuous improvement, and a positive experience for all involved.

  4. Long-term bridge performance high priority bridge performance issues.

    2014-10-01

    Bridge performance is a multifaceted issue involving performance of materials and protective systems, performance of individual components of the bridge, and performance of the structural system as a whole. The Long-Term Bridge Performance (LTBP)...

  5. Simulation of a spintronic transistor: A study of its performance

    Pela, R.R.; Teles, L.K.

    2009-01-01

    We theoretically study the magnetic bipolar transistor and compare its performance with that of the conventional bipolar transistor. We present not only simulation results for the characteristic curves but also other relevant parameters related to its performance, such as the current amplification factor, the open-loop gain, the hybrid parameters, and the cutoff frequency. We note that the spin-charge coupling introduces new phenomena that enrich the functional characteristics of the magnetic bipolar transistor. Among other things, it has an adjustable band structure, which may be modified during device operation, and it exhibits the already known spin-voltaic effect. On the other hand, we observed that a large g-factor is necessary for the magnetic field B to have an appreciable influence on the transistor. Nevertheless, we consider the magnetic bipolar transistor a promising device for spintronic applications.

  6. Validated High Performance Liquid Chromatography Method for ...

    Purpose: To develop a simple, rapid and sensitive high performance liquid chromatography (HPLC) method for the determination of cefadroxil monohydrate in human plasma. Methods: Shimadzu HPLC with LC solution software was used with a Waters Spherisorb C18 (5 μm, 150 mm × 4.5 mm) column. The mobile phase ...

  7. An Introduction to High Performance Fortran

    John Merlin

    1995-01-01

    High Performance Fortran (HPF) is an informal standard for extensions to Fortran 90 to assist its implementation on parallel architectures, particularly for data-parallel computation. Among other things, it includes directives for specifying data distribution across multiple memories, and concurrent execution features. This article provides a tutorial introduction to the main features of HPF.

  8. High Performance Electronics on Flexible Silicon

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits which include metal-oxide-semiconductor field-effect-transistors, the first demonstration of flexible Fin-field-effect-transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high performance electronics using low cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in depth studies on electrical, mechanical, and thermal properties of the fabricated devices.

  9. Debugging a high performance computing program

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  10. Technology Leadership in Malaysia's High Performance School

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to the high performance school (HPS) headmaster as well. The HPS excel in all aspects of education. In this study, the researcher is interested in examining the role of the headmaster as a technology leader through interviews with three headmasters of high…

  11. Toward High Performance in Industrial Refrigeration Systems

    Thybo, C.; Izadi-Zamanabadi, Roozbeh; Niemann, H.

    2002-01-01

    Achieving high performance in complex industrial systems requires information manipulation at different system levels. The paper shows how different models of the same subsystems, but using different quality of information/data, are used for fault diagnosis as well as robust control design......

  13. Validated high performance liquid chromatographic (HPLC) method ...

    2010-02-22

    Feb 22, 2010 ... specific and accurate high performance liquid chromatographic method for determination of ZER in micro-volumes ... tional medicine as a cure for swelling, sores, loss of appetite and ... Receptor Activator for Nuclear Factor κ B Ligand .... The effect of ... be suitable for preclinical pharmacokinetic studies. The.

  15. Project materials [Commercial High Performance Buildings Project]

    None

    2001-01-01

    The Consortium for High Performance Buildings (ChiPB) is an outgrowth of DOE's Commercial Whole Buildings Roadmapping initiatives. It is a team-driven public/private partnership that seeks to enable and demonstrate the benefits of buildings that are designed, built, and operated to be energy efficient, environmentally sustainable, of superior quality, and cost effective.

  16. High performance structural ceramics for nuclear industry

    Pujari, Vimal K.; Faker, Paul

    2006-01-01

    A family of Saint-Gobain structural ceramic materials and products produced by its High Performance Refractory Division is described. Over the last fifty years or so, Saint-Gobain has been a leader in developing novel non-oxide-ceramic-based materials, processes, and products for applications in the nuclear, chemical, automotive, defense, and mining industries.

  17. A new high performance current transducer

    Tang Lijun; Lu Songlin; Li Deming

    2003-01-01

    A DC to 100 kHz current transducer has been developed using a new technique based on the zero-flux detection principle. The new current transducer offers high performance, its magnetic core need not be selected very stringently, and it is easy to manufacture.

  18. Physiological responses and performance in a simulated trampoline gymnastics competition in elite male gymnasts

    Jensen, Peter; Scott, Suzanne; Krustrup, Peter

    2013-01-01

    Physiological responses and performance were examined during and after a simulated trampoline competition (STC). Fifteen elite trampoline gymnasts participated, of which eight completed two routines (EX1 and EX2) and a competition final (EX3). Trampoline-specific activities were...... gymnastic competition includes a high number of repeated explosive and energy demanding jumps, which impairs jump performance during and 24 h post-competition.

  19. Strategy Guideline. High Performance Residential Lighting

    Holton, J. [IBACOS, Inc., Pittsburgh, PA (United States)

    2012-02-01

    This report has been developed to provide a tool for the understanding and application of high performance lighting in the home. The strategies featured in this guide are drawn from recent advances in commercial lighting for application to typical spaces found in residential buildings. This guide offers strategies to greatly reduce lighting energy use through the application of high quality fluorescent and light emitting diode (LED) technologies. It is important to note that these strategies not only save energy in the home but also serve to satisfy the homeowner’s expectations for high quality lighting.

  20. Instrument performance and simulation verification of the POLAR detector

    Kole, M.; Li, Z. H.; Produit, N.; Tymieniecka, T.; Zhang, J.; Zwolinska, A.; Bao, T. W.; Bernasconi, T.; Cadoux, F.; Feng, M. Z.; Gauvin, N.; Hajdas, W.; Kong, S. W.; Li, H. C.; Li, L.; Liu, X.; Marcinkowski, R.; Orsi, S.; Pohl, M.; Rybka, D.; Sun, J. C.; Song, L. M.; Szabelski, J.; Wang, R. J.; Wang, Y. H.; Wen, X.; Wu, B. B.; Wu, X.; Xiao, H. L.; Xiong, S. L.; Zhang, L.; Zhang, L. Y.; Zhang, S. N.; Zhang, X. F.; Zhang, Y. J.; Zhao, Y.

    2017-11-01

    POLAR is a new satellite-borne detector aiming to measure the polarization of an unprecedented number of Gamma-Ray Bursts in the 50-500 keV energy range. The instrument, launched on board the Chinese Tiangong-2 space laboratory on the 15th of September 2016, is designed to measure the polarization of the hard X-ray flux by measuring the distribution of the azimuthal scattering angles of the incoming photons. A detailed understanding of the polarimeter, and specifically of the systematic effects induced by the instrument's non-uniformity, is required for this purpose. In order to study the instrument's response to polarization, POLAR underwent a beam test at the European Synchrotron Radiation Facility in France. In this paper both the beam test and the instrument performance are described. This is followed by an overview of the Monte Carlo simulation tools developed for the instrument. Finally, a comparison of the measured and simulated instrument performance is provided and the instrument response to polarization is presented.

  1. Flight simulation program for high altitude long endurance unmanned vehicle; Kokodo mujinki no hiko simulation program

    Suzuki, H.; Hashidate, M. [National Aerospace Laboratory, Tokyo (Japan)

    1995-11-01

    At an altitude of about 20 km, the atmosphere is too thin for conventional aircraft, yet the air resistance is too great for satellites. In recent years, attention has been drawn to a high-altitude long-endurance unmanned vehicle that flies at this altitude for long periods to serve as a radio-wave relaying base and to perform traffic control. A flight simulation program was therefore developed to evaluate and discuss guidance and control laws for such a high-altitude unmanned vehicle. Equations of motion were derived for three-dimensional six-degree-of-freedom and three-degree-of-freedom motion. The aerodynamic characteristics of an unmanned vehicle with a rectenna wing were estimated, and the winds that the vehicle is expected to encounter at an altitude of 20 km were formulated from past research results. Focusing on motion in the horizontal plane, a guidance law that follows a given path was proposed. A flight simulation was carried out, giving the prospect that the unmanned vehicle can be kept within a limited airspace even when it encounters a relatively strong wind. 18 refs., 20 figs., 1 tab.
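
    The paper's actual guidance law is not reproduced in the abstract, so the Python sketch below shows only a generic horizontal-plane path-following law of the kind described: the commanded heading blends the path direction with a correction proportional to the signed cross-track error. The gain, airspeed, and wind values are invented for the example.

        import numpy as np

        def cross_track_guidance(pos, wp_a, wp_b, k=0.8):
            """Command a heading [rad] that steers toward the segment wp_a -> wp_b."""
            path_dir = (wp_b - wp_a) / np.linalg.norm(wp_b - wp_a)
            rel = pos - wp_a
            # Signed cross-track error: positive when the vehicle is left of the path.
            e = path_dir[0] * rel[1] - path_dir[1] * rel[0]
            path_heading = np.arctan2(path_dir[1], path_dir[0])
            return path_heading - np.arctan(k * e)

        # One step of a point-mass simulation with invented airspeed and steady wind.
        pos = np.array([100.0, 50.0])                       # 50 m left of the path
        psi = cross_track_guidance(pos, np.array([0.0, 0.0]), np.array([1000.0, 0.0]))
        wind = np.array([-5.0, 3.0])                        # hypothetical wind [m/s]
        vel = 20.0 * np.array([np.cos(psi), np.sin(psi)]) + wind
        print("commanded heading [deg]:", np.degrees(psi))
        print("ground velocity [m/s]:", vel)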

  2. Architecting Web Sites for High Performance

    Arun Iyengar

    2002-01-01

    Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

  3. Wireless network simulation - Your window on future network performance

    Fledderus, E.

    2005-01-01

    The paper describes three relevant perspectives on current wireless simulation practices. In order to obtain the key challenges for future network simulations, the characteristics of "beyond 3G" networks are described, including their impact on simulation.

  4. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  5. High performance anode for advanced Li batteries

    Lake, Carla [Applied Sciences, Inc., Cedarville, OH (United States)

    2015-11-02

    The overall objective of this Phase I SBIR effort was to advance the manufacturing technology for ASI's Si-CNF high-performance anode by creating a framework for large volume production and utilization of low-cost Si-coated carbon nanofibers (Si-CNF) for the battery industry. This project explores the use of nano-structured silicon deposited on a nano-scale carbon filament to achieve the benefits of high cycle life and high charge capacity without the consequent capacity fading or failure that results from stress-induced fracturing of the Si particles and their decoupling from the electrode. ASI's patented coating process distinguishes itself from others in that it is highly reproducible, readily scalable, and results in a Si-CNF composite structure containing 25-30% silicon, with a compositionally graded Si-CNF interface that significantly improves cycling stability and enhances the adhesion of silicon to the carbon fiber support. In Phase I, the team demonstrated that production of the Si-CNF anode material can successfully be transitioned from a static bench-scale reactor into a fluidized bed reactor. In addition, ASI made significant progress in the development of low cost, quick testing methods which can be performed on silicon-coated CNFs as a means of quality control. To date, weight change, density, and cycling performance have been the key metrics used to validate the high performance anode material. Under this effort, ASI made strides to establish a quality control protocol for the large volume production of Si-CNFs and has identified several key technical thrusts for future work. Using the results of this Phase I effort as a foundation, ASI has defined a path forward to commercialize and deliver high volume and low-cost production of Si-CNF material for anodes in Li-ion batteries.

  6. NINJA: Java for High Performance Numerical Computing

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  7. Interprofessional education in pharmacology using high-fidelity simulation.

    Meyer, Brittney A; Seefeldt, Teresa M; Ngorsuraches, Surachat; Hendrickx, Lori D; Lubeck, Paula M; Farver, Debra K; Heins, Jodi R

    2017-11-01

    This study examined the feasibility of an interprofessional high-fidelity pharmacology simulation and its impact on pharmacy and nursing students' perceptions of interprofessionalism and pharmacology knowledge. Pharmacy and nursing students participated in a pharmacology simulation using a high-fidelity patient simulator. Faculty-facilitated debriefing included discussion of the case and collaboration. To determine the impact of the activity on students' perceptions of interprofessionalism and their ability to apply pharmacology knowledge, surveys were administered to students before and after the simulation. Attitudes Toward Health Care Teams scale (ATHCT) scores improved from 4.55 to 4.72 on a scale of 1-6 (p = 0.005). Almost all (over 90%) of the students stated their pharmacology knowledge and their ability to apply that knowledge improved following the simulation. A simulation in pharmacology is feasible and favorably affected students' interprofessionalism and pharmacology knowledge perceptions. Pharmacology is a core science course required by multiple health professions in early program curricula, making it favorable for incorporation of interprofessional learning experiences. However, reports of high-fidelity interprofessional simulation in pharmacology courses are limited. This manuscript contributes to the literature in the field of interprofessional education by demonstrating that an interprofessional simulation in pharmacology is feasible and can favorably affect students' perceptions of interprofessionalism. This manuscript provides an example of a pharmacology interprofessional simulation that faculty in other programs can use to build similar educational activities. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. High-speed LWR transients simulation for optimizing emergency response

    Wulff, W.; Cheng, H.S.; Lekach, S.V.; Mallen, A.N.; Stritar, A.

    1984-01-01

    The purpose of computer-assisted emergency response in nuclear power plants, and the requirements for achieving such a response, are presented. An important requirement is the attainment of realistic high-speed plant simulations at the reactor site. Currently pursued development programs for plant simulations are reviewed. Five modeling principles are established and a criterion is presented for selecting numerical procedures and efficient computer hardware to achieve high-speed simulations. A newly developed technology for high-speed power plant simulation is described and results are presented. It is shown that simulation speeds ten times greater than real-time process-speeds are possible, and that plant instrumentation can be made part of the computational loop in a small, on-site minicomputer. Additional technical issues are presented which must still be resolved before the newly developed technology can be implemented in a nuclear power plant

  9. Architecture of a highly modular lighting simulation system

    CERN. Geneva

    2014-01-01

    This talk will discuss the challenges in designing a highly modular, parallel, heterogeneous rendering system, and their solutions. It will review how different lighting simulation algorithms can be combined to work together within a unified framework. We will discuss how the system can be instrumented to collect data about the algorithms' runtime performance. The talk includes an overview of how the collected data can be visualised in the computational domain of the lighting algorithms and used for visual debugging and analysis. About the speaker: Hristo Lesev has been working in the software industry for the last ten years. He has taken part in delivering a number of desktop and mobile applications. Computer graphics programming is Hristo's main passion, and he has experience writing extensions for 3D software such as 3DS Max, Maya, Blender, SketchUp, and V-Ray. Since 2006 Hristo has taught photorealistic ray tracing in the Faculty of Mathematics and Informatics at the Paisii Hilendarski...

  10. High Performance Systolic Array Core Architecture Design for DNA Sequencer

    Saiful Nurdin Dayana

    2018-01-01

    This paper presents a high-performance systolic array (SA) core architecture design for a Deoxyribonucleic Acid (DNA) sequencer. The core implements the affine gap penalty score Smith-Waterman (SW) algorithm. This time-consuming local alignment algorithm guarantees optimal alignment between DNA sequences, but it requires quadratic computation time when performed on standard desktop computers. The use of a linear SA decreases the time complexity from quadratic to linear. In addition, with the exponential growth of DNA databases, the SA architecture is used to overcome the timing issue. In this work, the SW algorithm has been captured in Verilog Hardware Description Language (HDL) and simulated using the Xilinx ISIM simulator. The proposed design has been implemented on a Xilinx Virtex-6 Field Programmable Gate Array (FPGA), achieving a 90% reduction in core area.
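
    The affine-gap recurrence that the SA core parallelizes is compact enough to sketch in software. The following Python fragment is a minimal illustration of the Smith-Waterman (Gotoh) scoring recurrence; the scoring parameters are illustrative assumptions rather than values from the paper.

        # Minimal affine-gap Smith-Waterman (Gotoh) scoring sketch.
        # Scoring parameters are illustrative, not taken from the paper.
        def smith_waterman_affine(a, b, match=2, mismatch=-1,
                                  gap_open=-3, gap_extend=-1):
            n, m = len(a), len(b)
            H = [[0] * (m + 1) for _ in range(n + 1)]  # best local score at (i, j)
            E = [[0] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap in a
            F = [[0] * (m + 1) for _ in range(n + 1)]  # alignment ending in a gap in b
            best = 0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    E[i][j] = max(E[i][j - 1] + gap_extend, H[i][j - 1] + gap_open)
                    F[i][j] = max(F[i - 1][j] + gap_extend, H[i - 1][j] + gap_open)
                    H[i][j] = max(0, H[i - 1][j - 1] + s, E[i][j], F[i][j])
                    best = max(best, H[i][j])
            return best

        print(smith_waterman_affine("GATTACA", "GCATGCT"))  # optimal local score

    A linear systolic array maps one column of this table to each processing element, so all cells on an anti-diagonal update in parallel and the run time drops from O(nm) to O(n + m).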

  11. Development of high performance cladding materials

    Park, Jeong Yong; Jeong, Y. H.; Park, S. Y.

    2010-04-01

    The irradiation test for HANA claddings was conducted at the Halden research reactor, along with a series of evaluations of next-generation HANA claddings and their in-pile and out-of-pile performance tests. The sixth irradiation test has been completed successfully in the Halden research reactor. As a result, HANA claddings showed high performance, with corrosion resistance increased by 40% compared to Zircaloy-4. The high performance of HANA claddings in the Halden test has enabled a lead test rod program as the first step of the commercialization of HANA claddings. A database has been established for thermal and LOCA-related properties. It was confirmed from the thermal shock test that the integrity of HANA claddings was maintained over a wider region than required by the NRC criteria. The manufacturing process for strips was established in order to apply HANA alloys, which were originally developed for claddings, to spacer grids. 250 model alloys for the next-generation claddings were designed and manufactured in four rounds and used to select the preliminary candidate alloys for the next-generation claddings. The selected candidate alloys showed 50% better corrosion resistance and 20% improved high-temperature oxidation resistance compared to foreign advanced claddings. We also established a manufacturing condition controlling the performance of the dual-cooled claddings by changing the reduction rate in the cold working steps.

  12. A Linux Workstation for High Performance Graphics

    Geist, Robert; Westall, James

    2000-01-01

    The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC-class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC-class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. It should also offer NASA a means to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium- and lower-performance applications with generic, off-the-shelf components, and still maintaining compatibility between the two.

  13. The path toward HEP High Performance Computing

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high-performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  14. Solar power plant performance evaluation: simulation and experimental validation

    Natsheh, E M; Albarbar, A

    2012-01-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. The perturb and observe (P&O) algorithm is used to maximize the generated power through a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out on an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly averaged irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. The residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect any other factors that may degrade the performance of the PV panels, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
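
    The P&O loop and the residual check are simple enough to sketch. Below is a hedged Python illustration of both; the perturbation step size and the function names are hypothetical placeholders, while the 1.7 kW healthy threshold is the value quoted above.

        # Hedged sketch of perturb-and-observe (P&O) MPPT and the residual-based
        # degradation check. Step size and function names are hypothetical.
        def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
            """Return the next reference voltage (V) for the converter."""
            if p > p_prev:                                    # last step raised power:
                return v + step if v > v_prev else v - step   # keep perturbing that way
            return v - step if v > v_prev else v + step       # otherwise reverse

        def degradation_alarm(p_model_kw, p_measured_kw, threshold_kw=1.7):
            # Residual between model-predicted and measured output power; values
            # above the healthy threshold flag snow cover, shading, or dirt.
            return (p_model_kw - p_measured_kw) > threshold_kw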

  16. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance characteristics of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of the excess bias voltage are extracted from Geiger-mode TCAD simulation, so the behavioral model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and runs on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high accuracy of the simulation.
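
    The analytical expression itself is not reproduced in the abstract, but a commonly used first-order estimate ties after-pulsing probability to the number of carriers trapped per avalanche and an exponential de-trapping law. The Python sketch below shows only that textbook relation; every parameter value is an assumption for illustration, not a figure from the paper.

        import math

        # Textbook first-order after-pulsing estimate: carriers trapped during an
        # avalanche are released exponentially with time constant tau_detrap, and
        # any carrier released after the hold-off time can retrigger the SPAD.
        def afterpulse_probability(n_trapped, t_hold, tau_detrap, p_trigger):
            """P_ap ~ N_trapped * exp(-t_hold / tau_detrap) * P_avalanche."""
            return n_trapped * math.exp(-t_hold / tau_detrap) * p_trigger

        # Illustrative values only: 0.1 trapped carriers per avalanche, 50 ns
        # hold-off, 30 ns de-trapping constant, 0.8 avalanche-trigger probability.
        print(afterpulse_probability(0.1, 50e-9, 30e-9, 0.8))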

  17. Comparison between the performance of some KEK-klystrons and simulation results

    Fukuda, Shigeki [National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan)

    1997-04-01

    Recent developments in various klystron simulation codes have enabled us to design klystrons realistically. This paper presents various simulation results using the FCI code and the performance of tubes manufactured based on this code. The upgrade of a 30-MW S-band klystron and the development of a 50-MW S-band klystron for the KEKB project are successful examples based on FCI-code predictions. Mass production of these tubes has already started. On the other hand, a discrepancy has been found between the FCI simulation results and the performance of real tubes. In some cases, the simulation predicts high efficiency, while the manufactured tubes show the usual, or even a lower, value of efficiency. One possible cause may be a data mismatch between the electron-gun simulation and the input data set of the FCI code for the gun region. This kind of discrepancy has been observed in 30-MW S-band pulsed tubes, sub-booster pulsed tubes and L-band high-duty pulsed klystrons. Sometimes, JPNDSK (a one-dimensional disk-model code) gives similar results. Some examples using the FCI code are given in this article. An Arsenal-MSU code could be applied to the 50-MW klystron under a collaboration with Moscow State University; good agreement has been found between the predictions of the code and the tube's performance. (author)

  18. Learning through simulated independent practice leads to better future performance in a simulated crisis than learning through simulated supervised practice.

    Goldberg, A; Silverman, E; Samuelson, S; Katz, D; Lin, H M; Levine, A; DeMaria, S

    2015-05-01

    Anaesthetists may fail to recognize and manage certain rare intraoperative events. Simulation has been shown to be an effective educational adjunct to typical operating room-based education in training for these events. It is yet unclear, however, why simulation has any benefit. We hypothesized that learners who are allowed to manage a scenario independently, and allowed to fail, thus causing simulated morbidity, will consequently perform better when re-exposed to a similar scenario. Using a randomized, controlled, observer-blinded design, 24 first-year residents were exposed to an oxygen pipeline contamination scenario, either where patient harm occurred (independent group, n=12) or where a simulated attending anaesthetist intervened to prevent harm (supervised group, n=12). Residents were brought back 6 months later and exposed to a different scenario (pipeline contamination) with the same end point. Participants' proper treatment, time to diagnosis, and non-technical skills (measured using the Anaesthetists' Non-Technical Skills Checklist, ANTS) were assessed. No participants provided proper treatment in the initial exposure. In the repeat encounter 6 months later, 67% in the independent group vs 17% in the supervised group resumed adequate oxygen delivery (P=0.013). The independent group also had better ANTS scores [median (interquartile range): 42.3 (31.5-53.1) vs 31.3 (21.6-41), P=0.015]. There was no difference in time to treatment when proper management was provided [602 (490-820) vs 610 (420-800) s, P=0.79]. Allowing residents to practise independently in the simulation laboratory, and consequently allowing them to fail, can be an important part of simulation-based learning. This is not feasible in real clinical practice but appears to have improved resident performance in this study. The purposeful use of independent practice and its potentially negative outcomes thus sets simulation-based learning apart from traditional operating room learning.

  19. High Performance Commercial Fenestration Framing Systems

    Mike Manteghi; Sneh Kumar; Joshua Early; Bhaskar Adusumalli

    2010-01-31

    A major objective of the U.S. Department of Energy is to have a zero-energy commercial building by the year 2025. Windows have a major influence on the energy performance of the building envelope, as they control over 55% of the building energy load, and they represent one important area where technologies can be developed to save energy. Aluminum framing systems are used in over 80% of commercial fenestration products (i.e. windows, curtain walls, store fronts, etc.). Aluminum framing systems are often required in commercial buildings because of their inherently good structural properties and the long service life required of commercial and architectural frames. At the same time, they are lightweight and durable, require very little maintenance, and offer design flexibility. An additional benefit of aluminum framing systems is their relatively low cost and easy manufacturability. Aluminum, being an easily recyclable material, also offers sustainability benefits. From an energy-efficiency point of view, however, aluminum frames have lower thermal performance due to the very high thermal conductivity of aluminum. Fenestration systems constructed of aluminum alloys are therefore less effective barriers to energy transfer (heat loss or gain). Despite the lower energy performance, aluminum is the material of choice for commercial framing systems and dominates the commercial/architectural fenestration market for the reasons mentioned above. In addition, there is no other cost-effective and energy-efficient replacement material available to take the place of aluminum in the commercial/architectural market. Hence it is imperative to improve the performance of aluminum framing systems in order to improve the energy performance of commercial fenestration systems, and in turn reduce the energy consumption of commercial buildings and achieve a zero-energy building by 2025. The objective of this project was to develop high performance, energy efficient commercial

  20. Fracture toughness of ultra high performance concrete by flexural performance

    Manolova Emanuela

    2016-01-01

    This paper describes the fracture toughness of an innovative structural material, Ultra High Performance Concrete (UHPC), evaluated by flexural performance. To determine the material behaviour under static loading, adapted standard test methods for the flexural performance of fiber-reinforced concrete (ASTM C 1609 and ASTM C 1018) are used. Fracture toughness is estimated by various deformation parameters derived from the load-deflection curve, obtained by testing a simply supported beam under third-point loading using a servo-controlled testing system. This method is used to estimate the contribution of the embedded fiber reinforcement to the improvement of the fracture behaviour of UHPC, through changes in crack-resistance capacity, fracture toughness and energy absorption capacity via various mechanisms. The position of the first crack has been determined based on the P-δ (load-deflection) response and the P-ε (load-longitudinal deformation in the tensile zone) response, which are used for calculation of the two toughness indices I5 and I10. The combination of steel fibres of different dimensions leads to a composite that simultaneously has increased crack resistance, delayed first-crack formation, ductility and post-peak residual strength.
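
    Under ASTM C 1018 the toughness indices are ratios of areas under the load-deflection curve: I5 divides the area up to 3.0 times the first-crack deflection by the area up to first crack, and I10 uses 5.5 times that deflection. A short numerical sketch follows; the arrays stand in for measured P-δ data and are not results from the paper.

        import numpy as np

        # ASTM C 1018 toughness indices from a measured load-deflection curve.
        # deflection/load arrays are placeholders for third-point bending data;
        # delta_cr is the first-crack deflection read off the P-delta response.
        def area_to(deflection, load, limit):
            mask = deflection <= limit
            return np.trapz(load[mask], deflection[mask])

        def toughness_indices(deflection, load, delta_cr):
            a_crack = area_to(deflection, load, delta_cr)   # energy to first crack
            i5 = area_to(deflection, load, 3.0 * delta_cr) / a_crack
            i10 = area_to(deflection, load, 5.5 * delta_cr) / a_crack
            return i5, i10

        # Synthetic elastic-plastic curve: indices come out near 5 and 10.
        d = np.linspace(0.0, 2.0, 400)          # deflection (mm)
        p = np.where(d < 0.3, 100 * d, 30.0)    # idealized load (kN)
        print(toughness_indices(d, p, delta_cr=0.3))

    A perfectly brittle response gives I5 = I10 = 1, while fibre bridging that sustains load beyond first crack pushes the indices toward 5 and 10 for ideally elastic-plastic behaviour.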

  1. High-Performance Tiled WMS and KML Web Server

    Plesea, Lucian

    2007-01-01

    This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
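
    The speed of a tiled map server comes from the fact that a request whose bounding box is aligned to the published grid can be answered by an index lookup instead of rendering. A hedged Python sketch of that mapping follows; the grid origin, tile width, and pyramid layout are hypothetical, not the module's actual configuration.

        # Sketch: map a grid-aligned WMS request onto tile indices in a
        # power-of-two tile pyramid. All grid parameters are hypothetical.
        TILE_DEG_L0 = 360.0 / 512           # tile width in degrees at level 0

        def tile_index(lon, lat, level):
            """Column/row of the stored tile containing (lon, lat) at a level."""
            size = TILE_DEG_L0 / (2 ** level)    # tiles halve in size per level
            col = int((lon + 180.0) / size)
            row = int((90.0 - lat) / size)       # rows counted down from north
            return col, row

        # A compliant request maps straight to one stored tile, so the server
        # only has to stream that file (or point a KML super-overlay at it).
        print(tile_index(-122.4, 37.8, 3))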

  2. Near peripheral motion contrast threshold predicts older drivers' simulator performance.

    Henderson, Steven; Gagnon, Sylvain; Collin, Charles; Tabone, Ricardo; Stinchcombe, Arne

    2013-01-01

    Our group has previously demonstrated that peripheral motion contrast threshold (PMCT) is significantly associated with the self-reported accident risk of older drivers (questionnaire assessment), and with Useful Field of View(®) subtest 2 (UFOV2). It had not been shown, however, that PMCT is significantly associated with driving performance. Using the method of descending limits (spatial two-alternative forced choice), we assessed motion contrast thresholds of 28 young participants (25-45) and 21 older drivers (63-86) for 0.4 cycle/degree drifting Gabor stimuli at 15° eccentricity, and examined whether threshold was related to performance on a simulated on-road test and to a measure of visual attention (UFOV(®) subtests 2 and 3). PMCTs of younger participants were significantly lower than those of older participants. PMCT and UFOV2 significantly predicted driving examiners' scores of older drivers' simulator performance, as well as the number of crashes. Within the older group, PMCT correlated significantly with UFOV2, UFOV3, and age. Within the younger group, PMCT was not significantly related to either UFOV(®) scores or age. Partial correlations showed that the substantial association between PMCT and UFOV2 was not age-related (within the older driver group), that PMCT and UFOV2 tap a common visual function, and that PMCT assesses a component not captured by UFOV2. PMCT is potentially a useful assessment tool for predicting the accident risk of older drivers, and for informing efforts to develop effective countermeasures that remediate this functional deficit as much as possible.
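
    In a descending-limits procedure the stimulus starts well above threshold and contrast is reduced after correct responses until the observer can no longer report which of two locations contained the drifting Gabor. The Python sketch below illustrates the general logic only; the step factor, stopping rule, and simulated observer are assumptions, not the study's protocol.

        import random

        # Hedged sketch of a descending-limits, two-alternative forced-choice
        # (2AFC) contrast staircase; observer() stands in for a real trial.
        def observer(contrast, true_threshold=0.02):
            # Simulated observer: sees the target above threshold, guesses below.
            return contrast >= true_threshold or random.random() < 0.5

        def descending_limits(start=0.5, factor=0.8, max_misses=2):
            contrast, misses = start, 0
            while misses < max_misses:
                if observer(contrast):
                    contrast *= factor       # correct: lower the contrast
                    misses = 0
                else:
                    misses += 1              # wrong: another miss at this level
            return contrast                  # estimated motion contrast threshold

        print(descending_limits())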

  3. Shock Mechanism Analysis and Simulation of High-Power Hydraulic Shock Wave Simulator

    Xiaoqiu Xu

    2017-01-01

    The simulation of a regular shock wave (e.g., half-sine) can be achieved by a traditional rubber shock simulator, but the practical high-power shock wave, characterized by a steep pre-peak and gentle post-peak, is hard to realize with the same device. To tackle this disadvantage, a novel high-power hydraulic shock wave simulator based on the live-firing muzzle shock principle is proposed in the current work. The influence of the typical shock characteristic parameters on the shock force wave is investigated via both theoretical deduction and software simulation. Comparing the obtained data with the simulation results, it can be concluded that the developed hydraulic shock wave simulator can be applied to simulate the real conditions of the shocked system. Further, a similarity evaluation of the shock wave simulation is carried out based on the curvature distance, and the results show that the simulation method is reasonable and that structural optimization based on software simulation is also beneficial to increased efficiency. Overall, the combination of theoretical analysis and simulation is a comprehensive approach to the design and structural optimization of the recoil system in the development of an artillery recoil tester.

  4. Design and experimentally measure a high performance metamaterial filter

    Xu, Ya-wen; Xu, Jing-cheng

    2018-03-01

    The metamaterial filter is a promising optoelectronic device. In this paper, a metal/dielectric/metal (M/D/M) structure metamaterial filter is simulated and measured. Simulated results indicate that a perfect impedance-matching condition between the metamaterial filter and free space gives rise to the transmission band. Measured results show that the proposed metamaterial filter achieves high transmission for both TM and TE polarizations, and that high transmission is maintained even at incident angles up to 45°. Further measurements show that the transmission band can be broadened, and its central frequency adjusted, by optimizing the structural parameters. The physical mechanism behind the central-frequency shift is explained by establishing an equivalent resonant circuit model.
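
    In an equivalent resonant circuit model of an M/D/M filter, the transmission band centres on the LC resonance, so geometry changes that alter the effective inductance or capacitance shift the central frequency. A small illustrative calculation follows; the L and C values are assumptions, not parameters fitted to the measured device.

        import math

        # Equivalent-circuit view: the transmission band centres on the LC
        # resonance, f0 = 1 / (2*pi*sqrt(L*C)). Values are illustrative only.
        def resonant_frequency(l_henry, c_farad):
            return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

        f0 = resonant_frequency(50e-12, 50e-15)   # 50 pH, 50 fF
        print(f"{f0 / 1e9:.1f} GHz")              # ~100.7 GHz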

  5. High-performance scientific computing in the cloud

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  6. HIGH PERFORMANCE CERIA BASED OXYGEN MEMBRANE

    2014-01-01

    The invention describes a new class of highly stable mixed conducting materials based on acceptor-doped cerium oxide (CeO2-δ) in which the limiting electronic conductivity is significantly enhanced by co-doping with a second element or co-dopant, such as Nb, W and Zn, so that cerium and the co-dopant have an ionic size ratio between 0.5 and 1. These materials can thereby improve the performance and extend the range of operating conditions of oxygen permeation membranes (OPM) for different high-temperature membrane reactor applications. The invention also relates to the manufacturing of supported...

  7. Playa: High-Performance Programmable Linear Algebra

    Victoria E. Howle

    2012-01-01

    This paper introduces Playa, a high-level user interface layer for composing algorithms for complex multiphysics problems out of objects from other Trilinos packages. Among other features, Playa provides very high-performance overloaded operators implemented through an expression template mechanism. In this paper, we give an overview of the central Playa objects from a user's perspective, show application to a sequence of increasingly complex solver algorithms, provide timing results for Playa's overloaded operators and other functions, and briefly survey some of the implementation issues involved.

  8. Optimizing the design of very high power, high performance converters

    Edwards, R.J.; Tiagha, E.A.; Ganetis, G.; Nawrocky, R.J.

    1980-01-01

    This paper describes how various technologies are used to achieve the desired performance in a high-current magnet power converter system. It is hoped that the discussion of the design approaches taken will be applicable to other power supply systems where stringent requirements in stability, accuracy and reliability must be met.

  9. Simulation of microdamage in ceramics deformed under high confinement

    Zhang Dongmei; Feng Ruqiang

    2004-01-01

    A polycrystalline ceramic may display high strength under dynamic compression but fail catastrophically during load reversal to tension. One plausible mechanism is that heterogeneous plasticity in some of the crystals under compression induces microdamage during load reversal. To examine this possibility quantitatively, we developed a computational method in which the polycrystalline microstructure is realistically simulated using Voronoi crystals with a grain-boundary layer. Both anisotropic elasticity and plastic slip on limited crystallographic planes are considered in the crystal modeling. The grain-boundary material is treated as an isotropic glassy solid, which has pressure-dependent shear strength under compression and fractures in Mode I when the threshold is reached. The structural and material models have been implemented in the ABAQUS/Explicit code. Model simulations have been performed to analyze the intragranular microplasticity, intergranular microdamage, and their interactions in polycrystalline α-6H silicon carbide subjected to dynamic uniaxial-strain compression and then load reversal to tension. It is found that microplasticity is more favorable than intergranular shear damage during compression. However, both the microplasticity-induced heterogeneity and the grain-boundary damage strongly affect microcracking during load reversal, which leads to fragmentation or spallation depending on the level of compression. The significance of these findings is discussed.
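
    The microstructure generation step described above is easy to illustrate: random seed points are tessellated into Voronoi cells that stand in for the crystals, with the shared ridges playing the role of the grain-boundary layer. A minimal 2D Python sketch using SciPy follows; the seed count and domain are arbitrary, and the paper's model additionally assigns each crystal anisotropic elastic-plastic behaviour and the boundaries a glassy solid.

        import numpy as np
        from scipy.spatial import Voronoi

        # Minimal 2D stand-in for the Voronoi-crystal microstructure: each
        # region is one grain; ridges between regions are grain boundaries.
        rng = np.random.default_rng(seed=0)
        seeds = rng.random((50, 2))              # 50 grain nuclei in a unit square
        vor = Voronoi(seeds)

        # ridge_points pairs the seeds whose cells share a boundary facet,
        # i.e. the places where the glassy grain-boundary layer would sit.
        print(len(seeds), "grains,", len(vor.ridge_points), "boundary facets")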

  10. Numerical Simulation of Oil Jet Lubrication for High Speed Gears

    Tommaso Fondelli

    2015-01-01

    The Geared Turbofan technology is one of the most promising engine configurations for significantly reducing specific fuel consumption. In this architecture, a power epicyclic gearbox is interposed between the fan and the low-pressure spool. Thanks to the gearbox, the fan and the low-pressure spool can turn at different speeds, leading to a higher engine bypass ratio. The gearbox efficiency therefore becomes a key parameter for such technology. Further improvement of efficiency can be achieved by developing a physical understanding of the fluid-dynamic losses within the transmission system. These losses are mainly related to viscous effects and are directly connected to the lubrication method. In this work, the oil injection losses have been studied by means of CFD simulations. A numerical study of a single oil jet impinging on a single high-speed gear has been carried out using the VOF method. The aim of this analysis is to evaluate the resistant torque due to oil jet lubrication, correlating the torque data with the oil-gear interaction phases. URANS calculations have been performed using an adaptive meshing approach as a way of significantly reducing the simulation costs. A global sensitivity analysis of the adopted models has been carried out and a numerical setup has been defined.

  11. Robust High Performance Aquaporin based Biomimetic Membranes

    Helix Nielsen, Claus; Zhao, Yichun; Qiu, C.

    2013-01-01

    Aquaporins are water channel proteins with high water permeability and solute rejection, which makes them promising for preparing high-performance biomimetic membranes. Despite the growing interest in aquaporin-based biomimetic membranes (ABMs), it is challenging to produce robust and defect-free ABMs on top of a support membrane. Control membranes, either without aquaporins or with the inactive AqpZ R189A mutant aquaporin, served as controls. The separation performance of the membranes was evaluated by cross-flow forward osmosis (FO) and reverse osmosis (RO) tests. In RO the ABM achieved a water permeability of ~4 L/(m2 h bar) with a NaCl rejection > 97% at an applied hydraulic pressure of 5 bar. The water permeability was ~40% higher compared to a commercial brackish water RO membrane (BW30) and an order of magnitude higher compared to a seawater RO membrane (SW30HR). In FO, the ABMs had > 90...

  12. Evaluation of high-performance computing software

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and to HPC-Netlib, a high-performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  13. High performance cloud auditing and applications

    Choi, Baek-Young; Song, Sejun

    2014-01-01

    This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques with focus on technical aspects and feasibility of auditing issues in federated cloud computing environments.   In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by the United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institute of Health (NIH). All chapters were partially suppor...

  14. Monitoring SLAC High Performance UNIX Computing Systems

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high-performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high-performance computing systems that retrieves specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage in both databases are made using gnuplot and Ganglia's real-time graphical user interface.
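
    A script-driven bridge of this kind amounts to parsing Ganglia's metric stream and inserting rows keyed by host, metric, and timestamp. The Python sketch below uses the mysql-connector-python client; the schema, credentials, and hostnames are hypothetical placeholders, not the ones used at SLAC.

        import time
        import mysql.connector   # pip install mysql-connector-python

        # Hypothetical schema: one row per (host, metric, timestamp) sample.
        conn = mysql.connector.connect(host="localhost", user="ganglia",
                                       password="secret", database="metrics")
        cur = conn.cursor()
        cur.execute("""CREATE TABLE IF NOT EXISTS samples (
                           host VARCHAR(64), metric VARCHAR(64),
                           value DOUBLE, ts INT,
                           PRIMARY KEY (host, metric, ts))""")

        def store_sample(host, metric, value):
            # Unlike a round-robin database, rows are never aged out or
            # averaged away, so the full-resolution history stays intact.
            cur.execute("REPLACE INTO samples (host, metric, value, ts) "
                        "VALUES (%s, %s, %s, %s)",
                        (host, metric, value, int(time.time())))
            conn.commit()

        store_sample("node01.example.org", "load_one", 0.42)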

  15. High performance parallel computers for science

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction.

  16. Toward a theory of high performance.

    Kirby, Julia

    2005-01-01

    What does it mean to be a high-performance company? The process of measuring relative performance across industries and eras, declaring top performers, and finding the common drivers of their success is such a difficult one that it might seem a fool's errand to attempt. In fact, no one did for the first thousand or so years of business history. The question didn't even occur to many scholars until Tom Peters and Bob Waterman released In Search of Excellence in 1982. Twenty-three years later, we've witnessed several more attempts--and, just maybe, we're getting closer to answers. In this reported piece, HBR senior editor Julia Kirby explores why it's so difficult to study high performance and how various research efforts--including those from John Kotter and Jim Heskett; Jim Collins and Jerry Porras; Bill Joyce, Nitin Nohria, and Bruce Roberson; and several others outlined in a summary chart--have attacked the problem. The challenge starts with deciding which companies to study closely. Are the stars the ones with the highest market caps, the ones with the greatest sales growth, or simply the ones that remain standing at the end of the game? (And when's the end of the game?) Each major study differs in how it defines success, which companies it therefore declares to be worthy of emulation, and the patterns of activity and attitude it finds in common among them. Yet, Kirby concludes, as each study's method incrementally solves problems others have faced, we are progressing toward a consensus theory of high performance.

  17. High Performance Interactive System Dynamics Visualization

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  18. High Performance Interactive System Dynamics Visualization

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  19. High-performance phase-field modeling

    Vignal, Philippe; Sarmiento, Adel; Cortes, Adriano Mauricio; Dalcin, L.; Collier, N.; Calo, Victor M.

    2015-01-01

    and phase-field crystal equation will be presented, which corroborate the theoretical findings, and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  20. AHPCRC - Army High Performance Computing Research Center

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network.