A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips
Directory of Open Access Journals (Sweden)
Guanyi Sun
2011-01-01
Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism (SPSIM). Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims to deliver high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed within a factor of 35 of actual hardware execution on a set of real-world applications. SPSIM incorporates effective timing models that achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, a level that could previously be attained only by cycle-accurate models.
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.
Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation
Directory of Open Access Journals (Sweden)
Xishan Wen
2017-04-01
Inrush current simulation plays an important role in many power system tasks, such as power transformer protection. However, the accuracy of inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A) theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed, and the results show that the entire curve is well characterized by the crest values of its first two cycles. With comprehensive consideration of both the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm obtains more reasonable J–A parameters, which improves simulation accuracy. Experimental results verify the efficiency of the proposed algorithm.
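The paper's key observation is that the inrush waveform is well characterized by the crest (peak) values of its first two cycles. As a rough illustration of extracting those features (this is not the authors' code; the decaying waveform, 50 Hz frequency, and time constant below are made-up stand-ins):

```python
import numpy as np

def first_two_crests(current, t, period):
    """Return the peak |current| within each of the first two fundamental cycles."""
    crests = []
    for k in range(2):
        mask = (t >= k * period) & (t < (k + 1) * period)
        crests.append(np.max(np.abs(current[mask])))
    return crests

# Synthetic decaying inrush-like waveform (illustrative only).
t = np.linspace(0.0, 0.1, 10_000)            # 0.1 s at 50 Hz -> 5 cycles
i = 800.0 * np.exp(-t / 0.05) * np.maximum(np.sin(2 * np.pi * 50 * t), 0.0)
c1, c2 = first_two_crests(i, t, period=0.02)
# The decay makes the second crest smaller than the first.
```

A parameter-estimation loop such as the paper's would then adjust the J–A parameters until the simulated crest pair matches the measured one.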
Simulation analysis for hyperbola locating accuracy
International Nuclear Information System (INIS)
Wang Changli; Liu Daizhi
2004-01-01
In a hyperbola-location system, the geometry of the detecting stations has an important influence on locating accuracy. This paper first simulates the hyperbola-location process on a computer, then analyzes the influence of station geometry on locating errors and gives the computer simulation results, and finally discusses the issues that require attention when selecting detecting stations. The conclusions are of practical use. (authors)
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units
Directory of Open Access Journals (Sweden)
Qingzhong Cai
2016-06-01
Inertial navigation systems (INSs) have been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors, and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method, and by at least 20% when the position accuracy of the atomic gyro INS reaches the level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
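The calibration above is built on a 51-state Kalman filter and smoother. As a generic sketch of the predict/update cycle such a filter relies on (the dimensions, matrices, and toy measurement sequence here are illustrative, not the paper's 51-state model):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1-D constant-state example: repeated noisy measurements shrink P.
F = H = np.eye(1)
Q, R = np.array([[1e-6]]), np.array([[1.0]])
x, P = np.zeros(1), np.eye(1)
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

With each update the covariance P contracts and the estimate moves toward the (noisy) measurements, which is the mechanism that lets the calibration filter pin down the error-state parameters.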
Impacts of land use/cover classification accuracy on regional climate simulations
Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.
2007-03-01
Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, even though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) for a range of simulated classification accuracies over a 3-month period. This study found that land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface had greater control of the atmosphere. This effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely reaches the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy is beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ = sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given. Also a brief discussion on the extension of the current mathematical framework to the
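The shape factor Γ described above is easy to compute directly from a triangle's vertices. A short check (Γ is the abstract's quantity; the vertex coordinates below are ours) confirms that an equilateral triangle attains the maximum 9/4 while a highly obtuse, nearly degenerate triangle drives Γ far below 1:

```python
import math

def shape_factor(a, b, c):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) for a triangle with vertices a, b, c."""
    def angle(p, q, r):
        # Interior angle at p, between edges p->q and p->r.
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
    angles = [angle(a, b, c), angle(b, a, c), angle(c, a, b)]
    return sum(math.sin(t) ** 2 for t in angles)

equilateral = shape_factor((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))  # -> 9/4
obtuse = shape_factor((0, 0), (1, 0), (0.5, 0.01))                   # nearly flat -> Gamma << 1
```

Note that the obtuse example is also high aspect ratio, but the converse does not hold: the abstract's point is precisely that a high-aspect-ratio triangle need not be obtuse and can keep Γ near 9/4.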
Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets
International Nuclear Information System (INIS)
Fathizadeh, M.
1991-01-01
The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet is raised linearly to about the 1 kA level and, after a small transition period (1 ms to 10 ms typical), reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high-gain current loop is utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. PSpice software was used to design and simulate the power supply performance under ramping and to investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and a tracking error of ±5×10⁻⁴ were required.
Multi-Accuracy-Level Burning Plasma Simulations
International Nuclear Information System (INIS)
Artaud, J. F.; Basiuk, V.; Garcia, J.; Giruzzi, G.; Huynh, P.; Huysmans, G.; Imbeaux, F.; Johner, J.; Scheider, M.
2007-01-01
The design of a reactor-grade tokamak is based on a hierarchy of tools. We present here three codes that are presently used for simulations of burning plasmas. At the first level there is a 0-dimensional code that allows a reasonable range of global parameters to be chosen; in our case the HELIOS code was used for this task. For the second level we have developed a mixed 0-D/1-D code called METIS that allows the main properties of a burning plasma to be studied, including profiles and all heat and current sources, but always under the constraint of energy and other empirical scaling laws. METIS is a fast code that permits a large number of runs (a run takes about one minute), making it possible to design the main features of a scenario or to validate the results of the 0-D code on a full time evolution. At the top level, we used the full 1D1/2 suite of codes CRONOS, which gives access to a detailed study of the evolution of the plasma profiles. CRONOS can use a variety of modules for computing source terms and transport coefficients, with different levels of complexity and accuracy: from simple estimators to highly sophisticated physics calculations. It is thus possible to vary the accuracy of burning plasma simulations as a trade-off with computation time. A wide range of scenario studies can therefore be made with CRONOS and then validated with post-processing tools such as MHD stability analysis. We present in this paper results of this multi-level analysis applied to the ITER hybrid scenario. This specific example illustrates the importance of having several tools for the study of burning plasma scenarios, especially in a domain that present devices cannot access experimentally. (Author)
High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts
DEFF Research Database (Denmark)
Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard
2011-01-01
Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...
MODELING AND SIMULATION OF HIGH RESOLUTION OPTICAL REMOTE SENSING SATELLITE GEOMETRIC CHAIN
Directory of Open Access Journals (Sweden)
Z. Xia
2018-04-01
High resolution satellites, with their longer focal lengths and larger apertures, have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, consisting of the scene, the three-line-array camera, the platform (including attitude and position information), the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is validated by rigorously simulating geolocation accuracy according to the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results, and that the integrated design improves geolocation accuracy by about 7 m. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.
4D dose simulation in volumetric arc therapy: Accuracy and affecting parameters
Werner, René
2017-01-01
between 98% and 100%. Parameters of major influence on 4D VMAT dose simulation accuracy were the degree of temporal discretization of the dose delivery process (the higher, the better) and correct alignment of the assumed breathing phases at the beginning of the dose measurements and simulations. Given the high γ-passing rates between simulated motion-affected doses and dynamic measurements, we consider the simulations to provide a reliable basis for assessment of VMAT motion effects that, in the sense of 4D QA of VMAT treatment plans, allows verification of target coverage in hypofractionated VMAT-based radiotherapy of moving targets. Remaining differences between measurements and simulations, however, motivate further detailed studies. PMID:28231337
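The γ-passing rates quoted above come from the standard gamma-index comparison of two dose distributions, which combines a dose-difference and a distance-to-agreement (DTA) criterion. A minimal 1-D version (the 3%/3 mm criteria and synthetic Gaussian profiles are our choices, not the paper's data):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """1-D global gamma analysis: fraction of reference points with gamma <= 1."""
    x = np.arange(len(dose_ref)) * spacing_mm
    d_max = dose_ref.max()
    gammas = []
    for xi, di in zip(x, dose_ref):
        # Search over all evaluated points; gamma is the minimum combined metric.
        dist2 = ((x - xi) / dta_mm) ** 2
        diff2 = ((dose_eval - di) / (dd * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + diff2)))
    return float(np.mean(np.array(gammas) <= 1.0))

ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)       # synthetic dose profile
shifted = np.exp(-((np.arange(100) - 51) / 15.0) ** 2)   # 1 mm shift at 1 mm spacing
rate = gamma_pass_rate(ref, shifted, spacing_mm=1.0)     # small shift passes 3%/3 mm
```

A 1 mm positional error passes comfortably under 3%/3 mm; real 4D QA applies the same idea on 2-D/3-D dose grids.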
High-order dynamic lattice method for seismic simulation in anisotropic media
Hu, Xiaolin; Jia, Xiaofeng
2018-03-01
The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; it therefore represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that adding more neighbouring particles to the lattice unit yields different orders of spatial accuracy. According to the dispersion analysis, the high-order DLM presented here can be adapted to the spatial accuracy required for seismic wave simulations: for any given spatial accuracy, a corresponding high-order lattice unit can be designed to satisfy the requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.
Energy Technology Data Exchange (ETDEWEB)
Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)
2015-08-15
High resolution, cross-correlation-based, electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest, but overlap of patterns from the two grains could reduce the accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary, so that the effect on the accuracy of the cross-correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for a 200 nm step size in Zircaloy-4.
Assessing accuracy of point fire intervals across landscapes with simulation modelling
Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall
2007-01-01
We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...
International Nuclear Information System (INIS)
Karanki, D.R.; Rahman, S.; Dang, V.N.; Zerkak, O.
2017-01-01
The coupling of plant simulation models and stochastic models representing failure events in Dynamic Event Trees (DET) is a framework used to model the dynamic interactions among physical processes, equipment failures, and operator responses. The integration of physical and stochastic models may additionally enhance the treatment of uncertainties. Probabilistic Safety Assessments as currently implemented propagate the (epistemic) uncertainties in failure probabilities, rates, and frequencies, while the uncertainties in the physical model (parameters) are not propagated. The coupling of deterministic (physical) and probabilistic models in integrated simulations such as DET allows both types of uncertainties to be considered. However, integrated accident simulations with epistemic uncertainties will challenge even today's high performance computing infrastructure, especially for simulations of inherently complex nuclear or chemical plants. Conversely, intentionally limiting computations for practical reasons would compromise the accuracy of results. This work investigates how to trade off accuracy and computation to quantify risk in light of both uncertainties and accident dynamics. A simple depleting-tank problem that can be solved analytically is considered to examine the adequacy of a discrete DET approach. The results show that optimal allocation of computational resources between epistemic and aleatory calculations by means of convergence studies ensures accuracy within a limited budget. - Highlights: • Accident simulations considering uncertainties require intensive computations. • The tradeoff between accuracy and accident simulations is a challenge. • Optimal allocation between epistemic & aleatory computations ensures the tradeoff. • Online convergence gives an early indication of computational requirements. • Uncertainty propagation in DDET is examined on a tank problem solved analytically.
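The depleting-tank benchmark lends itself to a small nested Monte Carlo sketch of the epistemic/aleatory split discussed above: an outer loop samples epistemic parameters (here, an uncertain drain rate), an inner loop samples aleatory events (here, a random pump-failure time), and the risk estimate converges as both sample counts grow. The numbers and distributions below are our stand-ins, not the paper's benchmark:

```python
import random

def dryout_probability(n_epistemic=200, n_aleatory=200,
                       level0=10.0, horizon=5.0, seed=1):
    """P(tank empties before `horizon`), with epistemic drain-rate uncertainty."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_epistemic):
        drain = rng.uniform(1.0, 3.0)          # epistemic: uncertain drain rate
        hits = 0
        for _ in range(n_aleatory):
            t_fail = rng.expovariate(1.0)      # aleatory: makeup-pump failure time
            # After the pump fails, the level depletes linearly at rate `drain`.
            t_empty = t_fail + level0 / drain
            hits += t_empty < horizon
        estimates.append(hits / n_aleatory)
    return sum(estimates) / n_epistemic        # mean over the epistemic family

p = dryout_probability()
```

The convergence study the paper describes amounts to tracking how this estimate stabilizes as n_epistemic and n_aleatory increase, and spending the fixed budget where the estimate is still moving.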
Directory of Open Access Journals (Sweden)
Xiaoming Zha
2016-11-01
Power hardware-in-the-loop (PHIL) systems are advanced, real-time platforms for combined software and hardware testing. Two paramount issues in PHIL simulation are closed-loop stability and simulation accuracy. This paper presents a virtual impedance (VI) method for PHIL simulations that improves the simulation's stability and accuracy. Through the establishment of an impedance model for a PHIL simulation circuit, composed of a voltage-source converter and a simple network, the stability and accuracy of the PHIL system are analyzed. The proposed VI method is then implemented in a digital real-time simulator and used to correct the combined impedance in the impedance model, achieving higher stability and accuracy of the results. The validity of the VI method is verified through PHIL simulation of two typical examples.
Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments
International Nuclear Information System (INIS)
Sutherland, B.; Hanlon, P.; Charles, P.
2011-01-01
Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy linear accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields were simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High resolution QA was done for all fields using the MatriXX ion chamber array, the MapCHECK2 diode array (shifted), and the EPID, to determine a procedure for commissioning new treatment sites. Basic spine simulations found that the TPS overestimated the absorbed dose to bone; within the spinal cord, however, there was good agreement. High resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 shifted, and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to a probable recommendation of the EPID for future IMRT commissioning, owing to its high resolution and minimal setup requirements.
Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.
Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2012-11-08
A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. The simulated image was then resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) as in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
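The essence of the method, reduced here to one dimension with a Gaussian stand-in for the measured PSF (the sizes, densities, and PSF width below are our assumptions), is to convolve a known object function with the PSF, resample at the clinical slice interval, and read off the value an ROI would report:

```python
import numpy as np

def simulated_roi_density(diameter_mm, true_hu, psf_sigma_mm, slice_mm, offset_mm=0.0):
    """1-D sketch: blur a nodule profile with a Gaussian PSF, sample on a slice grid."""
    dx = 0.01                                       # fine internal grid (mm)
    x = np.arange(-30.0, 30.0, dx)
    obj = np.where(np.abs(x) <= diameter_mm / 2, true_hu, 0.0)   # object function
    psf = np.exp(-x**2 / (2 * psf_sigma_mm**2))
    psf /= psf.sum()
    blurred = np.convolve(obj, psf, mode="same")
    # Resample at the slice interval, shifted by the nodule/voxel-center offset.
    samples = np.interp(np.arange(-10.0, 10.0, slice_mm) + offset_mm, x, blurred)
    return float(samples.max())                     # what a small ROI would report

# Small nodules under-read their true density; large ones recover it.
small = simulated_roi_density(diameter_mm=3.0, true_hu=100.0, psf_sigma_mm=1.0, slice_mm=2.0)
large = simulated_roi_density(diameter_mm=15.0, true_hu=100.0, psf_sigma_mm=1.0, slice_mm=2.0)
```

Varying offset_mm reproduces the fluctuation the paper describes, and shrinking slice_mm (overlapping reconstruction) damps it.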
Wang, Hubiao; Wu, Lin; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-12-20
Comparing the location accuracy of gravity matching-aided navigation between ocean experiments and simulations is very important for evaluating the feasibility and performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1' × 1' marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of the IGNS was conducted using data from a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by establishing normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of the IGNS was simulated. The results show that changes in u had little effect on the location accuracy, whereas an increase in σ² resulted in a significant decrease in location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and of the quantitative analysis results. In addition, given the gravimeter (1-2 mGal accuracy) and the reference map (resolution 1' × 1'; accuracy 3-8 mGal), the location accuracy of the IGNS reached ~1.0-3.0 n miles in the South China Sea.
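The matching step, and the paper's finding that a bias u matters far less than the noise variance σ², can be illustrated in one dimension: slide the measured gravity-anomaly window along the reference profile and take the position minimizing the mean-removed mismatch (the synthetic random-walk "map" and the noise levels below are ours, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def match_position(window, ref_map):
    """Best-fit start index of `window` in `ref_map`, using mean-removed SSD."""
    w = window - window.mean()               # removing the mean cancels any bias u
    costs = []
    for s in range(len(ref_map) - len(w) + 1):
        seg = ref_map[s:s + len(w)]
        costs.append(np.sum((seg - seg.mean() - w) ** 2))
    return int(np.argmin(costs))

ref = np.cumsum(rng.normal(0.0, 3.0, 500))   # synthetic rough anomaly profile (mGal)
true_start = 200
window = ref[true_start:true_start + 40]
pos_biased = match_position(window + 5.0, ref)                      # bias u = 5 -> 200
pos_noisy = match_position(window + rng.normal(0.0, 8.0, 40), ref)  # sigma = 8 mGal
```

The biased window still matches exactly, while increasing the noise standard deviation progressively degrades the fix, mirroring the paper's u-versus-σ² result.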
High Fidelity BWR Fuel Simulations
Energy Technology Data Exchange (ETDEWEB)
Yoon, Su Jong [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-08-01
This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High fidelity computational fluid dynamics (CFD) simulations of a Boiling Water Reactor (BWR) were conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry from the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented, and the performance of a multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality in the CFD results for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result; the stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary to improve the accuracy of the CFD simulation.
A comparison of the accuracy of intraoral scanners using an intraoral environment simulator.
Park, Hye-Nan; Lim, Young-Jun; Yi, Won-Jin; Han, Jung-Suk; Lee, Seung-Pyo
2018-02-01
The aim of this study was to design an intraoral environment simulator and to assess the accuracy of two intraoral scanners using the simulator. A box-shaped intraoral environment simulator was designed to simulate two specific intraoral environments. The cast was scanned 10 times by Identica Blue (MEDIT, Seoul, South Korea), TRIOS (3Shape, Copenhagen, Denmark), and CS3500 (Carestream Dental, Georgia, USA) scanners in the two simulated groups. The distances between the left and right canines (D3), first molars (D6), second molars (D7), and the left canine and left second molar (D37) were measured. The distance data were analyzed by the Kruskal-Wallis test. The differences between intraoral environments were not statistically significant (P>.05). Between intraoral scanners, statistically significant differences (P<.05) were revealed by the Kruskal-Wallis test with regard to D3 and D6. No difference due to the intraoral environment was revealed. The simulator will contribute to the higher accuracy of intraoral scanners in the future.
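The Kruskal-Wallis comparison used above is a rank-based test; a self-contained sketch (the D3 distance measurements below are hypothetical, not the study's data) computes the H statistic and compares it against the χ² critical value for two degrees of freedom:

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (average ranks for ties; no tie correction)."""
    data = np.concatenate(groups)
    order = np.argsort(data, kind="stable")
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)
    for v in np.unique(data):                # average ranks for tied values
        mask = data == v
        ranks[mask] = ranks[mask].mean()
    n_total, h, start = len(data), 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n_total + 1) / 2) ** 2
        start += len(g)
    return 12.0 * h / (n_total * (n_total + 1))

# Hypothetical D3 (inter-canine) distances in mm, 10 scans per scanner.
identica = np.array([35.01, 35.03, 34.99, 35.02, 35.00, 35.04, 35.02, 35.01, 35.03, 35.00])
trios    = np.array([35.10, 35.12, 35.09, 35.11, 35.13, 35.10, 35.08, 35.12, 35.11, 35.09])
cs3500   = np.array([35.02, 35.05, 35.01, 35.03, 35.04, 35.02, 35.06, 35.03, 35.04, 35.02])

h = kruskal_h(identica, trios, cs3500)
# H above 5.99 (chi-square, df=2, alpha=0.05) indicates the scanners differ.
```

In practice scipy.stats.kruskal does the same computation (with tie correction) and returns the p-value directly.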
Two high accuracy digital integrators for Rogowski current transducers
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which suffer from problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift; unknown initial conditions can also result in an output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers in place of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators outperform analog integrators. Simulation models are built for verification and comparison. Experiments prove that the designed integrators achieve higher accuracy than analog integrators under steady-state, transient, and temperature-varying conditions.
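One of the two ingredients described above, the attenuation coefficient, can be sketched as a "leaky" Al-Alaoui integrator: moving the integrator pole slightly inside the unit circle bleeds off the DC accumulation that causes drift. This is a simplified illustration, not the paper's design, and the PID feedback loop is omitted; the sampling rate, leak factor, and test signal are our assumptions:

```python
import numpy as np

def leaky_al_alaoui(x, T, a=0.999):
    """Al-Alaoui digital integrator with attenuation coefficient `a` on the pole.

    H(z) = (7T/8) * (1 + z^-1 / 7) / (1 - a * z^-1); choosing a < 1 bounds the
    response to any DC component instead of letting it accumulate without limit.
    """
    y = np.zeros_like(x, dtype=float)
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (7.0 * T / 8.0) * (x[n] + x[n - 1] / 7.0)
    return y

fs = 10_000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
didt = np.cos(2 * np.pi * 50 * t) + 0.01      # 50 Hz signal plus a small DC offset
i_hat = leaky_al_alaoui(didt, T=1.0 / fs)
# A pure accumulator would ramp up under the 0.01 DC offset; the leak bounds the output.
```

With a = 1 this reduces to the standard Al-Alaoui integrator; the closer a is to 1, the more accurate the low-frequency response but the slower the DC drift is suppressed, which is the trade-off the PID loop in the paper manages.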
International Nuclear Information System (INIS)
Meng, Xiaojing; Wang, Yi; Liu, Tiening; Xing, Xiao; Cao, Yingxue; Zhao, Jiangping
2016-01-01
Highlights: • The effects of radiation on predictive accuracy in numerical simulations were studied. • A scaled experimental model with a high-temperature heat source was set up. • Simulation results with and without a radiation model were compared. • The buoyancy force and the ventilation rate were investigated. - Abstract: This paper investigates the effects of radiation on predictive accuracy in numerical simulations of industrial buildings. A scaled experimental model with a high-temperature heat source is set up and the buoyancy-driven natural ventilation performance is presented. Besides predicting ventilation performance in an industrial building, the scaled model is also used to generate data to validate the numerical simulations. The simulation results show good agreement with the experimental data. The effects of radiation on predictive accuracy are studied for both a pure convection model and a combined convection and radiation model. Detailed results are discussed regarding the temperature and velocity distributions, the buoyancy force, and the ventilation rate. The temperature and velocity distributions through the middle plane are presented for both models. The overall temperature and velocity magnitudes predicted by the pure convection model are significantly greater than those of the combined convection and radiation model. In addition, the Grashof number and the ventilation rate are investigated; both are greater for the pure convection model than for the combined convection and radiation model.
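The Grashof number compared above is the standard ratio of buoyancy to viscous forces, Gr = g·β·ΔT·L³/ν². A minimal sketch; the air properties used in the example are typical textbook values, not the paper's data:

```python
def grashof(g, beta, delta_t, length, nu):
    """Grashof number Gr = g*beta*dT*L^3 / nu^2 (buoyancy vs. viscous forces)."""
    return g * beta * delta_t * length**3 / nu**2

# illustrative: air near 300 K (beta ~ 1/T), a 1 m scale, 50 K excess temperature
gr = grashof(g=9.81, beta=1 / 300, delta_t=50.0, length=1.0, nu=1.6e-5)
```

A larger predicted temperature excess (as in the pure convection model) directly inflates ΔT and hence Gr, which is why the radiation model lowers both.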
Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing
2015-11-21
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.
Lorenz, Aaron J
2013-03-01
Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation decisions.
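The population-size-versus-replication trade-off can be sketched with one widely used closed-form approximation for genomic prediction accuracy (a Daetwyler-type formula; not necessarily the exact formula this study used). The budget, heritability, and effective number of segments Me below are illustrative assumptions:

```python
import math

def entry_mean_h2(h2_plot, reps):
    """Heritability on an entry-mean basis with `reps` replications per line
    (phenotypic variance normalised to 1)."""
    return h2_plot * reps / (h2_plot * reps + (1.0 - h2_plot))

def expected_accuracy(n_lines, h2, m_eff):
    """Daetwyler-type approximation: r = sqrt(N*h2 / (N*h2 + Me))."""
    return math.sqrt(n_lines * h2 / (n_lines * h2 + m_eff))

# fixed budget of 400 plots: 400 unreplicated lines vs. 200 lines replicated twice
acc_unrep = expected_accuracy(400, entry_mean_h2(0.3, 1), m_eff=100)
acc_rep = expected_accuracy(200, entry_mean_h2(0.3, 2), m_eff=100)
```

Under these assumptions the unreplicated design wins, mirroring the OLS result above; the formula also shows why RR-BLUP can be more forgiving, since accuracy depends on the product N·h² rather than on N alone.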
A fuzzy set approach to assess the predictive accuracy of land use simulations
van Vliet, J.; Hagen-Zanker, A.; Hurkens, J.; van Delden, H.
2013-01-01
The predictive accuracy of land use models is frequently assessed by comparing two data sets: the simulated land use map and the observed land use map at the end of the simulation period. A common statistic for this is Kappa, which expresses the agreement between two categorical maps, corrected for the agreement expected by chance.
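Plain (crisp) Kappa, which the fuzzy-set approach above generalizes, can be computed directly from two categorical maps flattened to label lists:

```python
def kappa(map_a, map_b):
    """Cohen's kappa between two categorical maps (flattened lists of labels):
    observed agreement corrected for the agreement expected by chance."""
    n = len(map_a)
    cats = set(map_a) | set(map_b)
    po = sum(a == b for a, b in zip(map_a, map_b)) / n          # observed agreement
    pe = sum((map_a.count(c) / n) * (map_b.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)
```

Identical maps give kappa = 1; agreement no better than chance gives kappa = 0. The fuzzy variant replaces the exact cell-by-cell match with a graded similarity over categories and neighbourhoods.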
High accuracy FIONA-AFM hybrid imaging
International Nuclear Information System (INIS)
Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.
2011-01-01
Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≤8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.
High Accuracy Transistor Compact Model Calibrations
Energy Technology Data Exchange (ETDEWEB)
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in these margins. Given this need, new high accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Micro Injection Moulding High Accuracy Three-Dimensional Simulations and Process Control
DEFF Research Database (Denmark)
Tosello, Guido; Costa, F.S.; Hansen, Hans Nørgaard
2011-01-01
Data analysis and simulations of micro-moulding experiments have been conducted. Micro moulding simulations have been executed by implementing in the software the actual processing conditions. Various aspects of the simulation set-up have been considered in order to improve the simulation accuracy.
High Accuracy Attitude Control System Design for Satellite with Flexible Appendages
Directory of Open Access Journals (Sweden)
Wenya Zhou
2014-01-01
Full Text Available In order to realize high accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When a low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control system through the measurement device, degrading accuracy or even stability. In order to reduce this impact, a structural filter is designed to reject the vibration of the flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
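A notch (band-reject) filter of the kind described can be sketched as a second-order biquad with zeros on the unit circle at the vibration frequency and poles just inside it; an adaptive version would re-tune `f_notch` from the in-orbit identified frequency. The frequencies and pole radius below are illustrative, and `scipy.signal.iirnotch` offers a production design:

```python
import math

def notch_coeffs(f_notch, fs, r=0.95):
    """Biquad notch: zeros on the unit circle at f_notch, poles at radius r
    just inside it; r closer to 1 gives a narrower rejection band."""
    w0 = 2 * math.pi * f_notch / fs
    b = [1.0, -2.0 * math.cos(w0), 1.0]
    a = [1.0, -2.0 * r * math.cos(w0), r * r]
    return b, a

def filt(b, a, x):
    """Direct-form difference equation for the biquad (a[0] assumed 1)."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(3) if n - i >= 0)
        acc -= sum(a[i] * y[n - i] for i in range(1, 3) if n - i >= 0)
        y.append(acc)
    return y

fs = 100.0                       # illustrative sample rate (Hz)
b, a = notch_coeffs(5.0, fs)     # reject a hypothetical 5 Hz appendage mode
```

Feeding a 5 Hz sinusoid through this filter drives the output toward zero after the transient, while signals well away from the notch pass almost unchanged.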
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. In strip perforations, the accuracies of low- and high-resolution modes were 75% and 83% for NewTom 3G and 67% and 69% for Cranex 3D. In root perforations, the accuracies of low- and high-resolution modes were 79% and 83% for NewTom 3G and 56% and 73% for Cranex 3D. The accuracy of the 2 CBCT systems was different for the detection of strip and root perforations. The NewTom 3G had non-significantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
Accuracy of finite-difference modeling of seismic waves : Simulation versus laboratory measurements
Arntsen, B.
2017-12-01
The finite-difference technique for numerical modeling of seismic waves is still important and in some areas extensively used. For exploration purposes, finite-difference simulation is at the core of both traditional imaging techniques such as reverse-time migration and more elaborate full-waveform inversion techniques. The accuracy and fidelity of finite-difference simulation of seismic waves are hard to quantify, and meaningful error analysis is really only easily available for simplistic media. A possible alternative to theoretical error analysis is provided by comparing finite-difference simulated data with laboratory data created using a scale model. The advantage of this approach is the accurate knowledge of the model, within measurement precision, and of the locations of sources and receivers. We use a model made of PVC immersed in water, containing horizontal and tilted interfaces together with several spherical objects, to generate ultrasonic pressure reflection measurements. The physical dimensions of the model are of the order of a meter, which after scaling represents a model with dimensions of the order of 10 kilometers and frequencies in the range of one to thirty hertz. We find that for plane horizontal interfaces the laboratory data can be reproduced by the finite-difference scheme with relatively small error, but for steeply tilted interfaces the error increases. For spherical interfaces the discrepancy between laboratory and simulated data is sometimes much more severe, to the extent that it is not possible to simulate reflections from parts of highly curved bodies. The results are important in view of the fact that finite-difference modeling is often at the core of imaging and inversion algorithms tackling complicated geological areas with highly curved interfaces.
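The discretization error being quantified above can be seen in miniature with a 1-D explicit finite-difference scheme for the wave equation, checked against the exact standing-wave solution u(x,t) = sin(πx)·cos(πct). The grid and CFL number are illustrative:

```python
import math

def wave_1d(nx=101, c=1.0, cfl=0.5, t_end=0.25):
    """Explicit second-order FD scheme for u_tt = c^2 u_xx on [0,1] with fixed
    ends, initial condition u = sin(pi*x), u_t = 0. Returns (u, final time)."""
    dx = 1.0 / (nx - 1)
    dt = cfl * dx / c
    steps = round(t_end / dt)
    lam2 = (c * dt / dx) ** 2
    u0 = [math.sin(math.pi * i * dx) for i in range(nx)]
    u1 = u0[:]                      # first step from a Taylor expansion (u_t = 0)
    for i in range(1, nx - 1):
        u1[i] = u0[i] + 0.5 * lam2 * (u0[i + 1] - 2 * u0[i] + u0[i - 1])
    for _ in range(steps - 1):
        u2 = u1[:]
        for i in range(1, nx - 1):
            u2[i] = 2 * u1[i] - u0[i] + lam2 * (u1[i + 1] - 2 * u1[i] + u1[i - 1])
        u0, u1 = u1, u2
    return u1, steps * dt
```

For this smooth solution the error at the midpoint is well below one percent; the paper's point is that for tilted and curved interfaces no such clean analytic benchmark exists, hence the scale-model comparison.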
International Nuclear Information System (INIS)
Tehrani, Joubin Nasehi; Wang, Jing; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu
2015-01-01
Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney–Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney–Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney–Rivlin material model along left-right, anterior–posterior, and superior–inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. (paper)
Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E
1998-07-15
To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients, and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring, which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.
International Nuclear Information System (INIS)
Mah, Katherine; Danjoux, Cyril E.; Manship, Sharan; Makhani, Nadiya; Cardoso, Marlene; Sixel, Katharina E.
1998-01-01
Purpose: To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. Methods and Materials: A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Results: Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Conclusions: Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.
Testing the accuracy of clustering redshifts with simulations
Scottez, V.; Benoit-Lévy, A.; Coupon, J.; Ilbert, O.; Mellier, Y.
2018-03-01
We explore the accuracy of clustering-based redshift inference within the MICE2 simulation. This method uses the spatial clustering of galaxies between a spectroscopic reference sample and an unknown sample. This study gives an estimate of the reachable accuracy of the method. First, we discuss the requirements on the number of objects in the two samples, confirming that this method does not require a representative spectroscopic sample for calibration. In the context of the next generation of cosmological surveys, we estimate that the density of the Quasi Stellar Objects in BOSS allows us to reach 0.2 per cent accuracy in the mean redshift. Secondly, we estimate individual redshifts for galaxies in the densest regions of colour space (˜30 per cent of the galaxies) without using the photometric redshifts procedure. The advantage of this procedure is threefold. It allows: (i) the use of cluster-zs for any field in astronomy, (ii) the possibility to combine photo-zs and cluster-zs to get an improved redshift estimation, and (iii) the use of cluster-zs to define tomographic bins for weak lensing. Finally, we explore this last option and build five cluster-z selected tomographic bins from redshift 0.2 to 1. We find a bias on the mean redshift estimate of 0.002 per bin. We conclude that cluster-zs could be used as a primary redshift estimator by the next generation of cosmological surveys.
Kontrola tačnosti rezultata u simulacijama Monte Karlo / Accuracy control in Monte Carlo simulations
Directory of Open Access Journals (Sweden)
Nebojša V. Nikolić
2010-04-01
Full Text Available The paper demonstrates an application of the Automated Independent Replication with Gathering Statistics of the Stochastic Processes Method in achieving and controlling the accuracy of simulation results in Monte Carlo queuing simulations. The method is based on the basic theorems of the theory of probability and mathematical statistics. The accuracy of the simulation results is directly linked with the number of independent replications of simulation experiments.
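The independent-replication idea can be sketched as follows: repeat runs until the confidence-interval half-width of the estimated mean falls below a target accuracy. The normal-approximation z value and the toy "simulation" below are illustrative assumptions, not the paper's model:

```python
import math
import random

def replicate_until(target_hw, run_once, z=1.96, min_reps=10, max_reps=100000):
    """Repeat independent simulation runs until the 95% confidence-interval
    half-width of the sample mean drops below target_hw (normal approximation).
    Returns (mean estimate, achieved half-width, replications used)."""
    vals = [run_once() for _ in range(min_reps)]
    while True:
        n = len(vals)
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / (n - 1)   # sample variance
        hw = z * math.sqrt(var / n)
        if hw <= target_hw or n >= max_reps:
            return mean, hw, n
        vals.append(run_once())

# toy stand-in for one simulation experiment: a Uniform(0, 1) draw
rng = random.Random(1)
mean, hw, n = replicate_until(0.01, rng.random)
```

Because the half-width shrinks as 1/√n, halving the target accuracy costs roughly four times as many replications, which is the accuracy/effort link the abstract describes.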
Hurtado, Daniel E.; Rojas, Guillermo
2018-04-01
Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh dependence of the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ˜1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high-density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low data density, while for non-contact methods it is low accuracy. In this paper, a method is presented for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method, the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair, the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. The transformation enabling displacement of the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade, and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a triangle mesh.
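The virtual-marker step amounts to a least-squares rigid registration between corresponding characteristic points, then applying that transform to the whole optical point cloud. A 2-D closed form is sketched below (the paper's data are 3-D, where the same idea uses an SVD-based Kabsch solution; the marker coordinates are made up):

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (rotation + translation) taking the src
    marker points onto the dst marker points; returns a function applying it."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]   # source centroid
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]   # destination centroid
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        px, py, qx, qy = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        sxx += px * qx + py * qy                         # cross-covariance terms
        sxy += px * qy - py * qx
    th = math.atan2(sxy, sxx)                            # optimal rotation angle
    c, s = math.cos(th), math.sin(th)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
```

Once fitted on the marker pairs, the returned function is applied to every point of the dense optical cloud, pulling it onto the contact-measurement reference frame.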
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang
2012-01-01
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales.
Directory of Open Access Journals (Sweden)
WIDAD Elmahboub
2005-02-01
Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment is generated to simulate real world data with known end-member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error is computed for each pixel value for three generated images with SPOT data. Each pixel can then be either correctly classified or misclassified. Results showed a large improvement in classification accuracy; for example, in image 1, 41% of pixels were misclassified due to atmospheric noise. After removal of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that classification accuracy can be improved by removing atmospheric noise.
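The linear mixture model with normalized class cover proportions and additive noise can be sketched as follows; the end-member spectra here are made-up numbers standing in for the SPOT-derived spectra, and the noise level is illustrative:

```python
import random

def simulate_mixed_pixels(endmembers, n_pixels, noise_sd, rng):
    """Linear mixture model: each pixel is a proportion-weighted sum of the
    end-member spectra plus additive Gaussian noise; proportions sum to 1."""
    pixels, proportions = [], []
    k = len(endmembers)
    bands = len(endmembers[0])
    for _ in range(n_pixels):
        raw = [rng.random() for _ in range(k)]
        s = sum(raw)
        p = [r / s for r in raw]        # normalised class cover proportions (CCP)
        spec = [sum(p[c] * endmembers[c][b] for c in range(k)) + rng.gauss(0, noise_sd)
                for b in range(bands)]
        pixels.append(spec)
        proportions.append(p)
    return pixels, proportions

# two hypothetical end members over three spectral bands
em = [[0.1, 0.5, 0.9], [0.8, 0.2, 0.4]]
rng = random.Random(0)
pixels, props = simulate_mixed_pixels(em, 5, 0.0, rng)
```

With the noise level set to zero, every simulated pixel is a convex combination of the end members, so each band value stays between the end-member extremes; raising `noise_sd` reproduces the controlled degradation the experiment uses.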
Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report.
Energy Technology Data Exchange (ETDEWEB)
Thompson, Aidan Patrick; Schultz, Peter Andrew; Crozier, Paul; Moore, Stan Gerald; Swiler, Laura Painton; Stephens, John Adam; Trott, Christian Robert; Foiles, Stephen Martin; Tucker, Garritt J. (Drexel University)
2014-09-01
This report summarizes the result of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers.
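At its core the SNAP fit is a weighted linear least-squares problem. A self-contained sketch via the normal equations follows, with toy descriptor vectors standing in for the bispectrum components (the real fit, as the abstract notes, runs through FitSnap.py against LAMMPS/DAKOTA):

```python
def fit_linear_potential(descriptors, energies, weights):
    """Weighted linear least squares via the normal equations: find c
    minimizing sum_i w_i * (d_i . c - E_i)^2. Same algebraic problem SNAP
    solves, with bispectrum components as descriptors d_i."""
    m = len(descriptors[0])
    A = [[sum(w * d[i] * d[j] for d, w in zip(descriptors, weights))
          for j in range(m)] for i in range(m)]
    b = [sum(w * d[i] * e for d, e, w in zip(descriptors, energies, weights))
         for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c2 in range(col, m):
                A[r][c2] -= f * A[col][c2]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][c2] * coeffs[c2]
                                for c2 in range(r + 1, m))) / A[r][r]
    return coeffs
```

Per-configuration weights are what let the fit emphasize, say, defect structures over bulk crystals, which is one of the hyperparameter choices DAKOTA searches over.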
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients
International Nuclear Information System (INIS)
Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.
2016-01-01
Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of High-Definition CT Angiography (HD-CTCA) has been assessed. • Invasive Coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
High Accuracy Positioning using Jet Thrusters for Quadcopter
Directory of Open Access Journals (Sweden)
Pi ChenHuan
2018-01-01
Full Text Available A quadcopter is equipped with four additional jet thrusters on its horizontal plane and vertical to each other in order to improve the maneuverability and positioning accuracy of quadcopter. A dynamic model of the quadcopter with jet thrusters is derived and two controllers are implemented in simulation, one is a dual loop state feedback controller for pose control and another is an auxiliary jet thruster controller for accurate positioning. Step response simulations showed that the jet thruster can control the quadcopter with less overshoot compared to the conventional one. Over 10s loiter simulation with disturbance, the quadcopter with jet thruster decrease 85% of RMS error of horizontal disturbance compared to a conventional quadcopter with only a dual loop state feedback controller. The jet thruster controller shows the possibility for further accurate in the field of quadcopter positioning.
Simulations of pulsating one-dimensional detonations with true fifth order accuracy
International Nuclear Information System (INIS)
Henrick, Andrew K.; Aslam, Tariq D.; Powers, Joseph M.
2006-01-01
A novel, highly accurate numerical scheme based on shock-fitting coupled with fifth order spatial and temporal discretizations is applied to a classical unsteady detonation problem to generate solutions with unprecedented accuracy. The one-dimensional reactive Euler equations for a calorically perfect mixture of ideal gases whose reaction is described by single-step irreversible Arrhenius kinetics are solved in a series of calculations in which the activation energy is varied. In contrast with nearly all known simulations of this problem, which converge at a rate no greater than first order as the spatial and temporal grid is refined, the present method is shown to converge at a rate consistent with the fifth order accuracy of the spatial and temporal discretization schemes. This high accuracy enables more precise verification of known results and prediction of heretofore unknown phenomena. To five significant figures, the scheme faithfully recovers the stability boundary, growth rates, and wave-numbers predicted by an independent linear stability theory in the stable and weakly unstable regime. As the activation energy is increased, a series of period-doubling events are predicted, and the system undergoes a transition to chaos. Consistent with general theories of non-linear dynamics, the bifurcation points are seen to converge at a rate for which the Feigenbaum constant is 4.66 ± 0.09, in close agreement with the true value of 4.669201... As activation energy is increased further, domains are identified in which the system undergoes a transition from a chaotic state back to one whose limit cycles are characterized by a small number of non-linear oscillatory modes. This result is consistent with behavior of other non-linear dynamical systems, but not typically considered in detonation dynamics. The period and average detonation velocity are calculated for a variety of asymptotically stable limit cycles. The average velocity for such pulsating detonations is found
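The period-doubling convergence described above can be illustrated with a short calculation; the bifurcation values below are the well-known logistic-map ones, standing in for the activation-energy bifurcations of the detonation problem:

```python
# The cascade's convergence rate can be checked directly from successive
# bifurcation points. These are the familiar logistic-map bifurcation
# parameters, used here purely as an illustration of the ratio test.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

deltas = [(r[i] - r[i - 1]) / (r[i + 1] - r[i]) for i in range(1, len(r) - 1)]
print(deltas[-1])  # ratios approach the Feigenbaum constant 4.669201...
```

The paper's estimate of 4.66 ± 0.09 comes from exactly this kind of ratio, computed on the activation-energy values at which period doubling occurs.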
EM Simulation Accuracy Enhancement for Broadband Modeling of On-Wafer Passive Components
DEFF Research Database (Denmark)
Johansen, Tom Keinicke; Jiang, Chenhui; Hadziabdic, Dzenan
2007-01-01
This paper describes methods for accuracy enhancement in broadband modeling of on-wafer passive components using electromagnetic (EM) simulation. It is shown that standard excitation schemes for integrated component simulation lead to poor correlation with on-wafer measurements beyond the lower...... GHz frequency range. We show that this is due to parasitic effects and higher-order modes caused by the excitation schemes. We propose a simple equivalent circuit for the parasitic effects in the well-known ground ring excitation scheme. An extended L-2L calibration method is shown to improve......
Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.
Maldonado, G; Greenland, S
1998-07-01
A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
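A minimal sketch of what "factoring" means in this context, with an invented exposure variable: the single ordered exposure column is replaced by a set of dichotomous indicators, one per non-reference level:

```python
import numpy as np

# "Factoring" an ordered exposure: replace the single linear term with a set
# of dichotomous indicators, one per non-reference level. The exposure values
# are invented for illustration.
exposure = np.array([0, 1, 2, 3, 1, 2, 0, 3])  # 4 dose categories, 0 = reference

linear_term = exposure.reshape(-1, 1)          # unfactored model: 1 column
levels = np.arange(1, 4)
indicators = (exposure[:, None] == levels).astype(int)  # factored: 3 columns

print(indicators.shape)  # → (8, 3)
```

The factored design matrix lets each dose level take its own rate, at the cost of two extra parameters here, which is the bias-variance trade-off the simulation study examines.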
Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong
2017-12-04
A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N₂O₅) at the ppbv level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N₂O₅, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H₂O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10) compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N₂O₅ sensing platform was evaluated by real-time tracking of the N₂O₅ concentration in its most important nocturnal tropospheric chemical reaction, NO₃ + NO₂ ↔ N₂O₅, in an atmospheric simulation chamber. Based on an optical absorption path length of L_eff = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time, improving to 3 ppbv in 400 s. The equilibrium rate constant K_eq involved in the above chemical reaction was determined from direct concentration measurements using the developed EC-QCL sensing platform, and was in good agreement with the theoretical value deduced from a referenced empirical formula under well-controlled experimental conditions. The present work demonstrates the potential and the unique advantage of a modern external cavity quantum cascade laser for direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetics and climate-change-related tropospheric chemistry.
Energy Technology Data Exchange (ETDEWEB)
Frouzakis, C. E.; Boulouchos, K.
2005-12-15
This comprehensive illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on the work done at the Swiss Federal Institute of Technology in Zurich on the numerical simulation of combustion processes at high Reynolds numbers. The authors note that, with appropriately extensive computational effort, results can be obtained that demonstrate a high degree of accuracy. A large part of the project work was devoted to the development of algorithms for the simulation of combustion processes. Application work is also discussed, with research on combustion stability being carried out. The direct numerical simulation (DNS) methods used are described and co-operation with other institutes is noted. The results of experimental work are compared with those provided by simulation and are discussed in detail. Conclusions and an outlook round off the report.
DEFF Research Database (Denmark)
Jakobsen, Jakob; Jensen, Anna B. O.; Nielsen, Allan Aasbjerg
2015-01-01
non-line-of-sight satellites. The signal reflections are implemented using the extended geometric path length of the signal path caused by reflections from the surrounding buildings. Based on real GPS satellite positions, simulated Galileo satellite positions, models of atmospheric effect...... on the satellite signals, designs of representative environments e.g. urban and rural scenarios, and a method to simulate reflection of satellite signals within the environment we are able to estimate the position accuracy given several prerequisites as described in the paper. The result is a modelling...... of the signal path from satellite to receiver, the satellite availability, the extended pseudoranges caused by signal reflection, and an estimate of the position accuracy based on a least squares adjustment of the extended pseudoranges. The paper describes the models and algorithms used and a verification test...
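The least-squares adjustment of pseudoranges mentioned above can be sketched as a simple Gauss-Newton trilateration; the satellite and receiver positions are invented, and the receiver clock bias is omitted for brevity:

```python
import numpy as np

# Hedged sketch of a pseudorange least-squares adjustment: linearize the
# range equations around an a-priori position and iterate on the correction.
# All positions are invented; clock bias and atmospheric terms are omitted.
sats = np.array([[20e6, 0.0, 10e6],
                 [0.0, 20e6, 12e6],
                 [-15e6, 5e6, 15e6],
                 [5e6, -18e6, 11e6]])      # satellite positions (m)
truth = np.array([1000.0, 2000.0, 500.0])  # "true" receiver position (m)
rho = np.linalg.norm(sats - truth, axis=1) # observed (noise-free) pseudoranges

x = np.zeros(3)                            # a-priori receiver position
for _ in range(5):                         # Gauss-Newton iterations
    r = np.linalg.norm(sats - x, axis=1)   # computed ranges
    H = (x - sats) / r[:, None]            # design matrix of unit vectors
    dx = np.linalg.lstsq(H, rho - r, rcond=None)[0]
    x = x + dx

print(np.allclose(x, truth, atol=1e-3))
```

In the simulation framework described above, the pseudoranges fed into such an adjustment are extended by the reflection path lengths, which is what degrades the estimated position accuracy in the urban scenarios.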
International Nuclear Information System (INIS)
Zhong Xiaolin; Tatineni, Mahidhar
2003-01-01
The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest, to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes are tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat plate boundary layer flows. The high-order non-uniform-grid schemes (up to 11th order) are subsequently applied to the simulation of the receptivity of a hypersonic boundary layer to free stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for wall-bounded supersonic flow.
Application of large-eddy simulation to pressurized thermal shock: Assessment of the accuracy
International Nuclear Information System (INIS)
Loginov, M.S.; Komen, E.M.J.; Hoehne, T.
2011-01-01
Highlights: → We compare large-eddy simulation with experiment on the single-phase pressurized thermal shock problem. → Three test cases are considered; they cover the entire range of mixing patterns. → The accuracy of the flow mixing in the reactor pressure vessel is assessed qualitatively and quantitatively. - Abstract: Pressurized Thermal Shock (PTS) is identified as one of the safety issues where Computational Fluid Dynamics (CFD) can bring real benefits. The turbulence modeling may impact the overall accuracy of the calculated thermal loads on the vessel walls; therefore advanced methods for turbulent flows are required. The feasibility and mesh resolution of LES for single-phase PTS were assessed earlier in a companion paper. The current investigation deals with the accuracy of the LES approach with respect to the experiment. Experimental data from the Rossendorf Coolant Mixing (ROCOM) facility are used as a basis for validation. Three test cases with different flow rates are considered. They correspond to a buoyancy-driven, a momentum-driven, and a transitional coolant mixing pattern in the downcomer. Time- and frequency-domain analyses are employed for comparison of the numerical and experimental data. The investigation shows a good qualitative prediction of the bulk flow patterns. The fluctuations are modeled correctly. A conservative estimate of the temperature drop near the wall can be obtained from the numerical results with a safety factor of 1.1-1.3. In general, the current LES gives a realistic and reliable description of the considered coolant mixing experiments. The accuracy of the prediction is definitely improved with respect to earlier CFD simulations.
A high accuracy algorithm of displacement measurement for a micro-positioning stage
Directory of Open Access Journals (Sweden)
Xiang Zhang
2017-05-01
Full Text Available A high-accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed, based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations is conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability; the resolution can theoretically reach 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross-correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) is built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
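For reference, the classical normalized cross-correlation matching that the proposed algorithm is compared against can be sketched in one dimension on synthetic data (the 2-D image case used for the stage is analogous):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# One-dimensional integer-pixel matching on synthetic data: slide the
# template over the signal and take the shift with the highest score.
rng = np.random.default_rng(1)
signal = rng.normal(size=200)
true_shift = 37
template = signal[true_shift:true_shift + 50]

scores = [ncc(signal[s:s + 50], template) for s in range(150)]
print(int(np.argmax(scores)))  # → 37, the integer-pixel displacement
```

Subpixel resolution, as in the paper, is then obtained by refining around this integer-pixel peak, e.g. by interpolating the score surface.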
Energy Technology Data Exchange (ETDEWEB)
Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-07-15
Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected as a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of proposed numerical methods of simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used for large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near wall resolution (NWR) by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are
Energy Technology Data Exchange (ETDEWEB)
Inagaki, M.; Abe, K. [Toyota Central Research and Development Labs., Inc., Aichi (Japan)
1998-07-25
With the recent advances in computers, large eddy simulation (LES) has become applicable to engineering prediction. However, most engineering applications need to use nonorthogonal curvilinear coordinate systems. The staggered grids usually used in LES in orthogonal coordinates do not keep conservative properties in nonorthogonal curvilinear coordinates. On the other hand, colocated grids can be applied in nonorthogonal curvilinear coordinates without losing their conservative properties, although their prediction accuracy is not as high as that of staggered grids in orthogonal coordinates, especially on coarse grids. In this research, the discretization method of the colocated grids is modified to improve its prediction accuracy. Plane channel flows are simulated on four grids of different resolution using the modified colocated grids and the original colocated grids. The results show that the modified colocated grids have higher accuracy than the original colocated grids. 17 refs., 13 figs., 1 tab.
Directory of Open Access Journals (Sweden)
J. R. Santillan
2016-09-01
Full Text Available In this paper, we investigated how survey configuration and the type of interpolation method can affect the accuracy of river flow simulations that utilize a LIDAR DTM integrated with an interpolated river bed as its main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best approaches to produce satisfactory river flow simulation outputs.
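A minimal sketch of the Inverse Distance-Weighted interpolation used as one of the two methods above, with invented river-bed points (Ordinary Kriging additionally requires a fitted variogram and is not shown):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance-Weighted interpolation (minimal sketch)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # guard against division by zero
    w = 1.0 / d ** power
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

# Invented river-bed elevation points and one query location
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])
query = np.array([[0.5, 0.5]])

print(float(idw(xy, z, query)[0]))  # → 11.5 (all four points equidistant)
```

The RMSEs reported in the paper would then be computed between such interpolated elevations and independent check-point measurements.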
Wang, Y; He, S; Guo, Y; Wang, S; Chen, S
2013-08-01
To evaluate the accuracy of volumetric measurement of simulated root resorption cavities based on cone beam computed tomography (CBCT), in comparison with that of micro-computed tomography (Micro-CT), which served as the reference. The State Key Laboratory of Oral Diseases at Sichuan University. Thirty-two bovine teeth were included for standardized CBCT scanning and Micro-CT scanning before and after the simulation of different degrees of root resorption. The teeth were divided into three groups according to the depths of the root resorption cavity (group 1: 0.15, 0.2, 0.3 mm; group 2: 0.6, 1.0 mm; group 3: 1.5, 2.0, 3.0 mm). Each depth included four specimens. Differences in tooth volume before and after simulated root resorption were then calculated from CBCT and Micro-CT scans, respectively. The overall between-method agreement of the measurements was evaluated using the concordance correlation coefficient (CCC). For the first group, the average volume of the resorption cavity was 1.07 mm³, and the between-method agreement of measurement for the volume changes was low (CCC = 0.098). For the second and third groups, the average volumes of the resorption cavities were 3.47 and 6.73 mm³ respectively, and the between-method agreements were good (CCC = 0.828 and 0.895, respectively). The accuracy of 3-D quantitative volumetric measurement of simulated root resorption based on CBCT was fairly good in detecting simulated resorption cavities larger than 3.47 mm³, while it was not sufficient for measuring resorption cavities smaller than 1.07 mm³. This method could be applied in future studies of root resorption, although further studies are required to improve its accuracy. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
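The concordance correlation coefficient (CCC) used above to quantify between-method agreement can be computed as in the following sketch; the paired volume measurements are invented for illustration:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient (population variances)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Invented paired volume changes (mm^3): CBCT vs. the Micro-CT reference
cbct = np.array([3.4, 3.6, 6.5, 6.9, 1.2, 0.9])
micro = np.array([3.5, 3.5, 6.7, 6.8, 1.1, 1.0])

print(round(float(ccc(cbct, micro)), 3))  # close agreement gives CCC near 1
```

Unlike the plain Pearson correlation, the CCC also penalizes systematic offsets between the two methods through the (mx - my)² term, which is why it is preferred for method-agreement studies.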
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and thus difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and the expansion of its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images using testing points with different accuracy and reliability, sourced from high-accuracy reference data and from field measurement. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
Yurkin, M.A.; de Kanter, D.; Hoekstra, A.G.
2010-01-01
We studied the accuracy of the discrete dipole approximation (DDA) for simulations of absorption and scattering spectra by gold nanoparticles (spheres, cubes, and rods ranging in size from 10 to 100 nm). We varied the dipole resolution and applied two DDA formulations, employing the standard lattice
Accuracy assessment of high-rate GPS measurements for seismology
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
A simulation of driven reconnection by a high precision MHD code
International Nuclear Information System (INIS)
Kusano, Kanya; Ouchi, Yasuo; Hayashi, Takaya; Horiuchi, Ritoku; Watanabe, Kunihiko; Sato, Tetsuya.
1988-01-01
A high precision MHD code, which has fourth-order accuracy in both the spatial and time steps, is developed and applied to simulation studies of two-dimensional driven reconnection. It is confirmed that the numerical dissipation of this new scheme is much less than that of the two-step Lax-Wendroff scheme. The effect of plasma compressibility on the reconnection dynamics is investigated by means of this high precision code. (author)
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-10-12
In order to improve the accuracy of ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascaded Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for an arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we compensated it: the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
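The basic single-stage CIC interpolator underlying the parallel algorithm can be sketched as follows; this is the textbook structure (comb at the low rate, zero-stuffing, integrator at the high rate), not the paper's parallel decomposition:

```python
# Textbook single-stage CIC interpolator: comb at the low rate, zero-stuffing
# upsampler, integrator at the high rate. A single stage reduces to a
# zero-order hold of the input, which the multi-stage versions smooth further.
def cic_interpolate(x, R):
    comb = [a - b for a, b in zip(x, [0] + x[:-1])]  # y[n] = x[n] - x[n-1]
    upsampled = []
    for c in comb:                                   # insert R - 1 zeros
        upsampled.extend([c] + [0] * (R - 1))
    out, acc = [], 0
    for u in upsampled:                              # running-sum integrator
        acc += u
        out.append(acc)
    return out

print(cic_interpolate([1, 2, 3], 8)[:10])  # → [1, 1, 1, 1, 1, 1, 1, 1, 2, 2]
```

Because the structure uses only additions and delays, it maps naturally onto FPGA logic, which motivates its use for the fine time-delay steps described above.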
DEFF Research Database (Denmark)
Busck, Jens; Heiselberg, Henning
2004-01-01
We have developed a mono-static staring 3-D laser radar based on gated viewing with range accuracy below 1 m at 10 m and 1 cm at 100. We use a high sensitivity, fast, intensified CCD camera, and a Nd:Yag passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...
High accuracy autonomous navigation using the global positioning system (GPS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.
A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure
International Nuclear Information System (INIS)
Liu Jizhi; Chen Xingbi
2009-01-01
A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)
High accuracy wavelength calibration for a scanning visible spectrometer
Energy Technology Data Exchange (ETDEWEB)
Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2010-10-15
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ≈0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (≈0.005 Å) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
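As a rough illustration of the kind of parameter fit such a calibration performs (not the authors' actual model), the sketch below fits a sine-drive dispersion law λ(n) = K·sin(θ₀ + s·n) to synthetic calibration lines; the motor counts, noise level and parameter values are all hypothetical:

```python
import numpy as np

# Synthetic calibration-lamp lines: motor counts n and known wavelengths (angstroms).
# The sine-drive model lambda(n) = K*sin(theta0 + s*n) is linear in
# (K*cos(theta0), K*sin(theta0)) once the step size s is fixed, so we grid-search
# over s and solve the remaining linear least-squares problem at each trial value.
rng = np.random.default_rng(0)
counts = np.linspace(0, 50_000, 20)
K, theta0, s_true = 12_000.0, 0.30, 1.2e-5       # hypothetical parameters
lam = K * np.sin(theta0 + s_true * counts) + rng.normal(0, 0.05, counts.size)

def rms_for(s):
    # lambda = a*sin(s*n) + b*cos(s*n): linear least squares for (a, b)
    A = np.column_stack([np.sin(s * counts), np.cos(s * counts)])
    coef, *_ = np.linalg.lstsq(A, lam, rcond=None)
    return np.sqrt(np.mean((lam - A @ coef) ** 2))

grid = np.linspace(1.0e-5, 1.4e-5, 401)
s_fit = grid[np.argmin([rms_for(s) for s in grid])]
print(s_fit, rms_for(s_fit))   # step size recovered to grid resolution
```

Because the wavelength signal is large compared with the noise, the residual is extremely sensitive to the step size, which is what makes a multi-line fit of this kind so precise.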
Switched-capacitor techniques for high-accuracy filter and ADC design
Quinn, P.J.; Roermund, van A.H.M.
2007-01-01
Switched-capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by (a) capacitor...
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.
Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement
Directory of Open Access Journals (Sweden)
Xianglei Liu
2018-01-01
Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric monitoring of the displacement of a high-speed shaking-table structure. The purpose of this paper is to validate the accuracy of the three-dimensional coordinates of the shaking-table structure acquired from the presented high-speed videogrammetric measuring system. All of the key intermediate steps are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of a shaking-table structure.
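The 3D coordinates in such a system come from multi-view triangulation of the detected target centers. A minimal two-view sketch using linear (DLT) triangulation; this is not the paper's actual pipeline, and the camera matrices and point below are hypothetical:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices; x1, x2: pixel coords (u, v).
    Builds the homogeneous system A X = 0 and solves it via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two hypothetical cameras: identity pose and a 1 m baseline along x
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

Xtrue = np.array([0.2, -0.1, 5.0])
Xhat = triangulate(P1, P2, project(P1, Xtrue), project(P2, Xtrue))
print(Xhat)   # recovers Xtrue up to floating-point error
```

In a real videogrammetric setup the projection matrices come from the control-network calibration, and many redundant views are combined in a bundle adjustment rather than a single two-view solve.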
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
Inertial navigation systems are the core components of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy, but the errors caused by misalignment angles and scale factor error cannot be eliminated by dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with a procedure for carrying it out. Simulated self-calibration experiments show that the scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Accuracy of Binary Black Hole waveforms for Advanced LIGO searches
Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela
2015-04-01
Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the effective-one-body, phenomenological, and traditional post-Newtonian ones. While numerical relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.
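Waveform accuracy comparisons of this kind are typically quantified by a "match": the normalized overlap of two waveforms, maximized over relative time shift. A white-noise toy version is sketched below (a real matched-filter search weights the inner product by the detector's inverse noise power spectral density, and the chirp-like signal here is purely illustrative):

```python
import numpy as np

def match(a, b):
    """Normalized overlap of two waveforms, maximized over circular time shift.

    Assumes a flat (white) noise spectrum; LIGO analyses instead weight each
    frequency bin by the inverse detector PSD.
    """
    A, B = np.fft.fft(a), np.fft.fft(b)
    corr = np.fft.ifft(A * np.conj(B))      # correlation at every time shift
    norm = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
    return np.max(np.abs(corr)) / norm

t = np.linspace(0, 1, 4096, endpoint=False)
h = np.sin(2 * np.pi * 50 * t**2) * np.exp(-((t - 0.5) / 0.2) ** 2)  # toy chirp

print(round(float(match(h, np.roll(h, 300))), 4))   # → 1.0 (same signal, shifted)
```

A time-shifted copy of the same waveform gives a match of 1; discrepant waveform models show up as matches below 1, and the mismatch (1 − match) bounds the fraction of signals a template bank would lose.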
An angle encoder for super-high resolution and super-high accuracy using SelfA
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-06-01
Angular measurement technology at high resolution for applications such as hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al 1998 Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable, high-accuracy angular verification. We apply these technologies to the development of a rotary encoder that meets the requirements of both super-high accuracy and super-high resolution. This paper presents the development of an encoder with 2²¹ = 2,097,152 resolutions per rotation (360°), corresponding to a 0.62″ signal period, achieved by combining a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by the National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder that guarantees ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2³³, corresponding to a 0.0015″ signal period
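The quoted signal period follows directly from the resolution: dividing a full turn by 2²¹ counts gives roughly 0.62″:

```python
# Signal period implied by an encoder resolution of 2**21 counts per turn.
ARCSEC_PER_TURN = 360 * 3600             # 1,296,000 arcseconds per rotation

period_21 = ARCSEC_PER_TURN / 2**21      # the ~0.62 arcsec period quoted above
period_33 = ARCSEC_PER_TURN / 2**33      # period at the minimum resolution
print(round(period_21, 3))               # → 0.618
```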
Status report on high fidelity reactor simulation
International Nuclear Information System (INIS)
Palmiotti, G.; Smith, M.; Rabiti, C.; Lewis, E.; Yang, W.; Leclere, M.; Siegel, A.; Fischer, P.; Kaushik, D.; Ragusa, J.; Lottes, J.; Smith, B.
2006-01-01
This report presents the effort under way at Argonne National Laboratory toward a comprehensive, integrated computational tool intended mainly for the high-fidelity simulation of sodium-cooled fast reactors. The main activities carried out involved neutronics, thermal hydraulics, coupling strategies, software architecture, and high-performance computing. A new neutronics code, UNIC, is being developed. The first phase involves the application of a spherical harmonics method to a general, unstructured three-dimensional mesh. The method also has been interfaced with a method of characteristics. The spherical harmonics equations were implemented in a stand-alone code that was then used to solve several benchmark problems. For thermal hydraulics, a computational fluid dynamics code called Nek5000, developed in the Mathematics and Computer Science Division for coupled hydrodynamics and heat transfer, has been applied to a single-pin, periodic cell in the wire-wrap geometry typical of advanced burner reactors. Numerical strategies for multiphysics coupling have been considered and higher-accuracy efficient methods proposed to finely simulate coupled neutronic/thermal-hydraulic reactor transients. Initial steps have been taken in order to couple UNIC and Nek5000, and simplified problems have been defined and solved for testing. Furthermore, we have begun developing a lightweight computational framework, based in part on carefully selected open source tools, to nonobtrusively and efficiently integrate the individual physics modules into a unified simulation tool.
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-03-01
For the purpose of achieving energy conservation by reducing vehicle weight, a survey was made of forming/processing technologies for new materials such as high-tensile steel and aluminum alloys, and the future development of 'high-grade sheet forming simulation technology' was studied. The goal of sheet forming simulation is to develop a method that precisely predicts dimensional accuracy (mainly springback) and sectional shape. When the forming simulation technology is applied to difficult-to-process materials such as high-tensile steel and aluminum alloys, or to new materials such as super metals, the predicted accuracy suffers because the material models used do not describe the characteristics of these materials. The key task is therefore to upgrade the forming simulation of difficult-to-process and new materials, for example by precisely describing plastic anisotropy and material instability phenomena in a form suited to such materials. A further task is the development of continuous simulation technology covering a series of processes: press forming, welding assembly, and strength analysis. (NEDO)
Velocity measurement accuracy in optical microhemodynamics: experiment and simulation
International Nuclear Information System (INIS)
Chayer, Boris; Cloutier, Guy; L Pitts, Katie; Fenech, Marianne
2012-01-01
Micro particle image velocimetry (µPIV) is a common method to assess flow behavior in blood microvessels in vitro as well as in vivo. The use of red blood cells (RBCs) as tracer particles, as is generally done in vivo, creates a large depth of correlation (DOC), potentially as large as the vessel itself, which decreases the accuracy of the method. The limitations of µPIV for blood flow measurements based on RBC tracking still have to be evaluated. In this study, in vitro and in silico models were used to understand the effect of the DOC on blood flow measurements using RBCs as µPIV tracer particles. We employed a µPIV technique to assess blood flow in a glass tube of 15 µm radius with a high-speed CMOS camera. The tube was perfused with a sample of 40% hematocrit blood, and the flow measured by a cross-correlating speckle tracking technique was compared to the flow rate of the pump. In addition, a three-dimensional mechanical RBC-flow model was used to simulate optical moving speckle at 20% and 40% hematocrits, in circular tubes of 15 and 20 µm radius, at different focal planes and flow rates, and for various velocity profile shapes. The velocity profiles extracted from the simulated pictures agreed well with the corresponding velocity profiles implemented in the mechanical model. The flow rates from both the in vitro flow phantom and the mathematical model were accurately measured, with less than 10% error. Simulation results demonstrated that the hematocrit (paired t tests, p = 0.5) and the tube radius (p = 0.1) do not influence the precision of the measured flow rate, whereas the shape of the velocity profile (p < 0.001) and the location of the focal plane (p < 0.001) do, as indicated by measured errors ranging from 3% to 97%. In conclusion, the use of RBCs as tracer particles creates a large DOC and affects the image processing required to estimate the flow velocities. We found that the current µPIV method is acceptable for estimating the flow rate
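The cross-correlating speckle tracking mentioned above reduces, at its core, to locating the peak of the cross-correlation between successive frames. A minimal integer-pixel sketch on synthetic speckle (not the authors' processing chain; real PIV adds interrogation windows and sub-pixel peak fitting):

```python
import numpy as np

def displacement(frame0, frame1):
    """Estimate the integer-pixel shift between two frames from the peak of
    their circular cross-correlation, computed via FFT."""
    F0, F1 = np.fft.fft2(frame0), np.fft.fft2(frame1)
    corr = np.fft.ifft2(np.conj(F0) * F1).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # Map wrapped indices to signed shifts
    return (dy - ny if dy > ny // 2 else dy,
            dx - nx if dx > nx // 2 else dx)

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))                       # synthetic speckle pattern
moved = np.roll(speckle, shift=(3, -5), axis=(0, 1)) # "next frame"
print(displacement(speckle, moved))                  # → (3, -5)
```

Dividing the recovered displacement by the interframe time and the optical magnification yields the velocity estimate; the DOC issue discussed above enters because out-of-focus RBCs also contribute to the correlation peak.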
Directory of Open Access Journals (Sweden)
Fan-Yun Pai
2015-11-01
Full Text Available To consistently produce high-quality products, a quality management system such as ISO 9001:2000 or TS 16949 must be implemented in practice. One core instrument of TS 16949 is MSA (Measurement System Analysis), which ranks the capability of a measurement system and ensures that the quality characteristics of the product are maintained through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods misjudged as defective parts) and Type II errors (defective parts misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, a key component in precision machines, have significant attributes with respect to positioning and transmission. Failures of lead accuracy and of the axial gap of a ball screw can have negative and expensive effects on machine positioning accuracy. Consequently, a capable measurement system can yield great savings by detecting Type I and Type II errors, whereas a measurement system that fails with respect to the specification of the product will likely commit them. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on a few simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish the bias, the linearity, the variance of the normal distribution, and the probability density function, and we forecast the possible area distribution in the real case. After the simulation, the measurement capability is improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
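A Monte Carlo sketch of how gauge bias and variance translate into Type I and Type II rates; all distributions and specification limits below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
LSL, USL = 9.97, 10.03     # hypothetical spec limits for a ball screw lead (mm)

true = rng.normal(10.0, 0.015, 200_000)                     # true part dimensions
measured = true + 0.002 + rng.normal(0, 0.005, true.size)   # gauge bias + noise

good = (true >= LSL) & (true <= USL)
accepted = (measured >= LSL) & (measured <= USL)

type1 = np.mean(good & ~accepted)   # Type I rate: good parts rejected
type2 = np.mean(~good & accepted)   # Type II rate: bad parts accepted
print(round(float(type1), 4), round(float(type2), 4))
```

Sweeping the bias and noise parameters shows how quickly the misjudgment rates grow as the gauge variance approaches the process tolerance, which is exactly what an MSA study is meant to guard against.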
High-fidelity large eddy simulation for supersonic jet noise prediction
Aikens, Kurt M.
The problem of intense sound radiation from supersonic jets is a concern for both civil and military applications. As a result, many experimental and computational efforts are focused on evaluating possible noise suppression techniques. Large-eddy simulation (LES) is utilized in many computational studies to simulate the turbulent jet flowfield, and integral methods such as the Ffowcs Williams-Hawkings (FWH) method are then used to propagate the sound waves to the far field. Improving the accuracy of this two-step methodology and evaluating beveled converging-diverging nozzles for noise suppression are the main tasks of this work. First, a series of numerical experiments is undertaken to ensure adequate numerical accuracy of the FWH methodology. This includes an analysis of different treatments for the downstream integration surface: with or without an end-cap, averaging over multiple end-caps, and including an approximate surface-integral correction term. Second, shock-capturing methods based on characteristic filtering and adaptive spatial filtering are used to extend a highly parallelizable multiblock subsonic LES code to enable simulations of supersonic jets. The code is based on high-order numerical methods for accurate prediction of the acoustic sources and propagation of the sound waves. Furthermore, this new code is more efficient than the legacy version, allows cylindrical multiblock topologies, and is capable of simulating nozzles with resolved turbulent boundary layers when coupled with an approximate turbulent inflow boundary condition. Even though such wall-resolved simulations are more physically accurate, their expense is often prohibitive. To make simulations more economical, a wall model is developed and implemented. The wall modeling methodology is validated for turbulent quasi-incompressible and compressible zero-pressure-gradient flat-plate boundary layers, and for subsonic and supersonic jets. The supersonic code additions and the
Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua
2017-02-01
The grating tiling technology is one of the most effective means of increasing the aperture of gratings. The line-density error (LDE) between sub-gratings degrades the performance of tiled gratings, so high-accuracy measurement and compensation of the LDE are essential for improving the output pulse characteristics of a tiled-grating compressor. In this paper, the influence of the LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, and the output beam drift and output pulse broadening resulting from the LDE are presented. Based on the numerical results, we propose a compensation method that reduces the degradation of the tiled-grating compressor by applying an angular tilt error and a longitudinal piston error at the same time. Moreover, a monitoring system is set up to measure the LDE between sub-gratings accurately, and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, which provides an efficient way to guide the adjustment of tiled gratings.
Fast and High Accuracy Wire Scanner
Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A
2009-01-01
Scanning of a high-intensity particle beam imposes challenging requirements on a wire scanner system: it is expected to reach a scanning speed of 20 m·s⁻¹ with a position accuracy of the order of 1 μm, and in addition a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent-magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor, mounted on the rotary shaft inside the vacuum chamber, has to withstand a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...
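The position and timing figures above are linked through the scan speed: at full speed, a timing error maps directly into an apparent wire-position error (dx = v·dt). A quick check of the numbers:

```python
# Position error implied by a timing error at full scan speed: dx = v * dt.
v = 20.0         # scan speed, m/s
dt = 1e-3        # quoted timing accuracy, s
print(v * dt)    # 20 mm of wire travel per millisecond

# Conversely, resolving the 1 um position goal purely from timing would
# require roughly 50 ns time resolution:
print(1e-6 / v)  # ~5e-8 s
```

This is why the design relies on an accurate angular position sensor on the rotor shaft rather than on timing alone.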
S. Hoozée; M. Vanhoucke; W. Bruggeman
2010-01-01
This paper compares the accuracy of traditional ABC and time-driven ABC in complex and dynamic environments through simulation analysis. First, when unit times in time-driven ABC are known or can be flawlessly estimated, time-driven ABC coincides with the benchmark system and in this case our results show that the overall accuracy of traditional ABC depends on (1) existing capacity utilization, (2) diversity in the actual mix of productive work, and (3) error in the estimated percentage mix. ...
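In time-driven ABC, activity cost is computed as unit time × transaction volume × a capacity cost rate. A toy sketch with hypothetical figures (not drawn from the paper's simulations):

```python
# Time-driven ABC in miniature: cost rate = capacity cost / practical capacity,
# activity cost = unit time * volume * cost rate. All figures are hypothetical.
capacity_cost = 560_000.0       # quarterly cost of the department ($)
practical_minutes = 700_000.0   # practical capacity (minutes per quarter)
rate = capacity_cost / practical_minutes       # $0.80 per minute

activities = {                  # activity: (unit time in minutes, volume)
    "process order": (8.0, 49_000),
    "handle inquiry": (44.0, 1_400),
    "check credit": (50.0, 2_500),
}
cost = {name: t * n * rate for name, (t, n) in activities.items()}
used = sum(t * n for t, n in activities.values())
unused = practical_minutes - used              # idle capacity, in minutes
print(round(cost["process order"]), round(unused))   # → 313600 121400
```

Note how errors in the estimated unit times propagate multiplicatively into every activity cost, which is exactly the sensitivity the simulation analysis above investigates.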
Hydrologic Simulation in Mediterranean flood prone Watersheds using high-resolution quality data
Eirini Vozinaki, Anthi; Alexakis, Dimitrios; Pappa, Polixeni; Tsanis, Ioannis
2015-04-01
Flooding is a significant threat that causes major disruption in many societies worldwide. Ongoing climate change further increases flooding risk, making it a substantial menace to many societies and their economies. Improvements in the spatial resolution and accuracy of topography and land-use data from remote sensing techniques enable integrated flood inundation simulations. In this work, hydrological analysis of several historic flood events in Mediterranean flood-prone watersheds (island of Crete, Greece) is carried out. Satellite images of high resolution are processed: a very high resolution (VHR) digital elevation model (DEM) is produced from a GeoEye-1 0.5 m resolution satellite stereo pair and is used for floodplain management and mapping applications such as watershed delineation and river cross-section extraction; sophisticated classification algorithms are implemented to improve the accuracy of land use/land cover maps; and soil maps are updated by means of radar satellite images. These high-resolution data are used in a novel way to simulate and validate several historical flood events in Mediterranean watersheds that have experienced severe flooding in the past. The hydrologic/hydraulic models used for flood inundation simulation in this work are HEC-HMS and HEC-RAS. The Natural Resources Conservation Service (NRCS) curve number (CN) approach is implemented to account for the effect of LULC and soil on the hydrologic response of the catchment. The use of high-resolution data provides detailed, high-precision validation results. Furthermore, meteorological forecasting data are combined with the simulation model results to support the development of an integrated flood forecasting and early warning system capable of confronting or even preventing this imminent risk. The research reported in this paper was fully supported by the
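The NRCS curve-number step mentioned above converts storm rainfall to direct runoff with the standard CN equations; a minimal sketch in metric form (the rainfall and CN values are illustrative only):

```python
def scs_runoff(P, CN):
    """NRCS curve-number direct runoff Q (mm) for storm rainfall P (mm).

    S is the potential maximum retention; runoff begins once rainfall
    exceeds the initial abstraction Ia = 0.2 * S.
    """
    S = 25400.0 / CN - 254.0
    Ia = 0.2 * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(round(scs_runoff(100.0, 80), 1))   # 100 mm storm on CN = 80 soil → 50.5
```

In the workflow above, the CN value for each sub-catchment is derived from the high-resolution LULC and soil maps, so classification accuracy feeds directly into the simulated runoff volumes.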
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji
2017-01-01
In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can easily be applied to problems with complex geometries. However, (a) accurate solution of a high-Reynolds-number flow field near a solid wall requires that the grid intervals in the direction normal to the wall be much finer than those in a direction parallel to the wall, so the use of grid cells with extremely high aspect ratio (10³ to 10⁶) may become mandatory; and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations on triangular/tetrahedral grids tends to deteriorate rapidly as the cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurately resolving high-Reynolds-number flow fields near a solid wall, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells, as will be presented in the main paper. By avoiding the pitfalls identified by the 2D framework and its 3D extension, it has been shown numerically.
Directory of Open Access Journals (Sweden)
Agota Fodor
Full Text Available Nowadays, genome-wide association studies (GWAS) and genomic selection (GS) methods, which use genome-wide marker data for phenotype prediction, are of much potential interest in plant breeding. However, to our knowledge, no studies have yet been performed on the predictive ability of these methods for structured traits when using training populations with high levels of genetic diversity. One example of such a highly heterozygous, perennial species is grapevine. The present study compares the accuracy of models based on GWAS or GS alone, or in combination, for predicting simple or complex traits, linked or not with population structure. To explore the relevance of these methods in this context, we performed simulations using approximately 90,000 SNPs on a population of 3,000 individuals structured into three groups and corresponding to published grapevine diversity data. To estimate the parameters of the prediction models, we defined four training populations of 1,000 individuals, corresponding to these three groups and a core collection. Finally, to estimate the accuracy of the models, we also simulated four breeding populations of 200 individuals. Although prediction accuracy was low when breeding populations were too distant from the training populations, high accuracy levels were obtained using the core collection alone as the training population. The highest prediction accuracy (up to 0.9) was obtained using the combined GWAS-GS model. We thus recommend using the combined prediction model and a core collection as the training population for grapevine breeding or for other important economic crops with the same characteristics.
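A GS-style prediction can be sketched as ridge regression on simulated markers (an rrBLUP-like toy, not the authors' simulation design; the population sizes, marker counts and penalty below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n_train, n_breed, p = 400, 200, 300

# Biallelic marker genotypes coded 0/1/2; 30 causal SNPs with additive effects
X = rng.integers(0, 3, size=(n_train + n_breed, p)).astype(float)
beta = np.zeros(p)
causal = rng.choice(p, 30, replace=False)
beta[causal] = rng.normal(0.0, 1.0, 30)
g = X @ beta                                     # true breeding values
y = g + rng.normal(0.0, 0.5 * g.std(), g.size)   # phenotypes, high heritability

Xt, yt = X[:n_train], y[:n_train]                # "training population"
Xb, yb = X[n_train:], y[n_train:]                # "breeding population"

lam = 10.0                                       # ridge penalty (hypothetical)
w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
accuracy = np.corrcoef(Xb @ w, yb)[0, 1]         # prediction accuracy
print(round(float(accuracy), 2))
```

Here the breeding population is drawn from the same distribution as the training population, so accuracy is high; the study's key point is that accuracy degrades when the two populations come from genetically distant groups.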
Directory of Open Access Journals (Sweden)
Tomohiro Fukuda
2014-12-01
Full Text Available The need for visual landscape assessment in large-scale projects, to evaluate the effects of a particular project on the surrounding landscape, has grown in recent years. Augmented reality (AR) has been considered for use as a landscape simulation system in which a landscape assessment object built from 3D models is superimposed on the present surroundings. With such a system, the time and cost needed to perform 3DCG modeling of the present surroundings, a major issue in virtual reality, are drastically reduced. This research presents the development of a 3D map-oriented handheld AR system that achieves geometric consistency by using a 3D map to obtain position data instead of GPS, which has low positioning accuracy, particularly in urban areas. The new system also features a gyroscope sensor to obtain posture data and a video camera to capture live video of the present surroundings. All these components are mounted in a smartphone and can be used for urban landscape assessment. Registration accuracy is evaluated for simulating an urban landscape from short to long range, the latter at a distance of approximately 2000 m, which is within the tolerance of landscape assessment. The developed AR system enables users to simulate a landscape from multiple and long-distance viewpoints simultaneously and to walk around the viewpoint fields using only a smartphone. In conclusion, the proposed method is evaluated as feasible and effective.
High-accuracy measurements of the normal specular reflectance
International Nuclear Information System (INIS)
Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy
2008-01-01
The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat a l'Energie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized with high accuracy. The spectrophotometer described here is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity.
A high accuracy land use/cover retrieval system
Directory of Open Access Journals (Sweden)
Alaa Hefnawy
2012-03-01
Full Text Available The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established for improving the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high-resolution dataset. The human-computer interactive system is based on a modified radial basis function for retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi- and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.
High accuracy satellite drag model (HASDM)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density in near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
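The DCA step described above reduces, in essence, to estimating amplitudes and phases of 24 h and 12 h harmonics from observed density corrections. A toy least-squares sketch of that idea (synthetic data and a cos/sin basis; not HASDM's actual estimator):

```python
import numpy as np

rng = np.random.default_rng(2)
lst = rng.uniform(0.0, 24.0, 200)        # local solar time samples, hours
w = 2.0 * np.pi / 24.0                   # diurnal angular frequency

# synthetic "observed" fractional density corrections: diurnal + semidiurnal + noise
truth = 0.15 * np.cos(w * (lst - 14.0)) + 0.05 * np.cos(2.0 * w * (lst - 3.0))
obs = truth + 0.01 * rng.normal(size=lst.size)

# linear least squares in a cos/sin basis yields the amplitudes and phases directly
A = np.column_stack([np.cos(w * lst), np.sin(w * lst),
                     np.cos(2.0 * w * lst), np.sin(2.0 * w * lst),
                     np.ones_like(lst)])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
diurnal_amp = np.hypot(coef[0], coef[1])         # should recover ~0.15
semidiurnal_amp = np.hypot(coef[2], coef[3])     # should recover ~0.05
```

The real DCA additionally resolves the latitude and altitude dependence and runs as a sequential estimator on drag observations rather than on direct density samples.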
Jizhi, Liu; Xingbi, Chen
2009-12-01
A new quasi-three-dimensional (quasi-3D) numerical simulation method for a high-voltage level-shifting circuit structure is proposed. The performance of the 3D structure is analyzed by combining several 2D device structures; the 2D devices lie in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, a full 3D device simulation tool, the quasi-3D simulation method gives results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy, and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases, with advantages such as saving computing time, making no demands on high-end computing hardware, and being easy to operate.
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new-type three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation
Directory of Open Access Journals (Sweden)
Jinsong Hu
2013-01-01
Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinite norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.
DEFF Research Database (Denmark)
Hansen, David Christoffer; Seco, Joao; Sørensen, Thomas Sangild
2015-01-01
Background. Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in...... development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT using dual energy CT scans of phantoms to establish reference accuracy levels. Material and methods. A CT calibration phantom and an abdomen cross section...... phantom containing inserts were scanned with dual energy and single energy CT with a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling...
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information on the sun's position, obtained either by direct sensing or from GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating the sensor-based shadow-tracking method into a two-stage concentration solar hybrid parabolic trough allows the system to maintain high concentration with high tracking accuracy.
High accuracy digital aging monitor based on PLL-VCO circuit
International Nuclear Information System (INIS)
Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong
2015-01-01
As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect owing to the characteristic of the PLL, whose frequency has no relationship with the circuit aging phenomenon. The PLL-VCO monitor is implemented in TSMC low-power 65 nm CMOS technology, and it occupies an area of 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy at high temperature by 2.4% and at high voltage by 18.7%. (semiconductor integrated circuits)
A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.
Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G
2017-02-01
The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvres, and a review of British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (the so-called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvres may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.
Heidarinejad, Mohammad
This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies and focuses modeling efforts on the most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is "Building classification can provide minimal required inputs for rapid and accurate energy simulations for a large number of buildings". The critical literature review indicated there is a lack of studies that (1) consider a synoptic point of view rather than the case study approach, (2) analyze the influence of different granularities of energy use, (3) identify key variables based on the heat transfer processes, and (4) automate the procedure to quantify model complexity with accuracy. Therefore, three dissertation objectives are designed to test the dissertation hypothesis: (1) develop different classes of buildings based on their energy use patterns, (2) develop different building energy simulation approaches for the identified classes of buildings to quantify tradeoffs between model accuracy and complexity, and (3) demonstrate building simulation approaches for case studies. Penn State's and Harvard's campus buildings as well as high performance LEED NC office buildings are test beds for this study to develop different classes of buildings. The campus buildings include detailed chilled water, electricity, and steam data, enabling classification of buildings into externally-load, internally-load, or mixed-load dominated. The energy use of the internally-load buildings is primarily a function of the internal loads and their schedules. Externally-load dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most of the commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate
Vasconcelos, Karla de Faria; Rovaris, Karla; Nascimento, Eduarda Helena Leandro; Oliveira, Matheus Lima; Távora, Débora de Melo; Bóscolo, Frab Norberto
2017-11-01
To evaluate the performance of conventional radiography and photostimulable phosphor (PSP) plates in the detection of simulated internal root resorption (IRR) lesions in early stages. Twenty single-rooted teeth were X-rayed before and after having a simulated early IRR lesion. Three imaging systems were used: Kodak InSight dental film and two PSP digital systems, Digora Optime and VistaScan. The digital images were displayed on a 20.1″ LCD monitor using the native software of each system, and the conventional radiographs were evaluated on a masked light box. Two radiologists were asked to indicate the presence or absence of IRR and, after two weeks, all images were re-evaluated. Cohen's kappa coefficient was calculated to assess intra- and interobserver agreement. The three imaging systems were compared using the Kruskal-Wallis test. For interexaminer agreement, overall kappa values were 0.70, 0.65 and 0.70 for conventional film, Digora Optime and VistaScan, respectively. Both conventional and digital radiography presented low sensitivity, specificity, accuracy, and positive and negative predictive values, with no significant difference between imaging systems (p = .0725). The performance of conventional radiography and PSP plates was similar in the detection of simulated early-stage IRR lesions, with low accuracy.
Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji
2012-07-01
With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using phantoms with artificial nodules. These phantom studies are limited, however, in their ability to reproduce nodules accurately and in the variety of sizes and densities required. Therefore, we propose a new approach using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules in the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate within an error of 20% for nodules >5 mm and with a difference of 400-600 HU between nodule density and background (lung) CT value. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We conclude that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
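The central idea — convolving an ideal nodule with the system's point spread function and then applying volumetry — can be sketched as follows, with an assumed isotropic Gaussian PSF and simple half-contrast threshold volumetry standing in for the measured PSF and the commercial software:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# ideal spherical nodule on a lung background (values in HU; toy geometry)
n = 64
z, y, x = np.mgrid[:n, :n, :n]
r = np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2)
background, contrast = -800.0, 500.0     # lung HU and nodule-background difference
ideal = np.where(r <= 8.0, background + contrast, background)

# blur with a Gaussian standing in for the measured system PSF
simulated = gaussian_filter(ideal, sigma=1.0)

# half-contrast threshold volumetry, in voxels
true_vol = int(np.sum(ideal > background + contrast / 2))
meas_vol = int(np.sum(simulated > background + contrast / 2))
error_pct = 100.0 * (meas_vol - true_vol) / true_vol
```

In the paper's method the PSF is measured on the actual CT system, and the simulated nodules are added onto clinical images (including noise and artifacts) before the volumetry software under test is applied.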
High current high accuracy IGBT pulse generator
International Nuclear Information System (INIS)
Nesterov, V.V.; Donaldson, A.R.
1995-05-01
A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles
High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS
Kushnir, O.S.; Bevz, O.A.; Vlokh, O.G.
2000-01-01
Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with a high-accuracy null-polarimetric technique. The behaviour of the effect in the ferroelectric phase is attributed to quadratic spontaneous electrooptics.
Achieving High Accuracy in Calculations of NMR Parameters
DEFF Research Database (Denmark)
Faber, Rasmus
quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that makes accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical......, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show...... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction...
DEFF Research Database (Denmark)
Tosello, Guido; Gava, Alberto; Hansen, Hans Nørgaard
2009-01-01
Currently available software packages exhibit poor results accuracy when performing micro injection molding (µIM) simulations. However, with an appropriate set-up of the processing conditions, the quality of results can be improved. The effects on the simulation results of different and alternative...... process conditions are investigated, namely the nominal injection speed, as well as the cavity filling time and the evolution of the cavity injection pressure as experimental data. In addition, the sensitivity of the results to the quality of the rheological data is analyzed. Simulated results...... are compared with experiments in terms of flow front position at part and micro features levels, as well as cavity injection filling time measurements....
Topics in the numerical simulation of high temperature flows
International Nuclear Information System (INIS)
Cheret, R.; Dautray, R.; Desgraz, J.C.; Mercier, B.; Meurant, G.; Ovadia, J.; Sitt, B.
1984-06-01
In the fields of inertial confinement fusion, astrophysics, detonation, and other high-energy phenomena, one has to deal with multifluid flows involving high temperatures, high speeds and strong shocks initiated e.g. by chemical reactions or even by thermonuclear reactions. The simulation of multifluid flows is reviewed: we first cover Lagrangian methods, which have been successfully applied in the past. Then we describe our experience with newer adaptive mesh methods, originally designed to increase the accuracy of Lagrangian methods. Finally, some facts about Eulerian methods are recalled, with emphasis on the EAD scheme, which has recently been extended to the elasto-plastic case. High-temperature flows are then considered, described by the equations of radiation hydrodynamics. We show how conservation of energy can be preserved while solving the radiative transfer equation via the Monte Carlo method. For detonation, we discuss some models introduced to describe the initiation of detonation in heterogeneous explosives. Finally, we say a few words about the instability of these flows.
Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry
Energy Technology Data Exchange (ETDEWEB)
Hong G. Im; Arnaud Trouve; Christopher J. Rutland; Jacqueline H. Chen
2009-02-02
The TSTC project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of our approach is direct numerical simulation (DNS) featuring highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. The code named S3D, developed and shared with Chen and coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for spray dynamics, combustion, and pollutant formation processes in turbulent combustion. Major accomplishments include improved characteristic boundary conditions, fundamental studies of auto-ignition in turbulent stratified reactant mixtures, flame-wall interaction, and turbulent flame extinction by water spray. The overarching scientific issue in our recent investigations is to characterize criticality phenomena (ignition/extinction) in turbulent combustion, thereby developing unified criteria to identify ignition and extinction conditions. The computational development under TSTC has enabled the recent large-scale 3D turbulent combustion simulations conducted at Sandia National Laboratories.
Terascale High-Fidelity Simulations of Turbulent Combustion with Detailed Chemistry
Energy Technology Data Exchange (ETDEWEB)
Im, Hong G [University of Michigan; Trouve, Arnaud [University of Maryland; Rutland, Christopher J [University of Wisconsin; Chen, Jacqueline H [Sandia National Laboratories
2012-08-13
The TSTC project is a multi-university collaborative effort to develop a high-fidelity turbulent reacting flow simulation capability utilizing terascale, massively parallel computer technology. The main paradigm of our approach is direct numerical simulation (DNS) featuring highest temporal and spatial accuracy, allowing quantitative observations of the fine-scale physics found in turbulent reacting flows as well as providing a useful tool for development of sub-models needed in device-level simulations. The code named S3D, developed and shared with Chen and coworkers at Sandia National Laboratories, has been enhanced with new numerical algorithms and physical models to provide predictive capabilities for spray dynamics, combustion, and pollutant formation processes in turbulent combustion. Major accomplishments include improved characteristic boundary conditions, fundamental studies of auto-ignition in turbulent stratified reactant mixtures, flame-wall interaction, and turbulent flame extinction by water spray. The overarching scientific issue in our recent investigations is to characterize criticality phenomena (ignition/extinction) in turbulent combustion, thereby developing unified criteria to identify ignition and extinction conditions. The computational development under TSTC has enabled the recent large-scale 3D turbulent combustion simulations conducted at Sandia National Laboratories.
Accuracy of a hexapod parallel robot kinematics based external fixator.
Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer
2015-12-01
Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.
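The reported statistics (median absolute positioning error with quartile ranges of the signed errors) are straightforward to reproduce from logged maneuvers; the error values below are illustrative stand-ins, not the study's data:

```python
import numpy as np

# signed positioning errors (measured minus targeted), in mm, for a set of
# translation maneuvers -- hypothetical values for illustration
errors = np.array([0.1, -0.3, 0.4, 0.0, -0.5, 0.2, 0.3, -0.1, 0.5, -0.2])

median_abs = np.median(np.abs(errors))        # median absolute accuracy
q1, q3 = np.percentile(errors, [25, 75])      # quartile range of signed errors
```

The same computation applies per degree of freedom (translations in mm, torsions/angulations in degrees), which is how the study separates its 0.3 mm and 0.2° figures.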
International Nuclear Information System (INIS)
Ng, Cho; Akcelik, Volkan; Candel, Arno; Chen, Sheng; Ge, Lixin; Kabel, Andreas; Lee, Lie-Quan; Li, Zenghai; Prudencio, Ernesto; Schussman, Greg; Uplenchwar, Ravi; Xiao, Liling; Ko, Kwok; Austin, T.; Cary, J.R.; Ovtchinnikov, S.; Smith, D.N.; Werner, G.R.; Bellantoni, L.; TechX Corp.; Fermilab
2008-01-01
SciDAC1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' (AST) project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC CETs/Institutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider (ILC) and the Large Hadron Collider (LHC) in High Energy Physics (HEP), the JLab 12-GeV Upgrade in Nuclear Physics (NP), as well as the Spallation Neutron Source (SNS) and the Linac Coherent Light Source (LCLS) in Basic Energy Sciences (BES)
International Nuclear Information System (INIS)
Ng, C; Akcelik, V; Candel, A; Chen, S; Ge, L; Kabel, A; Lee, Lie-Quan; Li, Z; Prudencio, E; Schussman, G; Uplenchwar, R; Xiao, L; Ko, K; Austin, T; Cary, J R; Ovtchinnikov, S; Smith, D N; Werner, G R; Bellantoni, L
2008-01-01
SciDAC-1, with its support for the 'Advanced Computing for 21st Century Accelerator Science and Technology' project, witnessed dramatic advances in electromagnetic (EM) simulations for the design and optimization of important accelerators across the Office of Science. In SciDAC2, EM simulations continue to play an important role in the 'Community Petascale Project for Accelerator Science and Simulation' (ComPASS), through close collaborations with SciDAC Centers and Insitutes in computational science. Existing codes will be improved and new multi-physics tools will be developed to model large accelerator systems with unprecedented realism and high accuracy using computing resources at petascale. These tools aim at targeting the most challenging problems facing the ComPASS project. Supported by advances in computational science research, they have been successfully applied to the International Linear Collider and the Large Hadron Collider in high energy physics, the JLab 12-GeV Upgrade in nuclear physics, and the Spallation Neutron Source and the Linac Coherent Light Source in basic energy sciences
High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing
DEFF Research Database (Denmark)
Breinbjerg, Olav
2017-01-01
The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have...... matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...
High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)
Energy Technology Data Exchange (ETDEWEB)
Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.
2012-07-01
We have developed a portable, computerized, low-power-consumption system called the High Accuracy Piezoelectric Kinemometer (herein VUAE). The high accuracy achieved by the VUAE makes it possible to use it to obtain reference measurements for systems that measure vehicle speed; the VUAE can therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) pairs of ultrasonic transmitter-receivers, herein E-Rult. The transmitters in the n E-Rult pairs generate n ultrasonic barriers, and the receivers pick up the echoes when the vehicle crosses the barriers. Digital processing of the echo signals yields usable signals, and cross-correlation techniques then make a highly exact estimation of the vehicle's speed possible. The log of the moments of interception and the distance between each of the n ultrasonic barriers allows a highly exact estimation of the speed of the vehicle. VUAE speed measurements were compared to a speed reference system based on piezoelectric cables. (Author) 11 refs.
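The cross-correlation step can be sketched as follows: a vehicle crossing two barriers produces the same echo signature delayed in time, and the lag of the correlation peak gives the transit time between barriers. The sampling rate, barrier spacing, and white-noise echo model are illustrative assumptions:

```python
import numpy as np

fs = 100_000.0        # sampling rate in Hz (assumed)
gap = 0.5             # distance between the two ultrasonic barriers in m (assumed)
true_speed = 20.0     # m/s, used only to synthesize the test signals
delay_n = int(round(gap / true_speed * fs))   # delay in samples between barriers

rng = np.random.default_rng(1)
echo = rng.normal(size=4096)                  # echo signature seen at barrier 1
sig1 = echo + 0.1 * rng.normal(size=4096)
sig2 = np.concatenate([np.zeros(delay_n), echo])[:4096] + 0.1 * rng.normal(size=4096)

# cross-correlate to find the lag between the two barrier crossings
corr = np.correlate(sig2, sig1, mode="full")
lag = int(np.argmax(corr)) - (len(sig1) - 1)
speed = gap * fs / lag                        # recovered vehicle speed, m/s
```

Sub-sample interpolation of the correlation peak would refine the estimate further, which is one way such a system can reach reference-grade accuracy.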
Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina
2012-03-01
Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.
International Nuclear Information System (INIS)
Patil, Sunil; Tafti, Danesh
2012-01-01
Highlights: ► Large eddy simulation. ► Wall layer modeling. ► Synthetic inlet turbulence. ► Swirl flows. - Abstract: Large eddy simulations of complex high Reynolds number flows are carried out with the near wall region being modeled with a zonal two layer model. A novel formulation for solving the turbulent boundary layer equation for the effective tangential velocity in a generalized co-ordinate system is presented and applied in the near wall zonal treatment. This formulation reduces the computational time in the inner layer significantly compared to the conventional two layer formulations present in the literature and is most suitable for complex geometries involving body fitted structured and unstructured meshes. The cost effectiveness and accuracy of the proposed wall model, used with the synthetic eddy method (SEM) to generate inlet turbulence, is investigated in turbulent channel flow, flow over a backward facing step, and confined swirling flows at moderately high Reynolds numbers. Predictions are compared with available DNS, experimental LDV data, as well as wall resolved LES. In all cases, there is at least an order of magnitude reduction in computational cost with no significant loss in prediction accuracy.
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
Energy Technology Data Exchange (ETDEWEB)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.
2012-12-31
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures, due to issues such as the size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, shared-memory multiprocessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% accuracy. Emulation is only 25 to 200 times slower than real time.
Why is a high accuracy needed in dosimetry
International Nuclear Information System (INIS)
Lanzl, L.H.
1976-01-01
Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control
Nonlinear Delta-f Particle Simulations of Collective Effects in High-Intensity Bunched Beams
Qin, Hong; Hudson, Stuart R; Startsev, Edward
2005-01-01
The collective effects in high-intensity 3D bunched beams are described self-consistently by the nonlinear Vlasov-Maxwell equations.* The nonlinear delta-f method,** a particle simulation method for solving the nonlinear Vlasov-Maxwell equations, is being used to study the collective effects in high-intensity 3D bunched beams. The delta-f method, as a nonlinear perturbative scheme, splits the distribution function into equilibrium and perturbed parts. The perturbed distribution function is represented as a weighted summation over discrete particles, where the particle orbits are advanced by equations of motion in the focusing field and self-consistent fields, and the particle weights are advanced by the coupling between the perturbed fields and the zero-order distribution function. The nonlinear delta-f method exhibits minimal noise and accuracy problems in comparison with standard particle-in-cell simulations. A self-consistent 3D kinetic equilibrium is first established for high intensity bunched beams. The...
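The weight evolution sketched in this abstract has a standard compact form (the notation below is the conventional one for delta-f schemes and is assumed here, not taken from the paper):

```latex
f = f_0 + \delta f, \qquad
w_i \equiv \left.\frac{\delta f}{f}\right|_{z_i(t)}, \qquad
\frac{\mathrm{d}w_i}{\mathrm{d}t}
  = -\,(1 - w_i)\left.\frac{\mathrm{d}\ln f_0}{\mathrm{d}t}\right|_{z_i(t)},
```

where the total derivative of \(f_0\) is taken along the marker orbit \(z_i(t)\) advanced in the full focusing plus self-consistent fields. For weak perturbations the weights stay small, so the sampling noise scales with \(\delta f\) rather than with the full distribution, which is the source of the noise reduction claimed above.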
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 Kb block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
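The paper derives its cubic coefficients in the spectral domain; purely as an illustration of the piecewise-cubic phase-to-sine idea, here is a hypothetical Hermite-spline variant in Python (the segment count, function names, and coefficient construction are assumptions, not the authors' method):

```python
import math

def build_psac_table(segments):
    """Per-segment cubic coefficients approximating sin(2*pi*x), x in [0, 1).

    Uses Hermite interpolation: each segment matches the value and
    derivative of the sine at both knots, so the output is C1-smooth.
    """
    table = []
    h = 1.0 / segments
    for k in range(segments):
        x0, x1 = k * h, (k + 1) * h
        y0, y1 = math.sin(2 * math.pi * x0), math.sin(2 * math.pi * x1)
        d0 = 2 * math.pi * math.cos(2 * math.pi * x0)
        d1 = 2 * math.pi * math.cos(2 * math.pi * x1)
        # Coefficients of a + b*t + c*t^2 + d*t^3 in local t = (x - x0)/h.
        a = y0
        b = h * d0
        c = 3 * (y1 - y0) - h * (2 * d0 + d1)
        d = 2 * (y0 - y1) + h * (d0 + d1)
        table.append((a, b, c, d))
    return table

def psac_eval(table, phase):
    """Evaluate the stored piecewise cubic at a normalized phase in [0, 1)."""
    segments = len(table)
    s = (phase % 1.0) * segments
    k = min(int(s), segments - 1)
    t = s - k
    a, b, c, d = table[k]
    return a + t * (b + t * (c + t * d))  # Horner evaluation
```

With only 64 segments the worst-case amplitude error of this sketch is already below 1e-5, which illustrates why modest table sizes suffice for high-resolution converters.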
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II
National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...
High accuracy acoustic relative humidity measurement in duct flow with air.
van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees
2010-01-01
An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
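With two transducers on a path of length L along the flow, the line-averaged sound speed and gas velocity follow from the downstream and upstream transit times; temperature and humidity are then inferred from the sound speed. A sketch of the transit-time step (idealized axial path; names are assumptions and this is not the authors' full processing chain):

```python
def sound_speed_and_velocity(L, t_down, t_up):
    """Line-averaged sound speed c and axial gas velocity v from the two
    ultrasonic transit times along a path of length L (downstream pulse
    travels at c + v, upstream at c - v)."""
    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
    v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
    return c, v
```

Because the two formulas are exact inverses of t_down = L/(c+v) and t_up = L/(c-v), the velocity estimate is insensitive to the (humidity-dependent) sound speed and vice versa, which is what makes the combined sensor practical.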
Injection and capture simulations for a high intensity proton synchrotron
International Nuclear Information System (INIS)
Cho, Y.; Lessner, E.; Symon, K.; Univ. of Wisconsin, Madison, WI
1994-01-01
The injection and capture processes in a high intensity, rapid cycling, proton synchrotron are simulated by numerical integration. The equations of motion suitable for rapid numerical simulation are derived so as to maintain symplecticity and second-order accuracy. By careful bookkeeping, the authors can, for each particle that is lost, determine its initial phase space coordinates. They use this information as a guide for different injection schemes and rf voltage programming, so that particle losses and dilution are minimized. A fairly accurate estimate of the space charge fields is required, as they influence considerably the particle distribution and reduce the capture efficiency. Since the beam is represented by a relatively coarse ensemble of macroparticles, the authors study several methods of reducing the statistical fluctuations while retaining the fine structure (high intensity modulations) of the beam distribution. A pre-smoothing of the data is accomplished by the cloud-in-cell method. The program is checked by making sure that it gives correct answers in the absence of space charge, and that it reproduces the negative mass instability properly. Results of simulations for stationary distributions are compared to their analytical predictions. The capture efficiency for the rapid-cycling synchrotron is analyzed with respect to variations in the injected beam energy spread, bunch length, and rf programming.
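Cloud-in-cell smoothing, mentioned above, assigns each macroparticle's weight linearly to its two nearest grid points instead of dumping it all into one cell. A minimal one-dimensional sketch (function and parameter names are my own, not from the paper):

```python
def cic_deposit(positions, weights, n_cells, length):
    """1D cloud-in-cell deposition on a periodic grid: each particle's
    weight is split linearly between the two nearest grid points, which
    smooths the density estimate while conserving total charge."""
    dx = length / n_cells
    grid = [0.0] * n_cells
    for x, w in zip(positions, weights):
        s = (x % length) / dx        # position in units of cells
        i = int(s)                   # left grid point
        frac = s - i                 # fractional distance to it
        grid[i % n_cells] += w * (1.0 - frac)
        grid[(i + 1) % n_cells] += w * frac
    return grid
```

The linear split is what suppresses the grid-scale statistical noise a nearest-grid-point deposit would produce, at the cost of slightly widening each particle's footprint.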
Electron ray tracing with high accuracy
International Nuclear Information System (INIS)
Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.
1986-01-01
An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, Eric M.
1997-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, E.M.
1996-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed
Koopman, Richelle J; Kochendorfer, Karl M; Moore, Joi L; Mehr, David R; Wakefield, Douglas S; Yadamsuren, Borchuluun; Coberly, Jared S; Kruse, Robin L; Wakefield, Bonnie J; Belden, Jeffery L
2011-01-01
We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. We performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and "think-aloud" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard. Use of the diabetes dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies.
Accuracy of hiatal hernia detection with esophageal high-resolution manometry
Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.
2015-01-01
The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this
High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air
Directory of Open Access Journals (Sweden)
Cees van der Geld
2010-08-01
Full Text Available An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
Development of flamelet generated manifolds for partially-premixed flame simulations
Ramaekers, W.J.S.
2011-01-01
The accuracy of simulations of combustion processes depends not only on a meticulous description of the turbulent flow field, but also on the representation of combustion chemistry and its interaction with turbulence. Simulations with a high level of exactness are very
Load management strategy for Particle-In-Cell simulations in high energy particle acceleration
Energy Technology Data Exchange (ETDEWEB)
Beck, A., E-mail: beck@llr.in2p3.fr [Laboratoire Leprince-Ringuet, École polytechnique, CNRS-IN2P3, Palaiseau 91128 (France); Frederiksen, J.T. [Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, 2100 København Ø (Denmark); Dérouillat, J. [CEA, Maison de La Simulation, 91400 Saclay (France)
2016-09-01
In the wake of the intense effort made for the experimental CILEX project, numerical simulation campaigns have been carried out in order to finalize the design of the facility and to identify optimal laser and plasma parameters. These simulations bring, of course, important insight into the fundamental physics at play. As a by-product, they also characterize the quality of our theoretical and numerical models. In this paper, we compare the results given by different codes and point out algorithmic limitations both in terms of physical accuracy and computational performances. These limitations are illustrated in the context of electron laser wakefield acceleration (LWFA). The main limitation we identify in state-of-the-art Particle-In-Cell (PIC) codes is computational load imbalance. We propose an innovative algorithm to deal with this specific issue as well as milestones towards a modern, accurate high-performance PIC code for high energy particle acceleration.
Coupled high fidelity thermal hydraulics and neutronics for reactor safety simulations
International Nuclear Information System (INIS)
Vincent A. Mousseau; Hongbin Zhang; Haihua Zhao
2008-01-01
This work is a continuation of previous work on the importance of accuracy in the simulation of nuclear reactor safety transients. This work is qualitative in nature and future work will be more quantitative. The focus of this work will be on a simplified single-phase nuclear reactor primary. The transient of interest investigates the importance of accuracy related to passive (inherent) safety systems. The transient run here will be an Unprotected Loss of Flow (ULOF) transient. Here the coolant pump is turned off and the un-SCRAM-ed reactor transitions from forced to free convection (natural circulation). Results will be presented that show the difference that first-order-in-time truncation error makes in the transient. The purpose of this document is to illuminate a possible problem in traditional reactor simulation approaches. Detailed studies need to be done on each simulation code for each transient analyzed to determine whether first-order truncation error plays an important role.
Energy Technology Data Exchange (ETDEWEB)
Rong, Ye; Winz, Oliver H. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Vernaleken, Ingo [University Hospital Aachen (Germany). Dept. of Psychiatry, Psychotherapy and Psychosomatics; Goedicke, Andreas [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; High Tech Campus, Philips Research Lab., Eindhoven (Netherlands); Mottaghy, Felix M. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Maastricht University Medical Center (Netherlands). Dept. of Nuclear Medicine; Rota Kops, Elena [Forschungszentrum Juelich (Germany). Inst. of Neuroscience and Medicine-4
2015-07-01
Partial volume correction (PVC) is an essential step for quantitative positron emission tomography (PET). In the present study, PVELab, a freely available software package, is evaluated for PVC in ¹⁸F-FDOPA brain PET, with a special focus on the accuracy degradation introduced by various MR-based segmentation approaches. Methods: Four PVC algorithms (M-PVC, MG-PVC, mMG-PVC, and R-PVC) were analyzed on simulated ¹⁸F-FDOPA brain PET images. MR image segmentation was carried out using the FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping) packages, including an additional adaptation for subcortical regions (SPM_L). Different PVC and segmentation combinations were compared with respect to deviations in regional activity values and time-activity curves (TACs) of the occipital cortex (OCC), caudate nucleus (CN), and putamen (PUT). Additionally, the PVC impact on the determination of the influx constant (K_i) was assessed. Results: Main differences between tissue maps returned by the three segmentation algorithms were found in the subcortical region, especially at PUT. Average misclassification error in combination with volume reduction was found to be lowest for SPM_L (PUT < 30%) and highest for FSL (PUT > 70%). Accurate recovery of activity data at OCC is achieved by M-PVC (apparent recovery coefficient varies between 0.99 and 1.10). The other three evaluated PVC algorithms have been demonstrated to be more suitable for subcortical regions, with MG-PVC and mMG-PVC being less prone to the largest tissue misclassification error simulated in this study. Except for M-PVC, quantification accuracy of K_i for CN and PUT was clearly improved by PVC. Conclusions: The regional activity value of PUT was appreciably overcorrected by most of the PVC approaches employing FSL or SPM segmentation, revealing the importance of accurate MR image segmentation for the presented PVC framework. The selection of a PVC approach should be adapted to the anatomical
Bao, Kai; Yan, Mi; Lu, Ligang; Allen, Rebecca; Salam, Amgad; Jordan, Kirk E.; Sun, Shuyu
2013-01-01
multicomponent compositional flow simulation to handle more complicated physical process in the future. Accuracy and scalability analysis are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our
Breton, Marc D; Hinzmann, Rolf; Campos-Nañez, Enrique; Riddle, Susan; Schoemaker, Michael; Schmelzeisen-Redeker, Guenther
2017-05-01
Computer simulation has been shown over the past decade to be a powerful tool to study the impact of medical devices characteristics on clinical outcomes. Specifically, in type 1 diabetes (T1D), computer simulation platforms have all but replaced preclinical studies and are commonly used to study the impact of measurement errors on glycemia. We use complex mathematical models to represent the characteristics of 3 continuous glucose monitoring systems using previously acquired data. Leveraging these models within the framework of the UVa/Padova T1D simulator, we study the impact of CGM errors in 6 simulation scenarios designed to generate a wide variety of glycemic conditions. Assessment of the simulated accuracy of each different CGM systems is performed using mean absolute relative deviation (MARD) and precision absolute relative deviation (PARD). We also quantify the capacity of each system to detect hypoglycemic events. The simulated Roche CGM sensor prototype (RCGM) outperformed the 2 alternate systems (CGM-1 & CGM-2) in accuracy (MARD = 8% vs 11.4% vs 18%) and precision (PARD = 6.4% vs 9.4% vs 14.1%). These results held for all studied glucose and rate of change ranges. Moreover, it detected more than 90% of hypoglycemia, with a mean time lag less than 4 minutes (CGM-1: 86%/15 min, CGM-2: 57%/24 min). The RCGM system model led to strong performances in these simulation studies, with higher accuracy and precision than alternate systems. Its characteristics placed it firmly as a strong candidate for CGM based therapy, and should be confirmed in large clinical studies.
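MARD and PARD, the metrics reported above, are both mean absolute relative deviations: MARD against a reference glucose trace, PARD between paired sensors. A minimal sketch (the PARD denominator convention below, the pair mean, is one common choice and is an assumption, since published definitions vary):

```python
def mard(cgm, reference):
    """Mean absolute relative deviation of sensor readings vs reference, in %."""
    return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, reference)) / len(cgm)

def pard(sensor_a, sensor_b):
    """Precision absolute relative deviation between two paired sensors, in %,
    taken relative to the pair mean (assumed convention)."""
    return 100.0 * sum(abs(a - b) / ((a + b) / 2.0)
                       for a, b in zip(sensor_a, sensor_b)) / len(sensor_a)
```

For example, readings of 110 and 90 mg/dL against a constant 100 mg/dL reference give a MARD of 10%, matching the scale of the values quoted in the abstract.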
Factors affecting GEBV accuracy with single-step Bayesian models.
Zhou, Lei; Mrode, Raphael; Zhang, Shengli; Zhang, Qin; Li, Bugao; Liu, Jian-Feng
2018-01-01
A single-step approach to obtain genomic predictions was first proposed in 2009. Many studies have investigated the components of GEBV accuracy in genomic selection. However, it is still unclear how the population structure and the relationships between training and validation populations influence GEBV accuracy in single-step analysis. Here, we explored the components of GEBV accuracy in single-step Bayesian analysis with a simulation study. Three scenarios with various numbers of QTL (5, 50, and 500) were simulated. Three models were implemented to analyze the simulated data: single-step genomic best linear unbiased prediction (SSGBLUP), single-step BayesA (SS-BayesA), and single-step BayesB (SS-BayesB). According to our results, GEBV accuracy was influenced by the relationships between the training and validation populations more significantly for ungenotyped animals than for genotyped animals. SS-BayesA/BayesB showed an obvious advantage over SSGBLUP in the scenarios with 5 and 50 QTL. The SS-BayesB model obtained the lowest accuracy in the 500-QTL scenario. The SS-BayesA model was the most efficient and robust across all QTL scenarios. Generally, both the relationships between training and validation populations and the LD between markers and QTL contributed to GEBV accuracy in the single-step analysis, and the advantages of single-step Bayesian models were more apparent when the trait is controlled by fewer QTL.
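In simulation studies like this one, GEBV accuracy is conventionally the Pearson correlation between predicted and true (simulated) breeding values. A minimal sketch assuming that convention (function name is mine):

```python
import math

def gebv_accuracy(gebv, tbv):
    """Accuracy as the Pearson correlation between predicted genomic
    breeding values (GEBV) and simulated true breeding values (TBV)."""
    n = len(gebv)
    mg = sum(gebv) / n
    mt = sum(tbv) / n
    cov = sum((g - mg) * (t - mt) for g, t in zip(gebv, tbv))
    var_g = sum((g - mg) ** 2 for g in gebv)
    var_t = sum((t - mt) ** 2 for t in tbv)
    return cov / math.sqrt(var_g * var_t)
```

Because correlation is scale-free, a model that shrinks all predictions by a constant factor can still reach accuracy 1.0; bias is assessed separately, typically via the regression of TBV on GEBV.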
High accuracy 3D electromagnetic finite element analysis
International Nuclear Information System (INIS)
Nelson, E.M.
1997-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics
Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4
Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja
2016-04-01
We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation being developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day Omni data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but it also represented a subset of the 10 day input thus enabling quantifying the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The used synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions. Switching the used input from the synthetic to real Omni data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation depending on the used initialisation method. This is evident especially in the inner parts of the lobe.
Directory of Open Access Journals (Sweden)
Haojie Chai
2018-06-01
Full Text Available In the process of applying high-frequency heating technology to wood drying, controlling the material temperature affects both drying speed and drying quality. Therefore, research on the heat transfer mechanism of high-frequency heating of wood is of great significance. To study the heat transfer mechanism of high-frequency heating, the finite element method was used to establish and solve the wood high-frequency heating model, and experimental verification was carried out. With a decrease in moisture content, the heating rate decreased, then increased, and then decreased again. There was no obvious linear relationship between the moisture content and heating rate; the simulation accuracy of the heating rate was higher in the early and later drying stages and slightly lower near the fiber saturation point. For the central section temperature distribution, the simulation and actual measurement results matched poorly in the early drying stage because the model did not fully consider the differences in the moisture content distribution of the actual test materials. In the later drying stage, the moisture content distribution of the test materials became uniform, which was consistent with the model assumptions. Considering the changes in heating rate and temperature distribution, the accuracy of the model is good under the fiber saturation point, and it can be used to predict the high-frequency heating process of wood.
Relative accuracy of three common methods of parentage analysis in natural populations
Harrison, Hugo B.; Saenz Agudelo, Pablo; Planes, Serge; Jones, Geoffrey P.; Berumen, Michael L.
2012-01-01
Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.
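Exclusion-based assignment, one of the families of methods compared above, rejects a candidate parent that shares no allele with the offspring at too many loci. A toy sketch (the names and the mismatch-tolerance rule are illustrative, not the cited implementations):

```python
def mendelian_mismatches(offspring, candidate):
    """Count loci at which the candidate shares no allele with the offspring.
    Genotypes are sequences of (allele, allele) pairs, one per locus."""
    return sum(1 for o, p in zip(offspring, candidate) if not set(o) & set(p))

def is_excluded(offspring, candidate, max_mismatch=0):
    """Exclude a candidate parent if Mendelian mismatches exceed a tolerance;
    a nonzero tolerance accommodates the genotyping errors discussed above."""
    return mendelian_mismatches(offspring, candidate) > max_mismatch
```

Allowing one or two mismatching loci trades false exclusions (caused by genotyping error) against false inclusions, which is exactly the accuracy trade-off the simulations above quantify.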
Relative accuracy of three common methods of parentage analysis in natural populations
Harrison, Hugo B.
2012-12-27
Parentage studies and family reconstructions have become increasingly popular for investigating a range of evolutionary, ecological and behavioural processes in natural populations. However, a number of different assignment methods have emerged in common use and the accuracy of each may differ in relation to the number of loci examined, allelic diversity, incomplete sampling of all candidate parents and the presence of genotyping errors. Here, we examine how these factors affect the accuracy of three popular parentage inference methods (colony, famoz and an exclusion-Bayes' theorem approach by Christie (Molecular Ecology Resources, 2010a, 10, 115)) to resolve true parent-offspring pairs using simulated data. Our findings demonstrate that accuracy increases with the number and diversity of loci. These were clearly the most important factors in obtaining accurate assignments explaining 75-90% of variance in overall accuracy across 60 simulated scenarios. Furthermore, the proportion of candidate parents sampled had a small but significant impact on the susceptibility of each method to either false-positive or false-negative assignments. Within the range of values simulated, colony outperformed FaMoz, which outperformed the exclusion-Bayes' theorem method. However, with 20 or more highly polymorphic loci, all methods could be applied with confidence. Our results show that for parentage inference in natural populations, careful consideration of the number and quality of markers will increase the accuracy of assignments and mitigate the effects of incomplete sampling of parental populations. © 2012 Blackwell Publishing Ltd.
Read-only high accuracy volume holographic optical correlator
Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2011-10-01
A read-only volume holographic correlator (VHC) is proposed. After the recording of all of the correlation database pages by angular multiplexing, a stand-alone read-only high accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since separate lasers are employed for recording and readout, the optical alignment of the laser illumination on the SLM is very sensitive. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. The experimental demonstration of the proposed read-only VHC is introduced and discussed.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Directory of Open Access Journals (Sweden)
Zheng You
2013-04-01
Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Optical system error analysis and calibration method of high-accuracy star trackers.
Sun, Ting; Xing, Fei; You, Zheng
2013-04-08
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
An accuracy measurement method for star trackers based on direct astronomic observation.
Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping
2016-03-07
The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker ultimately determines the satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out to account for the precise motions of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers.
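The pointing-error half of such a three-axis criterion reduces to the angle between a measured boresight direction and its astronomical reference. The sketch below is illustrative only (the paper's full criterion also separates rolling accuracy about the boresight, which this function does not compute):

```python
import numpy as np

def pointing_error_arcsec(v_meas, v_ref):
    """Angle between measured and reference direction vectors, in
    arcseconds. Uses arctan2 of |cross| and dot for numerical
    stability at small angles (arccos loses precision near 1)."""
    v1 = np.asarray(v_meas, dtype=float)
    v2 = np.asarray(v_ref, dtype=float)
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    ang = np.arctan2(np.linalg.norm(np.cross(v1, v2)), np.dot(v1, v2))
    return np.degrees(ang) * 3600.0

# A 5-arcsec rotation about z applied to the x axis is recovered exactly
theta = np.radians(5.0 / 3600.0)
v = np.array([np.cos(theta), np.sin(theta), 0.0])
err = pointing_error_arcsec(v, np.array([1.0, 0.0, 0.0]))
```

Averaging such per-frame errors over a night of sidereal tracking gives the kind of error curves along each axis that the abstract describes.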
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I
National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...
Bao, Kai
2015-10-26
The present work describes a parallel computational framework for carbon dioxide (CO2) sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel high-performance-computing (HPC) systems. In this framework, a parallel reservoir simulator, the reservoir-simulation toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, whereas MD simulations are performed to provide the required physical parameters. Technologies from several different fields are used to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large-scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted oil and gas reservoirs and deep saline aquifers, which has been proposed as one of the few attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. Fine grids and accurate prediction of the properties of fluid mixtures under geological conditions are essential for accurate simulations. In this work, CO2 sequestration is presented as a first example of coupling reservoir simulation and MD, although the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated in several experiments with hundreds of millions to one billion cells. To the best of our knowledge, the present work represents the first attempt to couple reservoir simulation and molecular simulation for large-scale modeling. Because of the complexity of
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time-domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of the three-dimensional soil column was chosen so as to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils with different viscous damping matrices according to shear velocity, because the soil properties differed significantly. A combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs was needed to confirm the applicability of the nonlinear seismic ground motion response analysis procedures in soft soil or under strong earthquakes. The accuracy of the three-dimensional soil-column finite element method was verified by dynamic centrifuge model testing under different peak earthquake accelerations. As a result, the nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can meet the requirements of engineering practice.
International Nuclear Information System (INIS)
Soederman, Christina; Allansdotter Johnsson, Aase; Vikgren, Jenny; Rossi Norrlund, Rauni; Molnar, David; Svalkvist, Angelica; Maansson, Lars Gunnar; Baath, Magnus
2016-01-01
The aim of the present study was to investigate how the accuracy and precision of nodule diameter measurements in chest tomosynthesis depend on the radiation dose level. Artificial ellipsoid-shaped nodules with known dimensions were inserted into clinical chest tomosynthesis images. Noise was added to the images to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease in measurement accuracy and intra-observer variability was seen at the lowest dose level for a subset of the observers. No significant effect of dose level on the inter-observer variability was found. The number of non-measurable small nodules (≤5 mm) was higher at the two lowest dose levels than at the original dose level. In conclusion, for pulmonary nodules located in high-dose areas of the projection radiographs, a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, the increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding a general dose reduction for chest tomosynthesis examinations in clinical practice. (authors)
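The dose-reduction technique used above, adding noise to full-dose images so they mimic lower-dose acquisitions, can be sketched in its simplest form. This assumes purely quantum-noise-limited pixels with a hypothetical gain constant; a faithful implementation (like the one in the study) must also account for detector blur and the system's actual noise power spectrum:

```python
import numpy as np

def simulate_lower_dose(img, dose_ratio, gain=100.0, seed=0):
    """Add Gaussian noise so an image acquired at full dose mimics an
    acquisition at dose_ratio (< 1) of that dose.

    Assumes pixel variance at full dose is img / gain (pure quantum
    noise; 'gain' is a hypothetical calibration constant). Since
    quantum variance scales as 1/dose, the extra variance needed is
    var_full * (1/dose_ratio - 1).
    """
    rng = np.random.default_rng(seed)
    extra_var = (img / gain) * (1.0 / dose_ratio - 1.0)
    return img + rng.normal(size=img.shape) * np.sqrt(np.maximum(extra_var, 0.0))

# Halving the dose of a flat "image" with unit full-dose variance
# should add exactly one extra unit of variance
flat = np.full((512, 512), 100.0)
low = simulate_lower_dose(flat, 0.5)
```

The mean signal is preserved while the noise grows, which is what lets the same clinical image be re-read at several simulated dose levels.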
Energy Technology Data Exchange (ETDEWEB)
Vermeire, B.C., E-mail: brian.vermeire@concordia.ca; Witherden, F.D.; Vincent, P.E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier–Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare the accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor–Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost; e.g., going from second- to third-order, the PyFR simulations of the EV and TGV achieve 75× and 3× error reductions, respectively, for the same or reduced cost, and the STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
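The fast distance estimation stage can be illustrated with a toy k-mer distance between two sequences. This is a minimal sketch of the idea only, not MUSCLE's actual scoring (which uses compressed alphabets and a different normalization):

```python
from collections import Counter

def kmer_distance(a: str, b: str, k: int = 3) -> float:
    """Approximate a pairwise distance from shared k-mer counts,
    avoiding any alignment. Returns 1 - F, where F is the fraction
    of shared k-mers relative to the shorter sequence (a common
    simplification of k-mer distance measures)."""
    ka = Counter(a[i:i + k] for i in range(len(a) - k + 1))
    kb = Counter(b[i:i + k] for i in range(len(b) - k + 1))
    shared = sum(min(ka[m], kb[m]) for m in ka)
    denom = min(len(a), len(b)) - k + 1
    return 1.0 - shared / denom

d_same = kmer_distance("HEAGAWGHEE", "HEAGAWGHEE")   # identical sequences
d_diff = kmer_distance("HEAGAWGHEE", "PAWHEAE")      # distant sequences
```

Because it needs only k-mer counts, this runs in linear time per pair, which is what makes the distance matrix for thousands of sequences affordable before progressive alignment begins.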
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
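The arbitrary-frequency-resolution step rests on the chirp z-transform. Below is a minimal NumPy sketch of Bluestein's algorithm (recent SciPy releases also provide `scipy.signal.czt`); the function and parameter names are illustrative, not those of the paper's implementation:

```python
import numpy as np

def czt(x, m, w, a=1.0 + 0j):
    """Chirp z-transform X[k] = sum_n x[n] a^(-n) w^(n k), k = 0..m-1,
    in O((n+m) log(n+m)) via Bluestein's identity
    n*k = (n^2 + k^2 - (k-n)^2) / 2, which turns the sum into a
    convolution that an FFT can evaluate."""
    n = len(x)
    k = np.arange(max(m, n))
    wk2 = w ** (k ** 2 / 2.0)                      # w^(k^2/2)
    y = np.asarray(x) * a ** -np.arange(n) * wk2[:n]
    nfft = 1
    while nfft < n + m - 1:                        # size for linear convolution
        nfft *= 2
    v = np.zeros(nfft, dtype=complex)              # v[j] = w^(-j^2/2), j = -(n-1)..m-1
    v[:m] = 1.0 / wk2[:m]
    v[nfft - n + 1:] = 1.0 / wk2[n - 1:0:-1]       # negative j wraps around
    g = np.fft.ifft(np.fft.fft(y, nfft) * np.fft.fft(v))
    return g[:m] * wk2[:m]

# With w on the unit circle at spacing 2*pi/n, the CZT reduces to the DFT
x = np.exp(2j * np.pi * 3 * np.arange(16) / 16)    # a pure tone at bin 3
X = czt(x, 16, np.exp(-2j * np.pi / 16))
```

Choosing `w = exp(-2j*pi*df*dt)` with a small `df` instead evaluates the spectrum on an arbitrarily fine frequency grid around the band of interest, which is how arbitrary frequency resolution is obtained without zero-padding the whole record.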
High accuracy interface characterization of three phase material systems in three dimensions
DEFF Research Database (Denmark)
Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus
2010-01-01
Quantification of interface properties such as two-phase boundary area and triple-phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high-accuracy method of calculating the two-phase surface areas and triple-phase length of triple-phase systems from subvoxel-accuracy segmentations of the constituent phases. The method performs a three-phase polygonization of the interface boundaries, which results in a non-manifold mesh of connected faces. We show how the triple-phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives.
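For contrast with the subvoxel polygonization described above, the crudest two-phase boundary estimate simply counts voxel faces where the two labels touch. This baseline sketch (not the paper's method) shows the quantity being computed and why subvoxel accuracy matters:

```python
import numpy as np

def boundary_face_count(labels, phase_a, phase_b):
    """Count axis-aligned voxel faces separating phase_a from phase_b
    in a 3D label volume. This staircase estimate has the right
    topology but systematically overestimates the area of tilted or
    curved interfaces, which is what subvoxel polygonization fixes."""
    faces = 0
    for axis in range(labels.ndim):
        lo = np.moveaxis(labels, axis, 0)[:-1]   # each voxel...
        hi = np.moveaxis(labels, axis, 0)[1:]    # ...and its +axis neighbour
        faces += np.sum((lo == phase_a) & (hi == phase_b))
        faces += np.sum((lo == phase_b) & (hi == phase_a))
    return int(faces)

# A 2x2x2 volume split into two slabs has a 2x2 interface: 4 faces
vol = np.zeros((2, 2, 2), dtype=int)
vol[0] = 1
```

Multiplying the face count by the face area (voxel spacing squared) converts it to a physical area; for a plane tilted 45° to the grid this overestimates the true area by a factor of sqrt(2), motivating the mesh-based approach.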
Use of High-Resolution WRF Simulations to Forecast Lightning Threat
McCaul, E. W., Jr.; LaCasse, K.; Goodman, S. J.; Cecil, D. J.
2008-01-01
Recent observational studies have confirmed the existence of a robust statistical relationship between lightning flash rates and the amount of large precipitating ice hydrometeors aloft in storms. This relationship is exploited, in conjunction with the capabilities of cloud-resolving forecast models such as WRF, to forecast explicitly the threat of lightning from convective storms using selected output fields from the model forecasts. The simulated vertical flux of graupel at -15 °C and the shape of the simulated reflectivity profile are tested in this study as proxies for charge separation processes and their associated lightning risk. Our lightning forecast method differs from others in that it is entirely based on high-resolution simulation output, without reliance on any climatological data. Short (6-8 h) simulations are conducted for a number of case studies for which three-dimensional lightning validation data from the North Alabama Lightning Mapping Array are available. Experiments indicate that initialization of the WRF model on a 2 km grid using Eta boundary conditions, Doppler radar radial velocity fields, and METAR and ACARS data yields satisfactory simulations. Analyses of the lightning threat fields suggest that both the graupel flux and reflectivity profile approaches, when properly calibrated, can yield reasonable lightning threat forecasts, although an ensemble approach is probably desirable in order to reduce the tendency for misplacement of modeled storms to hurt the accuracy of the forecasts. Our lightning threat forecasts are also compared to other more traditional means of forecasting thunderstorms, such as those based on inspection of the convective available potential energy field.
High performance stream computing for particle beam transport simulations
International Nuclear Information System (INIS)
Appleby, R; Bailey, D; Higham, J; Salt, M
2008-01-01
Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
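The per-particle parallelism that stream processors exploit is easiest to see in linear optics, where every element is a transfer matrix and transporting N particles is a single matrix product. A toy single-plane sketch (illustrative drift lengths and focal lengths, not the DIAMOND lattice):

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L (one plane)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing for f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Elements act right-to-left: the rightmost matrix is traversed first
line = drift(1.0) @ thin_quad(2.0) @ drift(1.0) @ thin_quad(-2.0)

# State vectors (x, x') for many particles, shape (2, N): one matrix
# product transports the whole bunch at once, an operation that maps
# directly onto GPU/stream hardware.
rng = np.random.default_rng(1)
bunch = rng.normal(scale=1e-3, size=(2, 100_000))
out = line @ bunch
```

Nonlinear elements break the single-matrix picture (which is why codes like MAD track element by element), but the particles remain mutually independent, and it is exactly that independence that stream computing exploits.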
Directory of Open Access Journals (Sweden)
Brian Johnson
2014-09-01
Full Text Available Introduction: Ultrasound-guided nerve blocks (UGNB) are increasingly used in emergency care. The hand-on-syringe (HS) needle technique is ideally suited to the emergency department setting because it allows a single operator to perform the block without assistance. The HS technique is assumed to provide less exact needle control than the alternative two-operator hand-on-needle (HN) technique; however, this assumption has never been directly tested. The primary objective of this study was to compare accuracy of needle targeting under ultrasound guidance by emergency medicine (EM) residents using HN and HS techniques on a standardized gelatinous simulation model. Methods: This prospective, randomized study evaluated task performance. We compared needle targeting accuracy using the HN and HS techniques. Each participant performed a set of structured needling maneuvers (both simple and difficult) on a standardized partial-task simulator. We evaluated time to task completion, needle visualization during advancement, and accuracy of needle tip at targeting. Resident technique preference was assessed using a post-task survey. Results: We evaluated 60 tasks performed by 10 EM residents. There was no significant difference in time to complete the simple model (HN vs. HS, 18 seconds vs. 18 seconds, p=0.93), time to complete the difficult model (HN vs. HS, 56 seconds vs. 50 seconds, p=0.63), needle visualization, or needle tip targeting accuracy. Most residents (60%) preferred the HS technique. Conclusion: For EM residents learning UGNBs, the HN technique was not associated with superior needle control. Our results suggest that the single-operator HS technique provides equivalent needle control when compared to the two-operator HN technique. [West J Emerg Med. 2014;15(6):641–646]
Trust in automation and meta-cognitive accuracy in NPP operating crews
Energy Technology Data Exchange (ETDEWEB)
Skraaning Jr, G.; Miberg Skjerve, A. B. [OECD Halden Reactor Project, PO Box 173, 1751 Halden (Norway)
2006-07-01
Nuclear power plant operators can over-trust or under-trust automation. Operator trust in automation is said to be mis-calibrated when the level of trust does not correspond to the actual level of automation reliability. A possible consequence of mis-calibrated trust is degraded meta-cognitive accuracy. Meta-cognitive accuracy is the ability to correctly monitor the effectiveness of one's own performance while engaged in complex tasks. When operators misjudge their own performance, human control actions will be poorly regulated and safety and/or efficiency may suffer. An analysis of simulator data showed that meta-cognitive accuracy and trust in automation were highly correlated for knowledge-based scenarios, but uncorrelated for rule-based scenarios. In the knowledge-based scenarios, the operators overestimated their performance effectiveness under high levels of trust, underestimated it under low levels of trust, but showed realistic self-assessment under intermediate levels of trust in automation. This result suggests that trust in automation impacts the meta-cognitive accuracy of the operators. (authors)
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Directory of Open Access Journals (Sweden)
Guanwu Zhou
2014-07-01
Full Text Available Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with temperature. Here, a smart temperature compensation system to reduce this effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed, and the hardware to implement the system is fabricated. Then, a program is developed in LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration on the computer. Practical pressure measurement experiments were carried out to verify the system's performance. Temperature compensation is achieved over the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C, respectively, after compensation. The experimental results demonstrate that the proposed system meets the temperature compensation and high accuracy requirements of the sensor.
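The ELM calibration at the core of such a system is attractively simple: a random, fixed hidden layer plus a single least-squares solve, with no iterative training. A self-contained sketch on synthetic drift data follows; the drift model, feature scaling, and hidden-layer size here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: input weights are random and never
    trained; only the output weights beta are solved, by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic sensor: constant true pressure, reading drifts with temperature
T = np.linspace(-40.0, 85.0, 200)                 # degC, the paper's range
true_p = 100.0                                    # assumed constant true input
raw = true_p + 0.02 * T + 0.5 * np.sin(T / 20.0)  # drifted raw reading (toy model)
X = np.column_stack([raw, T]) / 100.0             # crude feature scaling
W, b, beta = elm_fit(X, true_p - raw)             # learn the correction term
corrected = raw + elm_predict(X, W, b, beta)
```

Because training is one linear solve, recalibrating a whole batch of sensors is fast, which matches the generalization and learning-speed advantages the abstract cites for ELM over iterative algorithms.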
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_Full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_Full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
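Method (v) above, correlating predictions with corrected phenotypes and dividing by the square root of heritability, can be checked on simulated data where the true accuracy is known. A minimal sketch (the simulation model and variable names are illustrative, not the study's design):

```python
import numpy as np

rng = np.random.default_rng(2)
n, h2 = 20_000, 0.10                   # validation animals; low, litter-size-like h2
tbv = rng.normal(size=n)               # true breeding values, var = 1

# Corrected phenotype y_c = TBV + environmental noise, with the noise
# variance chosen so that var(tbv) / var(y_c) = h2
y_c = tbv + rng.normal(scale=np.sqrt((1.0 - h2) / h2), size=n)
ebv = tbv + rng.normal(scale=0.8, size=n)    # imperfect predictions

acc_true = np.corrcoef(ebv, tbv)[0, 1]       # the accuracy being estimated
r_yc = np.corrcoef(ebv, y_c)[0, 1]           # method (iv): deflated by phenotype noise
acc_hat = r_yc / np.sqrt(h2)                 # method (v): rescaling recovers accuracy
```

The division by sqrt(h2) undoes the attenuation caused by the environmental variance in the corrected phenotype; for a low-heritability trait the correction is large, which is why the raw correlation of method (iv) understates accuracy.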
Terzic, A; Schouman, T; Scolozzi, P
2013-08-06
The CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging materials include a multi-slice CT scan or broad-field CBCT scan, and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.
Ultra-high accuracy optical testing: creating diffraction-limitedshort-wavelength optical systems
Energy Technology Data Exchange (ETDEWEB)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, Keith H.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa
2005-08-03
Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20 nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
International Nuclear Information System (INIS)
Gong Xing; Glick, Stephen J.; Liu, Bob; Vedula, Aruna A.; Thacker, Samta
2006-01-01
Although conventional mammography is currently the best modality to detect early breast cancer, it is limited in that the recorded image represents the superposition of a three-dimensional (3D) object onto a 2D plane. Recently, two promising approaches for 3D volumetric breast imaging have been proposed, breast tomosynthesis (BT) and CT breast imaging (CTBI). To investigate possible improvements in lesion detection accuracy with either breast tomosynthesis or CT breast imaging as compared to digital mammography (DM), a computer simulation study was conducted using simulated lesions embedded into a structured 3D breast model. The computer simulation realistically modeled x-ray transport through a breast model, as well as the signal and noise propagation through a CsI based flat-panel imager. Polyenergetic x-ray spectra of Mo/Mo 28 kVp for digital mammography, Mo/Rh 28 kVp for BT, and W/Ce 50 kVp for CTBI were modeled. For the CTBI simulation, the intensity of the x-ray spectra for each projection view was determined so as to provide a total average glandular dose of 4 mGy, which is approximately equivalent to that given in conventional two-view screening mammography. The same total dose was modeled for both the DM and BT simulations. Irregular lesions were simulated by using a stochastic growth algorithm providing lesions with an effective diameter of 5 mm. Breast tissue was simulated by generating an ensemble of backgrounds with a power law spectrum, with the composition of 50% fibroglandular and 50% adipose tissue. To evaluate lesion detection accuracy, a receiver operating characteristic (ROC) study was performed with five observers reading an ensemble of images for each case. The average area under the ROC curves (Az) was 0.76 for DM, 0.93 for BT, and 0.94 for CTBI. Results indicated that for the same dose, a 5 mm lesion embedded in a structured breast phantom was detected by the two volumetric breast imaging systems, BT and CTBI, with statistically
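As a sketch of how an area under the ROC curve (Az) is obtained from a reading study like the one above: the AUC equals the Mann-Whitney probability that a lesion-present image receives a higher rating than a lesion-absent one. The ratings below are fabricated for illustration; the study's Az values came from its own five-observer experiment.

```python
import numpy as np

def auc_mann_whitney(ratings_present, ratings_absent):
    """AUC = probability that a lesion-present image is rated higher than a
    lesion-absent one, counting ties as one half (Mann-Whitney statistic)."""
    s = np.asarray(ratings_present, dtype=float)[:, None]
    n = np.asarray(ratings_absent, dtype=float)[None, :]
    return ((s > n).sum() + 0.5 * (s == n).sum()) / (s.size * n.size)

# illustrative 5-point confidence ratings from one hypothetical observer
present = [5, 4, 4, 3, 5, 4, 2, 5]   # lesion-present images
absent = [1, 2, 3, 1, 2, 2, 4, 1]    # lesion-absent images
az = auc_mann_whitney(present, absent)
```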
Nonlinear magnetohydrodynamics simulation using high-order finite elements
International Nuclear Information System (INIS)
Plimpton, Steven James; Schnack, D.D.; Tarditi, A.; Chu, M.S.; Gianakon, T.A.; Kruger, S.E.; Nebel, R.A.; Barnes, D.C.; Sovinec, C.R.; Glasser, A.H.
2005-01-01
A conforming representation composed of 2D finite elements and finite Fourier series is applied to 3D nonlinear non-ideal magnetohydrodynamics using a semi-implicit time-advance. The self-adjoint semi-implicit operator and variational approach to spatial discretization are synergistic and enable simulation in the extremely stiff conditions found in high temperature plasmas without sacrificing the geometric flexibility needed for modeling laboratory experiments. Growth rates for resistive tearing modes with experimentally relevant Lundquist number are computed accurately with time-steps that are large with respect to the global Alfvén time and moderate spatial resolution when the finite elements have basis functions of polynomial degree (p) two or larger. An error diffusion method controls the generation of magnetic divergence error. Convergence studies show that this approach is effective for continuous basis functions with p ≥ 2, where the number of test functions for the divergence control terms is less than the number of degrees of freedom in the expansion for vector fields. Anisotropic thermal conduction at realistic ratios of parallel to perpendicular conductivity (χ∥/χ⊥) is computed accurately with p ≥ 3 without mesh alignment. A simulation of tearing-mode evolution for a shaped toroidal tokamak equilibrium demonstrates the effectiveness of the algorithm in nonlinear conditions, and its results are used to verify the accuracy of the numerical anisotropic thermal conduction in 3D magnetic topologies.
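The benefit of a semi-implicit or implicit time advance in stiff conditions can be seen in a minimal analogue. The theta-method below, applied to a stiff linear system du/dt = Au, stands in for the self-adjoint semi-implicit MHD operator; the matrix and step size are invented for illustration.

```python
import numpy as np

def theta_step(u, A, dt, theta=1.0):
    """Advance du/dt = A u by one step of the theta-method:
    (I - theta*dt*A) u_new = (I + (1-theta)*dt*A) u_old."""
    I = np.eye(len(u))
    return np.linalg.solve(I - theta * dt * A, (I + (1.0 - theta) * dt * A) @ u)

# stiff operator: eigenvalues -1 and -1e4 (an explicit Euler step would
# need dt < 2e-4 for stability; here dt is 500 times larger)
A = np.diag([-1.0, -1.0e4])
u = np.array([1.0, 1.0])
dt = 0.1
for _ in range(50):
    u = theta_step(u, A, dt)   # fully implicit (theta = 1)
```

Both components decay monotonically despite the time step being far beyond the explicit stability limit, which is the essence of taking steps large with respect to the fastest (Alfvénic) time scale.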
Hébrard, Eric; Carrasco, Nathalie; Dobrijevic, Michel; Pernot, Pascal
Ion Neutral Mass Spectrometer (INMS) aboard Cassini revealed a rich coupled ion-neutral chemistry in the ionosphere, producing heavy hydrocarbon and nitrile ions. The modeling of such a complex environment is challenging, as it requires a detailed and accurate description of the different relevant processes such as photodissociation cross sections and neutral-neutral reaction rates on one hand, and ionisation cross sections, ion-molecule and recombination reaction rates on the other hand. Underpinning model calculations, each of these processes is parameterized by kinetic constants which, when known, have been studied experimentally and/or theoretically over a range of temperatures and pressures that are most often not representative of Titan's atmosphere. The sizeable experimental and theoretical uncertainties reported in the literature therefore merge with the uncertainties resulting from the unavoidable estimations or extrapolations to Titan's atmospheric conditions. Such large overall uncertainties have to be accounted for in all resulting inferences, above all to evaluate the quality of the model definition. We have undertaken a systematic study of the uncertainty sources in the simulation of ion mass spectra as recorded by Cassini/INMS in Titan's ionosphere during the T5 flyby at 1200 km. Our simulated spectra seem much less affected by the uncertainties on ion-molecule reactions than by those on neutral-neutral reactions. Photochemical models of Titan's atmosphere are indeed so poorly predictive at high altitudes, in the sense that their computed predictions display such large uncertainties, that we found them to give rise to bimodal and hypersensitive abundance distributions for some major compounds like acetylene C2H2 and ethylene C2H4. We will show to what extent global uncertainty and sensitivity analysis enabled us to identify the causes of this bimodality and to pinpoint the key processes that mostly contribute to limit the accuracy of the
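Uncertainty propagation of the kind described here is typically done by Monte Carlo sampling: rate constants are drawn from log-normal distributions whose widths encode the reported uncertainty factors, and the model is rerun for each draw. A minimal sketch with an invented two-reaction scheme (real photochemical models couple hundreds of reactions, which is how bimodality can arise):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# nominal rate constants with multiplicative uncertainty factors F,
# sampled log-normally so that k lies mostly within [k0/F, k0*F]
k1 = 1e-9 * np.exp(rng.normal(0.0, np.log(3.0), n))   # production, F = 3
k2 = 1e-7 * np.exp(rng.normal(0.0, np.log(2.0), n))   # loss, F = 2
# steady state of dX/dt = k1 - k2*X  ->  X = k1/k2 (per unit source)
abundance = k1 / k2
p16, p50, p84 = np.percentile(abundance, [16, 50, 84])
```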
Multiple sequence alignment accuracy and phylogenetic inference.
Ogden, T Heath; Rosenberg, Michael S
2006-04-01
Phylogenies are often thought to be more dependent on the specifics of the sequence alignment than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian methods, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the lengths of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.
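A common way to score a hypothesized alignment against the true (simulated) alignment is the fraction of correctly aligned residue pairs. A minimal two-sequence sketch with '-' as the gap character; the alignments are toy data, and the study's own accuracy measures may differ in detail:

```python
def aligned_pairs(seq_a, seq_b):
    """Return the set {(i, j)} of original residue indices that the
    alignment places opposite each other (gaps produce no pair)."""
    pairs, i, j = set(), -1, -1
    for a, b in zip(seq_a, seq_b):
        if a != '-':
            i += 1
        if b != '-':
            j += 1
        if a != '-' and b != '-':
            pairs.add((i, j))
    return pairs

true_aln = ("ACG-T", "AC-GT")   # the simulated "ground truth"
test_aln = ("ACGT-", "AC-GT")   # a hypothesized alignment
truth = aligned_pairs(*true_aln)
hypo = aligned_pairs(*test_aln)
accuracy = len(truth & hypo) / len(truth)
```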
Ferreira, Arthur de Sá; Pacheco, Antonio Guilherme
2015-01-01
The aim of this work is to develop and implement the SimTCM, an advanced computational model that incorporates relevant aspects of traditional Chinese medicine (TCM) theory as well as advanced statistical and epidemiological techniques for simulation and analysis of human patients. SimTCM presents five major attributes for simulation: representation of true and false profiles for any single pattern; a variable count of manifestations for each manifestation profile; empirical distributions of patterns and manifestations in a disease-specific population; incorporation of uncertainty in clinical data; and the combination of the four examinations. The proposed model is strengthened by following international standards for reporting diagnostic accuracy studies, and incorporates these standards in its treatment of study population, sample size calculation, data collection of manifestation profiles, exclusion criteria and missing data handling, reference standards, randomization and blinding, and test reproducibility. Simulations using data from patients diagnosed with hypertension and post-stroke sensory-motor impairments yielded no significant differences between expected and simulated frequencies of patterns (P=0.22 or higher). Time for convergence of simulations varied from 9.90 s (9.80, 10.27) to 28.31 s (26.33, 29.52). The iteration-to-profile ratio necessary for convergence varied between 1:1 and 5:1. This model is directly connected to forthcoming models in a larger project to design and implement the SuiteTCM: ProntTCM, SciTCM, DiagTCM, StudentTCM, ResearchTCM, HerbsTCM, AcuTCM, and DataTCM. It is expected that the continuation of the SuiteTCM project will enhance the evidence-based practice of Chinese medicine. The software is freely available for download at: http://suitetcm.unisuam.edu.br.
Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian
2015-06-29
A simple, high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when a dual-frequency laser beam reflects numerous times in a Fabry-Perot cavity and then returns to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer has the advantage of higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step was conducted, which shows great potential in nanometrology.
Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation
Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter
1996-01-01
The development of three-dimensional automated devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned with high accuracy and reliability, automatically, by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact, computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.
Identification and delineation of areas flood hazard using high accuracy of DEM data
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that frequently occur in Karawang regency need to be mitigated, and there are expectations for technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High-accuracy DEM data used in the modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology that can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by detecting wetland areas using DEM data and Landsat-8 imagery. High-resolution TerraSAR-X data were used to detect wetlands in the landscape, while land cover was identified from Landsat image data. The Topography Wetness Index (TWI) method was used to detect and identify wetland areas from the base DEM data, while land cover was analysed using the Tasseled Cap Transformation (TCT) method. TWI modeling yields information about land potentially subject to flooding. Overlaying the TWI map with the land cover map shows that, in Karawang regency, the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
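The Topography Wetness Index used above is TWI = ln(a / tan β), with a the specific upslope contributing area and β the local slope. A minimal per-cell sketch; the contributing areas and slopes are invented, and a real workflow derives them from the DEM with a flow-routing algorithm:

```python
import numpy as np

def twi(upslope_area_m2, cell_width_m, slope_rad, eps=1e-6):
    """TWI = ln(a / tan(beta)); a = contributing area per unit contour width."""
    specific_area = upslope_area_m2 / cell_width_m
    return np.log(specific_area / np.maximum(np.tan(slope_rad), eps))

area = np.array([1.0e2, 1.0e4, 1.0e6])   # upslope contributing areas, m^2
slope = np.radians([10.0, 2.0, 0.5])     # steep -> nearly flat
values = twi(area, cell_width_m=10.0, slope_rad=slope)
# flat cells draining large areas score highest, i.e. are wettest
```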
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).
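In the Fisher matrix technique mentioned above, the 1-sigma parameter accuracies follow from inverting the Fisher information matrix, σ_i = sqrt((F⁻¹)_ii). A generic numerical sketch with an invented two-parameter signal model (a sinusoid in white noise, not the eccentric waveform model of the paper):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)   # time samples
sigma_n = 0.1                     # white-noise standard deviation per sample
A_true, f_true = 1.0, 10.0        # amplitude and frequency of the signal

def model(A, f):
    return A * np.sin(2.0 * np.pi * f * t)

# numerical derivatives of the template w.r.t. each parameter
h = 1e-6
dA = (model(A_true + h, f_true) - model(A_true, f_true)) / h
df = (model(A_true, f_true + h) - model(A_true, f_true)) / h
derivs = [dA, df]

# Fisher matrix F_ij = sum_t (ds/dp_i)(ds/dp_j) / sigma_n^2
F = np.array([[np.dot(di, dj) for dj in derivs] for di in derivs]) / sigma_n**2
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma accuracies on (A, f)
```

Sharper waveform features (here, more cycles) steepen the derivatives and shrink the errors, which is the same mechanism by which the repeated bursts of an eccentric inspiral can improve parameter accuracy.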
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution
Energy Technology Data Exchange (ETDEWEB)
Leake, James E.; Linton, Mark G. [U.S. Naval Research Laboratory, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Schuck, Peter W., E-mail: james.e.leake@nasa.gov [NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771 (United States)
2017-04-01
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the development of coronal models which are “data-driven” at the photosphere. We present an investigation to determine the feasibility and accuracy of such methods. Our validation framework uses a simulation of active region (AR) formation, modeling the emergence of magnetic flux from the convection zone to the corona, as a ground-truth data set, to supply both the photospheric information and to perform the validation of the data-driven method. We focus our investigation on how the accuracy of the data-driven model depends on the temporal frequency of the driving data. The Helioseismic and Magnetic Imager on NASA’s Solar Dynamics Observatory produces full-disk vector magnetic field measurements at a 12-minute cadence. Using our framework we show that ARs that emerge over 25 hr can be modeled by the data-driven method with only ∼1% error in the free magnetic energy, assuming the photospheric information is specified every 12 minutes. However, for rapidly evolving features, under-sampling of the dynamics at this cadence leads to a strobe effect, generating large electric currents and incorrect coronal morphology and energies. We derive a sampling condition for the driving cadence based on the evolution of these small-scale features, and show that higher-cadence driving can lead to acceptable errors. Future work will investigate the source of errors associated with deriving plasma variables from the photospheric magnetograms as well as other sources of errors, such as reduced resolution, instrument bias, and noise.
International Nuclear Information System (INIS)
Gao, Junling; Du, Qungui; Chen, Min; Li, Bo; Zhang, Dongwen
2015-01-01
An accurate mathematical model of thermoelectric modules (TEMs) provides the basis for the analysis and design of thermoelectric conversion systems. TEM models from the literature are only valid for the heat transfer of N-type and P-type thermoelectric couples and do not consider the air around the actual thermoelectric couples of TEMs. In fact, the air space has a significant influence on the model's computational accuracy, especially for a TEM with a large air space inside. In this study, heat transfer analyses of the air between the TEM cold and hot plates were carried out in order to propose a new mathematical model that minimises simulation errors. This model was applied to analyse the characteristic parameters of two typical TEMs, whose ratios of air-space cross-sectional area to thermocouple cross-sectional area were 48.2% and 80.0%, respectively. The average relative errors in simulation decreased from 5.2% to 2.8% and from 12.8% to 3.7%, respectively. It is noted that our new model gives more accurate results than models from the literature when a higher temperature difference occurs between the hot side and cold side of the TEM. Thus, the proposed model is of theoretical significance in guiding the future design of TEMs for high-power or large-temperature-difference thermoelectric conversion systems. - Highlights: • Built a new accurate model for thermoelectric modules with inner air heat transfer. • Analysed the influence of the air within the TEM on heat transfer. • Reduced simulation errors for high-power thermoelectric conversion systems. • Two typical TEMs were measured, with good agreement with theoretical results. • TEM is the abbreviation of thermoelectric module
HTTR plant dynamic simulation using a hybrid computer
International Nuclear Information System (INIS)
Shimazaki, Junya; Suzuki, Katsuo; Nabeshima, Kunihiko; Watanabe, Koichi; Shinohara, Yoshikuni; Nakagawa, Shigeaki.
1990-01-01
A plant dynamic simulation of the High-Temperature Engineering Test Reactor (HTTR) has been made using a new-type hybrid computer. This report describes a dynamic simulation model of the HTTR, a hybrid simulation method for SIMSTAR, and some results obtained from dynamics analysis of the HTTR simulation. It concludes that hybrid plant simulation is useful for on-line simulation because of its high computation speed compared with all-digital computer simulation. Computation 40 times faster than real time, with sufficient accuracy, was reached simply by changing the analog time scale of the HTTR simulation. (author)
Development and simulation of microfluidic Wheatstone bridge for high-precision sensor
International Nuclear Information System (INIS)
Shipulya, N D; Konakov, S A; Krzhizhanovskaya, V V
2016-01-01
In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems. (paper)
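In the hydraulic analogue of the Wheatstone bridge, each microchannel acts as a flow resistance R (pressure drop = R × flow rate in the low-Reynolds, Hagen-Poiseuille regime assumed in the paper). The sketch below solves the bridge network by nodal analysis; the resistance values are arbitrary, and the bridge channel carries no flow exactly when R1/R2 = R3/R4:

```python
import numpy as np

def bridge_flow(R1, R2, R3, R4, Rb, p_in=1.0, p_out=0.0):
    """Flow through the bridge channel Rb joining the midpoint between
    (R1, R2) and the midpoint between (R3, R4); nodal mass conservation."""
    A = np.array([[1/R1 + 1/R2 + 1/Rb, -1/Rb],
                  [-1/Rb, 1/R3 + 1/R4 + 1/Rb]])
    b = np.array([p_in/R1 + p_out/R2, p_in/R3 + p_out/R4])
    pa, pb = np.linalg.solve(A, b)
    return (pa - pb) / Rb

balanced = bridge_flow(1.0, 2.0, 2.0, 4.0, Rb=1.0)  # R1/R2 == R3/R4
skewed = bridge_flow(1.0, 2.0, 2.0, 3.0, Rb=1.0)    # imbalance drives flow
```

Changing a channel's geometry (as in the paper's etching-based balancing) changes its R, which is how the bridge flow can be steered to zero without valves.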
High accuracy positioning using carrier-phases with the opensource GPSTK software
Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume
2008-01-01
The objective of this work is to show how using a proper GNSS data management strategy, combined with the flexibility provided by the open source "GPS Toolkit" (GPSTk), it is possible to easily develop both simple code-based processing strategies as well as basic high accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP).
High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) during BARCA
Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.
2009-12-01
High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and the pressure-broadening effects that resulted from changes of the composition of synthetic vs. ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with the mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS is sufficiently stable to be used in flight without drying the air or calibrating in flight and the water corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
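Water vapour corrections of this kind combine a dilution term with an empirical pressure-broadening term, usually written as a quadratic in the reported H2O mole fraction. The coefficients below are of the order published for CRDS analyzers, not the instrument-specific values calibrated in this study:

```python
def co2_dry(co2_wet_ppm, h2o_percent, a=-0.012, b=-2.6e-4):
    """Convert a wet CO2 mole fraction to dry air using the reported
    H2O (in percent): CO2_wet/CO2_dry = 1 + a*H2O + b*H2O**2."""
    return co2_wet_ppm / (1.0 + a * h2o_percent + b * h2o_percent**2)

corrected = co2_dry(395.0, 1.5)   # 395 ppm measured wet at 1.5 % water vapour
```

The correction is several ppm at typical tropical humidities, far larger than the 0.05 ppm inter-instrument agreement reported above, which is why its laboratory characterisation matters.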
Investigation into the Accuracy of Colours Reproduced by the Ricoh Printer
Directory of Open Access Journals (Sweden)
Andrius Gedvila
2013-02-01
Full Text Available The paper investigates the colour reproduction accuracy of the Ricoh Aficio 3006 colour printer. The study has been conducted by analyzing four-colour (CMYK) gradation curves – the compliance of zonal absorbance with standard references and the printing stability of gradation scales. The obtained colours have been measured spectrophotometrically, determining the CIE L*a*b* colour coordinates and the colour differences ΔE. Eight printing regimes and their settings have been examined. It has been found that the Ricoh printer reproduces colour gradation inaccurately. However, the quality of colour reproduction is sufficient for printing data not requiring high accuracy of colour reproduction. Colour gradation differs significantly from the theoretical values, though some regimes (Gamma, Brightness, CMYK simulation) allow the theoretical values to be approached. Despite the high inaccuracy of gradation, the differences in colour are not large, owing to corrections made by the software. Article in Lithuanian
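The colour difference ΔE referred to above is, in its simplest (CIE76) form, the Euclidean distance in CIE L*a*b* space; the paper does not state which ΔE variant was used, so this is an assumption, and the coordinate values are illustrative:

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean distance between two CIE L*a*b* colours."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

reference = (50.0, 10.0, -5.0)   # target L*, a*, b*
printed = (52.0, 12.0, -4.0)     # measured print
dE = delta_e_cie76(reference, printed)
```

As a rough rule of thumb, ΔE values near 1 are barely perceptible, while values of a few units are noticeable but often acceptable for non-critical printing.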
A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform
Directory of Open Access Journals (Sweden)
Ming Yang
2011-12-01
Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm processes the location information from dead reckoning and EKF localization, and the result is used to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, and a weighted median algorithm processes each band separately. Experimental results show that the map produced with this method has high accuracy.
A new ultra-high-accuracy angle generator: current status and future direction
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize the propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.
Directory of Open Access Journals (Sweden)
Brohi Ali Anwar
2017-01-01
Full Text Available Entropy production in a 2-D heat transfer system has been analyzed systematically using the finite volume method, to develop new criteria for the numerical simulation of multidimensional systems with the aid of CFD codes. The steady-state heat conduction problem has been investigated for entropy production, and the entropy production profile has been calculated based upon the current approach. The results for 2-D heat conduction show that the entropy production profile agrees well with the exact solution and is stable, and that the current approach is effective for measuring the accuracy and stability of numerical simulations of heat transfer problems.
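For steady heat conduction the local volumetric entropy production rate is σ = k|∇T|²/T². A 1-D finite-difference sketch (the paper's 2-D finite-volume case is analogous; the conductivity and temperatures are illustrative). For a linear profile the integrated production must equal q(1/T_cold − 1/T_hot), which gives a built-in accuracy check of the kind the paper exploits:

```python
import numpy as np

k = 1.0                                   # thermal conductivity, W/(m K)
x = np.linspace(0.0, 1.0, 101)            # 1 m rod
T = 300.0 + 100.0 * x                     # linear profile: 300 K -> 400 K
dTdx = np.gradient(T, x)                  # exact (= 100 K/m) for a linear profile
sigma = k * dTdx**2 / T**2                # local entropy production, W/(K m^3)
total = float(np.sum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x)))  # trapezoid rule

# analytic check: q*(1/T_cold - 1/T_hot) with heat flux q = k*dT/dx = 100 W/m^2
expected = 100.0 * (1.0 / 300.0 - 1.0 / 400.0)   # = 1/12 W/(K m^2)
```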
Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems
International Nuclear Information System (INIS)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, Keith H.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa
2005-01-01
Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20 nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI), have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.
Bao, Kai
2013-01-01
The present work describes a parallel computational framework for CO2 sequestration simulation that couples reservoir simulation and molecular dynamics (MD) on massively parallel HPC systems. In this framework, a parallel reservoir simulator, Reservoir Simulation Toolbox (RST), solves the flow and transport equations that describe the subsurface flow behavior, while molecular dynamics simulations are performed to provide the required physical parameters. Numerous technologies from different fields are employed to make this novel coupled system work efficiently. One of the major applications of the framework is the modeling of large scale CO2 sequestration for long-term storage in subsurface geological formations, such as depleted reservoirs and deep saline aquifers, which has been proposed as one of the most attractive and practical solutions to reduce CO2 emissions and address the global-warming threat. To effectively solve such problems, fine grids and accurate prediction of the properties of fluid mixtures are essential for accuracy. In this work, CO2 sequestration is presented as our first example of coupling reservoir simulation and molecular dynamics, while the framework can be extended naturally to full multiphase multicomponent compositional flow simulation to handle more complicated physical processes in the future. Accuracy and scalability analyses are performed on an IBM BlueGene/P and on an IBM BlueGene/Q, the latest IBM supercomputer. Results show good accuracy of our MD simulations compared with published data, and good scalability is observed on the massively parallel HPC systems. The performance and capacity of the proposed framework are well demonstrated with several experiments with hundreds of millions to a billion cells. To the best of our knowledge, this work represents the first attempt to couple reservoir simulation and molecular simulation for large scale modeling. Due to the complexity of the subsurface systems
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can be prohibitive in the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model of a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
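As an illustration of the linear variant described above, the sketch below combines PCA (computed via SVD) with Gaussian radial basis function interpolation of the reduced coefficients over the design parameters. It is a minimal reconstruction of the general idea, not the thesis' implementation; the function names, the Gaussian kernel, and the width parameter `eps` are choices made here for illustration.

```python
import numpy as np

def build_rosm(params, snapshots, n_modes=2, eps=1.0):
    """Fit a linear reduced order surrogate model: PCA compression of the
    snapshot matrix plus Gaussian RBF interpolation of the reduced
    coefficients over the design parameters."""
    mean = snapshots.mean(axis=0)
    _, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = Vt[:n_modes]                        # principal directions
    coeffs = (snapshots - mean) @ modes.T       # reduced coordinates
    # Gaussian RBF interpolation weights: solve Phi @ W = coeffs
    d = np.linalg.norm(params[:, None, :] - params[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)
    W = np.linalg.solve(Phi, coeffs)

    def predict(p):
        """Reconstruct the full field at a new parameter point p."""
        r = np.linalg.norm(params - p, axis=-1)
        phi = np.exp(-(eps * r) ** 2)
        return mean + (phi @ W) @ modes
    return predict
```

Because Gaussian RBF interpolation passes through its training data, the surrogate reproduces the training snapshots exactly (up to the PCA truncation) and interpolates smoothly in between.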
Numerical Simulation of Cyclic Thermodynamic Processes
DEFF Research Database (Denmark)
Andersen, Stig Kildegård
2006-01-01
This thesis is on numerical simulation of cyclic thermodynamic processes. A modelling approach and a method for finding periodic steady state solutions are described. Examples of applications are given in the form of four research papers. Stirling machines and pulse tube coolers are introduced...... and a brief overview of the current state of the art in methods for simulating such machines is presented. It was found that different simulation approaches, which model the machines with different levels of detail, currently coexist. Methods using many simplifications can be easy to use and can provide...... models flexible and easy to modify, and to make simulations fast. A high level of accuracy was achieved for integrations of a model created using the modelling approach; the accuracy depended on the settings for the numerical solvers in a very predictable way. Selection of fast numerical algorithms...
Energy Technology Data Exchange (ETDEWEB)
Chaharmiri, Rasoul; Arezoodar, Alireza Fallahi [Amirkabir University, Tehran (Iran, Islamic Republic of)
2016-05-15
Electromagnetic forming (EMF) is a high strain rate forming technology which can effectively deform and shape highly electrically conductive materials at room temperature. In this study, the electromagnetic and mechanical parts of the process were simulated using the Maxwell and ABAQUS software, respectively. To provide a link between the two programs, two coupling approaches, 'loose' and 'sequential', were applied. This paper investigates how sequential coupling affects the accuracy of the radial displacement, as an indicator of the tube's final shape, at various discharge voltages. The results indicated good agreement for both approaches at lower discharge voltages, with more accurate results for sequential coupling; at high discharge voltages, however, the loose coupling showed a non-negligible overestimation of about 43%, which was reduced to only an 8.2% difference by applying sequential coupling in the case studied. Therefore, in order to reach more accurate predictions, applying sequential coupling, especially at higher discharge voltages, is strongly recommended.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.
Qi, Jun; Liu, Guo-Ping
2017-11-06
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is a radio frequency (RF) module, which is used only for time synchronization between different nodes, with an accuracy of up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The highest precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments for a moving robot, when the UIPS works on the line-of-sight (LOS) signal.
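The final step, computing the target position from the TOF-derived ranges and the beacon coordinates, is a standard multilateration problem. The sketch below is a generic linearized least-squares solver, not the authors' code; the beacon layout and the speed-of-sound constant are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed)

def tof_to_distance(tof_s):
    """Convert a measured time-of-flight (seconds) to a range (metres)."""
    return SPEED_OF_SOUND * tof_s

def trilaterate(beacons, dists):
    """Least-squares target position from beacon coordinates and ranges.
    Subtracting the first range equation from the others linearizes the
    problem: 2*(b_i - b_0) . x = |b_i|^2 - |b_0|^2 - d_i^2 + d_0^2."""
    b0, d0 = beacons[0], dists[0]
    A = 2.0 * (beacons[1:] - b0)
    rhs = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(b0 ** 2)
           - dists[1:] ** 2 + d0 ** 2)
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos
```

With more beacons than coordinates, the least-squares solution also averages out some of the pseudo-range noise.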
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems to provide a large multiplex, positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of structural defects in the wound fibre image bundle on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
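The first moment (centre-of-mass) centroid used above is straightforward to state in code. This is a generic sketch with an assumed background threshold, not the paper's exact estimator:

```python
import numpy as np

def first_moment_centroid(img, threshold=0.0):
    """First-moment (centre-of-mass) centroid of a spot image, returned
    as (row, col). Pixels at or below `threshold` are zeroed to suppress
    background noise."""
    w = np.where(img > threshold, img, 0.0)
    rows, cols = np.indices(img.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total
```

For a well-sampled Gaussian spot far from the image edges, this estimator recovers the spot centre to a small fraction of a pixel; bundle defects such as tile angle and gap spacing perturb the sampled intensities and hence the first moment.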
Assessing phylogenetic accuracy : a simulation study
Heijerman, T.
1995-01-01
A simulation model of phylogeny, called GENESIS, was developed to evaluate and to estimate the qualities of various numerical taxonomic procedures. The model produces sets of imaginary species with known character state distributions and with known phylogenies. The model can be made to
Optimal Kinematic Design of a 6-UCU Kind Gough-Stewart Platform with a Guaranteed Given Accuracy
Directory of Open Access Journals (Sweden)
Guojun Liu
2018-06-01
Full Text Available The 6-UCU (U-universal joint; C-cylinder joint) kind Gough-Stewart platform is extensively employed in motion simulators due to its high accuracy, large payload, and high-speed capability. However, because of manufacturing and assembly errors, the real geometry may differ from the nominal one. In the design process of a high-accuracy Gough-Stewart platform, one needs to consider these errors. The purpose of this paper is to propose an optimal design method for the 6-UCU kind Gough-Stewart platform with a guaranteed given accuracy. Accuracy analysis of the 6-UCU kind Gough-Stewart platform is presented by considering limb length errors and joint position errors. An optimal design method is proposed using a multi-objective evolutionary algorithm, the non-dominated sorting genetic algorithm II (NSGA-II). A set of Pareto-optimal parameters was found by applying the proposed optimal design method. An engineering design case was studied to verify the effectiveness of the proposed method.
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
Modelling and simulation of process control systems for WWER
Energy Technology Data Exchange (ETDEWEB)
Pangelov, N [Energoproekt, Sofia (Bulgaria)
1996-12-31
A dynamic modelling method for the simulation of process control systems has been developed (a method for identification). It is based on the least squares method and efficient linear continuous-time differential equations. The method has the following advantages: there are no significant limitations on the type of input/output signals or on the length of the data time series; identification at nonzero initial conditions is possible; on-line identification is possible; and high accuracy is maintained in the presence of noise. On the basis of real experiments and data time series simulated with known computer codes, it is possible to construct highly efficient models of different systems for solving the following problems: real time simulation with high accuracy for training purposes; estimation of immeasurable parameters important to safety; malfunction diagnostics based on plant dynamics; prediction of dynamic behaviour; and control vector estimation in a regime adviser. Two real applications of this method are described: dynamic behaviour modelling of the steam generator level, and the creation of a Process Control System Simulator (PCSS) based on KASKAD-2 for the WWER-1000 units of the Kozloduy NPP. 6 refs., 8 figs.
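Least-squares identification of a linear dynamic model from input/output time series, of the kind described above, can be sketched as follows. This discrete-time ARX formulation is an illustrative stand-in for the paper's continuous-time method; model orders and names are assumptions made here.

```python
import numpy as np

def identify_arx(u, y, na=2, nb=2):
    """Least-squares ARX identification of
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    from input u and output y time series. Returns [a1..a_na, b1..b_nb]."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        row = [-y[k - i] for i in range(1, na + 1)]
        row += [u[k - i] for i in range(1, nb + 1)]
        rows.append(row)
        targets.append(y[k])
    # Overdetermined linear system solved in the least-squares sense,
    # so measurement noise is averaged out rather than fitted exactly.
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta
```

On noise-free data generated by a stable second-order system the recovered coefficients match the true ones essentially to machine precision; with noisy data the least-squares fit degrades gracefully, which is the robustness property the abstract highlights.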
Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel
Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie
2011-05-01
Mechanical grinding of artificial diamond grinding wheels is a traditional wheel dressing process, in which the rotational speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high-accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a large number of experiments on a super-hard material wheel dressing grinding machine and by analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkled-granule sharpening were compared. These analyses and experiments provide useful guidance for the accurate crush dressing of artificial diamond grinding wheels.
International Nuclear Information System (INIS)
Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.
2009-01-01
We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.
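The geometry above fixes two useful quantities: the in-plane period of the standing evanescent wave (the node spacing) and the penetration depth swept by the incidence angle. The functions below implement the textbook total-internal-reflection expressions for a generic dense-to-rare interface; they are an illustrative aid, not the authors' specific configuration, and the depth is given by the 1/e field-amplitude convention (the intensity depth is half of this).

```python
import math

def fringe_period(wavelength, n, theta_deg):
    """In-plane period of the standing wave formed by two
    counter-propagating evanescent waves:
    Lambda = lambda / (2 * n * sin(theta))."""
    return wavelength / (2.0 * n * math.sin(math.radians(theta_deg)))

def penetration_depth(wavelength, n, theta_deg, n2=1.0):
    """1/e field-amplitude decay depth of the evanescent wave:
    d = lambda / (2 * pi * sqrt(n^2 sin^2(theta) - n2^2)).
    Requires total internal reflection (n sin(theta) > n2)."""
    s = n * math.sin(math.radians(theta_deg))
    if s <= n2:
        raise ValueError("angle below the critical angle; no TIR")
    return wavelength / (2.0 * math.pi * math.sqrt(s * s - n2 * n2))
```

Both quantities depend on the incidence angle, which is why sweeping the angle (and the relative phase of the beams) moves the nodes and tunes the vertical resolution as described.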
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, also allows solving large-size problems approximation-free. In the case of a modal representation, we achieve this by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.
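The kind of complexity reduction FRIM exploits, replacing a dense O(N^3) decomposition with fast transforms applied inside an iteration, can be illustrated on a toy example: applying a circulant (convolution-type) operator via the FFT in O(N log N) instead of forming the N x N matrix. This is only an analogy to the idea, not the FRIM algorithm itself.

```python
import numpy as np

def apply_circulant_fft(kernel, x):
    """Apply the circulant operator C (first column `kernel`) to x in
    O(N log N) via the convolution theorem, instead of an O(N^2)
    dense matrix-vector product or an O(N^3) eigendecomposition of C."""
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)))
```

An iterative solver that only needs matrix-vector products (e.g. a Krylov method) can use such a fast apply directly, which is how the per-iteration cost stays quasi-linear in the number of modes.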
Validation of three-dimensional micro injection molding simulation accuracy
DEFF Research Database (Denmark)
Tosello, Guido; Costa, F.S.; Hansen, Hans Nørgaard
2011-01-01
length, injection pressure profile, molding mass and flow pattern. The importance of calibrated micro molding process monitoring for an accurate implementation strategy of the simulation and its validation has been demonstrated. In fact, inconsistencies and uncertainties in the experimental data must...... be minimized to avoid introducing uncertainties in the simulation calculations. Simulations of bulky sub-100 milligrams micro molded parts have been validated and a methodology for accurate micro molding simulations was established....
Shock Mechanism Analysis and Simulation of High-Power Hydraulic Shock Wave Simulator
Directory of Open Access Journals (Sweden)
Xiaoqiu Xu
2017-01-01
Full Text Available The simulation of a regular shock wave (e.g., half-sine) can be achieved by a traditional rubber shock simulator, but a practical high-power shock wave, characterized by a steep pre-peak and a gentle post-peak, is hard to realize with the same device. To tackle this disadvantage, a novel high-power hydraulic shock wave simulator based on the live firing muzzle shock principle is proposed in the current work. The influence of the typical shock characteristic parameters on the shock force wave was investigated via both theoretical deduction and software simulation. Comparing the obtained data with the results, it can be concluded that the developed hydraulic shock wave simulator can be applied to simulate the real conditions of the shocking system. Further, a similarity evaluation of the shock wave simulation was carried out based on the curvature distance, and the results indicated that the simulation method is reasonable and that structural optimization based on software simulation is also beneficial to increasing efficiency. Finally, the combination of theoretical analysis and simulation for the development of an artillery recoil tester is a comprehensive approach to the design and structural optimization of the recoil system.
Optimal design of a high accuracy photoelectric auto-collimator based on position sensitive detector
Yan, Pei-pei; Yang, Yong-qing; She, Wen-ji; Liu, Kai; Jiang, Kai; Duan, Jing; Shan, Qiusha
2018-02-01
A high-accuracy photoelectric autocollimator based on a PSD was designed. The integrated structure is composed of a light source, an optical lens group, a position sensitive detector (PSD) sensor, and its hardware and software processing system. A telephoto objective optical design was chosen, which effectively reduces the length, weight and volume of the optical system; simulation-based design and analysis of the autocollimator optical system were also carried out. The technical indicators of the autocollimator presented in this paper are: measuring resolution better than 0.05″; field of view 2ω = 0.4° × 0.4°; measuring range ±5′; full-range measurement error less than 0.2″. The measuring distance is 10 m, which is applicable to small-angle precision measurement environments. Aberration analysis indicates that the MTF is close to the diffraction limit and that the spots in the spot diagram are much smaller than the Airy disk. Through optimization of the opto-mechanical structure, the total length of the telephoto lens is only 450 mm. The autocollimator is notably compact while the image quality is maintained.
High accuracy of family history of melanoma in Danish melanoma cases
DEFF Research Database (Denmark)
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-01-01
The incidence of melanoma in Denmark has immensely increased over the last 10 years making Denmark a high risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine if a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimen. These lead to measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: 1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; 2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; 3) the three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method yields good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The method has also been applied in tensile experiments to obtain high-accuracy results.
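Tracking the compensation panel's motion requires estimating the rigid transformation between the panel's known marker positions and their observed positions. A standard way to do this, given here as a sketch rather than the authors' specific coordinate transform algorithm, is the SVD-based (Kabsch) fit:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t such that dst ≈ src @ R.T + t,
    via the SVD-based (Kabsch) solution on centred point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against an improper (reflected) fit
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

With at least three non-collinear markers the pose is fully determined, and additional markers average out marker localization noise in the least-squares sense.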
Localisation accuracy of semi-dense monocular SLAM
Schreve, Kristiaan; du Plessies, Pieter G.; Rätsch, Matthias
2017-06-01
Understanding the factors that influence the accuracy of visual SLAM algorithms is very important for the future development of these algorithms, yet so far very few studies have done this. In this paper, a simulation model is presented and used to investigate the effect of the number of scene points tracked, the effect of the baseline length in triangulation, and the influence of image point location uncertainty. It is shown that the latter is very critical, while the others all play important roles. Experiments with a well known semi-dense visual SLAM approach, used in a monocular visual odometry mode, are also presented. The experiments show that not including sensor bias and scale factor uncertainty is very detrimental to the accuracy of the simulation results.
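The interplay of baseline length and image point uncertainty can be probed with a small Monte Carlo experiment. The rectified-stereo model below (depth z = f·B/d with Gaussian pixel noise on the disparity d) is a deliberate simplification chosen here for illustration, not the paper's simulation model; all parameter values are assumptions.

```python
import numpy as np

def depth_error_std(baseline, depth=10.0, f=800.0, pix_sigma=0.5,
                    n_trials=2000, seed=0):
    """Monte Carlo standard deviation of triangulated depth for a
    rectified stereo pair: z = f*B/d, with Gaussian noise of `pix_sigma`
    pixels on the true disparity d."""
    rng = np.random.default_rng(seed)
    d_true = f * baseline / depth          # true disparity in pixels
    d_noisy = d_true + rng.normal(0.0, pix_sigma, n_trials)
    return np.std(f * baseline / d_noisy)
```

Because the true disparity grows linearly with the baseline while the pixel noise stays fixed, the relative depth error shrinks roughly as 1/B, which is the qualitative triangulation effect the study investigates.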
International Nuclear Information System (INIS)
Lopatta, E.; Liesenfeld, S.M.; Bank, P.; Guenther, R.; Wiezorek, T.; Wendt, T.G.; Wurm, R.
2003-01-01
Background: For high precision radiotherapy of the neurocranium, a precise, reproducible positioning technique is the basic prerequisite. The aim of this study was to assess the influence of a modification of the commercially available stereotactical BrainLab™ head mask system on the accuracy of patient positioning during fractionated radiotherapy. Material and Methods: 29 patients were treated with stereotactic radiotherapy of the head. Immobilization was provided by a two-layer thermoplastic mask system (BrainLab™). 18 of these patients received an additional custom made fixation of either the upper jaw (OKF) or the mandibula (UKF). The positioning accuracy was assessed by measuring the shift of anatomical landmarks relative to the rigid mask system on biplanar simulator films using a digital imaging system. Before each measurement, a fine adjustment of the simulator to an optical ring system was performed. The reference radiographs were taken just before CT planning. During a course of radiotherapy lasting 2-7 weeks, displacement measurements relative to the reference images were made once a week in all three dimensions (z, y and x). In 29 patients, 844 measurements were analyzed. Results: An additional jaw fixation improves the reproducibility of patient positioning significantly in all three spatial dimensions. The standard deviation in the lateral direction (x) was 0.6 mm with jaw fixation vs. 0.7 mm without jaw fixation (p
Obert, Martin; Kubelt, Carolin; Schaaf, Thomas; Dassinger, Benjamin; Grams, Astrid; Gizewski, Elke R; Krombach, Gabriele A; Verhoff, Marcel A
2013-05-10
The objective of this article was to explore age-at-death estimates in forensic medicine, which were methodically based on age-dependent, radiologically defined bone-density (HC) decay and which were investigated with a standard clinical computed tomography (CT) system. Such density decay was formerly discovered with a high-resolution flat-panel CT in the skulls of adult females. The development of a standard CT methodology for age estimations--with thousands of installations--would have the advantage of being applicable everywhere, whereas only few flat-panel prototype CT systems are in use worldwide. A Multi-Slice CT scanner (MSCT) was used to obtain 22,773 images from 173 European human skulls (89 male, 84 female), taken from a population of patients from the Department of Neuroradiology at the University Hospital Giessen and Marburg during 2010 and 2011. An automated image analysis was carried out to evaluate HC of all images. The age dependence of HC was studied by correlation analysis. The prediction accuracy of age-at-death estimates was calculated. Computer simulations were carried out to explore the influence of noise on the accuracy of age predictions. Human skull HC values strongly scatter as a function of age for both sexes. Adult male skull bone-density remains constant during lifetime. Adult female HC decays during lifetime, as indicated by a correlation coefficient (CC) of -0.53. Prediction errors for age-at-death estimates for both of the used scanners are in the range of ±18 years at a 75% confidence interval (CI). Computer simulations indicate that this is the best that can be expected for such noisy data. Our results indicate that HC-decay is indeed present in adult females and that it can be demonstrated both by standard and by high-resolution CT methods, applied to different subject groups of an identical population. The weak correlation between HC and age found by both CT methods only enables a method to estimate age-at-death with limited
High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry
Directory of Open Access Journals (Sweden)
Guy Jean-Pierre Schumann
2016-01-01
Full Text Available Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet basic requirements, and for many services and modeling studies they are not of high enough spatial resolution and lack vertical accuracy. On the other hand, LiDAR DEMs are of very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be a DEM technology that allows larger spatial coverage than LiDAR without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m at an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis of floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of instrument and data processing.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
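The idea of trading expensive database queries for Taylor extrapolation can be shown on a scalar toy function. The sketch below compares first- and second-order extrapolation from a single expansion point; it illustrates the accuracy gain of the second-order scheme in general, not the paper's driving-force formulation or its M-slope approach.

```python
def taylor_extrapolate(f, df, d2f, x0, x, order=2):
    """First- or second-order Taylor extrapolation of f about x0,
    standing in for re-evaluating an expensive (e.g. thermodynamic)
    function at every nearby state."""
    dx = x - x0
    est = f(x0) + df(x0) * dx      # first-order term
    if order == 2:
        est += 0.5 * d2f(x0) * dx * dx   # second-order correction
    return est
```

For a smooth function the first-order error scales as O(dx^2) while the second-order error scales as O(dx^3), which is why the second-order scheme tracks the exact solution more closely at the same extrapolation distance.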
THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST
Energy Technology Data Exchange (ETDEWEB)
Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others
2016-12-20
Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions including: gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment confirms that, if adequately designed in accordance with our proposed common parameters, results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.
Hansen, David C; Seco, Joao; Sørensen, Thomas Sangild; Petersen, Jørgen Breede Baltzer; Wildberger, Joachim E; Verhaegen, Frank; Landry, Guillaume
2015-01-01
Accurate stopping power estimation is crucial for treatment planning in proton therapy, and the uncertainties in stopping power are currently the largest contributor to the employed dose margins. Dual energy x-ray computed tomography (CT) (clinically available) and proton CT (in development) have both been proposed as methods for obtaining patient stopping power maps. The purpose of this work was to assess the accuracy of proton CT, using dual energy CT scans of phantoms to establish reference accuracy levels. A CT calibration phantom and an abdomen cross-section phantom containing inserts were scanned with dual energy and single energy CT on a state-of-the-art dual energy CT scanner. Proton CT scans were simulated using Monte Carlo methods. The simulations followed the setup used in current prototype proton CT scanners and included realistic modeling of detectors and the corresponding noise characteristics. Stopping power maps were calculated for all three scans and compared with the ground truth stopping power from the phantoms. Proton CT gave slightly better stopping power estimates than the dual energy CT method, with root mean square errors of 0.2% and 0.5% (for the two phantoms, respectively) compared to 0.5% and 0.9%. Single energy CT root mean square errors were 2.7% and 1.6%. Maximal errors for proton, dual energy and single energy CT were 0.51%, 1.7% and 7.4%, respectively. Better stopping power estimates could significantly reduce the range errors in proton therapy, but this would require a large improvement over current methods, which may be achievable with proton CT.
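The accuracy comparison above reduces to computing relative root-mean-square and maximal errors between an estimated stopping power map and the phantom ground truth. A minimal sketch of those metrics (the insert values are made up for illustration, not the study's data):

```python
import math

def stopping_power_errors(estimated, ground_truth):
    """Relative errors (%) between estimated and reference stopping powers."""
    rel = [100.0 * (e - g) / g for e, g in zip(estimated, ground_truth)]
    rmse = math.sqrt(sum(r * r for r in rel) / len(rel))
    max_err = max(abs(r) for r in rel)
    return rmse, max_err

# Hypothetical relative stopping powers for four phantom inserts
truth = [1.000, 1.050, 0.980, 1.100]
proton_ct = [1.002, 1.048, 0.978, 1.103]
rmse, max_err = stopping_power_errors(proton_ct, truth)
print(f"RMSE = {rmse:.2f}%, max = {max_err:.2f}%")
```

The same two numbers (RMSE and maximal error) are what the abstract reports for each modality.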
Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan
2013-03-01
Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in simulators developed in-house. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of the tested simulators simulate trap-assisted generation-recombination (GR) noise accurately. In order to overcome the aforementioned problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the SCILAB mathematical software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy, and hence demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether such a system can achieve the accuracy needed for demanding interventions. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance point and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
Estimating the Accuracy of the Return on Investment (ROI) Performance Evaluations
Directory of Open Access Journals (Sweden)
Alexei Botchkarev
2015-12-01
Full Text Available Return on Investment (ROI) is one of the most popular performance measurement and evaluation metrics. ROI analysis (when applied correctly) is a powerful tool for comparing solutions and making informed decisions on the acquisition of information systems. The purpose of this study is to provide a systematic investigation of the accuracy of ROI evaluations in the context of information systems implementations. Measurement theory and error analysis, specifically propagation-of-uncertainty methods, were used to derive analytical expressions for ROI errors. Monte Carlo simulation methodology was used to design and deliver a quantitative experiment to model cost and return estimating errors and calculate ROI accuracies. Spreadsheet simulation (Microsoft Excel spreadsheets enhanced with Visual Basic for Applications) was used to implement the Monte Carlo simulations. The main contribution of the study is that it is the first systematic effort to evaluate ROI accuracy. Analytical expressions have been derived for estimating the errors of ROI evaluations. Results of the Monte Carlo simulation will help practitioners make informed decisions based on explicitly stated factors influencing ROI uncertainties.
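The propagation-of-uncertainty step described above can be sketched for the simplest ROI definition, ROI = (R − C)/C with returns R and costs C: the first-order error follows from the partial derivatives ∂ROI/∂R = 1/C and ∂ROI/∂C = −R/C². The Monte Carlo cross-check below mirrors the paper's spreadsheet experiment in plain Python; the numbers and the assumption of independent Gaussian errors are illustrative, not taken from the study:

```python
import math, random

def roi_error(returns, costs, sigma_r, sigma_c):
    """First-order propagation of uncertainty for ROI = (R - C) / C."""
    roi = (returns - costs) / costs
    sigma = math.sqrt((sigma_r / costs) ** 2
                      + (returns * sigma_c / costs ** 2) ** 2)
    return roi, sigma

def roi_monte_carlo(returns, costs, sigma_r, sigma_c, n=200_000, seed=1):
    """Monte Carlo cross-check: sample R and C as independent Gaussians."""
    rng = random.Random(seed)
    samples = [(rng.gauss(returns, sigma_r) - c) / c
               for c in (rng.gauss(costs, sigma_c) for _ in range(n))]
    mean = sum(samples) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    return mean, sd

roi, sigma = roi_error(130.0, 100.0, sigma_r=5.0, sigma_c=4.0)
mc_mean, mc_sd = roi_monte_carlo(130.0, 100.0, 5.0, 4.0)
```

For small relative errors the analytical sigma and the Monte Carlo standard deviation agree closely; the simulation also exposes the slight bias that the linearized formula misses.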
Energy Technology Data Exchange (ETDEWEB)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
2017-02-01
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
Assessing phylogenetic accuracy : a simulation study
Heijerman, T.
1995-01-01
A simulation model of phylogeny, called GENESIS, was developed to evaluate and to estimate the qualities of various numerical taxonomic procedures. The model produces sets of imaginary species with known character state distributions and with known phylogenies. The model can be made to produce these species and their phylogenies under different evolutionary conditions.
Within GENESIS, there are two mathematical models that describe the diversification of the number of taxa. T...
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise the simulation parameters and the complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally, we assess the quality of the data synthesized from the mesh models by comparison with scanner data, as well as with synthetic data from simple geometric models and from simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models, although the results are quite robust to the mesh resolution.
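The random-walk engine behind such synthetic measurements can be sketched in a few lines. The version below is an unrestricted lattice walk (no mesh boundaries, unlike the paper's models) and checks the textbook result that the mean squared displacement of free diffusion grows linearly with the number of steps:

```python
import random

def random_walk_msd(n_walkers=2000, n_steps=200, step=1.0, seed=7):
    """Free 3-D lattice random walk.

    For unrestricted diffusion the mean squared displacement approaches
    n_steps * step**2; restricting the walk (e.g. inside a tissue mesh)
    is what makes the synthetic diffusion MRI signal informative.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = y = z = 0.0
        for _ in range(n_steps):
            # pick a random axis and direction (simple lattice walk)
            axis, sign = rng.randrange(3), rng.choice((-1.0, 1.0))
            if axis == 0:
                x += sign * step
            elif axis == 1:
                y += sign * step
            else:
                z += sign * step
        total += x * x + y * y + z * z
    return total / n_walkers
```

With the defaults the estimate should fall close to 200 (steps × step²); deviations from this free-diffusion law are exactly what restricted geometries produce.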
High accuracy of family history of melanoma in Danish melanoma cases.
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-12-01
The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first-degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first-degree relatives. In the 181 probands we validated the negative family history of melanoma in 748 first-degree relatives and found only 1 case of melanoma, which was not reported, in a 3-case melanoma family. Melanoma patients in Denmark report family history of melanoma in first- and second-degree relatives with a high level of accuracy, with a true positive predictive value between 77 and 87%. In 99% of probands reporting a negative family history of melanoma in first-degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives be verified if possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and the assessment of melanoma risk in the family.
Fission product model for BWR analysis with improved accuracy in high burnup
International Nuclear Information System (INIS)
Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira
1998-01-01
A new fission product (FP) chain model has been studied for use in BWR lattice calculations. In establishing the model, two requirements, i.e. accuracy in predicting burnup reactivity and ease of practical application, were considered simultaneously. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model can offer a high degree of accuracy for FP representation in BWR lattice calculations. (author)
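The idea of a lumped pseudo-nuclide can be illustrated with a one-group sketch: collapse several explicit nuclides into a single one whose macroscopic absorption matches the sum of the originals. The numbers below are hypothetical; the actual model fits burnup-independent cross sections over many operating conditions, which this sketch does not attempt:

```python
def lump_nuclides(number_densities, micro_xs):
    """Collapse several FP nuclides into one pseudo-nuclide.

    Returns (N_lump, sigma_lump) such that N_lump * sigma_lump equals the
    summed macroscopic absorption sum_i N_i * sigma_i of the originals.
    """
    n_lump = sum(number_densities)
    sigma_macro = sum(n * s for n, s in zip(number_densities, micro_xs))
    return n_lump, sigma_macro / n_lump

# Illustrative one-group data: number densities (atoms/b-cm), sigma_a (barns)
n_i = [1.0e-5, 4.0e-6, 2.5e-6]
sig_i = [50.0, 200.0, 15.0]
n_lump, sig_lump = lump_nuclides(n_i, sig_i)
```

The reactivity effect of the lump is preserved by construction; the hard part, addressed in the paper, is choosing sigma_lump so it stays valid across burnup histories.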
Dehnavi, E; Mahyari, S Ansari; Schenkel, F S; Sargolzaei, M
2018-06-01
Using cow data in the training population is attractive as a way to mitigate bias due to highly selected training bulls and to implement genomic selection in countries with no or limited proven-bull data. However, one potential issue with cow data is bias due to preferential treatment. The objectives of this study were to (1) investigate the effect of including cow genotype and phenotype data in the training population on the accuracy and bias of genomic predictions and (2) assess the effect of preferential treatment for different proportions of elite cows. First, a 4-pathway Holstein dairy cattle population was simulated for 2 traits with low (0.05) and moderate (0.3) heritability. Then different numbers of cows (0, 2,500, 5,000, 10,000, 15,000, or 20,000) were randomly selected and added to the training group composed of different numbers of top bulls (0, 2,500, 5,000, 10,000, or 15,000). Reliability levels of de-regressed estimated breeding values for training cows and bulls were 30 and 75% for the trait with low heritability and 60 and 90% for the trait with moderate heritability, respectively. Preferential treatment was simulated by introducing an upward bias equal to 35% of the phenotypic variance to 5, 10, and 20% of elite bull dams in each scenario. Two different validation data sets were considered: (1) all animals in the last generation of both elite and commercial tiers (n = 42,000) and (2) only animals in the last generation of the elite tier (n = 12,000). Adding cow data to the training population led to an increase in accuracy (r) and a decrease in bias of genomic predictions in all considered scenarios without preferential treatment. The gain in r was higher for the low-heritability trait (from 0.004 to 0.166 r points) compared with the moderate-heritability trait (from 0.004 to 0.116 r points). The gain in accuracy in scenarios with a lower number of training bulls was relatively higher (from 0.093 to 0.166 r points) than with a higher number of training
Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung
2018-01-30
Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and 8 trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at 3 institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode, in both high-resolution pressure topography (HRPT) and conventional line tracing formats, were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the κ value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8%, and the rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than for distal esophageal spasm and jackhammer esophagus.
Energy Technology Data Exchange (ETDEWEB)
Muennich, A.
2007-03-26
The International Linear Collider (ILC) is planned to be the next large accelerator. The ILC will be able to perform high-precision measurements that are only possible in the clean environment of electron-positron collisions. In order to reach this high accuracy, the requirements for the detector performance are challenging. Several detector concepts are currently under study. Understanding the detector and its performance will be crucial to extract the desired physics results from the data. To optimise the detector design, simulation studies are needed. Simulation packages like GEANT4 make it possible to model the detector geometry and simulate the energy deposit in the different materials. However, the detector response, taking into account the transport of the produced charge to the readout devices and the effects of the readout electronics, cannot be described in detail. These processes in the detector will change the measured position of the energy deposit relative to the point of origin. The determination of this detector response is the task of detailed simulation studies, which have to be carried out for each subdetector. A high-resolution Time Projection Chamber (TPC) with gas amplification based on micro-pattern gas detectors is one of the options for the main tracking system at the ILC. In the present thesis a detailed simulation tool to study the performance of a TPC was developed. Its goal is to find the optimal settings to reach an excellent momentum and spatial resolution. After an introduction to the present status of particle physics and the ILC project, with special focus on the TPC as central tracker, the simulation framework is presented. The basic simulation methods and implemented processes are introduced. Within this stand-alone simulation framework each electron produced by primary ionisation is transported through the gas volume and amplified using Gas Electron Multipliers (GEMs). The output format of the simulation is identical to the raw data from a
Geant4 electromagnetic physics for high statistic simulation of LHC experiments
Allison, J; Bagulya, A; Champion, C; Elles, S; Garay, F; Grichine, V; Howard, A; Incerti, S; Ivanchenko, V; Jacquemier, J; Maire, M; Mantero, A; Nieminen, P; Pandola, L; Santin, G; Sawkey, D; Schalicke, A; Urban, L
2012-01-01
An overview of the current status of the electromagnetic (EM) physics of the Geant4 toolkit is presented. Recent improvements are focused on the performance of large-scale production for the LHC and on the precision of simulation results over a wide energy range. Significant efforts have been made to improve the accuracy without compromising CPU speed for EM particle transport. New biasing options have been introduced, which are applicable to any EM process. These include algorithms to enhance and suppress processes, force interactions or split secondary particles. It is shown that the performance of the EM sub-package is improved. We also report extensions of the testing suite allowing high-statistics validation of EM physics. It includes validation of multiple scattering, bremsstrahlung and other models. Cross-checks between standard and low-energy EM models have been performed using evaluated data libraries and reference benchmark results.
DIRECT GEOREFERENCING: A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING
Directory of Open Access Journals (Sweden)
A. Rizaldy
2012-07-01
Full Text Available Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method does not require ground control points (GCP) or aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the old method, this method has three main advantages: faster data processing, a simpler workflow and a less expensive project, at the same accuracy. Direct georeferencing uses two devices, GPS and IMU. The GPS records the camera coordinates (X, Y, Z) and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps in a photogrammetric project, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, while the GPS/IMU was the IGI AeroControl. Nineteen independent check points (ICP) were used to determine the accuracy. The horizontal accuracy is 0.356 meters and the vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2.500 map scale project.
DEFF Research Database (Denmark)
Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen
2009-01-01
Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling in cell culture, we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...
High-speed extended-term time-domain simulation for online cascading analysis of power system
Fu, Chuan
implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, the expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time-domain simulation software for supercomputers. The stiffness-decoupling method is able to combine the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness-detection method proposed herein, the stiffness can be captured. The expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task by scale, with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task along the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation along the time axis through a simulated sequence of cascading events. Of all the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent, and therefore minimum communication time is needed.
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size down to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with the two-critical-points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large
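The convergence procedure behind such a study, refining the mesh until the observable changes by less than some tolerance, can be sketched generically. The `simulate` callable below is a hypothetical stand-in with the second-order error typical of Yee-grid discretizations; it is not an FDTD solver:

```python
def converge(simulate, h0=1.0, ratio=2.0, tol=0.01, max_iter=12):
    """Refine mesh size h until the observable changes by < tol (relative)."""
    h, prev = h0, simulate(h0)
    for _ in range(max_iter):
        h /= ratio
        cur = simulate(h)
        if abs(cur - prev) <= tol * abs(cur):
            return h, cur  # converged to within tol
        prev = cur
    raise RuntimeError("no convergence within max_iter refinements")

# Toy stand-in for an FDTD run: observable with an O(h^2) discretization error
toy = lambda h: 1.0 + 0.5 * h * h
h, val = converge(toy)
```

With a second-order scheme, each halving of h cuts the error by four, so successive differences shrink quickly; the ~1% criterion mirrors the convergence target quoted in the abstract.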
Propagation Diagnostic Simulations Using High-Resolution Equatorial Plasma Bubble Simulations
Rino, C. L.; Carrano, C. S.; Yokoyama, T.
2017-12-01
In a recent paper, under review, equatorial plasma bubble (EPB) simulations were used to conduct a comparative analysis of EPB spectral characteristics against high-resolution in-situ measurements from the C/NOFS satellite. EPB realizations sampled in planes perpendicular to magnetic field lines provided well-defined EPB structure at altitudes penetrating both high- and low-density regions. The average C/NOFS structure in highly disturbed regions showed nearly identical two-component inverse-power-law spectral characteristics to the measured EPB structure. This paper describes the results of PWE simulations using the same two-dimensional cross-field EPB realizations. New Irregularity Parameter Estimation (IPE) diagnostics, which are based on two-dimensional equivalent-phase-screen theory [A theory of scintillation for two-component power law irregularity spectra: Overview and numerical results, by Charles Carrano and Charles Rino, DOI: 10.1002/2015RS005903], have been successfully applied to extract two-component inverse-power-law parameters from measured intensity spectra. The EPB simulations [Low and Midlatitude Ionospheric Plasma Density Irregularities and Their Effects on Geomagnetic Field, by Tatsuhiro Yokoyama and Claudia Stolle, DOI 10.1007/s11214-016-0295-7] have sufficient resolution to populate the structure scales (tens of km to hundreds of meters) that cause strong scintillation at GPS frequencies. The simulations provide an ideal geometry whereby the ramifications of varying structure along the propagation path can be investigated. It is well known that path-integrated one-dimensional spectra increase the one-dimensional index by one. The relation requires decorrelation along the propagation path. Correlated structure would be interpreted as stochastic total electron content (TEC). The simulations are performed with unmodified structure. Because the EPB structure is confined to the central region of the sample planes, edge effects are minimized. Consequently
High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be
2002-01-01
State-of-the-art, three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at a detailed theoretical description. A high-accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be, to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...
High Accuracy Beam Current Monitor System for CEBAF's Experimental Hall A
International Nuclear Information System (INIS)
J. Denard; A. Saha; G. Lavessiere
2001-01-01
The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental halls. In Hall A, all experiments require continuous, non-invasive current measurements, and a few experiments require an absolute accuracy of 0.2% in the current range from 1 to 180 µA. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 µA/V, but its offset drifts at the µA level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 µA of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising measurement accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described.
Implementation of angular response function modeling in SPECT simulations with GATE
International Nuclear Information System (INIS)
Descourt, P; Visvikis, D; Carlier, T; Bardies, M; Du, Y; Song, X; Frey, E C; Tsui, B M W; Buvat, I
2010-01-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)
Enhancing spatial detection accuracy for syndromic surveillance with street level incidence data
Directory of Open Access Journals (Sweden)
Alemi Farrokh
2010-01-01
Full Text Available Abstract Background The Department of Defense Military Health System operates a syndromic surveillance system that monitors medical records at more than 450 non-combat Military Treatment Facilities (MTFs) worldwide. The Electronic Surveillance System for Early Notification of Community-based Epidemics (ESSENCE) uses both temporal and spatial algorithms to detect disease outbreaks. This study focuses on spatial detection and attempts to improve the effectiveness of the ESSENCE implementation of the spatial scan statistic by increasing the spatial resolution of incidence data from zip codes to street address level. Methods Influenza-Like Illness (ILI) was used as a test syndrome to develop methods to improve the spatial accuracy of detected alerts. Simulated incident clusters of various sizes were superimposed on real ILI incidents from the 2008/2009 influenza season. Clusters were detected using the spatial scan statistic and their displacement from simulated loci was measured. Detected cluster size distributions were also evaluated for compliance with simulated cluster sizes. Results Relative to the ESSENCE zip code based method, clusters detected using street level incidents were displaced on average 65% less for 2 and 5 mile radius clusters and 31% less for 10 mile radius clusters. Detected cluster size distributions for the street address method were quasi-normal, and sizes tended to slightly exceed simulated radii. ESSENCE methods yielded fragmented distributions and had high rates of zero-radius and oversized clusters. Conclusions Spatial detection accuracy improved notably with regard to both location and size when incidents were geocoded to street addresses rather than zip code centroids. Since street address geocoding success rates were only 73.5%, zip codes were still used for more than one quarter of ILI cases. Thus, further advances in spatial detection accuracy are dependent on systematic improvements in the collection of individual
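The displacement metric in the study above (distance between a detected cluster center and its simulated locus, plus the relative reduction from street-level geocoding) can be sketched as follows; the haversine formula and the function names are illustrative assumptions, not part of the ESSENCE implementation.

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def displacement_miles(detected, simulated):
    """Great-circle distance in miles between a detected cluster center
    and its simulated locus, both given as (lat, lon) in degrees."""
    lat1, lon1 = map(math.radians, detected)
    lat2, lon2 = map(math.radians, simulated)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def relative_reduction(street_disp, zip_disp):
    """Fractional displacement reduction, e.g. 0.65 for '65% less'."""
    return 1.0 - street_disp / zip_disp
```

Averaging `relative_reduction` over many simulated clusters would reproduce the kind of 65%/31% figures the study reports.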
Inference of Altimeter Accuracy on Along-track Gravity Anomaly Recovery
Directory of Open Access Journals (Sweden)
LI Yang
2015-04-01
Full Text Available A correlation model between along-track gravity anomaly accuracy, spatial resolution and altimeter accuracy is proposed. This new model is based on along-track gravity anomaly recovery and resolution estimation. Firstly, an error propagation formula for the along-track gravity anomaly is derived from the principle of satellite altimetry. Then the mathematical relation between the SNR (signal-to-noise ratio) and the cross-spectral coherence is deduced. The analytical correlation between altimeter accuracy and spatial resolution is finally obtained from the results above. Numerical simulation results show that along-track gravity anomaly accuracy is proportional to altimeter accuracy, while spatial resolution has a power-law relation with altimeter accuracy: with altimeter accuracy improving m times, gravity anomaly accuracy improves m times while spatial resolution improves m^0.4644 times. This model is verified by real-world data.
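The two scaling laws quoted above can be written down directly; the function names are hypothetical, and the exponent 0.4644 is the value reported from the paper's numerical simulation.

```python
def gravity_anomaly_gain(m):
    """Gravity anomaly accuracy improves in direct proportion to an
    m-fold improvement in altimeter accuracy (linear relation)."""
    return m

def spatial_resolution_gain(m, exponent=0.4644):
    """Spatial resolution follows a power law: an m-fold altimeter
    improvement yields only an m**0.4644-fold resolution improvement."""
    return m ** exponent
```

For example, halving the altimeter noise (m = 2) halves the gravity anomaly error but improves resolution by only about a factor of 1.38.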
High-accuracy determination of the neutron flux at n{sub T}OF
Energy Technology Data Exchange (ETDEWEB)
Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. 
[Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. [Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. 
[Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)
2013-12-15
The neutron flux of the n{sub T}OF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n{sub T}OF neutron beam. This is a prerequisite for the high accuracy of cross section measurements at n{sub T}OF. An unexpected anomaly in the neutron-induced fission cross section of {sup 235}U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)
DEFF Research Database (Denmark)
Zhao, Ying; Pang, Xiaodan; Deng, Lei
2011-01-01
A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, a conventional frequency-to-power mapping technique is developed by performing a … 10^-3 relative error. This high-accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications.
Innovative Technique for High-Accuracy Remote Monitoring of Surface Water
Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.
2016-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability that other remote sensing methods lack, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty of unambiguously separating backscattered lidar signals from the water surface and from the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows non-contact measurement of water depths down to 1 cm, with accuracy and precision maintained from shallow to deep water, allowing shoreline charting, measurement of water volume, mapping of bottom topology, and identification of submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, providing potential applications currently unavailable to the community. The high laser pulse repetition rate allows very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measurement of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
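The basic time-of-flight geometry behind lidar bathymetry can be sketched as below; the refractive index value is an assumption, and the functions are illustrative rather than the authors' processing chain. The round trip for 1 cm of water is under 100 ps, which is why photon-counting timing resolution matters.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33      # assumed refractive index of water

def round_trip_time(depth_m):
    """Two-way travel time of a lidar pulse through water of given depth."""
    return 2.0 * depth_m * N_WATER / C

def depth_from_time(dt_s):
    """Recover depth in metres from the measured two-way travel time."""
    return C * dt_s / (2.0 * N_WATER)
```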
Hsu, Sam Sheng-Pin; Gateno, Jaime; Bell, R. Bryan; Hirsch, David L.; Markiewicz, Michael R.; Teichgraeber, John F.; Zhou, Xiaobo; Xia, James J.
2012-01-01
Purpose The purpose of this prospective multicenter study was to assess the accuracy of a computer-aided surgical simulation (CASS) protocol for orthognathic surgery. Materials and Methods The accuracy of the CASS protocol was assessed by comparing planned and postoperative outcomes of 65 consecutive patients enrolled from 3 centers. Computer-generated surgical splints were used for all patients. For the genioplasty, one center utilized computer-generated chin templates to reposition the chin segment only for patients with asymmetry. Standard intraoperative measurements were utilized without the chin templates for the remaining patients. The primary outcome measurements were linear and angular differences for the maxilla, mandible and chin when the planned and postoperative models were registered at the cranium. The secondary outcome measurements were: maxillary dental midline difference between the planned and postoperative positions; and linear and angular differences of the chin segment between the groups with and without the use of the template. The latter was measured when the planned and postoperative models were registered at the mandibular body. Statistical analyses were performed, and the accuracy was reported using the root mean square deviation (RMSD) and Bland and Altman's method for assessing measurement agreement. Results In the primary outcome measurements, there was no statistically significant difference among the 3 centers for the maxilla and mandible. The largest RMSD was 1.0 mm and 1.5° for the maxilla, and 1.1 mm and 1.8° for the mandible. For the chin, there was a statistically significant difference between the groups with and without the use of the chin template. The chin template group showed excellent accuracy, with a largest positional RMSD of 1.0 mm and a largest orientational RMSD of 2.2°. However, larger variances were observed in the group not using the chin template. This was significant in the anteroposterior and superoinferior directions, as in
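The RMSD figure used above is a standard aggregate over paired landmark coordinates; a minimal sketch (the function and its input layout are illustrative, not the study's software):

```python
import math

def rmsd(planned, actual):
    """Root mean square deviation between paired 3-D landmark
    coordinates (e.g. in mm): sqrt of the mean squared point-to-point
    distance between planned and postoperative positions."""
    devs = [sum((p - a) ** 2 for p, a in zip(pp, aa))
            for pp, aa in zip(planned, actual)]
    return math.sqrt(sum(devs) / len(devs))
```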
A generalized polynomial chaos based ensemble Kalman filter with high accuracy
International Nuclear Information System (INIS)
Li Jia; Xiu Dongbin
2009-01-01
As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that negatively affect the accuracy of EnKF. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
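For context, the classical stochastic EnKF analysis step that the paper accelerates can be sketched for a directly observed scalar state; the gPC surrogate itself (the paper's contribution) is not shown here, and the parameter values are arbitrary.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, rng):
    """Stochastic EnKF analysis step for a scalar state observed directly.
    Each member is pulled toward a perturbed copy of the observation by
    the Kalman gain built from the ensemble forecast variance."""
    p = ensemble.var(ddof=1)            # sample estimate of forecast variance
    gain = p / (p + obs_var)            # scalar Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), ensemble.size)
    return ensemble + gain * (perturbed_obs - ensemble)

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, 5000)      # forecast ensemble: N(0, 1)
posterior = enkf_update(prior, 2.0, 1.0, rng)
```

With equal forecast and observation variances the analysis mean sits halfway between the prior mean and the observation and the variance is halved; the residual sampling error shrinks with ensemble size, which is exactly what sampling a cheap gPC surrogate makes affordable.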
Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model
International Nuclear Information System (INIS)
Pope, Michael A.; Mousseau, Vincent A.
2009-01-01
The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately, as the workload of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle this larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to first order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than on a stability-based time step restriction such as the material Courant limit. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient.
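The first-order barrier of operator splitting is easy to see on a toy coupled linear system (a hypothetical stand-in for the flow/conduction/neutronics coupling, not the paper's model): each "physics" is advanced exactly, but sequentially, and the global error shrinks only linearly with the time step.

```python
import math

def step_split(y, dt):
    """One Lie-splitting step for y' = A1 y + A2 y with
    A1 = [[0, 1], [0, 0]] and A2 = [[0, 0], [-1, 0]]: each sub-flow is
    exact, but they are applied one after the other (explicit coupling)."""
    y0, y1 = y
    y0 = y0 + dt * y1      # exact flow of "physics 1"
    y1 = y1 - dt * y0      # exact flow of "physics 2", using updated y0
    return y0, y1

def split_error(dt, t_end=1.0):
    """Error at t_end versus the exact coupled solution, a rotation:
    y(t) = (cos t, -sin t) for y(0) = (1, 0)."""
    y = (1.0, 0.0)
    for _ in range(round(t_end / dt)):
        y = step_split(y, dt)
    return math.hypot(y[0] - math.cos(t_end), y[1] + math.sin(t_end))
```

Halving the step roughly halves the error (first order), whereas a single tightly coupled nonlinear solve of both equations per step would not carry this splitting error at all.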
Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model
International Nuclear Information System (INIS)
Vincent A. Mousseau; Michael A. Pope
2007-01-01
The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately, as the workload of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle the larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to first order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than on a stability-based time step restriction such as the material Courant limit. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient.
Virtual Learning Simulations in High School
DEFF Research Database (Denmark)
Thisgaard, Malene Warming; Makransky, Guido
2017-01-01
The present study compared the value of a virtual learning simulation with that of traditional lessons on the topic of evolution, and investigated whether the virtual learning simulation could serve as a catalyst for STEM academic and career development, based on social cognitive career theory. … The investigation was conducted using a crossover repeated measures design based on a sample of 128 high school biology/biotech students. The results showed that the virtual learning simulation increased knowledge of evolution significantly, compared to the traditional lesson. No significant differences between … the simulation and lesson were found in their ability to increase the non-cognitive measures. Both interventions increased self-efficacy significantly, and neither had a significant effect on motivation. In addition, the results showed that the simulation increased interest in biology related tasks …
International Nuclear Information System (INIS)
Salazar, Ramon B.; Appenzeller, Joerg; Ilatikhameneh, Hesameddin; Rahman, Rajib; Klimeck, Gerhard
2015-01-01
A new compact modeling approach is presented which describes the full current-voltage (I-V) characteristic of high-performance (aggressively scaled-down) tunneling field-effect-transistors (TFETs) based on homojunction direct-bandgap semiconductors. The model is based on an analytic description of two key features, which capture the main physical phenomena related to TFETs: (1) the potential profile from source to channel and (2) the elliptic curvature of the complex bands in the bandgap region. It is proposed to use 1D Poisson's equations in the source and the channel to describe the potential profile in homojunction TFETs. This makes it possible to quantify the impact of source/drain doping on device performance, an aspect usually ignored in TFET modeling but highly relevant in ultra-scaled devices. The compact model is validated by comparison with state-of-the-art quantum transport simulations using a 3D full-band atomistic approach based on non-equilibrium Green's functions. It is shown that the model reproduces with good accuracy the data obtained from the simulations in all regions of operation: the on/off states and the n/p branches of conduction. This approach allows calculation of energy-dependent band-to-band tunneling currents in TFETs, a feature that gives deep insight into the underlying device physics. The simplicity and accuracy of the approach provide a powerful tool to explore in a quantitative manner how a wide variety of parameters (material-, size-, and/or geometry-dependent) impact TFET performance under any bias conditions. The proposed model thus presents a practical complement to computationally expensive simulations such as the 3D NEGF approach.
Multi-wavelength approach towards on-product overlay accuracy and robustness
Bhattacharyya, Kaustuve; Noot, Marc; Chang, Hammer; Liao, Sax; Chang, Ken; Gosali, Benny; Su, Eason; Wang, Cathy; den Boef, Arie; Fouquet, Christophe; Huang, Guo-Tsai; Chen, Kai-Hsiung; Cheng, Kevin; Lin, John
2018-03-01
Success of the diffraction-based overlay (DBO) technique [1,4,5] in the industry rests not just on its good precision and low tool-induced shift, but also on the measurement accuracy [2] and robustness that DBO can provide. Significant effort has been invested to capitalize on the potential that DBO has to address measurement accuracy and robustness. The introduction of many measurement wavelength choices (continuous wavelength) in DBO is one of the key new capabilities in this area. Along with the continuous choice of wavelengths, the algorithms (fueled by swing-curve physics) for how to use these wavelengths are of high importance for a robust recipe setup that can avoid the impact of process stack variations (symmetric as well as asymmetric). All these are discussed. Moreover, another aspect of boosting measurement accuracy and robustness is discussed that deploys the capability to combine overlay measurement data from multiple wavelength measurements. The goal is to provide a method that makes overlay measurements immune to process stack variations and also reports health KPIs for every measurement. By combining measurements from multiple wavelengths, a final overlay measurement is generated. The results show a significant benefit in accuracy and robustness against process stack variation. These results are supported both by measurement data and by simulation from many product stacks.
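One plausible way to combine per-wavelength overlay readings into a single value is an inverse-variance weighted average, where the per-wavelength variance plays the role of a quality KPI. This weighting scheme is an illustrative assumption for the sketch below, not the published combination algorithm.

```python
def combine_overlay(measurements):
    """Combine (overlay_nm, variance) pairs from several wavelengths.
    Noisier wavelengths (larger variance) receive smaller weights, so a
    wavelength degraded by a process-stack swing contributes little."""
    weights = [1.0 / var for _, var in measurements]
    value = sum(w * ov for (ov, _), w in zip(measurements, weights))
    return value / sum(weights)
```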
Preview-based sampling for controlling gaseous simulations
Huang, Ruoguan
2011-01-01
In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.
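The matching idea above, keeping coarse-scale properties of the high-resolution field pinned to the preview, can be sketched in one dimension with an additive block-mean correction; this simple scheme is an assumption for illustration, not the paper's full velocity/scalar matching machinery.

```python
import numpy as np

def match_block_means(high_res, low_res):
    """Additively shift each coarse block of a high-resolution 1-D field
    so its mean equals the corresponding low-resolution sample, while
    keeping the fine-scale detail inside each block intact."""
    ratio = high_res.size // low_res.size
    out = high_res.astype(float).copy()
    for i, target in enumerate(low_res):
        block = slice(i * ratio, (i + 1) * ratio)
        out[block] += target - out[block].mean()
    return out

coarse = np.array([0.0, 1.0])   # preview (guide) samples
fine = np.arange(8.0)           # high-resolution field before matching
matched = match_block_means(fine, coarse)
```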
High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique
International Nuclear Information System (INIS)
Atkinson, I.
1995-01-01
Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
International Nuclear Information System (INIS)
Chakraborty, Brahmananda
2009-01-01
Random numbers play an important role in any Monte Carlo simulation. The accuracy of the results depends on the quality of the sequence of random numbers employed in the simulation: their randomness, the uniformity of their distribution, the absence of correlation, and a long period. In a typical Monte Carlo simulation of particle transport in a nuclear reactor core, the history of a particle from its birth in a fission event until its death by an absorption or leakage event is tracked. The geometry of the core and the surrounding materials are exactly modeled in the simulation. To track a neutron history one needs random numbers for determining the inter-collision distance, the nature of the collision, the direction of the scattered neutron, etc. Neutrons are tracked in batches. In one batch approximately 2000-5000 neutrons are tracked. The statistical accuracy of the results of the simulation depends on the total number of particles tracked (the number of particles in one batch multiplied by the number of batches). The number of histories to be generated is usually large for a typical radiation transport problem. To track a very large number of histories one needs to generate a long sequence of independent random numbers. In other words, the cycle length of the random number generator (RNG) should exceed the total number of random numbers required for simulating the given transport problem. The number of bits of the machine generally limits the cycle length: for a binary machine of p bits the maximum cycle length is 2^p. To achieve a higher cycle length on the same machine one has to use either register arithmetic or bit-manipulation techniques.
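The 2^p cycle-length limit can be illustrated with a tiny linear congruential generator; the parameter values below are chosen for demonstration only.

```python
from math import gcd

def lcg_period(a, c, m, seed=1):
    """Cycle length of x -> (a*x + c) % m starting from seed.
    When gcd(a, m) == 1 the map is a bijection, so the orbit through
    the seed is purely periodic and this loop terminates."""
    assert gcd(a, m) == 1
    x = (a * seed + c) % m
    n = 1
    while x != seed:
        x = (a * x + c) % m
        n += 1
    return n
```

With m = 2**p the Hull-Dobell conditions (c odd, a % 4 == 1) give the full period 2**p, the maximum attainable with a p-bit modulus; violating them (e.g. an even c) shortens the cycle.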
Tests of numerical simulation algorithms for the Kubo oscillator
International Nuclear Information System (INIS)
Fox, R.F.; Roy, R.; Yu, A.W.
1987-01-01
Numerical simulation algorithms for multiplicative noise (white or colored) are tested for accuracy against closed-form expressions for the Kubo oscillator. Direct white noise simulations lead to spurious decay of the modulus of the oscillator amplitude. A straightforward colored noise algorithm greatly reduces this decay and also provides highly accurate results in the white noise limit.
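The spurious modulus drift is easy to reproduce on the Kubo oscillator z' = i(omega + xi(t)) z: a naive Euler step multiplies z by 1 + i*dphi, whose modulus is never exactly 1, so |z| drifts along each trajectory (upward with this particular discretization; the direction depends on how the noise term is discretized), while an exponential phase-rotation update preserves |z| to machine precision. The parameter values are arbitrary.

```python
import cmath
import random

random.seed(42)
OMEGA, SIGMA, DT, STEPS = 1.0, 1.0, 1e-3, 10_000

z_euler = 1.0 + 0.0j   # naive direct discretization
z_exact = 1.0 + 0.0j   # modulus-preserving exponential update
for _ in range(STEPS):
    dw = random.gauss(0.0, DT ** 0.5)        # Wiener increment
    dphi = 1j * (OMEGA * DT + SIGMA * dw)    # purely imaginary phase step
    z_euler *= 1.0 + dphi                    # |1 + i*x| > 1: modulus drifts
    z_exact *= cmath.exp(dphi)               # |exp(i*x)| = 1: modulus kept
```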
Realistic simulations of coaxial atomisation
Zaleski, Stephane; Fuster, Daniel; Arrufat Jackson, Tomas; Ling, Yue; Cenni, Matteo; Scardovelli, Ruben; Tryggvason, Gretar
2015-11-01
We discuss advances in the methodology for Direct Numerical Simulation of coaxial atomization in typical experimental conditions. Such conditions are extremely demanding for the numerical methods. The key difficulty appears to be the combination of high density ratios, surface tension, and large Reynolds numbers. We explore how using a momentum-conserving Volume-Of-Fluid scheme improves the stability and accuracy of the simulations. We show computational evidence that the use of momentum-conserving methods reduces the required number of grid points by an order of magnitude in the simple case of a falling rain drop. We then apply these ideas to coaxial atomization. We show that in moderate-size simulations in air-water conditions close to real experiments, instabilities are still present, and we discuss ways to fix them, among them removing small VOF debris and improving the time-stepping scheme. The accuracy of the simulations is then discussed in comparison with experimental results, in particular the angle of ejection of the structures. The code used for this research is free and distributed at http://parissimulator.sf.net.
Interprofessional education in pharmacology using high-fidelity simulation.
Meyer, Brittney A; Seefeldt, Teresa M; Ngorsuraches, Surachat; Hendrickx, Lori D; Lubeck, Paula M; Farver, Debra K; Heins, Jodi R
2017-11-01
This study examined the feasibility of an interprofessional high-fidelity pharmacology simulation and its impact on pharmacy and nursing students' perceptions of interprofessionalism and pharmacology knowledge. Pharmacy and nursing students participated in a pharmacology simulation using a high-fidelity patient simulator. Faculty-facilitated debriefing included discussion of the case and collaboration. To determine the impact of the activity on students' perceptions of interprofessionalism and their ability to apply pharmacology knowledge, surveys were administered to students before and after the simulation. Attitudes Toward Health Care Teams scale (ATHCT) scores improved from 4.55 to 4.72 on a scale of 1-6 (p = 0.005). Almost all (over 90%) of the students stated their pharmacology knowledge and their ability to apply that knowledge improved following the simulation. A simulation in pharmacology is feasible and favorably affected students' interprofessionalism and pharmacology knowledge perceptions. Pharmacology is a core science course required by multiple health professions in early program curricula, making it favorable for incorporation of interprofessional learning experiences. However, reports of high-fidelity interprofessional simulation in pharmacology courses are limited. This manuscript contributes to the literature in the field of interprofessional education by demonstrating that an interprofessional simulation in pharmacology is feasible and can favorably affect students' perceptions of interprofessionalism. This manuscript provides an example of a pharmacology interprofessional simulation that faculty in other programs can use to build similar educational activities. Copyright © 2017 Elsevier Inc. All rights reserved.
Measurement system with high accuracy for laser beam quality.
Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming
2015-05-20
Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of a mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measurement deviation of the M2 factor is less than 0.6%.
Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.
Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E
2016-02-01
Lacerations to the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, there has been no publication establishing the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created. Twenty-seven extensor tendons were sharply transected. The remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and controls were identified correctly with dynamic imaging as either injury models with a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. Clinically suspected cases of acute extensor tendon injury scanned by high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and
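The study's reported figures follow directly from the standard confusion-matrix definitions. A minimal sketch in Python, using the counts given in the abstract (27 transected tendons, 41 sham controls, all identified correctly under dynamic imaging):

```python
def diagnostic_stats(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Dynamic imaging: all 27 transections and all 41 sham controls correct
sens, spec, acc = diagnostic_stats(tp=27, fn=0, tn=41, fp=0)
```

With these counts all three statistics equal 1.0, matching the 100% sensitivity and specificity reported for dynamic imaging.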
Monte Carlo Simulations of Ultra-High Energy Resolution Gamma Detectors for Nuclear Safeguards
International Nuclear Information System (INIS)
Robles, A.; Drury, O.B.; Friedrich, S.
2009-01-01
Ultra-high energy resolution superconducting gamma-ray detectors can improve the accuracy of non-destructive analysis of unknown radioactive materials. These detectors offer an order of magnitude improvement in resolution over conventional high-purity germanium detectors. The increase in resolution reduces errors from line overlap and allows for the identification of weaker gamma-rays by increasing the magnitude of the peaks above the background. In order to optimize the detector geometry and to understand the spectral response function, Geant4, a Monte Carlo simulation package written in C++, was used to model the detectors. Using a 1 mm³ Sn absorber and a monochromatic gamma source, different absorber geometries were tested. The simulation was expanded to include the Cu block behind the absorber and the four layers of shielding required for detector operation at 0.1 K. The energy spectrum was modeled for an Am-241 and a Cs-137 source, including scattering events in the shielding, and the results were compared to experimental data. For both sources the main spectral features such as the photopeak, the Compton continuum, the escape x-rays and the backscatter peak were identified. Finally, the low-energy response of a Pu-239 source was modeled to assess the feasibility of Pu-239 detection in spent fuel. This modeling of superconducting detectors can serve as a guide to optimize the configuration of future spectrometer designs.
Energy Technology Data Exchange (ETDEWEB)
Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.; Cameron, Bruce M.; Robb, Richard A. [Biomedical Imaging Resource, Mayo Clinic College of Medicine, Rochester, Minnesota 55905 (United States); Kwartowitz, David M. [Department of Bioengineering, Clemson University, Clemson, South Carolina 29634 (United States); Gunawan, Mia [Department of Biochemistry and Molecular and Cellular Biology, Georgetown University, Washington D.C. 20057 (United States); Johnson, Susan B.; Packer, Douglas L. [Division of Cardiovascular Diseases, Mayo Clinic, Rochester, Minnesota 55905 (United States); Dalegrave, Charles [Clinical Cardiac Electrophysiology, Cardiology Division Hospital Sao Paulo, Federal University of Sao Paulo, 04024-002 Brazil (Brazil); Kolasa, Mark W. [David Grant Medical Center, Fairfield, California 94535 (United States)
2014-02-15
Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors and the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved
International Nuclear Information System (INIS)
Yasutaka Sakurai; Takashi Yabe; Tomomasa Ohkubo; Yoichi Ogata; Michitsugu Mori
2005-01-01
Generally, two coordinate systems are used in computational fluid dynamics: curvilinear coordinates and Cartesian coordinates. The former are suitable for describing complex geometries but cannot easily attain high accuracy. The latter can easily increase the accuracy but require a large number of grid points to describe complex geometries. In this paper, we propose a new grid generation method, the Soroban grid, which can treat complex geometries without losing accuracy. Coupling this grid generation method with the CIP method, we gain the flexibility to describe complex geometries without losing (3rd-order) accuracy. Since the Soroban grid is an unstructured grid, the staggered grid cannot be used, and the co-location grid is used instead. Although fluid computation on a co-location grid is usually unstable, we succeeded in calculating multi-phase flow with large density differences by applying the C-CUP method to this grid system. In this paper, we introduce this grid generation method and apply these methods to simulate the steam injector of a power plant. (authors)
Directory of Open Access Journals (Sweden)
Junya Lv
2017-01-01
Full Text Available The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play critical roles in process design and optimization. In this investigation, the true stress-strain data of an Inconel 718 superalloy were obtained from a series of isothermal compression tests conducted in a wide temperature range of 1153–1353 K and strain rate range of 0.01–10 s−1 on a Gleeble 3500 testing machine (DSI, St. Paul, DE, USA). Then the constitutive relationship was modeled by an optimally-constructed and well-trained back-propagation artificial neural network (ANN). The evaluation of the ANN model revealed that it has admirable performance in characterizing and predicting the flow behaviors of the Inconel 718 superalloy. Consequently, the developed ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and construct the continuous mapping relationship for temperature, strain rate, strain and stress. Finally, the constructed ANN was implanted in a finite element solver through the interface of the "URPFLO" subroutine to simulate the isothermal compression tests. The results show that the integration of the finite element method with the ANN model can significantly improve the accuracy of numerical simulations of hot forming processes.
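The back-propagation network at the core of such a constitutive model can be illustrated with a minimal one-hidden-layer sketch. The data below are synthetic stand-ins for the (temperature, strain rate, strain) → stress mapping, and the layer size, learning rate, and epoch count are illustrative choices, not the optimally-constructed architecture of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for (temperature, strain rate, strain) -> stress data
X = rng.random((200, 3))
y = (np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]).reshape(-1, 1)

# One-hidden-layer back-propagation network: tanh hidden units, linear output
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)              # forward pass
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    g_pred = 2 * err / len(X)             # mean-squared-error gradient
    gW2 = H.T @ g_pred; gb2 = g_pred.sum(0)
    g_H = g_pred @ W2.T * (1 - H ** 2)    # back-propagate through tanh
    gW1 = X.T @ g_H; gb1 = g_H.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1        # gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2
```

The training loss decreases monotonically in expectation; a trained network of this form can then be evaluated at arbitrary (temperature, strain rate, strain) points, which is how the paper constructs the continuous mapping beyond the experimental grid.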
Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography
Directory of Open Access Journals (Sweden)
Jonathan Jones
2018-03-01
Full Text Available This paper describes the advantages and enhanced accuracy thermography provides to high temperature mechanical testing. This technique is used not only to monitor but also to control test specimen temperatures, where the infra-red technique enables accurate non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are employed over a 200–800 °C temperature range on pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures, previously unobtainable by conventional thermocouples or even more modern pyrometers, that thermography can deliver. As a result, the speed and accuracy of thermal profiling, thermal gradient measurements and cold/hot spot identification using the technique have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional unknown effects, such as thermocouple shadowing, preferential crack tip heating within an induction coil, and the fundamental response time of individual measurement techniques, which are investigated further.
Safety Assessment of Advanced Imaging Sequences II: Simulations
DEFF Research Database (Denmark)
Jensen, Jørgen Arendt
2016-01-01
... Mechanical Index (MI) and Ispta.3 as required by FDA. The method is performed on four different imaging schemes and compared to measurements conducted using the SARUS experimental scanner. The sequences include focused emissions with an F-number of 2 with 64 elements that generate highly non-linear fields. The simulation time is between 0.67 ms to 2.8 ms per emission and imaging point, making it possible to simulate even complex emission sequences in less than 1 s for a single spatial position. The linear simulations yield a relative accuracy on MI between -12.1% to 52.3% and for Ispta.3 between -38.6% to 62.6%, when using the impulse response of the probe estimated from an independent measurement. The accuracy is increased to between -22% to 24.5% for MI and between -33.2% to 27.0% for Ispta.3, when using the pressure response measured at a single point to scale the simulation. The spatial distribution of MI ...
Chai, Linguo; Cai, Baigen; ShangGuan, Wei; Wang, Jian; Wang, Huashen
2017-08-23
To enhance the realism of Connected and Autonomous Vehicles (CAVs) kinematic simulation scenarios and to guarantee the accuracy and reliability of the verification, a four-layer CAVs kinematic simulation framework, composed of a road network layer, a vehicle operating layer, an uncertainties modelling layer and a demonstrating layer, is proposed in this paper. Properties of the intersections are defined to describe the road network. A target-position-based vehicle position updating method is designed to simulate vehicle behaviors such as lane changing and turning. Vehicle kinematic models are implemented to maintain the status of the vehicles while they move towards the target position. Priorities for individual vehicle control are authorized for different layers. Operation mechanisms of CAVs uncertainties, defined in this paper as position error and communication delay, are implemented in the simulation to enhance its realism. A simulation platform is developed based on the proposed methodology. Simulated and theoretical vehicle delays are compared to prove the validity and credibility of the platform. The scenario of rear-end collision avoidance is conducted to verify the uncertainties operating mechanisms, and a slot-based intersections (SIs) control strategy is realized and verified in the simulation platform to demonstrate the platform's support for CAV kinematic simulation and verification.
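The target-position-based updating idea can be sketched as a constant-speed kinematic step toward the current target, clamped so the vehicle never overshoots. The class and function names below are illustrative, not the paper's API:

```python
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    x: float
    y: float
    speed: float  # magnitude of velocity, units per second

def step_toward(v, target, dt):
    """Advance a vehicle toward its target position for one time step,
    clamping the step so it lands exactly on the target when close enough."""
    dx, dy = target[0] - v.x, target[1] - v.y
    dist = math.hypot(dx, dy)
    step = min(v.speed * dt, dist)
    if dist > 0:
        v.x += step * dx / dist
        v.y += step * dy / dist
    return v
```

Lane changes and turns are then simulated by switching the target position; uncertainty mechanisms such as position error could be layered on by perturbing the updated coordinates.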
Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R
1997-12-01
This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.
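The per-point localization error described here is the Euclidean distance between the known LED positions and the coordinates measured by the neuronavigator, summarized as mean ± standard deviation. A minimal sketch (the array names are illustrative):

```python
import numpy as np

def localization_errors(true_pts, measured_pts):
    """Euclidean distance between each known LED position and its
    measured 3-D coordinate, one error value per point."""
    t = np.asarray(true_pts, dtype=float)
    m = np.asarray(measured_pts, dtype=float)
    return np.linalg.norm(m - t, axis=1)

errs = localization_errors([[0, 0, 0], [10, 0, 0]],
                           [[0.3, 0.4, 0], [10, 0, 0.5]])
summary = (errs.mean(), errs.std())  # report as mean +/- std, as in the study
```

Applied to the roughly 280,000 coordinate measurements of the study, this yields figures such as the 0.55 +/- 0.29 mm maximum localization error quoted above.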
Accuracy and high-speed technique for autoprocessing of Young's fringes
Chen, Wenyi; Tan, Yushan
1991-12-01
In this paper, an accurate and high-speed method for auto-processing of Young's fringes is proposed. A group of 1-D sampled intensity values along three or more different directions is taken from the Young's fringes, and the fringe spacing along each direction is obtained by 1-D FFT. The two directions with the smallest fringe spacings are then selected. The accurate fringe spacings along these two directions are obtained using the orthogonal coherent phase detection technique (OCPD). The actual spacing and angle of the Young's fringes can therefore be calculated. In this paper, the principle of OCPD is introduced in detail. The accuracy of the method is evaluated theoretically and experimentally.
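The coarse FFT stage of this scheme can be sketched as follows: sample a 1-D intensity profile, locate the dominant spectral peak, and invert it to a spacing, then combine two directional spacings into the true fringe spacing and angle. This is only the FFT estimate; the OCPD refinement step is not reproduced here:

```python
import numpy as np

def fringe_spacing_1d(intensity, sample_step=1.0):
    """Estimate fringe spacing from a 1-D intensity profile via the FFT peak."""
    n = len(intensity)
    spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return 1.0 / (k / (n * sample_step))     # spacing = 1 / peak frequency

# Synthetic Young's fringes with wavevector (fx, fy) in cycles per sample
fx, fy = 0.05, 0.02
X, Y = np.meshgrid(np.arange(256), np.arange(256))
pattern = 0.5 * (1 + np.cos(2 * np.pi * (fx * X + fy * Y)))

dx = fringe_spacing_1d(pattern[0, :])        # profile along x
dy = fringe_spacing_1d(pattern[:, 0])        # profile along y
# Combine the two directional spacings into the true spacing and angle
d = 1.0 / np.hypot(1.0 / dx, 1.0 / dy)
angle = np.degrees(np.arctan2(1.0 / dy, 1.0 / dx))
```

The FFT estimate is quantized to the spectral bin width, which is exactly the residual error the OCPD stage of the paper is designed to remove.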
Directory of Open Access Journals (Sweden)
Mark Lyons
2013-06-01
Full Text Available Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and a player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test, with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player's achievement goal indicators. Future research is required to explore the effects of fatigue on
Simulation study of the high intensity S-Band photoinjector
Energy Technology Data Exchange (ETDEWEB)
Zhu, Xiongwei; Nakajima, Kazuhisa [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan)
2001-10-01
In this paper, we report the results of a simulation study of the high-intensity S-band photoinjector. The aim of the simulation study is to transport a high bunch charge with low emittance growth. The simulation results show that a 7 nC bunch with an rms emittance of 22.3 π mm·mrad can be obtained at the exit of the photoinjector. (author)
Kinematics Simulation Analysis of Packaging Robot with Joint Clearance
Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.
2018-03-01
Considering the influence of joint clearance on motion error, repeated positioning accuracy and the overall position of the machine, this paper presents a simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, based on the high precision and high speed that packaging equipment demands. The motion constraint equation of the mechanism is established, and the motion error is analyzed and simulated for the case of revolute joint clearance. The simulation results show that the size of the joint clearance affects the movement accuracy and packaging efficiency of the packaging robot. The analysis provides a reference for packaging equipment design and selection criteria and is of great significance for packaging industry automation.
On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.
Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis
2016-07-01
To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient-specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient-specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2%, with a general TG-43 overestimation except for the spine. These differences are consistent, resulting in statistically significant differences in dose volume histogram and radiobiology indices. Absolute differences in these indices are, however, too small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear, however, to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Computer simulation of high energy displacement cascades
International Nuclear Information System (INIS)
Heinisch, H.L.
1990-01-01
A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)
Energy Technology Data Exchange (ETDEWEB)
Flueck, Alex [Illinois Inst. of Technology, Chicago, IL (United States)
2017-07-14
The “High Fidelity, Faster than Real-Time Simulator for Predicting Power System Dynamic Behavior” was designed and developed by Illinois Institute of Technology with critical contributions from Electrocon International, Argonne National Laboratory, Alstom Grid and McCoy Energy. Also essential to the project were our two utility partners: Commonwealth Edison and AltaLink. The project was a success due to several major breakthroughs in the area of large-scale power system dynamics simulation, including (1) a validated faster-than-real-time simulation of both stable and unstable transient dynamics in a large-scale positive sequence transmission grid model, (2) a three-phase unbalanced simulation platform for modeling new grid devices, such as independently controlled single-phase static var compensators (SVCs), (3) the world’s first high fidelity three-phase unbalanced dynamics and protection simulator based on Electrocon’s CAPE program, and (4) a first-of-its-kind implementation of a single-phase induction motor model with stall capability. The simulator results will aid power grid operators in their true time of need, when there is a significant risk of cascading outages. The simulator will accelerate performance and enhance accuracy of dynamics simulations, enabling operators to maintain reliability and steer clear of blackouts. In the long term, the simulator will form the backbone of the newly conceived hybrid real-time protection and control architecture that will coordinate local controls, wide-area measurements, wide-area controls and advanced real-time prediction capabilities. The nation’s citizens will benefit in several ways, including (1) less down time from power outages due to the faster-than-real-time simulator’s predictive capability, (2) higher levels of reliability due to the detailed dynamics plus protection simulation capability, and (3) more resiliency due to the three-phase unbalanced simulator’s ability to
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to obtain N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is applied to these slice images to extract the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
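The depth-from-focus step can be sketched as follows. One common form of the sum-modulus-difference focus metric sums absolute differences between neighboring pixels; the exact variant used in the paper may differ, and the function names are illustrative:

```python
import numpy as np

def smd(img):
    """Sum-modulus-difference focus metric (one common definition):
    larger values indicate a sharper, better-focused slice."""
    dy = np.abs(np.diff(img, axis=0)).sum()
    dx = np.abs(np.diff(img, axis=1)).sum()
    return dx + dy

def estimate_depth(slices, depths):
    """Pick the reconstruction depth whose slice image maximizes the metric."""
    scores = [smd(s) for s in slices]
    return depths[int(np.argmax(scores))]
```

In the paper's pipeline, the slice images come from computational integral imaging reconstruction at a sweep of candidate depths; the metric then selects the depth at which scene content comes into focus.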
Very high-accuracy calibration of radiation pattern and gain of a near-field probe
DEFF Research Database (Denmark)
Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav
2014-01-01
In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
High-fidelity simulation among bachelor students in simulation groups and use of different roles.
Thidemann, Inger-Johanne; Söderhamn, Olle
2013-12-01
Cost limitations might challenge the use of high-fidelity simulation as a teaching-learning method. This article presents the results of a Norwegian project including two simulation studies in which simulation teaching and learning were studied among students in the second year of a three-year bachelor nursing programme. The students were organised into small simulation groups with different roles: nurse, physician, family member and observer. Based on their experiences in the different roles, the students evaluated the simulation design characteristics and educational practices used in the simulation. In addition, three simulation outcomes were measured: knowledge (learning), Student Satisfaction and Self-confidence in Learning. The simulation was evaluated to be a valuable teaching-learning method for developing professional understanding and insight independent of role. Overall, the students rated Student Satisfaction and Self-confidence in Learning as high. Knowledge about the specific patient focus increased after the simulation activity. Students can develop practical, communication and collaboration skills through experiencing the nurse's role. Assuming the observer role, students have the potential for vicarious learning, which could increase the learning value. Both methods of learning (practical experience or vicarious learning) may bridge the gap between theory and practice and contribute to the development of skills in reflective and critical thinking. Copyright © 2012 Elsevier Ltd. All rights reserved.
Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states
Directory of Open Access Journals (Sweden)
Grünewald Stefan
2011-01-01
accuracies on 1000 simulated Yule trees also exhibit similar behaviors. For comb-shaped trees, the limiting reconstruction accuracies of using all taxa are always less than or equal to those of using the nearest root-to-leaf path when the conservation probability is not less than 1/N. As a result, more taxa are suggested for ancestral reconstruction when the tree topology is balanced and the sequences are highly similar, and a few taxa close to the root are recommended otherwise.
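The Fitch small-parsimony pass whose reconstruction accuracy is analyzed here can be sketched for a toy binary tree. The nested-tuple tree encoding is an illustrative choice, not part of the paper:

```python
def fitch(tree, leaf_states):
    """Bottom-up Fitch pass: returns the candidate ancestral-state set at
    the root. An internal node takes the intersection of its children's
    sets when it is non-empty, otherwise the union (costing one mutation)."""
    if isinstance(tree, str):                 # leaf: singleton state set
        return {leaf_states[tree]}
    left = fitch(tree[0], leaf_states)
    right = fitch(tree[1], leaf_states)
    inter = left & right
    return inter if inter else left | right

# Toy 4-taxon balanced tree: three leaves share state "A", one has "G"
root_set = fitch((("a", "b"), ("c", "d")),
                 {"a": "A", "b": "A", "c": "A", "d": "G"})
```

On this toy input the majority state "A" survives to the root, illustrating why reconstruction accuracy depends on both topology and how conserved the leaf states are.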
Ritsch, Elmar; Froidevaux, Daniel; Salzburger, Andreas
One of the cornerstones for the success of the ATLAS experiment at the Large Hadron Collider (LHC) is a very accurate Monte Carlo detector simulation. However, a limit is being reached regarding the amount of simulated data which can be produced and stored with the computing resources available through the worldwide LHC computing grid (WLCG). The Integrated Simulation Framework (ISF) is a novel approach to detector simulation which enables a more efficient use of these computing resources and thus allows for the generation of more simulated data. Various simulation technologies are combined to allow for faster simulation approaches which are targeted at the specific needs of individual physics studies. Costly full simulation technologies are only used where high accuracy is required by physics analyses and fast simulation technologies are applied everywhere else. As one of the first applications of the ISF, a new combined simulation approach is developed for the generation of detector calibration samples ...
Meditation experience predicts introspective accuracy.
Directory of Open Access Journals (Sweden)
Kieran C R Fox
Full Text Available The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.
High-speed LWR transients simulation for optimizing emergency response
International Nuclear Information System (INIS)
Wulff, W.; Cheng, H.S.; Lekach, S.V.; Mallen, A.N.; Stritar, A.
1984-01-01
The purpose of computer-assisted emergency response in nuclear power plants, and the requirements for achieving such a response, are presented. An important requirement is the attainment of realistic high-speed plant simulations at the reactor site. Currently pursued development programs for plant simulations are reviewed. Five modeling principles are established and a criterion is presented for selecting numerical procedures and efficient computer hardware to achieve high-speed simulations. A newly developed technology for high-speed power plant simulation is described and results are presented. It is shown that simulation speeds ten times greater than real-time process-speeds are possible, and that plant instrumentation can be made part of the computational loop in a small, on-site minicomputer. Additional technical issues are presented which must still be resolved before the newly developed technology can be implemented in a nuclear power plant
The fusion of satellite and UAV data: simulation of high spatial resolution band
Jenerowicz, Agnieszka; Siok, Katarzyna; Woroszkiewicz, Malgorzata; Orych, Agata
2017-10-01
Remote sensing techniques used in precision agriculture and farming that apply imagery data obtained with sensors mounted on UAV platforms have become more popular in the last few years due to the availability of low-cost UAV platforms and low-cost sensors. Data obtained from low altitudes with low-cost sensors can be characterised by high spatial and radiometric resolution but quite low spectral resolution; therefore the application of imagery data obtained with such technology is quite limited and can be used only for basic land cover classification. To enrich the spectral resolution of imagery data acquired with low-cost sensors from low altitudes, the authors proposed the fusion of RGB data obtained with a UAV platform with multispectral satellite imagery. The fusion is based on the pansharpening process, which aims to integrate the spatial details of the high-resolution panchromatic image with the spectral information of lower-resolution multispectral or hyperspectral imagery to obtain multispectral or hyperspectral images with high spatial resolution. The key to pansharpening is to properly estimate the missing spatial details of the multispectral images while preserving their spectral properties. In the research, the authors presented the fusion of RGB images (with high spatial resolution) obtained with sensors mounted on low-cost UAV platforms and multispectral imagery from satellite sensors, i.e. Landsat 8 OLI. To perform the fusion of UAV data with satellite imagery, panchromatic bands were simulated from the RGB data as a linear combination of the spectral channels. Next, for the simulated bands and multispectral satellite images, the Gram-Schmidt pansharpening method was applied. As a result of the fusion, the authors obtained several multispectral images with very high spatial resolution and then analysed the spatial and spectral accuracies of the processed images.
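The panchromatic-band simulation step described above can be sketched as a weighted linear combination of the R, G and B channels. The weights below are illustrative assumptions, not the coefficients used in the study:

```python
import numpy as np

def simulate_pan_band(rgb, weights=(0.3, 0.4, 0.3)):
    """Simulate a panchromatic band as a linear combination of R, G, B.

    rgb: (H, W, 3) float array. The weights are hypothetical and are
    normalised so the simulated band stays in the input value range.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(rgb, w, axes=([2], [0]))

# toy 2x2 RGB patch
rgb = np.array([[[0.2, 0.4, 0.6], [1.0, 1.0, 1.0]],
                [[0.0, 0.0, 0.0], [0.3, 0.3, 0.3]]])
pan = simulate_pan_band(rgb)
```

The simulated band would then play the role of the high-resolution panchromatic input to a pansharpening routine such as Gram-Schmidt.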
Impact of a highly detailed emission inventory on modeling accuracy
Taghavi, M.; Cautenet, S.; Arteta, J.
2005-03-01
During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) have been simulated. The comprehensive Regional Atmospheric Modeling System (RAMS) model, version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over Southern France. This online method takes advantage of a parallel code and of the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre Inventory (EPI) and the Main Emission Inventory (MEI). The latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at the regional scale. We show that the MEI inventory is more efficient than the EPI in retrieving the redistribution of chemical species in space (three-dimensional) and time. At surface stations, MEI is superior especially for primary species, like nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI have a large uncertainty. To obtain a realistic geographical distribution of pollutants and a good order of magnitude in ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.
From journal to headline: the accuracy of climate science news in Danish high quality newspapers
DEFF Research Database (Denmark)
Vestergård, Gunver Lystbæk
2011-01-01
analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies, though the majority was found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...
International Nuclear Information System (INIS)
Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo
2014-01-01
To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
Energy Technology Data Exchange (ETDEWEB)
Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)
2014-07-01
To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
Garner, Andrew J. P.; Liu, Qing; Thompson, Jayne; Vedral, Vlatko; Gu, Mile
2017-10-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart.
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
High-dimensional feature spaces generally degrade classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
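A binary-encoded genetic algorithm for feature masking can be sketched as follows. This is a minimal illustration of the general idea, not the authors' implementation: the nearest-centroid fitness function, the synthetic data, and all GA parameters are assumptions.

```python
import numpy as np

def fitness(mask, X, y):
    """Accuracy of a nearest-centroid classifier restricted to the
    features selected by the boolean mask."""
    if not mask.any():
        return 0.0
    Xm = X[:, mask]
    cents = {c: Xm[y == c].mean(axis=0) for c in np.unique(y)}
    pred = np.array([min(cents, key=lambda c: np.linalg.norm(s - cents[c]))
                     for s in Xm])
    return float(np.mean(pred == y))

def gene_masking(X, y, pop_size=20, gens=30, seed=0):
    """Binary-encoded GA: each individual is a feature mask; elitist
    selection, one-point crossover, and bit-flip mutation."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in pop])
        pop = pop[np.argsort(scores)[::-1]]          # best masks first
        for i in range(pop_size // 2, pop_size):     # refill the worst half
            a, b = pop[rng.integers(pop_size // 2, size=2)]
            cut = int(rng.integers(1, n))
            child = np.concatenate([a[:cut], b[cut:]])
            pop[i] = child ^ (rng.random(n) < 0.05)  # bit-flip mutation
    scores = np.array([fitness(m, X, y) for m in pop])
    return pop[int(np.argmax(scores))]

# synthetic data: feature 0 carries the class signal, the rest are noise
rng = np.random.default_rng(42)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 11))
X[:, 0] += 5.0 * y
best_mask = gene_masking(X, y)
```

In a real setting the fitness function would wrap whatever classifier is being trained, so the mask search and the training phase share one loop, as the abstract describes.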
Issenberg, S Barry; McGaghie, William C; Petrusa, Emil R; Lee Gordon, David; Scalese, Ross J
2005-01-01
1969 to 2003, 34 years. Simulations are now in widespread use in medical education and medical personnel evaluation. Outcomes research on the use and effectiveness of simulation technology in medical education is scattered, inconsistent and varies widely in methodological rigor and substantive focus. Review and synthesize existing evidence in educational science that addresses the question, 'What are the features and uses of high-fidelity medical simulations that lead to most effective learning?'. The search covered five literature databases (ERIC, MEDLINE, PsycINFO, Web of Science and Timelit) and employed 91 single search terms and concepts and their Boolean combinations. Hand searching, Internet searches and attention to the 'grey literature' were also used. The aim was to perform the most thorough literature search possible of peer-reviewed publications and reports in the unpublished literature that have been judged for academic quality. Four screening criteria were used to reduce the initial pool of 670 journal articles to a focused set of 109 studies: (a) elimination of review articles in favor of empirical studies; (b) use of a simulator as an educational assessment or intervention with learner outcomes measured quantitatively; (c) comparative research, either experimental or quasi-experimental; and (d) research that involves simulation as an educational intervention. Data were extracted systematically from the 109 eligible journal articles by independent coders. Each coder used a standardized data extraction protocol. Qualitative data synthesis and tabular presentation of research methods and outcomes were used. Heterogeneity of research designs, educational interventions, outcome measures and timeframe precluded data synthesis using meta-analysis. Coding accuracy for features of the journal articles is high. The extant quality of the published research is generally weak. The weight of the best available evidence suggests that high-fidelity medical
Methods Research about Accuracy Loss Tracing of Dynamic Measurement System Based on WNN
International Nuclear Information System (INIS)
Lin, S-W; Fei, Y T; Jiang, M L; Tsai, C-Y; Cheng Hsinyu
2006-01-01
The paper presents a method of obtaining the accuracy loss of a dynamic measurement system according to the change of errors in different periods of the system. A WNN, used to trace the accuracy loss of the dynamic measurement system, attributes the total precision loss during a certain period to every part of the system, so that the accuracy loss of every part can be obtained; retaining the accuracy and optimizing the design of the system thus become possible. Tracing the accuracy loss of a simulated system is taken as an example to verify the method.
Application of Nuclear Power Plant Simulator for High School Student Training
Energy Technology Data Exchange (ETDEWEB)
Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)
2014-10-15
In this context, two lectures on a nuclear power plant simulator and practical training were provided to high school students in 2014. The education contents were composed of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but considers the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were performed at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large-break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest in and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education helped them develop an interest in nuclear power plants.
Application of Nuclear Power Plant Simulator for High School Student Training
International Nuclear Information System (INIS)
Kong, Chi Dong; Choi, Soo Young; Park, Min Young; Lee, Duck Jung
2014-01-01
In this context, two lectures on a nuclear power plant simulator and practical training were provided to high school students in 2014. The education contents were composed of two parts: the micro-physics simulator and the macro-physics simulator. The micro-physics simulator treats only in-core phenomena, whereas the macro-physics simulator describes the whole system of a nuclear power plant but considers the reactor core as a point. The high school students showed strong interest because they operated the simulation by themselves. This abstract reports the training details and an evaluation of the effectiveness of the training. Lectures on the nuclear power plant simulator and practical exercises were performed at Ulsan Energy High School and Ulsan Meister High School. Two simulators were used: the macro- and micro-physics simulators. Using the macro-physics simulator, the following five simulations were performed: reactor power increase/decrease, reactor trip, single reactor coolant pump trip, large-break loss of coolant accident, and station black-out with D.C. power loss. Using the micro-physics simulator, the following three analyses were performed: transient analysis, fuel rod performance analysis, and thermal-hydraulics analysis. The students at both high schools showed interest in and strong support for the simulator-based training. After the training, the students responded enthusiastically that the education helped them develop an interest in nuclear power plants.
A High-Speed Train Operation Plan Inspection Simulation Model
Directory of Open Access Journals (Sweden)
Yang Rui
2018-01-01
Full Text Available We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, trains were regarded as tokens, and lines and stations were regarded as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed trains were realized by customizing a variety of transitions. The model was built based on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly and the simulation can be run according to the different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan-Guangzhou high-speed line. The results showed that the proposed model is feasible and that the simulation results agree well with reality; the tool can not only test the feasibility of a high-speed train operation plan, but also serve as a support model for developing a simulation platform with more capabilities.
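The token/place/transition mapping described above can be illustrated with a deliberately tiny sketch. The class names, station names, and fixed delay are hypothetical stand-ins, not the authors' component library: a train token advances from place to place only when the guarding transition finds capacity at the target place.

```python
# Minimal Petri-net-style sketch: a train token moves along a chain of
# places (stations) via transitions that check capacity before firing.

class Place:
    def __init__(self, name, capacity=1):
        self.name, self.capacity, self.tokens = name, capacity, []

class Transition:
    def __init__(self, src, dst, delay):
        self.src, self.dst, self.delay = src, dst, delay

    def fire(self, clock):
        """Move one token src -> dst if dst has room; advance the clock
        by the traversal delay, else leave the state unchanged."""
        if self.src.tokens and len(self.dst.tokens) < self.dst.capacity:
            self.dst.tokens.append(self.src.tokens.pop(0))
            return clock + self.delay
        return clock

stations = [Place("Wuhan"), Place("Changsha"), Place("Guangzhou")]
hops = [Transition(stations[i], stations[i + 1], delay=90) for i in range(2)]
stations[0].tokens.append("G1001")   # one train token at the origin

clock = 0
for hop in hops:
    clock = hop.fire(clock)
```

A full inspection model would add timetable-driven enabling conditions and stochastic disturbance to the transitions; the capacity check above is what prevents two train tokens from occupying the same block.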
High-accuracy critical exponents for O(N) hierarchical 3D sigma models
International Nuclear Information System (INIS)
Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.
2006-01-01
The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, to at least 4 significant digits.
MUMAX: A new high-performance micromagnetic simulation tool
International Nuclear Information System (INIS)
Vansteenkiste, A.; Van de Wiele, B.
2011-01-01
We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations. In that case speedups of over a factor of 100 can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software. It can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. - Highlights: → Novel, open-source micromagnetic simulator on GPU hardware. → Speedup of ≈100× compared to other widely used tools. → Extensively validated against standard problems. → Makes previously infeasible simulations accessible.
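The Landau-Lifshitz dynamics that such solvers integrate can be sketched for a single macrospin in a uniform effective field. This is a toy illustration with assumed dimensionless units and a plain explicit Euler step, not MUMAX's numerical scheme, and it ignores the magnetostatic, exchange, anisotropy, thermal and spin-torque terms the abstract lists:

```python
import numpy as np

def llg_step(m, H, dt, gamma=1.0, alpha=0.1):
    """One explicit Euler step of the dimensionless Landau-Lifshitz equation
    with damping; the unit magnetization m is renormalised each step."""
    prec = -gamma * np.cross(m, H)                       # precession about H
    damp = -alpha * gamma * np.cross(m, np.cross(m, H))  # relaxation toward H
    m = m + dt * (prec + damp)
    return m / np.linalg.norm(m)

m = np.array([1.0, 0.0, 0.1])
m /= np.linalg.norm(m)
H = np.array([0.0, 0.0, 1.0])        # uniform effective field along z
for _ in range(5000):
    m = llg_step(m, H, dt=0.01)
# the magnetization spirals in and aligns with the field direction
```

A finite-difference micromagnetic code evaluates a spatially varying effective field on every cell of the mesh and applies this kind of update cell by cell, which is why the problem maps so well onto GPUs.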
The latest full-scale PWR simulator in Japan
International Nuclear Information System (INIS)
Nishimuru, Y.; Tagi, H.; Nakabayashi, T.
2004-01-01
The latest MHI full-scale simulator has an excellent system configuration, in both flexibility and extensibility, and achieves highly sophisticated PWR simulation performance through the adoption of the CANAC-II and PRETTY codes. It also has instructive features to display the plant's internal status, such as the RCS condition, through animation. Further, the simulation has been verified, after evaluation of its accuracy, against a functional examination at the model plant and against scale-model test results for a two-phase flow event. Thus, the simulator can be devoted to sophisticated and broad training courses on PWR operation. (author)
Bottenberg, Michelle M; Bryant, Ginelle A; Haack, Sally L; North, Andrew M
2013-06-12
To compare student accuracy in measuring normal and high blood pressures using a simulator arm. In this prospective, single-blind study involving third-year pharmacy students, simulator arms were programmed with prespecified normal and high blood pressures. Students measured preset normal and high diastolic and systolic blood pressures using a crossover design. One hundred sixteen students completed both blood pressure measurements. There was a significant difference between the accuracy of high systolic blood pressure (HSBP) measurement and normal systolic blood pressure (NSBP) measurement (mean HSBP difference 8.4 ± 10.9 mmHg vs NSBP 3.6 ± 6.4 mmHg; p < 0.05). There was no significant difference between the accuracy of high diastolic blood pressure (HDBP) measurement and normal diastolic blood pressure (NDBP) measurement (mean HDBP difference 6.8 ± 9.6 mmHg vs. mean NDBP difference 4.6 ± 4.5 mmHg; p = 0.089). Pharmacy students may need additional instruction and experience with taking high blood pressure measurements to ensure they are able to accurately assess this important vital sign.
Bryant, Ginelle A.; Haack, Sally L.; North, Andrew M.
2013-01-01
Objective. To compare student accuracy in measuring normal and high blood pressures using a simulator arm. Methods. In this prospective, single-blind study involving third-year pharmacy students, simulator arms were programmed with prespecified normal and high blood pressures. Students measured preset normal and high diastolic and systolic blood pressures using a crossover design. Results. One hundred sixteen students completed both blood pressure measurements. There was a significant difference between the accuracy of high systolic blood pressure (HSBP) measurement and normal systolic blood pressure (NSBP) measurement (mean HSBP difference 8.4 ± 10.9 mmHg vs NSBP 3.6 ± 6.4 mmHg; p < 0.05). There was no significant difference between the accuracy of high diastolic blood pressure (HDBP) measurement and normal diastolic blood pressure (NDBP) measurement (mean HDBP difference 6.8 ± 9.6 mmHg vs. mean NDBP difference 4.6 ± 4.5 mmHg; p = 0.089). Conclusions. Pharmacy students may need additional instruction and experience with taking high blood pressure measurements to ensure they are able to accurately assess this important vital sign. PMID:23788809
International Nuclear Information System (INIS)
Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu
2016-01-01
A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. The algorithm is made time-reversal symmetric, and its order of accuracy can be increased to any order by using a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the time-dependent field case by introducing the extended phase space. Numerical tests showing the performance of the algorithm are presented. One is pure cyclotron motion for a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
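The flavor of an explicit, time-reversal-symmetric splitting integrator can be seen in the standard Boris push for the cyclotron test case. Note this is the classic Boris scheme, not the authors' Poisson-tensor decomposition or their higher-order recurrence; units are dimensionless assumptions:

```python
import numpy as np

def boris_step(x, v, dt, qm, E, B):
    """One Boris step: half electric kick, magnetic rotation, half electric
    kick, then a position drift. The rotation preserves |v| exactly, so the
    scheme conserves kinetic energy when E = 0, and it is time-reversal
    symmetric."""
    v_minus = v + 0.5 * dt * qm * E
    t = 0.5 * dt * qm * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * dt * qm * E
    return x + dt * v_new, v_new

# pure cyclotron motion: uniform B along z, no electric field
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
E = np.zeros(3)
for _ in range(20000):
    x, v = boris_step(x, v, dt=0.05, qm=1.0, E=E, B=B)
speed_drift = abs(np.linalg.norm(v) - 1.0)
```

Long-time boundedness of the energy and orbit, rather than small one-step error, is what makes such structure-preserving schemes attractive for the long cyclotron runs the abstract mentions.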
Accuracy improvement of a hybrid robot for ITER application using POE modeling method
International Nuclear Information System (INIS)
Wang, Yongbo; Wu, Huapeng; Handroos, Heikki
2013-01-01
Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembly and repair tasks of the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because the error model is highly nonlinear and too many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as the given external measurement device.
Accuracy improvement of a hybrid robot for ITER application using POE modeling method
Energy Technology Data Exchange (ETDEWEB)
Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)
2013-10-15
Highlights: ► The product of exponentials (POE) formula for error modeling of a hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembly and repair tasks of the vacuum vessel (VV) of the International Thermonuclear Experimental Reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because the error model is highly nonlinear and too many error parameters need to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as the given external measurement device.
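Differential Evolution itself is compact enough to sketch. The snippet below is an illustrative DE/rand/1/bin minimiser with a toy least-squares stand-in for the calibration residual; the objective, bounds, and control parameters (F, CR, population size) are assumptions, not values from the paper:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, gens=200, F=0.7, CR=0.9,
                           seed=1):
    """Minimal DE/rand/1/bin minimiser (a sketch, not the authors' code)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    P = lo + rng.random((pop_size, lo.size)) * (hi - lo)
    cost = np.array([f(p) for p in P])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = P[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # rand/1 mutation
            cross = rng.random(lo.size) < CR               # binomial crossover
            trial = np.where(cross, mutant, P[i])
            f_trial = f(trial)
            if f_trial < cost[i]:                          # greedy selection
                P[i], cost[i] = trial, f_trial
    best = int(np.argmin(cost))
    return P[best], float(cost[best])

# toy stand-in for the calibration problem: recover a hidden parameter-error
# vector by minimising a least-squares residual
true_err = np.array([0.3, -0.2, 0.1])
def residual(p):
    return float(np.sum((p - true_err) ** 2))

best_p, best_cost = differential_evolution(residual, bounds=[(-1, 1)] * 3)
```

In the calibration setting the residual would instead measure the mismatch between the POE-predicted and measured end-effector poses over many configurations, which is what makes a derivative-free global optimizer attractive.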
Cavitation simulation on marine propellers
DEFF Research Database (Denmark)
Shin, Keun Woo
Cavitation on marine propellers causes thrust breakdown, noise, vibration and erosion. The increasing demand for high-efficiency propellers makes it difficult to avoid the occurrence of cavitation. Currently, practical analysis of propeller cavitation depends on cavitation tunnel test, empirical...... criteria and inviscid flow method, but a series of model tests is costly and the other two methods have low accuracy. Nowadays, computational fluid dynamics by using a viscous flow solver is common for practical industrial applications in many disciplines. Cavitation models in viscous flow solvers have been...... hydrofoils and conventional/highly-skewed propellers are performed with one of three cavitation models proven in 2D analysis. 3D cases also show the accuracy and robustness of the numerical method in simulating steady and unsteady sheet cavitation on complicated geometries. Hydrodynamic characteristics of cavitation...
Airfoil noise computation use high-order schemes
DEFF Research Database (Denmark)
Zhu, Wei Jun; Shen, Wen Zhong; Sørensen, Jens Nørkær
2007-01-01
High-order finite difference schemes with at least 4th-order spatial accuracy are used to simulate aerodynamically generated noise. The aeroacoustic solver with 4th-order up to 8th-order accuracy is implemented into the in-house flow solver, EllipSys2D/3D. Dispersion-Relation-Preserving (DRP) fin...
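A plain 4th-order central stencil illustrates the spatial accuracy the abstract refers to; note that true DRP schemes tune the stencil coefficients to minimise dispersion error rather than formal truncation order, so the standard coefficients below are only an illustrative baseline:

```python
import numpy as np

def ddx4(f, h):
    """Fourth-order central difference of a periodic sample array:
    f'(x) ~ (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)."""
    return (np.roll(f, 2) - 8 * np.roll(f, 1)
            + 8 * np.roll(f, -1) - np.roll(f, -2)) / (12 * h)

def max_error(n):
    """Worst-case error of d/dx sin(x) = cos(x) on an n-point periodic grid."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    return np.max(np.abs(ddx4(np.sin(x), h) - np.cos(x)))

e64, e128 = max_error(64), max_error(128)   # halving h cuts the error ~16x
```

The roughly sixteen-fold error drop when the grid spacing is halved is the practical signature of 4th-order spatial accuracy.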
Very high-resolution regional climate simulations over Scandinavia-present climate
DEFF Research Database (Denmark)
Christensen, Ole B.; Christensen, Jens H.; Machenhauer, Bennert
1998-01-01
realistically simulated. It is found in particular that in mountainous regions the high-resolution simulation shows improvements in the simulation of hydrologically relevant fields such as runoff and snow cover. Also, the distribution of precipitation on different intensity classes is most realistically...... on a high-density station network for the Scandinavian countries compiled for the present study. The simulated runoff is compared with observed data from Sweden extracted from a Swedish climatological atlas. These runoff data indicate that the precipitation analyses are underestimating the true...... simulated in the high-resolution simulation. It does, however, inherit certain large-scale systematic errors from the driving GCM. In many cases these errors increase with increasing resolution. Model verification of near-surface temperature and precipitation is made using a new gridded climatology based...
Rapp, Richard H.
1993-01-01
The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences yielding information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360, based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.
Erkaya, Yunus
The number of solar photovoltaic (PV) installations is growing exponentially, and to improve the energy yield and the efficiency of PV systems, it is necessary to have correct methods for simulation, measurement, and emulation. PV systems can be simulated using PV models for different configurations and technologies of PV modules. Additionally, different environmental conditions of solar irradiance, temperature, and partial shading can be incorporated in the model to accurately simulate PV systems for any given condition. The electrical measurement of PV systems both prior to and after making electrical connections is important for attaining high efficiency and reliability. Measuring PV modules using a current-voltage (I-V) curve tracer allows the installer to know whether the PV modules are 100% operational. The installed modules can be properly matched to maximize performance. Once installed, the whole system needs to be characterized similarly to detect mismatches, partial shading, or installation damage before energizing the system. This will prevent any reliability issues from the onset and ensure the system efficiency will remain high. A capacitive load is implemented in making I-V curve measurements with the goal of minimizing the curve tracer volume and cost. Additionally, the increase of measurement resolution and accuracy is possible via the use of accurate voltage and current measurement methods and accurate PV models to translate the curves to standard testing conditions. A move from mechanical relays to solid-state MOSFETs improved system reliability while significantly reducing device volume and costs. Finally, emulating PV modules is necessary for testing electrical components of a PV system. PV emulation simplifies and standardizes the tests allowing for different irradiance, temperature and partial shading levels to be easily tested. Proper emulation of PV modules requires an accurate and mathematically simple PV model that incorporates all known
The analysis of the neutron flux of n_TOF (in EAR1) revealed an anomaly in the 10-30 keV neutron energy range. While the flux extracted on the basis of the $^{6}$Li(n,t)$^{4}$He and $^{10}$B(n,$\alpha$)$^{7}$Li reactions mostly agreed with each other and with the results of FLUKA simulations of the neutron beam, the one based on the $^{235}$U(n,f) reaction was found to be systematically lower, independently of the detection system used. A possible explanation is that the $^{235}$U(n,f) cross-section in that energy region, which in principle should be known with an uncertainty of 1%, may be systematically overestimated. Such a finding, which has a negligible influence on thermal reactors, would be important for future fast critical or subcritical reactors. Furthermore, its interest is more general, since the $^{235}$U(n,f) reaction is often used at that energy to determine the neutron flux, or as a reference in measurements of fission cross-sections of other actinides. We propose to perform a high-accuracy, high-r...
An output amplitude configurable wideband automatic gain control with high gain step accuracy
International Nuclear Information System (INIS)
He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan
2012-01-01
An output amplitude configurable wideband automatic gain control (AGC) with high gain step accuracy for GNSS receivers is presented. The output amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, composed of the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. It shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB, provides a 3 dB bandwidth larger than 80 MHz, draws less than 1.8 mA, and occupies a die area of 800 × 300 μm². (semiconductor integrated circuits)
RELAP5: Applications to high fidelity simulation
International Nuclear Information System (INIS)
Johnsen, G.W.; Chen, Y.S.
1988-01-01
RELAP5 is a pressurized water reactor system transient simulation code for use in nuclear power plant safety analysis. The latest version, MOD2, may be used to simulate and study a wide variety of abnormal events, including loss-of-coolant accidents, operational transients, and transients in which the entire secondary system must be modeled. In this paper, a basic overview of the code is given, its assessment and application are illustrated, and progress toward its use as a high-fidelity simulator is described. 7 refs., 7 figs.
High accuracy magnetic field mapping of the LEP spectrometer magnet
Roncarolo, F
2000-01-01
The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to measure with high accuracy the masses of the electro-weak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations once they reach an energy high enough for the experimental purposes. During head-on collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. In this condition the energy is quickly converted into other particles which tend to move away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi-purpose detectors covering a solid angle of alm...
Accuracy optimization with wavelength tunability in overlay imaging technology
Lee, Honggoo; Kang, Yoonshik; Han, Sangjoon; Shim, Kyuchan; Hong, Minhyung; Kim, Seungyoung; Lee, Jieun; Lee, Dongyoung; Oh, Eungryong; Choi, Ahlin; Kim, Youngsik; Marciano, Tal; Klein, Dana; Hajaj, Eitan M.; Aharon, Sharon; Ben-Dov, Guy; Lilach, Saltoun; Serero, Dan; Golotsvan, Anna
2018-03-01
As semiconductor manufacturing technology progresses and the dimensions of integrated circuit elements shrink, overlay budget is accordingly being reduced. Overlay budget closely approaches the scale of measurement inaccuracies due to both optical imperfections of the measurement system and the interaction of light with geometrical asymmetries of the measured targets. Measurement inaccuracies can no longer be ignored due to their significant effect on the resulting device yield. In this paper we investigate a new approach for imaging based overlay (IBO) measurements by optimizing accuracy rather than contrast precision, including its effect over the total target performance, using wavelength tunable overlay imaging metrology. We present new accuracy metrics based on theoretical development and present their quality in identifying the measurement accuracy when compared to CD-SEM overlay measurements. The paper presents the theoretical considerations and simulation work, as well as measurement data, for which tunability combined with the new accuracy metrics is shown to improve accuracy performance.
Determination of UAV position using high accuracy navigation platform
Directory of Open Access Journals (Sweden)
Ireneusz Kubicki
2016-07-01
Full Text Available The choice of a navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on it requires highly precise information about the object's position. The presented exemplary solution of such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices, or with a complete navigation system. The position and spatial orientation errors of the measurement platform influence the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deterioration or loss of images from SAR. Consequently, it is necessary to perform operations to reduce or eliminate the impact of sensor errors on the UAV position accuracy. Compromise solutions must be sought between newer, better technologies and improvements in software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration
Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele
2018-03-01
In the aeronautical and automotive industries, the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and usually significant restrictions to joint modelling are introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we tested the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap joint specimen with a riveted joint is simulated numerically and compared to experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction, combining high accuracy with a low degrees-of-freedom contribution. In this paper the performance of the Rivet Element is compared to that of a non-linear FE model of the rivet, built with solid elements and a dense mesh, and to experimental data. The promising results indicate that the Rivet Element can simulate, with great accuracy, actual structures with several rivet connections.
A clinical study of lung cancer dose calculation accuracy with Monte Carlo simulation.
Zhao, Yanqun; Qi, Guohai; Yin, Gang; Wang, Xianliang; Wang, Pei; Li, Jian; Xiao, Mingyong; Li, Jie; Kang, Shengwei; Liao, Xiongfei
2014-12-16
The accuracy of dose calculation is crucial to the quality of treatment planning and, consequently, to the dose delivered to patients undergoing radiation therapy. Current general calculation algorithms such as Pencil Beam Convolution (PBC) and Collapsed Cone Convolution (CCC) have shortcomings in regard to severe inhomogeneities, particularly in those regions where charged particle equilibrium does not hold. The aim of this study was to evaluate the accuracy of the PBC and CCC algorithms in lung cancer radiotherapy using Monte Carlo (MC) technology. Four treatment plans were designed using the Oncentra Masterplan TPS for each patient: two intensity-modulated radiation therapy (IMRT) plans developed using the PBC and CCC algorithms, and two three-dimensional conformal therapy (3DCRT) plans developed using the PBC and CCC algorithms. The DICOM-RT files of the treatment plans were exported to the Monte Carlo system for recalculation. The dose distributions of the GTV, PTV and ipsilateral lung calculated by the TPS and MC were compared. For the 3DCRT and IMRT plans, the mean dose differences for the GTV between CCC and MC increased with decreasing GTV volume. For IMRT, the mean dose differences were found to be higher than for 3DCRT. The CCC algorithm overestimated the GTV mean dose by approximately 3% for IMRT. For 3DCRT plans, when the volume of the GTV was greater than 100 cm³, the mean doses calculated by CCC and MC showed almost no difference. PBC shows large deviations from the MC algorithm. For the dose to the ipsilateral lung, the CCC algorithm overestimated the dose to the entire lung, and the PBC algorithm overestimated V20 but underestimated V5; the difference in V10 was not statistically significant. PBC substantially overestimates the dose to the tumour, but CCC is similar to the MC simulation. It is recommended that treatment plans for lung cancer be developed using an advanced dose calculation algorithm other than PBC. MC can accurately
Accuracy Analysis and Parameters Optimization in Urban Flood Simulation by PEST Model
Keum, H.; Han, K.; Kim, H.; Ha, C.
2017-12-01
The risk of urban flooding has been increasing due to heavy rainfall, flash flooding and rapid urbanization. Rainwater pumping stations and underground reservoirs are used to actively take measures against flooding; however, flood damage in lowlands continues to occur. Inundation in urban areas results from overflow of the sewer network. For accurate two-dimensional flood analysis, it is therefore important to model the network system that is intricately entangled within a city, close to the actual physical situation, together with accurate terrain, because of the effects of buildings and roads. The purpose of this study is to propose an optimal scenario construction procedure for watershed partitioning and parameterization for urban runoff analysis and pipe network analysis, and to increase the accuracy of flooded area prediction through a coupled model. The procedure was verified by applying it to an actual drainage basin in Seoul. In this study, optimization was performed using four parameters: Manning's roughness coefficient for conduits, watershed width, Manning's roughness coefficient for impervious areas, and Manning's roughness coefficient for pervious areas. The calibration ranges of the parameters were determined using the SWMM manual and the ranges used in previous studies, and the parameters were estimated using the automatic calibration tool PEST. The scenarios using PEST showed a high correlation coefficient, and the RPE and RMSE also showed high accuracy. In the case of RPE, the error was in the range of 13.9-28.9% in the scenarios without parameter estimation, but in the scenarios using PEST the error range was reduced to 6.8-25.7%. Based on the results of this study, it can be concluded that more accurate flood analysis is possible when the optimum scenario is selected by determining the appropriate reference conduit for future urban flooding analysis and if the results are applied to various
Extended-Term Dynamic Simulations with High Penetrations of Photovoltaic Generation.
Energy Technology Data Exchange (ETDEWEB)
Concepcion, Ricky James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Elliott, Ryan Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Donnelly, Matt [Montana Tech., Butte, MT (United States); Sanchez-Gasca, Juan [GE Energy, Schenectady, NY (United States)
2016-01-01
The uncontrolled intermittent availability of renewable energy sources makes integration of such devices into today's grid a challenge. Thus, it is imperative that dynamic simulation tools used to analyze power system performance are able to support systems with high amounts of photovoltaic (PV) generation. Additionally, simulation durations expanding beyond minutes into hours must be supported. This report aims to identify the path forward for dynamic simulation tools to accommodate these needs by characterizing the properties of power systems (with high PV penetration), analyzing how these properties affect dynamic simulation software, and offering solutions for potential problems. We present a study of fixed time step, explicit numerical integration schemes that may be more suitable for these goals, based on identified requirements for simulating high PV penetration systems. We also present the alternative of variable time step integration. To help determine the characteristics of systems with high PV generation, we performed small signal stability studies and time domain simulations of two representative systems. Along with feedback from stakeholders and vendors, we identify the current gaps in power system modeling including fast and slow dynamics and propose a new simulation framework to improve our ability to model and simulate longer-term dynamics.
Taylor bubbles at high viscosity ratios: experiments and numerical simulations
Hewakandamby, Buddhika; Hasan, Abbas; Azzopardi, Barry; Xie, Zhihua; Pain, Chris; Matar, Omar
2015-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube, often occurring in gas-liquid slug flows in many industrial applications, particularly oil and gas production. The objective of this study is to investigate the fluid dynamics of a three-dimensional Taylor bubble rising in highly viscous silicone oil in a vertical pipe. An adaptive unstructured mesh modelling framework is adopted here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rising and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a 'volume of fluid'-type method for interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Experimental results for the Taylor bubble shape and rise velocity are presented, together with numerical results for the dynamics of the bubbles. A comparison of the simulation predictions with experimental data available in the literature is also presented to demonstrate the capabilities of our numerical method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
High-Performance Modeling and Simulation of Anchoring in Granular Media for NEO Applications
Quadrelli, Marco B.; Jain, Abhinandan; Negrut, Dan; Mazhar, Hammad
2012-01-01
NASA is interested in designing a spacecraft capable of visiting a near-Earth object (NEO), performing experiments, and then returning safely. Certain periods of this mission would require the spacecraft to remain stationary relative to the NEO, in an environment characterized by very low gravity levels; such situations require an anchoring mechanism that is compact, easy to deploy, and, upon mission completion, easy to remove. The design philosophy used in this task relies on the simulation capability of a high-performance multibody dynamics physics engine. On Earth, it is difficult to create low-gravity conditions, and testing in low-gravity environments, whether artificial or in space, can be costly and very difficult to achieve. Through simulation, the effect of gravity can be controlled with great accuracy, making it ideally suited to analyze the problem at hand. Using Chrono::Engine, a simulation package capable of utilizing massively parallel Graphics Processing Unit (GPU) hardware, several validation experiments were performed. Modeling of the regolith interaction has been carried out, after which the anchor penetration tests were performed and analyzed. The regolith was modeled by a granular medium composed of very large numbers of convex three-dimensional rigid bodies, subject to microgravity levels and interacting with each other with contact, friction, and cohesional forces. The multibody dynamics simulation approach used for simulating anchors penetrating a soil uses a differential variational inequality (DVI) methodology to solve the contact problem posed as a linear complementarity problem (LCP). Implemented within a GPU processing environment, collision detection is greatly accelerated compared to traditional CPU-based collision detection. Hence, systems of millions of particles interacting with complex dynamic systems can be efficiently analyzed, and design recommendations can be made in a much shorter time. The figure
Hybrid RANS-LES using high order numerical methods
Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael
2017-11-01
Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
Computer modeling of oil spill trajectories with a high accuracy method
International Nuclear Information System (INIS)
Garcia-Martinez, Reinaldo; Flores-Tovar, Henry
1999-01-01
This paper proposes a high accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories that minimise these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite lack of adequate field information, model results compare well with observations in the impacted area. (Author)
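The difference between Euler and 4th-order Runge-Kutta particle tracking can be demonstrated on a simple rotational velocity field, where exact trajectories are circles and any radial drift is pure numerical error. This is a generic sketch with an assumed analytic field, not the OilTrack model, which also requires 4th-order velocity interpolation on gridded data:

```python
import numpy as np

def velocity(p):
    # Assumed solid-body rotation field u = (-y, x): exact particle paths
    # are circles, so the radius measures the tracking error directly.
    x, y = p
    return np.array([-y, x])

def step_euler(p, dt):
    # Forward Euler: one velocity evaluation per step, 1st-order accurate.
    return p + dt * velocity(p)

def step_rk4(p, dt):
    # Classical 4th-order Runge-Kutta: four evaluations, 4th-order accurate.
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

dt, nsteps = 0.1, 63  # roughly one revolution
p_euler = p_rk4 = np.array([1.0, 0.0])
for _ in range(nsteps):
    p_euler = step_euler(p_euler, dt)
    p_rk4 = step_rk4(p_rk4, dt)

# The radius should stay at 1.0: Euler spirals outward (an artificial
# spreading analogous to the overestimated spill areas), RK4 stays on track.
print(np.linalg.norm(p_euler), np.linalg.norm(p_rk4))
```

In a real spill model the velocity field comes from a hydrodynamic solver on a grid, which is why the paper pairs the RK4 stepper with an equally high-order spatial interpolation of the velocities.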
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and numerically stiff finite rate chemistry. For LES/FDF methods to make a more significant impact in practical applications, a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first generation FDF models, namely the scalar filtered mass density function (SFMDF), are being implemented into VULCAN, a production-quality RAS and LES solver widely used for design of high speed propulsion flowpaths. This effort leverages internal and external collaborations to reduce the overall computational cost of high fidelity simulations in VULCAN by: implementing high order methods that allow reduction in the total number of computational cells without loss in accuracy; implementing the first generation of high fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high speed flows; and utilizing finite rate chemistry parametrization, such as flamelet models, to reduce
Improving the accuracy of micro injection moulding process simulations
DEFF Research Database (Denmark)
Marhöfer, David Maximilian; Tosello, Guido; Islam, Aminul
Process simulations in micro injection moulding aim at the optimization and support of the design of the mould, mould inserts, the plastic product, and the process. Nevertheless, dedicated software packages for micro injection moulding are not available. They are developed for macro plastic parts and are therefore limited in the capability of modelling the polymer flow in micro cavities. Hence, new strategies for comprehensive simulation models which provide more precise results open up new opportunities and will be discussed. Modelling and meshing recommendations are presented, leading to a multi......
SIMULATIONS OF HIGH-VELOCITY CLOUDS. I. HYDRODYNAMICS AND HIGH-VELOCITY HIGH IONS
International Nuclear Information System (INIS)
Kwak, Kyujin; Henley, David B.; Shelton, Robin L.
2011-01-01
We present hydrodynamic simulations of high-velocity clouds (HVCs) traveling through the hot, tenuous medium in the Galactic halo. A suite of models was created using the FLASH hydrodynamics code, sampling various cloud sizes, densities, and velocities. In all cases, the cloud-halo interaction ablates material from the clouds. The ablated material falls behind the clouds where it mixes with the ambient medium to produce intermediate-temperature gas, some of which radiatively cools to less than 10,000 K. Using a non-equilibrium ionization algorithm, we track the ionization levels of carbon, nitrogen, and oxygen in the gas throughout the simulation period. We present observation-related predictions, including the expected H I and high ion (C IV, N V, and O VI) column densities on sightlines through the clouds as functions of evolutionary time and off-center distance. The predicted column densities overlap those observed for Complex C. The observations are best matched by clouds that have interacted with the Galactic environment for tens to hundreds of megayears. Given the large distances across which the clouds would travel during such time, our results are consistent with Complex C having an extragalactic origin. The destruction of HVCs is also of interest; the smallest cloud (initial mass ∼120 M_sun) lost most of its mass during the simulation period (60 Myr), while the largest cloud (initial mass ∼4 × 10⁵ M_sun) remained largely intact, although deformed, during its simulation period (240 Myr).
Toghiani, S; Aggrey, S E; Rekaya, R
2016-07-01
Availability of high-density single nucleotide polymorphism (SNP) genotyping platforms provided unprecedented opportunities to enhance breeding programmes in livestock, poultry and plant species, and to better understand the genetic basis of complex traits. Using this genomic information, genomic breeding values (GEBVs) can be estimated that are more accurate than conventional breeding values. The superiority of genomic selection is possible only when high-density SNP panels are used to track genes and QTLs affecting the trait. Unfortunately, even with the continuous decrease in genotyping costs, only a small fraction of the population has been genotyped with these high-density panels. It is often the case that a larger portion of the population is genotyped with low-density, low-cost SNP panels and then imputed to a higher density. Accuracy of SNP genotype imputation tends to be high when minimum requirements are met. Nevertheless, a certain rate of genotype imputation errors is unavoidable. Thus, it is reasonable to assume that the accuracy of GEBVs will be affected by imputation errors, especially their cumulative effects over time. To evaluate the impact of multi-generational selection on the accuracy of SNP genotype imputation and the reliability of the resulting GEBVs, a simulation was carried out under varying updating of the reference population, distance between the reference and testing sets, and the approach used for the estimation of GEBVs. Using fixed reference populations, imputation accuracy decayed by about 0.5% per generation. In fact, after 25 generations, the accuracy was only 7% lower than in the first generation. When the reference population was updated with either 1% or 5% of the top animals of the previous generations, the decay of imputation accuracy was substantially reduced. These results indicate that low-density panels are useful, especially when the generational interval between reference and testing populations is small. As the generational interval
Numerical simulations of novel high-power high-brightness diode laser structures
Boucke, Konstantin; Rogg, Joseph; Kelemen, Marc T.; Poprawe, Reinhart; Weimann, Guenter
2001-07-01
One of the key topics in today's semiconductor laser development activities is to increase the brightness of high-power diode lasers. Although structures showing increased brightness have been developed, specific drawbacks of these structures lead to a still strong demand for the investigation of alternative concepts. Especially for the investigation of fundamentally novel structures, easy-to-use and fast simulation tools are essential to avoid unnecessary, costly and time-consuming experiments. A diode laser simulation tool based on finite difference representations of the Helmholtz equation in the 'wide-angle' approximation and the carrier diffusion equation has been developed. An optimized numerical algorithm leads to short execution times of a few seconds per resonator round-trip on a standard PC. After each round-trip, characteristics such as optical output power, beam profile and beam parameters are calculated. A graphical user interface allows online monitoring of the simulation results. The simulation tool is used to investigate a novel high-power, high-brightness diode laser structure, the so-called 'Z-Structure'. In this structure an increased brightness is achieved by reducing the divergence angle of the beam through angular filtering: the round-trip path of the beam is folded twice using total internal reflection at surfaces defined by a small index step in the semiconductor material, forming a stretched 'Z'. The sharp decrease of the reflectivity for angles of incidence above the angle of total reflection leads to a narrowing of the angular spectrum of the beam. The simulations of the 'Z-Structure' indicate an increase of the beam quality by a factor of five to ten compared to standard broad-area lasers.
Algorithms and parameters for improved accuracy in physics data libraries
International Nuclear Information System (INIS)
Batič, M; Hoff, G; Pia, M G; Saracco, P; Han, M; Kim, C H; Hauf, S; Kuster, M; Seo, H
2012-01-01
Recent efforts to improve the accuracy of the physics data libraries used in particle transport are summarized. Results are reported from a large-scale validation analysis of atomic parameters used by major Monte Carlo systems (Geant4, EGS, MCNP, Penelope etc.); their contribution to the accuracy of simulation observables is documented. The results of this study motivated the development of a new atomic data management software package, which optimizes the provision of state-of-the-art atomic parameters to physics models. The effect of atomic parameters on the simulation of radioactive decay is illustrated. Ideas and methods to deal with physics models applicable to different energy ranges in the production of data libraries, rather than at runtime, are discussed.
International Nuclear Information System (INIS)
Yang, Ching-Ching; Chan, Kai-Chieh
2013-06-01
Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by the scanning procedures was also discussed, with a view to minimizing the biological damage generated by radiation exposure during PET/CT scanning. A small animal PET/CT system was modeled based on Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, the bilinear scaling method, the dual-energy method and a hybrid method combining kVp conversion with the dual-energy method, were investigated comparatively by assessing the accuracy of the estimated linear attenuation coefficients at 511 keV and the bias introduced into PET quantification results by CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results followed a similar trend to that of the estimated linear attenuation coefficients, although the differences between the three methods were more obvious in the estimation of the linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29-45.58 mGy for the 50-kVp scan and between 6.61-39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86x10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but the concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
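Of the three energy mapping methods, the bilinear scaling method is the simplest: CT numbers are mapped to 511 keV linear attenuation coefficients with two linear segments, one from air to water and one from water to bone. A minimal sketch with textbook-style illustrative coefficients, not the values used in the study:

```python
def mu_511(hu):
    """Bilinear mapping of a CT number (HU) to a linear attenuation
    coefficient (cm^-1) at 511 keV. Segment endpoints are illustrative:
    air (-1000 HU, mu ~ 0), water (0 HU) and bone (taken at 1000 HU)."""
    MU_WATER = 0.096   # water at 511 keV, cm^-1
    MU_BONE = 0.172    # cortical bone at 511 keV, cm^-1
    if hu <= 0:
        # air-to-water segment
        return MU_WATER * (hu + 1000.0) / 1000.0
    # water-to-bone segment
    return MU_WATER + hu * (MU_BONE - MU_WATER) / 1000.0

for hu in (-1000, 0, 60, 1000):   # air, water, soft tissue, bone
    print(hu, round(mu_511(hu), 4))
```

The kink at 0 HU is what distinguishes bilinear scaling from a single linear fit; dual-energy methods replace this fixed mapping with a material decomposition from two CT scans.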
High-Performance Beam Simulator for the LANSCE Linac
International Nuclear Information System (INIS)
Pang, Xiaoying; Rybarcyk, Lawrence J.; Baily, Scott A.
2012-01-01
A high performance multiparticle tracking simulator is currently under development at Los Alamos. The heart of the simulator is based upon the beam dynamics simulation algorithms of the PARMILA code, but implemented in C++ on Graphics Processing Unit (GPU) hardware using NVIDIA's CUDA platform. Linac operating set points are provided to the simulator via the EPICS control system so that changes of the real time linac parameters are tracked and the simulation results updated automatically. This simulator will provide valuable insight into the beam dynamics along a linac in pseudo real-time, especially where direct measurements of the beam properties do not exist. Details regarding the approach, benefits and performance are presented.
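The tracking kernel at the core of such a simulator is an embarrassingly parallel map over particle coordinates, which is what makes it a good fit for a GPU. A minimal CPU sketch of a transverse drift map in NumPy (illustrative only, not PARMILA's actual algorithm; a CUDA version would assign one thread per particle):

```python
import numpy as np

def drift(coords, length):
    """Propagate particles through a field-free drift of given length (m).
    coords rows: x (m), x' (rad), y (m), y' (rad) -- a transverse-only sketch."""
    x, xp, y, yp = coords
    return np.array([x + length * xp, xp, y + length * yp, yp])

# A Gaussian bunch of 1000 particles (assumed rms sizes and divergences).
rng = np.random.default_rng(0)
bunch = rng.normal(scale=[1e-3, 1e-4, 1e-3, 1e-4], size=(1000, 4)).T
bunch = drift(bunch, 2.0)
print("rms x after 2 m drift: %.3e m" % bunch[0].std())
```

Each particle update is independent of all the others, so the same map parallelizes trivially; real linac elements (quadrupoles, RF gaps) add per-element kicks between drifts.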
Chuang, Ming-Tung; Fu, Joshua S; Jang, Carey J; Chan, Chang-Chuan; Ni, Pei-Cheng; Lee, Chung-Te
2008-11-15
Aerosol is frequently transported by southward-moving high-pressure systems from the Asian continent to Taiwan, and from 2002 to 2005 a 100% increase in aerosol mass level was recorded on event days compared with non-event days. During this time period, PM2.5 sulfate was found to increase by as much as 155% on event days compared with non-event days. In this study, Asian emission estimations, the Taiwan Emission Database System (TEDS) and meteorological simulation results from the fifth-generation Mesoscale Model (MM5) were used as inputs to the Community Multiscale Air Quality (CMAQ) model to simulate a long-range PM2.5 transport event in a southward high-pressure system from the Asian continent to Taiwan. The simulated aerosol mass level and the associated aerosol components were found to be within a reasonable accuracy. During the transport process, the percentage of semi-volatile PM2.5 organic carbon in the PM2.5 plume only slightly decreased, from 22-24% in Shanghai to 21% near Taiwan. However, the percentage of PM2.5 nitrate in PM2.5 decreased from 16-25% to 1%. In contrast, the percentage of PM2.5 sulfate in PM2.5 increased from 16-19% to 35%. It is interesting to note that the percentages of PM2.5 ammonium and PM2.5 elemental carbon in PM2.5 remained nearly constant. Simulation results revealed that transported pollutants dominated the air quality in Taipei when the southward high-pressure system moved to Taiwan. This demonstrates the dynamic chemical transformation of pollutants during the transport process from the continental origin, over the sea, and to the downwind land.
Pineles, Lisa L; Morgan, Daniel J; Limper, Heather M; Weber, Stephen G; Thom, Kerri A; Perencevich, Eli N; Harris, Anthony D; Landon, Emily
2014-02-01
Hand hygiene (HH) is a critical part of infection prevention in health care settings. Hospitals around the world continuously struggle to improve health care personnel (HCP) HH compliance. The current gold standard for monitoring compliance is direct observation; however, this method is time-consuming and costly. One emerging area of interest involves automated systems for monitoring HH behavior, such as radiofrequency identification (RFID) tracking systems. To assess the accuracy of a commercially available RFID system in detecting HCP HH behavior, we compared direct observation with data collected by the RFID system in a simulated validation setting and in a real-life clinical setting across 2 hospitals. A total of 1,554 HH events was observed. Accuracy for identifying HH events was high in the simulated validation setting (88.5%) but relatively low in the real-life clinical setting (52.4%); this difference was significant. With the RFID system, almost half of the HH events were missed. More research is necessary to further develop these systems and improve accuracy prior to widespread adoption. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
Dynamic modelling and simulation for control of a cylindrical robotic manipulator
International Nuclear Information System (INIS)
Iqbal, A.; Athar, S.M.
1995-03-01
In this report a dynamic model of the three-degrees-of-freedom cylindrical manipulator INFOMATE has been developed. Although the robot dynamics are highly coupled and non-linear, the developed model is relatively straightforward and compact for control engineering and simulation applications. The model has been simulated using the graphical simulation package SIMULINK. Different aspects of INFOMATE associated with forward dynamics, inverse dynamics and control have been investigated by performing various simulation experiments. These simulation experiments confirm the accuracy and applicability of the dynamic robot model. (author) 18 figs
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. The cameras were placed on two orthogonal axes of the linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation along each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
International Nuclear Information System (INIS)
Wolff, I.; Konopka, J.; Fritsch, U.; Hofschen, S.; Rittweger, M.; Becks, T.; Schroeder, W.; Ma Jianguo.
1994-01-01
The basis of computer-aided design for the physical properties of high-temperature superconductors in the high-frequency and microwave regimes was not well known and understood at the beginning of this research project. For this reason, within the research project new models describing the microwave properties of these superconductors were developed, and well-known numerical analysis techniques, e.g. the boundary integral method, the finite-difference time-domain method and the spectral domain analysis technique, were modified so that they meet the requirements of superconducting high-frequency and microwave circuits. It was especially taken into account that the substrate materials used for high-temperature superconductors normally have high dielectric constants and large anisotropies, so that new analysis techniques had to be developed to consider the influence of these parameters on components and circuits. The dielectric properties of the substrate materials were furthermore the subject of measurement activities in which the permittivity tensor of the materials was determined with high accuracy and over a large frequency range. As a result of the performed investigations, improved numerical simulation techniques on a realistic basis are now available for the analysis of superconducting high-frequency and microwave circuits. (orig.) [de
Wu, T.-H.; Liang, C.-H.; Wu, J.-K.; Lien, C.-Y.; Yang, B.-H.; Huang, Y.-H.; Lee, J. J. S.
2009-07-01
The hybrid positron emission tomography-computed tomography (PET-CT) system enables better differentiation of tissue uptake of 18F-fluorodeoxyglucose (18F-FDG) and provides much more diagnostic value in non-small-cell lung cancer and nasopharyngeal carcinoma (NPC). In PET-CT, high-quality CT images not only offer diagnostic value through anatomic delineation of the tissues but also shorten the acquisition time for attenuation correction (AC) compared with PET-alone imaging. Linear accelerators equipped with an X-ray cone-beam computed tomography (CBCT) imaging system for image-guided radiotherapy (IGRT) provide excellent verification of position setup error. The purposes of our study were to optimize the CT acquisition protocols of PET-CT and to integrate PET-CT and CBCT for IGRT. The CT imaging parameters were modified in PET-CT to increase image quality in order to enhance the diagnostic value of tumour delineation. Reproducibility and registration accuracy via a bone co-registration algorithm between PET-CT and CBCT were evaluated using a head phantom to simulate a head and neck treatment condition. Dose measurement in terms of the computed tomography dose index (CTDI) was also estimated. Optimization of the CT acquisition protocols of PET-CT was feasible in this study. Co-registration accuracy between CBCT and PET-CT on axial and helical modes was in the range of 1.06 to 2.08 mm and 0.99 to 2.05 mm, respectively. Our results revealed that co-registration with CBCT on helical mode was more accurate than on axial mode. Radiation doses in CTDI were 4.76 to 18.5 mGy and 4.83 to 18.79 mGy on axial and helical modes, respectively. Registration between PET-CT and CBCT is a state-of-the-art registration technology which could provide much information for diagnosis and accurate tumour contouring in radiotherapy while implementing radiotherapy procedures. This novel technology of PET-CT and cone-beam CT integration for IGRT may have a
International Nuclear Information System (INIS)
Wu, T-H; Liang, C-H; Wu, J-K; Lien, C-Y; Yang, B-H; Lee, J J S; Huang, Y-H
2009-01-01
The hybrid positron emission tomography-computed tomography (PET-CT) system enables better differentiation of tissue uptake of 18F-fluorodeoxyglucose (18F-FDG) and provides much more diagnostic value in non-small-cell lung cancer and nasopharyngeal carcinoma (NPC). In PET-CT, high-quality CT images not only offer diagnostic value through anatomic delineation of the tissues but also shorten the acquisition time for attenuation correction (AC) compared with PET-alone imaging. Linear accelerators equipped with an X-ray cone-beam computed tomography (CBCT) imaging system for image-guided radiotherapy (IGRT) provide excellent verification of position setup error. The purposes of our study were to optimize the CT acquisition protocols of PET-CT and to integrate PET-CT and CBCT for IGRT. The CT imaging parameters were modified in PET-CT to increase image quality in order to enhance the diagnostic value of tumour delineation. Reproducibility and registration accuracy via a bone co-registration algorithm between PET-CT and CBCT were evaluated using a head phantom to simulate a head and neck treatment condition. Dose measurement in terms of the computed tomography dose index (CTDI) was also estimated. Optimization of the CT acquisition protocols of PET-CT was feasible in this study. Co-registration accuracy between CBCT and PET-CT on axial and helical modes was in the range of 1.06 to 2.08 mm and 0.99 to 2.05 mm, respectively. Our results revealed that co-registration with CBCT on helical mode was more accurate than on axial mode. Radiation doses in CTDI were 4.76 to 18.5 mGy and 4.83 to 18.79 mGy on axial and helical modes, respectively. Registration between PET-CT and CBCT is a state-of-the-art registration technology which could provide much information for diagnosis and accurate tumour contouring in radiotherapy while implementing radiotherapy procedures. This novel technology of PET-CT and cone-beam CT integration for IGRT may have a
Yang, Zhiyong; Tang, Zhanwen; Xie, Yongjie; Shi, Hanqiao; Zhang, Boming; Guo, Hongjun
2018-02-01
A composite space mirror can faithfully replicate the high-precision surface of its mould through the replication process, but the actual surface accuracy of the replicated composite mirror always decreases. The lamina thickness of the prepreg affects the number of layers and the layup sequence of the composite space mirror, which in turn affect the surface accuracy of the mirror. In our research, two groups of contrasting cases were studied through finite element analyses (FEA) and comparative experiments, focusing on the effect of different lamina thicknesses of prepreg and the corresponding lay-up sequences. We describe a special analysis model, the validation process and the result analysis. The simulated and measured surface figures both lead to the same conclusion: reducing the lamina thickness of the prepreg used in replicating a composite space mirror favours the optimal design of the layup sequence for fabricating the mirror, and can improve its surface accuracy.
Provably unbounded memory advantage in stochastic simulation using quantum mechanics
International Nuclear Information System (INIS)
Garner, Andrew J P; Thompson, Jayne; Vedral, Vlatko; Gu, Mile; Liu, Qing
2017-01-01
Simulating the stochastic evolution of real quantities on a digital computer requires a trade-off between the precision to which these quantities are approximated, and the memory required to store them. The statistical accuracy of the simulation is thus generally limited by the internal memory available to the simulator. Here, using tools from computational mechanics, we show that quantum processors with a fixed finite memory can simulate stochastic processes of real variables to arbitrarily high precision. This demonstrates a provable, unbounded memory advantage that a quantum simulator can exhibit over its best possible classical counterpart. (paper)
Simulation of automatic frequency and power regulators
Borovikov, Y. S.; Pischulin, A. Y.; Ufa, R. A.
2015-10-01
The presented research is motivated by the need for new methods and tools for adequate real-time simulation of the automatic frequency and power regulators of generators, which play an important role in the planning, design and operation of electric power systems. This paper proposes a hybrid real-time simulator of electric power systems for the simulation of automatic frequency and power regulators of generators. The obtained results of experimental research on the turbine emergency control of a generator demonstrate the high accuracy of the simulator and the possibility of real-time simulation of all the processes in the electric power system without any decomposition or limitation on their duration, as well as the effectiveness of the proposed simulator in solving design, operational and research tasks for electric power systems.
International Nuclear Information System (INIS)
Ko, P; Kurosawa, S
2014-01-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume of fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work enhancing turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high-accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-averaged Navier-Stokes equations with the volume of fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparison with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.
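The Rayleigh-Plesset equation mentioned above is an ordinary differential equation for the bubble radius; a minimal explicit-Euler sketch for a single bubble in water (generic water properties; the paper's modified form adds terms not reproduced here):

```python
def rp_step(R, Rdot, dt, p_inf, p_v=2.3e3, rho=998.0,
            sigma=0.0728, mu=1.0e-3, R0=1.0e-4, k=1.4):
    """One explicit Euler step of the classic Rayleigh-Plesset equation for
    a spherical gas/vapour bubble in water (SI units). R R'' + 1.5 R'^2 =
    (p_b - p_inf)/rho - 4 mu R'/(rho R) - 2 sigma/(rho R)."""
    p_g0 = 1.013e5 - p_v + 2.0 * sigma / R0          # equilibrium gas pressure
    p_b = p_v + p_g0 * (R0 / R) ** (3.0 * k)         # pressure at the bubble wall
    Rddot = ((p_b - p_inf) / rho - 1.5 * Rdot ** 2
             - 4.0 * mu * Rdot / (rho * R)
             - 2.0 * sigma / (rho * R)) / R
    return R + dt * Rdot, Rdot + dt * Rddot

# Drop the ambient pressure below vapour pressure: the bubble grows (cavitation).
R, Rdot = 1.0e-4, 0.0
for _ in range(2000):
    R, Rdot = rp_step(R, Rdot, dt=1.0e-9, p_inf=1.0e3)
print(f"radius after 2 us: {R * 1e6:.3f} um")
```

In a CFD solver this ODE (or a simplified mass-transfer form of it) is evaluated per cell to drive vapour-fraction source terms; production codes use stiffer-stable integrators than forward Euler.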
A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges
Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.
2012-04-01
In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis is proven to be highly affected by the on-site rainfall intensity (RI) measurement accuracy (see e.g. Molini et al., 2004), and the time resolution of the available RI series certainly constitutes another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance under previous constant flow rate calibration efforts (Lanza et al., 2005). Although WGs provide better performance than the more traditional tipping-bucket rain gauges (TBR) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant of these is due to the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing the cumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of the accuracy of Pluvio2 and T-200 rainfall intensity measurements. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensities, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site
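To first order, the acquisition-system delay described above acts like a low-pass filter on the true rainfall intensity signal, so short intense bursts are underestimated. A sketch with an assumed time constant (tau is illustrative, not a manufacturer specification):

```python
import numpy as np

def gauge_response(true_ri, dt=1.0, tau=6.0):
    """First-order lag model of a weighing gauge's acquisition system:
    d(measured)/dt = (true - measured) / tau. dt is the sample step (s)
    and tau an assumed response time constant (s)."""
    measured = np.zeros_like(true_ri, dtype=float)
    for i in range(1, len(true_ri)):
        measured[i] = measured[i - 1] + dt / tau * (true_ri[i] - measured[i - 1])
    return measured

# A 5 s burst of 100 mm/h (1 s samples) in an otherwise dry record:
ri = np.zeros(300)
ri[100:105] = 100.0
out = gauge_response(ri)
print("peak measured: %.1f mm/h (true 100.0)" % out.max())
```

Because the burst is shorter than a few time constants, the measured peak stays well below the true intensity, which is exactly the dynamic error the non-stationary laboratory tests are designed to expose.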
Decision-Making Accuracy of CBM Progress-Monitoring Data
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
Energy Technology Data Exchange (ETDEWEB)
Jung, W; Ogawa, T [Yokohama National University, Yokohama (Japan); Tamagawa, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)
1997-10-22
This paper describes how a high-accuracy simulation of seismic exploration can be carried out using the numerical grid method. When applying a wave-field simulation based on the finite-difference calculus to an area subjected to seismic exploration, a problem arises as to how a boundary of the velocity structure, including the ground surface, should be dealt with. Simply applying grids to a continuously changing boundary worsens the accuracy of the simulation. The difference calculus using a numerical grid is a method that solves this problem by mapping a given region onto a rectangular region through variable conversion, which allows the boundary condition to be imposed more accurately. The wave-field simulation was carried out on a simple two-layer inclined structure and a two-layer waved structure. It was revealed that the amplitudes of direct waves and reflection waves are disturbed when the numerical grid method is not applied, and that the amplitudes are more dispersed in the reflection waves than those obtained using the numerical grid method. 7 refs., 10 figs.
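The essence of the numerical grid method, mapping a domain bounded by an irregular interface onto a rectangular computational domain so the boundary falls on a coordinate line, can be sketched as follows (a simplified algebraic grid, not the transformation used by the authors):

```python
import numpy as np

def boundary_fitted_grid(h, nx=51, nz=26):
    """Map the physical domain {0<=x<=1, h(x)<=z<=1}, whose lower boundary
    h(x) varies continuously, onto a rectangular computational grid:
    zeta in [0,1] is stretched between h(x) and the flat top z=1."""
    xi = np.linspace(0.0, 1.0, nx)
    zeta = np.linspace(0.0, 1.0, nz)
    X = np.tile(xi, (nz, 1))
    Z = h(xi)[None, :] + zeta[:, None] * (1.0 - h(xi)[None, :])
    return X, Z

# Inclined interface: lower boundary rises linearly from 0 to 0.3.
X, Z = boundary_fitted_grid(lambda x: 0.3 * x)
# The first grid line follows the boundary exactly, so the boundary condition
# is imposed on a coordinate line instead of a staircase of cells.
print(Z[0, 0], Z[0, -1])  # 0.0 and 0.3
```

Derivatives in the wave equation then pick up metric terms of the mapping, but the finite-difference stencils themselves remain those of a uniform rectangular grid.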
Synchrotron accelerator technology for proton beam therapy with high accuracy
International Nuclear Information System (INIS)
Hiramoto, Kazuo
2009-01-01
Proton beam therapy was applied at the beginning to head and neck cancers, but it is now extended to prostate, lung and liver cancers, so the need for a pencil beam scanning method is increasing. With this method, the radiation dose concentration property of the proton beam is further intensified. The Hitachi group supplied a pencil beam scanning therapy system, the first of its kind, to the M. D. Anderson Hospital in the United States, and it has been operational since May 2008. The Hitachi group has been developing its proton therapy system to realize high-accuracy proton therapy that concentrates the dose in the diseased part, which may be located at various depths and sometimes has a complicated shape. The author describes here the synchrotron accelerator technology that is an important element of the proton therapy system. (K.Y.)
Novel high-fidelity realistic explosion damage simulation for urban environments
Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya
2010-04-01
Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and the surrounding entities. However, none of the existing building damage simulation systems sufficiently realizes the criteria of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system to realistically simulate destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.
Flow distribution of pebble bed high temperature gas cooled reactors using large eddy simulation
International Nuclear Information System (INIS)
Gokhan Yesilyurt; Hassan, Y.A.
2003-01-01
authors' knowledge there are no detailed, complete calculations for this kind of reactor addressing these local phenomena. This work is an attempt to evaluate and calculate this effect. The simulation of these local phenomena cannot be computed with existing conventional computational tools. Not all Computational Fluid Dynamics (CFD) methods are applicable to solving turbulence problems in complex geometries. As in the pebble bed reactor core, a compromise is needed between the accuracy of the results and the time/cost of the effort in acquiring them. Resolving all the scales of a turbulent flow is too costly, while employing highly empirical turbulence models for complex problems could give inaccurate simulation results. The large eddy simulation (LES) method meets the above requirements: the large scales in the flow are solved and the small scales are modeled. A schematic of the simulated core region used in the calculations is presented in Figure 1.1. (author)
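The division of labour stated above (resolve the large eddies, model the small ones) is commonly closed with an eddy-viscosity subgrid model. A minimal sketch of the classic Smagorinsky form with illustrative constants (the subgrid model actually used for the pebble bed calculation is not specified here):

```python
def smagorinsky_nu_t(strain_rate_mag, delta, cs=0.17):
    """Subgrid-scale eddy viscosity of the classic Smagorinsky model:
    nu_t = (Cs * Delta)**2 * |S|, where |S| is the resolved strain-rate
    magnitude (1/s) and Delta the filter width (m). Cs ~ 0.1-0.2 is an
    illustrative constant, often tuned or computed dynamically."""
    return (cs * delta) ** 2 * strain_rate_mag

# Example: 1 mm filter width (of the order of a pebble-gap resolving mesh)
# and a resolved strain-rate magnitude of 50 1/s.
nu_t = smagorinsky_nu_t(50.0, 1.0e-3)
print(f"nu_t = {nu_t:.3e} m^2/s")
```

The added viscosity drains energy from the smallest resolved scales, standing in for the unresolved subgrid turbulence; dynamic variants adapt Cs locally near walls and in the pebble gaps.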
Image Positioning Accuracy Analysis for Super Low Altitude Remote Sensing Satellites
Directory of Open Access Journals (Sweden)
Ming Xu
2012-10-01
Full Text Available Super-low-altitude remote sensing satellites maintain lower flight altitudes by means of ion propulsion in order to improve image resolution and positioning accuracy. The use of engineering data in design for achieving image positioning accuracy is discussed in this paper based on the principles of photogrammetry theory. The exact line-of-sight rebuilding of each detection element, and the precise intersection of this direction with the Earth's ellipsoid while the camera on the satellite is imaging, are both ensured by the combined design of key parameters. These parameters include: orbit determination accuracy, attitude determination accuracy, camera exposure time, accurate synchronization of the reception of ephemeris with attitude data, geometric calibration and precise orbit verification. Precise simulation calculations show that the image positioning accuracy of super-low-altitude remote sensing satellites is not obviously improved; the attitude determination error of a satellite still restricts its positioning accuracy.
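The "line-of-sight intersecting the Earth's ellipsoid" step reduces to a quadratic ray-ellipsoid intersection in Earth-centred coordinates. A simplified geometric sketch using the WGS84 semi-axes (refraction, terrain height and the paper's actual parameter chain are ignored):

```python
import math

A, B = 6378137.0, 6356752.3142  # WGS84 semi-major / semi-minor axes (m)

def los_ellipsoid_intersection(p, d):
    """First intersection of the ray p + t*d (ECEF, metres) with the WGS84
    ellipsoid (x/A)^2 + (y/A)^2 + (z/B)^2 = 1, or None if the ray misses."""
    # Scale coordinates so the ellipsoid becomes a unit sphere.
    px, py, pz = p[0] / A, p[1] / A, p[2] / B
    dx, dy, dz = d[0] / A, d[1] / A, d[2] / B
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (px * dx + py * dy + pz * dz)
    c = px * px + py * py + pz * pz - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearer root = first hit
    return tuple(p[i] + t * d[i] for i in range(3))

# Nadir view from 300 km above the equator (an assumed geometry):
hit = los_ellipsoid_intersection((A + 300e3, 0.0, 0.0), (-1.0, 0.0, 0.0))
print(hit)
```

Errors in the assumed attitude tilt the direction `d`, and at a few hundred kilometres of altitude even small angular errors displace this intersection by many metres on the ground, which is why attitude determination dominates the positioning budget.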
A high-orbit collimating infrared earth simulator
International Nuclear Information System (INIS)
Zhang Guoyu; Jiang Huilin; Fang Yang; Yu Huadong; Xu Xiping; Wang, Lingyun; Liu Xuli; Huang Lan; Yue Shixin; Peng Hui
2007-01-01
The earth simulator is the most important ground-based testing equipment for the infrared earth sensor, and it is also a key component in the satellite control system. For three orbit heights, 18000 km, 35786 km and 42000 km, we adopt in this paper a design based on collimation with a replaceable earth diaphragm and develop a high-orbit collimating earth simulator. The simulator can provide three field angles, 15.19°, 17.46° and 30.42°, simulating on the ground the earth as it is seen from outer space by the satellite. In this paper we introduce the components, the overall structure and the testing method for the earth's field angles of the earth simulator in detail. The germanium collimation lens is the most important component in the earth simulator. According to the optical configuration parameters of the germanium collimation lens, we determine the location and size of the earth diaphragm and the hot earth by theoretical analysis and optical calculation, which provides a foundation for the design and study of the earth simulator. The earth field angle is the index used to scale the precision of the earth simulator. We tested the three angles by experiment and the results indicate that all three angle errors are less than ±0.05°.
Implicit vessel surface reconstruction for visualization and CFD simulation
International Nuclear Information System (INIS)
Schumann, Christian; Peitgen, Heinz-Otto; Neugebauer, Mathias; Bade, Ragnar; Preim, Bernhard
2008-01-01
Accurate and high-quality reconstructions of vascular structures are essential for vascular disease diagnosis and blood flow simulations. These applications necessitate a trade-off between accuracy and smoothness. An additional requirement for the volume grid generation for Computational Fluid Dynamics (CFD) simulations is a high triangle quality. We propose a method that produces an accurate reconstruction of the vessel surface with satisfactory surface quality. A point cloud representing the vascular boundary is generated based on a segmentation result. Thin vessels are subsampled to enable an accurate reconstruction. A signed distance field is generated using Multi-level Partition of Unity Implicits and subsequently polygonized using a surface tracking approach. To guarantee a high triangle quality, the surface is remeshed. Compared to other methods, our approach represents a good trade-off between accuracy and smoothness. For the tested data, the average surface deviation from the segmentation results is 0.19 voxel diagonals and the maximum equi-angle skewness values are below 0.75. The generated surfaces are considerably more accurate than those obtained using model-based approaches. Compared to other model-free approaches, the proposed method produces smoother results and thus better supports the perception and interpretation of the vascular topology. Moreover, the triangle quality of the generated surfaces is suitable for CFD simulations. (orig.)
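The equi-angle skewness quoted as a triangle-quality bound can be computed per triangle from its largest and smallest interior angles as max((theta_max - 60°)/120°, (60° - theta_min)/60°); a small sketch:

```python
import math

def equiangle_skewness(a, b, c):
    """Equi-angle skewness of a triangle given its 2-D vertices:
    0 for an equilateral triangle, approaching 1 as it degenerates."""
    def angle(p, q, r):  # interior angle at vertex p, in degrees
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    angles = [angle(a, b, c), angle(b, a, c), angle(c, a, b)]
    return max((max(angles) - 60.0) / 120.0, (60.0 - min(angles)) / 60.0)

print(equiangle_skewness((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~0, equilateral
print(equiangle_skewness((0, 0), (1, 0), (0.5, 0.1)))               # strongly skewed sliver
```

CFD solvers are sensitive to sliver elements, so a remeshing pass that keeps the maximum skewness below a threshold such as 0.75 directly improves solver robustness.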
Accuracy of binary black hole waveform models for aligned-spin binaries
Kumar, Prayush; Chu, Tony; Fong, Heather; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-05-01
Coalescing binary black holes are among the primary science targets for second generation ground-based gravitational wave detectors. Reliable gravitational waveform models are central to detection of such systems and subsequent parameter estimation. This paper performs a comprehensive analysis of the accuracy of recent waveform models for binary black holes with aligned spins, utilizing a new set of 84 high-accuracy numerical relativity simulations. Our analysis covers comparable mass binaries (mass-ratio 1 ≤q ≤3 ), and samples independently both black hole spins up to a dimensionless spin magnitude of 0.9 for equal-mass binaries and 0.85 for unequal mass binaries. Furthermore, we focus on the high-mass regime (total mass ≳50 M⊙ ). The two most recent waveform models considered (PhenomD and SEOBNRv2) both perform very well for signal detection, losing less than 0.5% of the recoverable signal-to-noise ratio ρ , except that SEOBNRv2's efficiency drops slightly for both black hole spins aligned at large magnitude. For parameter estimation, modeling inaccuracies of the SEOBNRv2 model are found to be smaller than systematic uncertainties for moderately strong GW events up to roughly ρ ≲15 . PhenomD's modeling errors are found to be smaller than SEOBNRv2's, and are generally irrelevant for ρ ≲20 . Both models' accuracy deteriorates with increased mass ratio, and when at least one black hole spin is large and aligned. The SEOBNRv2 model shows a pronounced disagreement with the numerical relativity simulation in the merger phase, for unequal masses and simultaneously both black hole spins very large and aligned. Two older waveform models (PhenomC and SEOBNRv1) are found to be distinctly less accurate than the more recent PhenomD and SEOBNRv2 models. Finally, we quantify the bias expected from all four waveform models during parameter estimation for several recovered binary parameters: chirp mass, mass ratio, and effective spin.
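The waveform-accuracy comparisons above rest on the overlap (match) between a model waveform and a numerical relativity waveform. A minimal flat-noise sketch of that idea (a real analysis weights the inner product by the detector's power spectral density; the sinusoids and dephasing here are purely illustrative):

```python
import math

def overlap(h1, h2):
    """Normalized inner product (match) of two sampled waveforms.
    A match of 1 means identical shape; using an imperfect model loses
    roughly a fraction (1 - match) of the recoverable signal-to-noise ratio."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

# Toy example: a sinusoidal "NR waveform" vs. a slightly dephased "model"
t = [i * 0.01 for i in range(1000)]
h_nr = [math.sin(2 * math.pi * 3 * x) for x in t]
h_model = [math.sin(2 * math.pi * 3 * x + 0.05) for x in t]
m = overlap(h_nr, h_model)  # slightly below 1 due to the phase offset
```

The quoted "less than 0.5% SNR loss" criterion corresponds to a match above 0.995 in this flat-noise picture.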
Patterns of communication in high-fidelity simulation.
Anderson, Judy K; Nelson, Kimberly
2015-01-01
High-fidelity simulation is commonplace in nursing education. However, critical thinking, decision making, and psychomotor skills scenarios are emphasized. Scenarios involving communication occur in interprofessional or intraprofessional settings. The importance of effective nurse-patient communication is reflected in statements from the American Nurses Association and Quality and Safety Education for Nurses, and in the graduate outcomes of most nursing programs. This qualitative study examined the patterns of communication observed in video recordings of a medical-surgical scenario with 71 senior students in a baccalaureate program. Thematic analysis revealed patterns of (a) focusing on tasks, (b) communicating-in-action, and (c) being therapeutic. Additional categories under the patterns included missing opportunities, viewing the "small picture," relying on informing, speaking in "medical tongues," offering choices…okay?, feeling uncomfortable, and using therapeutic techniques. The findings suggest the importance of using high-fidelity simulation to develop expertise in communication. In addition, the findings reinforce the recommendation to prioritize communication aspects of scenarios and debriefing for all simulations. Copyright 2015, SLACK Incorporated.
Emulation of dynamic simulators with application to hydrology
Energy Technology Data Exchange (ETDEWEB)
Machac, David, E-mail: david.machac@eawag.ch [Eawag, Swiss Federal Institute of Aquatic Science and Technology, Department of Systems Analysis, Integrated Assessment and Modelling, 8600 Dübendorf (Switzerland); ETH Zurich, Department of Environmental Systems Science, 8092 Zurich (Switzerland); Reichert, Peter [Eawag, Swiss Federal Institute of Aquatic Science and Technology, Department of Systems Analysis, Integrated Assessment and Modelling, 8600 Dübendorf (Switzerland); ETH Zurich, Department of Environmental Systems Science, 8092 Zurich (Switzerland); Albert, Carlo [Eawag, Swiss Federal Institute of Aquatic Science and Technology, Department of Systems Analysis, Integrated Assessment and Modelling, 8600 Dübendorf (Switzerland)
2016-05-15
Many simulation-intensive tasks in the applied sciences, such as sensitivity analysis, parameter inference or real time control, are hampered by slow simulators. Emulators provide the opportunity of speeding up simulations at the cost of introducing some inaccuracy. An emulator is a fast approximation to a simulator that interpolates between design input–output pairs of the simulator. Increasing the number of design data sets is a computationally demanding way of improving the accuracy of emulation. We investigate the complementary approach of increasing emulation accuracy by including knowledge about the mechanisms of the simulator into the formulation of the emulator. To approximately reproduce the output of dynamic simulators, we consider emulators that are based on a system of linear, ordinary or partial stochastic differential equations with a noise term formulated as a Gaussian process of the parameters to be emulated. This stochastic model is then conditioned on the design data so that it mimics the behavior of the nonlinear simulator as a function of the parameters. The drift terms of the linear model are designed to provide a simplified description of the simulator as a function of its key parameters so that the required corrections by the conditioned Gaussian process noise are as small as possible. The goal of this paper is to compare the gain in accuracy of these emulators by enlarging the design data set and by varying the degree of simplification of the linear model. We apply this framework to a simulator for the shallow water equations in a channel and compare emulation accuracy for emulators based on different spatial discretization levels of the channel and for a standard non-mechanistic emulator. Our results indicate that we have a large gain in accuracy already when using the simplest mechanistic description by a single linear reservoir to formulate the drift term of the linear model. Adding some more reservoirs does not lead to a significant further improvement in accuracy.
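The baseline against which such mechanistic emulators are compared is a standard non-mechanistic emulator: a Gaussian process conditioned on design input–output pairs. A minimal sketch of that baseline (zero-mean GP with an assumed squared-exponential kernel; the "expensive simulator" is a stand-in function, and the paper's mechanistic variants would add a linear ODE/PDE drift term):

```python
import math

def rbf(x, y, ell=1.0):
    # squared-exponential (RBF) covariance kernel
    return math.exp(-0.5 * ((x - y) / ell) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_emulate(design_x, design_y, x_star, ell=1.0, nugget=1e-9):
    """Condition a zero-mean GP on design pairs; return its mean at x_star."""
    n = len(design_x)
    K = [[rbf(design_x[i], design_x[j], ell) + (nugget if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, design_y)
    return sum(alpha[i] * rbf(x_star, design_x[i], ell) for i in range(n))

# Emulate a stand-in "slow simulator" f(x) = sin(x) from five design runs
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
prediction = gp_emulate(xs, ys, 2.5)  # close to sin(2.5)
```

Enlarging the design set `xs` improves accuracy at the cost of more simulator runs, which is exactly the trade-off the mechanistic drift term is meant to relax.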
High Fidelity In Situ Shoulder Dystocia Simulation
Directory of Open Access Journals (Sweden)
Andrew Pelikan, MD
2018-04-01
Full Text Available Audience: Resident physicians, emergency department (ED) staff. Introduction: Precipitous deliveries are high-acuity, low-occurrence events in most emergency departments. Shoulder dystocia is a rare but potentially fatal complication of labor that can be relieved by specific maneuvers that must be implemented in a timely manner. This simulation is designed to educate resident learners on the critical management steps in a shoulder dystocia presenting to the emergency department. A special aspect of this simulation is the unique utilization of the "Noelle" model with an instructing physician at the bedside maneuvering the fetus through the stations of labor and providing subtle adjustments to fetal positioning not possible with a mechanized model. A literature search of "shoulder dystocia simulation" consists primarily of obstetrics and midwifery journals, many of which utilize various mannequin models. None of the reviewed articles utilized a bedside provider maneuvering the fetus with the Noelle model, making this method unique. The Noelle model is equipped with a remote-controlled motor that automatically rotates and delivers the baby either to the head or to the shoulders, can produce a turtle sign, and will prevent delivery of the baby until signaled to do so by the instructor; however, using the bedside-instructor method allows this simulation to be reproduced with less mechanically advanced and lower-cost models.1-5 Objectives: At the end of this simulation, learners will: (1) Recognize impending delivery and mobilize appropriate resources (i.e., both obstetrics [OB] and NICU/pediatrics); (2) Identify risk factors for shoulder dystocia based on history and physical; (3) Recognize shoulder dystocia during delivery; (4) Demonstrate maneuvers to relieve shoulder dystocia; (5) Communicate with team members and nursing staff during resuscitation of a critically ill patient. Method: High-fidelity simulation. Topics: High fidelity, in situ, Noelle model.
Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept
Directory of Open Access Journals (Sweden)
Ahmed Elsaadany
2014-01-01
Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally, it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the second devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile's aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. Deploying the drag brake in an early stage of the trajectory results in a large range correction. The correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.
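The range-correction mechanism of a drag ring brake can be pictured with a point-mass trajectory whose drag coefficient jumps at a chosen deployment time. All numbers below are illustrative, not taken from the study:

```python
import math

def range_with_brake(t_deploy, v0=300.0, angle_deg=45.0,
                     k_base=1e-4, k_brake=5e-4, dt=0.01):
    """Point-mass trajectory with quadratic drag (deceleration = k*v*v_i);
    the drag ring brake raises k from k_base to k_brake at t_deploy.
    Simple explicit Euler integration; all parameters are assumed values."""
    g = 9.81
    a = math.radians(angle_deg)
    vx, vy = v0 * math.cos(a), v0 * math.sin(a)
    x = y = t = 0.0
    while y >= 0.0:
        k = k_brake if t >= t_deploy else k_base
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x

no_brake = range_with_brake(t_deploy=1e9)  # brake never deploys
early = range_with_brake(t_deploy=2.0)     # early deployment
late = range_with_brake(t_deploy=10.0)     # later deployment
```

Deploying earlier leaves the projectile under increased drag for longer, hence a larger range correction, matching the qualitative finding above.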
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
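Subcycling, one of the surveyed techniques, advances the fast dynamics with several small substeps inside each large timestep of the slow dynamics. A particle-free toy sketch (a stiff harmonic force stands in for high-frequency plasma oscillations; all parameters are illustrative):

```python
import math

def push(x, v, omega, dt):
    # kick-drift-kick leapfrog step for a harmonic force -omega^2 * x
    v -= 0.5 * dt * omega ** 2 * x
    x += dt * v
    v -= 0.5 * dt * omega ** 2 * x
    return x, v

def simulate(n_sub, t_end=10.0, dt_slow=0.5, omega=8.0):
    """Advance the fast force with n_sub substeps inside each slow timestep.
    n_sub=1 reproduces the naive large-step scheme, which is unstable here
    because omega*dt_slow exceeds the leapfrog stability limit of 2."""
    x, v = 1.0, 0.0
    for _ in range(int(t_end / dt_slow)):
        for _ in range(n_sub):
            x, v = push(x, v, omega, dt_slow / n_sub)
    return x

exact = math.cos(8.0 * 10.0)          # analytic solution x(t) = cos(omega*t)
err_naive = abs(simulate(1) - exact)   # blows up: omega*dt = 4 > 2
err_sub = abs(simulate(64) - exact)    # subcycled: accurate and stable
```

Real subcycling schemes apply this idea per species (e.g., substepping electrons while pushing ions with the large step), but the stability payoff is the same.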
Accuracy of Self-Evaluation in Adults with ADHD: Evidence from a Driving Study
Knouse, Laura E.; Bagwell, Catherine L.; Barkley, Russell A.; Murphy, Kevin R.
2005-01-01
Research on children with ADHD indicates an association with inaccuracy of self-appraisal. This study examines the accuracy of self-evaluations in clinic-referred adults diagnosed with ADHD. Self-assessments and performance measures of driving in naturalistic settings and on a virtual-reality driving simulator are used to assess accuracy of…
Go with the Flow. Moving meshes and solution monitoring for compressible flow simulation
van Dam, A.
2009-01-01
The simulation of time-dependent physical problems, such as flows of some kind, places high demands on the domain discretization in order to obtain high accuracy of the numerical solution. We present a moving mesh method in which the mesh points automatically move towards regions where high spatial resolution is required.
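The core of such moving mesh methods is the equidistribution principle: mesh points are placed so that each cell carries an equal share of the integral of a monitor function that is large where high resolution is needed (e.g., at steep gradients). A static one-dimensional sketch, with an assumed illustrative monitor function:

```python
import math

def equidistribute(monitor, n_cells=20, n_fine=2000):
    """Place n_cells+1 mesh points on [0, 1] so that each cell holds an
    equal share of the integral of the monitor function."""
    # cumulative integral of the monitor on a fine fixed grid (trapezoid rule)
    h = 1.0 / n_fine
    xs = [i * h for i in range(n_fine + 1)]
    cum = [0.0]
    for i in range(n_fine):
        cum.append(cum[-1] + 0.5 * (monitor(xs[i]) + monitor(xs[i + 1])) * h)
    total = cum[-1]
    # invert the cumulative integral at equally spaced levels
    mesh, j = [0.0], 0
    for k in range(1, n_cells):
        level = total * k / n_cells
        while cum[j + 1] < level:
            j += 1
        frac = (level - cum[j]) / (cum[j + 1] - cum[j])
        mesh.append(xs[j] + frac * h)
    mesh.append(1.0)
    return mesh

# Monitor large near x = 0.5 (a steep front): cells cluster there
mesh = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-200.0 * (x - 0.5) ** 2))
widths = [b - a for a, b in zip(mesh, mesh[1:])]
```

A moving mesh method re-solves this equidistribution problem (smoothly in time) as the solution evolves, so the points track the moving features.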
PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components
Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia
2016-01-01
The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned up to 10 μm w.r.t. a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools addressing several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...
DEFF Research Database (Denmark)
Møgelhøj, Andreas; Kelkkanen, Kari André; Wikfeldt, K Thor
2011-01-01
The structure of liquid water at ambient conditions is studied in ab initio molecular dynamics simulations in the NVE ensemble using van der Waals (vdW) density-functional theory, i.e., using the new exchange-correlation functionals optPBE-vdW and vdW-DF2, where the latter has softer nonlocal...... protocol could cause the deviation. An O-O PCF consisting of a linear combination of 70% from vdW-DF2 and 30% from low-density liquid water, as extrapolated from experiments, reproduces near-quantitatively the experimental O-O PCF for ambient water. This suggests the possibility that the new functionals...... shows some resemblance with experiment for high-density water ( Soper , A. K. and Ricci , M. A. Phys. Rev. Lett. 2000 , 84 , 2881 ), but not directly with experiment for ambient water. Considering the accuracy of the new functionals for interaction energies, we investigate whether the simulation...
Frey, Bradley J.; Leviton, Douglas B.
2005-01-01
The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.
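Refractometers of this type reduce measured beam deviation angles to refractive index values; for a prism measured at minimum deviation, the classic relation n = sin((A + D)/2) / sin(A/2) applies, where A is the apex angle and D the minimum deviation. A sketch with illustrative angles (not CHARMS data):

```python
import math

def index_from_min_deviation(apex_deg, deviation_deg):
    """Refractive index from a prism's apex angle A and its measured
    minimum deviation angle D: n = sin((A + D)/2) / sin(A/2)."""
    A = math.radians(apex_deg)
    D = math.radians(deviation_deg)
    return math.sin((A + D) / 2.0) / math.sin(A / 2.0)

# Illustrative: a 30-degree prism deviating the beam by 19.4 degrees
n = index_from_min_deviation(30.0, 19.4)  # roughly 1.61
```

Sub-arcsecond accuracy in the two angle measurements is what drives the achievable accuracy in n, which is why the angular metrology details matter so much in this kind of system.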
Directory of Open Access Journals (Sweden)
Lei Baiwei
2016-10-01
Full Text Available In coal mine fire rescue, if an abnormal increase of gas concentration occurs, the first task is to analyze the reasons and identify the sources of the abnormality, which is also the basis for judging the combustion state of the fire area and formulating proper fire-relief measures. Related research has recognized methane explosion as the source of high concentrations of H2, but there are few studies on the conditions and reaction mechanism by which gas explosion generates high concentrations of H2. Therefore, this paper uses the chemical kinetics software ChemKin and a 20 L spherical explosion experimental device to simulate the generation process and formation conditions of H2 in gas explosions. The results show that the decomposition of water vapor is the main elementary reaction (R84) leading to the generation of H2. The free radical H is the key factor influencing the formation of H2 in gas explosions. As the gas explosion concentration gradually increases, the explosive reaction becomes more incomplete, and the quantity of H2 generated increases gradually. The 20 L spherical explosion experimental results are consistent with the trend of the simulation results, which verifies the accuracy of the simulation analysis. The explosion experiments show that when the gas concentration is higher than 9%, the incomplete reaction of the methane explosion increases, leading to a gradual increase in H2 formation.
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed the high-performance computing code THC-MP for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and the exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. The numerical accuracy of THC-MP was verified through a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
Horizontal Positional Accuracy of Google Earth's High-Resolution Imagery Archive
Directory of Open Access Journals (Sweden)
David Potere
2008-12-01
Full Text Available Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth's landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with a known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world's peri-urban areas.
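The positional accuracy figures quoted above are root-mean-squared errors over the control-point offset magnitudes, computed as follows (the offsets here are hypothetical examples, not the study's data):

```python
import math

def rmse(offsets_m):
    """Root-mean-squared error of control-point offset magnitudes (meters)."""
    return math.sqrt(sum(d * d for d in offsets_m) / len(offsets_m))

# Hypothetical offsets between imagery and reference control points
sample = [12.0, 35.5, 48.2, 9.3, 61.0]
result = round(rmse(sample), 1)  # 38.8
```

Note that RMSE weights large errors more heavily than a plain mean, which is why the reported 171.6 m worst case can coexist with a 39.7 m RMSE.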
High-accuracy dosimetry study for intensity-modulated radiation therapy(IMRT) commissioning
International Nuclear Information System (INIS)
Jeong, Hae Sun
2010-02-01
.5 cm 2 ). In addition, a method using a pixel-based unfolding curve was developed and applied to correct the non-uniform response of flat-bed type scanners for a radiochromic film. The accuracy of the method was then evaluated by comparing the results with those of an ion chamber, Monte Carlo simulation, and the CF-based conventional method. For individual doses, the dosimetric error was reduced to less than 3% using the conventional method and to less than 1% using the pixel-based unfolding curve. In the case of step-wise doses, the average difference of 16% with the MC calculation was reduced to 1% by using the correction method in this study. Consequently, the accuracy of dose computation algorithms in a TPS can be evaluated with the developed LEGO-type solid phantom, small-field dosimetry, and the correction method for the non-uniform response of scanners. It is also recognized that the developed hardware and software, which can be used for QA procedures, are very reliable and could be used as a reference for studies of other radiation therapies.
A new device for liver cancer biomarker detection with high accuracy
Directory of Open Access Journals (Sweden)
Shuaipeng Wang
2015-06-01
Full Text Available A novel cantilever array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced variation of the spring constant k can be dramatically reduced compared with that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of the adsorption-induced change in lever stiffness and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained with a fully antibody-immobilized cantilever sensor. These approaches will promote the real application of cantilever sensors in the early diagnosis of cancer.
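Resonant cantilever mass sensing of this kind rests on the relation f ∝ m^(-1/2): a small added mass lowers the resonance frequency proportionally. A first-order sketch with assumed numbers (it ignores the adsorption-induced stiffness change that the paper's analytical model corrects for):

```python
def added_mass(m_eff_pg, f0_hz, f_loaded_hz):
    """First-order mass estimate from a resonance shift. Since
    f = (1/2*pi) * sqrt(k/m), a small downward frequency shift gives
    delta_m ~= 2 * m_eff * (f0 - f) / f0. Illustrative values only."""
    return 2.0 * m_eff_pg * (f0_hz - f_loaded_hz) / f0_hz

# Assumed effective mass 100 pg, resonance shifting down by 10 Hz from 50 kHz
dm = added_mass(m_eff_pg=100.0, f0_hz=50000.0, f_loaded_hz=49990.0)  # 0.04 pg
```

If adsorption also changes the spring constant k, this simple formula misattributes the stiffness-induced frequency shift to mass, which is exactly the error the micro-cavity design and analytical model above are meant to suppress.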
International Nuclear Information System (INIS)
Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A
2013-01-01
Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now be made over a much broader frequency range.
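The capacitive coupling effect can be pictured with a single lumped shunt capacitance between the cable shield and ground; the correction then amounts to inverting the known parasitic network. A sketch with assumed values (the paper instead integrates discrete capacitances into the full electrical forward model):

```python
import cmath
import math

def corrected_impedance(z_meas, f_hz, c_parasitic):
    """Undo a capacitive leakage path modeled as a known shunt capacitance C.
    If Z_meas = Z / (1 + jwC*Z), then exactly Z = Z_meas / (1 - jwC*Z_meas)."""
    jwc = 1j * 2.0 * math.pi * f_hz * c_parasitic
    return z_meas / (1.0 - jwc * z_meas)

# Round trip: a weakly polarizable 100-ohm medium with a -5 mrad phase,
# measured at 1 kHz through an assumed 5 nF shield-to-ground capacitance
z_true = cmath.rect(100.0, -0.005)
f, C = 1000.0, 5e-9
z_meas = z_true / (1.0 + 1j * 2 * math.pi * f * C * z_true)
phase_err_mrad = (cmath.phase(z_meas) - cmath.phase(z_true)) * 1000.0
z_rec = corrected_impedance(z_meas, f, C)
```

Even this small assumed capacitance produces a phase error of a few mrad at 1 kHz, which is larger than the 0.8 mrad accuracy quoted above and illustrates why the correction is needed for weakly polarizable media.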
Cadastral Database Positional Accuracy Improvement
Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.
2017-10-01
Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. This actual position relates to the absolute position in a specific coordinate system and to the relation with neighborhood features. With the growth of spatial-based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely integrating both datasets will lead to a distortion of the relative geometry. The improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is then used as a benchmark to validate the results. It was found that the proposed technique is highly feasible for the positional accuracy improvement of legacy spatial datasets.
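The coordinate-based core of such an adjustment is a least-squares 2D similarity (Helmert) transformation fitting legacy points to reference points; the paper's angular-based LSA additionally uses bearing observations. A sketch of the coordinate-only variant with made-up tie points:

```python
import math

def helmert_2d(src, dst):
    """Least-squares 4-parameter similarity fitting src points to dst points:
    X = a*x - b*y + tx,  Y = b*x + a*y + ty  (a, b encode scale and rotation).
    Closed-form solution of the normal equations via centered coordinates."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    mX = sum(p[0] for p in dst) / n
    mY = sum(p[1] for p in dst) / n
    s = sxx = sxy = 0.0
    for (x, y), (X, Y) in zip(src, dst):
        xc, yc, Xc, Yc = x - mx, y - my, X - mX, Y - mY
        s += xc * xc + yc * yc
        sxx += xc * Xc + yc * Yc
        sxy += xc * Yc - yc * Xc
    a, b = sxx / s, sxy / s
    tx, ty = mX - a * mx + b * my, mY - b * mx - a * my
    return a, b, tx, ty

# Recover a known shift + 2-degree rotation + small scale from tie points
th, k = math.radians(2.0), 1.001
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
dst = [(k * math.cos(th) * x - k * math.sin(th) * y + 5.0,
        k * math.sin(th) * x + k * math.cos(th) * y - 3.0) for x, y in src]
a, b, tx, ty = helmert_2d(src, dst)
```

In a PAI campaign, the residuals of such a fit (and, in the angular variant, of the bearing observations) are what flag distortions in the legacy relative geometry.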
Pelties, Christian
2012-02-18
Accurate and efficient numerical methods to simulate dynamic earthquake rupture and wave propagation in complex media and complex fault geometries are needed to address fundamental questions in earthquake dynamics, to integrate seismic and geodetic data into emerging approaches for dynamic source inversion, and to generate realistic physics-based earthquake scenarios for hazard assessment. Modeling of spontaneous earthquake rupture and seismic wave propagation by a high-order discontinuous Galerkin (DG) method combined with an arbitrarily high-order derivatives (ADER) time integration method was introduced in two dimensions by de la Puente et al. (2009). The ADER-DG method enables high accuracy in space and time and discretization by unstructured meshes. Here we extend this method to three-dimensional dynamic rupture problems. The high geometrical flexibility provided by the usage of tetrahedral elements and the lack of spurious mesh reflections in the ADER-DG method allows the refinement of the mesh close to the fault to model the rupture dynamics adequately while concentrating computational resources only where needed. Moreover, ADER-DG does not generate spurious high-frequency perturbations on the fault and hence does not require artificial Kelvin-Voigt damping. We verify our three-dimensional implementation by comparing results of the SCEC TPV3 test problem with two well-established numerical methods, finite differences, and spectral boundary integral. Furthermore, a convergence study is presented to demonstrate the systematic consistency of the method. To illustrate the capabilities of the high-order accurate ADER-DG scheme on unstructured meshes, we simulate an earthquake scenario, inspired by the 1992 Landers earthquake, that includes curved faults, fault branches, and surface topography. Copyright 2012 by the American Geophysical Union.
High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer
2002-01-01
The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nucleosynthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency ν_c = qB/(2πm).
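Given a measured cyclotron frequency and a calibrated magnetic field, the ion mass follows directly from ν_c = qB/(2πm). A sketch with illustrative values (in practice the field is calibrated against a reference ion of well-known mass rather than used as an absolute number):

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge in C (exact, SI 2019)
U_KG = 1.66053906660e-27    # atomic mass unit in kg

def mass_u(nu_c_hz, charge_state, b_tesla):
    """Ion mass in atomic mass units from the measured cyclotron
    frequency nu_c = q*B / (2*pi*m)."""
    return charge_state * E_CHARGE * b_tesla / (2.0 * math.pi * nu_c_hz) / U_KG

# Illustrative: a singly charged ion in a 7 T field at nu_c = 1.264 MHz
m = mass_u(1.264e6, 1, 7.0)  # close to 85 u
```

The high achievable accuracy comes from the fact that a frequency is the quantity physicists can measure best; relative mass uncertainties below 1e-7 are routine with this technique.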
Accuracy optimization of high-speed AFM measurements using Design of Experiments
DEFF Research Database (Denmark)
Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard
2010-01-01
Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement results are influenced by a number of scan settings parameters defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of information from collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied.
Wang, Liping; Jiang, Yao; Li, Tiemin
2014-09-01
Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.
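The link deformations driving these kinematic errors can be pictured with the elementary cantilever relation δ = FL³/(3EI) for a slender link under a transverse end load; this is a toy stand-in for the paper's full Newton-Euler deformation model, and the numbers are assumed:

```python
def tip_deflection(force_n, length_m, e_pa, i_m4):
    """Elastic tip deflection of a slender link loaded transversely at its
    end: delta = F * L^3 / (3 * E * I). F in N, L in m, E in Pa, I in m^4."""
    return force_n * length_m ** 3 / (3.0 * e_pa * i_m4)

# Assumed steel link: L = 0.5 m, E = 210 GPa, I = 2e-8 m^4,
# carrying a 100 N transverse inertial load (high acceleration case)
d = tip_deflection(100.0, 0.5, 210e9, 2.0e-8)  # on the order of 1 mm
```

Because inertial loads grow with acceleration and external loads act along fixed directions, a millimeter-scale elastic deflection like this can dominate the end-effector error budget when the load aligns with the machine's worst stiffness direction, as the study observes.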
Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics
小野, 貴継; 井上, 弘士; 村上, 和彰
2007-01-01
This paper proposes a fast and accurate memory architecture simulation technique. Memory architecture design commonly begins with trace-driven simulation; however, as the design space expands, the evaluation time increases. Fast simulation can be achieved by reducing the trace size, but this reduces simulation accuracy. Our approach reduces the simulation time while maintaining the accuracy of the simulation results. In order to evaluate the validity of the proposed techniq...
International Nuclear Information System (INIS)
Van Geemert, Rene
2008-01-01
For satisfaction of future global customer needs, dedicated efforts are being coordinated internationally and pursued continuously at AREVA NP. The currently ongoing CONVERGENCE project is committed to the development of the ARCADIA® next-generation core simulation software package. ARCADIA® will be put to global use by all AREVA NP business regions, for the entire spectrum of core design processes, licensing computations and safety studies. As part of the currently ongoing trend towards more sophisticated neutronics methodologies, an SP3 nodal transport concept has been developed for ARTEMIS, which is the steady-state and transient core simulation part of ARCADIA®. For enabling a high computational performance, the SPN calculations are accelerated by applying multi-level coarse mesh re-balancing. In the current implementation, SP3 is about 1.4 times as expensive computationally as SP1 (diffusion). The developed SP3 solution concept is foreseen as the future computational workhorse for many-group 3D pin-by-pin full core computations by ARCADIA®. With the entire numerical workload being highly parallelizable through domain decomposition techniques, associated CPU-time requirements that adhere to the efficiency needs of the nuclear industry can be expected to become feasible in the near future. The accuracy enhancement obtainable by using SP3 instead of SP1 has been verified by a detailed comparison of ARTEMIS 16-group pin-by-pin SPN results with KAERI's DeCART reference results for the 2D pin-by-pin Purdue UO2/MOX benchmark. This article presents the accuracy enhancement verification and quantifies the achieved ARTEMIS-SP3 computational performance for a number of 2D and 3D multi-group and multi-box (up to pin-by-pin) core computations. (authors)
Simulant Basis for the Standard High Solids Vessel Design
Energy Technology Data Exchange (ETDEWEB)
Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2016-09-01
This document provides the requirements for a test simulant suitable for demonstrating the mixing requirements of the Standard High Solids Vessel Design (SHSVD). This simulant has not been evaluated for other purposes such as gas retention and release or erosion. The objective of this work is to provide an underpinning for the simulant properties based on actual waste characterization.
Nano-level instrumentation for analyzing the dynamic accuracy of a rolling element bearing
International Nuclear Information System (INIS)
Yang, Z.; Hong, J.; Zhang, J.; Wang, M. Y.; Zhu, Y.
2013-01-01
The rotational performance of high-precision rolling bearings is fundamental to the overall accuracy of complex mechanical systems. A nano-level instrument was developed to analyze the rotational accuracy of high-precision machine-tool bearings under working conditions. In this instrument, a high-precision (error motion < 0.15 μm) and high-stiffness (2600 N axial loading capacity) aerostatic spindle spins the test bearing. Operating conditions can be simulated effectively because of the large axial loading capacity. An air cylinder, controlled by a proportional pressure regulator, drives an air bearing that applies non-contact, precisely controlled axial forces; apart from the axial loading and the rotation constraint, the five remaining degrees of freedom are completely unconstrained and uninfluenced by the instrument's structure. Dual capacitive displacement sensors with 10 nm resolution measure the error motion of the spindle using a double-probe error separation method. This enables the spindle's error motion to be separated from the measurement results of the test bearing, which are obtained with two orthogonal laser displacement sensors with 5 nm resolution. Finally, a Lissajous figure is used to evaluate the non-repetitive run-out (NRRO) of the bearing at different axial forces and speeds. The measurements at various axial loadings and speeds showed that the standard deviations of repeatability and accuracy were less than 1% and 2%, respectively. Future studies will analyze the relationship between geometrical errors and NRRO, such as ball diameter differences and geometrical errors in the grooves of the rings.
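The NRRO evaluation can be sketched numerically: averaging the displacement signal synchronously with the rotation isolates the repetitive run-out, and the residual gives the non-repetitive part. The following is a minimal illustration with synthetic data, not the instrument's actual processing; the paper's Lissajous evaluation is replaced here by a simple peak-to-peak residual.

```python
import numpy as np

# Hedged sketch: estimating non-repetitive run-out (NRRO) from displacement
# samples taken over several revolutions (names and data are illustrative).

def nrro(displacement, samples_per_rev):
    """Split run-out into the revolution-synchronous average (repetitive part)
    and the residual; report NRRO as the peak-to-peak residual."""
    revs = displacement.reshape(-1, samples_per_rev)
    synchronous = revs.mean(axis=0)            # repetitive run-out
    residual = revs - synchronous              # non-repetitive part
    return residual.max() - residual.min()

theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
rro = 2.0e-6 * np.sin(theta)                   # 2 um repetitive component
rng = np.random.default_rng(0)
signal = np.concatenate([rro + 50e-9 * rng.standard_normal(256) for _ in range(20)])
print(nrro(signal, 256))                       # on the order of the 50 nm noise
```

A purely repetitive signal yields zero NRRO under this definition, which is the point of the synchronous averaging: spindle and bearing geometry errors repeat each revolution, while the non-repetitive residue reflects effects such as ball diameter variation.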
International Nuclear Information System (INIS)
Kryeziu, D.
2006-09-01
The aim of this work was to test and validate the Monte Carlo (MC) ionization chamber simulation method for calculating the activity of radioactive solutions. This is required when no, or insufficient, experimental calibration figures are available, as well as to improve the accuracy of activity measurements for other radionuclides. Well-type (4πγ) ISOCAL IV ionization chambers (IC) are widely used in many national standards laboratories around the world. As secondary standard measuring systems, these radionuclide calibrators serve to maintain measurement consistency checks and to ensure the quality of standards disseminated to users for a wide range of radionuclides, many of which are of special interest in nuclear medicine as well as in different applications of radionuclide metrology. For the studied radionuclides, the calibration figures (efficiencies) and their respective volume correction factors are determined using the PENELOPE MC computer code system. The ISOCAL IV IC, filled with nitrogen gas at approximately 1 MPa, is simulated. The simulated models of the chamber are designed by means of reduced quadric equations, applying the appropriate mathematical transformations. The simulations are done for various container geometries of the standard solution: i) a sealed Jena glass 5 ml PTB standard ampoule, ii) a 10 ml (P6) vial and iii) a 10 R Schott Type 1+ vial. Simulation of the ISOCAL IV IC is explained. The effect of density variation of the nitrogen filling gas on the sensitivity of the chamber is investigated. The code is also used to examine the effects of using lead and copper shields, and to evaluate the sensitivity of the chamber to electrons and positrons. The Monte Carlo simulation method has been validated by comparing the simulated calibration figures with experimental ones available from the National Physical Laboratory (NPL), England, which are deduced from the absolute activity
Accuracy in tangential breast treatment set-up
International Nuclear Information System (INIS)
Tienhoven, G. van; Lanson, J.H.; Crabeels, D.; Heukelom, S.; Mijnheer, B.J.
1991-01-01
To test the accuracy and reproducibility of the tangential breast treatment set-up used in The Netherlands Cancer Institute, a portal imaging study was performed in 12 patients treated for early stage breast cancer. With an on-line electronic portal imaging device (EPID), images of each patient were obtained in several fractions and compared with simulator films and with each other. In 5 patients multiple images (on average 7) per fraction were obtained to evaluate set-up variations due to respiratory movement. The central lung distance (CLD) and other set-up parameters varied within 1 fraction by about 1 mm (1 SD). The average variation of these parameters between fractions was about 2 mm (1 SD). The differences between simulator and treatment set-up over all patients and all fractions were on average 2-3 mm for the central beam edge to skin distance and the CLD. It can be concluded that the tangential breast treatment set-up is very stable and reproducible and that respiration does not have a significant influence on the treatment volume. The EPID appears to be an adequate tool for studies of treatment set-up accuracy such as this. (author). 35 refs.; 2 figs.; 3 tabs
Social Power Increases Interoceptive Accuracy
Directory of Open Access Journals (Sweden)
Mehrad Moeini-Jazani
2017-08-01
Full Text Available Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research.
Simulating high-frequency seismograms in complicated media: A spectral approach
International Nuclear Information System (INIS)
Orrey, J.L.; Archambeau, C.B.
1993-01-01
The main attraction of using a spectral method instead of a conventional finite difference or finite element technique for full-wavefield forward modeling in elastic media is the increased accuracy of a spectral approximation. While a finite difference method accurate to second order typically requires 8 to 10 computational grid points to resolve the smallest wavelengths on a 1-D grid, a spectral method that approximates the wavefield by trigonometric functions theoretically requires only 2 grid points per minimum wavelength and produces no numerical dispersion from the spatial discretization. The resultant savings in computer memory, which is very significant in 2 and 3 dimensions, allows for larger scale and/or higher frequency simulations
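The resolution advantage can be demonstrated in a few lines. The sketch below is our illustration, not the paper's elastic-wave code: it differentiates sin(x) on a coarse periodic grid, where the second-order finite difference shows a percent-level dispersion error while the FFT-based spectral derivative is exact to rounding.

```python
import numpy as np

# Illustrative comparison: error of a second-order central finite difference
# first derivative vs. an FFT-based spectral derivative of sin(x), sampled
# on a periodic grid with few points per wavelength.

def fd_derivative(u, dx):
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)    # 2nd-order central FD

def spectral_derivative(u, dx):
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)          # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

n = 16                        # only 16 points per 2*pi wavelength
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u, exact = np.sin(x), np.cos(x)
err_fd = np.max(np.abs(fd_derivative(u, x[1] - x[0]) - exact))
err_sp = np.max(np.abs(spectral_derivative(u, x[1] - x[0]) - exact))
```

For a band-limited field the spectral error stays at machine precision even near 2 points per wavelength, which is exactly the memory argument made above for 2-D and 3-D simulations.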
Methodology for GPS Synchronization Evaluation with High Accuracy
Li Zan; Braun Torsten; Dimitrova Desislava
2015-01-01
Clock synchronization in the order of nanoseconds is one of the critical factors for time based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper we are particularly interested in GPS based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...
Phenomenological reports diagnose accuracy of eyewitness identification decisions.
Palmer, Matthew A; Brewer, Neil; McKinnon, Anna C; Weber, Nathan
2010-02-01
This study investigated whether measuring the phenomenology of eyewitness identification decisions aids evaluation of their accuracy. Witnesses (N=502) viewed a simulated crime and attempted to identify two targets from lineups. A divided attention manipulation during encoding reduced the rate of remember (R) correct identifications, but not the rates of R foil identifications or know (K) judgments in the absence of recollection (i.e., K/[1-R]). Both RK judgments and recollection ratings (a novel measure of graded recollection) distinguished correct from incorrect positive identifications. However, only recollection ratings improved accuracy evaluation after identification confidence was taken into account. These results provide evidence that RK judgments for identification decisions function in a similar way as for recognition decisions; are consistent with the notion of graded recollection; and indicate that measures of phenomenology can enhance the evaluation of identification accuracy. Copyright 2009 Elsevier B.V. All rights reserved.
A high-accuracy optical linear algebra processor for finite element applications
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which could exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
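The "multiplication by digital convolution" mentioned above rests on a standard identity: the digit sequence of a product is the convolution of the factors' digit sequences, up to carry propagation. A hedged numerical sketch of that identity follows; this is our illustration of the principle, not the optical architecture itself.

```python
import numpy as np

# Sketch of the principle behind multiplication by digital convolution:
# the digit sequence of a product is the convolution of the factors' digit
# sequences, with carries handled implicitly by positional weighting.

def digits(n, base=10):
    """Digits of a non-negative integer, least-significant first."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out or [0]

def conv_multiply(a, b, base=10):
    raw = np.convolve(digits(a, base), digits(b, base))   # per-place digit products
    return sum(int(d) * base**place for place, d in enumerate(raw))

print(conv_multiply(1234, 5678))  # → 7006652
```

Encoding operands as small digits in a chosen base is what lets a low-dynamic-range analog convolver contribute to a high-accuracy product, since each convolution output stays small even when the full product needs 32 bits.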
Lievens, Filip; Patterson, Fiona
2011-01-01
In high-stakes selection among candidates with considerable domain-specific knowledge and experience, investigations of whether high-fidelity simulations (assessment centers; ACs) have incremental validity over low-fidelity simulations (situational judgment tests; SJTs) are lacking. Therefore, this article integrates research on the validity of…
Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.
2010-03-01
High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and the pressure-broadening effects that resulted from changes of the composition of synthetic vs. ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with the mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS is sufficiently stable to be used in flight without drying the air or calibrating in flight and the water corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
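The water-vapor correction described above has a simple functional form: dilution and pressure broadening make the wet-air reading a quadratic function of the reported water vapor. The sketch below shows only the shape of such a correction; the coefficients are illustrative placeholders, not the calibrated values derived in the paper.

```python
# Hedged sketch of the form of a CRDS water-vapor correction: dilution plus
# pressure broadening give a quadratic dependence on the reported water mole
# fraction. Coefficients a and b below are placeholders, not calibrated values.

def dry_mole_fraction(wet_ppm, h2o_percent, a=-0.012, b=-2.7e-4):
    """Convert a wet-air reading to a dry-air mole fraction:
    wet/dry = 1 + a*h + b*h^2, with h the water vapor in percent."""
    return wet_ppm / (1.0 + a * h2o_percent + b * h2o_percent**2)

# A 390 ppm wet reading at 2% water vapor corresponds to a higher dry value.
print(dry_mole_fraction(390.0, 2.0))
```

Because the correction is applied in software from the analyzer's own water measurement, no drying system or in-flight calibration gas is needed, which is the operational point the abstract makes.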
Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods
International Nuclear Information System (INIS)
Narita, Y.; Eberl, S.; Nakamura, T.
1996-01-01
Two independent scatter correction techniques, transmission-dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs. -7.2% in the myocardium, and -3.7% vs. -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
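The TEW estimate evaluated above is a closed-form expression: counts in two narrow windows flanking the photopeak approximate the scatter under the peak as a trapezoid. A minimal sketch with made-up counts follows; the window widths are typical 99mTc choices, not values taken from the paper.

```python
# Sketch of the triple-energy-window (TEW) scatter estimate: scatter in the
# photopeak window is approximated by a trapezoid spanned by two narrow
# flanking windows. Counts and widths below are illustrative.

def tew_primary(c_main, c_low, c_high, w_main, w_low, w_high):
    """Return (primary counts, scatter estimate) for the photopeak window."""
    scatter = (c_low / w_low + c_high / w_high) * w_main / 2.0
    return c_main - scatter, scatter

# 20% photopeak window for 99mTc (~126-154 keV, 28 keV wide) with 3 keV sub-windows
primary, scatter = tew_primary(c_main=10000, c_low=600, c_high=90,
                               w_main=28.0, w_low=3.0, w_high=3.0)
```

Because TEW subtracts noisy narrow-window counts pixel by pixel, it tends to amplify noise relative to convolution-based methods, consistent with the S/N comparison reported above.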
Direct numerical simulation of bluff-body-stabilized premixed flames
Arias, Paul G.; Lee, Bok Jik; Im, Hong G.
2014-01-01
are important in confined multicomponent reacting flows. Results show that the DNS with embedded boundaries can be extended to more complex geometries without loss of accuracy and the high fidelity simulation data can be used to develop and validate turbulence and combustion models for the design of practical combustion devices.
Computer simulation of bounded plasmas
International Nuclear Information System (INIS)
Lawson, W.S.
1987-01-01
The problems of simulating a one-dimensional bounded plasma system using particles in a gridded space are systematically explored and solutions to them are given. Such problems include the injection of particles at the boundaries, the solution of Poisson's equation, and the inclusion of an external circuit between the confining boundaries. A recently discovered artificial cooling effect is explained as being a side-effect of quiet injection, and its potential for causing serious but subtle errors in bounded simulation is noted. The methods described in the first part of the thesis are then applied to the simulation of an extension of the Pierce diode problem, specifically a Pierce diode modified by an external circuit between the electrodes. The results of these simulations agree to high accuracy with theory when a theory exists, and also show some interesting chaotic behavior in certain parameter regimes. The chaotic behavior is described in detail
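One building block of such a bounded simulation is the field solve with the electrode potentials as boundary conditions. Below is a minimal, hedged sketch of Poisson's equation on a bounded 1-D grid; a dense solve is used for clarity, where a production particle code would use a tridiagonal solver, and the grid and charge values are illustrative.

```python
import numpy as np

# Minimal sketch of one ingredient of a bounded 1-D particle simulation:
# solving Poisson's equation phi'' = -rho/eps0 on a grid with the electrode
# potentials as Dirichlet boundary conditions.

def solve_poisson_1d(rho, dx, phi_left, phi_right, eps0=8.854e-12):
    n = rho.size                       # interior grid points
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    b = -rho * dx**2 / eps0
    b[0] -= phi_left                   # fold boundary values into the RHS
    b[-1] -= phi_right
    return np.linalg.solve(A, b)

phi = solve_poisson_1d(np.zeros(99), dx=1e-3, phi_left=0.0, phi_right=100.0)
# With zero charge the potential is a straight line between the electrodes.
```

Coupling this solve to particle injection at the walls and to an external circuit equation between the electrodes is what distinguishes bounded simulation from the periodic case.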
ANALYSIS OF OPERATING INSTRUMENT LANDING SYSTEM ACCURACY UNDER SIMULATED CONDITIONS
Directory of Open Access Journals (Sweden)
Jerzy MERKISZ
2017-03-01
Full Text Available The instrument landing system (ILS) is the most popular landing aid in the world. It is a distance-and-angle support system for landing in reduced visibility; its task is the safe guidance of the aircraft along the approach path to the prescribed landing course. The aim of this study is to analyse the correctness of ILS operation under simulated conditions. The study was conducted using a CKAS MotionSim5 flight simulator in the Simulation Research Laboratory of the Institute of Combustion Engines and Transport at Poznan University of Technology. With the advancement of technical equipment, it was possible to check the operation of the system in various weather conditions. Studies have shown that the impact of fog, rain and snow on the correct operation of the system is marginal. Significant influence was observed, however, during landing in strong winds.
Development of Simulator for High-Speed Elevator System
Energy Technology Data Exchange (ETDEWEB)
Ryu, Hyung Min; Kim, Sung Jun; Sul, Seung Ki; Seok, Ki Riong [Seoul National University, Seoul(Korea); Kwon, Tae Seok [Hanyang University, Seoul(Korea); Kim, Ki Su [Konkuk University, Seoul(Korea); Shim, Young Seok [Inha University, incheon(Korea)
2002-02-01
This paper describes a dynamic load simulator for high-speed elevator systems, which can emulate a 3-mass system as well as an equivalent 1-mass system. In order to implement the equivalent inertia of an entire elevator system, conventional simulators have generally utilized a mechanical inertia (flywheel) with a large radius, which makes the entire system large and heavy. In addition, the mechanical inertia must be replaced each time another elevator system is to be tested. In this paper, dynamic load simulation methods using electrical inertia are presented, so that the volume and weight of the simulator system are greatly reduced and the inertia value can easily be adjusted in software. Experimental results show the feasibility of this simulator system. (author). 5 refs., 7 figs., 2 tabs.
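The electrical-inertia idea above replaces the flywheel with a torque command proportional to the measured angular acceleration, so the load motor makes the shaft "feel" like a larger inertia. A hedged one-line sketch of that relation follows; the gains and numbers are invented for illustration, not taken from the paper.

```python
# Hedged sketch of electrical inertia emulation: the load motor adds a torque
# proportional to measured acceleration, T = J_emulated * d(omega)/dt, so the
# shaft behaves as if a flywheel of inertia J_emulated were attached.

def inertia_emulation_torque(omega_now, omega_prev, dt, j_emulated):
    """Torque command mimicking an extra inertia (backward-difference accel)."""
    accel = (omega_now - omega_prev) / dt
    return j_emulated * accel

# Emulate 2.5 kg*m^2 during a speed ramp of 10 rad/s over 0.1 s (100 rad/s^2)
t_cmd = inertia_emulation_torque(110.0, 100.0, 0.1, 2.5)   # 250 N*m
```

Changing the emulated elevator then only requires changing `j_emulated` in software, which is the size and flexibility advantage the abstract claims over a physical flywheel.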
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests, but their specificity was poor at sample size n = 30. Using nonparametric methods (or parametric methods after Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
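The core risk the study quantifies, that a Gaussian-based RI is inaccurate when the parent population is actually skewed, can be illustrated with a small Monte Carlo. This is our sketch, not the study's code, and the population parameters are arbitrary.

```python
import numpy as np

# Illustrative Monte Carlo in the spirit of the study: draw small samples
# from a lognormal population and compare the parametric reference-interval
# upper limit (mean + 1.96 SD) with the true 97.5th percentile.

rng = np.random.default_rng(42)
population = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
true_low, true_high = np.percentile(population, [2.5, 97.5])

n, trials, bias_high = 30, 100, []
for _ in range(trials):
    sample = rng.choice(population, size=n, replace=False)
    upper = sample.mean() + 1.96 * sample.std(ddof=1)   # parametric upper limit
    bias_high.append(upper - true_high)

# On a right-skewed population the Gaussian upper limit is biased low on average.
print(np.mean(bias_high))
```

A normality test that fails to reject such samples at n = 30 therefore licenses a systematically short upper reference limit, which is why the abstract recommends robust alternatives or sample-size-dependent significance levels.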
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang
2011-03-01
We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient in handling the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics
1996-04-10
The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement made by GPS positioning alone. Through two rounds of marine observations towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out both positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods are examined through comparison among them, and the accuracy of the measured ship velocities is considered. In the DGPS measurement, both the translocation method and the interference positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in shallow sea areas of less than 350 m depth. As the result of these marine observations, it was confirmed that accuracy equivalent to that of direct measurement by bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.
3D finite element simulation of optical modes in VCSELs
Rozova, M.; Pomplun, J.; Zschiedrich, L.; Schmidt, F.; Burger, S.
2011-01-01
We present a finite element method (FEM) solver for computation of optical resonance modes in VCSELs. We perform a convergence study and demonstrate that high accuracies for 3D setups can be attained on standard computers. We also demonstrate simulations of thermo-optical effects in VCSELs.
Measurement Accuracy Limitation Analysis on Synchrophasors
Energy Technology Data Exchange (ETDEWEB)
Zhao, Jiecheng [University of Tennessee (UT); Zhan, Lingwei [University of Tennessee (UT); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL); Qi, Hairong [University of Tennessee, Knoxville (UTK); Gracia, Jose R [ORNL; Ewing, Paul D [ORNL
2015-01-01
This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating status of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.
Design of DSP-based high-power digital solar array simulator
Zhang, Yang; Liu, Zhilong; Tong, Weichao; Feng, Jian; Ji, Yibo
2013-12-01
With the increase of global energy consumption, research on photovoltaic (PV) systems receives more and more attention, and research on digital high-power solar array simulators provides technical support for high-power grid-connected PV system research. This paper introduces a design scheme for a high-power digital solar array simulator based on the TMS320F28335. A DC-DC full-bridge topology is used in the system's main circuit. The switching frequency of the IGBTs is 25 kHz, the maximum output voltage is 900 V, and the maximum output current is 20 A. The simulator can store preset solar-panel I-V curves, each composed of 128 discrete points. While the system is running, the main-circuit voltage and current values are fed back to the DSP in real time by voltage and current sensors. Using an incremental PI algorithm, the DSP controls the simulator in a closed loop. Experimental data show that the simulator's output voltage and current follow the preset solar-panel I-V curve. Connected to a high-power inverter, the system forms a grid-connected PV system; the inverter can find the simulator's maximum power point, and the output power can be stabilized at the maximum power point (MPP).
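The control loop described above can be sketched as a table lookup on the stored 128-point curve plus an incremental PI update. This is a hedged reconstruction of the scheme only; the I-V shape, gains, and operating point below are invented for illustration and are not the paper's curve data or firmware.

```python
import numpy as np

# Hedged sketch of the simulator's control idea: look up the commanded current
# on a stored 128-point I-V curve for the measured voltage, then track it with
# an incremental PI step. Curve shape and gains are illustrative.

v_pts = np.linspace(0.0, 900.0, 128)               # stored curve: voltage grid
i_pts = 20.0 * (1.0 - (v_pts / 900.0) ** 8)        # illustrative I-V shape

def current_setpoint(v_measured):
    """Linear interpolation on the stored 128-point I-V table."""
    return float(np.interp(v_measured, v_pts, i_pts))

def incremental_pi(error, prev_error, kp=0.5, ki=0.1):
    """Incremental PI: returns the *change* in controller output."""
    return kp * (error - prev_error) + ki * error

setpoint = current_setpoint(450.0)     # near the flat part of the curve
delta = incremental_pi(setpoint - 18.0, setpoint - 18.5)
```

The incremental form outputs a change rather than an absolute command, which avoids integrator wind-up bookkeeping and suits a fixed-rate DSP loop fed by the sensor feedback described above.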
International Nuclear Information System (INIS)
Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi
1990-01-01
Validation tests were made of the accuracy of cell calculation methods used in analyses of tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for lattices taken from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations, and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (a hexagonal/cylindrical cell) and the boundary condition (mirror/white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous-energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation became worse when the neutron spectrum became harder. It was also concluded that the cylindrical cell model with the white boundary condition was not as suitable for MOX-fuelled lattices as for UO2-fuelled lattices. (author)
Bauer, Jan Stefan; Noël, Peter Benjamin; Vollhardt, Christiane; Much, Daniela; Degirmenci, Saliha; Brunner, Stefanie; Rummeny, Ernst Josef; Hauner, Hans
2015-01-01
MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. MR images of ten phantoms simulating subcutaneous fat of an infant's torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy.
Directory of Open Access Journals (Sweden)
Jan Stefan Bauer
Full Text Available MR might be well suited to obtain reproducible and accurate measures of fat tissues in infants. This study evaluates MR-measurements of adipose tissue in young infants in vitro and in vivo. MR images of ten phantoms simulating subcutaneous fat of an infant's torso were obtained using a 1.5T MR scanner with and without simulated breathing. Scans consisted of a cartesian water-suppression turbo spin echo (wsTSE) sequence, and a PROPELLER wsTSE sequence. Fat volume was quantified directly and by MR imaging using k-means clustering and threshold-based segmentation procedures to calculate accuracy in vitro. Whole body MR was obtained in sleeping young infants (average age 67±30 days). This study was approved by the local review board. All parents gave written informed consent. To obtain reproducibility in vivo, cartesian and PROPELLER wsTSE sequences were repeated in seven and four young infants, respectively. Overall, 21 repetitions were performed for the cartesian sequence and 13 repetitions for the PROPELLER sequence. In vitro accuracy errors depended on the chosen segmentation procedure, ranging from 5.4% to 76%, while the sequence showed no significant influence. Artificial breathing increased the minimal accuracy error to 9.1%. In vivo reproducibility errors for total fat volume of the sleeping infants ranged from 2.6% to 3.4%. Neither segmentation nor sequence significantly influenced reproducibility. With both cartesian and PROPELLER sequences an accurate and reproducible measure of body fat was achieved. Adequate segmentation was mandatory for high accuracy.
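The segmentation step that dominated the accuracy error can be sketched with a toy k-means intensity clustering: split voxel intensities into two clusters and count the brighter cluster as fat. The synthetic intensities below are an assumption for illustration, not the study's image data or pipeline.

```python
import numpy as np

def kmeans_fat_fraction(intensities, iters=20):
    """Two-cluster 1-D k-means on voxel intensities; brighter cluster = fat."""
    x = np.asarray(intensities, dtype=float)
    c = np.array([x.min(), x.max()])                       # initial centroids
    for _ in range(iters):
        labels = np.abs(x[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()               # update centroids
    fat = labels == int(np.argmax(c))                      # fat is hyperintense on wsTSE
    return fat.mean()                                      # fat volume fraction

rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(100, 10, 800),         # lean-tissue signal (toy)
                         rng.normal(400, 20, 200)])        # fat signal (toy)
```

On well-separated synthetic intensities the recovered fat fraction matches the ground truth of 20%; the study's point is that with realistic data the choice of segmentation drives accuracy far more than the pulse sequence does.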
Cluster computing software for GATE simulations
International Nuclear Information System (INIS)
Beenhouwer, Jan de; Staelens, Steven; Kruecker, Dirk; Ferrer, Ludovic; D'Asseler, Yves; Lemahieu, Ignace; Rannou, Fernando R.
2007-01-01
Geometry and tracking (GEANT4) is a Monte Carlo package designed for high energy physics experiments. It is used as the basis layer for Monte Carlo simulations of nuclear medicine acquisition systems in GEANT4 Application for Tomographic Emission (GATE). GATE allows the user to realistically model experiments using accurate physics models and time synchronization for detector movement through a script language contained in a macro file. The downside of this high accuracy is long computation time. This paper describes a platform independent computing approach for running GATE simulations on a cluster of computers in order to reduce the overall simulation time. Our software automatically creates fully resolved, nonparametrized macros accompanied with an on-the-fly generated cluster specific submit file used to launch the simulations. The scalability of GATE simulations on a cluster is investigated for two imaging modalities, positron emission tomography (PET) and single photon emission computed tomography (SPECT). Due to a higher sensitivity, PET simulations are characterized by relatively high data output rates that create rather large output files. SPECT simulations, on the other hand, have lower data output rates but require a long collimator setup time. Both of these characteristics hamper scalability as a function of the number of CPUs. The scalability of PET simulations is improved here by the development of a fast output merger. The scalability of SPECT simulations is improved by greatly reducing the collimator setup time. Accordingly, these two new developments result in higher scalability for both PET and SPECT simulations and reduce the computation time to more practical values
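The job-splitting idea can be sketched as dividing the simulated acquisition window into contiguous per-CPU time slices, each written into a fully resolved macro. The two time commands are standard GATE macro commands, but the output file naming and the template are illustrative assumptions, not the actual generated files.

```python
def split_time(t_start, t_stop, n_jobs):
    """Divide the simulated acquisition window into contiguous job slices."""
    dt = (t_stop - t_start) / n_jobs
    return [(t_start + i * dt, t_start + (i + 1) * dt) for i in range(n_jobs)]

def macro_for(job_id, t0, t1):
    """One fully resolved, non-parametrized macro fragment per job (sketch)."""
    return (f"/gate/application/setTimeStart {t0:.3f} s\n"
            f"/gate/application/setTimeStop  {t1:.3f} s\n"
            f"/gate/output/root/setFileName job{job_id:03d}\n")

slices = split_time(0.0, 60.0, 4)     # e.g. a 60 s acquisition on 4 CPUs
macros = [macro_for(i, t0, t1) for i, (t0, t1) in enumerate(slices)]
```

After the jobs finish, the per-job outputs must be combined; the fast output merger developed in the paper addresses exactly that step for the high PET data rates.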
High-Order Approximation of Chromatographic Models using a Nodal Discontinuous Galerkin Approach
DEFF Research Database (Denmark)
Meyer, Kristian; Huusom, Jakob Kjøbsted; Abildskov, Jens
2018-01-01
by Javeed et al. (2011a,b, 2013) with an efficient quadrature-free implementation. The framework is used to simulate linear and non-linear multicomponent chromatographic systems. The results confirm arbitrary high-order accuracy and demonstrate the potential for accuracy and speed-up gains obtainable...
Simulant Basis for the Standard High Solids Vessel Design
Energy Technology Data Exchange (ETDEWEB)
Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fiskum, Sandra K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suffield, Sarah R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Daniel, Richard C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gauglitz, Phillip A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wells, Beric E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2017-09-30
The Waste Treatment and Immobilization Plant (WTP) is working to develop a Standard High Solids Vessel Design (SHSVD) process vessel. To support testing of this new design, WTP engineering staff requested that a Newtonian simulant and a non-Newtonian simulant be developed that would represent the Most Adverse Design Conditions (in development) with respect to mixing performance as specified by WTP. The majority of the simulant requirements are specified in 24590-PTF-RPT-PE-16-001, Rev. 0. The first step in this process is to develop the basis for these simulants. This document describes the basis for the properties of these two simulant types. The simulant recipes that meet this basis will be provided in a subsequent document.
Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores
DEFF Research Database (Denmark)
Vilhjálmsson, Bjarni J; Yang, Jian; Finucane, Hilary K
2015-01-01
to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred...
Impact of Simulation Technology on Die and Stamping Business
Stevens, Mark W.
2005-08-01
Over the last ten years, we have seen an explosion in the use of simulation-based techniques to improve the engineering, construction, and operation of GM production tools. The impact has been as profound as the overall switch to CAD/CAM from the old manual design and construction methods. The changeover to N/C machining from duplicating milling machines brought advances in accuracy and speed to our construction activity. It also brought significant reductions in fitting sculptured surfaces. Changing over to CAD design brought similar advances in accuracy, and today's use of solid modeling has enhanced that accuracy gain while finally leading to the reduction in lead time and cost through the development of parametric techniques. Elimination of paper drawings for die design, along with the process of blueprinting and distribution, provided the savings required to install high capacity computer servers, high-speed data transmission lines and integrated networks. These historic changes in the application of CAE technology in manufacturing engineering paved the way for the implementation of simulation to all aspects of our business. The benefits are being realized now, and the future holds even greater promise as the simulation techniques mature and expand. Every new line of dies is verified prior to casting for interference free operation. Sheet metal forming simulation validates the material flow, eliminating the high costs of physical experimentation dependent on trial and error methods of the past. Integrated forming simulation and die structural analysis and optimization has led to a reduction in die size and weight on the order of 30% or more. The latest techniques in factory simulation enable analysis of automated press lines, including all stamping operations with corresponding automation. This leads to manufacturing lines capable of running at higher levels of throughput, with actual results providing the capability of two or more additional strokes per
Impact of different conditions on accuracy of five rules for principal components retention
Directory of Open Access Journals (Sweden)
Zorić Aleksandar
2013-01-01
Full Text Available Polemics about criteria for nontrivial principal components are still present in the literature. A common finding of many papers is that the most frequently used Guttman-Kaiser criterion has very poor performance. In the last three years some new criteria have been proposed. In this Monte Carlo experiment we aimed to investigate the impact that sample size, number of analyzed variables, number of supposed factors and proportion of error variance have on the accuracy of the analyzed criteria for principal components retention. We compared the following criteria: Bartlett's χ2 test, Horn's Parallel Analysis, the Guttman-Kaiser eigenvalue-over-one rule, Velicer's MAP and CHull, originally proposed by Ceulemans & Kiers. Factors were systematically combined, resulting in 690 different combinations. A total of 138,000 simulations were performed. A novelty in this research is the systematic variation of the error variance. The performed simulations showed that, in favorable research conditions, all analyzed criteria work properly. Bartlett's and Horn's criteria were robust in most of the analyzed situations. Velicer's MAP had the best accuracy in situations with a small number of subjects and a high number of variables. The results confirm earlier findings that the Guttman-Kaiser criterion has the worst performance.
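Of the compared criteria, Horn's Parallel Analysis is easy to sketch: retain components whose eigenvalues exceed the mean eigenvalues obtained from random data of the same shape. This simplified version (mean reference, no percentile cut-off) and the synthetic two-factor data are illustrations, not the authors' simulation code.

```python
import numpy as np

def parallel_analysis(data, n_sims=50, seed=0):
    """Number of principal components retained by Horn's criterion (simplified)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]  # observed, descending
    ref = np.zeros(p)
    for _ in range(n_sims):
        noise = rng.standard_normal((n, p))
        ref += np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    ref /= n_sims                                  # mean random-data eigenvalues
    return int(np.sum(obs > ref))                  # components beating the reference

# Synthetic check: two latent factors, three noisy indicator variables each.
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 2))
X = np.column_stack([f[:, 0] + 0.3 * rng.standard_normal(300) for _ in range(3)] +
                    [f[:, 1] + 0.3 * rng.standard_normal(300) for _ in range(3)])
```

On this favorable synthetic case the criterion recovers the two supposed factors, consistent with the finding that all criteria behave well under favorable conditions.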
Energy Technology Data Exchange (ETDEWEB)
Franz, A., LLNL
1998-02-17
The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of current research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear partial differential equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead necessary to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.
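The stiffness problem motivating the thesis is easy to demonstrate: explicit Euler applied to the model equation y' = -λy is stable only for steps h < 2/λ, so a single fast rate constant forces the whole integration onto tiny steps. This toy model illustrates only the motivation, not the preconditioned scheme itself.

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Explicit Euler for y' = -lam * y; stable only when h * lam < 2."""
    y = y0
    for _ in range(steps):
        y += h * (-lam * y)       # y_{n+1} = (1 - h*lam) * y_n
    return y
```

With a stiff rate λ = 1000, a step of h = 0.0005 decays cleanly (h·λ = 0.5 < 2), while h = 0.003 (h·λ = 3) blows up exponentially; an implicit or, as here, preconditioned-explicit scheme is what removes this constraint.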
Present Status of GNF New Nodal Simulator
International Nuclear Information System (INIS)
Iwamoto, T.; Tamitani, M.; Moore, B.
2001-01-01
This paper presents core simulator consolidation work done at Global Nuclear Fuel (GNF). The unified simulator needs to supersede the capabilities of the past simulator packages from the original GNF partners: GE, Hitachi, and Toshiba. At the same time, an effort is being made to produce a simulation package that will be a state-of-the-art analysis tool when released, in terms of both physics solution methodology and functionality. The core simulator will be capable and qualified for (a) high-energy cycles in the U.S. market, (b) mixed-oxide (MOX) introduction in Japan, and (c) high-power-density plants in Europe, etc. The unification of the lattice physics code is also in progress, based on a transport model with collision probability methods. The AETNA core simulator is built upon the PANAC11 software base. The goal is to essentially replace the 1.5-energy-group model with a higher-order multigroup nonlinear nodal solution capable of the required modeling fidelity, while keeping highly automated library generation as well as functionality. All required interfaces to PANAC11 will be preserved, which minimizes the impact on users and process automation. Preliminary results show statistical accuracy improvement over the 1.5-group model.
International Nuclear Information System (INIS)
Wen, Chenyang; He, Shengyang; Hu, Peida; Bu, Changgen
2017-01-01
Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost, two-axis, rotating platform in terms of attitude accuracy. (paper)
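As a minimal sketch of the kind of static attitude reference the platform's inclinometer provides, pitch and roll can be recovered from a measured gravity (accelerometer) vector; the AHRS under test would be compared against such angles. The sign convention below is one common aerospace choice, assumed for illustration.

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Static pitch and roll (radians) from accelerometer specific-force axes."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

A level platform (gravity entirely on the z axis) gives zero pitch and roll, and tilting the gravity vector into the y-z plane reads back as pure roll; the paper's compensation method then removes the residual misalignment between such a reference and the AHRS axes.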
Directory of Open Access Journals (Sweden)
Francisco J Valverde-Albacete
Full Text Available The most widely used measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficiently information is transmitted from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
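The information-theoretic quantities underlying the entropy triangle can be sketched from a contingency matrix: input entropy H(X), output entropy H(Y), and their mutual information I(X;Y). The paper's EMA and NIT build on these quantities; their exact definitions are not reproduced here, so this stops at I(X;Y).

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mutual_information(cm):
    """I(X;Y) in bits from a contingency (confusion) matrix of counts."""
    p = np.asarray(cm, dtype=float)
    p = p / p.sum()                             # joint distribution
    px, py = p.sum(axis=1), p.sum(axis=0)       # marginals: true / predicted
    return entropy(px) + entropy(py) - entropy(p.ravel())

# The accuracy paradox in miniature: a majority-class classifier reaches
# 90% accuracy yet transfers zero information about the true class.
degenerate = np.array([[90, 0], [10, 0]])
perfect = np.array([[50, 0], [0, 50]])
```

The degenerate classifier's zero mutual information is exactly the "cheating by specialization" the NIT factor is designed to penalize, even though plain accuracy rewards it.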
The Accuracy and Bias of Single-Step Genomic Prediction for Populations Under Selection
Directory of Open Access Journals (Sweden)
Wan-Ling Hsu
2017-08-01
Full Text Available In single-step analyses, missing genotypes are explicitly or implicitly imputed, and this requires centering the observed genotypes using the means of the unselected founders. If genotypes are only available for selected individuals, centering on the unselected founder mean is not straightforward. Here, computer simulation is used to study an alternative analysis that does not require centering genotypes but fits the mean μg of unselected individuals as a fixed effect. Starting with observed diplotypes from 721 cattle, a five-generation population was simulated with sire selection to produce 40,000 individuals with phenotypes, of which the 1000 sires had genotypes. The next generation of 8000 genotyped individuals was used for validation. Evaluations were undertaken with (J) or without (N) μg when marker covariates were not centered, and with (JC) or without (C) μg when all observed and imputed marker covariates were centered. Centering did not influence accuracy of genomic prediction, but fitting μg did. Accuracies were improved when the panel comprised only quantitative trait loci (QTL): models JC and J had accuracies of 99.4%, whereas models C and N had accuracies of 90.2%. When only markers were in the panel, the 4 models had accuracies of 80.4%. In panels that included QTL, fitting μg in the model improved accuracy, but had little impact when the panel contained only markers. In populations undergoing selection, fitting μg in the model is recommended to avoid bias and reduction in prediction accuracy due to selection.
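The centering step at the heart of the comparison can be sketched directly: 0/1/2 genotype codes are centered by twice the allele frequency of the unselected founders, the very quantity that becomes unavailable when only selected individuals are genotyped. The genotype matrix and frequencies below are hypothetical values for illustration.

```python
import numpy as np

def center_genotypes(M, founder_freq):
    """Center 0/1/2 marker codes by 2p, with p the founder allele frequency."""
    return np.asarray(M, dtype=float) - 2.0 * np.asarray(founder_freq)

M = np.array([[0, 2, 1],
              [1, 2, 0],
              [2, 1, 1]])             # 3 individuals x 3 markers (hypothetical)
p = np.array([0.5, 0.8, 0.3])         # hypothetical unselected-founder frequencies
W = center_genotypes(M, p)
```

When p cannot be estimated from unselected founders, the paper's alternative is to leave the codes uncentered and absorb the offset by fitting μg as a fixed effect in the model.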
The shared neural basis of empathy and facial imitation accuracy.
Braadbaart, L; de Grauw, H; Perrett, D I; Waiter, G D; Williams, J H G
2014-01-01
Empathy involves experiencing emotion vicariously, and understanding the reasons for those emotions. It may be served partly by a motor simulation function, and therefore share a neural basis with imitation (as opposed to mimicry), as both involve sensorimotor representations of intentions based on perceptions of others' actions. We recently showed a correlation between imitation accuracy and Empathy Quotient (EQ) using a facial imitation task and hypothesised that this relationship would be mediated by the human mirror neuron system. During functional Magnetic Resonance Imaging (fMRI), 20 adults observed novel 'blends' of facial emotional expressions. According to instruction, they either imitated (i.e. matched) the expressions or executed alternative, pre-prescribed mismatched actions as control. Outside the scanner we replicated the association between imitation accuracy and EQ. During fMRI, activity was greater during mismatch compared to imitation, particularly in the bilateral insula. Activity during imitation correlated with EQ in somatosensory cortex, intraparietal sulcus and premotor cortex. Imitation accuracy correlated with activity in insula and areas serving motor control. Overlapping voxels for the accuracy and EQ correlations occurred in premotor cortex. We suggest that both empathy and facial imitation rely on formation of action plans (or a simulation of others' intentions) in the premotor cortex, in connection with representations of emotional expressions based in the somatosensory cortex. In addition, the insula may play a key role in the social regulation of facial expression. © 2013.
Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding.
Ould Estaghvirou, Sidi Boubacar; Ogutu, Joseph O; Schulz-Streeck, Torben; Knaak, Carsten; Ouzunova, Milena; Gordillo, Andres; Piepho, Hans-Peter
2013-12-06
In genomic prediction, an important measure of accuracy is the correlation between the predicted and the true breeding values. Direct computation of this quantity for real datasets is not possible, because the true breeding value is unknown. Instead, the correlation between the predicted breeding values and the observed phenotypic values, called predictive ability, is often computed. In order to indirectly estimate predictive accuracy, this latter correlation is usually divided by an estimate of the square root of heritability. In this study we use simulation to evaluate estimates of predictive accuracy for seven methods, four (1 to 4) of which use an estimate of heritability to divide predictive ability computed by cross-validation. Between them the seven methods cover balanced and unbalanced datasets as well as correlated and uncorrelated genotypes. We propose one new indirect method (4) and two direct methods (5 and 6) for estimating predictive accuracy and compare their performances and those of four other existing approaches (three indirect (1 to 3) and one direct (7)) with simulated true predictive accuracy as the benchmark and with each other. The size of the estimated genetic variance and hence heritability exerted the strongest influence on the variation in the estimated predictive accuracy. Increasing the number of genotypes considerably increases the time required to compute predictive accuracy by all the seven methods, most notably for the five methods that require cross-validation (Methods 1, 2, 3, 4 and 6). A new method that we propose (Method 5) and an existing method (Method 7) used in animal breeding programs were the fastest and gave the least biased, most precise and stable estimates of predictive accuracy. Of the methods that use cross-validation Methods 4 and 6 were often the best. The estimated genetic variance and the number of genotypes had the greatest influence on predictive accuracy. Methods 5 and 7 were the fastest and produced the least
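The indirect estimator shared by Methods 1 to 4 can be written down directly: predictive ability, the correlation between predictions and phenotypes, divided by the square root of heritability approximates the correlation with the (normally unobservable) true breeding values. The toy simulation below checks that identity under simple assumptions; it is not the authors' study design.

```python
import numpy as np

def predictive_accuracy(y_pred, y_pheno, h2):
    """Indirect accuracy estimate: predictive ability scaled by 1/sqrt(h2)."""
    r = np.corrcoef(y_pred, y_pheno)[0, 1]     # predictive ability
    return r / np.sqrt(h2)

rng = np.random.default_rng(3)
n, h2 = 5000, 0.4
tbv = rng.standard_normal(n)                               # true breeding values
pheno = tbv + np.sqrt((1 - h2) / h2) * rng.standard_normal(n)   # heritability h2
pred = tbv + 0.5 * rng.standard_normal(n)                  # imperfect predictor

est = predictive_accuracy(pred, pheno, h2)                 # computable from data
true_acc = np.corrcoef(pred, tbv)[0, 1]                    # needs the hidden truth
```

The match between the two quantities here is what makes heritability estimation so influential: any error in the h2 estimate propagates straight into the accuracy estimate, which is the sensitivity the paper reports.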
Energy Technology Data Exchange (ETDEWEB)
Wang, Carolyn L., E-mail: wangcl@uw.edu [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Schopp, Jennifer G.; Kani, Kimia [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States); Petscavage-Thomas, Jonelle M. [Penn State Hershey Medical Center, Department of Radiology, 500 University Drive, Hershey, PA 17033 (United States); Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H. [Department of Radiology, University of Washington, Box 357115, 1959 NE Pacific Street, Seattle, WA 98195-7115 (United States)
2013-12-01
Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB approved HIPAA compliant prospective study of 44 radiology residents, fellows and faculty who were randomized into either the high-fidelity hands-on simulation group or computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups’ written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups’ scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees that underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test as those trained with hands-on high-fidelity simulation.
International Nuclear Information System (INIS)
Wang, Carolyn L.; Schopp, Jennifer G.; Kani, Kimia; Petscavage-Thomas, Jonelle M.; Zaidi, Sadaf; Hippe, Dan S.; Paladin, Angelisa M.; Bush, William H.
2013-01-01
Purpose: We developed a computer-based interactive simulation program for teaching contrast reaction management to radiology trainees and compared its effectiveness to high-fidelity hands-on simulation training. Materials and methods: IRB approved HIPAA compliant prospective study of 44 radiology residents, fellows and faculty who were randomized into either the high-fidelity hands-on simulation group or computer-based simulation group. All participants took separate written tests prior to and immediately after their intervention. Four months later participants took a delayed written test and a hands-on high-fidelity severe contrast reaction scenario performance test graded on predefined critical actions. Results: There was no statistically significant difference between the computer and hands-on groups’ written pretest, immediate post-test, or delayed post-test scores (p > 0.6 for all). Both groups’ scores improved immediately following the intervention (p < 0.001). The delayed test scores 4 months later were still significantly higher than the pre-test scores (p ≤ 0.02). The computer group's performance was similar to the hands-on group on the severe contrast reaction simulation scenario test (p = 0.7). There were also no significant differences between the computer and hands-on groups in performance on the individual core competencies of contrast reaction management during the contrast reaction scenario. Conclusion: It is feasible to develop a computer-based interactive simulation program to teach contrast reaction management. Trainees that underwent computer-based simulation training scored similarly on written tests and on a hands-on high-fidelity severe contrast reaction scenario performance test as those trained with hands-on high-fidelity simulation
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations
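The study's approach can be sketched in miniature: corrupt simulated measurements with progressively larger errors, re-estimate a model parameter each time, and track the scatter of the estimates. A single scalar "derivative" in a linear model stands in for the GTM's stability and control derivatives; gains and noise levels are illustrative assumptions.

```python
import numpy as np

def estimate_slope(x, y):
    """Least-squares fit of y = theta * x (one stand-in 'derivative')."""
    return float(x @ y / (x @ x))

def monte_carlo(theta_true=-1.2, noise_sd=0.05, n_runs=200, seed=2):
    """Repeatedly corrupt measurements and re-estimate the parameter."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, 100)            # input (e.g. angle of attack)
    ests = []
    for _ in range(n_runs):
        y = theta_true * x + noise_sd * rng.standard_normal(x.size)
        ests.append(estimate_slope(x, y))
    return float(np.mean(ests)), float(np.std(ests))
```

Sweeping `noise_sd` (and, in the study, resolution and bias errors too) maps measurement quality onto estimate scatter, which is what grounds the recommended maximum allowable sensor errors.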
Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems
Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.
2015-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. Effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with high accuracy and precision, from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the ability for handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
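The timing arithmetic behind centimeter-scale depth resolution is straightforward: depth is the delay between the surface and bottom returns times the speed of light in water, halved for the round trip. The refractive index is the usual ~1.33 for water, assumed here for illustration; the instrument's actual calibration is not described in the abstract.

```python
C_VACUUM = 299_792_458.0       # speed of light in vacuum, m/s
N_WATER = 1.33                 # refractive index of water (assumed)

def depth_from_delay(dt_seconds):
    """Water depth from the surface-to-bottom return delay (round trip)."""
    return (C_VACUUM / N_WATER) * dt_seconds / 2.0

def delay_for_depth(depth_m):
    """Round-trip delay a bottom return at this depth would exhibit."""
    return 2.0 * depth_m * N_WATER / C_VACUUM
```

For 1 cm of water the round-trip delay is only about 89 ps, which makes clear why conventional waveform lidars struggle to separate surface and bottom returns at this scale and why 0.3-0.5 m has been the practical limit.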
Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth
Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus
2013-03-01
Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High-frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out for evaluation of spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.
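A back-of-envelope calculation shows why separating the 2nd harmonic helps: axial resolution scales with wavelength, and the wavelength halves at the received 2nd harmonic. The 1540 m/s speed of sound and the 50 MHz transducer frequency are typical soft-tissue/HFUS values assumed for illustration, not taken from the paper.

```python
C_TISSUE = 1540.0                     # speed of sound in soft tissue, m/s (assumed)

def wavelength_um(freq_hz):
    """Acoustic wavelength in micrometres at a given frequency."""
    return C_TISSUE / freq_hz * 1e6

fundamental = wavelength_um(50e6)     # hypothetical 50 MHz HFUS transducer: ~31 um
harmonic = wavelength_um(100e6)       # its separated 2nd harmonic: half that
```

Even the halved acoustic wavelength remains tens of micrometres, far above optical wavelengths, which is why digitization accuracy had to be verified explicitly against the optical reference.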
Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation
Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.
2015-01-01
Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with a NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10^6 and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid-prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting- and laser-scan-based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent, with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper demonstrates the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil.
High-accuracy dosimetry study for intensity-modulated radiation therapy (IMRT) commissioning
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hae Sun
2010-02-15
% to 7% (0.5 x 0.5 cm²). In addition, a method using a pixel-based unfolding curve was developed and applied to correct the non-uniform response of flat-bed type scanners for a radiochromic film. The accuracy of the method was then evaluated by comparing the results with those of an ion chamber, Monte Carlo simulation, and the CF-based conventional method. For individual doses, the dosimetric error of the conventional method and of the pixel-based unfolding curve was reduced to less than 3% and 1%, respectively. In the case of step-wise doses, the average difference of 16% with respect to the MC calculation was reduced to 1% by using the correction method developed in this study. Consequently, the accuracy of dose computation algorithms in a TPS can be evaluated with the developed LEGO-type solid phantom, small-field dosimetry, and the correction method for the non-uniform response of scanners. It is also recognized that the developed hardware and software, which can be used in QA procedures, are very reliable and could serve as a reference for studies of other radiation therapies.
International Nuclear Information System (INIS)
Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens
2015-01-01
Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided ¹⁹²Ir high-dose-rate (HDR) BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI ¹⁹²Ir-HDR-BT treatment planning datasets of 45 patients, employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviation between the MRI and CT acquisitions that was not time dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of the reconstruction errors. In the follow-up data, the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo.
A study on temporal accuracy of OpenFOAM
Directory of Open Access Journals (Sweden)
Sang Bong Lee
2017-07-01
Full Text Available The Crank–Nicolson scheme in the native OpenFOAM source libraries was not able to provide 2nd-order temporal accuracy of velocity and pressure, since the volume flux of the convective nonlinear terms was only 1st-order accurate in time. In the present study, the simplest way of obtaining the volume flux with 2nd-order accuracy was proposed, using old fluxes. A possible numerical instability originating from an explicit estimation of volume fluxes could be handled by introducing a weighting factor, which was determined by observing the ratio of the finally corrected volume flux to the intermediate volume flux at the previous step. The new calculation of volume fluxes was able to provide velocity and pressure with 2nd-order temporal accuracy. The improvement of temporal accuracy was validated by performing numerical simulations of a 2D Taylor–Green vortex, for which an exact solution is known, and of 2D vortex shedding from a circular cylinder.
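The flux-extrapolation idea summarized above can be illustrated with a toy order-of-accuracy check (an illustrative sketch, not the OpenFOAM implementation; all function names are hypothetical): linearly extrapolating a quantity from two old time levels, φ* = 2φⁿ − φⁿ⁻¹, is 2nd-order accurate in the step size.

```python
import math

def extrapolate(phi_n, phi_nm1):
    # Linear extrapolation from two old time levels:
    # phi*(t + dt) = 2*phi(t) - phi(t - dt), with O(dt^2) error.
    return 2.0 * phi_n - phi_nm1

def max_error(dt, f=math.sin, samples=50):
    # Compare the extrapolated value against the exact f(t + dt)
    # over a range of base times t.
    return max(abs(extrapolate(f(t), f(t - dt)) - f(t + dt))
               for t in (i * 0.1 for i in range(samples)))

e1 = max_error(0.01)
e2 = max_error(0.005)
order = math.log(e1 / e2, 2)  # observed order, should be close to 2
```

Halving the step size reduces the error by roughly a factor of four, confirming the 2nd-order behavior that the corrected volume flux is claimed to restore.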
Energy Technology Data Exchange (ETDEWEB)
Hallstrom, Jason; Ni, Zheng Richard
2018-05-15
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or “gateway”) to direct digital control services was also explored. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5% and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
Importance of debriefing in high-fidelity simulations
Directory of Open Access Journals (Sweden)
Igor Karnjuš
2014-04-01
Full Text Available Debriefing has been identified as one of the most important parts of the high-fidelity simulation learning process. During debriefing, the mentor invites learners to critically assess the knowledge and skills used during the execution of a scenario. Despite the abundance of studies that have examined simulation-based education, debriefing is still poorly defined. The present article examines the essential features of debriefing, along with its phases, techniques and methods, through a systematic review of recent publications. It emphasizes the mentor’s role, since the effectiveness of debriefing largely depends on the mentor’s skill in conducting it. Guidelines that allow mentors to evaluate their own performance in conducting debriefing are also presented. We underline the importance of debriefing in clinical settings as part of a continuous learning process. Debriefing allows medical teams to assess their performance and develop new strategies to achieve higher competencies. Although debriefing is the cornerstone of the high-fidelity simulation learning process, it also represents an important learning strategy in the clinical setting. Many important aspects of debriefing are still poorly explored and understood, and this part of the learning process should therefore be given greater attention in the future.
Enabling parallel simulation of large-scale HPC network systems
International Nuclear Information System (INIS)
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip
2016-01-01
With the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
Speeding Up Network Simulations Using Discrete Time
Lucas, Aaron; Armbruster, Benjamin
2013-01-01
We develop a way of simulating disease spread in networks faster at the cost of some accuracy. Instead of a discrete event simulation (DES) we use a discrete time simulation. This aggregates events into time periods. We prove a bound on the accuracy attained. We also discuss the choice of step size and do an analytical comparison of the computational costs. Our error bound concept comes from the theory of numerical methods for SDEs and the basic proof structure comes from the theory of numeri...
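The time-aggregation idea described above can be sketched with a minimal discrete-time SIR model on a network (an illustrative toy, not the authors' code; the first-order per-step probabilities beta*dt and gamma*dt are an assumption that is only valid for small dt, which is exactly where the step-size trade-off they analyze comes from):

```python
import random

def discrete_time_sir(adj, seed, beta, gamma, dt, steps, rng):
    """One discrete-time SIR run on a network given as an adjacency list.

    Events within each step of length dt are aggregated: an infectious
    node transmits along each S-neighbor edge with probability ~beta*dt
    and recovers with probability ~gamma*dt, instead of scheduling each
    event individually as a discrete-event simulation would.
    """
    state = {v: 'S' for v in adj}
    state[seed] = 'I'
    for _ in range(steps):
        infections, recoveries = [], []
        for v, s in state.items():
            if s != 'I':
                continue
            for w in adj[v]:
                if state[w] == 'S' and rng.random() < beta * dt:
                    infections.append(w)
            if rng.random() < gamma * dt:
                recoveries.append(v)
        # Apply all of this step's aggregated events at once.
        for w in infections:
            state[w] = 'I'
        for v in recoveries:
            state[v] = 'R'
    return state

rng = random.Random(1)
ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
final = discrete_time_sir(ring, seed=0, beta=0.6, gamma=0.1,
                          dt=0.5, steps=40, rng=rng)
```

Shrinking dt recovers the discrete-event dynamics at higher cost, which is the accuracy/speed trade-off the abstract bounds.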
International Nuclear Information System (INIS)
McMurray, J. S.; Williams, C. C.
1998-01-01
Scanning Capacitance Microscopy (SCM) is capable of providing two-dimensional information about dopant and carrier concentrations in semiconducting devices. This information can be used to calibrate models used in the simulation of these devices prior to manufacturing and to develop and optimize the manufacturing processes. To provide information for future generations of devices, ultra-high spatial accuracy (<10 nm) will be required. One method that potentially provides a means to achieve these goals is inverse modeling of SCM data. Current semiconducting devices have large dopant gradients. As a consequence, the capacitance probe signal represents an average over the local dopant gradient. Conversion of the SCM signal to dopant density has previously been accomplished with a physical model which assumes that no dopant gradient exists in the sampling area of the tip. The conversion of data using this model produces results for abrupt profiles that do not have adequate resolution and accuracy. A new inverse model and iterative method have been developed to obtain higher resolution and accuracy from the same SCM data. This model has been used to simulate the capacitance signal obtained from one- and two-dimensional ideal abrupt profiles. These simulated data have been input to a new iterative conversion algorithm, which has recovered the original profiles in both one and two dimensions. In addition, it is found that the shape of the tip can significantly impact resolution. Currently, SCM tips are found to degrade very rapidly. Initially the apex of the tip is approximately hemispherical, but it quickly becomes flat. This flat region often has a radius of about the original hemispherical radius. This change in geometry causes the silicon directly under the disk to be sampled with approximately equal weight. In contrast, a hemispherical geometry samples most strongly the silicon centered under the SCM tip, and the sensitivity falls off quickly with distance from the tip's apex.
Goodrich, Kenneth H.; McManus, John W.; Chappell, Alan R.
1992-01-01
A batch air combat simulation environment known as the Tactical Maneuvering Simulator (TMS) is presented. The TMS serves as a tool for developing and evaluating tactical maneuvering logics. The environment can also be used to evaluate the tactical implications of perturbations to aircraft performance or supporting systems. The TMS is capable of simulating air combat between any number of engagement participants, with practical limits imposed by computer memory and processing power. Aircraft are modeled using equations of motion, control laws, aerodynamics and propulsive characteristics equivalent to those used in high-fidelity piloted simulation. Databases representative of a modern high-performance aircraft with and without thrust-vectoring capability are included. To simplify the task of developing and implementing maneuvering logics in the TMS, an outer-loop control system known as the Tactical Autopilot (TA) is implemented in the aircraft simulation model. The TA converts guidance commands issued by computerized maneuvering logics, in the form of desired angle-of-attack and wind-axis bank angle, into inputs to the inner-loop control augmentation system of the aircraft. This report describes the capabilities and operation of the TMS.
National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S). IllinoisRocstar sets up the infrastructure for...
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
Directory of Open Access Journals (Sweden)
Hepburn Iain
2012-05-01
Full Text Available Abstract Background Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes, and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion STEPS simulates
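The stochastic simulation algorithm (SSA) family that STEPS builds on can be sketched with the classic Gillespie direct method (an illustrative sketch only: STEPS itself uses a composition-and-rejection variant and adds diffusion between tetrahedra; the toy reaction below is invented):

```python
import math
import random

def gillespie_direct(rates, stoich, state, t_end, rng):
    """Minimal Gillespie direct-method SSA for a well-mixed system.

    rates[j](state) -> propensity of reaction j
    stoich[j]       -> dict of species -> integer count change
    Returns the (time, state) history of the trajectory.
    """
    t, history = 0.0, [(0.0, dict(state))]
    while t < t_end:
        props = [r(state) for r in rates]
        total = sum(props)
        if total == 0.0:
            break  # no reaction can fire any more
        # Exponentially distributed waiting time to the next event.
        t += -math.log(1.0 - rng.random()) / total
        # Pick reaction j with probability props[j] / total.
        u, acc, j = rng.random() * total, 0.0, 0
        for j, p in enumerate(props):
            acc += p
            if u <= acc:
                break
        for sp, dn in stoich[j].items():
            state[sp] += dn
        history.append((t, dict(state)))
    return history

# Toy irreversible isomerization A -> B with rate constant k = 1.0.
state = {'A': 100, 'B': 0}
hist = gillespie_direct(rates=[lambda s: 1.0 * s['A']],
                        stoich=[{'A': -1, 'B': 1}],
                        state=state, t_end=10.0, rng=random.Random(3))
```

The composition-and-rejection variant mentioned in the abstract replaces the linear scan over propensities with a grouped sampling scheme that scales better when there are many reaction channels.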
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using a UAV. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan-6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
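A weighted local stereo matching step of the kind described can be sketched in plain Python (a CPU toy, not the paper's FPGA design; the center-weighted kernel here is an assumption standing in for their weighting scheme):

```python
def disparity(left, right, window=1, max_disp=4):
    """Naive weighted-SAD block matching on 2D grayscale images given as
    lists of lists. For each pixel, pick the disparity d that minimizes
    the weighted sum of absolute differences over a (2*window+1)^2 patch,
    comparing left[y][x] against right[y][x-d]."""
    h, w = len(left), len(left[0])
    out = [[0] * w for _ in range(h)]
    for y in range(window, h - window):
        for x in range(window + max_disp, w - window):
            best, best_d = float('inf'), 0
            for d in range(max_disp + 1):
                cost = 0.0
                for dy in range(-window, window + 1):
                    for dx in range(-window, window + 1):
                        # Weight the patch center more heavily (assumed).
                        wgt = 2.0 if (dy, dx) == (0, 0) else 1.0
                        cost += wgt * abs(left[y + dy][x + dx]
                                          - right[y + dy][x + dx - d])
                if cost < best:
                    best, best_d = cost, d
            out[y][x] = best_d
    return out

# Toy check: the right image is the left image pattern shifted by 2 px.
left = [[(x * 7) % 13 for x in range(12)] for _ in range(6)]
right = [[((x + 2) * 7) % 13 for x in range(12)] for _ in range(6)]
dmap = disparity(left, right)
```

The hardware version pipelines exactly this kind of per-pixel cost loop, which is why the weighting and window choices dominate both Block RAM usage and matching accuracy.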
Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo
2013-05-01
Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing IM. The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for diagnosis of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38), and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy of gastric IM.
Simulations of High Speed Fragment Trajectories
Yeh, Peter; Attaway, Stephen; Arunajatesan, Srinivasan; Fisher, Travis
2017-11-01
Shrapnel fragments from an explosion are capable of traveling at supersonic speeds and over distances much farther than expected, due to aerodynamic interactions. Predicting the trajectories and stable tumbling modes of arbitrarily shaped fragments is a fundamental problem applicable to range safety calculations, damage assessment, and military technology. Traditional approaches rely on characterizing fragment flight using a single drag coefficient, which may be inaccurate for fragments with large aspect ratios. In our work, we develop a procedure to simulate trajectories of arbitrarily shaped fragments with higher fidelity using high-performance computing. We employ a two-step approach in which the force and moment coefficients are first computed as a function of orientation using compressible computational fluid dynamics. The force and moment data are then input into a six-degree-of-freedom rigid-body dynamics solver to integrate trajectories in time. The results of these high-fidelity simulations allow us to further understand the flight dynamics and tumbling modes of a single fragment. Furthermore, we use these results to determine the validity and uncertainty of inexpensive methods such as the single-drag-coefficient model.
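The two-step approach (tabulated aerodynamic coefficients, then trajectory integration) can be sketched with a planar toy model. Everything below is invented for illustration: the coefficient table stands in for CFD-derived data, and a forward-Euler point mass stands in for the six-degree-of-freedom solver.

```python
import math

def cd_lookup(angle, table):
    """Linearly interpolate a drag coefficient from a table keyed by
    orientation angle in radians (a stand-in for CFD-derived data)."""
    angle = angle % math.pi
    keys = sorted(table)
    for a0, a1 in zip(keys, keys[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return (1 - t) * table[a0] + t * table[a1]
    return table[keys[-1]]

def fly(v0, spin, table, m=0.01, rho=1.2, area=1e-4, dt=1e-3, steps=2000):
    """Step 2: integrate a planar point-mass trajectory whose drag
    coefficient depends on the (tumbling) orientation. Forward Euler,
    launched horizontally from 1 m; returns downrange distance."""
    x, y, vx, vy, theta = 0.0, 1.0, v0, 0.0, 0.0
    for _ in range(steps):
        v = math.hypot(vx, vy)
        k = 0.5 * rho * cd_lookup(theta, table) * area * v / m
        vx -= k * vx * dt
        vy -= (k * vy + 9.81) * dt
        x += vx * dt
        y += vy * dt
        theta += spin * dt  # prescribed tumbling rate (toy)
        if y <= 0.0:
            break
    return x

# Hypothetical table: face-on (0, pi) draggier side-on (pi/2).
table = {0.0: 0.8, math.pi / 2: 1.8, math.pi: 0.8}
x_tumbling = fly(300.0, spin=50.0, table=table)
```

A single fixed drag coefficient corresponds to collapsing the table to one value, which is exactly the approximation whose uncertainty the high-fidelity runs quantify.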
A practical discrete-adjoint method for high-fidelity compressible turbulence simulations
International Nuclear Information System (INIS)
Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.
2015-01-01
Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that
Kumari, Komal; Donzis, Diego
2017-11-01
Highly resolved computational simulations on massively parallel machines are critical to understanding the physics of the vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that, for a range of optimized parameters, the computation time and error for AT schemes are less than those of their synchronous counterparts. The stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.
Simulations of depleted CMOS sensors for high-radiation environments
Liu, J.; Bhat, S.; Breugnon, P.; Caicedo, I.; Chen, Z.; Degerli, Y.; Godiot-Basolo, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Hügging, F.; Krüger, H.; Moustakas, K.; Pangaud, P.; Rozanov, A.; Rymaszewski, P.; Schwemling, P.; Wang, M.; Wang, T.; Wermes, N.; Zhang, L.
2017-01-01
After the Phase II upgrade of the Large Hadron Collider (LHC), the increased luminosity requires a new upgraded Inner Tracker (ITk) for the ATLAS experiment. As a possible option for the ATLAS ITk, a new pixel detector based on High Voltage/High Resistivity CMOS (HV/HR CMOS) technology is under study. Meanwhile, a new CMOS pixel sensor is also under development for the tracker of the Circular Electron Positron Collider (CEPC). In order to explore the sensors' electrical properties, such as the breakdown voltage and charge collection efficiency, 2D/3D Technology Computer Aided Design (TCAD) simulations have been performed carefully for both of the above-mentioned prototypes. In this paper, the guard-ring simulation for an HV/HR CMOS sensor developed for the ATLAS ITk and the charge collection efficiency simulation for a CMOS sensor explored for the CEPC tracker are discussed in detail. Some comparisons between the simulations and the latest measurements are also addressed.
Modified sine bar device measures small angles with high accuracy
Thekaekara, M.
1968-01-01
Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
Energy Technology Data Exchange (ETDEWEB)
Schmalz, Mark S
2011-07-24
Statement of Problem - The Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data-movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the solution of problems related to efficient
Systematic review of discharge coding accuracy
Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.
2012-01-01
Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to the present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%), with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302
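Median/IQR summaries like those quoted above can be reproduced for any sample with a few lines of Python (the rates below are hypothetical values spanning the reported range, not the study's data):

```python
import statistics

def median_iqr(values):
    """Return (median, (q1, q3)) using quartiles computed with the
    inclusive method, as is common when summarizing small samples."""
    q1, q2, q3 = statistics.quantiles(values, n=4, method='inclusive')
    return q2, (q1, q3)

# Hypothetical per-study accuracy rates (percent) for illustration.
rates = [50.5, 67.3, 80.3, 83.2, 84.2, 92.1, 97.8]
med, (q1, q3) = median_iqr(rates)
```

Reporting the IQR alongside the median, as the review does, conveys the between-study spread without letting the extreme studies (here 50.5% and 97.8%) dominate the summary.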
Gao, Xiaohui; Liu, Yongguang
2018-01-01
There is a serious nonlinear relationship between input and output in the giant magnetostrictive actuator (GMA), so establishing a mathematical model and identifying its parameters is very important for studying its characteristics and improving control accuracy. The current-displacement model is first built based on Jiles-Atherton (J-A) model theory, Ampère's loop theorem and a stress-magnetism coupling model. Laws relating the unknown parameters to the hysteresis loops are then studied to determine the data-taking scope. The modified simulated annealing differential evolution algorithm (MSADEA) is proposed, taking full advantage of the differential evolution algorithm's fast convergence and the simulated annealing algorithm's jumping property to enhance convergence speed and performance. Simulation and experiment results show that this algorithm is not only simple and efficient, but also has fast convergence speed and high identification accuracy.
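The hybrid idea (differential evolution mutation with a simulated-annealing acceptance rule) can be sketched on a toy fitting problem. This is an assumption-laden illustration, not the authors' MSADEA or the J-A model: the tanh response, all constants and the cooling schedule are invented.

```python
import math
import random

def msadea_like_fit(model, xs, ys, bounds, rng, pop=20, gens=200, t0=1.0):
    """Differential evolution (DE/rand/1, F=0.7, CR=0.9) with a
    simulated-annealing acceptance rule: trial vectors that are slightly
    worse than their parent can still be accepted early on, with
    probability exp(-delta/T); T cools each generation."""
    dim = len(bounds)
    def cost(p):
        return sum((model(x, p) - y) ** 2 for x, y in zip(xs, ys))
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    costs = [cost(p) for p in popn]
    for g in range(gens):
        temp = t0 * (0.95 ** g)  # geometric cooling (assumed)
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            trial = [min(max(popn[a][d] + 0.7 * (popn[b][d] - popn[c][d]),
                             bounds[d][0]), bounds[d][1])
                     if rng.random() < 0.9 else popn[i][d]
                     for d in range(dim)]
            ct = cost(trial)
            delta = ct - costs[i]
            if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
                popn[i], costs[i] = trial, ct
    best = min(range(pop), key=costs.__getitem__)
    return popn[best], costs[best]

# Recover parameters of a toy saturating response, loosely reminiscent
# of magnetostrictive saturation; true parameters are (2.0, 0.5).
model = lambda x, p: p[0] * math.tanh(p[1] * x)
xs = [i * 0.5 for i in range(-10, 11)]
ys = [model(x, (2.0, 0.5)) for x in xs]
best, err = msadea_like_fit(model, xs, ys,
                            bounds=[(0.1, 5.0), (0.05, 2.0)],
                            rng=random.Random(7))
```

The annealed acceptance gives the population the "jumping property" the abstract mentions early in the search, while the cooled late generations behave like plain greedy DE.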
Energy Technology Data Exchange (ETDEWEB)
Taleei, R; Peeler, C; Qin, N; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)
2016-06-15
Purpose: One of the most accurate methods for radiation transport is Monte Carlo (MC) simulation, but long computation times prevent its wide application in the clinic. We have recently developed a fast MC code for carbon ion therapy called GPU-based OpenCL Carbon Monte Carlo (goCMC), and its accuracy in physical dose has been established. Since radiobiology is an indispensable aspect of carbon ion therapy, this study evaluates the accuracy of goCMC in biological dose and microdosimetry by benchmarking it against FLUKA. Methods: We performed simulations of carbon pencil beams at 150, 300 and 450 MeV/u in a homogeneous water phantom using goCMC and FLUKA. Dose and energy spectra for primary and secondary ions on the central beam axis were recorded. The repair-misrepair-fixation model was employed to calculate relative biological effectiveness (RBE). The Monte Carlo Damage Simulation (MCDS) tool was used to calculate microdosimetry parameters. Results: Physical dose differences on the central axis were <1.6% of the maximum value. Before the Bragg peak, differences in RBE and RBE-weighted dose were <2% and <1%. At the Bragg peak, the differences were 12.5%, caused by a small range discrepancy and the sensitivity of RBE to the beam spectra. Consequently, the RBE-weighted dose difference was 11%. Beyond the peak, RBE differences were <20% and primarily caused by differences in the Helium-4 spectrum. However, the RBE-weighted dose agreed within 1% due to the low physical dose. Differences in microdosimetric quantities were small except at the Bragg peak. The simulation time per source particle with FLUKA was 0.08 sec, while goCMC was approximately 1000 times faster. Conclusion: Physical doses computed by FLUKA and goCMC were in good agreement. Although relatively large RBE differences were observed at and beyond the Bragg peak, the RBE-weighted dose differences were considered acceptable.
Accuracy Assessment and Analysis for GPT2
Directory of Open Access Journals (Sweden)
YAO Yibin
2015-07-01
Full Text Available GPT (Global Pressure and Temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. GPT has some weaknesses, which have been addressed by a new empirical model named GPT2, which not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters; however, no accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of the temperature, pressure and water vapor pressure given by GPT2. The testing results show that the mean bias of temperature is -0.59℃ with an average RMS of 3.82℃; the absolute values of the average biases of pressure and water vapor pressure are less than 1 mb, with an average RMS of 7 mb for the GPT2 pressure and no more than 3 mb for the water vapor pressure. Accuracy differs with latitude, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
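The bias and RMS statistics used in this comparison are straightforward to reproduce. The sketch below fixes the convention assumed here (bias as the mean of model-minus-reference residuals, RMS as their root mean square); the sample temperatures are hypothetical, not ECMWF/NOAA data.

```python
import math

def bias_and_rms(model, reference):
    """Mean bias and RMS error of model values against reference values."""
    residuals = [m - r for m, r in zip(model, reference)]
    bias = sum(residuals) / len(residuals)
    rms = math.sqrt(sum(d * d for d in residuals) / len(residuals))
    return bias, rms

# Hypothetical temperatures (deg C): empirical-model output vs. reanalysis reference
model = [14.2, 8.9, -1.3, 21.0]
reference = [15.0, 9.5, -0.8, 20.4]
b, r = bias_and_rms(model, reference)
```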
International Nuclear Information System (INIS)
Kazuyuki, Takase; Hiroyuki, Yoshida; Hidesada, Tamai; Hajime, Akimoto; Yasuo, Ose
2003-01-01
Fluid flow characteristics in a fuel bundle of a reduced-moderation light water reactor (RMWR) with a tight-lattice core were analyzed numerically, using a newly developed two-phase flow analysis code, under the full bundle size condition. Conventional analysis methods such as sub-channel codes need constitutive equations based on experimental data; because there are no experimental data on the thermal hydraulics of the tight-lattice core, it is difficult to obtain high prediction accuracy for the thermal design of the RMWR. Direct numerical simulations on the Earth Simulator were therefore chosen. The axial velocity distribution in a fuel bundle changed sharply around a grid spacer, and its quantitative evaluation was obtained from the present preliminary numerical study. The results give good prospects for establishing a thermal design procedure for the RMWR through large-scale direct simulations. (authors)
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
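The cylindrical-to-Cartesian mapping idea can be illustrated with a toy steady-state example. Assuming a logarithmic radial transform x = ln(r) (one common choice; the paper's exact scaling relations are not reproduced here), the radial operator (1/r) d/dr (r dh/dr) reduces to d2h/dx2, so a uniform Cartesian grid in x resolves the steep head gradient near the well without local refinement:

```python
import math

# Steady confined flow to a fully penetrating well (Thiem problem).
# With x = ln(r), (1/r) d/dr (r dh/dr) = 0 becomes d2h/dx2 = 0, whose
# solution on a uniform grid in x is simply linear. Values are hypothetical.
rw, R = 0.1, 100.0        # well radius and outer boundary (m)
hw, hR = 90.0, 100.0      # heads at the well and at the outer boundary (m)
n = 50
xs = [math.log(rw) + i * (math.log(R) - math.log(rw)) / n for i in range(n + 1)]
# d2h/dx2 = 0 with Dirichlet ends -> h is linear in x
hs = [hw + (hR - hw) * (x - xs[0]) / (xs[-1] - xs[0]) for x in xs]
# Map each grid point back to r and compare with the analytical Thiem solution
for x, h in zip(xs, hs):
    r = math.exp(x)
    thiem = hw + (hR - hw) * math.log(r / rw) / math.log(R / rw)
    assert abs(h - thiem) < 1e-9
```

The agreement is exact here because the logarithmic map absorbs the radial singularity; the appeal of the approach is that the same uniform-grid machinery works near and far from the well.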
High Accuracy Nonlinear Control and Estimation for Machine Tool Systems
DEFF Research Database (Denmark)
Papageorgiou, Dimitrios
Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more...... sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear....... The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures
An efficient CMOS bridging fault simulator with SPICE accuracy
Di, C.; Jess, J.A.G.
1996-01-01
This paper presents an alternative modeling and simulation method for CMOS bridging faults. The significance of the method is the introduction of a set of generic-bridge tables which characterize the bridged outputs for each bridge and a set of generic-cell tables which characterize how each cell
Highly immersive virtual reality laparoscopy simulation: development and future aspects.
Huber, Tobias; Wunderling, Tom; Paschold, Markus; Lang, Hauke; Kneist, Werner; Hansen, Christian
2018-02-01
Virtual reality (VR) applications with head-mounted displays (HMDs) have had an impact on information and multimedia technologies. The current work aimed to describe the process of developing a highly immersive VR simulation for laparoscopic surgery. We combined a VR laparoscopy simulator (LapSim) and a VR-HMD to create a user-friendly VR simulation scenario. Continuous clinical feedback was an essential aspect of the development process. We created an artificial VR (AVR) scenario by integrating the simulator video output with VR game components of figures and equipment in an operating room. We also created a highly immersive VR surrounding (IVR) by integrating the simulator video output with a [Formula: see text] video of a standard laparoscopy scenario in the department's operating room. Clinical feedback led to optimization of the visualization, synchronization, and resolution of the virtual operating rooms (in both the IVR and the AVR). Preliminary testing results revealed that individuals experienced a high degree of exhilaration and presence, with rare events of motion sickness. The technical performance showed no significant difference compared to that achieved with the standard LapSim. Our results provided a proof of concept for the technical feasibility of a custom highly immersive VR-HMD setup. Future technical research is needed to improve the visualization, immersion, and capability of interacting within the virtual scenario.
Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.
Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua
2011-05-15
High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction, with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.
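The boosting component can be illustrated independently of the miRD feature set. The sketch below is a minimal AdaBoost with threshold stumps on one-dimensional toy data (not the actual miRD features, training corpus, or boosting variant):

```python
import math

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost with threshold stumps on 1-D features (illustrative).
    X: list of floats; y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in sorted(set(X)):                 # candidate thresholds
            for pol in (1, -1):                  # stump orientation
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi >= t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-12), 1 - 1e-12)    # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Re-weight: misclassified points gain weight, then renormalize
        w = [wi * math.exp(-alpha * yi * (pol if xi >= t else -pol))
             for xi, yi, wi in zip(X, y, w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

X = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost_stumps(X, y, rounds=5)
```

In miRD the weak learners operate on sequence and structure features rather than raw thresholds, and the boosted score is combined with SVM-based feature selection.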
High Accuracy Mass Measurement of the Very Short-Lived Halo Nuclide $^{11}$Li
Le scornet, G
2002-01-01
The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with $\textit{ab initio}$ calculations, require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suited to the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.
Read margin analysis of crossbar arrays using the cell-variability-aware simulation method
Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon
2018-02-01
This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered when predicting the read margin characteristic of the crossbar array, because the read margin depends on the number of word lines and bit lines. However, excessively long CPU times are required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulator is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.
High Speed Simulation Framework for Reliable Logic Programs
International Nuclear Information System (INIS)
Lee, Wan-Bok; Kim, Seog-Ju
2006-01-01
This paper presents a case study of designing a PLC logic simulator that was developed to simulate and verify PLC control programs for nuclear plant systems. A nuclear control system is subject to stricter restrictions than a normal process control system, since it works with nuclear power plants requiring high reliability under severe environments. One restriction is the safety of the control programs, which can be assured by rigorous testing. Another restriction is the simulation speed of the control programs, which should be fast enough to control multiple devices concurrently in real time. To cope with these restrictions, we devised a logic compiler which generates C-code programs from given PLC logic programs. Once a logic program is translated into C code, it can be analyzed by conventional software analysis tools and, after cross-compiling, used to construct a fast logic simulator, in effect a kind of compiled-code simulation
The Accuracy of RADIANCE Software in Modelling Overcast Sky Condition
Baharuddin
2013-01-01
A validation study of the sky models of the RADIANCE simulation software against the overcast sky condition has been carried out in order to test the accuracy of the RADIANCE sky model for modelling the overcast sky condition in Hong Kong. Two sets of data have been analysed. Firstly, data collected from a set of experiments using a physical scale model, in which the illuminance at four points inside the model was measured under real sky conditions. Secondly, the RADIANCE simulation has ...
De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers
Energy Technology Data Exchange (ETDEWEB)
Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H
2006-09-04
We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with a parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM
Solution of partial differential equations by agent-based simulation
International Nuclear Information System (INIS)
Szilagyi, Miklos N
2014-01-01
The purpose of this short note is to demonstrate that partial differential equations can be quickly solved by agent-based simulation with high accuracy. There is no need for the solution of large systems of algebraic equations. This method is especially useful for quick determination of potential distributions and demonstration purposes in teaching electromagnetism. (letters and comments)
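One concrete agent-based reading of this idea is the classic random-walk solution of Laplace's equation: the value of the potential at an interior point equals the expected boundary value reached by a random walker started there. A minimal sketch, assuming a square grid and a boundary potential linear in x (so the exact interior solution is phi = x/n):

```python
import random

def laplace_walk(x0, y0, n, boundary, walkers=2000, rng=None):
    """Estimate the solution of Laplace's equation at (x0, y0) on an n x n grid
    by averaging the boundary values hit by random-walking agents."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(walkers):
        x, y = x0, y0
        # Each agent takes unbiased steps until it reaches the boundary
        while 0 < x < n and 0 < y < n:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary(x, y)
    return total / walkers

n = 10
# Boundary potential linear in x -> exact interior solution is phi(x, y) = x / n
phi = laplace_walk(5, 5, n, lambda x, y: x / n)
```

At the center of the grid the exact value is 0.5, and the agent average converges to it at the usual Monte Carlo rate of 1/sqrt(walkers); no algebraic system is assembled or solved.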
Local indicators of geocoding accuracy (LIGA): theory and application
Directory of Open Access Journals (Sweden)
Jacquez Geoffrey M
2009-10-01
Full Text Available Abstract Background Although sources of positional error in geographic locations (e.g. geocoding error) used for describing and modeling spatial patterns are widely acknowledged, research on how such error impacts the statistical results has been limited. In this paper we explore techniques for quantifying the perturbability of spatial weights to different specifications of positional error. Results We find that a family of curves describes the relationship between perturbability and positional error, and use these curves to evaluate the sensitivity of alternative spatial weight specifications to positional error both globally (when all locations are considered simultaneously) and locally (to identify those locations that would benefit most from increased geocoding accuracy). We evaluate the approach in simulation studies, and demonstrate it using a case-control study of bladder cancer in south-eastern Michigan. Conclusion Three results are significant. First, the shape of the probability distributions of positional error (e.g. circular, elliptical, cross) has little impact on the perturbability of spatial weights, which instead depends on the mean positional error. Second, our methodology allows researchers to evaluate the sensitivity of spatial statistics to positional accuracy for specific geographies. This has substantial practical implications since it makes possible routine sensitivity analysis of spatial statistics to positional error arising in geocoded street addresses, global positioning systems, LIDAR and other geographic data. Third, those locations with high perturbability (most sensitive to positional error) and high leverage (that contribute the most to the spatial weight being considered) will benefit the most from increased positional accuracy. These are rapidly identified using a new visualization tool we call the LIGA scatterplot. Herein lies a paradox for spatial analysis: for a given level of positional error, increasing sample density
International Nuclear Information System (INIS)
Koethe, Yilun; Xu, Sheng; Velusamy, Gnanasekar; Wood, Bradford J.; Venkatesan, Aradhana M.
2014-01-01
To compare the accuracy of a robotic interventional radiologist (IR) assistance platform with a standard freehand technique for computed-tomography (CT)-guided biopsy and simulated radiofrequency ablation (RFA). The accuracy of freehand single-pass needle insertions into abdominal phantoms was compared with insertions facilitated with the use of a robotic assistance platform (n = 20 each). Post-procedural CTs were analysed for needle placement error. Percutaneous RFA was simulated by sequentially placing five 17-gauge needle introducers into 5-cm diameter masses (n = 5) embedded within an abdominal phantom. Simulated ablations were planned based on pre-procedural CT, before multi-probe placement was executed freehand. Multi-probe placement was then performed on the same 5-cm mass using the ablation planning software and robotic assistance. Post-procedural CTs were analysed to determine the percentage of untreated residual target. Mean needle tip-to-target errors were reduced with use of the IR assistance platform (both P < 0.0001). Reduced percentage residual tumour was observed with treatment planning (P = 0.02). Improved needle accuracy and optimised probe geometry are observed during simulated CT-guided biopsy and percutaneous ablation with use of a robotic IR assistance platform. This technology may be useful for clinical CT-guided biopsy and RFA, when accuracy may have an impact on outcome. (orig.)
A Network Contention Model for the Extreme-scale Simulator
Energy Technology Data Exchange (ETDEWEB)
Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL
2015-01-01
The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads, while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim, eliminating the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.
Grauer, Jared A.; Morelli, Eugene A.
2013-01-01
A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
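The Monte Carlo procedure described above can be miniaturized: corrupt the measurements at several noise levels, refit the model parameter each time, and inspect the scatter of the estimates. The sketch below uses a toy linear model in place of the GTM stability-and-control derivatives; the noise levels and slope are hypothetical.

```python
import random

def monte_carlo_sensitivity(true_slope=2.0, noise_levels=(0.0, 0.1, 0.5),
                            trials=200, rng=None):
    """For each sensor-noise level, repeatedly corrupt the measurements and
    refit the parameter; return {sigma: (mean estimate, std of estimates)}."""
    rng = rng or random.Random(1)
    xs = [i / 10 for i in range(1, 21)]          # "input" time history
    results = {}
    for sigma in noise_levels:
        estimates = []
        for _ in range(trials):
            ys = [true_slope * x + rng.gauss(0, sigma) for x in xs]
            # Least-squares slope through the origin
            est = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
            estimates.append(est)
        mean = sum(estimates) / trials
        var = sum((e - mean) ** 2 for e in estimates) / trials
        results[sigma] = (mean, var ** 0.5)
    return results

res = monte_carlo_sensitivity()
```

Plotting the estimate scatter against the noise level is exactly how a maximum allowable sensor error can be read off for a desired modeling accuracy.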
Accuracy assessment of cadastral maps using high resolution aerial photos
Directory of Open Access Journals (Sweden)
Alwan Imzahim
2018-01-01
Full Text Available A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq's Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clarke 1880, while the reference system widely used for navigation (GPS and GNSS) uses the World Geodetic System 1984 (WGS84) as its base reference datum. The objective of this paper is to produce a cadastral map at scale 1:500 (metric scale) by using 2009 aerial photographs with a high ground spatial resolution of 10 cm, referenced to WGS84. The accuracy assessment of the cadastral map updating approach for urban large-scale cadastral maps (1:500-1:1000) was ±0.115 m, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.
Improvement on the accuracy of beam bugs in linear induction accelerator
International Nuclear Information System (INIS)
Xie Yutong; Dai Zhiyong; Han Qing
2002-01-01
In linear induction accelerators, the resistive wall monitors known as 'beam bugs' have been used as essential diagnostics of beam current and location. The author presents a new method that can improve the accuracy of these beam bugs when used for beam position measurements. With a fine beam simulation setup, this method locates the beam position with an accuracy of 0.02 mm and can thus calibrate the beam bugs very well. Experimental results prove that the precision of beam position measurements can reach the submillimeter level
Multilevel criticality computations in AREVA NP's core simulation code ARTEMIS - 195
International Nuclear Information System (INIS)
Van Geemert, R.
2010-01-01
This paper discusses the multi-level critical boron iteration approach that is applied by default in AREVA NP's whole-core neutronics and thermal hydraulics core simulation program ARTEMIS. This multi-level approach is characterized by the projection of variational boron concentration adjustments onto the coarser mesh levels in a multi-level re-balancing hierarchy associated with the nodal flux equations to be solved in steady-state core simulation. At each individual re-balancing mesh level, optimized variational criticality tuning formulas are applied. These drive the core model to a numerically highly accurate self-sustaining state (i.e. with the neutronic eigenvalue equal to 1 to very high numerical precision) by continuous adjustment of the boron concentration as a system-wide scalar criticality parameter. Through the default application of this approach in ARTEMIS reactor cycle simulations, an accuracy of all critical boron concentration estimates better than 0.001 ppm is achieved for all burnup time steps in a computationally efficient way. This high accuracy is relevant for precision optimization in industrial core simulation as well as for enabling accurate reactivity perturbation assessments. The developed approach is presented from a numerical methodology point of view with an emphasis on the multi-grid aspect of the concept. Furthermore, an application-relevant verification is presented in terms of the coupled iteration convergence efficiency achieved for a representative industrial core cycle computation. (authors)
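At its core, the scalar criticality tuning is a root-finding problem on the boron concentration: drive k_eff(c) to 1. The sketch below uses a plain secant update and a hypothetical linear core response (10 pcm of reactivity per ppm of boron); the actual ARTEMIS multi-level rebalancing is far more elaborate than this single-level loop.

```python
def critical_boron_search(k_eff, c0=800.0, c1=1200.0, tol_ppm=1e-3, max_iter=50):
    """Secant iteration on boron concentration (ppm) driving k_eff to 1.
    k_eff is a callable core model, assumed monotone in boron."""
    k0, k1 = k_eff(c0), k_eff(c1)
    for _ in range(max_iter):
        # Secant step toward the root of k_eff(c) - 1 = 0
        c2 = c1 + (1.0 - k1) * (c1 - c0) / (k1 - k0)
        if abs(c2 - c1) < tol_ppm:
            return c2
        c0, k0 = c1, k1
        c1, k1 = c2, k_eff(c2)
    return c1

def model(c):
    """Hypothetical core response: k_eff falls 10 pcm per ppm of boron."""
    return 1.10 - 1.0e-4 * c

cb = critical_boron_search(model)
```

For this linear response the secant step lands on the critical concentration (1000 ppm) in one update; the 0.001 ppm tolerance mirrors the accuracy figure quoted in the abstract.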
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to guarantee numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks.
Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.
Zhao, Qin
2012-01-01
The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by a between-subjects factorial design crossing accuracy motivation (incentive or none) with peer-performance anchor (95%, 55%, or none). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process did occur. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings lend support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy: accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of the findings are discussed.
Hand ultrasound: a high-fidelity simulation of lung sliding.
Shokoohi, Hamid; Boniface, Keith
2012-09-01
Simulation training has been effectively used to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model of simulating lung ultrasound and the features of lung sliding and pneumothorax by performing a hand ultrasound. The simulation model involves scanning the palmar aspect of the hand to create normal lung sliding in varying modes of scanning and to mimic ultrasound features of pneumothorax, including "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available simulation model we describe demonstrates a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
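The adaptive-step explicit alternative recommended in this abstract can be sketched for a minimal single-bucket water balance dS/dt = P - kS (a hypothetical lumped model, not the study's actual rainfall-runoff model), using an embedded Euler/Heun pair for local error control:

```python
def bucket_rhs(storage, precip, k):
    """Hypothetical single-bucket water balance: dS/dt = P - k*S."""
    return precip - k * storage

def integrate_adaptive(s0, precip, k, t_end, tol=1e-6):
    """Adaptive-step explicit integration with an embedded Euler
    (1st order) / Heun (2nd order) pair; the gap between the two
    estimates drives the step-size controller."""
    t, s, dt = 0.0, s0, t_end / 100.0
    while t < t_end:
        dt = min(dt, t_end - t)
        f0 = bucket_rhs(s, precip, k)
        euler = s + dt * f0                      # 1st-order predictor
        f1 = bucket_rhs(euler, precip, k)
        heun = s + 0.5 * dt * (f0 + f1)          # 2nd-order estimate
        err = abs(heun - euler)                  # local error estimate
        if err <= tol:                           # accept the step
            t, s = t + dt, heun
        # grow or shrink the step, with a safety factor and clipping
        dt *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return s

def integrate_fixed_euler(s0, precip, k, t_end, n_steps):
    """First-order, explicit, fixed-step integration for comparison."""
    dt, s = t_end / n_steps, s0
    for _ in range(n_steps):
        s += dt * bucket_rhs(s, precip, k)
    return s
```

For constant P the exact solution S(t) = P/k + (S0 - P/k)e^(-kt) is available, so the accuracy advantage of the adaptive scheme over coarse fixed-step Euler can be verified directly.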
International Nuclear Information System (INIS)
Moelans, N.; Blanpain, B.; Wollants, P.
2008-01-01
A phase-field approach for quantitative simulations of grain growth in anisotropic systems is introduced, together with a new methodology to derive appropriate model parameters that reproduce given misorientation and inclination dependent grain boundary energy and mobility in the simulations. The proposed model formulation and parameter choice guarantee a constant diffuse interface width and consequently give high controllability of the accuracy in grain growth simulations.
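A minimal, isotropic sketch of the underlying multi-order-parameter evolution may help fix ideas; the parameter values below are illustrative and do not reproduce the misorientation- and inclination-dependent boundary properties the paper calibrates:

```python
import numpy as np

def evolve_grains(etas, L=1.0, m=1.0, kappa=2.0, gamma=1.5,
                  dx=0.5, dt=0.02, steps=500):
    """Explicit Allen-Cahn evolution of grain order parameters on a
    periodic 1D grid, for the standard multi-order-parameter free
    energy density
        f = m * sum_i (eta_i**4/4 - eta_i**2/2)
            + m * gamma * sum_{i<j} eta_i**2 * eta_j**2
            + (kappa/2) * sum_i |grad eta_i|**2.
    All parameter values here are illustrative assumptions, not the
    calibrated ones of the paper.
    """
    for _ in range(steps):
        sum_sq = sum(e * e for e in etas)
        updated = []
        for e in etas:
            # periodic second-difference Laplacian
            lap = (np.roll(e, 1) - 2.0 * e + np.roll(e, -1)) / dx**2
            # bulk driving force plus cross-grain penalty term
            dfde = m * (e**3 - e + 2.0 * gamma * e * (sum_sq - e * e))
            updated.append(e - dt * L * (dfde - kappa * lap))
        etas = updated
    return etas

# Two grains on a periodic line: the sharp initial interfaces relax
# into diffuse profiles of constant width while the bulk stays at 1.
eta1 = np.where(np.arange(100) < 50, 1.0, 0.0)
grain1, grain2 = evolve_grains([eta1, 1.0 - eta1])
```

The constant diffuse interface width emphasized in the abstract corresponds, in this toy model, to the equilibrium profile width set by kappa and m being independent of which grains meet.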
Effects of the initial conditions on cosmological $N$-body simulations
L'Huillier, Benjamin; Park, Changbom; Kim, Juhan
2014-01-01
Cosmology is entering an era of percent level precision due to current large observational surveys. This precision in observation is now demanding more accuracy from numerical methods and cosmological simulations. In this paper, we study the accuracy of $N$-body numerical simulations and their dependence on changes in the initial conditions and in the simulation algorithms. For this purpose, we use a series of cosmological $N$-body simulations with varying initial conditions. We test the infl...
Improving the accuracy of dynamic mass calculation
Directory of Open Access Journals (Sweden)
Oleksandr F. Dashchenko
2015-06-01
Full Text Available With the acceleration of goods transport, cargo accounting plays an important role in today's global and complex environment. Weight is the most reliable indicator for materials control. Unlike many other variables that can be measured indirectly, weight can be measured directly and accurately. Using strain-gauge transducers, a weight value can be obtained within a few milliseconds; such values correspond to the momentary load acting on the sensor. Determination of the w
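Since each strain-gauge reading reflects the momentary load, a naive way to recover the static weight from a short burst of readings is to discard the initial transient and average the remainder. The sketch below is an illustrative baseline only, not the dynamic mass calculation method of the paper:

```python
import numpy as np

def estimate_weight(readings, discard_frac=0.5):
    """Estimate the static weight from a burst of strain-gauge readings.

    Each sample corresponds to the momentary load on the sensor, so the
    early part of the record is contaminated by the loading transient.
    Discarding the first portion and averaging the rest is an
    illustrative baseline, not the paper's method.
    """
    readings = np.asarray(readings, dtype=float)
    start = int(len(readings) * discard_frac)
    return readings[start:].mean()

# Synthetic record: a true weight of 10 kg plus a decaying 20 Hz
# oscillation (the loading transient), sampled at 1 kHz for 1 second.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
readings = 10.0 + 2.0 * np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 20.0 * t)
```

On this synthetic record, averaging only the settled half of the burst recovers the true weight more closely than averaging the whole record, which the transient biases.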