WorldWideScience

Sample records for high numerical accuracy

  1. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  2. High-accuracy numerical integration of charged particle motion – with application to ponderomotive force

    International Nuclear Information System (INIS)

    Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu

    2016-01-01

    A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. It is made time-reversal symmetric, and its order of accuracy can be increased to any order by using a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing the extended phase space. Numerical tests showing the performance of the algorithm are presented: one is pure cyclotron motion over a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
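As an illustration of the splitting idea described above (not the authors' algorithm, which decomposes the Poisson tensor in non-canonical variables), the sketch below applies a time-reversal-symmetric Strang splitting, half drift, exact gyration, half drift, to pure cyclotron motion in a uniform magnetic field; all parameter values are made up for the test.

```python
import numpy as np

def rotate_about_z(v, angle):
    """Exact solution of dv/dt = omega * (v x e_z), i.e. gyration about B = B e_z."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([c * v[0] + s * v[1], -s * v[0] + c * v[1], v[2]])

def strang_step(x, v, dt, omega):
    """Time-reversal-symmetric Strang splitting: half drift, exact rotation, half drift."""
    x = x + 0.5 * dt * v
    v = rotate_about_z(v, omega * dt)
    x = x + 0.5 * dt * v
    return x, v

# Pure cyclotron motion integrated over many gyro-periods (B = B e_z, E = 0).
q_over_m, B = 1.0, 1.0
omega = q_over_m * B                   # signed cyclotron frequency
dt = 0.05 * 2 * np.pi / abs(omega)     # 20 steps per gyro-period
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
ke0 = 0.5 * np.dot(v, v)
for _ in range(20 * 1000):             # 1000 gyro-periods
    x, v = strang_step(x, v, dt, omega)
print("relative kinetic-energy drift:", abs(0.5 * np.dot(v, v) - ke0) / ke0)
```
Because the gyration substep is an exact rotation, the kinetic energy is preserved to roundoff even over very long integration times, which is the behaviour the record's first test case illustrates.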

  3. Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing

    International Nuclear Information System (INIS)

    Bailey, David

    2005-01-01

    In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In this note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds for a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digits of accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
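The record above is a book review, but the flavour of extended-precision computing it describes can be sketched in a few lines with Python's mpmath package; the toy integral below is purely illustrative and is not one of the ten challenge problems.

```python
from mpmath import mp, quad, exp, sqrt, pi

# Work with ~110 significant digits so the final answer is good to about 100.
mp.dps = 110

# Toy high-precision quadrature: integral of exp(-x^2) over [0, inf) equals sqrt(pi)/2.
value = quad(lambda x: exp(-x**2), [0, mp.inf])
reference = sqrt(pi) / 2
print(mp.nstr(value, 100))
print("absolute error:", mp.nstr(abs(value - reference), 5))
```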

  4. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack

    2013-09-18

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example
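The following NumPy sketch is not the paper's recursive tile algorithm (and has no QUARK-style runtime); it is a plain right-looking blocked LU with partial pivoting that makes concrete the split between the memory-bound panel factorization and the compute-bound trailing-submatrix update. The block size nb is an arbitrary choice.

```python
import numpy as np

def blocked_lu(A, nb=32):
    """Right-looking blocked LU with partial pivoting: panel factorization (memory bound)
    followed by a trailing-submatrix update (compute bound). Returns the packed LU
    factors and the row permutation as an index array."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # --- panel factorization with partial pivoting (unblocked, column by column) ---
        for j in range(k, k + kb):
            p = j + int(np.argmax(np.abs(A[j:, j])))
            if p != j:                                  # swap full rows, record the pivot
                A[[j, p], :] = A[[p, j], :]
                piv[[j, p]] = piv[[p, j]]
            A[j + 1:, j] /= A[j, j]                     # multipliers (column of L)
            A[j + 1:, j + 1:k + kb] -= np.outer(A[j + 1:, j], A[j, j + 1:k + kb])
        # --- trailing-submatrix update (the highly parallel, BLAS-3 part) ---
        if k + kb < n:
            L11 = np.tril(A[k:k + kb, k:k + kb], -1) + np.eye(kb)
            A[k:k + kb, k + kb:] = np.linalg.solve(L11, A[k:k + kb, k + kb:])   # U12
            A[k + kb:, k + kb:] -= A[k + kb:, k:k + kb] @ A[k:k + kb, k + kb:]  # Schur
    return A, piv

# Numerical-quality check: L @ U reproduces the row-permuted input matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300))
F, piv = blocked_lu(M)
L = np.tril(F, -1) + np.eye(300)
U = np.triu(F)
print("max |L@U - P@M| =", np.max(np.abs(L @ U - M[piv])))
```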

  5. Achieving numerical accuracy and high performance using recursive tile LU factorization with partial pivoting

    KAUST Repository

    Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.

    2013-01-01

    The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example

  6. Learning linear spatial-numeric associations improves accuracy of memory for numbers

    Directory of Open Access Journals (Sweden)

    Clarissa Ann Thompson

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.

  7. Hybrid RANS-LES using high order numerical methods

    Science.gov (United States)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

    Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  8. Broadband EIT borehole measurements with high phase accuracy using numerical corrections of electromagnetic coupling effects

    International Nuclear Information System (INIS)

    Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A

    2013-01-01

    Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest in accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
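The paper's corrections are built from full electronic-circuit and discrete-capacitance forward models; the toy sketch below only illustrates the simplest ingredient, removing an additive inductive-coupling term iωM from a measured transfer impedance. All numbers (impedance, phase, mutual inductances) are hypothetical and are not taken from the study.

```python
import numpy as np

f = np.array([1e-3, 1.0, 1e2, 1e3, 1e4])   # Hz: mHz up to 10 kHz
omega = 2.0 * np.pi * f
Z_soil = 10.0 * np.exp(-1j * 5e-3)         # 10 ohm, -5 mrad: weakly polarizable medium (assumed)
M_true = 1.0e-6                            # H, cable-pair mutual inductance (made up)
Z_meas = Z_soil + 1j * omega * M_true      # additive inductive coupling i*omega*M

M_cal = 0.97e-6                            # H, coupling estimated by a calibration model
Z_corr = Z_meas - 1j * omega * M_cal       # numerical correction of the phase error

for fi, zm, zc in zip(f, Z_meas, Z_corr):
    print(f"f = {fi:8.3g} Hz: raw phase {1e3 * np.angle(zm):7.2f} mrad, "
          f"corrected {1e3 * np.angle(zc):7.2f} mrad (true -5.00)")
```
At the kHz end the uncorrected phase is off by several mrad, while the corrected value stays within a few tenths of a mrad of the true soil response, which is the order of accuracy the record reports.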

  9. On mesh refinement and accuracy of numerical solutions

    NARCIS (Netherlands)

    Zhou, Hong; Peters, Maria; van Oosterom, Adriaan

    1993-01-01

    This paper investigates mesh refinement and its relation to the accuracy of the boundary element method (BEM) and the finite element method (FEM). To this end an isotropic homogeneous spherical volume conductor, for which the analytical solution is available, was used. The numerical results

  10. Stability, accuracy and numerical diffusion analysis of nodal expansion method for steady convection diffusion equation

    International Nuclear Information System (INIS)

    Zhou, Xiafeng; Guo, Jiong; Li, Fu

    2015-01-01

    Highlights: • NEMs are innovatively applied to solve the convection diffusion equation. • Stability, accuracy and numerical diffusion for NEM are analyzed for the first time. • Stability and numerical diffusion depend on the NEM expansion order and its parity. • NEMs have higher accuracy than both second order upwind and QUICK scheme. • NEMs with different expansion orders are integrated into a unified discrete form. - Abstract: The traditional finite difference method or finite volume method (FDM or FVM) is used for HTGR thermal-hydraulic calculation at present. However, both FDM and FVM require fine mesh sizes to achieve the desired precision and thus result in limited efficiency. Therefore, a more efficient and accurate numerical method needs to be developed. The nodal expansion method (NEM) can achieve high accuracy even on coarse meshes in reactor physics analysis, so that the number of spatial meshes and the computational cost can be greatly decreased. Because of its higher efficiency and accuracy, NEM can be innovatively applied to thermal-hydraulic calculation. In the paper, NEMs with different orders of basis functions are successfully developed and applied to the multi-dimensional steady convection diffusion equation. Numerical results show that NEMs with third or higher order basis functions can track the reference solutions very well and are superior to the second order upwind scheme and the QUICK scheme. However, false diffusion and unphysical oscillation behavior are observed for NEMs. To explain the reasons for the above-mentioned behaviors, the stability, accuracy and numerical diffusion properties of NEM are analyzed by Fourier analysis, and by comparing with exact solutions of the difference and differential equations. The theoretical analysis results show that the accuracy of NEM increases with the expansion order. However, the stability and numerical diffusion properties depend not only on the order of basis functions but also on the parity of the expansion order.
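The Fourier (von Neumann) analysis mentioned above is applied in the paper to NEM itself; as a self-contained illustration of the technique, the sketch below computes the amplification factor and the classical modified-equation numerical diffusion of the first-order upwind scheme that NEM is compared against.

```python
import numpy as np

def amplification_factor(c, theta):
    """Von Neumann amplification factor of first-order upwind for q_t + u q_x = 0 (u > 0)."""
    return 1.0 - c * (1.0 - np.exp(-1j * theta))

theta = np.linspace(0.0, np.pi, 181)            # resolved ... grid-scale Fourier modes
for c in (0.25, 0.5, 0.75, 1.0, 1.25):
    g_max = np.abs(amplification_factor(c, theta)).max()
    print(f"Courant number {c:4.2f}: max |g| = {g_max:.3f}"
          f" -> {'stable' if g_max <= 1.0 + 1e-12 else 'unstable'}")

# Modified-equation result: the scheme behaves like advection plus an artificial
# diffusion with coefficient u*dx*(1 - c)/2, which vanishes only at c = 1.
u, dx = 1.0, 0.01
for c in (0.25, 0.5, 0.75):
    print(f"c = {c:4.2f}: equivalent numerical diffusion = {0.5 * u * dx * (1.0 - c):.2e}")
```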

  11. Stability, accuracy and numerical diffusion analysis of nodal expansion method for steady convection diffusion equation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xiafeng, E-mail: zhou-xf11@mails.tsinghua.edu.cn; Guo, Jiong, E-mail: guojiong12@tsinghua.edu.cn; Li, Fu, E-mail: lifu@tsinghua.edu.cn

    2015-12-15

    Highlights: • NEMs are innovatively applied to solve the convection diffusion equation. • Stability, accuracy and numerical diffusion for NEM are analyzed for the first time. • Stability and numerical diffusion depend on the NEM expansion order and its parity. • NEMs have higher accuracy than both second order upwind and QUICK scheme. • NEMs with different expansion orders are integrated into a unified discrete form. - Abstract: The traditional finite difference method or finite volume method (FDM or FVM) is used for HTGR thermal-hydraulic calculation at present. However, both FDM and FVM require fine mesh sizes to achieve the desired precision and thus result in limited efficiency. Therefore, a more efficient and accurate numerical method needs to be developed. The nodal expansion method (NEM) can achieve high accuracy even on coarse meshes in reactor physics analysis, so that the number of spatial meshes and the computational cost can be greatly decreased. Because of its higher efficiency and accuracy, NEM can be innovatively applied to thermal-hydraulic calculation. In the paper, NEMs with different orders of basis functions are successfully developed and applied to the multi-dimensional steady convection diffusion equation. Numerical results show that NEMs with third or higher order basis functions can track the reference solutions very well and are superior to the second order upwind scheme and the QUICK scheme. However, false diffusion and unphysical oscillation behavior are observed for NEMs. To explain the reasons for the above-mentioned behaviors, the stability, accuracy and numerical diffusion properties of NEM are analyzed by Fourier analysis, and by comparing with exact solutions of the difference and differential equations. The theoretical analysis results show that the accuracy of NEM increases with the expansion order. However, the stability and numerical diffusion properties depend not only on the order of basis functions but also on the parity of the expansion order.

  12. Increased-accuracy numerical modeling of electron-optical systems with space-charge

    International Nuclear Information System (INIS)

    Sveshnikov, V.

    2011-01-01

    This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.

  13. Implementation and assessment of high-resolution numerical methods in TRACE

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dean, E-mail: wangda@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley RD 6167, Oak Ridge, TN 37831 (United States); Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G. [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)

    2013-10-15

    Highlights: • Study and implement high-resolution numerical methods for two-phase flow. • They can achieve better numerical accuracy than the 1st-order upwind scheme. • They are of great numerical robustness and efficiency. • Great application for BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing numerical scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations for steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes such as nonlinear flux limiters have been developed and successfully applied in numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study on the implementation and assessment of six nonlinear flux limiters in TRACE. The six flux limiters selected are MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment is focused on numerical stability, convergence, and accuracy of the flux limiters and their applicability for boiling water reactor (BWR) stability analysis. It is found that VA and MUSCL work best among the six flux limiters. Both of them not only have better numerical accuracy than the 1st-order upwind scheme but also preserve great robustness and efficiency.
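As a stand-alone illustration (not the TRACE two-phase implementation), the sketch below applies one of the six limiters assessed, Van Leer's φ(r) = (r + |r|)/(1 + |r|), in a flux-limited upwind/Lax–Wendroff update for scalar 1D advection; the sharp pulse stays free of the spurious oscillations a plain 2nd-order scheme would produce.

```python
import numpy as np

def van_leer(r):
    """Van Leer flux limiter, phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect(q, c, nsteps, limiter=van_leer):
    """Flux-limited upwind/Lax-Wendroff blend for q_t + u q_x = 0, u > 0, periodic BCs."""
    q = q.copy()
    eps = 1e-12
    for _ in range(nsteps):
        dq = np.roll(q, -1) - q                     # q_{i+1} - q_i
        r = (q - np.roll(q, 1)) / (dq + eps)        # upwind smoothness ratio at i+1/2
        phi = limiter(r)
        flux = q + 0.5 * (1.0 - c) * phi * dq       # limited anti-diffusive correction
        q = q - c * (flux - np.roll(flux, 1))
    return q

# Advect a square pulse once around a periodic domain.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
q0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
c = 0.5                                             # Courant number u*dt/dx
q = advect(q0, c, nsteps=int(n / c))                # one full revolution
print("min/max after one revolution:", q.min(), q.max())   # stays essentially within [0, 1]
```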

  14. Implementation and assessment of high-resolution numerical methods in TRACE

    International Nuclear Information System (INIS)

    Wang, Dean; Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G.

    2013-01-01

    Highlights: • Study and implement high-resolution numerical methods for two-phase flow. • They can achieve better numerical accuracy than the 1st-order upwind scheme. • They are of great numerical robustness and efficiency. • Great application for BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing numerical scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations for steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes such as nonlinear flux limiters have been developed and successfully applied in numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study on the implementation and assessment of six nonlinear flux limiters in TRACE. The six flux limiters selected are MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment is focused on numerical stability, convergence, and accuracy of the flux limiters and their applicability for boiling water reactor (BWR) stability analysis. It is found that VA and MUSCL work best among the six flux limiters. Both of them not only have better numerical accuracy than the 1st-order upwind scheme but also preserve great robustness and efficiency.

  15. Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification

    Energy Technology Data Exchange (ETDEWEB)

    Blottner, F.G.; Lopez, A.R.

    1998-10-01

    This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
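The Richardson-extrapolation bookkeeping discussed above can be written in a few lines; the sketch below uses a manufactured set of "solutions" from a second-order method to show how the observed order of accuracy and an extrapolated estimate are recovered from three systematically refined meshes.

```python
import numpy as np

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from solutions on fine (f1), medium (f2) and coarse (f3)
    meshes with a constant refinement ratio r (standard grid-convergence analysis)."""
    return np.log((f3 - f2) / (f2 - f1)) / np.log(r)

def richardson_extrapolate(f1, f2, r, p):
    """Estimate of the exact solution from the two finest meshes and the observed order."""
    return f1 + (f1 - f2) / (r**p - 1.0)

# Manufactured example: a second-order method whose solutions behave as f(h) = 1 + 0.5*h**2.
r = 2.0
h = np.array([0.025, 0.05, 0.1])            # fine, medium, coarse spacing
f1, f2, f3 = 1.0 + 0.5 * h**2
p = observed_order(f1, f2, f3, r)
print("observed order :", p)                                       # ~2
print("extrapolated   :", richardson_extrapolate(f1, f2, r, p))    # ~1, the exact value
```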

  16. Numerical accuracy of real inversion formulas for the Laplace transform

    NARCIS (Netherlands)

    Masol, V.; Teugels, J.L.

    2008-01-01

    In this paper we investigate and compare a number of real inversion formulas for the Laplace transform. The focus is on the accuracy and applicability of the formulas for numerical inversion. In this contribution, we study the performance of the formulas for measures concentrated on a positive
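For concreteness, here is one widely used real inversion formula, the Gaver–Stehfest algorithm, which evaluates the transform only on the positive real axis. The record does not list which formulas the authors compare, so this sketch is purely illustrative.

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    assert N % 2 == 0
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j**half * math.factorial(2 * j) /
                  (math.factorial(half - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1)**(k + half) * s)
    return V

def stehfest_invert(F, t, N=14):
    """Approximate f(t) using only real-axis evaluations of the Laplace transform F(s)."""
    ln2_t = math.log(2.0) / t
    V = stehfest_coefficients(N)
    return ln2_t * sum(Vk * F((k + 1) * ln2_t) for k, Vk in enumerate(V))

# Accuracy check against a known transform pair: F(s) = 1/(s + 1)  <-->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0, 5.0):
    approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t)
    print(f"t = {t}: error = {abs(approx - math.exp(-t)):.2e}")
```
In double precision the number of terms N cannot be pushed much beyond this before cancellation in the large alternating weights destroys the accuracy, which is one reason the numerical behaviour of such formulas is worth comparing.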

  17. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang

    2012-01-01

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related

  18. Testing the accuracy and stability of spectral methods in numerical relativity

    International Nuclear Information System (INIS)

    Boyle, Michael; Lindblom, Lee; Pfeiffer, Harald P.; Scheel, Mark A.; Kidder, Lawrence E.

    2007-01-01

    The accuracy and stability of the Caltech-Cornell pseudospectral code is evaluated using the Kidder, Scheel, and Teukolsky (KST) representation of the Einstein evolution equations. The basic 'Mexico City tests' widely adopted by the numerical relativity community are adapted here for codes based on spectral methods. Exponential convergence of the spectral code is established, apparently limited only by numerical roundoff error or by truncation error in the time integration. A general expression for the growth of errors due to finite machine precision is derived, and it is shown that this limit is achieved here for the linear plane-wave test
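The exponential convergence down to roundoff described above is easy to reproduce for a toy problem; the sketch below differentiates the smooth periodic function exp(sin x) with a Fourier spectral method and prints the error as the resolution grows. This illustrates generic spectral behaviour, not the KST evolution system itself.

```python
import numpy as np

def fourier_derivative(f_vals):
    """Spectral derivative of a smooth 2*pi-periodic function sampled on a uniform grid."""
    n = f_vals.size
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers 0, 1, ..., -1
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f_vals)))

# Error of the spectral derivative of exp(sin x): exponential convergence, then roundoff.
for n in (8, 12, 16, 20, 24, 32, 48, 64):
    x = 2.0 * np.pi * np.arange(n) / n
    f = np.exp(np.sin(x))
    exact = np.cos(x) * f
    err = np.max(np.abs(fourier_derivative(f) - exact))
    print(f"n = {n:3d}: max error = {err:.2e}")
```
The error drops by orders of magnitude with each modest increase in resolution until it hits the machine-precision floor, the same qualitative limit the record attributes to finite machine precision.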

  19. High-order non-uniform grid schemes for numerical simulation of hypersonic boundary-layer stability and transition

    International Nuclear Information System (INIS)

    Zhong Xiaolin; Tatineni, Mahidhar

    2003-01-01

    The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest, to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes are tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat plate boundary layer flows. The high-order non-uniform-grid schemes (up to 11th order) are subsequently applied for the simulation of the receptivity of a hypersonic boundary layer to free stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for the wall-bounded supersonic flow.
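A generic way to build finite-difference approximations on the stretched, wall-clustered grids described above is to solve the small linear system that enforces exactness on monomials; the sketch below (not the paper's specific schemes or boundary closures) derives first-derivative weights for an arbitrary non-uniform stencil.

```python
import numpy as np
from math import factorial

def fd_weights(x_stencil, x0, deriv):
    """Finite-difference weights w_j such that sum_j w_j f(x_j) approximates f^(deriv)(x0),
    exact for polynomials up to degree len(x_stencil) - 1 on arbitrary non-uniform nodes."""
    x = np.asarray(x_stencil, dtype=float) - x0
    m = len(x)
    A = np.vander(x, m, increasing=True).T      # A[p, j] = x_j**p  (moment conditions)
    b = np.zeros(m)
    b[deriv] = factorial(deriv)
    return np.linalg.solve(A, b)

# First derivative at the wall (x = 0) on a 5-point stencil clustered toward the wall.
nodes = np.array([0.0, 0.05, 0.15, 0.35, 0.7])
w = fd_weights(nodes, x0=0.0, deriv=1)
print("weights:", w)
print("approx f'(0) for f = sin:", np.dot(w, np.sin(nodes)), " exact:", np.cos(0.0))
```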

  20. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    Velocity of fluid flow in underground porous media is 6–12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in this kind of simulation, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accuracy methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accuracy finite difference method is confirmed to have a numerical error as low as 10⁻⁵% while the high-accuracy numerical integration method has a numerical error around 0%. Thus, the approach combining the high-accuracy finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of general full-tensor permeability such as maximum and minimum permeability components, principal direction and anisotropic ratio. Copyright © Global-Science Press 2016.

  1. Influence of radiation on predictive accuracy in numerical simulations of the thermal environment in industrial buildings with buoyancy-driven natural ventilation

    International Nuclear Information System (INIS)

    Meng, Xiaojing; Wang, Yi; Liu, Tiening; Xing, Xiao; Cao, Yingxue; Zhao, Jiangping

    2016-01-01

    Highlights: • The effects of radiation on predictive accuracy in numerical simulations were studied. • A scaled experimental model with a high-temperature heat source was set up. • Simulation results were discussed with and without a radiation model. • The buoyancy force and the ventilation rate were investigated. - Abstract: This paper investigates the effects of radiation on predictive accuracy in the numerical simulations of industrial buildings. A scaled experimental model with a high-temperature heat source is set up and the buoyancy-driven natural ventilation performance is presented. Besides predicting ventilation performance in an industrial building, the scaled model in this paper is also used to generate data to validate the numerical simulations. The simulation results show good agreement with the experimental data. The effects of radiation on predictive accuracy in the numerical simulations are studied for both the pure convection model and the combined convection and radiation model. Detailed results are discussed regarding the temperature and velocity distribution, the buoyancy force and the ventilation rate. The temperature and velocity distributions through the middle plane are presented for the pure convection model and the combined convection and radiation model. It is observed that the overall temperature and velocity magnitude predicted by the simulations for pure convection were significantly greater than those for the combined convection and radiation model. In addition, the Grashof number and the ventilation rate are investigated. The results show that the Grashof number and the ventilation rate are greater for the pure convection model than for the combined convection and radiation model.
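For reference, the Grashof number compared above is the standard ratio of buoyancy to viscous forces, Gr = gβΔTL³/ν²; the values in the sketch below are illustrative and are not the scaled-model conditions of the study.

```python
# Grashof number Gr = g * beta * dT * L**3 / nu**2 for a buoyancy-driven air flow.
# All values below are assumed for illustration only.
g = 9.81            # m/s^2, gravitational acceleration
beta = 1.0 / 300.0  # 1/K, ideal-gas thermal expansion coefficient near 300 K
dT = 40.0           # K, heat-source-to-ambient temperature difference (assumed)
L = 0.5             # m, characteristic height of a scaled model (assumed)
nu = 1.6e-5         # m^2/s, kinematic viscosity of air

Gr = g * beta * dT * L**3 / nu**2
print(f"Gr = {Gr:.2e}")   # ~6e8: buoyancy strongly dominates viscous forces
```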

  2. Computer modeling of oil spill trajectories with a high accuracy method

    International Nuclear Information System (INIS)

    Garcia-Martinez, Reinaldo; Flores-Tovar, Henry

    1999-01-01

    This paper proposes a high-accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories that minimise these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite the lack of adequate field information, model results compare well with observations in the impacted area. (Author)
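The accuracy gap between Euler and fourth-order Runge–Kutta particle tracking is easy to demonstrate in an analytic, strongly curved velocity field; the sketch below (a solid-body rotation, not the OilTrack current fields) advects one particle for a full revolution with both methods.

```python
import numpy as np

def velocity(p):
    """Solid-body rotation about the origin: a strongly curved, non-uniform velocity field."""
    x, y = p
    return np.array([-y, x])

def euler_step(p, dt):
    return p + dt * velocity(p)

def rk4_step(p, dt):
    k1 = velocity(p)
    k2 = velocity(p + 0.5 * dt * k1)
    k3 = velocity(p + 0.5 * dt * k2)
    k4 = velocity(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Track one particle for a full revolution; the exact trajectory returns to (1, 0).
dt, nsteps = 2 * np.pi / 200, 200
p_eu = p_rk = np.array([1.0, 0.0])
for _ in range(nsteps):
    p_eu, p_rk = euler_step(p_eu, dt), rk4_step(p_rk, dt)
print("Euler end-point error:", np.linalg.norm(p_eu - [1.0, 0.0]))   # drifts outward
print("RK4   end-point error:", np.linalg.norm(p_rk - [1.0, 0.0]))   # orders of magnitude smaller
```
The Euler trajectory spirals outward (artificial dispersion of the tracked patch), while RK4 closes the loop almost exactly at the same step size, which is the motivation stated in the record.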

  3. Accuracy requirements for the calculation of gravitational waveforms from coalescing compact binaries in numerical relativity

    International Nuclear Information System (INIS)

    Miller, Mark

    2005-01-01

    I discuss the accuracy requirements on numerical relativity calculations of inspiraling compact object binaries whose extracted gravitational waveforms are to be used as templates for matched filtering signal extraction and physical parameter estimation in modern interferometric gravitational wave detectors. Using a post-Newtonian point particle model for the premerger phase of the binary inspiral, I calculate the maximum allowable errors for the mass and relative velocity and positions of the binary during numerical simulations of the binary inspiral. These maximum allowable errors are compared to the errors of state-of-the-art numerical simulations of multiple-orbit binary neutron star calculations in full general relativity, and are found to be smaller by several orders of magnitude. A post-Newtonian model for the error of these numerical simulations suggests that adaptive mesh refinement coupled with second-order accurate finite difference codes will not be able to robustly obtain the accuracy required for reliable gravitational wave extraction on Terabyte-scale computers. I conclude that higher-order methods (higher-order finite difference methods and/or spectral methods) combined with adaptive mesh refinement and/or multipatch technology will be needed for robustly accurate gravitational wave extraction from numerical relativity calculations of binary coalescence scenarios

  4. Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine

    Directory of Open Access Journals (Sweden)

    Spodniak Miroslav

    2017-01-01

    This article describes an approximate numerical approach for estimating the low cycle fatigue of a high pressure turbine disc for the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics available for the particular high pressure engine turbine. The method described here enables a relatively fast and economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimate depends on the accuracy of the required input data for the particular investigated object.

  5. High-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects.

    Science.gov (United States)

    Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian

    2015-06-29

    A simple, high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when a dual-frequency laser beam reflects numerous times in a Fabry-Perot cavity and then returns to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer has the advantage of higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step is conducted, which shows great potential for nanometrology.

  6. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

  7. European Workshop on High Order Nonlinear Numerical Schemes for Evolutionary PDEs

    CERN Document Server

    Beaugendre, Héloïse; Congedo, Pietro; Dobrzynski, Cécile; Perrier, Vincent; Ricchiuto, Mario

    2014-01-01

    This book collects papers presented during the European Workshop on High Order Nonlinear Numerical Methods for Evolutionary PDEs (HONOM 2013) that was held at INRIA Bordeaux Sud-Ouest, Talence, France in March, 2013. The central topic is high order methods for compressible fluid dynamics. In the workshop, and in this proceedings, greater emphasis is placed on the numerical than the theoretical aspects of this scientific field. The range of topics is broad, extending through algorithm design, accuracy, large scale computing, complex geometries, discontinuous Galerkin, finite element methods, Lagrangian hydrodynamics, finite difference methods and applications and uncertainty quantification. These techniques find practical applications in such fields as fluid mechanics, magnetohydrodynamics, nonlinear solid mechanics, and others for which genuinely nonlinear methods are needed.

  8. Numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp

    Science.gov (United States)

    Hara, Tatsuhiko; Nishimura, Naoki

    2011-12-01

    We perform numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp. We conduct these experiments by measuring Mwp from synthetic seismograms and comparing the resulting values to the moment magnitudes used in the calculation of the synthetic seismograms. In the numerical experiments using point sources, we have found that there is a significant dependence of Mwp on focal mechanisms, and that depth phases have a large impact on Mwp estimates, especially for large shallow earthquakes. Numerical experiments using line sources suggest that the effects of source finiteness and rupture propagation on Mwp estimates are on the order of 0.2 magnitude units for vertical fault planes with pure dip-slip mechanisms and 45° dipping fault planes with pure dip-slip (thrust) mechanisms, but that the dependence is small for strike-slip events on a vertical fault plane. Numerical experiments for huge thrust faulting earthquakes on a fault plane with a shallow dip angle suggest that the Mwp estimates do not saturate in the moment magnitude range between 8 and 9, although they are underestimates.

  9. High energy gravitational scattering: a numerical study

    CERN Document Server

    Marchesini, Giuseppe

    2008-01-01

    The S-matrix in gravitational high energy scattering is computed from the region of large impact parameters b down to the regime where classical gravitational collapse is expected to occur. By solving the equation of an effective action introduced by Amati, Ciafaloni and Veneziano we find that the perturbative expansion around the leading eikonal result diverges at a critical value signalling the onset of a new regime. We then discuss the main features of our explicitly unitary S-matrix down to the Schwarzschild's radius R=2G s^(1/2), where it diverges at a critical value b ~ 2.22 R of the impact parameter. The nature of the singularity is studied with particular attention to the scaling behaviour of various observables at the transition. The numerical approach is validated by reproducing the known exact solution in the axially symmetric case to high accuracy.

  10. High-accuracy CFD prediction methods for fluid and structure temperature fluctuations at T-junction for thermal fatigue evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-07-15

    Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected to be a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of the proposed numerical methods for simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used as the large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near wall resolution (NWR) by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of the power spectral density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are

  11. Cause and Cure - Deterioration in Accuracy of CFD Simulations With Use of High-Aspect-Ratio Triangular Tetrahedral Grids

    Science.gov (United States)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar

    2017-01-01

    Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries, the use of high-aspect ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE), where triangular/tetrahedral elements are the mandatory building blocks. With the requirement of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in use of such high-aspect ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of gradient computation involving a triangular element hinges on the value of its shape factor Γ ≡ sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect ratio triangle is not necessarily highly obtuse, and in fact it can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect ratio triangular grid is given. Also a brief discussion on the extension of the current mathematical framework to the
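The shape factor defined above is straightforward to evaluate from vertex coordinates; the sketch below confirms the limiting cases quoted in the abstract: Γ = 9/4 for an equilateral triangle, a value far below 1 for a highly obtuse sliver, and a value close to 9/4 for a high-aspect-ratio triangle whose angles stay near 90 degrees. The example triangles are illustrative, not taken from the paper.

```python
import numpy as np

def shape_factor(p1, p2, p3):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) for the triangle with vertices p1, p2, p3."""
    pts = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
    gamma = 0.0
    for i in range(3):
        a = pts[(i + 1) % 3] - pts[i]
        b = pts[(i + 2) % 3] - pts[i]
        cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        gamma += 1.0 - cos_angle**2          # sin^2 of the interior angle at pts[i]
    return gamma

print(shape_factor((0, 0), (1, 0), (0.5, np.sqrt(3) / 2)))  # equilateral: 9/4 = 2.25
print(shape_factor((0, 0), (1, 0), (0.5, 0.01)))            # high-aspect, highly obtuse: << 1
print(shape_factor((0, 0), (1, 0), (0, 0.01)))              # high-aspect, near-right angles: ~2
```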

  12. High-accuracy critical exponents for O(N) hierarchical 3D sigma models

    International Nuclear Information System (INIS)

    Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.

    2006-01-01

    The critical exponent γ and its subleading exponent Δ in Dyson's 3D O(N) hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ·φ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, with at least 4 significant digits.

  13. Fission product model for BWR analysis with improved accuracy in high burnup

    International Nuclear Information System (INIS)

    Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira

    1998-01-01

    A new fission product (FP) chain model has been developed for use in BWR lattice calculations. In establishing the model, two requirements, i.e. accuracy in predicting burnup reactivity and ease of practical application, are simultaneously considered. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo nuclides having absorption cross sections independent of burnup history and fuel composition. For the verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model can offer a high degree of accuracy for FP representation in BWR lattice calculations. (author)

  14. High-accuracy measurement and compensation of grating line-density error in a tiled-grating compressor

    Science.gov (United States)

    Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua

    2017-02-01

    Grating tiling is one of the most effective means of increasing the aperture of gratings. The line-density error (LDE) between sub-gratings degrades the performance of the tiled gratings, so high-accuracy measurement and compensation of the LDE are important for improving the output pulse characteristics of the tiled-grating compressor. In this paper, the influence of the LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, and the output beam drift and output pulse broadening resulting from the LDE are presented. Based on the numerical results we propose a compensation method to reduce the degradation of the tiled-grating compressor by applying an angular tilt error and a longitudinal piston error at the same time. Moreover, a monitoring system is set up to measure the LDE between sub-gratings accurately, and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, and this provides an efficient way to guide the adjustment of the tiled gratings.

  15. High accuracy FIONA-AFM hybrid imaging

    International Nuclear Information System (INIS)

    Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.

    2011-01-01

    Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.

  16. High Accuracy Transistor Compact Model Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part-to-part variations as well as to give an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  17. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    Directory of Open Access Journals (Sweden)

    Ricardo Lopes Cardoso

    Previous research supports that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.

  18. A Graph is Worth a Thousand Words: How Overconfidence and Graphical Disclosure of Numerical Information Influence Financial Analysts Accuracy on Decision Making.

    Science.gov (United States)

    Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli

    2016-01-01

    Previous research supports that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Moreover, literature shows that different types of graphical information can help or harm the accuracy on decision making of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that compared to text, column graph enhanced accuracy on decision making, followed by line graphs. No difference was found between table and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample size (295) of financial analysts instead of a smaller sample size of students that graphs are relevant decision aids to tasks related to the interpretation of numerical information. Second, it uses the text as a baseline comparison to test how different ways of information disclosure (line and column graphs, and tables) can enhance understandability of information. Third, it brings an internal factor to this process: overconfidence, a personal trait that harms the decision-making process of individuals. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy on decision making regarding numerical information presented in a graphical form. In addition, we offer suggestions concerning some practical implications for professional accountants, auditors, financial analysts and standard setters.

  19. Taylor bubbles at high viscosity ratios: experiments and numerical simulations

    Science.gov (United States)

    Hewakandamby, Buddhika; Hasan, Abbas; Azzopardi, Barry; Xie, Zhihua; Pain, Chris; Matar, Omar

    2015-11-01

    The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube, often occurring in gas-liquid slug flows in many industrial applications, particularly oil and gas production. The objective of this study is to investigate the fluid dynamics of a three-dimensional Taylor bubble rising in highly viscous silicone oil in a vertical pipe. An adaptive unstructured mesh modelling framework is adopted here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of the rising bubble and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a `volume of fluid'-type method for interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Experimental results for the Taylor bubble shape and rise velocity are presented, together with numerical results for the dynamics of the bubbles. A comparison of the simulation predictions with experimental data available in the literature is also presented to demonstrate the capabilities of our numerical method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  20. Effect of fluid elasticity on the numerical stability of high-resolution schemes for high shearing contraction flows using OpenFOAM

    Directory of Open Access Journals (Sweden)

    T. Chourushi

    2017-01-01

    Full Text Available Viscoelastic fluids, due to their non-linear nature, play an important role in process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though apparently simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are still considered far from accurate, as the first-order upwind differencing scheme (UDS) is often employed to improve the stability of the algorithm. To avoid this effect, some works have been reported in the past in which high-resolution schemes (HRS) were employed and the Deborah number was varied. However, these works are limited to creeping flows and do not detail any information on the numerical stability of HRS. Hence, this article presents a numerical study of high shearing contraction flows, where the stability of HRS is addressed with reference to fluid elasticity. Results suggest that all HRS show some order of undue oscillations in flow variable profiles, measured along vertical lines placed near the contraction region in the upstream section of the domain, at elasticity number E≈5. Furthermore, by varying E, a clear relationship between the numerical stability of HRS and E was obtained, which states that the order of undue oscillations in flow variable profiles is directly proportional to E.

  1. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    Science.gov (United States)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is low accuracy. In this paper a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation that maps the characteristic points from the optical measurement onto their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
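
    As a rough illustration of the registration step described above, the following Python sketch estimates a rigid rotation and translation from corresponding marker pairs (a Kabsch/Procrustes-style estimator) and applies it to the full high-density cloud; the array names, the random test data and the estimator itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src_i + t ~ dst_i (Kabsch/Procrustes)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# markers_optical: characteristic points from the high-density optical scan (HDLA)
# markers_cmm:     their contact (CMM) counterparts, in the same order (LDHA)
rng = np.random.default_rng(0)
markers_optical = rng.random((10, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
markers_cmm = markers_optical @ Rz.T + np.array([0.5, -0.2, 0.1])

R, t = rigid_transform(markers_optical, markers_cmm)
cloud = rng.random((10000, 3))                 # full high-density point cloud
cloud_corrected = cloud @ R.T + t              # displaced toward the CMM reference frame
```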

  2. Fast and high-order numerical algorithms for the solution of multidimensional nonlinear fractional Ginzburg-Landau equation

    Science.gov (United States)

    Mohebbi, Akbar

    2018-02-01

    In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to using the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
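
    For orientation, the following Python sketch shows a minimal 1-D Strang split-step Fourier step for a fractional Ginzburg-Landau-type equation, with the fractional Laplacian handled through its Fourier symbol |k|^alpha; the exact PDE form, parameters and grid are assumptions, and the paper's schemes are multidimensional and of higher temporal order.

```python
import numpy as np

# Illustrative equation: u_t = u - (1 + i*nu)(-Laplacian)^(alpha/2) u - (1 + i*mu)|u|^2 u
N, L, dt, alpha = 256, 2.0 * np.pi, 1.0e-3, 1.5
nu, mu = 1.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)         # angular wavenumbers
symbol = np.abs(k) ** alpha                          # Fourier symbol of (-Laplacian)^(alpha/2)
u = np.exp(1j * x) * (1.0 + 0.01 * np.cos(3.0 * x))  # smooth periodic initial data

def strang_step(u, dt):
    # half step of the linear part, exact in Fourier space
    lin = np.exp(-0.5 * dt * ((1.0 + 1j * nu) * symbol - 1.0))
    u = np.fft.ifft(lin * np.fft.fft(u))
    # full step of the pointwise nonlinear part, with |u|^2 frozen over the step
    # (the paper integrates this sub-problem to higher order)
    u = u * np.exp(-dt * (1.0 + 1j * mu) * np.abs(u) ** 2)
    # second linear half step
    return np.fft.ifft(lin * np.fft.fft(u))

for _ in range(1000):
    u = strang_step(u, dt)
```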

  3. Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients

    International Nuclear Information System (INIS)

    Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.

    2016-01-01

    Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of High-Definition CT Angiography (HD-CTCA) has been assessed. • Invasive Coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
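
    For readers unfamiliar with the reported metrics, the snippet below shows how sensitivity, specificity, PPV and NPV follow from a 2x2 confusion table; the counts used are invented and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-patient sensitivity, specificity, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only -- not the study's data.
print(diagnostic_metrics(tp=200, fp=2, fn=6, tn=92))
```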

  4. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2016-06-01

    Full Text Available An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
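
    As background, the sketch below shows one predict/update cycle of a generic linear Kalman filter on a toy two-state model; it is only meant to fix notation and bears no relation to the actual 51-state calibration filter, whose state layout and noise models are not reproduced here.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# Tiny 2-state toy (position and drift), purely illustrative.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 1e-6 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, F, Q, H, R, z=np.array([0.1]))
```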

  5. Effect of fluid elasticity on the numerical stability of high-resolution schemes for high shearing contraction flows using OpenFOAM

    OpenAIRE

    Chourushi, T.

    2017-01-01

    Viscoelastic fluids due to their non-linear nature play an important role in process and polymer industries. These non-linear characteristics of fluid, influence final outcome of the product. Such processes though look simple are numerically challenging to study, due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are considered distant from accuracy, as first-order upwin...

  6. A generalized polynomial chaos based ensemble Kalman filter with high accuracy

    International Nuclear Information System (INIS)

    Li Jia; Xiu Dongbin

    2009-01-01

    As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
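
    A minimal stochastic (perturbed-observation) EnKF analysis step is sketched below to fix notation; the paper's contribution, replacing the Monte Carlo ensemble with samples drawn from a gPC surrogate of the state, is not reproduced here, and the toy dimensions are arbitrary.

```python
import numpy as np

def enkf_analysis(X, H, R, y, rng):
    """Stochastic EnKF update. X: n x m ensemble (columns are members)."""
    m = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)         # observed anomalies
    P_yy = HA @ HA.T / (m - 1) + R                   # innovation covariance
    P_xy = A @ HA.T / (m - 1)                        # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)                   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=m).T
    return X + K @ (Y - HX)                          # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 50))                         # 4-dimensional state, 50 members
H = np.array([[1.0, 0.0, 0.0, 0.0]])                 # observe the first component
R = np.array([[0.05]])
Xa = enkf_analysis(X, H, R, y=np.array([0.3]), rng=rng)
```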

  7. The study of optimization on process parameters of high-accuracy computerized numerical control polishing

    Science.gov (United States)

    Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu

    2017-09-01

    Spherical lenses introduce spherical aberration and reduce optical performance. Consequently, in practice an optical system must combine several spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses have been widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process because of their varying curvature. Sub-aperture computer numerical control (CNC) polishing has been adopted for aspherical surface fabrication in recent years. With the CNC polishing process, mid-spatial frequency (MSF) error is normally introduced, and the MSF surface texture decreases the optical performance of high-precision optical systems, especially for short-wavelength applications. Based on a bonnet polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths when a higher total removal depth is required, can efficiently reduce the MSF error. To verify the polishing parameters, we divided a correction polishing process into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
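
    The kind of band-limited PSD check used to grade mid-spatial-frequency error can be sketched as follows; the sampling step, MSF band limits and test profile are illustrative assumptions rather than values from the study.

```python
import numpy as np
from scipy.signal import welch

dx = 0.1e-3                                          # assumed lateral sampling step (m)
x = np.arange(8192) * dx
profile = (5e-9 * np.sin(2 * np.pi * x / 2e-3)       # 2 mm ripple, i.e. 0.5 mm^-1
           + 1e-9 * np.random.default_rng(0).standard_normal(x.size))

f, psd = welch(profile, fs=1.0 / dx, nperseg=2048)   # spatial frequency axis in 1/m
band = (f >= 0.2e3) & (f <= 2.0e3)                   # assumed MSF band: 0.2-2 mm^-1
rms_msf = np.sqrt(np.trapz(psd[band], f[band]))      # band-limited RMS height
print(f"RMS in MSF band: {rms_msf:.2e} m")
```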

  8. Numerical models for high beta magnetohydrodynamic flow

    International Nuclear Information System (INIS)

    Brackbill, J.U.

    1987-01-01

    The fundamentals of numerical magnetohydrodynamics for highly conducting, high-beta plasmas are outlined. The discussions emphasize the physical properties of the flow, and how elementary concepts in numerical analysis can be applied to the construction of finite difference approximations that capture these features. The linear and nonlinear stability of explicit and implicit differencing in time is examined, the origin and effect of numerical diffusion in the calculation of convective transport is described, and a technique for maintaining solenoidality in the magnetic field is developed. Many of the points are illustrated by numerical examples. The techniques described are applicable to the time-dependent, high-beta flows normally encountered in magnetically confined plasmas, plasma switches, and space and astrophysical plasmas. 40 refs

  9. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    Science.gov (United States)

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a novel method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times. Both schemes have been implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profile with the benchmark results from Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum converged Reynolds number is 3200 for BGK and 7500 for MRT.
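
    For reference, a minimal D2Q9 BGK collision step is sketched below to make the single relaxation time τ explicit; streaming, boundary conditions and the MRT collision matrices are omitted, and the grid size is arbitrary.

```python
import numpy as np

# Standard D2Q9 lattice weights and velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    cu = np.einsum('qd,dxy->qxy', c, u)                  # c_q . u at every node
    usq = np.einsum('dxy,dxy->xy', u, u)
    return w[:, None, None] * rho * (1.0 + 3.0*cu + 4.5*cu**2 - 1.5*usq)

def bgk_collide(f, tau):
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->dxy', c, f) / rho
    return f - (f - equilibrium(rho, u)) / tau           # BGK relaxation toward f_eq

nx = ny = 32
f = np.tile(w[:, None, None], (1, nx, ny))               # start from rest (rho = 1, u = 0)
f = bgk_collide(f, tau=0.8)
```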

  10. Numerical Simulation of Cyclic Thermodynamic Processes

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård

    2006-01-01

    This thesis is on numerical simulation of cyclic thermodynamic processes. A modelling approach and a method for finding periodic steady state solutions are described. Examples of applications are given in the form of four research papers. Stirling machines and pulse tube coolers are introduced...... and a brief overview of the current state of the art in methods for simulating such machines is presented. It was found that different simulation approaches, which model the machines with different levels of detail, currently coexist. Methods using many simplifications can be easy to use and can provide...... models flexible and easy to modify, and to make simulations fast. A high level of accuracy was achieved for integrations of a model created using the modelling approach; the accuracy depended on the settings for the numerical solvers in a very predictable way. Selection of fast numerical algorithms...

  11. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    Science.gov (United States)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

    To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of a three-dimensional soil column was suggested to be satisfied with the specified maximum frequency. The layered soil column was divided into multiple sub-soils with a different viscous damping matrix according to the shear velocities as the soil properties were significantly different. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.

  12. A variational nodal diffusion method of high accuracy; Varijaciona nodalna difuziona metoda visoke tachnosti

    Energy Technology Data Exchange (ETDEWEB)

    Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using Legendre expansion in the transverse coordinates, higher order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming a Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first order approximation yields the same order of accuracy as the standard nodal methods with quadratic leakage approximation, while the second order reaches the reference solution. (author)

  13. Asymptotic solutions of numerical transport problems in optically thick, diffusive regimes II

    International Nuclear Information System (INIS)

    Larsen, E.W.; Morel, J.E.

    1989-01-01

    In a recent article (Larsen, Morel, and Miller, J. Comput. Phys. 69, 283 (1987)), a theoretical method is described for assessing the accuracy of transport differencing schemes in highly scattering media with optically thick spatial meshes. In the present article, this method is extended to enable one to determine the accuracy of such schemes in the presence of numerically unresolved boundary layers. Numerical results are presented that demonstrate the validity and accuracy of our analysis. copyright 1989 Academic Press, Inc

  14. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    2017-02-01

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  15. Numerical solution of one-dimensional transient, two-phase flows with temporal fully implicit high order schemes: Subcooled boiling in pipes

    Energy Technology Data Exchange (ETDEWEB)

    López, R., E-mail: ralope1@ing.uc3m.es; Lecuona, A., E-mail: lecuona@ing.uc3m.es; Nogueira, J., E-mail: goriba@ing.uc3m.es; Vereda, C., E-mail: cvereda@ing.uc3m.es

    2017-03-15

    Highlights: • A two-phase flow numerical algorithm with high-order temporal schemes is proposed. • The transient solution route depends on the temporal high-order scheme employed. • The ESDIRK scheme for two-phase flow events exhibits high computational performance. • The computational implementation of the ESDIRK scheme is straightforward. - Abstract: An extension to 1-D transient two-phase flows of the SIMPLE-ESDIRK method, initially developed for incompressible viscous flows by Ijaz, is presented. This extension is motivated by the high temporal order of accuracy demanded to cope with fast phase-change events. The methodology is suitable for boiling heat exchangers, solar thermal receivers, etc. The solution methodology consists of a finite volume staggered grid discretization of the governing equations in which the transient terms are treated with the explicit first stage singly diagonally implicit Runge-Kutta (ESDIRK) method, which is suitable for the stiff differential equations present in instant boiling or condensation processes. It is combined with the semi-implicit pressure linked equations algorithm (SIMPLE) for the calculation of the pressure field. The case study consists of the numerical reproduction of the Bartolomei upward boiling pipe flow experiment. The steady-state validation of the numerical algorithm is made against these experimental results and against well-known numerical results for that experiment. In addition, a detailed study reveals the benefits over the first-order backward Euler method when applying 3rd- and 4th-order schemes, with emphasis on the behaviour when the system is subjected to periodic square-wave wall heat disturbances, concluding that the use of the ESDIRK method in two-phase calculations offers remarkable accuracy and computational advantages.

  16. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy testing evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are insufficient, so it is difficult to test and evaluate the horizontal accuracy of the orthophoto image. The uncertainty of the horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images. This method uses testing points with different accuracy and reliability, sourced from high-accuracy reference data and field measurements. The new method solves the problem of horizontal accuracy testing of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.

  17. Numerical investigation into the highly nonlinear heat transfer equation with bremsstrahlung emission in the inertial confinement fusion plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Habibi, M.; Oloumi, M.; Hosseinkhani, H.; Magidi, S. [Plasma and Fusion Research School, Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)

    2015-10-15

    A highly nonlinear parabolic partial differential equation that models the electron heat transfer process in laser inertial fusion has been solved numerically. The strong temperature dependence of the electron thermal conductivity and heat loss term (Bremsstrahlung emission) makes this a highly nonlinear process. In this case, an efficient numerical method is developed for the energy transport mechanism from the region of energy deposition into the ablation surface by a combination of the Crank-Nicolson scheme and the Newton-Raphson method. The quantitative behavior of the electron temperature and the comparison between analytic and numerical solutions are also investigated. For more clarification, the accuracy and conservation of energy in the computations are tested. The numerical results can be used to evaluate the nonlinear electron heat conduction, considering the released energy of the laser pulse at the Deuterium-Tritium (DT) targets and preheating by heat conduction ahead of a compression shock in the inertial confinement fusion (ICF) approach. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
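
    A schematic Crank-Nicolson plus Newton-Raphson step for a 1-D nonlinear heat equation of this type is sketched below; the Spitzer-like conductivity ~ T^(5/2), the bremsstrahlung-like loss ~ T^(1/2), the scaling constants and the boundary treatment are all illustrative assumptions, so this shows only the solution strategy, not the paper's calibrated model.

```python
import numpy as np

def rhs(T, dx):
    """d/dx( kappa(T) dT/dx ) - S(T) with kappa ~ T^2.5 and S ~ T^0.5 (arbitrary scaling)."""
    kappa = T ** 2.5
    k_face = 0.5 * (kappa[:-1] + kappa[1:])              # conductivity on cell faces
    flux = k_face * np.diff(T) / dx
    out = np.zeros_like(T)                                # boundary nodes held fixed
    out[1:-1] = np.diff(flux) / dx - 0.1 * np.sqrt(np.maximum(T[1:-1], 0.0))
    return out

def cn_newton_step(T_old, dt, dx, tol=1e-10, maxit=20):
    T = T_old.copy()
    for _ in range(maxit):
        F = T - T_old - 0.5 * dt * (rhs(T, dx) + rhs(T_old, dx))   # Crank-Nicolson residual
        if np.linalg.norm(F) < tol:
            break
        eps = 1e-7                                        # finite-difference Jacobian (dense,
        J = np.zeros((T.size, T.size))                    # fine for a small illustrative grid)
        for j in range(T.size):
            Tp = T.copy(); Tp[j] += eps
            J[:, j] = (Tp - T_old - 0.5 * dt * (rhs(Tp, dx) + rhs(T_old, dx)) - F) / eps
        T = T - np.linalg.solve(J, F)                     # Newton-Raphson update
    return T

T = np.linspace(1.0, 0.1, 51)                             # assumed initial temperature profile
T = cn_newton_step(T, dt=1e-4, dx=1.0 / 50)
```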

  18. Study on the improvement of the convective differencing scheme for the high-accuracy and stable resolution of the numerical solution

    International Nuclear Information System (INIS)

    Shin, J. K.; Choi, Y. D.

    1992-01-01

    The QUICKER scheme has several attractive properties. However, under highly convective conditions, it produces overshoots and possibly some oscillations on each side of steps in the dependent variable when the flow is convected at an angle oblique to the grid lines. Fortunately, it is possible to modify the QUICKER scheme using non-linear and linear functional relationships. Details of the development of the polynomial upwinding scheme are given in this paper, where it is seen that this non-linear scheme also has third-order accuracy. This polynomial upwinding scheme is used as the basis for the SHARPER and SMARTER schemes. Another revised scheme (QUICKUP) was developed by partial modification of the QUICKER scheme using the CDS and UPWIND schemes. These revised schemes are tested on well known benchmark flows: two-dimensional pure convection flow over an oblique step, lid-driven cavity flow and buoyancy-driven cavity flow. The revised schemes remain absolutely monotonic, without overshoot or oscillation, and the QUICKUP scheme is more accurate than any other scheme in terms of relative accuracy. In the high Reynolds number lid-driven cavity flow, the SMARTER and SHARPER schemes retain a lower computational cost than the QUICKER and QUICKUP schemes, but the velocities computed by the revised schemes are lower than those predicted by the QUICKER scheme, which is strongly affected by overshoot and undershoot. Also, in the buoyancy-driven cavity flow, the SMARTER, SHARPER and QUICKUP schemes give acceptable results. (Author)
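
    The uniform-grid QUICK face interpolation that these bounded variants start from can be written down in a few lines; the fragment below also shows the overshoot next to a step that the revised schemes are designed to suppress (variable names are generic and no limiter is applied).

```python
def upwind_face(phi_U, phi_C, phi_D):
    """First-order upwind: take the value at the upstream node."""
    return phi_C

def quick_face(phi_U, phi_C, phi_D):
    """Quadratic upstream interpolation on a uniform grid.

    phi_D: downstream node, phi_C: upstream node, phi_U: far-upstream node.
    """
    return 0.75 * phi_C + 0.375 * phi_D - 0.125 * phi_U

# Next to a step (0 -> 1), unbounded QUICK overshoots the data range:
print(quick_face(phi_U=0.0, phi_C=1.0, phi_D=1.0))   # 1.125 > 1
print(upwind_face(phi_U=0.0, phi_C=1.0, phi_D=1.0))  # 1.0, bounded but only first order
```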

  19. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    Energy Technology Data Exchange (ETDEWEB)

    Cullum, J. [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States)

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.

  20. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 m at 10 m and 1 cm at 100 m. We use a high sensitivity, fast, intensified CCD camera, and a Nd:Yag passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...

  1. Validation of accuracy and stability of numerical simulation for 2-D heat transfer system by an entropy production approach

    Directory of Open Access Journals (Sweden)

    Brohi Ali Anwar

    2017-01-01

    Full Text Available The entropy production in a 2-D heat transfer system has been analyzed systematically using the finite volume method, in order to develop new criteria for numerical simulation in the case of multidimensional systems, with the aid of CFD codes. The steady-state heat conduction problem has been investigated for entropy production, and the entropy production profile has been calculated based upon the current approach. From the results for 2-D heat conduction, it can be found that the entropy production profile is stable and exhibits good agreement with the exact solution, and that the current approach is effective for measuring the accuracy and stability of numerical simulations for heat transfer problems.
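
    As an illustration of the quantity being profiled, the snippet below evaluates the local entropy production rate sigma = k |grad T|^2 / T^2 for a steady 2-D conduction field on a uniform grid; the temperature field and conductivity are arbitrary placeholders, and the paper's finite volume formulation is not reproduced.

```python
import numpy as np

k = 1.0                                              # thermal conductivity (placeholder)
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
T = 300.0 + 50.0 * X + 20.0 * np.sin(np.pi * Y)      # placeholder temperature field (K)

dTdx, dTdy = np.gradient(T, x, y)                    # axis 0 -> x, axis 1 -> y
sigma = k * (dTdx**2 + dTdy**2) / T**2               # local entropy production rate
total = np.trapz(np.trapz(sigma, y, axis=1), x)      # integrated over the domain
print(f"integrated entropy production: {total:.4e}")
```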

  2. High accuracy autonomous navigation using the global positioning system (GPS)

    Science.gov (United States)

    Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul

    1997-01-01

    The application of global positioning system (GPS) technology to improving the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real-time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be increased to 2 m if corrections are provided by the GPS wide area augmentation system.

  3. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence, inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms having high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
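
    A dense Tikhonov-regularized solve of a discretized first-kind integral equation (a generic out-of-focus blur kernel) is sketched below purely for orientation; the kernel, grid and regularization parameter are assumptions, and the paper works with the continuous operator and fast multiscale solvers rather than this normal-equation form.

```python
import numpy as np

n = 200
s = np.linspace(-1.0, 1.0, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * 0.05 ** 2))   # Gaussian blur kernel
K /= K.sum(axis=1, keepdims=True)                                 # row-normalized discretization

f_true = (np.abs(s) < 0.5).astype(float)                          # sharp "image"
g = K @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(n)  # blurred, noisy data

lam = 1e-3                                                        # regularization parameter
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)       # Tikhonov normal equations
```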

  4. A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation

    Directory of Open Access Journals (Sweden)

    Jinsong Hu

    2013-01-01

    Full Text Available We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme, which has a theoretical accuracy of O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. At last, numerical experiments demonstrate the theoretical results.

  5. Direct numerical simulation of combustion at high Reynolds numbers; Direkte Numerische Simulation der Verbrennung bei hoeheren Reynoldszahlen

    Energy Technology Data Exchange (ETDEWEB)

    Frouzakis, C. E.; Boulouchos, K.

    2005-12-15

    This comprehensive illustrated final report for the Swiss Federal Office of Energy (SFOE) reports on the work done at the Swiss Federal Institute of Technology in Zurich on the numerical simulation of combustion processes at high Reynolds numbers. The authors note that with appropriate extensive calculation effort, results can be obtained that demonstrate a high degree of accuracy. It is noted that a large part of the project work was devoted to the development of algorithms for the simulation of the combustion processes. Application work is also discussed with research on combustion stability being carried on. The direct numerical simulation (DNS) methods used are described and co-operation with other institutes is noted. The results of experimental work are compared with those provided by simulation and are discussed in detail. Conclusions and an outlook round off the report.

  6. High accuracy wavelength calibration for a scanning visible spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 A. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ≈0.25 A has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (≈0.005 A) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  7. Switched-capacitor techniques for high-accuracy filter and ADC design

    NARCIS (Netherlands)

    Quinn, P.J.; Roermund, van A.H.M.

    2007-01-01

    Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor

  8. Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning

    Science.gov (United States)

    Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes

    2014-05-01

    In the past few years, the need for high-accuracy positioning has been increasing, and spatial technologies have been widely used for this purpose. GNSS (Global Navigation Satellite System) has revolutionized geodetic positioning activities. Among the existing methods one can emphasize Precise Point Positioning (PPP) and network-based positioning. However, to achieve high accuracy with these methods, especially in real time, appropriate atmospheric modeling (ionosphere and troposphere) is indispensable. For the troposphere there are empirical models (for example, Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are desired, these models may not be appropriate for the Brazilian reality. NWP (Numerical Weather Prediction) models arise to minimize this limitation. In Brazil, CPTEC/INPE (Center for Weather Prediction and Climate Studies / Brazilian Institute for Spatial Researches) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. The main goal of this paper is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km) in PPP and network-based positioning. For PPP, data from dozens of stations over Brazilian territory, including the Amazon forest, were used. The results obtained with the NWP model were compared with those from the Hopfield model; the NWP model presented the best results in all experiments. For network-based positioning, data from the GNSS/SP Network in São Paulo State, Brazil, were used. This network presents the best configuration in the country for this kind of positioning. Currently the network is composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/). The

  9. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high quality, low-cost topographic data has never been easier due to recent developments of the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets in combination with numerical modelling have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate the models' performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3d was calibrated. These enriched temporal data, combining high-resolution time series with long-term coverage, significantly improved the calibration routines and refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  10. High-accuracy mass spectrometry for fundamental studies.

    Science.gov (United States)

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.

  11. Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement

    Directory of Open Access Journals (Sweden)

    Xianglei Liu

    2018-01-01

    Full Text Available The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement to monitor the displacement of a high-speed shaking table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to the traditional transducer technique for monitoring the dynamic response of the shaking table structure.

  12. An angle encoder for super-high resolution and super-high accuracy using SelfA

    Science.gov (United States)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-06-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period after

  13. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    International Nuclear Information System (INIS)

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the local discontinuous Galerkin method for transient groundwater flow and reactive solute transport problems, in order to make it possible to perform three-dimensional performance assessments of radioactive waste repositories at the earliest stage possible. The local discontinuous Galerkin method is a mixed finite element method, which is more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)

  14. An angle encoder for super-high resolution and super-high accuracy using SelfA

    International Nuclear Information System (INIS)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-01-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period

  15. High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2

    Science.gov (United States)

    Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.

    2016-01-01

    Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution [1.3 × 2.3 km²] and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.

  16. Fast and High Accuracy Wire Scanner

    CERN Document Server

    Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A

    2009-01-01

    Scanning of a high intensity particle beam imposes challenging requirements on a Wire Scanner system. It is expected to reach a scanning speed of 20 m s⁻¹ with a position accuracy of the order of 1 μm. In addition a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside of the vacuum chamber and the stator installed outside. The accurate position sensor will be mounted on the rotary shaft inside of the vacuum chamber and has to resist a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...

  17. Numerical solution of High-kappa model of superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Karamikhova, R. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    We present formulation and finite element approximations of High-kappa model of superconductivity which is valid in the high κ, high magnetic field setting and accounts for applied magnetic field and current. Major part of this work deals with steady-state and dynamic computational experiments which illustrate our theoretical results numerically. In our experiments we use Galerkin discretization in space along with Backward-Euler and Crank-Nicolson schemes in time. We show that for moderate values of κ, steady states of the model system, computed using the High-kappa model, are virtually identical with results computed using the full Ginzburg-Landau (G-L) equations. We illustrate numerically optimal rates of convergence in space and time for the L² and H¹ norms of the error in the High-kappa solution. Finally, our numerical approximations demonstrate some well-known experimentally observed properties of high-temperature superconductors, such as appearance of vortices, effects of increasing the applied magnetic field and the sample size, and the effect of applied constant current.

  18. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    Science.gov (United States)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance computing code for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented the data initialization and exchange between the computing nodes and the core solving module using a hybrid parallel iterative and direct solver. The numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing and sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.

  19. Climate change and high-resolution whole-building numerical modelling

    NARCIS (Netherlands)

    Blocken, B.J.E.; Briggen, P.M.; Schellen, H.L.; Hensen, J.L.M.

    2010-01-01

    This paper briefly discusses the need of high-resolution whole-building numerical modelling in the context of climate change. High-resolution whole-building numerical modelling can be used for detailed analysis of the potential consequences of climate change on buildings and to evaluate remedial

  20. On the Numerical Accuracy of Spreadsheets

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2010-10-01

    Full Text Available This paper discusses the numerical precision of five spreadsheets (Calc, Excel, Gnumeric, NeoOffice and Oleo running on two hardware platforms (i386 and amd64 and on three operating systems (Windows Vista, Ubuntu Intrepid and Mac OS Leopard. The methodology consists of checking the number of correct significant digits returned by each spreadsheet when computing the sample mean, standard deviation, first-order autocorrelation, F statistic in ANOVA tests, linear and nonlinear regression and distribution functions. A discussion about the algorithms for pseudorandom number generation provided by these platforms is also conducted. We conclude that there is no safe choice among the spreadsheets here assessed: they all fail in nonlinear regression and they are not suited for Monte Carlo experiments.
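
    The usual score behind such comparisons, the number of correct significant digits (the log relative error against a certified value), can be computed as below; the function and the sample values are illustrative, not the paper's test data.

```python
import math

def correct_digits(x, c, max_digits=15):
    """Number of correct significant digits of x against a certified value c (LRE)."""
    if x == c:
        return float(max_digits)
    if c != 0:
        lre = -math.log10(abs(x - c) / abs(c))   # relative error version
    else:
        lre = -math.log10(abs(x - c))            # absolute error when the target is zero
    return max(0.0, min(lre, float(max_digits)))

# e.g. a standard deviation returned as 1.6583123 against a certified 1.6583124
print(round(correct_digits(1.6583123, 1.6583124), 2))
```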

  1. High-accuracy measurements of the normal specular reflectance

    International Nuclear Information System (INIS)

    Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy

    2008-01-01

    The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat à l'Énergie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized to high accuracy. The described spectrophotometer is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity.

  2. A high accuracy land use/cover retrieval system

    Directory of Open Access Journals (Sweden)

    Alaa Hefnawy

    2012-03-01

    Full Text Available The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established for improving the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution, fast, adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to the Landsat-TM images to obtain a high-resolution dataset. The human-computer interactive system is based on a modified radial basis function for the retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi-resolution and the single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.

  3. High accuracy satellite drag model (HASDM)

    Science.gov (United States)

    Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent

    The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.

  4. Numerical considerations for Lagrangian stochastic dispersion models: Eliminating rogue trajectories, and the importance of numerical accuracy

    Science.gov (United States)

    When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...

  5. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    Science.gov (United States)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

    In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed, which includes a new type of three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.

  6. Evaluate the accuracy of the numerical solution of hydrogeological problems of mass transfer

    Directory of Open Access Journals (Sweden)

    Yevhrashkina G.P.

    2014-12-01

    Full Text Available In hydrogeological problems of quantifying aquifer pollution, errors begin to accumulate from the moment the regime observation network is organized, since this network is the source of information on groundwater pollution used to evaluate migration parameters for subsequent prognostic calculations. An optimal element of the regime observation network should consist of three boreholes along the groundwater flow at equal distances from one another, and three boreholes transverse to the flow, also at equal distances. If the line of observation boreholes coincides with the streamline along which the direct migration problem is subsequently solved, the error will be minimal. The theoretical basis and the results of numerical experiments are presented for assessing the accuracy of direct predictive problems of planned groundwater migration in the zone of full water saturation. For the vadose zone, problems of vertical salt and moisture transport are considered. All studies were performed by comparing the results of fundamental and approximate solutions over a wide range of process characteristics, which are discussed in relation to the ecological and hydrogeological conditions of mining regions, using the Western Donbass as an example.

  7. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
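
    A greatly simplified floating-point sketch of the piecewise-cubic idea (cubic segments approximating one period of the sine over a small phase table), far from the paper's fixed-point PSAC design and its spectral-domain coefficient derivation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

segments = 32                                     # coarse phase table (illustrative size)
knots = np.linspace(0.0, 2.0 * np.pi, segments + 1)
y = np.sin(knots)
y[-1] = y[0]                                      # enforce exact periodicity for the spline
psac = CubicSpline(knots, y, bc_type='periodic')  # one cubic polynomial per phase segment

phase = np.linspace(0.0, 2.0 * np.pi, 100001)     # dense phase sweep
err = psac(phase) - np.sin(phase)
print(f"max abs error: {np.max(np.abs(err)):.2e}")
print(f"rms error:     {np.sqrt(np.mean(err ** 2)):.2e}")
```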

  8. Application of Numerical Integration and Data Fusion in Unit Vector Method

    Science.gov (United States)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the condition equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can therefore play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically; precision and efficiency are improved further. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observation methods and techniques, the types and precision of the observational data have improved substantially, which in turn demands higher precision in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration for calculating the perturbations has been introduced into the UVM. The accuracy of the dynamical model then matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly; the accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified and addressed: the calculation of the approximate state transition matrix is simplified, and the weighting strategy has been improved for data of different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) after numerical integration has been introduced into the UVM, the accuracy of orbit determination is improved markedly, and it is suited to the high-accuracy data of

  9. Numerical investigation on exterior conformal mappings with application to airfoils

    International Nuclear Information System (INIS)

    Mohamad Rashidi Md Razali; Hu Laey Nee

    2000-01-01

    A numerical method is described for computing a conformal map from an exterior region onto the exterior of the unit disk. The numerical method is based on a boundary integral equation which is similar to the Kerzman-Stein integral equation for interior mapping. Some examples show that numerical results of high accuracy can be obtained provided that the boundaries are smooth. This numerical method has been applied to the mapping of airfoils. However, because the parametric representation of an airfoil is not known, a cubic spline interpolation method has been used, as sketched below. Some numerical examples with satisfactory results have been obtained for symmetrical and cambered airfoils. (Author)
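
    A minimal sketch of the cubic-spline step mentioned above: discrete points on a closed boundary (a unit circle is used here as a stand-in for an airfoil) are given a periodic parametric spline representation, which supplies the smooth boundary description the mapping requires. All values are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Discrete points on a closed boundary (unit circle as a placeholder profile)
t_nodes = np.linspace(0.0, 2.0 * np.pi, 33)
x_nodes, y_nodes = np.cos(t_nodes), np.sin(t_nodes)
x_nodes[-1], y_nodes[-1] = x_nodes[0], y_nodes[0]          # close the curve exactly

sx = CubicSpline(t_nodes, x_nodes, bc_type='periodic')     # parametric spline x(t)
sy = CubicSpline(t_nodes, y_nodes, bc_type='periodic')     # parametric spline y(t)

t = np.linspace(0.0, 2.0 * np.pi, 2001)
radius = np.hypot(sx(t), sy(t))
print(f"max deviation from the unit circle: {np.max(np.abs(radius - 1.0)):.2e}")
```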

  10. Three-Dimensional Imaging and Numerical Reconstruction of Graphite/Epoxy Composite Microstructure Based on Ultra-High Resolution X-Ray Computed Tomography

    Science.gov (United States)

    Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.

    2014-01-01

    A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating the high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.

  11. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    Science.gov (United States)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow tracking method with a two-stage concentrating solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  12. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained here increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the aforementioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid

  13. High accuracy digital aging monitor based on PLL-VCO circuit

    International Nuclear Information System (INIS)

    Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong

    2015-01-01

    As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect thanks to a characteristic of the PLL, whose frequency has no relationship with the circuit aging phenomenon. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology, and its area occupies 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy by about 2.4% at high temperature and by about 18.7% at high voltage. (semiconductor integrated circuits)

  14. A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.

    Science.gov (United States)

    Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G

    2017-02-01

    The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvers and reviewing British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (so called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvers may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.

  15. High current high accuracy IGBT pulse generator

    International Nuclear Information System (INIS)

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles

  16. Process planning and accuracy distribution of marine power plant modularization

    Directory of Open Access Journals (Sweden)

    ZHANG Jinguo

    2018-02-01

    Full Text Available [Objectives] Modular shipbuilding can shorten the design and construction cycle, lower production costs and improve product quality, but it requires higher shipbuilding capabilities, especially for the installation of power plants. Because of such characteristics of modular shipbuilding as the high precision required at docking links, the long equipment installation chain and the large number of docking interfaces, docking installation is very difficult: large docking deviations and low docking installation accuracy can lead to abnormal vibration of the equipment. In order to solve this problem, [Methods] on the basis of domestic shipbuilding capability, numerical calculation methods are used to analyze the accuracy distribution of modular installation. [Results] The results show that the accuracy distribution over the different docking links is reasonable and feasible, and that the setting of the adjustment allowance matches the requirements of shipbuilding. [Conclusions] This method provides a reference for the modular construction of marine power plants.

  17. Numerical prediction on turbulent heat transfer of a spacer ribbed fuel rod for high temperature gas-cooled reactors

    International Nuclear Information System (INIS)

    Takase, Kazuyuki

    1994-11-01

    The turbulent heat transfer of a fuel rod with three-dimensional trapezoidal spacer ribs for high temperature gas-cooled reactors was analyzed numerically using the k-ε turbulence model, and investigated experimentally using a simulated fuel rod under helium gas conditions with a maximum outlet temperature of 1000 °C and a pressure of 4 MPa. From the experimental results, it was found that the turbulent heat transfer coefficients of the fuel rod were 18 to 80% higher than those of a concentric smooth annulus in the region of Reynolds numbers exceeding 2000. On the other hand, the predicted average Nusselt number of the fuel rod agreed well with the heat transfer correlation obtained from the experimental data, within a relative error of 10%, for Reynolds numbers above 5000. It was verified that the numerical analysis results had sufficient accuracy. Furthermore, the numerical prediction could clarify quantitatively the effects of the heat transfer augmentation by the spacer rib and of the axial velocity increase due to the reduction in the annular channel cross-section. (author)

  18. High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS

    OpenAIRE

    O.S.Kushnir; O.A.Bevz; O.G.Vlokh

    2000-01-01

    Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with the high-accuracy null-polarimetric technique. The behaviour of the effect in ferroelectric phase is referred to quadratic spontaneous electrooptics.

  19. Achieving High Accuracy in Calculations of NMR Parameters

    DEFF Research Database (Denmark)

    Faber, Rasmus

    quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that makes accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical......, but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show...... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction...

  20. High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2017-01-01

    The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have ... matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...

  1. High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.

    2012-07-01

    We have developed a portable, computerized, low-consumption measurement system called the High Accuracy Piezoelectric Kinemometer, herein VUAE. The high accuracy obtained with the VUAE makes it suitable for producing reference measurements for systems that measure vehicle speeds; the VUAE can therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) ultrasonic transmitter-receiver pairs, herein E-Rult. The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers capture the echoes when the vehicle crosses the barriers. Digital processing of the echo signals lets us obtain usable signals. Cross-correlation techniques then make possible a highly exact estimation of the vehicle speed: the logged crossing times and the known distance between the n ultrasonic barriers yield the speed estimate. VUAE speed measurements were compared to a speed reference system based on piezoelectric cables. (Author) 11 refs.
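
    A hedged sketch of the cross-correlation idea: two synthetic "barrier" signals, the second being a delayed copy of the first, are cross-correlated to recover the crossing delay, and the speed follows from the known barrier spacing. The sampling rate, spacing and pulse shape are made-up values, not the VUAE parameters.

```python
import numpy as np

fs = 100_000.0                 # sample rate [Hz] (illustrative)
d = 0.50                       # spacing between the two ultrasonic barriers [m]
v_true = 22.2                  # vehicle speed [m/s] -> true delay is d / v_true

t = np.arange(0, 0.1, 1.0 / fs)
pulse = lambda t0: np.exp(-((t - t0) / 2e-3) ** 2)      # envelope of a barrier crossing
rng = np.random.default_rng(0)
s1 = pulse(0.030) + 0.05 * rng.standard_normal(t.size)
s2 = pulse(0.030 + d / v_true) + 0.05 * rng.standard_normal(t.size)

corr = np.correlate(s2, s1, mode='full')                # peak index gives the sample lag
lag = np.argmax(corr) - (t.size - 1)
delay = lag / fs
print(f"estimated speed: {d / delay:.1f} m/s (true {v_true} m/s)")
```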

  2. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    International Nuclear Information System (INIS)

    Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T

    2006-01-01

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they have been treated and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, their advantages and drawbacks, for solving QO related problems.

  3. Why is a high accuracy needed in dosimetry

    International Nuclear Information System (INIS)

    Lanzl, L.H.

    1976-01-01

    Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy of γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and an optimal degree of tumor control

  4. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...

  5. High accuracy acoustic relative humidity measurement in duct flow with air.

    Science.gov (United States)

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
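
    The line-averaged velocity and speed of sound in such transit-time instruments follow from the upstream and downstream travel times between the two transducers; a generic sketch of that inversion (illustrative geometry and values, not this sensor's calibration) is shown below. Retrieving temperature and RH from the recovered sound speed additionally requires a humid-air sound-speed model, which is not reproduced here.

```python
import numpy as np

L = 0.30                      # acoustic path length [m] (illustrative)
theta = np.deg2rad(45.0)      # angle between acoustic path and flow axis

def transit_times(c, v_axial):
    """Forward model: up- and downstream transit times for sound speed c and axial flow v."""
    v_path = v_axial * np.cos(theta)          # flow component along the acoustic path
    return L / (c - v_path), L / (c + v_path)

def invert(t_up, t_down):
    """Recover sound speed and axial velocity from the two measured transit times."""
    c = 0.5 * L * (1.0 / t_up + 1.0 / t_down)
    v_path = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
    return c, v_path / np.cos(theta)

t_up, t_down = transit_times(c=350.0, v_axial=8.0)
print(invert(t_up, t_down))   # -> (350.0, 8.0)
```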

  6. Two high accuracy digital integrators for Rogowski current transducers

    Science.gov (United States)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot obtain a stable and accurate output because the DC component in the original signal accumulates, which leads to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators to be used in Rogowski current transducers instead of traditional analog integrators for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied in improving the Al-Alaoui integrator to change its DC response and obtain an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators have better performance than analog integrators. Simulation models are built for the purpose of verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in the steady-state response, the transient-state response, and under changing temperature conditions.
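
    A minimal sketch of an Al-Alaoui-type digital integrator with a simple attenuation (leak) on the recursive term to bound DC drift; the paper's full design adds a PID feedback controller, which is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def leaky_al_alaoui_integrator(x, fs, leak=0.9999):
    """Al-Alaoui integrator y[n] = y[n-1] + (7T/8)*(x[n] + x[n-1]/7), with a leak on y[n-1]."""
    T = 1.0 / fs
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = leak * y[n - 1] + (7.0 * T / 8.0) * (x[n] + x[n - 1] / 7.0)
    return y

# Rogowski-style test: the coil output is proportional to di/dt; integrating recovers i(t).
fs, f = 100_000.0, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
i_true = np.sin(2 * np.pi * f * t)
didt = 2 * np.pi * f * np.cos(2 * np.pi * f * t) + 0.01   # small DC offset models sensor bias
i_rec = leaky_al_alaoui_integrator(didt, fs)
print(f"amplitude error after settling: {abs(np.max(i_rec[-2000:]) - 1.0):.3e}")
```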

  7. Efficient numerical simulations of many-body localized systems

    Energy Technology Data Exchange (ETDEWEB)

    Pollmann, Frank [Max-Planck-Institut fuer Physik komplexer Systeme, 01187 Dresden (Germany); Khemani, Vedika; Sondhi, Shivaji [Physics Department, Princeton University, Princeton, NJ 08544 (United States)

    2016-07-01

    Many-body localization (MBL) occurs in isolated quantum systems when Anderson localization persists in the presence of finite interactions. To understand this phenomenon, the development of new, efficient numerical methods to find highly excited eigenstates is essential. We introduce a variant of the density-matrix renormalization group (DMRG) method that obtains individual highly excited eigenstates of MBL systems to machine precision accuracy at moderate-large disorder. This method explicitly takes advantage of the local spatial structure characterizing MBL eigenstates.

  8. Electron ray tracing with high accuracy

    International Nuclear Information System (INIS)

    Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.

    1986-01-01

    An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens

  9. Numerical Analysis on the High-Strength Concrete Beams Ultimate Behaviour

    Science.gov (United States)

    Smarzewski, Piotr; Stolarski, Adam

    2017-10-01

    The development of technologies for producing high-strength concrete (HSC) beams, with the aim of creating a secure and durable material, is closely linked with numerical models of the real objects. Three-dimensional nonlinear finite element models of reinforced high-strength concrete beams with a complex geometry have been investigated in this study. The numerical analysis is performed using the ANSYS finite element package. The arc-length (A-L) parameters and the adaptive descent (AD) parameters are used with the Newton-Raphson method to trace the complete load-deflection curves. Experimental and finite element modelling results are compared graphically and numerically. Comparison of these results indicates the correctness of the failure criteria assumed for the high-strength concrete and the steel reinforcement. The results of the numerical simulation are sensitive to the modulus of elasticity and to the shear transfer coefficient for an open crack assigned to the high-strength concrete. The full nonlinear load-deflection curves at mid-span of the beams, the development of strain in the compressive concrete and the development of strain in the tensile bar are in good agreement with the experimental results. Numerical results for the smeared crack patterns agree qualitatively with the test data as to location, direction, and distribution. The model was capable of predicting the initiation and propagation of flexural and diagonal cracks. It was concluded that the finite element model successfully captured the inelastic flexural behaviour of the beams to failure.

  10. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, Eric M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  11. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1996-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed

  12. Numerical evaluation of high energy particle effects in magnetohydrodynamics

    International Nuclear Information System (INIS)

    White, R.B.; Wu, Y.

    1994-03-01

    The interaction of high energy ions with magnetohydrodynamic modes is analyzed. A numerical code is developed which evaluates the contribution of the high energy particles to mode stability using orbit averaging of motion in either analytic or numerically generated equilibria through Hamiltonian guiding center equations. A dispersion relation is then used to evaluate the effect of the particles on the linear mode. Generic behavior of the solutions of the dispersion relation is discussed and dominant contributions of different components of the particle distribution function are identified. Numerical convergence of Monte-Carlo simulations is analyzed. The resulting code ORBIT provides an accurate means of comparing experimental results with the predictions of kinetic magnetohydrodynamics. The method can be extended to include self consistent modification of the particle orbits by the mode, and hence the full nonlinear dynamics of the coupled system

  13. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed that is within a factor of 35 of the hardware execution speed on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.

  14. Tests of numerical simulation algorithms for the Kubo oscillator

    International Nuclear Information System (INIS)

    Fox, R.F.; Roy, R.; Yu, A.W.

    1987-01-01

    Numerical simulation algorithms for multiplicative noise (white or colored) are tested for accuracy against closed-form expressions for the Kubo oscillator. Direct white noise simulations lead to spurious decay of the modulus of the oscillator amplitude. A straightforward colored noise algorithm greatly reduces this decay and also provides highly accurate results in the white noise limit
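
    A minimal sketch contrasting a naive Euler update of the Kubo oscillator dx/dt = i(ω0 + σξ(t))x with the exact exponential (Stratonovich) update, which preserves |x| for every realization; the naive update visibly distorts the modulus. This is a generic illustration, not the algorithm tested in the paper, and the parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
omega0, sigma, dt, n_steps = 1.0, 0.5, 1e-3, 20000

x_euler = 1.0 + 0.0j
x_exact = 1.0 + 0.0j
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    phase = omega0 * dt + sigma * dW
    x_euler *= (1.0 + 1j * phase)          # naive update: |x| is not preserved
    x_exact *= np.exp(1j * phase)          # exact (Stratonovich) update: |x| stays 1

print(f"|x| naive Euler : {abs(x_euler):.6f}")
print(f"|x| exponential : {abs(x_exact):.6f}")   # exactly 1 up to round-off
```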

  15. Accuracy of hiatal hernia detection with esophageal high-resolution manometry

    NARCIS (Netherlands)

    Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.

    2015-01-01

    The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this

  16. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    Directory of Open Access Journals (Sweden)

    Cees van der Geld

    2010-08-01

    Full Text Available An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  17. Numerical soliton-like solutions of the potential Kadomtsev-Petviashvili equation by the decomposition method

    International Nuclear Information System (INIS)

    Kaya, Dogan; El-Sayed, Salah M.

    2003-01-01

    In this Letter we present the Adomian decomposition method (ADM for short) for obtaining numerical soliton-like solutions of the potential Kadomtsev-Petviashvili (PKP for short) equation. We prove the convergence of the ADM, and we obtain the exact and numerical solitary-wave solutions of the PKP equation for certain initial conditions. The ADM yields an analytic approximate solution with a fast convergence rate and high accuracy, consistent with previous works. The numerical solutions are compared with the known analytical solutions

  18. Numerically Stable Evaluation of Moments of Random Gram Matrices With Applications

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2017-01-01

    This paper focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but numerical evaluation thereof is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed-form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.

  19. Numerically Stable Evaluation of Moments of Random Gram Matrices With Applications

    KAUST Repository

    Elkhalil, Khalil

    2017-07-31

    This paper focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but numerical evaluation thereof is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed-form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.

  20. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. · Clearer, simpler descriptions and explanations of the various numerical methods · Two new types of numerical problems; accurately solving partial differential equations with the included software and computing line integrals in the complex plane.

  1. Accuracy of Binary Black Hole waveforms for Advanced LIGO searches

    Science.gov (United States)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with the LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large-scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio + high spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and to assess their viability for Advanced LIGO searches.

  2. High accuracy 3D electromagnetic finite element analysis

    International Nuclear Information System (INIS)

    Nelson, E.M.

    1997-01-01

    A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics

  3. Highly uniform parallel microfabrication using a large numerical aperture system

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [School of Electrical Engineering and Automation, Anhui University, Hefei 230601 (China); Zhang, Chen-Chu; Hu, Yan-Lei; Wang, Chao-Wei; Li, Jia-Wen; Chu, Jia-Ru; Wu, Dong, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [CAS Key Laboratory of Mechanical Behavior and Design of Materials, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026 (China)

    2016-07-11

    In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ∼75% to >97%, owing to the critical consideration of the aperture function and apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of square and triangle, seven microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables the laser parallel processing technology to realize uniform microstructures and functional devices in the microfabrication system with a large numerical aperture objective.

  4. Interleaved numerical renormalization group as an efficient multiband impurity solver

    Science.gov (United States)

    Stadler, K. M.; Mitchell, A. K.; von Delft, J.; Weichselbaum, A.

    2016-06-01

    Quantum impurity problems can be solved using the numerical renormalization group (NRG), which involves discretizing the free conduction electron system and mapping to a "Wilson chain." It was shown recently that Wilson chains for different electronic species can be interleaved by use of a modified discretization, dramatically increasing the numerical efficiency of the RG scheme [Phys. Rev. B 89, 121105(R) (2014), 10.1103/PhysRevB.89.121105]. Here we systematically examine the accuracy and efficiency of the "interleaved" NRG (iNRG) method in the context of the single impurity Anderson model, the two-channel Kondo model, and a three-channel Anderson-Hund model. The performance of iNRG is explicitly compared with "standard" NRG (sNRG): when the average number of states kept per iteration is the same in both calculations, the accuracy of iNRG is equivalent to that of sNRG but the computational costs are significantly lower in iNRG when the same symmetries are exploited. Although iNRG weakly breaks SU(N ) channel symmetry (if present), both accuracy and numerical cost are entirely competitive with sNRG exploiting full symmetries. iNRG is therefore shown to be a viable and technically simple alternative to sNRG for high-symmetry models. Moreover, iNRG can be used to solve a range of lower-symmetry multiband problems that are inaccessible to sNRG.

  5. High-accuracy energy formulas for the attractive two-site Bose-Hubbard model

    Science.gov (United States)

    Ermakov, Igor; Byrnes, Tim; Bogoliubov, Nikolay

    2018-02-01

    The attractive two-site Bose-Hubbard model is studied within the framework of the analytical solution obtained by the application of the quantum inverse scattering method. The structure of the ground and excited states is analyzed in terms of solutions of Bethe equations, and an approximate solution for the Bethe roots is given. This yields approximate formulas for the ground-state energy and for the first excited-state energy. The obtained formulas work with remarkable precision for a wide range of parameters of the model, and are confirmed numerically. An expansion of the Bethe state vectors into a Fock space is also provided for evaluation of expectation values, although this does not have accuracy similar to that of the energies.

  6. Numerical orbit generators of artificial earth satellites

    Science.gov (United States)

    Kugar, H. K.; Dasilva, W. C. C.

    1984-04-01

    A numerical orbit integrator is presented that contains updates and improvements relative to the previous ones used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models resulting from the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Characteristics of numerical accuracy, processing speed and memory savings, as well as usability aspects, were also considered. A user's handbook, the whole program listing and a qualitative analysis of accuracy, processing time and orbit perturbation effects are included as well.
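
    As a generic illustration of the core task of such an orbit generator (unperturbed two-body propagation with a fixed-step Runge-Kutta integrator, not the DMC/INPE code and without any perturbation models):

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

def two_body(state):
    """Keplerian acceleration; state = [x, y, z, vx, vy, vz] in km and km/s."""
    r, v = state[:3], state[3:]
    a = -MU * r / np.linalg.norm(r) ** 3
    return np.concatenate((v, a))

def rk4_step(state, dt):
    k1 = two_body(state)
    k2 = two_body(state + 0.5 * dt * k1)
    k3 = two_body(state + 0.5 * dt * k2)
    k4 = two_body(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Circular orbit at 7000 km radius, propagated for one orbital period
r0 = 7000.0
state = np.array([r0, 0.0, 0.0, 0.0, np.sqrt(MU / r0), 0.0])
period = 2 * np.pi * np.sqrt(r0 ** 3 / MU)
n_steps = 600
for _ in range(n_steps):
    state = rk4_step(state, period / n_steps)
print(f"radius error after one period: {abs(np.linalg.norm(state[:3]) - r0):.3e} km")
```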

  7. The effect of pattern overlap on the accuracy of high resolution electron backscatter diffraction measurements

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2015-08-15

    High resolution, cross-correlation-based, electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest but overlap of patterns from the two grains could reduce accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary so that the effect on the accuracy of the cross correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for 200nm step size in Zircaloy-4.

  8. Read-only high accuracy volume holographic optical correlator

    Science.gov (United States)

    Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2011-10-01

    A read-only volume holographic correlator (VHC) is proposed. After the recording of all of the correlation database pages by angular multiplexing, a stand-alone read-only high accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Since two different lasers are employed for recording and readout, respectively, the optical alignment tolerance of the laser illumination on the SLM is very tight. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.

  9. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  10. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  11. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    Energy Technology Data Exchange (ETDEWEB)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca, E-mail: gioele.janett@irsol.ch [Istituto Ricerche Solari Locarno (IRSOL), 6605 Locarno-Monti (Switzerland)

    2017-08-20

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  12. Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...

  13. Calculating qP-wave traveltimes in 2-D TTI media by high-order fast sweeping methods with a numerical quartic equation solver

    Science.gov (United States)

    Han, Song; Zhang, Wei; Zhang, Jie

    2017-09-01

    A fast sweeping method (FSM) determines the first-arrival traveltimes of seismic waves by sweeping the velocity model in different directions while applying a local solver. It is an efficient way to numerically solve Hamilton-Jacobi equations for traveltime calculations. In this study, we develop an improved FSM to calculate the first-arrival traveltimes of quasi-P (qP) waves in 2-D tilted transversely isotropic (TTI) media. A local solver utilizes the coupled slowness surface of qP and quasi-SV (qSV) waves to form a quartic equation, and solves it numerically to obtain possible traveltimes of the qP-wave. The proposed quartic solver utilizes Fermat's principle to limit the range of the possible solution, then uses a bisection procedure to efficiently determine the real roots. With causality enforced during the sweepings, our FSM converges quickly in a few iterations, with the exact number depending on the complexity of the velocity model. To improve the accuracy, we employ high-order finite difference schemes and derive the second-order formulae. There is no weak anisotropy assumption, and no approximation is made to the complex slowness surface of the qP-wave. In comparison with traveltimes calculated by a horizontal slowness shooting method, the validity and accuracy of our FSM are demonstrated.
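
    A hedged sketch of the local root-finding ingredient described above: a quartic in the trial traveltime is evaluated on a physically admissible bracket (standing in for the Fermat-principle bound) and bisection isolates the real root. The polynomial and bracket are illustrative; the actual TTI slowness-surface coefficients are not reproduced.

```python
def bisect_root(poly, lo, hi, tol=1e-12, max_iter=200):
    """Find a root of poly (callable) in [lo, hi], assuming a sign change on the bracket."""
    flo, fhi = poly(lo), poly(hi)
    if flo * fhi > 0.0:
        raise ValueError("no sign change on the admissible interval")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = poly(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid <= 0.0:
            hi, fhi = mid, fmid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Illustrative quartic with a known root at t = 2 inside the bracket [1, 3]
quartic = lambda t: (t - 2.0) * (t + 1.0) * (t ** 2 + 1.0)
print(bisect_root(quartic, 1.0, 3.0))   # ~2.0
```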

  14. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    Science.gov (United States)

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
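    The k-mer-counting distance idea can be sketched roughly as follows. The fractional shared-k-mer measure here is a generic illustration, not MUSCLE's exact kmer distance definition, and the sequences are made up.

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_distance(a, b, k=3):
    """Fraction of k-mers NOT shared between two sequences (0 means identical k-mer content)."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum(min(ca[w], cb[w]) for w in ca.keys() & cb.keys())
    total = min(sum(ca.values()), sum(cb.values()))
    return 1.0 - shared / total if total else 1.0

print(kmer_distance("MKVLITGAGSG", "MKVLVTGAGSG"))
```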

  15. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    Science.gov (United States)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
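    The basic idea (cubic interpolation of the sampled data, then evaluation of the finite Fourier transform at arbitrary frequencies) can be sketched as below. This is only a rough illustration on a synthetic signal; it is not Morelli's algorithm, which pairs the cubic fit with a chirp z-transform for efficiency.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import quad

# Sampled time-domain data (toy signal: damped sinusoid)
t = np.linspace(0.0, 10.0, 201)
x = np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 1.3 * t)

spline = CubicSpline(t, x)           # cubic interpolation of the samples

def finite_fourier(f_hz):
    """Finite Fourier transform X(f) = int_0^T x(t) exp(-2*pi*i*f*t) dt via the spline."""
    re, _ = quad(lambda s: spline(s) * np.cos(2 * np.pi * f_hz * s), t[0], t[-1], limit=400)
    im, _ = quad(lambda s: -spline(s) * np.sin(2 * np.pi * f_hz * s), t[0], t[-1], limit=400)
    return re + 1j * im

# Arbitrary frequency resolution: evaluate on a fine grid near the expected spectral peak
freqs = np.linspace(1.0, 1.6, 61)
X = np.array([finite_fourier(f) for f in freqs])
print(freqs[np.argmax(np.abs(X))])   # should be close to 1.3 Hz
```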

  16. A study on temporal accuracy of OpenFOAM

    Directory of Open Access Journals (Sweden)

    Sang Bong Lee

    2017-07-01

    The Crank–Nicolson scheme in the native OpenFOAM source libraries was not able to provide 2nd-order temporal accuracy of velocity and pressure, since the volume flux of the convective nonlinear terms was only 1st-order accurate in time. In the present study, the simplest way of obtaining the volume flux with 2nd-order accuracy was proposed by using old fluxes. A possible numerical instability originating from the explicit estimation of volume fluxes could be handled by introducing a weighting factor, determined by observing the ratio of the finally corrected volume flux to the intermediate volume flux at the previous step. The new calculation of volume fluxes provided temporally 2nd-order accurate velocity and pressure. The improvement in temporal accuracy was validated by numerical simulations of a 2D Taylor–Green vortex, for which an exact solution is known, and of 2D vortex shedding from a circular cylinder.
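    The flux treatment can be sketched minimally as follows, under assumed variable names and a made-up blending rule; it is an illustration of the idea, not the actual OpenFOAM implementation. The convective volume flux at the new time level is extrapolated from the two previous levels and blended with the lagged flux through a weighting factor.

```python
import numpy as np

def second_order_flux(phi_n, phi_nm1, weight):
    """Estimate the volume flux at t^{n+1} to 2nd order from the two previous time levels.

    phi_n, phi_nm1 : fluxes at time levels n and n-1
    weight         : blending factor in [0, 1]; weight = 1 gives the pure
                     2nd-order extrapolation, smaller values damp it for stability.
    """
    phi_extrap = 1.5 * phi_n - 0.5 * phi_nm1      # 2nd-order (Adams-Bashforth-like) extrapolation
    return weight * phi_extrap + (1.0 - weight) * phi_n

# Toy usage with made-up face-flux arrays
phi_n = np.array([1.00, 0.98, 1.02])
phi_nm1 = np.array([0.95, 0.97, 1.00])
print(second_order_flux(phi_n, phi_nm1, weight=0.8))
```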

  17. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Directory of Open Access Journals (Sweden)

    Majda Paweł

    2017-11-01

    This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe’s numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, both before and after correction of the positioning error of the Xs axis.

  18. Accuracy and repeatability positioning of high-performance lathe for non-circular turning

    Science.gov (United States)

    Majda, Paweł; Powałka, Bartosz

    2017-11-01

    This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe's numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, both before and after correction of the positioning error of the Xs axis.

  19. High accuracy interface characterization of three phase material systems in three dimensions

    DEFF Research Database (Denmark)

    Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus

    2010-01-01

    Quantification of interface properties such as two phase boundary area and triple phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained...... by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high accuracy method of calculating two phase surface areas and triple phase length of triple phase systems from subvoxel accuracy segmentations of constituent phases. The method performs a three phase...... polygonization of the interface boundaries which results in a non-manifold mesh of connected faces. We show how the triple phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives...

  20. Reactions, accuracy and response complexity of numerical typing on touch screens.

    Science.gov (United States)

    Lin, Cheng-Jhe; Wu, Changxu

    2013-01-01

    Touch screens are popular nowadays as seen on public kiosks, industrial control panels and personal mobile devices. Numerical typing is one frequent task performed on touch screens, but this task on touch screen is subject to human errors and slow responses. This study aims to find innate differences of touch screens from standard physical keypads in the context of numerical typing by eliminating confounding issues. Effects of precise visual feedback and urgency of numerical typing were also investigated. The results showed that touch screens were as accurate as physical keyboards, but reactions were indeed executed slowly on touch screens as signified by both pre-motor reaction time and reaction time. Provision of precise visual feedback caused more errors, and the interaction between devices and urgency was not found on reaction time. To improve usability of touch screens, designers should focus more on reducing response complexity and be cautious about the use of visual feedback. The study revealed that slower responses on touch screens involved more complex human cognition to formulate motor responses. Attention should be given to designing precise visual feedback appropriately so that distractions or visual resource competitions can be avoided to improve human performance on touch screens.

  1. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
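    The numerical-error issue can be illustrated with a toy sketch: a single nonlinear reservoir dS/dt = p - k*S^a with made-up parameters (not the authors' hydrologic model), integrated once with a coarse fixed-step explicit Euler scheme and once with an adaptive-step explicit method with error control.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy nonlinear reservoir: dS/dt = p - k * S**a   (parameters are illustrative only)
p, k, a = 2.0, 0.8, 1.5
rhs = lambda t, S: p - k * S**a

# Coarse fixed-step, first-order explicit Euler (the "cheap" implementation)
dt, T = 1.0, 30.0
S_euler, t = [1.0], 0.0
while t < T:
    S_euler.append(S_euler[-1] + dt * rhs(t, S_euler[-1]))
    t += dt

# Adaptive-step explicit integration with error control (the "accurate" implementation)
sol = solve_ivp(rhs, (0.0, T), [1.0], rtol=1e-8, atol=1e-10)

print("fixed-step Euler, S(T):", S_euler[-1])
print("adaptive-step,    S(T):", sol.y[0, -1])
```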

  2. Cause and Cure-Deterioration in Accuracy of CFD Simulations with Use of High-Aspect-Ratio Triangular/Tetrahedral Grids

    Science.gov (United States)

    Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji

    2017-01-01

    In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, (a) accurate solution of a high-Reynolds-number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those in the direction parallel to the wall, so that the use of grid cells with extremely high aspect ratios (10^3 to 10^6) may become mandatory; and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations on triangular/tetrahedral grids tends to deteriorate rapidly as the cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurately resolving high-Reynolds-number flow fields near a solid wall, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells, as will be presented in the main paper. By avoiding the pitfalls identified by this 2D framework and its 3D extension, it is shown numerically that the accuracy deterioration can be avoided.

  3. High-order dynamic lattice method for seismic simulation in anisotropic media

    Science.gov (United States)

    Hu, Xiaolin; Jia, Xiaofeng

    2018-03-01

    The discrete particle-based dynamic lattice method (DLM) offers an approach to simulate elastic wave propagation in anisotropic media by calculating the anisotropic micromechanical interactions between these particles based on the directions of the bonds that connect them in the lattice. To build such a lattice, the media are discretized into particles. This discretization inevitably leads to numerical dispersion. The basic lattice unit used in the original DLM only includes interactions between the central particle and its nearest neighbours; therefore, it represents the first-order form of a particle lattice. The first-order lattice suffers from numerical dispersion compared with other numerical methods, such as high-order finite-difference methods, in terms of seismic wave simulation. Due to its unique way of discretizing the media, the particle-based DLM no longer solves elastic wave equations; this means that one cannot build a high-order DLM by simply creating a high-order discrete operator to better approximate a partial derivative operator. To build a high-order DLM, we carry out a thorough dispersion analysis of the method and discover that by adding more neighbouring particles into the lattice unit, the DLM yields different orders of spatial accuracy. According to the dispersion analysis, the high-order DLM presented here can be adapted to the spatial accuracy requirements of seismic wave simulations. For any given spatial accuracy, we can design a corresponding high-order lattice unit to satisfy the accuracy requirement. Numerical tests show that the high-order DLM improves the accuracy of elastic wave simulation in anisotropic media.

  4. Spatial and temporal accuracy of asynchrony-tolerant finite difference schemes for partial differential equations at extreme scales

    Science.gov (United States)

    Kumari, Komal; Donzis, Diego

    2017-11-01

    Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. The stability of the AT schemes, which depends upon the history and random nature of the delays, is also discussed. Support from NSF is gratefully acknowledged.

  5. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    Directory of Open Access Journals (Sweden)

    Guanwu Zhou

    2014-07-01

    Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed, and the hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system’s performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10^−5/°C and 29.5 × 10^−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10^−5/°C and 2.1 × 10^−5/°C, respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor.
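    The ELM calibration step can be sketched roughly as follows: a random hidden layer followed by a least-squares solve for the output weights. The feature names, synthetic drift model and data below are placeholders, not the authors' sensor data or exact network configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder training data: inputs = [raw sensor output, temperature], target = true pressure
X = rng.uniform([0.0, -40.0], [5.0, 85.0], size=(500, 2))
y = 20.0 * X[:, 0] + 0.05 * X[:, 1] + 0.002 * X[:, 0] * X[:, 1]   # synthetic drift model

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))     # random input weights (fixed, not trained)
b = rng.normal(size=n_hidden)                   # random biases

H = np.tanh(X @ W + b)                          # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # output weights by least squares

# Compensated prediction for a new (raw output, temperature) pair
x_new = np.array([[2.5, 60.0]])
print(np.tanh(x_new @ W + b) @ beta)
```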

  6. Ultra-high accuracy optical testing: creating diffraction-limitedshort-wavelength optical systems

    Energy Technology Data Exchange (ETDEWEB)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman,Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli,Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-08-03

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  7. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Science.gov (United States)

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  8. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    International Nuclear Information System (INIS)

    Ko, P; Kurosawa, S

    2014-01-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to the design work enhancing the turbine performance, including the elongation of the operation life span and the improvement of turbine efficiency. In this paper, a high accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations expressed by the volume of fluid method tracking the free surface and combined with the Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparing with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  9. Numerical simulation of turbulence flow in a Kaplan turbine -Evaluation on turbine performance prediction accuracy-

    Science.gov (United States)

    Ko, P.; Kurosawa, S.

    2014-03-01

    The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to the design work enhancing the turbine performance, including the elongation of the operation life span and the improvement of turbine efficiency. In this paper, a high accuracy turbine and cavitation performance prediction method based on the entire flow passage of a Kaplan turbine is presented and evaluated. The two-phase flow field is predicted by solving the Reynolds-Averaged Navier-Stokes equations expressed by the volume of fluid method tracking the free surface and combined with the Reynolds Stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparing with the model test results of an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan-type turbines.

  10. Numerical approach to one-loop integrals

    International Nuclear Information System (INIS)

    Fujimoto, Junpei; Shimizu, Yoshimitsu; Kato, Kiyoshi; Oyanagi, Yoshio.

    1992-01-01

    Two numerical methods are proposed for the calculation of one-loop scalar integrals. In the first method, the singularity is cancelled by the symmetrization of the integrand and the integration is done by a Monte-Carlo method. In the second one, after the transform of the integrand into a standard form, the integral is reduced into a regular numerical integral. These methods provide us practical tools to evaluate one-loop Feynman diagrams with desired numerical accuracy. They are extended to the integral with numerator and the treatment of the one-loop virtual correction to the cross section is also presented. (author)

  11. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  12. Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation

    Science.gov (United States)

    Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter

    1996-01-01

    The development of three-dimensional automated positioning devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned automatically with high accuracy and reliability by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact, computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.

  13. Identification and delineation of areas flood hazard using high accuracy of DEM data

    Science.gov (United States)

    Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.

    2018-05-01

    Flood incidents that often occur in Karawang regency need to be mitigated, and there are expectations of technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High accuracy DEM data used in modeling will result in better flood models. Processing high accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by identifying wetland areas using DEM data and Landsat-8 images. TerraSAR-X high-resolution data are used to detect wetlands from the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while the Tasseled Cap Transformation (TCT) method is used for land cover analysis. The TWI modeling yields information about potentially flood-prone land. Overlaying the TWI map with the land cover map shows that in Karawang regency the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
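    The TWI step can be sketched minimally as follows, assuming the specific catchment area and slope rasters have already been derived from the DEM (e.g. by a flow-accumulation routine); the array values are placeholders, not the study's data.

```python
import numpy as np

def topographic_wetness_index(spec_catchment_area, slope_rad, eps=1e-6):
    """TWI = ln(a / tan(beta)), where a is the specific catchment area (m^2/m)
    and beta the local slope; eps guards against division by zero on flat cells."""
    return np.log(spec_catchment_area / (np.tan(slope_rad) + eps))

# Placeholder rasters derived from a DEM
a = np.array([[120.0, 300.0], [45.0, 900.0]])            # specific catchment area
beta = np.deg2rad(np.array([[2.0, 0.5], [8.0, 0.2]]))    # slope in radians

twi = topographic_wetness_index(a, beta)
print(twi)          # higher TWI -> wetter, more flood-prone cells
```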

  14. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    Science.gov (United States)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

    Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M_tot (10 M_tot).

  15. Modeling of Coaxial Slot Waveguides Using Analytical and Numerical Approaches: Revisited

    Directory of Open Access Journals (Sweden)

    Kok Yeow You

    2012-01-01

    Our review of analytical methods and numerical methods for coaxial slot waveguides is presented. The theories, background, and physical principles related to the frequency-domain electromagnetic equations for coaxial waveguides are reassessed. Comparisons of the accuracies of various types of admittance and impedance equations and numerical simulations are made, and the fringing field at the aperture sensor, which is represented by a lumped capacitance circuit, is evaluated. The accuracy and limitations of the analytical equations are explained in detail. The reasons for the replacement of analytical methods by numerical methods are outlined.

  16. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    Science.gov (United States)

    2009-01-30

  17. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    Energy Technology Data Exchange (ETDEWEB)

    LUCCIO, A.; D' IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and twiss functions in the presence of space charge forces. The working code for the simulation here presented is SIMBAD, that can be run as stand alone or as part of the UAL (Unified Accelerator Libraries) package.
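    As a rough illustration of the space-charge field solve (not the SIMBAD/UAL implementation), the Poisson equation with conducting-wall Dirichlet boundaries can be handled by a simple finite-difference iteration on a 2D grid; the charge density below is a placeholder Gaussian bunch.

```python
import numpy as np

def solve_poisson_dirichlet(rho, h, n_iter=5000):
    """Solve  -laplace(phi) = rho  on a 2D grid with phi = 0 on the walls
    (conducting boundary) using Jacobi iteration; h is the grid spacing."""
    phi = np.zeros_like(rho)
    for _ in range(n_iter):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h * h * rho[1:-1, 1:-1])
    return phi

# Placeholder beam charge density: a Gaussian bunch centred in a square pipe
n, L = 65, 1.0
x = np.linspace(0.0, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / (2 * 0.05**2))

phi = solve_poisson_dirichlet(rho, h=L / (n - 1))
print(phi.max())
```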

  18. Numerical analysis for thermal waves in gas generated by impulsive heating of a boundary surface

    International Nuclear Information System (INIS)

    Utsumi, Takayuki; Kunugi, Tomoaki

    1996-01-01

    Thermal wave in gas generated by an impulsive heating of a solid boundary was analyzed numerically by the Differential Algebraic CIP (Cubic Interpolated Propagation) scheme. Numerical results for the ordinary heat conduction equation were obtained with a high accuracy. As for the hyperbolic thermal fluid dynamics equation, the fundamental feature of the experimental results by Brown and Churchill with regard to thermoacoustic convection was qualitatively reproduced by the DA-CIP scheme. (author)

  19. Mining the multigroup-discrete ordinates algorithm for high quality solutions

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Kornreich, D.E.

    2005-01-01

    A novel approach to the numerical solution of the neutron transport equation via the discrete ordinates (SN) method is presented. The new technique is referred to as 'mining' low order (SN) numerical solutions to obtain high order accuracy. The new numerical method, called the Multigroup Converged SN (MGCSN) algorithm, is a combination of several sequence accelerators: Romberg and Wynn-epsilon. The extreme accuracy obtained by the method is demonstrated through self consistency and comparison to the independent semi-analytical benchmark BLUE. (authors)
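    The sequence-acceleration idea can be illustrated with a minimal Wynn epsilon implementation. It is applied here to a generic slowly convergent sequence (partial sums of the Leibniz series), not to actual SN transport solutions, and the Romberg stage is omitted.

```python
import numpy as np

def wynn_epsilon(seq):
    """Accelerate a convergent sequence with Wynn's epsilon algorithm.
    Returns the deepest available even-order epsilon estimate."""
    s = list(map(float, seq))
    n = len(s)
    prev2 = [0.0] * (n + 1)          # eps_{-1} column (all zeros)
    prev1 = s[:]                     # eps_0 column (the sequence itself)
    best = prev1[-1]
    for k in range(1, n):
        curr = [prev2[j + 1] + 1.0 / (prev1[j + 1] - prev1[j])
                for j in range(len(prev1) - 1)]
        prev2, prev1 = prev1, curr
        if k % 2 == 0 and curr:      # even-order columns carry the accelerated estimates
            best = curr[0]
    return best

# Partial sums of the slowly convergent Leibniz series for pi/4
partial = np.cumsum([(-1) ** k / (2 * k + 1) for k in range(10)])
print(4 * wynn_epsilon(partial), np.pi)   # accelerated estimate vs. pi
```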

  20. High accuracy positioning using carrier-phases with the opensource GPSTK software

    OpenAIRE

    Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume

    2008-01-01

    The objective of this work is to show how, using a proper GNSS data management strategy combined with the flexibility provided by the open source "GPS Toolkit" (GPSTk), it is possible to easily develop both simple code-based processing strategies as well as basic high accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP).

  1. Numerical modeling of a snow cover on Hooker Island (Franz Josef Land archipelago)

    Directory of Open Access Journals (Sweden)

    V. S. Sokratov

    2013-01-01

    Results obtained by simulating snow characteristics with SPONSOR, a numerical model of surface heat and moisture exchange, are presented. The numerical experiments are carried out for Franz Josef Land, with typical Arctic climate conditions. The blizzard evaporation parameter is shown to have a great influence on snow depth in territories with high wind speeds, and including it significantly improves the simulation quality of the numerical model. Some discrepancies between simulated and observed snow depth values can be explained by inaccuracies in precipitation measurements (at least in certain cases) and by errors in the calculation of incoming radiation, mostly due to the low accuracy of the cloudiness observations.

  2. A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-12-01

    Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses an inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm is used to process the location information: it processes the dead-reckoning and EKF location information and uses the result to transform the fragments to the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, which are processed separately with a weighted median algorithm. Experimental results show that the map produced with this method has high accuracy.
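    The two-band wavelet step can be sketched with a generic single-level 2D wavelet split using the PyWavelets package. The per-band processing below (smoothing and soft thresholding) is a placeholder, not the paper's weighted-median fusion, and the image fragment is synthetic.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
fragment = rng.random((128, 128))          # placeholder orthophoto fragment (grayscale)

# Single-level 2D wavelet decomposition: low-frequency band + three detail bands
cA, (cH, cV, cD) = pywt.dwt2(fragment, "haar")

# Process the bands separately (placeholder operations), then reconstruct the fragment
cA_f = 0.5 * (cA + np.mean(cA))                                    # smooth the low band
details_f = tuple(pywt.threshold(c, 0.1, mode="soft") for c in (cH, cV, cD))

restored = pywt.idwt2((cA_f, details_f), "haar")
print(restored.shape)
```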

  3. A new ultra-high-accuracy angle generator: current status and future direction

    Science.gov (United States)

    Guertin, Christian F.; Geckeler, Ralf D.

    2017-09-01

    Lack of an extreme high-accuracy angular positioning device available in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state-of-the-art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.

  4. Are computer numerical control (CNC)-manufactured patient-specific metal templates available for posterior thoracic pedicle screw insertion? Feasibility and accuracy evaluation.

    Science.gov (United States)

    Kong, Xiangxue; Tang, Lei; Ye, Qiang; Huang, Wenhua; Li, Jianyi

    2017-11-01

    Accurate and safe posterior thoracic pedicle insertion (PTPI) remains a challenge. Patient-specific drill templates (PDTs) created by rapid prototyping (RP) can assist in posterior thoracic pedicle insertion, but pose biocompatibility risks. The aims of this study were to develop alternative PDTs with computer numerical control (CNC) and to assess their feasibility and accuracy in assisting PTPI. Preoperative CT images of 31 cadaveric thoracic vertebrae were obtained and the optimal pedicle screw trajectories were planned. The PDTs with optimal screw trajectories were randomly assigned to be designed and manufactured by CNC or RP in each vertebra. With the guidance of the CNC- or RP-manufactured PDTs, the appropriate screws were inserted into the pedicles. Postoperative CT scans were performed to analyze any deviations at the entry point and midpoint of the pedicles. Manufacturing time and cost were significantly lower in the CNC group than in the RP group (P < 0.05). The screw positions were grade 0 in 90.3% and grade 1 in 9.7% of the cases in the CNC group, and grade 0 in 93.5% and grade 1 in 6.5% of the cases in the RP group (P = 0.641). CNC-manufactured PDTs are thus viable for assisting in PTPI with good feasibility and accuracy.

  5. Fast numerical upscaling of heat equation for fibrous materials

    KAUST Repository

    Iliev, Oleg; Lazarov, Raytcho; Willems, Joerg

    2010-01-01

    We are interested in numerical methods for computing the effective heat conductivities of fibrous insulation materials, such as glass or mineral wool, characterized by low solid volume fractions and high contrasts, i.e., high ratios between the thermal conductivities of the fibers and the surrounding air. We consider a fast numerical method for solving some auxiliary cell problems appearing in this upscaling procedure. The auxiliary problems are boundary value problems of the steady-state heat equation in a representative elementary volume occupied by fibers and air. We make a simplification by replacing these problems with appropriate boundary value problems in the domain occupied by the fibers only. Finally, the obtained problems are further simplified by taking advantage of the slender shape of the fibers and assuming that they form a network. A discretization on the graph defined by the fibers is presented and error estimates are provided. The resulting algorithm is discussed and the accuracy and the performance of the method are illustrated on a number of numerical experiments.

  6. Fast numerical upscaling of heat equation for fibrous materials

    KAUST Repository

    Iliev, Oleg

    2010-08-01

    We are interested in numerical methods for computing the effective heat conductivities of fibrous insulation materials, such as glass or mineral wool, characterized by low solid volume fractions and high contrasts, i.e., high ratios between the thermal conductivities of the fibers and the surrounding air. We consider a fast numerical method for solving some auxiliary cell problems appearing in this upscaling procedure. The auxiliary problems are boundary value problems of the steady-state heat equation in a representative elementary volume occupied by fibers and air. We make a simplification by replacing these problems with appropriate boundary value problems in the domain occupied by the fibers only. Finally, the obtained problems are further simplified by taking advantage of the slender shape of the fibers and assuming that they form a network. A discretization on the graph defined by the fibers is presented and error estimates are provided. The resulting algorithm is discussed and the accuracy and the performance of the method are illustrated on a number of numerical experiments.

  7. Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems

    International Nuclear Information System (INIS)

    Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman, Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli, Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa

    2005-01-01

    Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI) have achieved RMS wavefront measurement accuracies of 0.5-1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.

  8. Inference of Altimeter Accuracy on Along-track Gravity Anomaly Recovery

    Directory of Open Access Journals (Sweden)

    LI Yang

    2015-04-01

    A correlation model between along-track gravity anomaly accuracy, spatial resolution and altimeter accuracy is proposed. This new model is based on along-track gravity anomaly recovery and resolution estimation. Firstly, an error propagation formula for the along-track gravity anomaly is derived from the principles of satellite altimetry. Then the mathematical relation between the SNR (signal-to-noise ratio) and the cross-spectral coherence is deduced. The analytical correlation between altimeter accuracy and spatial resolution is finally obtained from the results above. Numerical simulation results show that along-track gravity anomaly accuracy is proportional to altimeter accuracy, while spatial resolution has a power-law relation with altimeter accuracy: e.g., if altimeter accuracy improves m times, gravity anomaly accuracy improves m times while spatial resolution improves m^0.4644 times. This model is verified with real-world data.

  9. An investigation into the accuracy, stability and parallel performance of a highly stable explicit technique for stiff reaction-transport PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Franz, A., LLNL

    1998-02-17

    The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of current research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear Partial Differential Equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead necessary to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.

  10. Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.

    Science.gov (United States)

    Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing

    2016-10-01

    The method, based on the rotation of the angular spectrum in the frequency domain, is generally used for the diffraction simulation between the tilted planes. Due to the rotation of the angular spectrum, the interval between the sampling points in the Fourier domain is not even. For the conventional fast Fourier transform (FFT)-based methods, a spectrum interpolation is needed to get the approximate sampling value on the equidistant sampling points. However, due to the numerical error caused by the spectrum interpolation, the calculation accuracy degrades very quickly as the rotation angle increases. Here, the diffraction propagation between the tilted planes is transformed into a problem about the discrete Fourier transform on the uneven sampling points, which can be evaluated effectively and precisely through the nonuniform fast Fourier transform method (NUFFT). The most important advantage of this method is that the conventional spectrum interpolation is avoided and the high calculation accuracy can be guaranteed for different rotation angles, even when the rotation angle is close to π/2. Also, its calculation efficiency is comparable with that of the conventional FFT-based methods. Numerical examples as well as a discussion about the calculation accuracy and the sampling method are presented.
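    The core idea, evaluating the spectrum directly on the non-equidistant frequency samples instead of interpolating, can be sketched with a slow, direct nonuniform discrete Fourier transform; a practical implementation would replace the loop with an NUFFT library, which is assumed rather than shown here, and the field, tilt angle and sampling values are placeholders.

```python
import numpy as np

def nudft2(field, fx, fy, dx):
    """Directly evaluate the 2D spectrum of a sampled field at arbitrary
    (non-equidistant) frequency pairs (fx, fy). O(N^2 M) cost: illustration only;
    an NUFFT replaces this with a fast algorithm."""
    ny, nx = field.shape
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dx
    X, Y = np.meshgrid(x, y)
    spectrum = np.empty(len(fx), dtype=complex)
    for k, (u, v) in enumerate(zip(fx, fy)):
        spectrum[k] = np.sum(field * np.exp(-2j * np.pi * (u * X + v * Y))) * dx * dx
    return spectrum

# Placeholder sampled field and a rotated (uneven) set of frequency samples
field = np.ones((64, 64), dtype=complex)
theta = np.deg2rad(60.0)                       # large tilt angle
f = np.linspace(-5.0, 5.0, 11)
FX, FY = np.meshgrid(f, f)
fx_rot = (FX * np.cos(theta)).ravel()          # rotation compresses the fx spacing
fy_rot = FY.ravel()

print(np.abs(nudft2(field, fx_rot, fy_rot, dx=0.05)).max())
```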

  11. Lattice Boltzmann model for numerical relativity.

    Science.gov (United States)

    Ilseven, E; Mendoza, M

    2016-02-01

    In the Z4 formulation, Einstein equations are written as a set of flux conservative first-order hyperbolic equations that resemble fluid dynamics equations. Based on this formulation, we construct a lattice Boltzmann model for numerical relativity and validate it with well-established tests, also known as "apples with apples." Furthermore, we find that by increasing the relaxation time, we gain stability at the cost of losing accuracy, and by decreasing the lattice spacings while keeping a constant numerical diffusivity, the accuracy and stability of our simulations improve. Finally, in order to show the potential of our approach, a linear scaling law for parallelization with respect to number of CPU cores is demonstrated. Our model represents the first step in using lattice kinetic theory to solve gravitational problems.

  12. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    Science.gov (United States)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy of the developed performance function to truly represent the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while the accuracy is decreased with an error of 24%.

  13. Advancement of compressible multiphase flows and sodium-water reaction analysis program SERAPHIM. Validation of a numerical method for the simulation of highly underexpanded jets

    International Nuclear Information System (INIS)

    Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira

    2010-01-01

    SERAPHIM is a computer program for the simulation of the compressible multiphase flow involving the sodium-water chemical reaction under a tube failure accident in a steam generator of sodium cooled fast reactors. In this study, the numerical analysis of the highly underexpanded air jets into the air or into the water was performed as a part of validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method considering a compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate the multiphase flow including supersonic gaseous jets. In the case of the air jet into the air, the calculated pressure, the shape of the jet and the location of a Mach disk agreed with the existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into the water was also reproduced successfully by the proposed numerical method. (author)

  14. High-accuracy user identification using EEG biometrics.

    Science.gov (United States)

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various different combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.

  15. Accuracy verification methods theory and algorithms

    CERN Document Server

    Mali, Olli; Repin, Sergey

    2014-01-01

    The importance of accuracy verification methods was understood at the very beginning of the development of numerical analysis. Recent decades have seen a rapid growth of results related to adaptive numerical methods and a posteriori estimates. However, in this important area there often exists a noticeable gap between mathematicians creating the theory and researchers developing applied algorithms that could be used in engineering and scientific computations for guaranteed and efficient error control.   The goals of the book are to (1) give a transparent explanation of the underlying mathematical theory in a style accessible not only to advanced numerical analysts but also to engineers and students; (2) present detailed step-by-step algorithms that follow from a theory; (3) discuss their advantages and drawbacks, areas of applicability, give recommendations and examples.

  16. Modeling hemodynamics in intracranial aneurysms: Comparing accuracy of CFD solvers based on finite element and finite volume schemes.

    Science.gov (United States)

    Botti, Lorenzo; Paliwal, Nikhil; Conti, Pierangelo; Antiga, Luca; Meng, Hui

    2018-06-01

    Image-based computational fluid dynamics (CFD) has shown potential to aid in the clinical management of intracranial aneurysms (IAs), but its adoption in clinical practice has been lacking, partially due to a lack of accuracy assessment and sensitivity analysis. To numerically solve the flow-governing equations, CFD solvers generally rely on two spatial discretization schemes: Finite Volume (FV) and Finite Element (FE). Since increasingly accurate numerical solutions are obtained by different means, accuracies and computational costs of FV and FE formulations cannot be compared directly. To this end, in this study we benchmark two representative CFD solvers in simulating flow in a patient-specific IA model: (1) ANSYS Fluent, a commercial FV-based solver and (2) VMTKLab multidGetto, a discontinuous Galerkin (dG) FE-based solver. The FV solver's accuracy is improved by increasing the spatial mesh resolution (134k, 1.1m, 8.6m and 68.5m tetrahedral element meshes). The dGFE solver accuracy is increased by increasing the degree of polynomials (first, second, third and fourth degree) on the base 134k tetrahedral element mesh. Solutions from best FV and dGFE approximations are used as baseline for error quantification. On average, velocity errors for second-best approximations are approximately 1cm/s for a [0,125]cm/s velocity magnitude field. Results show that high-order dGFE provide better accuracy per degree of freedom but worse accuracy per Jacobian non-zero entry as compared to FV. Cross-comparison of velocity errors demonstrates asymptotic convergence of both solvers to the same numerical solution. Nevertheless, the discrepancy between under-resolved velocity fields suggests that mesh independence is reached following different paths.

  17. Application of symplectic integrator to numerical fluid analysis

    International Nuclear Information System (INIS)

    Tanaka, Nobuatsu

    2000-01-01

    This paper focuses on the application of the symplectic integrator to numerical fluid analysis. For this purpose, we introduce Hamiltonian particle dynamics to simulate fluid behavior. The method is based on both the Hamiltonian formulation of a system and particle methods, and is therefore called Hamiltonian Particle Dynamics (HPD). In this paper, an example of HPD applications, namely the behavior of an incompressible inviscid fluid, is solved. In order to improve the spatial accuracy of HPD, it is combined with CIVA, a highly accurate interpolation method, but the combined method suffers from the problem that the invariants of the system are not conserved in long-time computations. To solve this problem, symplectic time integrators are introduced and their effectiveness is confirmed by numerical analyses. (author)
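
    The point about symplectic time integrators can be seen on a much smaller example than HPD/CIVA: for a harmonic oscillator with Hamiltonian H = (p^2 + q^2)/2, a symplectic (semi-implicit) Euler step keeps the energy error bounded over long times, while the explicit Euler step drifts. The sketch below is only this generic illustration and does not reproduce the paper's fluid discretisation.

```python
# Minimal sketch of why symplectic time integrators matter for Hamiltonian
# systems: for the harmonic oscillator H = (p^2 + q^2)/2, symplectic
# (semi-implicit) Euler keeps the energy error bounded over long times,
# whereas explicit Euler drifts.
import numpy as np

def explicit_euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def symplectic_euler(q, p, dt, steps):
    for _ in range(steps):
        p = p - dt * q      # kick with current position
        q = q + dt * p      # drift with updated momentum
    return q, p

def energy(q, p):
    return 0.5 * (p**2 + q**2)

q0, p0, dt, steps = 1.0, 0.0, 0.01, 100_000   # integrate to t = 1000
E0 = energy(q0, p0)
for name, method in [("explicit Euler", explicit_euler),
                     ("symplectic Euler", symplectic_euler)]:
    q, p = method(q0, p0, dt, steps)
    print(f"{name:17s} relative energy error: {abs(energy(q, p) - E0) / E0:.2e}")
```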

  18. Experimental and numerical study of the accuracy of flame-speed measurements for methane/air combustion in a slot burner

    Energy Technology Data Exchange (ETDEWEB)

    Selle, L.; Ferret, B. [Universite de Toulouse, INPT, UPS, IMFT, Institut de Mecanique des Fluides de Toulouse (France); CNRS, IMFT, Toulouse (France); Poinsot, T. [Universite de Toulouse, INPT, UPS, IMFT, Institut de Mecanique des Fluides de Toulouse (France); CNRS, IMFT, Toulouse (France); CERFACS, Toulouse (France)

    2011-01-15

    Measuring the velocities of premixed laminar flames with precision remains a controversial issue in the combustion community. This paper studies the accuracy of such measurements in two-dimensional slot burners and shows that while methane/air flame speeds can be measured with reasonable accuracy, the method may lack precision for other mixtures such as hydrogen/air. Curvature at the flame tip, strain on the flame sides and local quenching at the flame base can modify local flame speeds and require corrections which are studied using two-dimensional DNS. Numerical simulations also provide stretch, displacement and consumption flame speeds along the flame front. For methane/air flames, DNS show that the local stretch remains small so that the local consumption speed is very close to the unstretched premixed flame speed. The only correction needed to correctly predict flame speeds in this case is due to the finite aspect ratio of the slot used to inject the premixed gases which induces a flow acceleration in the measurement region (this correction can be evaluated from velocity measurement in the slot section or from an analytical solution). The method is applied to methane/air flames with and without water addition and results are compared to experimental data found in the literature. The paper then discusses the limitations of the slot-burner method to measure flame speeds for other mixtures and shows that it is not well adapted to mixtures with a Lewis number far from unity, such as hydrogen/air flames. (author)

  19. High Accuracy Attitude Control System Design for Satellite with Flexible Appendages

    Directory of Open Access Journals (Sweden)

    Wenya Zhou

    2014-01-01

    Full Text Available In order to realize the high accuracy attitude control of satellite with flexible appendages, attitude control system consisting of the controller and structural filter was designed. When the low order vibration frequency of flexible appendages is approximating the bandwidth of attitude control system, the vibration signal will enter the control system through measurement device to bring impact on the accuracy or even the stability. In order to reduce the impact of vibration of appendages on the attitude control system, the structural filter is designed in terms of rejecting the vibration of flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, the design method for the adaptive notch filter is proposed based on the in-orbit identification technology. Finally, the simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
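
    A minimal sketch of the notch-filter idea, under assumed numbers: a dominant appendage vibration frequency is first "identified" from the spectrum of the measured signal, and a digital notch filter is then (re)designed at that frequency, standing in for the paper's adaptive, in-orbit-identified filter. The sampling rate, frequencies and filter Q below are illustrative assumptions, and scipy.signal.iirnotch is used in place of the authors' specific design.

```python
# Hedged sketch of the structural-filter idea: estimate the dominant appendage
# vibration frequency from the measured attitude signal, then design a notch
# filter at that frequency to keep the vibration out of the controller.
import numpy as np
from scipy.signal import iirnotch, lfilter, periodogram

fs = 100.0                       # sensor sampling rate [Hz] (assumed)
t = np.arange(0, 60, 1 / fs)
attitude = 0.01 * np.sin(2 * np.pi * 0.05 * t)          # slow attitude motion
vibration = 0.005 * np.sin(2 * np.pi * 1.8 * t)         # flexible-mode vibration
measured = attitude + vibration + 1e-4 * np.random.default_rng(1).normal(size=t.size)

# "Identification": pick the strongest spectral peak above the control bandwidth.
f, Pxx = periodogram(measured, fs)
f_vib = f[np.argmax(Pxx * (f > 0.5))]

# Re-design the notch at the identified frequency (the adaptive step).
b, a = iirnotch(w0=f_vib, Q=20.0, fs=fs)
filtered = lfilter(b, a, measured)

print(f"identified vibration frequency: {f_vib:.2f} Hz")
print("rms deviation from true attitude before / after notch:",
      np.sqrt(np.mean((measured - attitude) ** 2)),
      np.sqrt(np.mean((filtered - attitude) ** 2)))
```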

  20. Numerical simulation of permanent magnet method: Influence of experimental conditions on accuracy of j{sub C}-distribution

    Energy Technology Data Exchange (ETDEWEB)

    Takayama, T., E-mail: takayama@yz.yamagata-u.ac.j [Faculty of Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan); Kamitani, A.; Tanaka, A. [Graduate School of Science and Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan)

    2010-11-01

    Influence of the magnet position on the determination of the distribution of the critical current density in a high-temperature superconducting (HTS) thin film has been investigated numerically. For this purpose, a numerical code has been developed for analyzing the shielding current density in a HTS sample. By using the code, the permanent magnet method is reproduced. The results of computations show that, even if the center of the permanent magnet is located near the film edge, the maximum repulsive force is roughly proportional to the critical current density. This means that the distribution of the critical current density in the HTS film can be estimated from the proportionality constants determined by using the relations between the maximum repulsive force and the critical current density.
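
    The final inference step described above reduces to fitting a proportionality constant between the maximum repulsive force and the critical current density and then inverting it for a measured force. The sketch below shows only that fit, on made-up calibration numbers; it does not reproduce the shielding-current code itself.

```python
# Toy sketch of the inference step: if the maximum repulsive force F_max is
# (approximately) proportional to the critical current density j_c, a
# calibrated proportionality constant lets j_c be estimated from a measured
# force. All numbers below are made up for illustration.
import numpy as np

# "Calibration" data: assumed j_c values [MA/m^2] and computed F_max [mN]
jc_calib = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Fmax_calib = np.array([0.9, 2.1, 2.95, 4.1, 5.05])   # roughly proportional

# Least-squares proportionality constant (line through the origin)
alpha = np.dot(jc_calib, Fmax_calib) / np.dot(jc_calib, jc_calib)

# Estimate j_c from a newly measured maximum repulsive force
F_measured = 3.6
jc_estimate = F_measured / alpha
print(f"alpha = {alpha:.3f} mN per MA/m^2, estimated j_c = {jc_estimate:.2f} MA/m^2")
```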

  1. Various types of numerical schemes for the one-dimensional spherical geometry transport equation

    International Nuclear Information System (INIS)

    Jaber, Abdelouhab.

    1981-07-01

    Mathematical and numerical studies of new schemes possessing high-accuracy properties with respect to the spatial variable are presented. To this end, the [0,R] x [-1,+1] rectangle is decomposed into rectangles K_ij = [r_i, r_(i+1)] x [μ_j, μ_(j+1)]. Continuous finite element methods employing polynomials of degree 1 in μ and degree 2 in r are defined on each element. In chapter I, different ways of discretizing the particular equation (for μ = -1) are studied. In chapter II, numerical schemes are described and their stability investigated. In chapter III, error estimation theories are presented and numerical results are given for different source terms S.

  2. Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel

    Science.gov (United States)

    Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie

    2011-05-01

    Mechanical grinding of artificial diamond grinding wheels is the traditional wheel dressing process. The rotational speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high-accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a large number of experiments on a super-hard-material wheel dressing and grinding machine and through analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for high-accuracy crush dressing of artificial diamond grinding wheels.

  3. High accuracy subwavelength distance measurements: A variable-angle standing-wave total-internal-reflection optical microscope

    International Nuclear Information System (INIS)

    Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.

    2009-01-01

    We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.

  4. Numerical simulation of realistic high-temperature superconductors

    International Nuclear Information System (INIS)

    1997-01-01

    One of the main obstacles in the development of practical high-temperature superconducting (HTS) materials is dissipation, caused by the motion of magnetic flux quanta called vortices. Numerical simulations provide a promising new approach for studying these vortices. By exploiting the extraordinary memory and speed of massively parallel computers, researchers can obtain the extremely fine temporal and spatial resolution needed to model complex vortex behavior. The results may help identify new mechanisms to increase the current-carrying capabilities and to predict the performance characteristics of HTS materials intended for industrial applications

  5. Evaluation of Callable Bonds: Finite Difference Methods, Stability and Accuracy.

    OpenAIRE

    Buttler, Hans-Jurg

    1995-01-01

    The purpose of this paper is to evaluate numerically the semi-American callable bond by means of finite difference methods. This study implies three results. First, the numerical error is greater for the callable bond price than for the straight bond price, and too large for real applications. Secondly, the numerical accuracy of the callable bond price computed for the relevant range of interest rates depends entirely on the finite difference scheme which is chosen for the boundary points. Thi...

  6. Numerical solution of special ultra-relativistic Euler equations using central upwind scheme

    Science.gov (United States)

    Ghaffar, Tayabia; Yousaf, Muhammad; Qamar, Shamsul

    2018-06-01

    This article is concerned with the numerical approximation of the one- and two-dimensional special ultra-relativistic Euler equations. The governing equations are coupled first-order nonlinear hyperbolic partial differential equations. These equations describe perfect fluid flow in terms of the particle density, the four-velocity and the pressure. A high-resolution shock-capturing central upwind scheme is employed to solve the model equations. To avoid excessive numerical diffusion, the scheme makes use of the local propagation speeds. By using a Runge-Kutta time-stepping method and MUSCL-type initial reconstruction, second-order accuracy of the proposed scheme is obtained. After discussing the model equations and the numerical technique, several 1D and 2D test problems are investigated. For all the numerical test cases, our proposed scheme demonstrates very good agreement with the results obtained by well-established algorithms, even in the case of highly relativistic 2D test problems. For validation and comparison, the staggered central scheme and the kinetic flux-vector splitting (KFVS) method are also applied to the same model. The robustness and efficiency of the central upwind scheme are demonstrated by the numerical results.
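
    The scheme's ingredients named above (local propagation speeds, MUSCL-type reconstruction, Runge-Kutta time stepping) can be illustrated on a much simpler target than the relativistic Euler system. The sketch below applies a semi-discrete central-upwind flux with minmod-limited reconstruction and an SSP-RK2 step to the scalar Burgers equation with periodic boundaries; grid size, CFL number and initial data are illustrative assumptions.

```python
# Sketch of a 1D semi-discrete central-upwind scheme (Kurganov-type) with
# MUSCL/minmod reconstruction and a 2nd-order SSP Runge-Kutta step, applied to
# the scalar Burgers equation u_t + (u^2/2)_x = 0 as a stand-in for the
# relativistic Euler system. Periodic boundaries are assumed.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def rhs(u, dx):
    # limited slopes and interface reconstructions (periodic)
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    uL = u + 0.5 * s                      # left state at interface j+1/2
    uR = np.roll(u - 0.5 * s, -1)         # right state at interface j+1/2
    # one-sided local propagation speeds (f'(u) = u for Burgers)
    ap = np.maximum.reduce([uL, uR, np.zeros_like(u)])
    am = np.minimum.reduce([uL, uR, np.zeros_like(u)])
    denom = np.where(ap - am > 1e-12, ap - am, 1.0)
    H = (ap * 0.5 * uL**2 - am * 0.5 * uR**2 + ap * am * (uR - uL)) / denom
    return -(H - np.roll(H, 1)) / dx

N, L, T = 400, 2 * np.pi, 1.0
dx = L / N
x = np.arange(N) * dx
u = np.sin(x) + 0.5
t = 0.0
while t < T:
    dt = min(0.4 * dx / max(np.max(np.abs(u)), 1e-12), T - t)
    u1 = u + dt * rhs(u, dx)               # SSP-RK2 stage 1
    u = 0.5 * (u + u1 + dt * rhs(u1, dx))  # SSP-RK2 stage 2
    t += dt

print("solution range after shock formation:", u.min(), u.max())
```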

  7. New high accuracy super stable alternating direction implicit methods for two and three dimensional hyperbolic damped wave equations

    Directory of Open Access Journals (Sweden)

    R.K. Mohanty

    2014-01-01

    In this paper, we report new three-level implicit super-stable methods of order two in time and four in space for the solution of hyperbolic damped wave equations in one, two and three space dimensions, subject to given appropriate initial and Dirichlet boundary conditions. We use uniform grid points in both the time and space directions. Our methods behave as fourth-order accurate when the grid size in the time direction is directly proportional to the square of the grid size in the space direction. The proposed methods are super-stable. The resulting system of algebraic equations is solved by the Gauss elimination method. We discuss new alternating direction implicit (ADI) methods for two- and three-dimensional problems. Numerical results and a graphical representation of the numerical solution are presented to illustrate the accuracy of the proposed methods.
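
    To make the ADI idea concrete without reproducing the paper's three-level schemes for damped wave equations, the sketch below applies the classical Peaceman-Rachford ADI splitting to the simpler 2D heat equation: each time step becomes an x-sweep and a y-sweep of small tridiagonal solves. Grid size, time step and boundary conditions are assumptions chosen so the result can be checked against a known exact solution.

```python
# Peaceman-Rachford ADI for u_t = u_xx + u_yy on the unit square with
# homogeneous Dirichlet boundaries: every step splits into two families of
# one-dimensional tridiagonal solves (implicit in x, then implicit in y).
import numpy as np
from scipy.linalg import solve_banded

n = 64                      # interior points per direction
h = 1.0 / (n + 1)
dt = 1e-3
r = dt / (2 * h**2)         # diffusion coefficient = 1

# Banded form of the tridiagonal matrix (I - r*D2) used in both sweeps
ab = np.zeros((3, n))
ab[0, 1:] = -r
ab[1, :] = 1 + 2 * r
ab[2, :-1] = -r

def lap1d(v, axis):
    """Unscaled second difference D2 v along one axis, zero Dirichlet walls."""
    vp = np.zeros_like(v)
    if axis == 0:
        vp[1:, :] += v[:-1, :]; vp[:-1, :] += v[1:, :]
    else:
        vp[:, 1:] += v[:, :-1]; vp[:, :-1] += v[:, 1:]
    return vp - 2 * v

x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(np.pi * X) * np.sin(np.pi * Y)     # initial condition

for _ in range(200):
    # x-sweep: implicit in x, explicit in y
    rhs = u + r * lap1d(u, axis=1)
    ustar = np.column_stack([solve_banded((1, 1), ab, rhs[:, j]) for j in range(n)])
    # y-sweep: implicit in y, explicit in x
    rhs = ustar + r * lap1d(ustar, axis=0)
    u = np.vstack([solve_banded((1, 1), ab, rhs[i, :]) for i in range(n)])

exact = np.exp(-2 * np.pi**2 * 200 * dt) * np.sin(np.pi * X) * np.sin(np.pi * Y)
print("max error vs exact solution:", np.max(np.abs(u - exact)))
```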

  8. Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments

    International Nuclear Information System (INIS)

    Sutherland, B.; Hanlon, P.; Charles, P.

    2011-01-01

    Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy Linear Accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High-resolution QA was done for all fields using the MatriXX ion chamber array, the MapCHECK2 diode array (shifted), and the EPID, to determine a procedure for commissioning new treatment sites. Basic spine simulations found that the TPS overestimated the absorbed dose to bone; however, within the spinal cord there was good agreement. High-resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 (shifted) and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to a probable recommendation of the EPID for future IMRT commissioning, due to its high resolution and the minimal setup required.

  9. Numerical modelling of steel arc welding

    International Nuclear Information System (INIS)

    Hamide, M.

    2008-07-01

    Welding is a widely used assembly technique. Welding simulation software would give access to residual stresses and information about the weld's microstructure, in order to evaluate the mechanical resistance of a weld. It would also make it possible to evaluate process feasibility when complex geometrical components are to be made, and to optimize the welding sequences in order to minimize defects. This work deals with the numerical modelling of the arc welding process for steels. After describing the industrial context and the state of the art, the models implemented in TransWeld (software developed at CEMEF) are presented. The set of macroscopic equations is followed by a discussion of their numerical implementation. Then, the theory of re-meshing and our adaptive anisotropic re-meshing strategy are explained. Two weld-metal addition techniques are investigated and compared in terms of joint size and transient temperature and stresses. The accuracy of the finite element model is evaluated based on experimental results and the results of the analytical solution. Comparative analysis between experimental and numerical results allows assessment of the ability of the numerical code to predict the thermomechanical and metallurgical response of the welded structure. The limitations of the models and the phenomena identified during this study are finally discussed and serve to define interesting orientations for future developments. (author)

  10. Numerical solution of the Black-Scholes equation using cubic spline wavelets

    Science.gov (United States)

    Černá, Dana

    2016-12-01

    The Black-Scholes equation is used in financial mathematics for the computation of market values of options at a given time. We use the θ-scheme for time discretization and an adaptive scheme based on wavelets for the discretization on the given time level. Advantages of the proposed method are a small number of degrees of freedom, high-order accuracy with respect to the variables representing prices, and a relatively small number of iterations needed to resolve the problem to a desired accuracy. We use several cubic spline wavelet and multi-wavelet bases and discuss their advantages and disadvantages. We also compare isotropic and anisotropic approaches. Numerical experiments are presented for the two-dimensional Black-Scholes equation.
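
    The time discretisation mentioned above can be illustrated with a θ-scheme (θ = 0.5, i.e. Crank-Nicolson) for the one-dimensional Black-Scholes equation, although the sketch below uses a plain finite difference grid in the price variable rather than the paper's adaptive cubic spline wavelet basis. The option parameters and grid sizes are illustrative assumptions; for these values the computed at-the-money price should land near the analytical value of about 10.45.

```python
# Theta-scheme (Crank-Nicolson, theta = 0.5) for the 1D Black-Scholes equation,
# discretised with plain finite differences in the price variable.
# European call with Dirichlet boundaries; marched backwards from maturity.
import numpy as np
from scipy.linalg import solve_banded

S_max, K, r, sigma, T = 300.0, 100.0, 0.05, 0.2, 1.0
M, N = 400, 400                       # price steps, time steps
dS, dt, theta = S_max / M, T / N, 0.5
S = np.linspace(0, S_max, M + 1)
V = np.maximum(S - K, 0.0)            # payoff at maturity

i = np.arange(1, M)
a = 0.5 * sigma**2 * S[i]**2 / dS**2 - 0.5 * r * S[i] / dS   # sub-diagonal of A
b = -sigma**2 * S[i]**2 / dS**2 - r                          # diagonal of A
c = 0.5 * sigma**2 * S[i]**2 / dS**2 + 0.5 * r * S[i] / dS   # super-diagonal of A

# Banded form of (I - theta*dt*A)
ab = np.zeros((3, M - 1))
ab[0, 1:] = -theta * dt * c[:-1]
ab[1, :] = 1 - theta * dt * b
ab[2, :-1] = -theta * dt * a[1:]

for n in range(N):                    # march from t = T back to t = 0
    t_new = T - (n + 1) * dt
    rhs = V[i] + (1 - theta) * dt * (a * V[i - 1] + b * V[i] + c * V[i + 1])
    # boundary values at the new time level enter the implicit part
    V_low, V_high = 0.0, S_max - K * np.exp(-r * (T - t_new))
    rhs[0] += theta * dt * a[0] * V_low
    rhs[-1] += theta * dt * c[-1] * V_high
    V[i] = solve_banded((1, 1), ab, rhs)
    V[0], V[-1] = V_low, V_high

print("price at S = K:", np.interp(K, S, V))   # ~10.45 for these parameters
```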

  11. Development and Application of a Numerical Framework for Improving Building Foundation Heat Transfer Calculations

    Science.gov (United States)

    Kruis, Nathanael J. F.

    Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer

  12. Global communication schemes for the numerical solution of high-dimensional PDEs

    DEFF Research Database (Denmark)

    Hupp, Philipp; Heene, Mario; Jacob, Riko

    2016-01-01

    The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and in urgent need for current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...

  13. High accuracy of family history of melanoma in Danish melanoma cases

    DEFF Research Database (Denmark)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-01-01

    The incidence of melanoma in Denmark has immensely increased over the last 10 years making Denmark a high risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor...... but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine if a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...

  14. The numerical solution of boundary value problems over an infinite domain

    International Nuclear Information System (INIS)

    Shepherd, M.; Skinner, R.

    1976-01-01

    A method is presented for the numerical solution of boundary value problems over infinite domains. An example, which also illustrates the strength and accuracy of a numerical procedure for calculating Green's functions, is described in detail.

  15. A numerical study of adaptive space and time discretisations for Gross-Pitaevskii equations.

    Science.gov (United States)

    Thalhammer, Mechthild; Abhau, Jochen

    2012-08-15

    As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross-Pitaevskii equation arising in the description of Bose-Einstein condensates in dilute gases. The performance of the Fourier-pseudo spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross-Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter [Formula: see text], especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both, the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values locally adaptive time discretisations facilitate to determine the time stepsizes sufficiently small in order that
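
    For reference, a minimal (non-adaptive) version of one of the methods compared above is sketched below: a uniform-grid Fourier pseudo-spectral discretisation in space combined with second-order Strang operator splitting in time for a 1D Gross-Pitaevskii / nonlinear Schrödinger equation with a harmonic trap and a defocusing nonlinearity. All parameters are illustrative assumptions, and the fixed step size is exactly what the paper's adaptive strategies aim to improve on.

```python
# Strang (2nd-order exponential operator) splitting with a Fourier
# pseudo-spectral grid for i u_t = -1/2 u_xx + V(x) u + g |u|^2 u,
# V(x) = x^2/2, defocusing nonlinearity g > 0, fixed time step.
import numpy as np

N, L = 512, 20.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2
g = 1.0
dt, steps = 1e-3, 2000

u = np.exp(-x**2) + 0j
u /= np.sqrt(np.sum(np.abs(u)**2) * (L / N))      # normalise the wave function

half_kinetic = np.exp(-0.25j * dt * k**2)         # exp(-i (dt/2) k^2/2)
for _ in range(steps):
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))          # half kinetic step
    u = u * np.exp(-1j * dt * (V + g * np.abs(u)**2))       # full potential step
    u = np.fft.ifft(half_kinetic * np.fft.fft(u))           # half kinetic step

norm = np.sum(np.abs(u)**2) * (L / N)
print("norm after evolution (should stay ~1):", norm)
```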

  16. Convergence order vs. parallelism in the numerical simulation of the bidomain equations

    International Nuclear Information System (INIS)

    Sharomi, Oluwaseun; Spiteri, Raymond J

    2012-01-01

    The propagation of electrical activity in the human heart can be modelled mathematically by the bidomain equations. The bidomain equations represent a multi-scale reaction-diffusion model that consists of a set of ordinary differential equations governing the dynamics at the cellular level coupled with a set of partial differential equations governing the dynamics at the tissue level. Significant computation is generally required to generate clinically useful data from the bidomain equations. Contemporary developments in computer architecture, in particular multi- and many-core computers and graphics processing units, have made such computations feasible. However, the zeal to take advantage of parallel architectures has typically caused another important aspect of numerical methods for the solution of differential equations to be overlooked, namely the convergence order. It is well known that higher-order methods are generally more efficient than lower-order ones when solutions are smooth and relatively high accuracy is desired. In these situations, serial implementations of high-order methods may remain surprisingly competitive with parallel implementations of low-order methods. In this paper, we examine the effect of order on the numerical solution of the bidomain equations in parallel. We find that high-order methods, in particular high-order time-integration methods with relatively better stability properties, tend to outperform their low-order counterparts, even when the latter are run in parallel. In other words, increasing the integration order often trumps increasing the available computational resources, especially when relatively high accuracy is desired.

  17. Accuracy of MFCC-Based Speaker Recognition in Series 60 Device

    Directory of Open Access Journals (Sweden)

    Pasi Fränti

    2005-10-01

    A fixed-point implementation of speaker recognition based on MFCC signal processing is considered. We analyze the numerical error of the MFCC and its effect on the recognition accuracy. Techniques to reduce the information loss in a converted fixed-point implementation are introduced. We increase the signal processing accuracy by adjusting the ratio of the representation accuracy of the operators and the signal. The signal processing error is found to be more important to the speaker recognition accuracy than the error in the classification algorithm. The results are verified by applying the alternative technique to speech data. We also discuss the specific programming requirements set by Symbian and the Series 60 platform.
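
    As context for the fixed-point analysis above, the sketch below implements a minimal floating-point MFCC front end (framing, windowing, power spectrum, mel filter bank, log, DCT); a fixed-point Series 60 port would be compared against a reference of this kind. Frame length, hop size and filter counts are assumed values, not those of the paper.

```python
# Minimal floating-point MFCC reference: framing, Hamming window, power
# spectrum, triangular mel filter bank, log energies, DCT.
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0**(m / 2595.0) - 1.0)

def mfcc(signal, fs=8000, frame_len=256, hop=128, n_filters=26, n_ceps=13):
    # frame and window
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len) + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    # power spectrum
    spec = np.abs(np.fft.rfft(frames, frame_len))**2
    # triangular mel filter bank
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, frame_len // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # log filter-bank energies and DCT -> cepstral coefficients
    energies = np.maximum(spec @ fbank.T, 1e-10)
    return dct(np.log(energies), type=2, axis=1, norm="ortho")[:, :n_ceps]

rng = np.random.default_rng(0)
fake_speech = rng.normal(size=8000)          # stand-in for one second of audio
print("MFCC matrix shape:", mfcc(fake_speech).shape)
```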

  18. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
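
    The predictor-corrector structure referred to as PE(CE)^m can be illustrated with textbook coefficients: a 2-step Adams-Bashforth predictor followed by an Adams-Moulton (trapezoidal) corrector applied m times to a simple ODE. The paper's contribution, replacing the classical coefficients with numerically derived ones of controllable accuracy, is not reproduced in this sketch.

```python
# Classical PE(CE)^m structure: Predict with Adams-Bashforth 2, then m rounds
# of Evaluate + Correct with Adams-Moulton (trapezoidal rule), on y' = -y.
import numpy as np

def f(t, y):
    return -y

def pece(y0, t0, t1, h, m=1):
    ts = np.arange(t0, t1 + 1e-12, h)
    ys = np.empty_like(ts)
    ys[0] = y0
    # start-up step with Heun's method to obtain the second history point
    k1 = f(ts[0], ys[0]); k2 = f(ts[0] + h, ys[0] + h * k1)
    ys[1] = ys[0] + 0.5 * h * (k1 + k2)
    for n in range(1, len(ts) - 1):
        fn, fnm1 = f(ts[n], ys[n]), f(ts[n - 1], ys[n - 1])
        y_next = ys[n] + h * (1.5 * fn - 0.5 * fnm1)        # P: Adams-Bashforth 2
        for _ in range(m):                                   # (E C)^m
            y_next = ys[n] + 0.5 * h * (f(ts[n + 1], y_next) + fn)  # C: trapezoid
        ys[n + 1] = y_next
    return ts, ys

ts, ys = pece(1.0, 0.0, 5.0, h=0.05, m=2)
print("max error vs exp(-t):", np.max(np.abs(ys - np.exp(-ts))))
```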

  19. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan

    2012-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  20. High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry

    Directory of Open Access Journals (Sweden)

    Guy Jean-Pierre Schumann

    2016-01-01

    Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet the basic requirements, and for many services and modeling studies they are not of high enough spatial resolution and lack accuracy in the vertical. On the other hand, LiDAR DEMs are of very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be to have a DEM technology that allows larger spatial coverage than LiDAR but without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m for an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis of floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower-resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of the instrument and data processing.

  1. Accuracy of the solution of the transfer equation for a plane layer of high optical thickness with strongly anisotropic scattering

    International Nuclear Information System (INIS)

    Konovalov, N.V.

    The accuracy of the calculation of the characteristics of a radiation field in a plane layer is investigated by solving the transfer equation as a function of the error in the specification of the scattering indicatrix. It is shown that a small error in the specification of the indicatrix can lead to a large error in the solution at large optical depths. An estimate is given for the region of optical thicknesses for which the emission field can be determined with a sufficient degree of accuracy from the transfer equation with a known error in the specification of the indicatrix. For an estimation of the error involved in various numerical methods, and also for a determination of the region of their applicability, the results of calculations for problems with a strongly anisotropic indicatrix are given

  2. Hyperbolic Method for Dispersive PDEs: Same High-Order of Accuracy for Solution, Gradient, and Hessian

    Science.gov (United States)

    Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki

    2016-01-01

    In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in a very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.

  3. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    Science.gov (United States)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to a spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 x 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  4. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    Science.gov (United States)

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance from robotic systems in the operating room promises higher accuracy, and hence demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and, thus, to an additional error source. The aim of this contribution is to examine whether the accuracy needed for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that the accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.

  5. Numerical methods for the Lévy LIBOR model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    2010-01-01

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.
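
    The Picard-iteration idea can be shown on a single scalar SDE, leaving aside the LIBOR-specific structure: along one fixed Brownian path, repeatedly substitute the previous iterate into the drift and diffusion integrals and compare with an Euler-Maruyama solution on the same path. The drift and diffusion functions, step count and iteration count below are illustrative assumptions only.

```python
# Picard iteration for dX_t = a(X_t) dt + b(X_t) dW_t along one fixed Brownian
# path: X^(k+1)(t) = X_0 + int a(X^(k)) ds + int b(X^(k)) dW, compared with an
# Euler-Maruyama solution on the same path (the fixed point of this map).
import numpy as np

a = lambda x: 0.05 * x          # drift (assumed, geometric-Brownian-like)
b = lambda x: 0.2 * x           # diffusion (assumed)
X0, T, n = 1.0, 1.0, 2000
dt = T / n
rng = np.random.default_rng(42)
dW = rng.normal(0.0, np.sqrt(dt), n)

# Reference: Euler-Maruyama on the same Brownian path
X_euler = np.empty(n + 1); X_euler[0] = X0
for i in range(n):
    X_euler[i + 1] = X_euler[i] + a(X_euler[i]) * dt + b(X_euler[i]) * dW[i]

# Picard iterations: start from the constant path X^(0) = X0
X = np.full(n + 1, X0)
for k in range(4):
    drift = np.concatenate(([0.0], np.cumsum(a(X[:-1]) * dt)))
    diffu = np.concatenate(([0.0], np.cumsum(b(X[:-1]) * dW)))
    X = X0 + drift + diffu
    print(f"iteration {k + 1}: max |X_picard - X_euler| = "
          f"{np.max(np.abs(X - X_euler)):.4f}")
```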

  6. Numerical Methods for the Lévy LIBOR Model

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in the Lévy LIBOR model of Eberlein and Özkan (2005). Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. We propose an alternative approximation scheme based on Picard iterations. Our approach is similar in accuracy to the full numerical solution, but with the feature that each rate is, unlike in the standard method, evolved independently of the other rates in the term structure. This enables simultaneous calculation of derivative prices of different maturities using parallel computing. We include numerical illustrations of the accuracy and speed of our method pricing caplets.

  7. Numerical analysis for multi-group neutron-diffusion equation using Radial Point Interpolation Method (RPIM)

    International Nuclear Information System (INIS)

    Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong

    2017-01-01

    Highlights: • Employing the Radial Point Interpolation Method (RPIM) in numerical analysis of multi-group neutron-diffusion equation. • Establishing mathematical formation of modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis for the multi-group neutron-diffusion equation. The benchmark calculations are performed for the 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits the stable accuracy with respect to the reference solutions compared with the other solution. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux is significantly increased in the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at (x,0) and (0,y) positions, and it may be necessary to introduce additional strategy (e.g., the method using fictitious points and
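
    The interpolation building block behind the RPIM can be demonstrated with plain radial basis function interpolation of 1D data using multiquadric (MQ) and Gaussian (EXP) kernels, showing how the shape parameter affects the reconstruction error, echoing the sensitivity reported above. This is only the interpolation step with assumed node counts and shape parameters, not the paper's mesh-free diffusion solver.

```python
# Radial basis function interpolation of scattered 1D data with multiquadric
# (MQ) and Gaussian (EXP) kernels; the shape parameter c controls the error.
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, kernel, c):
    r_nn = np.abs(x_nodes[:, None] - x_nodes[None, :])
    r_en = np.abs(x_eval[:, None] - x_nodes[None, :])
    if kernel == "MQ":
        phi_nn, phi_en = np.sqrt(r_nn**2 + c**2), np.sqrt(r_en**2 + c**2)
    else:                                   # "EXP" (Gaussian)
        phi_nn, phi_en = np.exp(-(r_nn / c)**2), np.exp(-(r_en / c)**2)
    coeffs = np.linalg.solve(phi_nn, f_nodes)
    return phi_en @ coeffs

f = lambda x: np.cos(2 * np.pi * x)
x_nodes = np.linspace(0.0, 1.0, 15)
x_eval = np.linspace(0.0, 1.0, 400)
for kernel in ("MQ", "EXP"):
    for c in (0.05, 0.1, 0.2):
        err = np.max(np.abs(rbf_interpolate(x_nodes, f(x_nodes), x_eval, kernel, c)
                            - f(x_eval)))
        print(f"{kernel} kernel, shape parameter c = {c:4.2f}: max error = {err:.2e}")
```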

  8. Numerical Study of Transonic Axial Flow Rotating Cascade Aerodynamics – Part 1: 2D Case

    Directory of Open Access Journals (Sweden)

    Irina Carmen ANDREI

    2014-06-01

    The purpose of this paper is to present a 2D study of the numerical simulation of the flow within a transonic, highly loaded rotating cascade from an axial compressor. In order to describe the intricate flow pattern of a complex geometry under specific conditions of cascade loading and operation, an appropriately accurate flow model is a must. For this purpose, the Navier-Stokes equations were used as the flow model; from the computational point of view, the mathematical support is completed by a turbulence model. A numerical comparison has been performed for different turbulence models (e.g. the KE, KO, Reynolds Stress and Spalart-Allmaras models). The convergence history was monitored in order to assess the numerical accuracy. The force vector is reported in order to express the aerodynamics of the flow within the rotating cascade at the running regime, in terms of lift and drag. The numerical results, expressed by plots of the most relevant flow parameters, have been compared. It turns out that selecting complex flow models and appropriate turbulence models, in conjunction with CFD techniques, allows the best computational accuracy of the numerical results to be obtained. This paper carries out the 2D study; a prospective 3D study of the same architecture is intended.

  9. A mathematical model and numerical solution of interface problems for steady state heat conduction

    Directory of Open Access Journals (Sweden)

    Z. Muradoglu Seyidmamedov

    2006-01-01

    (isolation Ωδ tends to zero. For each case, the local truncation errors of the conservative finite difference scheme used are estimated on the nonuniform grid. A fast direct solver has been applied for the interface problems with a piecewise constant but discontinuous coefficient k = k(x). The presented numerical results illustrate the high accuracy and show the applicability of the given approach.

  10. A new entropy condition for increasing accuracy and convergence rate of TVD scheme

    International Nuclear Information System (INIS)

    Rashidi, M.M.; Esfahanian, V.

    2005-01-01

    In this paper, a TVD method is applied to the numerical solution of steady axisymmetric hypersonic viscous flow over a blunt cone using the TLNS equations. In TVD schemes, the artificial viscosity (AV) is implemented using an entropy condition. For hypersonic flow, the Yee entropy condition shows relatively better stability and convergence rate than others. This paper presents a new entropy condition for increasing the accuracy and convergence rate of the TVD scheme which does not have the difficulty associated with the Yee entropy condition for viscous flow in the hypersonic regime. The entropy condition increases the AV in the shocks and decreases the AV in the smooth regions. The numerical solution has been compared with the Beam and Warming shock-fitting approach, indicating better numerical accuracy. (author)

  11. High accuracy of family history of melanoma in Danish melanoma cases.

    Science.gov (United States)

    Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie

    2015-12-01

    The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase the awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first-degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first-degree relatives. In 181 probands we validated the negative family history of melanoma in 748 first-degree relatives and found only 1 case of melanoma that was not reported, in a 3-case melanoma family. Melanoma patients in Denmark report family history of melanoma in first- and second-degree relatives with a high level of accuracy, with a true positive predictive value between 77 and 87%. In 99% of probands reporting a negative family history of melanoma in first-degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives should be verified if possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and assessment of melanoma risk in the family.

  12. A time-spectral approach to numerical weather prediction

    Science.gov (United States)

    Scheffel, Jan; Lindvall, Kristoffer; Yik, Hiu Fai

    2018-05-01

    Finite difference methods are traditionally used for modelling the time domain in numerical weather prediction (NWP). Time-spectral solution is an attractive alternative for reasons of accuracy and efficiency and because time step limitations associated with causal CFL-like criteria, typical for explicit finite difference methods, are avoided. In this work, the Lorenz 1984 chaotic equations are solved using the time-spectral algorithm GWRM (Generalized Weighted Residual Method). Comparisons of accuracy and efficiency are carried out for both explicit and implicit time-stepping algorithms. It is found that the efficiency of the GWRM compares well with these methods, in particular at high accuracy. For perturbative scenarios, the GWRM was found to be as much as four times faster than the finite difference methods. A primary reason is that the GWRM time intervals typically are two orders of magnitude larger than those of the finite difference methods. The GWRM has the additional advantage to produce analytical solutions in the form of Chebyshev series expansions. The results are encouraging for pursuing further studies, including spatial dependence, of the relevance of time-spectral methods for NWP modelling.

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    Science.gov (United States)

    Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration

    2016-03-01

    Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence are known to come from solving Einstein's equations numerically without any approximations. However, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms. However, the work of Nitz et al. (2013) demonstrated that there is disagreement between these models. We present a careful follow-up study on the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.

  14. Diagnostic accuracy of cone-beam computed tomography scans with high- and low-resolution modes for the detection of root perforations.

    Science.gov (United States)

    Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza

    2018-03-01

    This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems with high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. In strip perforations, the accuracies of the low- and high-resolution modes were 75% and 83% for NewTom 3G and 67% and 69% for Cranex 3D. In root perforations, the accuracies of the low- and high-resolution modes were 79% and 83% for NewTom 3G and 56% and 73% for Cranex 3D. The accuracy of the 2 CBCT systems was different for the detection of strip and root perforations. The Cranex 3D had non-significantly higher accuracy than the NewTom 3G. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.

  15. Factors Determining the Inter-observer Variability and Diagnostic Accuracy of High-resolution Manometry for Esophageal Motility Disorders.

    Science.gov (United States)

    Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung

    2018-01-30

    Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and 8 trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at 3 institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode in both high-resolution pressure topography (HRPT) and conventional line tracing formats were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the k value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8% and rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than distal esophageal spasm and jackhammer esophagus.

  16. DIRECT GEOREFERENCING : A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2012-07-01

    Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method does not require ground control points (GCPs) or aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the old method, this method has three main advantages: faster data processing, a simpler workflow and a less expensive project, at the same accuracy. Direct georeferencing uses two devices, GPS and IMU. The GPS records the camera coordinates (X, Y, Z), and the IMU records the camera orientation (omega, phi, kappa). Both sets of parameters are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps in photogrammetric projects, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, while the GPS/IMU was the IGI AeroControl. Nineteen independent check points (ICPs) were used to determine the accuracy. The horizontal accuracy is 0.356 meters and the vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map scale project.

  17. Coordinate metrology accuracy of systems and measurements

    CERN Document Server

    Sładek, Jerzy A

    2016-01-01

    This book focuses on effective methods for assessing the accuracy of both coordinate measuring systems and coordinate measurements. It mainly reports on original research work conducted by Sladek’s team at Cracow University of Technology’s Laboratory of Coordinate Metrology. The book describes the implementation of different methods, including artificial neural networks, the Matrix Method, the Monte Carlo method and the virtual CMM (Coordinate Measuring Machine), and demonstrates how these methods can be effectively used in practice to gauge the accuracy of coordinate measurements. Moreover, the book includes an introduction to the theory of measurement uncertainty and to key techniques for assessing measurement accuracy. All methods and tools are presented in detail, using suitable mathematical formulations and illustrated with numerous examples. The book fills an important gap in the literature, providing readers with an advanced text on a topic that has been rapidly developing in recent years. The book...

  18. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available, however at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). - Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • Newell, sparse integration and asymptotic method are compared for all ranges

  19. High-accuracy identification and bioinformatic analysis of in vivo protein phosphorylation sites in yeast

    DEFF Research Database (Denmark)

    Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen

    2009-01-01

    Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling in cell culture-labeling, we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...

  20. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)

  1. Solution of the main problem of the lunar physical libration by a numerical method

    Science.gov (United States)

    Zagidullin, Arthur; Petrova, Natalia; Nefediev, Yurii

    2016-10-01

    The series of lunar programs requires a highly accurate ephemeris of the Moon at any given time. In light of the new accuracy requirements, the demands placed on the lunar physical libration theory also increase. Kazan University has experience in constructing the lunar rotation theory by an analytical approach. An analytical theory is very informative in terms of interpreting the observed data, but it is inferior in accuracy to numerical theories. The most accurate numerical ephemeris of the Moon is by far the DE430/431 ephemeris built in the USA. It takes into account a large number of subtle effects both in the external perturbations of the Moon and in its internal structure. Russian scientists therefore face the task of creating their own numerical theory that would be consistent with the American ephemeris. On the other hand, even the practical application of the American ephemeris requires a deep understanding of the principles of its construction and its intelligent application. As the first step, we constructed a theory in the framework of the main problem. Because we compare our theory with the analytical theory of Petrova (1996), all the constants and the theory of orbital motion are taken to be identical to those of the analytical theory. The maximum precision the model can provide is 0.01 seconds of arc, which is insufficient to meet the accuracy of modern observations, but this model provides the necessary basis for further development. We have constructed the system of libration equations, for which a numerical integrator was developed. The internal accuracy of the software integrator is several nanoseconds. When compared with the data of Petrova, differences of the order of 1 second are observed at the resonant frequencies. The reason, we believe, lies in the inaccuracy of the analytical theory. We carried out a comparison with Eroshkin's data [2], which gave satisfactory agreement, and with Rambaux's data. In the latter case, as expected

  2. Topics in the numerical simulation of high temperature flows

    International Nuclear Information System (INIS)

    Cheret, R.; Dautray, R.; Desgraz, J.C.; Mercier, B.; Meurant, G.; Ovadia, J.; Sitt, B.

    1984-06-01

    In the fields of inertial confinement fusion, astrophysics, detonation, and other high energy phenomena, one has to deal with multifluid flows involving high temperatures, high speeds and strong shocks initiated e.g. by chemical reactions or even by thermonuclear reactions. The simulation of multifluid flows is reviewed: first come Lagrangian methods, which have been successfully applied in the past. Then we describe our experience with newer adaptive mesh methods, originally designed to increase the accuracy of Lagrangian methods. Finally, some facts about Eulerian methods are recalled, with emphasis on the EAD scheme, which has recently been extended to the elasto-plastic case. High temperature flows are then considered, described by the equations of radiation hydrodynamics. We show how conservation of energy can be preserved while solving the radiative transfer equation via the Monte Carlo method. For detonation, some models introduced to describe the initiation of detonation in heterogeneous explosives are presented. Finally, we say a few words about the instability of these flows

  3. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    Science.gov (United States)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based in post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on the questions: (i) How well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) How accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches, and motivate future modeling efforts.
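
    The model-versus-NR comparisons mentioned here are typically quantified by a noise-weighted overlap ("match"). The sketch below is a simplified version that assumes a flat (white) noise spectrum and maximises only over a relative time shift; real analyses use a detector power spectral density and also maximise over phase, so this is illustrative only.

```python
import numpy as np

def white_noise_match(h1, h2):
    """Overlap between two waveforms, maximised over a relative time shift,
    with a flat noise spectrum (no detector PSD).  Values near 1 mean h2
    reproduces h1 well for detection purposes."""
    n = 1 << int(np.ceil(np.log2(len(h1) + len(h2))))      # zero-pad to avoid wrap-around
    corr = np.fft.irfft(np.fft.rfft(h1, n) * np.conj(np.fft.rfft(h2, n)), n)
    return np.max(corr) / np.sqrt(np.sum(h1**2) * np.sum(h2**2))

# Example: a damped sinusoid against a slightly detuned copy of itself.
t = np.arange(0, 1, 1.0 / 4096)
h_a = np.exp(-2 * t) * np.sin(2 * np.pi * 100 * t)
h_b = np.exp(-2 * t) * np.sin(2 * np.pi * 101 * t)
print("match =", white_noise_match(h_a, h_b))
```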

  4. High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be

    CERN Multimedia

    2002-01-01

    State-of-the-art, three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as $^{12}$Be to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution due to the required accuracy (10 keV) and the short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...

  5. High Accuracy Beam Current Monitor System for CEBAF'S Experimental Hall A

    International Nuclear Information System (INIS)

    J. Denard; A. Saha; G. Lavessiere

    2001-01-01

    The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental Halls. In Hall A, all experiments require continuous, non-invasive current measurements and a few experiments require an absolute accuracy of 0.2 % in the current range from 1 to 180 (micro)A. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 (micro)A/V, but its offset drifts at the (micro)A level over time preclude its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 (micro)A of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. The method for accurately measuring the linearity is described

  6. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION OF BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    Full Text Available In this article the numerical and analytical methods of calculation of the stressed-and-strained state of bridge constructions are considered. The task of increasing the reliability and accuracy of the numerical method, and its solution by means of calculations in two bases, is formulated. The analytical solution of the differential equation of deformation of a ferro-concrete plate under the action of local loads is also obtained.

  7. High-accuracy determination of the neutron flux at n{sub T}OF

    Energy Technology Data Exchange (ETDEWEB)

    Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. [Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. 
[Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. [Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)

    2013-12-15

    The neutron flux of the n{sub T}OF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross sections standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n{sub T}OF neutron beam. This is a pre-requisite for the high accuracy of cross section measurements at n{sub T}OF. An unexpected anomaly in the neutron-induced fission cross section of {sup 235}U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)

  8. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    International Nuclear Information System (INIS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-01-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on the REFPROP database for an accurate estimation of the non-linear behaviors of thermodynamic and fluid transport properties at transcritical conditions. Based on the look-up table method, we propose a numerical method that achieves high-order spatial accuracy, a spurious-oscillation-free property, and the capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining the spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of the total energy equation to achieve the spurious-pressure-oscillation-free property with an arbitrary equation of state, including the present look-up table method. Flow problems with and without physical diffusion are employed as numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.
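
    As a sketch of the tabulated look-up idea (not the paper's implementation), the snippet below builds a small (pressure, temperature) density table and queries it by bilinear interpolation. The grid, the fluid and the ideal-gas filler values are assumptions; the paper tabulates real-fluid properties from the REFPROP database instead.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical property table on a (pressure, temperature) grid.  The ideal-gas
# filler below only makes the sketch self-contained.
P = np.linspace(1e6, 10e6, 50)                 # Pa
T = np.linspace(100.0, 400.0, 80)              # K
R_N2 = 296.8                                   # J/(kg K), nitrogen (assumed fluid)
rho_table = P[:, None] / (R_N2 * T[None, :])

rho_lookup = RegularGridInterpolator((P, T), rho_table, method="linear")

# During the flow solve, thermodynamic properties are read from the table
# instead of evaluating an equation of state at every grid point and time step.
print(rho_lookup([[4.0e6, 150.0]]))            # density at 4 MPa, 150 K
```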

  9. Accuracy of binary black hole waveform models for aligned-spin binaries

    Science.gov (United States)

    Kumar, Prayush; Chu, Tony; Fong, Heather; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-05-01

    Coalescing binary black holes are among the primary science targets for second generation ground-based gravitational wave detectors. Reliable gravitational waveform models are central to detection of such systems and subsequent parameter estimation. This paper performs a comprehensive analysis of the accuracy of recent waveform models for binary black holes with aligned spins, utilizing a new set of 84 high-accuracy numerical relativity simulations. Our analysis covers comparable mass binaries (mass-ratio 1 ≤ q ≤ 3), and samples independently both black hole spins up to a dimensionless spin magnitude of 0.9 for equal-mass binaries and 0.85 for unequal mass binaries. Furthermore, we focus on the high-mass regime (total mass ≳ 50 M⊙). The two most recent waveform models considered (PhenomD and SEOBNRv2) both perform very well for signal detection, losing less than 0.5% of the recoverable signal-to-noise ratio ρ, except that SEOBNRv2's efficiency drops slightly for both black hole spins aligned at large magnitude. For parameter estimation, modeling inaccuracies of the SEOBNRv2 model are found to be smaller than systematic uncertainties for moderately strong GW events up to roughly ρ ≲ 15. PhenomD's modeling errors are found to be smaller than SEOBNRv2's, and are generally irrelevant for ρ ≲ 20. Both models' accuracy deteriorates with increased mass ratio, and when at least one black hole spin is large and aligned. The SEOBNRv2 model shows a pronounced disagreement with the numerical relativity simulation in the merger phase, for unequal masses and simultaneously both black hole spins very large and aligned. Two older waveform models (PhenomC and SEOBNRv1) are found to be distinctly less accurate than the more recent PhenomD and SEOBNRv2 models. Finally, we quantify the bias expected from all four waveform models during parameter estimation for several recovered binary parameters: chirp mass, mass ratio, and effective spin.

  10. High accuracy microwave frequency measurement based on single-drive dual-parallel Mach-Zehnder modulator

    DEFF Research Database (Denmark)

    Zhao, Ying; Pang, Xiaodan; Deng, Lei

    2011-01-01

    A novel approach for broadband microwave frequency measurement by employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, conventional frequency-to-power mapping technique is developed by performing a...... 10−3 relative error. This high accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications....

  11. Innovative Technique for High-Accuracy Remote Monitoring of Surface Water

    Science.gov (United States)

    Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.

    2016-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with high accuracy and precision, from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  12. Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method

    Science.gov (United States)

    Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.

    2017-02-01

    Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in areas with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well in dealing with such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of the Godunov method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, and this brings up the question of ensuring the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method has been applied while creating the computational module. This approach makes possible compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.

  13. Numerical optimization of circulation control airfoil at high subsonic speed

    Science.gov (United States)

    Tai, T. C.; Kidwell, G. H., Jr.

    1984-01-01

    A numerical procedure for optimizing the design of the circulation control airfoil for use at high subsonic speeds is presented. The procedure consists of an optimization scheme coupled with a viscous potential flow analysis for the blowing jet. The desired airfoil is defined by a combination of three baseline shapes (cambered ellipse and cambered ellipse with drooped and spiraled trailing edges). The coefficients of these shapes are used as design variables in the optimization process. Under the constraints of lift augmentation and lift-to-drag ratios, the airfoil, optimized at free-stream Mach 0.54 and alpha = -2 degrees, can be characterized as a cambered ellipse with a drooped trailing edge. Experimental tests support the performance improvement predicted by the numerical optimization.
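
    A schematic sketch of the optimisation setup described above: the design variables are the weights of the three baseline shapes, optimised under lift-related constraints. The aerodynamic functions and all numbers below are made-up algebraic stand-ins; in the study these quantities come from the viscous potential-flow analysis with the blowing jet.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up stand-ins for the aerodynamic responses (the real values come from
# the viscous potential-flow analysis of the blown airfoil).
def lift_to_drag(w):
    return 40.0 - 5.0 * (w[0] - 0.6)**2 - 8.0 * (w[1] - 0.3)**2 - 6.0 * (w[2] - 0.1)**2

def lift_augmentation(w):
    return 2.0 * w[0] + 3.0 * w[1] + 1.5 * w[2]

result = minimize(
    lambda w: -lift_to_drag(w),                   # maximise L/D
    x0=np.array([0.4, 0.4, 0.2]),                 # initial weights of the three baseline shapes
    constraints=[
        {"type": "ineq", "fun": lambda w: lift_augmentation(w) - 1.5},  # minimum lift augmentation
        {"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},             # weights sum to one
    ],
    bounds=[(0.0, 1.0)] * 3,
    method="SLSQP",
)
print(result.x)                                   # optimised shape weights
```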

  14. Accuracy of clinical tests in the diagnosis of anterior cruciate ligament injury: A systematic review

    NARCIS (Netherlands)

    M.S. Swain (Michael S.); N. Henschke (Nicholas); S.J. Kamper (Steven); A.S. Downie (Aron S.); B.W. Koes (Bart); C. Maher (Chris)

    2014-01-01

    Background: Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Methods: Study Design: Systematic

  15. High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique

    International Nuclear Information System (INIS)

    Atkinson, I.

    1995-01-01

    Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
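
    For reference, TOFD converts the arrival time of the signal diffracted at a crack tip into a depth. A minimal sketch of the standard geometry (tip midway between a symmetric probe pair, wedge and probe delays ignored) follows; the wave speed and probe separation are assumed values, not those of the CRDM inspection.

```python
import math

def tofd_depth(t, c=5900.0, half_separation=0.02):
    """Depth (m) of a diffracting crack tip below the scan surface.

    t               : time of flight of the tip-diffracted signal (s)
    c               : longitudinal wave speed, assumed 5900 m/s for steel
    half_separation : half the probe centre-to-centre separation S (m)

    Assumes the tip lies midway between the probes and ignores wedge/probe delays.
    """
    half_path = c * t / 2.0
    return math.sqrt(half_path**2 - half_separation**2)

# Example: an arrival at 8.2 microseconds with a 40 mm probe separation.
print(tofd_depth(8.2e-6) * 1e3, "mm")
```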

  16. Numerical simulation of transient, incongruent vaporization induced by high power laser

    International Nuclear Information System (INIS)

    Tsai, C.H.

    1981-01-01

    A mathematical model and numerical calculations were developed to solve the heat and mass transfer problems specifically for uranium oxide subject to laser irradiation. The model can easily be modified for other heat sources and/or other materials. In the uranium-oxygen system, oxygen is the preferentially vaporizing component, and as a result of the finite mobility of oxygen in the solid, an oxygen deficiency is set up near the surface. Because of the bivariant behavior of uranium oxide, the heat transfer problem and the oxygen diffusion problem are coupled, and a numerical method of simultaneously solving the two boundary value problems is studied. The temperature dependence of the thermal properties and oxygen diffusivity, as well as the highly ablative effect on the surface, leads to considerable non-linearities in both the governing differential equations and the boundary conditions. Based on the earlier work done in this laboratory by Olstad and Olander on iron and on zirconium hydride, the generality of the problem is expanded and the efficiency of the numerical scheme is improved. The finite difference method, along with some advanced numerical techniques, is found to be an efficient way to solve this problem

  17. Measurement system with high accuracy for laser beam quality.

    Science.gov (United States)

    Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming

    2015-05-20

    Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of the mirror to change the laser beam propagation direction, so that the beam can be made perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measuring deviation of the M2 factor is less than 0.6%.
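
    For context, this is the standard way the M2 factor is obtained (not necessarily the paper's specific algorithm): the measured beam radius along the propagation axis is fitted to the hyperbolic caustic. The wavelength and the synthetic data below are assumptions used only to make the sketch runnable.

```python
import numpy as np
from scipy.optimize import curve_fit

wavelength = 1064e-9                      # assumed laser wavelength (m)

def beam_radius(z, w0, z0, m2):
    """Caustic of a real beam: w(z)^2 = w0^2 * (1 + (M^2 * lambda * (z - z0) / (pi * w0^2))^2)."""
    return np.sqrt(w0**2 * (1.0 + (m2 * wavelength * (z - z0) / (np.pi * w0**2))**2))

# Synthetic beam-radius measurements along the propagation axis (in metres).
z = np.linspace(-0.2, 0.2, 15)
w_meas = beam_radius(z, 150e-6, 0.01, 1.3) + np.random.normal(0.0, 1e-6, z.size)

popt, _ = curve_fit(beam_radius, z, w_meas, p0=[100e-6, 0.0, 1.0])
print("fitted M^2 =", popt[2])
```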

  18. Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.

    Science.gov (United States)

    Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E

    2016-02-01

    Lacerations to the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, there has been no publication establishing the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created. Twenty-seven extensor tendons were sharply transected. The remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and controls were identified correctly with dynamic imaging as either injury models that had a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. In clinically suspected cases of acute extensor tendon injury, scanning by high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static imaging. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and

  19. Numerical solution of distributed order fractional differential equations

    Science.gov (United States)

    Katsikadelis, John T.

    2014-02-01

    In this paper a method for the numerical solution of distributed order FDEs (fractional differential equations) of a general form is presented. The method applies to both linear and nonlinear equations. The Caputo type fractional derivative is employed. The distributed order FDE is approximated with a multi-term FDE, which is then solved by adjusting appropriately the numerical method developed for multi-term FDEs by Katsikadelis. Several example equations are solved and the response of mechanical systems described by such equations is studied. The convergence and the accuracy of the method for linear and nonlinear equations are demonstrated through well corroborated numerical results.
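
    A sketch of the reduction step described above, assuming the distributed-order operator is replaced by a quadrature over the order variable (here Gauss–Legendre on [0, 1]); the weight function is an arbitrary example, and the resulting multi-term FDE still has to be solved by a dedicated scheme such as Katsikadelis' multi-term method.

```python
import numpy as np

def multi_term_approximation(phi, n_nodes=8):
    """Replace the distributed-order operator  int_0^1 phi(a) D^a y(t) da  by the
    multi-term operator  sum_k c_k D^{a_k} y(t)  using Gauss-Legendre quadrature
    in the order variable a.  Returns the orders a_k and coefficients c_k."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes and weights on [-1, 1]
    a = 0.5 * (x + 1.0)                               # map nodes to [0, 1]
    c = 0.5 * w * phi(a)
    return a, c

# Example weight function phi(a) = 6 a (1 - a), which integrates to 1 on [0, 1].
orders, coeffs = multi_term_approximation(lambda a: 6.0 * a * (1.0 - a))
print(orders)
print(coeffs, coeffs.sum())                           # coefficients sum to ~1
```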

  20. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for uranium dioxide isolated rods and comparing the results with empirical values. (orig.)

  1. Numerical and experimental study of blowing jet on a high lift airfoil

    Science.gov (United States)

    Bobonea, A.; Pricop, M. V.

    2013-10-01

    Active manipulation of separated flows over airfoils at moderate and high angles of attack in order to improve efficiency or performance has been the focus of a number of numerical and experimental investigations for many years. One of the main methods used in active flow control is the usage of blowing devices with constant and pulsed blowing. Through CFD simulation over a 2D high-lift airfoil, this study is trying to highlight the impact of pulsed blowing over its aerodynamic characteristics. The available wind tunnel data from INCAS low speed facility are also beneficial for the validation of the numerical analysis. This study intends to analyze the impact of the blowing jet velocity and slot geometry on the efficiency of an active flow control.

  2. A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations

    Science.gov (United States)

    Thalhammer, Mechthild; Abhau, Jochen

    2012-01-01

    As a basic principle, the benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive small parameter, for both the Fourier pseudo-spectral and the finite element method. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose the time stepsizes sufficiently small so that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively

  3. Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography

    Directory of Open Access Journals (Sweden)

    Jonathan Jones

    2018-03-01

    Full Text Available This paper describes the advantages and enhanced accuracy that thermography provides to high temperature mechanical testing. The technique is not only used to monitor, but also to control, test specimen temperatures, where the infra-red technique enables accurate non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are applied over a 200–800 °C temperature range to pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures, previously unobtainable by conventional thermocouples or even more modern pyrometers, that thermography can deliver. As a result, the speed and accuracy of thermal profiling, thermal gradient measurements and cold/hot spot identification using the technique have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional, previously unknown effects such as thermocouple shadowing, preferential crack tip heating within an induction coil, and the fundamental response time of individual measurement techniques, which are investigated further.

  4. High-precision numerical integration of equations in dynamics

    Science.gov (United States)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    An important requirement for the process of solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
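
    A toy illustration of the Taylor series method for a system that already has a polynomial right-hand side (the harmonic oscillator). The recurrence for the Taylor coefficients is specific to this example, and the order/step-size control discussed in the paper is omitted.

```python
import numpy as np

def taylor_step(x0, v0, h, order=20):
    """One step of the Taylor series method for the polynomial system
         x' = v,   v' = -x      (harmonic oscillator).
    The Taylor coefficients follow the recurrence
         a_{k+1} = b_k / (k + 1),   b_{k+1} = -a_k / (k + 1),
    and the truncated series is summed at t = h."""
    a = np.zeros(order + 1)
    b = np.zeros(order + 1)
    a[0], b[0] = x0, v0
    for k in range(order):
        a[k + 1] = b[k] / (k + 1)
        b[k + 1] = -a[k] / (k + 1)
    powers = h ** np.arange(order + 1)
    return a @ powers, b @ powers

# Integrate for ~one period and compare with the exact solution cos(t).
x, v, h, steps = 1.0, 0.0, 0.1, 63
for _ in range(steps):
    x, v = taylor_step(x, v, h)
print(x, np.cos(steps * h))        # agree to near machine precision
```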

  5. Technical accuracy of a neuronavigation system measured with a high-precision mechanical micromanipulator.

    Science.gov (United States)

    Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R

    1997-12-01

    This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.

  6. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    Science.gov (United States)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high rate global navigation satellite system-precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A combined model based on wavelet packet transform (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and of the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can be used to accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
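
    A sketch of the de-noising half of such a model, assuming PyWavelets and an ordinary discrete wavelet transform as a stand-in for the wavelet packet transform used in the paper; the neural-network prediction stage, wavelet choice and threshold rule are all illustrative assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of a displacement record (DWT stand-in
    for the paper's wavelet packet transform; the NN prediction stage is omitted)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise level from finest scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))          # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example: a noisy 10 Hz displacement series containing a 0.5 Hz oscillation.
t = np.arange(0.0, 60.0, 0.1)
x = 0.01 * np.sin(2 * np.pi * 0.5 * t) + np.random.normal(0.0, 0.005, t.size)
print(wavelet_denoise(x)[:5])
```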

  7. Experimental and numerical studies of high-velocity impact fragmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kipp, M.E.; Grady, D.E.; Swegle, J.W.

    1993-08-01

    Developments are reported in both experimental and numerical capabilities for characterizing the debris spray produced in penetration events. We have performed a series of high-velocity experiments specifically designed to examine the fragmentation of the projectile during impact. High-strength, well-characterized steel spheres (6.35 mm diameter) were launched with a two-stage light-gas gun to velocities in the range of 3 to 5 km/s. Normal impact with PMMA plates, thicknesses of 0.6 to 11 mm, applied impulsive loads of various amplitudes and durations to the steel sphere. Multiple flash radiography diagnostics and recovery techniques were used to assess size, velocity, trajectory and statistics of the impact-induced fragment debris. Damage modes to the primary target plate (plastic) and to a secondary target plate (aluminum) were also evaluated. Dynamic fragmentation theories, based on energy-balance principles, were used to evaluate local material deformation and fracture state information from CTH, a three-dimensional Eulerian solid dynamics shock wave propagation code. The local fragment characterization of the material defines a weighted fragment size distribution, and the sum of these distributions provides a composite particle size distribution for the steel sphere. The calculated axial and radial velocity changes agree well with experimental data, and the calculated fragment sizes are in qualitative agreement with the radiographic data. A secondary effort involved the experimental and computational analyses of normal and oblique copper ball impacts on steel target plates. High-resolution radiography and witness plate diagnostics provided impact motion and statistical fragment size data. CTH simulations were performed to test computational models and numerical methods.

  8. Accuracy and high-speed technique for autoprocessing of Young's fringes

    Science.gov (United States)

    Chen, Wenyi; Tan, Yushan

    1991-12-01

    In this paper, an accurate and high-speed method for auto-processing of Young's fringes is proposed. A group of 1-D sampled intensity values along three or more different directions is taken from the Young's fringes, and the fringe spacing in each direction is obtained by a 1-D FFT. The two directions that have the smaller fringe spacings are selected from all directions. The accurate fringe spacings along these two directions are then obtained by using the orthogonal coherent phase detection (OCPD) technique. The actual spacing and angle of the Young's fringes can therefore be calculated. In this paper, the principle of OCPD is introduced in detail. The accuracy of the method is evaluated theoretically and experimentally.
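
    A minimal sketch of the first (coarse) stage described above: estimating the fringe spacing of a 1-D intensity profile from the dominant peak of its FFT. The OCPD refinement is not reproduced, and the sampling numbers are arbitrary.

```python
import numpy as np

def fringe_spacing(profile, dx):
    """Coarse fringe-spacing estimate from a 1-D intensity profile: the peak of
    the FFT magnitude (excluding the DC bin) gives the dominant spatial
    frequency, whose reciprocal is the spacing."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    freqs = np.fft.rfftfreq(len(profile), d=dx)
    k = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return 1.0 / freqs[k]

# Synthetic fringe pattern with 0.25 mm spacing, sampled every 0.01 mm.
x = np.arange(0.0, 10.0, 0.01)               # mm
intensity = 1.0 + np.cos(2 * np.pi * x / 0.25)
print(fringe_spacing(intensity, 0.01))       # ~0.25 mm
```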

  9. QUALITY LOSS FUNCTION FOR MACHINING PROCESS ACCURACY

    Directory of Open Access Journals (Sweden)

    Adrian Stere PARIS

    2017-05-01

    Full Text Available The main goal of the paper is to propose new quality loss models for machining process accuracy in the classical case "zero the best", of MMF and Harris type. In addition, a numerical example illustrates that the chosen regression functions are directly linked with the quality loss of the manufacturing process. The proposed models can be adapted for the "maximal the best" and "nominal the best" cases.
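
    For orientation, the classical Taguchi loss for the "zero the best" (smaller-the-better) case, which the proposed MMF- and Harris-type models generalise, can be written as follows; the symbols are the usual ones and are not taken from the paper.

```latex
% Smaller-the-better (``zero the best'') quality loss and its expected value:
\[
  L(y) = k\,y^{2},
  \qquad
  \bar{L} = k\left(\bar{y}^{2} + s^{2}\right),
\]
% where $y$ is the accuracy deviation from the ideal value $0$, $k$ is a cost
% coefficient, and $\bar{y}$, $s$ are the sample mean and standard deviation.
```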

  10. Component-oriented approach to the development and use of numerical models in high energy physics

    International Nuclear Information System (INIS)

    Amelin, N.S.; Komogorov, M.Eh.

    2002-01-01

    We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized as the NiMax software system. The discussed concepts are illustrated by numerous examples from the system user session. In the appendix chapter we describe the physics and numerical algorithms of the model components used to perform simulation of hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. This report serves as an early release of the NiMax manual, mainly for model component users

  11. Numerical Solution of Diffusion Models in Biomedical Imaging on Multicore Processors

    Directory of Open Access Journals (Sweden)

    Luisa D'Amore

    2011-01-01

    Full Text Available In this paper, we consider nonlinear partial differential equations (PDEs) of diffusion/advection type underlying most problems in image analysis. As a case study, we address the segmentation of medical structures. We perform a comparative study of numerical algorithms arising from using the semi-implicit and the fully implicit discretization schemes. Comparison criteria take into account both the accuracy and the efficiency of the algorithms. As measures of accuracy, we consider the Hausdorff distance and the residuals of the numerical solvers, while as measures of efficiency we consider convergence history, execution time, speedup, and parallel efficiency. This analysis is carried out in a multicore-based parallel computing environment.

  12. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascaded Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
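
    To make the filter structure concrete, here is the textbook (non-parallel) N-stage CIC interpolator; the paper's contribution, the parallel decomposition and the compensation filter, is not shown, and the parameters are only examples. With a 125 MHz input clock, 8× interpolation yields output samples every 1 ns, which is the delay resolution quoted in the abstract.

```python
import numpy as np

def cic_interpolate(x, R=8, N=3):
    """Plain N-stage CIC interpolation by factor R (differential delay 1):
    comb (differencing) stages at the input rate, zero-stuffing, then N
    integrator stages at the output rate; the DC gain is normalised away."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                       # comb stages (assume x[-1] = 0)
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * R)
    up[::R] = y                              # zero-stuffing upsampler
    for _ in range(N):                       # integrator stages
        up = np.cumsum(up)
    return up / R**(N - 1)                   # DC gain of this structure is R^(N-1)

# Example: a 5 MHz tone sampled at 125 MHz, interpolated to an effective 1 GS/s,
# i.e. output samples (and hence delay steps) spaced 1 ns apart.
x = np.sin(2 * np.pi * 5e6 * np.arange(64) / 125e6)
y = cic_interpolate(x)
print(len(x), len(y))
```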

  13. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available In order to improve the accuracy of ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascaded Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  14. THE EFFECT OF MODERATE AND HIGH-INTENSITY FATIGUE ON GROUNDSTROKE ACCURACY IN EXPERT AND NON-EXPERT TENNIS PLAYERS

    Directory of Open Access Journals (Sweden)

    Mark Lyons

    2013-06-01

    Full Text Available Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players' achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test, with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender, interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on

  15. Numerical Simulations Of Flagellated Micro-Swimmers

    Science.gov (United States)

    Rorai, Cecilia; Markesteijn, Anton; Zaitstev, Mihail; Karabasov, Sergey

    2017-11-01

    We study flagellated microswimmer locomotion by representing the entire swimmer body. We discuss and contrast the accuracy and computational cost of different numerical approaches, including the Resistive Force Theory, the Regularized Stokeslet Method and the Finite Element Method. We focus on how the accuracy of the methods in reproducing the swimming trajectories, velocities and flow field compares to the sensitivity of these quantities to certain physical parameters, such as the body shape and the location of the center of mass. We discuss the opportunity and physical relevance of retaining inertia in our models. Finally, we present some preliminary results toward collective motion simulations. Marie Skłodowska-Curie Individual Fellowship.

  16. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that this enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was rather promoted across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Evaluation of wave runup predictions from numerical and parametric models

    Science.gov (United States)

    Stockdon, Hilary F.; Thompson, David M.; Plant, Nathaniel G.; Long, Joseph W.

    2014-01-01

    Wave runup during storms is a primary driver of coastal evolution, including shoreline and dune erosion and barrier island overwash. Runup and its components, setup and swash, can be predicted from a parameterized model that was developed by comparing runup observations to offshore wave height, wave period, and local beach slope. Because observations during extreme storms are often unavailable, a numerical model is used to simulate the storm-driven runup to compare to the parameterized model and then develop an approach to improve the accuracy of the parameterization. Numerically simulated and parameterized runup were compared to observations to evaluate model accuracies. The analysis demonstrated that setup was accurately predicted by both the parameterized model and numerical simulations. Infragravity swash heights were most accurately predicted by the parameterized model. The numerical model suffered from bias and gain errors that depended on whether a one-dimensional or two-dimensional spatial domain was used. Nonetheless, all of the predictions were significantly correlated to the observations, implying that the systematic errors can be corrected. The numerical simulations did not resolve the incident-band swash motions, as expected, and the parameterized model performed best at predicting incident-band swash heights. An assimilated prediction using a weighted average of the parameterized model and the numerical simulations resulted in a reduction in prediction error variance. Finally, the numerical simulations were extended to include storm conditions that have not been previously observed. These results indicated that the parameterized predictions of setup may need modification for extreme conditions; numerical simulations can be used to extend the validity of the parameterized predictions of infragravity swash; and numerical simulations systematically underpredict incident swash, which is relatively unimportant under extreme conditions.
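
    The "weighted average" assimilation mentioned above can be illustrated with generic inverse-variance weighting; the paper's exact weighting scheme may differ, and the example numbers are invented.

```python
def assimilate(pred_a, var_a, pred_b, var_b):
    """Inverse-variance weighted combination of two runup predictions.
    Assumes both predictions have already been bias-corrected; the combined
    error variance is never larger than either input variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    combined = (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
    return combined, 1.0 / (w_a + w_b)

# Example: parameterized runup of 1.8 m (error variance 0.09 m^2) combined with
# a numerically simulated runup of 2.1 m (error variance 0.16 m^2).
print(assimilate(1.8, 0.09, 2.1, 0.16))
```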

  18. Very high-accuracy calibration of radiation pattern and gain of a near-field probe

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav

    2014-01-01

    In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...

  19. Numerical Simulation on Flow Field of Oilfield Three-Phase Separator

    Directory of Open Access Journals (Sweden)

    Yong-tu Liang

    2013-01-01

    Full Text Available The conventional measurement method can no longer guarantee the required accuracy after oilfield development enters the high water cut stage, due to the water content and gas phase in the flow. In order to overcome the impact of measurement deviation on oilfield production management, the flow field of a three-phase separator is studied numerically in this paper using Fluent 6.3.26. Taking into consideration the production situation of the PetroChina Huabei Oilfield and the characteristics of the three-phase separator, the effect of the internal flow status, as well as of other factors such as varying flow rate, gas fraction, and water content, on the separation efficiency is analyzed. The results show that the separation efficiencies under all operation conditions are larger than 95%, which satisfies the accuracy requirement and also provides the theoretical foundation for the application of three-phase separators at oilfields.

  20. Sensitivity of a numerical wave model on wind re-analysis datasets

    Science.gov (United States)

    Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel

    2017-03-01

    Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcast and forecast, providing reliable characterisations. This study reports the sensitivity of wind inputs on a numerical wave model for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the different available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects the numerical wave modelling performance, and that, depending on location and study needs, different wind inputs should be considered.

  1. Wave fields simulation in difficult terrain using numerical grid method; Hyoko henka no aru chiiki deno suchi koshi wo mochiita hado simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jung, W; Ogawa, T [Yokohama National University, Yokohama (Japan); Tamagawa, T; Matsuoka, T [Japan Petroleum Exploration Corp., Tokyo (Japan)

    1997-10-22

    This paper describes how a high-accuracy simulation of seismic exploration can be performed by using the numerical grid method. When applying a wave field simulation using the difference calculus to an area subjected to seismic exploration, a problem arises as to how a boundary of the velocity structure, including the ground surface, should be dealt with. Simply applying rectangular grids to a continuously changing boundary degrades the accuracy of the simulation. The difference calculus using a numerical grid solves this problem by mapping a given region onto a rectangular region through a change of variables, which allows the boundary condition to be imposed more accurately. The wave field simulation was carried out on a simple two-layer inclined structure and a two-layer waved structure. It was revealed that the amplitudes of direct waves and reflection waves are disturbed when the numerical grid method is not applied, and that the amplitudes of the reflection waves are more scattered than those obtained by using the numerical grid method. 7 refs., 10 figs.
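
    The paper's coordinate transformation is not given in this record; as a minimal illustration of mapping a physical region bounded by an undulating ground surface onto a rectangular computational domain, the sketch below uses a simple algebraic shearing (not the authors' scheme) with a hypothetical topography.

```python
import numpy as np

def boundary_fitted_grid(nx, nz, x_max, surface, z_bottom):
    """Map the physical region between an undulating ground surface z = surface(x)
    and a flat bottom z = z_bottom onto the rectangular computational
    domain (xi, eta) in [0, 1] x [0, 1] by simple algebraic shearing."""
    xi = np.linspace(0.0, 1.0, nx)
    eta = np.linspace(0.0, 1.0, nz)
    x = xi * x_max                                   # horizontal coordinate
    z_top = surface(x)                               # elevation of the ground surface
    X = np.tile(x, (nz, 1))
    # eta = 0 on the flat bottom, eta = 1 on the curved ground surface
    Z = z_bottom + eta[:, None] * (z_top[None, :] - z_bottom)
    return X, Z

# Hypothetical gently undulating topography
X, Z = boundary_fitted_grid(nx=201, nz=51, x_max=2000.0,
                            surface=lambda x: 50.0 * np.sin(2 * np.pi * x / 1000.0),
                            z_bottom=-1000.0)
print(X.shape, Z.shape)
```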

  2. Numerical study of water entry supercavitating flow around a vertical circular cylinder influenced by turbulent drag-reducing additives

    International Nuclear Information System (INIS)

    Jiang, C X; Cheng, J P; Li, F C

    2015-01-01

    This paper attempts to introduce a numerical simulation procedure to simulate water-entry problems influenced by turbulent drag-reducing additives in a viscous incompressible medium. First, we performed a numerical investigation of water-entry supercavities in water and in a turbulent drag-reducing solution at an impact velocity of 28.4 m/s to confirm the accuracy of the numerical method. Based on this verification, a projectile entering water and a turbulent drag-reducing solution at the relatively high velocity of 142.7 m/s (phase transition is considered) is simulated. The Cross viscosity equation was adopted to represent the shear-thinning characteristic of the aqueous solution of drag-reducing additives. The configuration and dynamic characteristics of the water-entry supercavity and the flow resistance are discussed. The numerical simulation results were found to be consistent with experimental data. The numerical results show that the supercavity length in the drag-reducing solution is larger than that in water and that the velocity attenuates faster at high velocity than at low velocity; the influence of the drag-reducing solution is more obvious at high impact velocity. Turbulent drag-reducing additives therefore have great potential for enhancement of the supercavity
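
    The paper's coefficients for the Cross viscosity equation are not reproduced in this record; the sketch below implements the standard Cross shear-thinning model with hypothetical parameter values.

```python
import numpy as np

def cross_viscosity(shear_rate, mu_0, mu_inf, lam, n):
    """Standard Cross model for a shear-thinning fluid:
    mu(gamma_dot) = mu_inf + (mu_0 - mu_inf) / (1 + (lam * gamma_dot)**n)."""
    shear_rate = np.asarray(shear_rate, dtype=float)
    return mu_inf + (mu_0 - mu_inf) / (1.0 + (lam * shear_rate) ** n)

# Hypothetical parameters for a dilute drag-reducing polymer solution
gamma_dot = np.logspace(-1, 5, 7)          # shear rate, 1/s
mu = cross_viscosity(gamma_dot, mu_0=0.05, mu_inf=0.001, lam=0.01, n=0.8)
print(np.round(mu, 5))
```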

  3. Numerical study of criticality of the slab reactors with three regions in one-group transport theory

    International Nuclear Information System (INIS)

    Santos, A. dos.

    1979-01-01

    The criticality of slab reactors consisting of core, blanket, and reflector is studied numerically based on the singular-eigenfunction-expansion method in one-group transport theory. The purpose of this work is three-fold: (1) it is shown that the three-media problem can be converted, using a recently developed method, to a set of regular integral equations for the expansion coefficients, such that numerical solutions can be obtained for the first time based on an exact theory; (2) highly accurate numerical results that can serve as standards of comparison for various approximate methods are reported for representative sets of parameters; and (3) the accuracy of the P_N approximation, one of the more often used methods, is analyzed by comparison with the exact results [pt]

  4. Investigation of accuracy and computation time of a hierarchy of growth rate definitions

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Borg, R.C.; Ott, K.O.

    1977-07-01

    A numerical illustration of the hierarchy of four logically different procedures for the calculation of the asymptotic growth of fast breeder fuel is presented. Each hierarchy level is analyzed in terms of accuracy and computational effort. Using the first procedure as reference, the fourth procedure, which incorporates the isotopic breeding-worth vector w*, requires a minimum amount of effort with a negligible decrease in accuracy

  5. Numerical solution of boundary-integral equations for molecular electrostatics.

    Science.gov (United States)

    Bardhan, Jaydeep P

    2009-03-07

    Numerous molecular processes, such as ion permeation through channel proteins, are governed by relatively small changes in energetics. As a result, theoretical investigations of these processes require accurate numerical methods. In the present paper, we evaluate the accuracy of two approaches to simulating boundary-integral equations for continuum models of the electrostatics of solvation. The analysis emphasizes boundary-element method simulations of the integral-equation formulation known as the apparent-surface-charge (ASC) method or polarizable-continuum model (PCM). In many numerical implementations of the ASC/PCM model, one forces the integral equation to be satisfied exactly at a set of discrete points on the boundary. We demonstrate in this paper that this approach to discretization, known as point collocation, is significantly less accurate than an alternative approach known as qualocation. Furthermore, the qualocation method offers this improvement in accuracy without increasing simulation time. Numerical examples demonstrate that the electrostatic part of the solvation free energy, when calculated using the collocation and qualocation methods, can differ significantly; for a polypeptide, the answers can differ by as much as 10 kcal/mol (approximately 4% of the total electrostatic contribution to solvation). The applicability of the qualocation discretization to other integral-equation formulations is also discussed, and two equivalences between integral-equation methods are derived.

  6. Meditation experience predicts introspective accuracy.

    Directory of Open Access Journals (Sweden)

    Kieran C R Fox

    Full Text Available The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1-15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a 'body-scanning' meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold as reported in prior research). Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices.

  7. High numerical aperture imaging by using multimode fibers with micro-fabricated optics

    KAUST Repository

    Bianchi, Silvio; Rajamanickam, V.; Ferrara, Lorenzo; Di Fabrizio, Enzo M.; Di Leonardo, Roberto; Liberale, Carlo

    2014-01-01

    Controlling light propagation into multimode optical fibers through spatial light modulators provides highly miniaturized endoscopes and optical micromanipulation probes. We increase the numerical aperture up to nearly 1 by micro-optics fabricated on the fiber-end.

  8. Reusable Object-Oriented Solutions for Numerical Simulation of PDEs in a High Performance Environment

    Directory of Open Access Journals (Sweden)

    Andrea Lani

    2006-01-01

    Full Text Available Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.

  9. Aerothermal and aeroelastic response prediction of aerospace structures in high-speed flows using direct numerical simulation

    Science.gov (United States)

    Ostoich, Christopher Mark

    due to a dome-induced horseshoe vortex scouring the panel's surface. Comparisons with reduced-order models of heat transfer indicate that they perform with varying levels of accuracy around some portions of the geometry while completely failing to predict significant heat loads in regions where the dome-influenced flow impacts the ceramic panel. Cumulative effects of flow-thermal coupling at later simulation times on the reduction of panel drag and surface heat transfer are quantified. The second fluid-structure study investigates the interaction between a thin metallic panel and a Mach 2.25 turbulent boundary layer with an initial momentum thickness Reynolds number of 1200. A transient, non-linear, large deformation, 3D finite element solver is developed to compute the dynamic response of the panel. The solver is coupled at the fluid-structure interface with the compressible Navier-Stokes solver, the latter of which is used for a direct numerical simulation of the turbulent boundary layer. In this approach, no simplifying assumptions regarding the structural solution or turbulence modeling are made in order to get detailed solution data. It is found that the thin panel state evolves into a flutter-type response characterized by high-amplitude, high-frequency oscillations into the flow. The oscillating panel disturbs the supersonic flow by introducing compression waves, modifying the turbulence, and generating fluctuations in the power exiting the top of the flow domain. The work in this thesis serves as a step forward in structural response prediction in high-speed flows. The results demonstrate the ability of high-fidelity numerical approaches to serve as a guide for reduced-order model improvement as well as to provide accurate and detailed solution data in scenarios where experimental approaches are difficult or impossible.

  10. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

    Full Text Available Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using the default global 30 s United States Geological Survey terrain height data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM), and the Global Multi-resolution Terrain Elevation Data set (GMTED2010) terrain height data sets. While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  11. Numerical and experimental validation of a particle Galerkin method for metal grinding simulation

    Science.gov (United States)

    Wu, C. T.; Bui, Tinh Quoc; Wu, Youcai; Luo, Tzui-Liang; Wang, Morris; Liao, Chien-Chih; Chen, Pei-Yin; Lai, Yu-Sheng

    2018-03-01

    In this paper, a numerical approach with an experimental validation is introduced for modelling high-speed metal grinding processes in 6061-T6 aluminum alloys. The derivation of the present numerical method starts with an establishment of a stabilized particle Galerkin approximation. A non-residual penalty term from strain smoothing is introduced as a means of stabilizing the particle Galerkin method. Additionally, second-order strain gradients are introduced to the penalized functional for the regularization of damage-induced strain localization problem. To handle the severe deformation in metal grinding simulation, an adaptive anisotropic Lagrangian kernel is employed. Finally, the formulation incorporates a bond-based failure criterion to bypass the prospective spurious damage growth issues in material failure and cutting debris simulation. A three-dimensional metal grinding problem is analyzed and compared with the experimental results to demonstrate the effectiveness and accuracy of the proposed numerical approach.

  12. From journal to headline: the accuracy of climate science news in Danish high quality newspapers

    DEFF Research Database (Denmark)

    Vestergård, Gunver Lystbæk

    2011-01-01

    analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies, though the majority were found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...

  13. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    International Nuclear Information System (INIS)

    Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo

    2014-01-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil, performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. CRM involvement prediction and N staging were also evaluated; the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement
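
    For reference, the accuracy, sensitivity, specificity, PPV and NPV reported above all follow from a 2×2 confusion matrix; the sketch below shows the standard formulas with hypothetical counts (not the study's data).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    ppv         = tp / (tp + fp)       # positive predictive value
    npv         = tn / (tn + fn)       # negative predictive value
    return accuracy, sensitivity, specificity, ppv, npv

# Hypothetical counts for one T stage (not the study's data)
print([round(m, 3) for m in diagnostic_metrics(tp=25, fp=2, fn=4, tn=42)])
```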

  14. Accuracy of High-Resolution MRI with Lumen Distention in Rectal Cancer Staging and Circumferential Margin Involvement Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Iannicelli, Elsa; Di Renzo, Sara [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ferri, Mario [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Pilozzi, Emanuela [Department of Clinical and Molecular Sciences, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Di Girolamo, Marco; Sapori, Alessandra [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Ziparo, Vincenzo [Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); David, Vincenzo [Radiology Institute, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy); Department of Surgical and Medical Sciences and Translational Medicine, Faculty of Medicine and Psychology, University of Rome, Sapienza, Sant' Andrea Hospital, Rome 00189 (Italy)

    2014-07-01

    To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil, performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. CRM involvement prediction and N staging were also evaluated; the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3% respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.

  15. Gene masking - a technique to improve accuracy for cancer classification with high dimensionality in microarray data.

    Science.gov (United States)

    Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok

    2016-12-05

    High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby allowing researchers to isolate features that may have special significance. This technique was applied to publicly available datasets, whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
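
    A minimal sketch of the binary-encoded genetic-algorithm idea behind gene masking is given below; it is not the authors' implementation. The fitness here uses a simple leave-one-out 1-nearest-neighbour classifier on synthetic data as a stand-in for the classifiers integrated during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_loo_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour accuracy (simple stand-in classifier)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.mean(y[d.argmin(axis=1)] == y)

def fitness(mask, X, y):
    return 0.0 if mask.sum() == 0 else knn_loo_accuracy(X[:, mask.astype(bool)], y)

def gene_masking_ga(X, y, pop_size=30, generations=40, p_mut=0.05):
    """Binary-encoded GA: each individual is a 0/1 mask over features."""
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))
    for _ in range(generations):
        fit = np.array([fitness(ind, X, y) for ind in pop])
        # tournament selection of parents
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: fit[i])
                       for _ in range(pop_size)]]
        # one-point crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, n_feat)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        # bit-flip mutation
        flip = rng.random(children.shape) < p_mut
        children[flip] ^= 1
        pop = children
    fit = np.array([fitness(ind, X, y) for ind in pop])
    return pop[fit.argmax()], fit.max()

# Synthetic data: only the first 3 of 20 features carry class information
X = rng.normal(size=(60, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
mask, acc = gene_masking_ga(X, y)
print("selected features:", np.flatnonzero(mask), "LOO accuracy:", round(acc, 3))
```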

  16. Numerical analysis of the slipstream development around a high-speed train in a double-track tunnel.

    Science.gov (United States)

    Fu, Min; Li, Peng; Liang, Xi-Feng

    2017-01-01

    Analysis of the slipstream development around high-speed trains in tunnels provides references for assessing the transient gust loads on trackside workers and trackside furniture in tunnels. This paper focuses on the computational analysis of the slipstream caused by high-speed trains passing through double-track tunnels with a cross-sectional area of 100 m². Three-dimensional unsteady compressible Reynolds-averaged Navier-Stokes equations and a realizable k-ε turbulence model were used to describe the airflow characteristics around a high-speed train in the tunnel. The moving boundary problem was treated using the sliding mesh technology. Three cases were simulated in this paper, including two tunnel lengths and two different configurations of the train. The train speed in these three cases was 250 km/h. The accuracy of the numerical method was validated by experimental data from full-scale tests, and reasonable consistency was obtained. The results show that the flow field around the high-speed train can be divided into three distinct regions: the region in front of the train nose, the annular region and the wake region. The slipstream development along the two sides of the train is unbalanced and is offset toward the narrow side in the double-track tunnels. Due to the piston effect, the slipstream has a larger peak value in the tunnel than in open air. The tunnel length, train length and length ratio affect the slipstream velocities; in particular, the velocities increase with longer trains. Moreover, the propagation of pressure waves also induces slipstream fluctuations: substantial velocity fluctuations mainly occur in front of the train, and weaken with the decrease in amplitude of the pressure wave.

  17. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code and/or as a three-dimensional component that can be coupled with a system code. This report presents an overview of the CUPID code development and its preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. It was shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach to the validation and improvement of the physical models is still needed

  18. Numerical method for the nonlinear Fokker-Planck equation

    International Nuclear Information System (INIS)

    Zhang, D.S.; Wei, G.W.; Kouri, D.J.; Hoffman, D.K.

    1997-01-01

    A practical method based on distributed approximating functionals (DAFs) is proposed for numerically solving a general class of nonlinear time-dependent Fokker-Planck equations. The method relies on a numerical scheme that couples the usual path-integral concept to the DAF idea. The high accuracy and reliability of the method are illustrated by applying it to an exactly solvable nonlinear Fokker-Planck equation, and the method is compared with the accurate K-point Stirling interpolation formula finite-difference method. The approach is also used successfully to solve a nonlinear self-consistent dynamic mean-field problem for which both the cumulant expansion and scaling theory have been found by Drozdov and Morillo [Phys. Rev. E 54, 931 (1996)] to be inadequate to describe the occurrence of a long-lived transient bimodality. The standard interpretation of the transient bimodality in terms of the flat region in the kinetic potential fails for the present case. An alternative analysis based on the effective potential of the Schroedinger-like Fokker-Planck equation is suggested. Our analysis of the transient bimodality is strongly supported by two examples that are numerically much more challenging than other examples that have been previously reported for this problem. copyright 1997 The American Physical Society
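
    The DAF/path-integral scheme itself is not reproduced in this record. As a generic, much simpler point of reference for numerically solving a Fokker-Planck equation, the sketch below integrates the linear (Ornstein-Uhlenbeck) Fokker-Planck equation with an explicit finite-difference scheme and compares the result with the known stationary Gaussian density; all parameters are illustrative.

```python
import numpy as np

def ou_fokker_planck(gamma=1.0, D=0.5, L=6.0, nx=241, t_end=2.0):
    """Explicit finite-difference solution of the 1-D Fokker-Planck equation
    dP/dt = d/dx(gamma*x*P) + D*d2P/dx2 (Ornstein-Uhlenbeck process)."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = 0.2 * dx**2 / D                      # respects the explicit stability limit
    P = np.exp(-(x - 2.0)**2 / 0.5)           # initial condition: off-centre Gaussian
    P /= P.sum() * dx                         # normalise to unit probability
    for _ in range(int(t_end / dt)):
        dflux = np.gradient(gamma * x * P, dx)          # d/dx (gamma * x * P)
        d2P = np.zeros_like(P)
        d2P[1:-1] = (P[2:] - 2 * P[1:-1] + P[:-2]) / dx**2
        P = P + dt * (dflux + D * d2P)
        P[0] = P[-1] = 0.0                    # absorbing ends of a wide domain
    # Analytical stationary density for comparison: Gaussian with variance D/gamma
    P_exact = np.exp(-gamma * x**2 / (2 * D))
    P_exact /= P_exact.sum() * dx
    return x, P, P_exact

x, P, P_exact = ou_fokker_planck()
print("max deviation from stationary solution:", float(np.abs(P - P_exact).max()))
```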

  19. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    International Nuclear Information System (INIS)

    Wang, Yongbo; Wu, Huapeng; Handroos, Heikki

    2013-01-01

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device

  20. Accuracy improvement of a hybrid robot for ITER application using POE modeling method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu, Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2013-10-15

    Highlights: ► The product of exponential (POE) formula for error modeling of hybrid robot. ► Differential Evolution (DE) algorithm for parameter identification. ► Simulation results are given to verify the effectiveness of the method. -- Abstract: This paper focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial–parallel hybrid robot to improve its accuracy. The robot was designed to perform the assembling and repairing tasks of the vacuum vessel (VV) of the international thermonuclear experimental reactor (ITER). By employing the product of exponentials (POE) formula, we extended the POE-based calibration method from serial robots to redundant serial–parallel hybrid robots. The proposed method combines the forward and inverse kinematics to formulate a hybrid calibration method for the serial–parallel hybrid robot. Because of the highly nonlinear characteristics of the error model and the large number of error parameters to be identified, traditional iterative linear least-squares algorithms cannot be used to identify the parameter errors. This paper employs a global optimization algorithm, Differential Evolution (DE), to identify the parameter errors by solving the inverse kinematics of the hybrid robot. Furthermore, after the parameter errors were identified, the DE algorithm was adopted to numerically solve the forward kinematics of the hybrid robot to demonstrate the accuracy improvement of the end-effector. Numerical simulations were carried out by generating random parameter errors at the allowed tolerance limit and generating a number of configuration poses in the robot workspace. Simulation of the real experimental conditions shows that the accuracy of the end-effector can be improved to the same precision level as that of the given external measurement device.
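
    The paper's kinematic error model is not reproduced here; the sketch below only illustrates the Differential Evolution identification step on a hypothetical two-link planar arm, recovering assumed link-length and joint-offset errors from simulated end-effector measurements via scipy.optimize.differential_evolution.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical 2-link planar arm: end-effector position from joint angles,
# with unknown link-length errors (dl1, dl2) and a joint-offset error (dq).
L1, L2 = 1.0, 0.8                     # nominal link lengths (m)

def forward_kinematics(q, params):
    dl1, dl2, dq = params
    q1, q2 = q[:, 0] + dq, q[:, 1]
    x = (L1 + dl1) * np.cos(q1) + (L2 + dl2) * np.cos(q1 + q2)
    y = (L1 + dl1) * np.sin(q1) + (L2 + dl2) * np.sin(q1 + q2)
    return np.column_stack([x, y])

rng = np.random.default_rng(1)
q_meas = rng.uniform(-np.pi / 2, np.pi / 2, size=(40, 2))       # joint configurations
true_params = np.array([0.004, -0.002, 0.001])                  # "real" errors to recover
p_meas = forward_kinematics(q_meas, true_params)                # external measurements
p_meas += rng.normal(scale=2e-5, size=p_meas.shape)             # measurement noise

def cost(params):
    residual = forward_kinematics(q_meas, params) - p_meas
    return float(np.sum(residual ** 2))

result = differential_evolution(cost, bounds=[(-0.01, 0.01)] * 3, seed=1, tol=1e-12)
print("identified errors:", np.round(result.x, 5))
```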

  1. Circular orbits of corotating binary black holes: Comparison between analytical and numerical results

    International Nuclear Information System (INIS)

    Damour, Thibault; Gourgoulhon, Eric; Grandclement, Philippe

    2002-01-01

    We compare recent numerical results, obtained within a 'helical Killing vector' approach, on circular orbits of corotating binary black holes to the analytical predictions made by the effective one-body (EOB) method (which has been recently extended to the case of spinning bodies). On the scale of the differences between the results obtained by different numerical methods, we find good agreement between numerical data and analytical predictions for several invariant functions describing the dynamical properties of circular orbits. This agreement is robust against the post-Newtonian accuracy used for the analytical estimates, as well as under choices of the resummation method for the EOB 'effective potential', and gets better as one uses a higher post-Newtonian accuracy. These findings open the way to a significant 'merging' of analytical and numerical methods, i.e. to matching an EOB-based analytical description of the (early and late) inspiral, up to the beginning of the plunge, to a numerical description of the plunge and merger. We illustrate also the 'flexibility' of the EOB approach, i.e. the possibility of determining some 'best fit' values for the analytical parameters by comparison with numerical data

  2. Elements of calculation of reactivity by numerical processing

    International Nuclear Information System (INIS)

    Hedde, J.

    1968-01-01

    In order to explore the new opportunities provided by numerical techniques, the author describes the theoretically optimal conditions for a real-time calculation of reactivity from counting samples produced by a nuclear reactor. These optimal conditions can be approached more closely if more complex processing is adopted. A compromise must be sought between the desired precision and the simplicity of the numerical processing hardware. An example is reported to assess the result accuracy over a wide range of power evolution with a structure of reduced complexity [fr]
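
    The note itself does not specify the processing structure; as one common way to compute reactivity in real time from a sampled count-rate or power signal, the sketch below implements inverse point kinetics with a single effective delayed-neutron group (extension to six groups is straightforward). All parameter values are illustrative.

```python
import numpy as np

def inverse_point_kinetics(t, n, beta=0.0065, lam=0.08, Lambda=2e-5):
    """Reactivity (in dollars) from a sampled power/count-rate signal n(t),
    using one effective delayed-neutron group.  beta: delayed fraction,
    lam: precursor decay constant (1/s), Lambda: prompt generation time (s)."""
    t, n = np.asarray(t, float), np.asarray(n, float)
    dt = np.diff(t)
    dndt = np.gradient(n, t)
    C = np.empty_like(n)
    C[0] = beta * n[0] / (Lambda * lam)            # steady-state precursors at t = 0
    for k in range(len(t) - 1):
        # precursor balance integrated over one sample, n held constant on the step
        decay = np.exp(-lam * dt[k])
        C[k + 1] = C[k] * decay + (beta / Lambda) * n[k + 1] * (1.0 - decay) / lam
    rho = beta + Lambda * dndt / n - Lambda * lam * C / n
    return rho / beta                              # reactivity in dollars

# Illustrative signal: steady power followed by a slow exponential rise
t = np.linspace(0.0, 60.0, 601)
n = np.where(t < 10.0, 1.0, np.exp((t - 10.0) / 80.0))
print("final reactivity (dollars):", round(float(inverse_point_kinetics(t, n)[-1]), 4))
```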

  3. High speed numerical integration algorithm using FPGA | Razak ...

    African Journals Online (AJOL)

    Conventionally, numerical integration algorithms are executed in software and are time-consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient and reliable alternative for implementing numerical integration algorithms. This paper proposed a hardware implementation of four ...
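
    The four algorithms implemented in hardware are not listed in this truncated record; as a software reference point for one classic numerical integration algorithm that such a design might target, a composite Simpson's rule is sketched below.

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    total += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return total * h / 3.0

# Example: the integral of sin(x) over [0, pi] is exactly 2
print(composite_simpson(math.sin, 0.0, math.pi, 64))
```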

  4. Geoid undulation accuracy

    Science.gov (United States)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and to a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  5. Weight Multispectral Reconstruction Strategy for Enhanced Reconstruction Accuracy and Stability With Cerenkov Luminescence Tomography.

    Science.gov (United States)

    Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian

    2017-06-01

    Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT are still unsatisfactory. In this paper, a modified weight multispectral CLT (wmCLT) reconstruction strategy was developed which splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To better evaluate the properties of the wmCLT reconstruction strategy in terms of accuracy, stability and practicability, several numerical simulation experiments and in vivo experiments were conducted, and the results obtained were compared with those of the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise in the numerical simulation experiments. The comparison of the results achieved from different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of the shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical studies.

  6. Evidence for two numerical systems that are similar in humans and guppies.

    Directory of Open Access Journals (Sweden)

    Christian Agrillo

    Full Text Available BACKGROUND: Humans and non-human animals share an approximate non-verbal system for representing and comparing numerosities that has no upper limit and for which accuracy is dependent on the numerical ratio. Current evidence indicates that the mechanism for keeping track of individual objects can also be used for numerical purposes; if so, its accuracy will be independent of numerical ratio, but its capacity is limited to the number of items that can be tracked, about four. There is, however, growing controversy as to whether two separate number systems are present in other vertebrate species. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we compared the ability of undergraduate students and guppies to discriminate the same numerical ratios, both within and beyond the small number range. In both students and fish the performance was ratio-independent for the numbers 1-4, while it steadily increased with numerical distance when larger numbers were presented. CONCLUSIONS/SIGNIFICANCE: Our results suggest that two distinct systems underlie quantity discrimination in both humans and fish, implying that the building blocks of uniquely human mathematical abilities may be evolutionarily ancient, dating back to before the divergence of bony fish and tetrapod lineages.

  7. An output amplitude configurable wideband automatic gain control with high gain step accuracy

    International Nuclear Information System (INIS)

    He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan

    2012-01-01

    An output amplitude configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, formed by the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. The AGC shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB. The AGC provides a 3 dB bandwidth larger than 80 MHz, the overall power consumption is less than 1.8 mA, and the die area is 800 × 300 μm². (semiconductor integrated circuits)

  8. Numerical investigation of a heat transfer within the prismatic fuel assembly of a very high temperature reactor

    International Nuclear Information System (INIS)

    Tak, Nam-il; Kim, Min-Hwan; Lee, Won Jae

    2008-01-01

    The complex geometry of the hexagonal fuel blocks of the prismatic fuel assembly in a very high temperature reactor (VHTR) hinders accurate evaluation of the temperature profile within the fuel assembly without elaborate numerical calculations. Therefore, simplified models such as a unit cell model have been widely applied for the analysis and design of prismatic VHTRs, since they have been considered effective approaches for reducing the computational effort. In a prismatic VHTR, however, the simplified models cannot account for heat transfer within a fuel assembly or for the coolant flow through a bypass gap between the fuel assemblies, which may significantly affect the maximum fuel temperature. In this paper, a three-dimensional computational fluid dynamics (CFD) analysis has been carried out on a typical fuel assembly of a prismatic VHTR. Thermal behaviour and heat transfer within the fuel assembly are intensively investigated using the CFD solutions. In addition, the accuracy of the unit cell approach is assessed against the CFD solutions. Two example situations are illustrated to demonstrate the deficiency of the unit cell model caused by neglecting the effects of the bypass gap flow and the radial power distribution within the fuel assembly

  9. High accuracy magnetic field mapping of the LEP spectrometer magnet

    CERN Document Server

    Roncarolo, F

    2000-01-01

    The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to measure with high accuracy the masses of the electroweak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations, once they reach an energy high enough for the experimental purposes. During head-to-head collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. Under this condition the energy is quickly converted into other particles which tend to move away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi-purpose detectors covering a solid angle of alm...

  10. Determination of UAV position using high accuracy navigation platform

    Directory of Open Access Journals (Sweden)

    Ireneusz Kubicki

    2016-07-01

    Full Text Available The choice of navigation system for a mini UAV is very important because of its application and exploitation, particularly when a synthetic aperture radar installed on it requires highly precise information about an object's position. The presented exemplary solution of such a system draws attention to the possible problems associated with the use of appropriate technology, sensors, and devices or with a complete navigation system. The position and spatial orientation errors of the measurement platform influence the obtained SAR imaging. Both turbulence and maneuvers performed during flight cause changes in the position of the airborne object, resulting in deterioration or loss of the SAR images. Consequently, it is necessary to perform operations to reduce or eliminate the impact of sensor errors on the UAV position accuracy. Compromise solutions must be sought between newer, better technologies and improvements in the software. Keywords: navigation systems, unmanned aerial vehicles, sensors integration

  11. A high accuracy algorithm of displacement measurement for a micro-positioning stage

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    2017-05-01

    Full Text Available A high accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations was conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability; the resolution can theoretically attain 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross-correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) was built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
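
    The paper's own integer-pixel and subpixel procedures are not reproduced in this record. A common baseline with the same two-stage structure is sketched below: an integer-pixel search by normalized cross-correlation followed by a one-dimensional parabolic fit of the correlation peak for subpixel refinement; the test image and template are synthetic.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template over an image (valid region)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            out[r, c] = (w * t).sum() / denom if denom > 0 else 0.0
    return out

def subpixel_peak(corr):
    """Integer peak location plus 1-D parabolic refinement along each axis."""
    r, c = np.unravel_index(corr.argmax(), corr.shape)
    def refine(m1, m0, p1):                       # values at peak-1, peak, peak+1
        denom = m1 - 2.0 * m0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom
    dr = refine(corr[r - 1, c], corr[r, c], corr[r + 1, c]) if 0 < r < corr.shape[0] - 1 else 0.0
    dc = refine(corr[r, c - 1], corr[r, c], corr[r, c + 1]) if 0 < c < corr.shape[1] - 1 else 0.0
    return r + dr, c + dc

# Synthetic test: a bright blob and a template cut from around it
rng = np.random.default_rng(2)
img = rng.normal(0, 0.05, (60, 60))
yy, xx = np.mgrid[0:60, 0:60]
img += np.exp(-((yy - 30.4) ** 2 + (xx - 25.7) ** 2) / 8.0)
template = img[24:37, 19:32]                      # 13x13 patch around the blob
print(subpixel_peak(ncc_map(img, template)))      # approx (24.x, 19.x) top-left offset
```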

  12. New Numerical Treatment for Solving the KDV Equation

    Directory of Open Access Journals (Sweden)

    khalid ali

    2017-01-01

    Full Text Available In the present article, a numerical method is proposed for the numerical solution of the KdV equation by using the collocation method with the modified exponential cubic B-spline. In this paper we convert the KdV equation into a system of two equations. The method is shown to be unconditionally stable using the von Neumann technique. To test the accuracy, the error norms L2 and L∞ are computed. Three invariants of motion are evaluated to determine the conservation properties of the problem, and the numerical scheme leads to accurate and effective results. Furthermore, the interaction of two and three solitary waves is shown. These results show that the technique introduced here is easy to apply.
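
    The modified exponential cubic B-spline collocation scheme itself is not reproduced here. As a compact alternative illustration of a numerical KdV solution, the sketch below uses a Fourier pseudo-spectral discretization in space with classical RK4 time stepping and checks the result against the exact single-soliton solution.

```python
import numpy as np

def kdv_soliton(x, t, c, x0):
    """Exact single-soliton solution of u_t + 6*u*u_x + u_xxx = 0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t - x0)) ** 2

def solve_kdv(u0, L, t_end, dt=1e-4):
    """Fourier pseudo-spectral KdV solver with classical RK4 time stepping."""
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    def rhs(u):
        u_hat = np.fft.fft(u)
        u_x   = np.real(np.fft.ifft(1j * k * u_hat))
        u_xxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))
        return -6.0 * u * u_x - u_xxx
    u = u0.copy()
    for _ in range(int(round(t_end / dt))):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

L, n = 40.0, 256
x = np.linspace(0.0, L, n, endpoint=False)
u0 = kdv_soliton(x, 0.0, c=1.0, x0=10.0)
u1 = solve_kdv(u0, L, t_end=1.0)
print("max error vs exact soliton:", float(np.abs(u1 - kdv_soliton(x, 1.0, 1.0, 10.0)).max()))
```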

  13. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

    Highlights: • Using a high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integration scheme. • Jacobian-free Newton–Krylov method. • Analytical solution for the two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce numerical errors from operator-splitting types of time integration schemes. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration numerical schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection with phase appearance/disappearance problem, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and therefore improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists
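
    The report's two-phase discretization is not reproduced in this record; the sketch below only illustrates the Jacobian-free Newton-Krylov idea on a small model problem, a steady one-dimensional nonlinear diffusion equation, using scipy.optimize.newton_krylov, which approximates Jacobian-vector products by finite differences instead of forming the Jacobian.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady nonlinear diffusion: d/dx[(1 + u**2) du/dx] + 1 = 0 on (0, 1), u(0) = u(1) = 0.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

def residual(u):
    """Discrete residual; JFNK never forms the Jacobian of this function."""
    r = np.zeros_like(u)
    k = 1.0 + u ** 2                                   # nonlinear conductivity
    k_half = 0.5 * (k[1:] + k[:-1])                    # face values
    flux = k_half * (u[1:] - u[:-1]) / h               # flux at cell faces
    r[1:-1] = (flux[1:] - flux[:-1]) / h + 1.0         # interior equations
    r[0], r[-1] = u[0], u[-1]                          # Dirichlet boundaries
    return r

u = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-10)
print("max u:", round(float(u.max()), 6),
      "residual max:", float(np.abs(residual(u)).max()))
```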

  14. Numerical solution of the polymer system

    Energy Technology Data Exchange (ETDEWEB)

    Haugse, V.; Karlsen, K.H.; Lie, K.-A.; Natvig, J.R.

    1999-05-01

    The paper describes the application of front tracking to the polymer system, an example of a nonstrictly hyperbolic system. Front tracking computes piecewise constant approximations based on approximate Riemann solutions and exact tracking of waves. It is well known that the front tracking method may introduce a blow-up of the initial total variation for initial data along the curve where the two eigenvalues of the hyperbolic system are identical. It is demonstrated by numerical examples that the method converges to the correct solution after a finite time that decreases with the discretization parameter. For multidimensional problems, front tracking is combined with dimensional splitting, and numerical experiments indicate that large splitting steps can be used without loss of accuracy. Typical CFL numbers are in the range of 10 to 20, and comparisons with the Riemann-free, high-resolution method confirm the high efficiency of front tracking. The polymer system, coupled with an elliptic pressure equation, models two-phase, three-component polymer flooding in an oil reservoir. Two examples are presented where this model is solved by a sequential time stepping procedure. Because of the approximate Riemann solver, the method is non-conservative and CFL numbers must be chosen only moderately larger than unity to avoid substantial material balance errors generated in near-well regions after water breakthrough. Moreover, it is demonstrated that dimensional splitting may introduce severe grid orientation effects for unstable displacements that are accentuated for decreasing discretization parameters. 9 figs., 2 tabs., 26 refs.

  15. Numerical simulations on a high-temperature particle moving in coolant

    International Nuclear Information System (INIS)

    Li Xiaoyan; Shang Zhi; Xu Jijun

    2006-01-01

    This study considers the coupling effect between film boiling heat transfer and evaporation drag around a hot particle in cold liquid. Taking the momentum and energy equations of the vapor film into account, a transient single-particle model under FCI conditions has been established. Numerical simulations of a high-temperature particle moving in coolant have been performed using the Gear algorithm. An adaptive dynamic boundary method is adopted during the simulation to match the moving boundary caused by changes in the vapor film. Based on the method presented above, the transient process of high-temperature particles moving in coolant can be simulated. The experimental results prove the validity of the HPMC model. (authors)

  16. Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.

    Directory of Open Access Journals (Sweden)

    Christopher Y Park

    Full Text Available A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate as determined by genomic data and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics

  17. Exact solutions, numerical relativity and gravitational radiation

    International Nuclear Information System (INIS)

    Winicour, J.

    1986-01-01

    In recent years, there has emerged a new use for exact solutions to Einstein's equation as checks on the accuracy of numerical relativity codes. Much has already been written about codes based upon the space-like Cauchy problem. In the case of two Killing vectors, a numerical characteristic initial value formulation based upon two intersecting families of null hypersurfaces has successfully evolved the Schwarzschild and the colliding plane wave vacuum solutions. Here the author discusses, in the context of exact solutions, numerical studies of gravitational radiation based upon the null cone initial value problem. Every stage of progress in the null cone approach has been associated with exact solutions in some sense. He begins by briefly recapping this history. Then he presents two new examples illustrating how exact solutions can be useful

  18. High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS

    Science.gov (United States)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu

    2017-05-01

    The inertial navigation system has been the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements along the three axes to improve the system accuracy. However, the errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation, and the discrete calibration method cannot fulfill the requirements of highly accurate calibration of the mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of the calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of the dual-axis rotation-modulating RLG-INS has been designed. The results of the self-calibration simulation experiment proved that this scheme can estimate all the errors in the calibration error model; the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and that of the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for the accuracy improvement of a dual-axis rotation inertial navigation system with a mechanically dithered ring laser gyroscope.

  19. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    Science.gov (United States)

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

    In this study, we proposed and evaluated a positional accuracy assessment method with two high-resolution digital cameras for add-on six-degrees-of-freedom radiotherapy (6D) couches. Two high resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle that was fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation of each axis. The coordinates of the needle in the pictures were obtained using manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.

  20. Numerical modeling of the radiative transfer in a turbid medium using the synthetic iteration.

    Science.gov (United States)

    Budak, Vladimir P; Kaloshin, Gennady A; Shagalov, Oleg V; Zheltov, Victor S

    2015-07-27

    In this paper we propose a fast but accurate algorithm for the numerical modeling of light fields in a turbid medium slab. The numerical solution of the radiative transfer equation (RTE) requires its discretization, based on the elimination of the anisotropic part of the solution and the replacement of the scattering integral by a finite sum. The regular part of the solution is determined numerically. A good choice of the method for eliminating the anisotropic part of the solution determines the high convergence of the algorithm in the mean-square metric. The method of synthetic iterations can be used to improve the convergence in the uniform metric. The significant increase in the solution accuracy obtained with synthetic iterations allows the two-stream approximation to be applied for determining the regular part. This approach permits the proposed method to be generalized to the case of an arbitrary 3D geometry of the medium.

  1. Numerical Modeling of Ablation Heat Transfer

    Science.gov (United States)

    Ewing, Mark E.; Laker, Travis S.; Walker, David T.

    2013-01-01

    A unique numerical method has been developed for solving one-dimensional ablation heat transfer problems. This paper provides a comprehensive description of the method, along with detailed derivations of the governing equations. This methodology supports solutions for traditional ablation modeling including such effects as heat transfer, material decomposition, pyrolysis gas permeation and heat exchange, and thermochemical surface erosion. The numerical scheme utilizes a control-volume approach with a variable grid to account for surface movement. This method directly supports implementation of nontraditional models such as material swelling and mechanical erosion, extending capabilities for modeling complex ablation phenomena. Verifications of the numerical implementation are provided using analytical solutions, code comparisons, and the method of manufactured solutions. These verifications are used to demonstrate solution accuracy and proper error convergence rates. A simple demonstration of a mechanical erosion (spallation) model is also provided to illustrate the unique capabilities of the method.

  2. Improvements in numerical modelling of highly injected crystalline silicon solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Altermatt, P.P. [University of New South Wales, Centre for Photovoltaic Engineering, 2052 Sydney (Australia); Sinton, R.A. [Sinton Consulting, 1132 Green Circle, 80303 Boulder, CO (United States); Heiser, G. [University of NSW, School of Computer Science and Engineering, 2052 Sydney (Australia)

    2001-01-01

    We numerically model crystalline silicon concentrator cells with the inclusion of band gap narrowing (BGN) caused by injected free carriers. In previous studies, the revised room-temperature value of the intrinsic carrier density, n_i = 1.00×10^10 cm^-3, was inconsistent with the other material parameters of highly injected silicon. In this paper, we show that high-injection experiments can be described consistently with the revised value of n_i if free-carrier induced BGN is included, and that such BGN is an important effect in silicon concentrator cells. The new model presented here significantly improves the ability to model highly injected silicon cells with a high level of precision.

  3. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and provide a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and with spatial reconstructions of the angular fluxes that are more accurate than those used to date. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows the accuracy of the solutions to be quantified. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges in which the proposed approximations are valid.

  4. Numerical simulation of GEW equation using RBF collocation method

    Directory of Open Access Journals (Sweden)

    Hamid Panahipour

    2012-08-01

    The generalized equal width (GEW) equation is solved numerically by a meshless method based on global collocation with standard types of radial basis functions (RBFs). Test problems including the propagation of single solitons, the interaction of two and three solitons, the development of the Maxwellian initial condition into pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
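
    As a minimal illustration of the collocation idea (the GEW time stepping itself is omitted, and the node set, multiquadric basis and shape parameter below are assumed choices rather than the paper's), the following sketch approximates a spatial derivative by global RBF collocation:

        # Minimal sketch: global multiquadric RBF collocation for a derivative.
        import numpy as np

        nodes = np.linspace(-1.0, 1.0, 25)
        c = 0.3                                        # shape parameter (assumed)

        def mq(r):                                     # multiquadric basis
            return np.sqrt(r**2 + c**2)

        def mq_dx(dx):                                 # its derivative w.r.t. x
            return dx / np.sqrt(dx**2 + c**2)

        diff = nodes[:, None] - nodes[None, :]
        A = mq(diff)                                   # interpolation matrix
        Dx = mq_dx(diff)                               # basis derivatives at the nodes

        f = np.exp(np.sin(np.pi * nodes))
        lam = np.linalg.solve(A, f)                    # expansion coefficients
        df = Dx @ lam                                  # collocation derivative
        exact = np.pi * np.cos(np.pi * nodes) * f
        print("max derivative error:", np.abs(df - exact).max())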

  5. Design and analysis for thematic map accuracy assessment: Fundamental principles

    Science.gov (United States)

    Stephen V. Stehman; Raymond L. Czaplewski

    1998-01-01

    Land-cover maps are used in numerous natural resource applications to describe the spatial distribution and pattern of land-cover, to estimate areal extent of various cover classes, or as input into habitat suitability models, land-cover change analyses, hydrological models, and risk analyses. Accuracy assessment quantifies data quality so that map users may evaluate...

  6. Numerical investigation of a high head Francis turbine under steady operating conditions using foam-extend

    International Nuclear Information System (INIS)

    Lenarcic, M; Eichhorn, M; Schoder, S J; Bauer, Ch

    2015-01-01

    In this work the incompressible turbulent flow in a high head Francis turbine under steady operating conditions is investigated using the open source CFD software package FOAM-extend-3.1. By varying computational domains (cyclic model, full model), coupling methods between stationary and rotating frames (mixing-plane, frozen-rotor) and turbulence models (k-ω SST, k-ε), numerical flow simulations are performed at the best efficiency point as well as at operating points in part load and high load. The discretization is adjusted according to the y+ criterion with a mean y+ > 30. A grid independence study quantifies the discretization error and the corresponding computational costs for the appropriate simulations, reaching a GCI < 1% for the chosen grid. Specific quantities such as efficiency, head and runner shaft torque, as well as static pressure and velocity components, are computed and compared with experimental data and a commercial code. Focusing on the computed integral quantities and static pressures, the highest level of accuracy is obtained using FOAM in combination with the full model discretization, the mixing-plane coupling method and the k-ω SST turbulence model. The corresponding relative deviations in efficiency reach Δη_rel ∼ 7% at part load, Δη_rel ∼ 0.5% at the best efficiency point and Δη_rel ∼ 5.6% at high load. The computed static pressures deviate from the measurements by a maximum of Δp_rel = 9.3% at part load, Δp_rel = 4.3% at the best efficiency point and Δp_rel = 6.7% at high load. The commercial code in turn yields slightly better predictions for the velocity components in the draft tube cone, reaching good agreement with the measurements at part load. Although FOAM also shows an adequate correspondence to the experimental data at part load, local effects near the runner hub are captured less accurately at the best efficiency point and at high load. Nevertheless, FOAM is a reasonable alternative to commercial code.

  7. Study on the Accuracy Improvement of the Second-Kind Fredholm Integral Equations by Using the Buffa-Christiansen Functions with MLFMA

    Directory of Open Access Journals (Sweden)

    Yue-Qian Wu

    2016-01-01

    Previous works show that the accuracy of second-kind integral equations can be improved dramatically by using rotated Buffa-Christiansen (BC) functions as the testing functions, and that their accuracy can sometimes even be better than that of first-kind integral equations. When the rotated BC functions are used as the testing functions, the discretization error of the identity operators involved in the second-kind integral equations can be suppressed significantly. However, the spherical objects analyzed so far have been relatively small, because the numerical capability of the method of moments (MoM) for solving integral equations with the rotated BC functions is severely limited; hence, the performance of BC functions for accuracy improvement on electrically large objects has not been studied. In this paper, the multilevel fast multipole algorithm (MLFMA) is employed to accelerate the iterative solution of the magnetic-field integral equation (MFIE). A series of numerical experiments is then performed to study the accuracy improvement of the MFIE for perfect electric conductor (PEC) cases with the rotated BC functions as testing functions. Numerical results show that the accuracy improvement obtained by using the rotated BC testing functions differs greatly between curvilinear and planar triangular elements, and falls off when the size of the object is large.

  8. ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow

    Science.gov (United States)

    Leonard, B. P.; Mokhtari, Simin

    1990-01-01

    For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
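
    The flux-limiting idea can be illustrated with a one-dimensional sketch. The code below uses a generic minmod limiter on linear advection, not the ULTRA-SHARP universal limiter itself, and the grid size, CFL number and initial profile are assumed values:

        # Minimal sketch: flux-limited advection of a step profile, showing how a
        # nonoscillatory limiter suppresses the wiggles of a second-order
        # (Lax-Wendroff-type) scheme.
        import numpy as np

        nx, a, cfl = 200, 1.0, 0.5
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        dx = x[1] - x[0]
        dt = cfl * dx / a
        u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square wave

        def minmod(r):
            return np.maximum(0.0, np.minimum(1.0, r))

        for _ in range(200):
            du = np.roll(u, -1) - u                      # u[i+1] - u[i]
            r = np.where(np.abs(du) > 1e-12, (u - np.roll(u, 1)) / du, 0.0)
            phi = minmod(r)
            # flux at i+1/2: upwind flux plus limited anti-diffusive correction
            flux = a * u + 0.5 * a * (1.0 - cfl) * phi * du
            u = u - dt / dx * (flux - np.roll(flux, 1))

        # remains essentially within [0, 1]: no new over- or undershoots
        print("min/max after 200 steps:", u.min(), u.max())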

  9. Structural Health Monitoring of Tall Buildings with Numerical Integrator and Convex-Concave Hull Classification

    Directory of Open Access Journals (Sweden)

    Suresh Thenozhi

    2012-01-01

    An important objective of health monitoring systems for tall buildings is to diagnose the state of the building and to evaluate its possible damage. In this paper, we use our prototype to evaluate our data-mining approach to fault monitoring. Offset cancellation and high-pass filtering techniques are combined effectively to solve common problems in the numerical integration of acceleration signals in real-time applications, and the integration accuracy is improved compared with other numerical integrators. We then introduce a novel method for support vector machine (SVM) classification, called the convex-concave hull: the Jarvis march method is used to determine the concave (non-convex) hull of the inseparable points, and the vertices of the convex-concave hull are then used for SVM training.
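
    A minimal sketch of the integration step, assuming an illustrative test signal, filter order and cut-off frequency rather than the paper's actual choices, combines a zero-phase high-pass filter with trapezoidal integration so that offset-induced drift is kept out of the velocity and displacement estimates:

        # Minimal sketch: offset removal by high-pass filtering before integration.
        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 200.0                                        # sampling rate, Hz (assumed)
        t = np.arange(0.0, 20.0, 1.0 / fs)
        acc_true = 0.5 * np.sin(2.0 * np.pi * 1.0 * t)    # 1 Hz structural vibration
        acc_meas = acc_true + 0.05                        # constant sensor offset

        b, a = butter(4, 0.2 / (fs / 2.0), btype="highpass")
        acc_filt = filtfilt(b, a, acc_meas)               # zero-phase high-pass

        def cumtrapz(y, dt):
            out = np.zeros_like(y)
            out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
            return out

        vel = filtfilt(b, a, cumtrapz(acc_filt, 1.0 / fs))   # filter again to curb drift
        disp = filtfilt(b, a, cumtrapz(vel, 1.0 / fs))
        print("peak displacement estimate:", np.abs(disp).max())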

  10. Impacts of land use/cover classification accuracy on regional climate simulations

    Science.gov (United States)

    Ge, Jianjun; Qi, Jiaguo; Lofgren, Brent M.; Moore, Nathan; Torbick, Nathan; Olson, Jennifer M.

    2007-03-01

    Land use/cover change has been recognized as a key component in global change. Various land cover data sets, including historically reconstructed, recently observed, and future projected, have been used in numerous climate modeling studies at regional to global scales. However, little attention has been paid to the effect of land cover classification accuracy on climate simulations, even though accuracy assessment has become a routine procedure in the land cover production community. In this study, we analyzed the behavior of simulated precipitation in the Regional Atmospheric Modeling System (RAMS) over a range of simulated classification accuracies over a 3-month period. This study found that a land cover accuracy under 80% had a strong effect on precipitation, especially when the land surface had greater control of the atmosphere, and this effect became stronger as the accuracy decreased. As shown in three follow-on experiments, the effect was further influenced by model parameterizations such as convection schemes and interior nudging, which can mitigate the strength of surface boundary forcings. In reality, land cover accuracy rarely attains the commonly recommended 85% target. Its effect on climate simulations should therefore be considered, especially when historically reconstructed and future projected land covers are employed.

  11. Study on applicability of numerical simulation to evaluation of gas entrainment due to free surface vortex

    International Nuclear Information System (INIS)

    Ito, Kei; Kunugi, Tomoaki; Ohshima, Hiroyuki

    2008-01-01

    An onset condition of gas entrainment (GE) due to free surface vortex has been studied to establish a design of sodium-cooled fast reactor with a higher coolant velocity than conventional designs. Numerous investigations have been conducted experimentally and theoretically; however, the universal onset condition of the GE has not been determined yet due to the nonlinear characteristics of the GE. Recently, we have been studying numerical simulation methods as a promising method to evaluate GE, instead of the reliable but costly real-scale tests. In this paper, the applicability of the numerical simulation methods to the evaluation of the GE is discussed. For the purpose, a quasi-steady vortex in a cylindrical tank and a wake vortex (unsteady vortex) in a rectangular channel were numerically simulated using the volume-of-fluid type two-phase flow calculation method. The simulated velocity distributions and free surface shapes of the quasi-steady vortex showed good (not perfect, however) agreements with experimental results when a fine mesh subdivision and a high-order discretization scheme were employed. The unsteady behavior of the wake vortex was also simulated with high accuracy. Although the onset condition of the GE was slightly underestimated in the simulation results, the applicability of the numerical simulation methods to the GE evaluation was confirmed. (author)

  12. Numerical simulation of four-field extended magnetohydrodynamics in dynamically adaptive curvilinear coordinates via Newton-Krylov-Schwarz

    KAUST Repository

    Yuan, Xuefei

    2012-07-01

    Numerical simulations of the four-field extended magnetohydrodynamics (MHD) equations with hyper-resistivity terms present a difficult challenge because of demanding spatial resolution requirements. A time-dependent sequence of . r-refinement adaptive grids obtained from solving a single Monge-Ampère (MA) equation addresses the high-resolution requirements near the . x-point for numerical simulation of the magnetic reconnection problem. The MHD equations are transformed from Cartesian coordinates to solution-defined curvilinear coordinates. After the application of an implicit scheme to the time-dependent problem, the parallel Newton-Krylov-Schwarz (NKS) algorithm is used to solve the system at each time step. Convergence and accuracy studies show that the curvilinear solution requires less computational effort than a pure Cartesian treatment. This is due both to the more optimal placement of the grid points and to the improved convergence of the implicit solver, nonlinearly and linearly. The latter effect, which is significant (more than an order of magnitude in number of inner linear iterations for equivalent accuracy), does not yet seem to be widely appreciated. © 2012 Elsevier Inc.

  13. Numerical simulation of four-field extended magnetohydrodynamics in dynamically adaptive curvilinear coordinates via Newton-Krylov-Schwarz

    KAUST Repository

    Yuan, Xuefei; Jardin, Stephen C.; Keyes, David E.

    2012-01-01

    Numerical simulations of the four-field extended magnetohydrodynamics (MHD) equations with hyper-resistivity terms present a difficult challenge because of demanding spatial resolution requirements. A time-dependent sequence of . r-refinement adaptive grids obtained from solving a single Monge-Ampère (MA) equation addresses the high-resolution requirements near the . x-point for numerical simulation of the magnetic reconnection problem. The MHD equations are transformed from Cartesian coordinates to solution-defined curvilinear coordinates. After the application of an implicit scheme to the time-dependent problem, the parallel Newton-Krylov-Schwarz (NKS) algorithm is used to solve the system at each time step. Convergence and accuracy studies show that the curvilinear solution requires less computational effort than a pure Cartesian treatment. This is due both to the more optimal placement of the grid points and to the improved convergence of the implicit solver, nonlinearly and linearly. The latter effect, which is significant (more than an order of magnitude in number of inner linear iterations for equivalent accuracy), does not yet seem to be widely appreciated. © 2012 Elsevier Inc.

  14. Synchrotron accelerator technology for proton beam therapy with high accuracy

    International Nuclear Information System (INIS)

    Hiramoto, Kazuo

    2009-01-01

    Proton beam therapy was initially applied to head and neck cancers, but it has now been extended to prostate, lung and liver cancers, so the need for a pencil beam scanning method is increasing. With this method the dose concentration property of the proton beam is further enhanced. The Hitachi group supplied its first pencil beam scanning therapy system to the M. D. Anderson Hospital in the United States, where it has been operational since May 2008. The Hitachi group has been developing a proton therapy system for high-accuracy proton therapy that concentrates the dose in diseased tissue located at various depths and sometimes having a complicated shape. This paper describes the synchrotron accelerator technology, an important element of the proton therapy system. (K.Y.)

  15. Artificial Neural Network-Based Constitutive Relationship of Inconel 718 Superalloy Construction and Its Application in Accuracy Improvement of Numerical Simulation

    Directory of Open Access Journals (Sweden)

    Junya Lv

    2017-01-01

    The use of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play critical roles in process design and optimization. In this investigation, true stress-strain data for an Inconel 718 superalloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1153–1353 K and a strain rate range of 0.01–10 s−1 on a Gleeble 3500 testing machine (DSI, St. Paul, DE, USA). The constitutive relationship was then modeled by an optimally constructed and well-trained back-propagation artificial neural network (ANN). Evaluation of the ANN model revealed that it has admirable performance in characterizing and predicting the flow behavior of the Inconel 718 superalloy. Consequently, the developed ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the continuous mapping relationship between temperature, strain rate, strain and stress. Finally, the constructed ANN was implemented in a finite element solver through the interface of the “URPFLO” subroutine to simulate the isothermal compression tests. The results show that integrating the finite element method with the ANN model can significantly improve the accuracy of numerical simulations of hot forming processes.
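
    A minimal sketch of the modelling idea, using synthetic Arrhenius-type toy data (not Inconel 718 measurements) and an assumed small network architecture, maps (temperature, log strain rate, strain) to flow stress with a back-propagation network:

        # Minimal sketch: ANN regression of a flow-stress surface on synthetic data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n = 2000
        T = rng.uniform(1153.0, 1353.0, n)           # temperature, K
        logsr = rng.uniform(-2.0, 1.0, n)            # log10 strain rate
        strain = rng.uniform(0.05, 0.7, n)
        stress = 4.0e4 * np.exp(-T / 300.0) * (1.0 + 0.5 * logsr) * strain**0.2
        stress += rng.normal(0.0, 2.0, n)            # measurement noise

        X = np.column_stack([T, logsr, strain])
        scaler = StandardScaler().fit(X)
        net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        net.fit(scaler.transform(X), stress)
        print("R^2 on training data:", net.score(scaler.transform(X), stress))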

  16. Dimensional accuracy of aluminium extrusions in mechanical calibration

    Science.gov (United States)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach, which is also attractive from a cost perspective, is to use extruded profiles with standard tolerances and utilize downstream processes to calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global longitudinal stretching in combination with local bending, while the second utilizes transversal stretching and local bending of the cross-section. An extruded U-profile is used to compare the two methods by means of numerical analyses. The FEA program ABAQUS is used in combination with Design of Experiments (DOE) to provide response surfaces; the DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.

  17. Accuracy of spectral and finite difference schemes in 2D advection problems

    DEFF Research Database (Denmark)

    Naulin, V.; Nielsen, A.H.

    2003-01-01

    In this paper we investigate the accuracy of two numerical procedures commonly used to solve 2D advection problems: spectral and finite difference (FD) schemes. These schemes are widely used, simulating, e.g., neutral and plasma flows. FD schemes have long been considered fast, relatively easy...... that the accuracy of FD schemes can be significantly improved if one is careful in choosing an appropriate FD scheme that reflects conservation properties of the nonlinear terms and in setting up the grid in accordance with the problem....

  18. Comparing numerical methods for the solutions of the Chen system

    International Nuclear Information System (INIS)

    Noorani, M.S.M.; Hashim, I.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.

    2007-01-01

    In this paper, the Adomian decomposition method (ADM) is applied to the Chen system, a three-dimensional system of ODEs with quadratic nonlinearities. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the classical fourth-order Runge-Kutta (RK4) numerical solutions are made. In particular, we look at the accuracy of the ADM as the Chen system changes from a non-chaotic system to a chaotic one. To highlight some computational difficulties due to a high Lyapunov exponent, a comparison with the Lorenz system is given.
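
    For reference, a classical RK4 integration of the Chen system looks as follows; the standard chaotic-regime parameter values, the initial state and the step size are assumed choices, not the paper's:

        # Minimal sketch: RK4 integration of the Chen system.
        import numpy as np

        a, b, c = 35.0, 3.0, 28.0            # chaotic regime (standard values)

        def chen(s):
            x, y, z = s
            return np.array([a * (y - x),
                             (c - a) * x - x * z + c * y,
                             x * y - b * z])

        def rk4_step(s, h):
            k1 = chen(s)
            k2 = chen(s + 0.5 * h * k1)
            k3 = chen(s + 0.5 * h * k2)
            k4 = chen(s + h * k3)
            return s + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

        s, h = np.array([-10.0, 0.0, 37.0]), 1.0e-3
        for _ in range(int(5.0 / h)):        # integrate to t = 5
            s = rk4_step(s, h)
        print("state at t = 5:", s)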

  19. Direct Numerical Simulation of Turbulent Flow Over Complex Bathymetry

    Science.gov (United States)

    Yue, L.; Hsu, T. J.

    2017-12-01

    Direct numerical simulation (DNS) is regarded as a powerful tool in the investigation of turbulent flow featured with a wide range of time and spatial scales. With the application of coordinate transformation in a pseudo-spectral scheme, a parallelized numerical modeling system was created aiming at simulating flow over complex bathymetry with high numerical accuracy and efficiency. The transformed governing equations were integrated in time using a third-order low-storage Runge-Kutta method. For spatial discretization, the discrete Fourier expansion was adopted in the streamwise and spanwise direction, enforcing the periodic boundary condition in both directions. The Chebyshev expansion on Chebyshev-Gauss-Lobatto points was used in the wall-normal direction, assuming there is no-slip on top and bottom walls. The diffusion terms were discretized with a Crank-Nicolson scheme, while the advection terms dealiased with the 2/3 rule were discretized with an Adams-Bashforth scheme. In the prediction step, the velocity was calculated in physical domain by solving the resulting linear equation directly. However, the extra terms introduced by coordinate transformation impose a strict limitation to time step and an iteration method was applied to overcome this restriction in the correction step for pressure by solving the Helmholtz equation. The numerical solver is written in object-oriented C++ programing language utilizing Armadillo linear algebra library for matrix computation. Several benchmarking cases in laminar and turbulent flow were carried out to verify/validate the numerical model and very good agreements are achieved. Ongoing work focuses on implementing sediment transport capability for multiple sediment classes and parameterizations for flocculation processes.
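
    A minimal sketch of the wall-normal ingredient, written in Python rather than the authors' C++, builds the Chebyshev-Gauss-Lobatto points and the corresponding differentiation matrix (following the standard construction in Trefethen's Spectral Methods in MATLAB) and checks it on a smooth function:

        # Minimal sketch: Chebyshev-Gauss-Lobatto differentiation matrix.
        import numpy as np

        def cheb(N):
            """Differentiation matrix D and collocation points x on [-1, 1]."""
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
            D -= np.diag(D.sum(axis=1))      # diagonal via negative row sums
            return D, x

        D, x = cheb(16)
        u = np.sin(np.pi * x)
        err = np.abs(D @ u - np.pi * np.cos(np.pi * x)).max()
        print("max derivative error on 17 points:", err)   # spectrally small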

  20. Updating flood maps efficiently using existing hydraulic models, very-high-accuracy elevation data, and a geographic information system; a pilot study on the Nisqually River, Washington

    Science.gov (United States)

    Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.

    2001-01-01

    cross sections, and can generate working maps across a broad range of scales, for any selected area, and overlayed with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.

  1. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of the numerical discretization algorithms of an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
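
    A minimal sketch of the trapezoidal discretization-based estimator, applied to the toy model dx/dt = -theta*x with assumed parameter values (the paper smooths the states with penalized splines first; here the noisy observations are used directly for brevity):

        # Minimal sketch: one-parameter trapezoidal discretization-based estimate.
        import numpy as np

        rng = np.random.default_rng(1)
        theta_true, dt = 0.8, 0.1
        t = np.arange(0.0, 10.0 + dt, dt)
        x = 5.0 * np.exp(-theta_true * t) + rng.normal(0.0, 0.02, t.size)  # noisy states

        # trapezoidal rule:  x[i+1] - x[i]  ~  -theta * (dt/2) * (x[i] + x[i+1])
        dx = x[1:] - x[:-1]                       # response
        z = -0.5 * dt * (x[1:] + x[:-1])          # regressor multiplying theta
        theta_hat = (z @ dx) / (z @ z)            # one-parameter least squares
        print("estimated theta:", theta_hat, "(true value 0.8)")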

  2. Canonical momenta and numerical instabilities in particle codes

    International Nuclear Information System (INIS)

    Godfrey, B.B.

    1975-01-01

    A set of warm plasma dispersion relations appropriate to a large class of electromagnetic plasma simulation codes is derived. The numerical Cherenkov instability is shown by analytic and numerical analysis of these dispersion relations to be the most significant nonphysical effect involving transverse electromagnetic waves. The instability arises due to a spurious phase shift between resonant particles and light waves, caused by a basic incompatibility between the Lagrangian treatment of particle positions and the Eulerian treatment of particle velocities characteristic of most PIC--CIC algorithms. It is demonstrated that, through the use of canonical momentum, this mismatch is alleviated sufficiently to completely eliminate the Cherenkov instability. Collateral effects on simulation accuracy and on other numerical instabilities appear to be minor

  3. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

    This paper evaluates upscaling techniques for hydraulic conductivity measurements on the basis of their accuracy and their practicality for use in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from the different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow model grid-block scale. Initial results indicate that analytical techniques are capable of upscaling constitutive properties from the point measurement scale to the flow model grid-block scale; however, no single analytical technique proves adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy depends on knowledge of the local flow regime at every grid block

  4. Numerical consistency check between two approaches to radiative ...

    Indian Academy of Sciences (India)

    approaches for a consistency check on numerical accuracy, and find out the stabil- ... ln(M_R/1 GeV) to the top-quark mass scale t_0 (= ln(m_t/1 GeV)), where t_0 ≤ t ≤ t_R, we ..... It is in general to tone down the solar mixing angle through further fine.

  5. A detailed survey of numerical methods for unconstrained minimization. Pt. 1

    International Nuclear Information System (INIS)

    Mika, K.; Chaves, T.

    1980-01-01

    A detailed description of numerical methods for unconstrained minimization is presented. This first part surveys in particular conjugate direction and gradient methods, whereas variable metric methods will be the subject of the second part. Among the results of special interest we quote the following. The conjugate direction methods of Powell, Zangwill and Sutti can be best interpreted if the Smith approach is adopted. The conditions for quadratic termination of Powell's first procedure are analyzed. Numerical results based on nonlinear least squares problems are presented for the following conjugate direction codes: VA04AD from the Harwell Subroutine Library and ZXPOW from IMSL, both implementations of Powell's second procedure, DFMND from IBM-SILMATH (Zangwill's method) and Brent's algorithm PRAXIS. VA04AD turns out to be superior in all cases, while PRAXIS improves for high-dimensional problems. All codes clearly exhibit superlinear convergence. Akaike's result for the method of steepest descent is derived directly from a set of nonlinear recurrence relations. Numerical results obtained with the highly ill-conditioned Hilbert function confirm the theoretical predictions. Several properties of the conjugate gradient method are presented and a new derivation of the equivalence of steepest descent partan and the CG method is given. A comparison of numerical results from the CG codes VA08AD (Fletcher-Reeves), DFMCG (the SSP version of the Fletcher-Reeves algorithm) and VA14AD (Powell's implementation of the Polak-Ribiere formula) reveals that VA14AD is clearly superior in all cases, but that the convergence rate of these codes is only weakly superlinear, so that high-accuracy solutions require extremely large numbers of function calls. (orig.)
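
    As a minimal illustration of the Fletcher-Reeves recurrence discussed above, the sketch below applies it to the Rosenbrock function; the backtracking line search and restart rule are simple assumed choices, far cruder than the line searches used in library codes such as VA08AD:

        # Minimal sketch: Fletcher-Reeves nonlinear CG with Armijo backtracking.
        import numpy as np

        def f(x):
            return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

        def grad(x):
            return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                             200.0 * (x[1] - x[0]**2)])

        x = np.array([-1.2, 1.0])
        g = grad(x)
        d = -g
        for k in range(2000):
            alpha = 1.0
            while alpha > 1e-12 and f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
                alpha *= 0.5                  # Armijo backtracking
            x = x + alpha * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves formula
            d = -g_new + beta * d
            if g_new @ d >= 0.0:              # restart if not a descent direction
                d = -g_new
            g = g_new
            if np.linalg.norm(g) < 1e-6:
                break
        print("iterations:", k + 1, "minimiser estimate:", x)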

  6. The application of the large particles method of numerical modeling of the process of carbonic nanostructures synthesis in plasma

    Science.gov (United States)

    Abramov, G. V.; Gavrilov, A. N.

    2018-03-01

    The article deals with the numerical solution of a mathematical model of particle motion and interaction in a multicomponent plasma, using the electric arc synthesis of carbon nanostructures as an example. The large number of particles and of their interactions requires substantial computing resources and time. Application of the large-particles method makes it possible to reduce the amount of computation and the hardware requirements without affecting the accuracy of the numerical calculations. GPGPU parallel computing with the Nvidia CUDA technology allows all general-purpose computation to be organized on the graphics processor. A comparative analysis of different approaches to parallelizing the computations was carried out to speed up the calculations, and the algorithm using shared memory was chosen to preserve the accuracy of the solution. A numerical study of the influence of the particle density within a macro-particle on the motion parameters and the total number of particle collisions in the plasma was carried out for different synthesis modes. A rational range of the coherence coefficient of particles in the macro-particle was computed.

  7. PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components

    CERN Document Server

    Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia

    2016-01-01

    The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major components of the CLIC linear collider are as follows: before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned to within 10 μm with respect to a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools that address several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...

  8. Towards standard testbeds for numerical relativity

    International Nuclear Information System (INIS)

    Alcubierre, Miguel; Allen, Gabrielle; Bona, Carles; Fiske, David; Goodale, Tom; Guzman, F Siddhartha; Hawke, Ian; Hawley, Scott H; Husa, Sascha; Koppitz, Michael; Lechner, Christiane; Pollney, Denis; Rideout, David; Salgado, Marcelo; Schnetter, Erik; Seidel, Edward; Shinkai, Hisa-aki; Shoemaker, Deirdre; Szilagyi, Bela; Takahashi, Ryoji; Winicour, Jeff

    2004-01-01

    In recent years, many different numerical evolution schemes for Einstein's equations have been proposed to address stability and accuracy problems that have plagued the numerical relativity community for decades. Some of these approaches have been tested on different spacetimes, and conclusions have been drawn based on these tests. However, differences in results originate from many sources, including not only formulations of the equations, but also gauges, boundary conditions, numerical methods and so on. We propose to build up a suite of standardized testbeds for comparing approaches to the numerical evolution of Einstein's equations that are designed to both probe their strengths and weaknesses and to separate out different effects, and their causes, seen in the results. We discuss general design principles of suitable testbeds, and we present an initial round of simple tests with periodic boundary conditions. This is a pivotal first step towards building a suite of testbeds to serve the numerical relativists and researchers from related fields who wish to assess the capabilities of numerical relativity codes. We present some examples of how these tests can be quite effective in revealing various limitations of different approaches, and illustrating their differences. The tests are presently limited to vacuum spacetimes, can be run on modest computational resources and can be used with many different approaches used in the relativity community

  9. Towards standard testbeds for numerical relativity

    Energy Technology Data Exchange (ETDEWEB)

    Alcubierre, Miguel [Inst. de Ciencias Nucleares, Univ. Nacional Autonoma de Mexico, Apartado Postal 70-543, Mexico Distrito Federal 04510 (Mexico); Allen, Gabrielle; Goodale, Tom; Guzman, F Siddhartha; Hawke, Ian; Husa, Sascha; Koppitz, Michael; Lechner, Christiane; Pollney, Denis; Rideout, David [Max-Planck-Inst. fuer Gravitationsphysik, Albert-Einstein-Institut, 14476 Golm (Germany); Bona, Carles [Departament de Fisica, Universitat de les Illes Balears, Ctra de Valldemossa km 7.5, 07122 Palma de Mallorca (Spain); Fiske, David [Dept. of Physics, Univ. of Maryland, College Park, MD 20742-4111 (United States); Hawley, Scott H [Center for Relativity, Univ. of Texas at Austin, Austin, Texas 78712 (United States); Salgado, Marcelo [Inst. de Ciencias Nucleares, Univ. Nacional Autonoma de Mexico, Apartado Postal 70-543, Mexico Distrito Federal 04510 (Mexico); Schnetter, Erik [Inst. fuer Astronomie und Astrophysik, Universitaet Tuebingen, 72076 Tuebingen (Germany); Seidel, Edward [Max-Planck-Inst. fuer Gravitationsphysik, Albert-Einstein-Inst., 14476 Golm (Germany); Shinkai, Hisa-aki [Computational Science Div., Inst. of Physical and Chemical Research (RIKEN), Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Shoemaker, Deirdre [Center for Radiophysics and Space Research, Cornell Univ., Ithaca, NY 14853 (United States); Szilagyi, Bela [Dept. of Physics and Astronomy, Univ. of Pittsburgh, Pittsburgh, PA 15260 (United States); Takahashi, Ryoji [Theoretical Astrophysics Center, Juliane Maries Vej 30, 2100 Copenhagen, (Denmark); Winicour, Jeff [Max-Planck-Inst. fuer Gravitationsphysik, Albert-Einstein-Institut, 14476 Golm (Germany)

    2004-01-21

    In recent years, many different numerical evolution schemes for Einstein's equations have been proposed to address stability and accuracy problems that have plagued the numerical relativity community for decades. Some of these approaches have been tested on different spacetimes, and conclusions have been drawn based on these tests. However, differences in results originate from many sources, including not only formulations of the equations, but also gauges, boundary conditions, numerical methods and so on. We propose to build up a suite of standardized testbeds for comparing approaches to the numerical evolution of Einstein's equations that are designed to both probe their strengths and weaknesses and to separate out different effects, and their causes, seen in the results. We discuss general design principles of suitable testbeds, and we present an initial round of simple tests with periodic boundary conditions. This is a pivotal first step towards building a suite of testbeds to serve the numerical relativists and researchers from related fields who wish to assess the capabilities of numerical relativity codes. We present some examples of how these tests can be quite effective in revealing various limitations of different approaches, and illustrating their differences. The tests are presently limited to vacuum spacetimes, can be run on modest computational resources and can be used with many different approaches used in the relativity community.

  10. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    Science.gov (United States)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  11. Numerical simulations of helium flow through prismatic fuel elements of very high temperature reactors

    International Nuclear Information System (INIS)

    Ribeiro, Felipe Lopes; Pinto, Joao Pedro C.T.A.

    2013-01-01

    The most popular concept for the 4th-generation Very High Temperature Reactor (VHTR) uses a graphite-moderated, helium-cooled core with an outlet gas temperature of approximately 1000 deg C. The high outlet temperature allows the use of process heat and the production of hydrogen through the thermochemical iodine-sulfur process, as well as highly efficient electricity generation. There are two concepts for the VHTR core: the prismatic block and the pebble bed core. The prismatic block core has two popular fuel element concepts: multi-hole and annular. In the multi-hole fuel element, prismatic graphite blocks contain cylindrical flow channels where the helium coolant flows, removing heat from cylindrical fuel rods positioned in the graphite. The annular fuel element, on the other hand, has annular coolant channels around the fuel. This paper presents numerical evaluations of prismatic multi-hole and annular VHTR fuel elements and compares the results for these two assembly types. The analyses were performed using the CFD code ANSYS CFX 14.0, with simulations on 1/12 fuel element models. A numerical validation was performed through an energy balance, in which the theoretical and numerically computed generated heat were compared for each model. (author)

  12. Large-scale numerical simulations on two-phase flow behavior in a fuel bundle of RMWR with the earth simulator

    International Nuclear Information System (INIS)

    Kazuyuki, Takase; Hiroyuki, Yoshida; Hidesada, Tamai; Hajime, Akimoto; Yasuo, Ose

    2003-01-01

    Fluid flow characteristics in a fuel bundle of a reduced-moderation light water reactor (RMWR) with a tight-lattice core were analyzed numerically, using a newly developed two-phase flow analysis code, under full-bundle-size conditions. Conventional analysis methods such as sub-channel codes need constitutive equations based on experimental data; since no experimental data are available on the thermal-hydraulics of the tight-lattice core, it is difficult to obtain high prediction accuracy in the thermal design of the RMWR with such methods. Direct numerical simulations on the Earth Simulator were therefore chosen. The axial velocity distribution in the fuel bundle changes sharply around a grid spacer, and a quantitative evaluation of this change was obtained from the present preliminary numerical study. These results give a strong indication that the thermal design procedure of the RMWR can be established through large-scale direct simulations. (authors)

  13. Numerical Analysis of Dusty-Gas Flows

    Science.gov (United States)

    Saito, T.

    2002-02-01

    This paper presents the development of a numerical code for simulating unsteady dusty-gas flows including shock and rarefaction waves. The numerical results obtained for a shock tube problem are used to validate the accuracy and performance of the code. The code is then extended to simulate two-dimensional problems. Since the interactions between the gas and particle phases are calculated with the operator splitting technique, numerical schemes can be chosen independently for the different phases: a semi-analytical method is developed for the dust phase, while the TVD scheme of Harten and Yee is chosen for the gas phase. Throughout this study, computations are carried out on an SGI Origin2000, a parallel computer with multiple RISC-based processors. The efficient use of the parallel computer system is an important issue, and the code implementation on the Origin2000 is also described. Flow profiles of both the gas and the solid particles behind the steady shock wave are calculated by integrating the steady conservation equations. The good agreement between the pseudo-stationary solutions and those from the current numerical code validates the numerical approach and the actual coding. The pseudo-stationary shock profiles can also be used as initial conditions for unsteady multidimensional simulations.

  14. Horizontal Positional Accuracy of Google Earth’s High-Resolution Imagery Archive

    Directory of Open Access Journals (Sweden)

    David Potere

    2008-12-01

    Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth’s landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with a known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than that of the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world’s peri-urban areas.
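
    The horizontal RMSE statistic used above can be computed directly from per-point offsets; the sketch below uses synthetic easting/northing offsets, not the study's actual control-point data:

        # Minimal sketch: horizontal (2D) RMSE from control-point offsets.
        import numpy as np

        rng = np.random.default_rng(2)
        dE = rng.normal(0.0, 25.0, 436)            # easting offsets, metres (synthetic)
        dN = rng.normal(0.0, 25.0, 436)            # northing offsets, metres (synthetic)

        rmse = np.sqrt(np.mean(dE**2 + dN**2))     # horizontal RMSE
        magnitudes = np.hypot(dE, dN)
        print("RMSE [m]:", rmse)
        print("error magnitude range [m]:", magnitudes.min(), "-", magnitudes.max())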

  15. A new device for liver cancer biomarker detection with high accuracy

    Directory of Open Access Journals (Sweden)

    Shuaipeng Wang

    2015-06-01

    A novel cantilever-array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced k variation can be dramatically reduced in comparison with that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained by a fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.

  16. WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method

    Science.gov (United States)

    Crevoisier, David; Voltz, Marc

    2013-04-01

    To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvement of simulation accuracy by data-assimilation techniques is now common in many application fields. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Despite the regular increase in computing capacity, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes therefore remains a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards' and convection-diffusion equations, that fulfils these characteristics. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring a satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine textured soil or high water and solute
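
    For reference, the standard van Genuchten retention curve mentioned above can be written as follows; the parameter values are illustrative loam-like numbers, not those used in WATSFAR:

        # Minimal sketch: standard van Genuchten water retention curve,
        # theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)^n)^m, m = 1 - 1/n.
        import numpy as np

        def van_genuchten(h, theta_r=0.08, theta_s=0.43, alpha=3.6, n=1.56):
            """Volumetric water content for pressure head h [m] (h < 0 unsaturated)."""
            m = 1.0 - 1.0 / n
            h = np.asarray(h, dtype=float)
            se = np.where(h < 0.0, (1.0 + (alpha * np.abs(h))**n) ** (-m), 1.0)
            return theta_r + (theta_s - theta_r) * se

        print(van_genuchten([-10.0, -1.0, -0.1, 0.0]))   # dry -> near saturation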

  17. Isolating Numerical Error Effects in LES Using DNS-Derived Sub-Grid Closures

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann

    2017-11-01

    The prospect of employing an explicitly-defined filter in Large-Eddy Simulations (LES) provides the opportunity to reduce the interaction of numerical/modeling errors and offers the chance to carry out grid-converged assessments, important for model development. By utilizing a quasi a priori evaluation method - wherein the LES is assisted by closures derived from a fully-resolved computation - it then becomes possible to understand the combined impacts of filter construction (e.g., filter width, spectral sharpness) and discretization choice on the solution accuracy. The present work looks at calculations of the compressible LES Navier-Stokes system and considers discrete filtering formulations in conjunction with high-order finite differencing schemes. Accuracy of the overall method construction is compared to a consistently-filtered exact solution, and lessons are extended to a posteriori (i.e., non-assisted) evaluations. Supported by ERC, Inc. (PS150006) and AFOSR (Dr. Chiping Li).

  18. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    Because of the stability requirements on the numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node non-compatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of the in-plane stress results, a simultaneous equation approach was also suggested. Numerical experiments show that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.

  19. Experimental characterization and numerical simulation of riveted lap-shear joints using Rivet Element

    Science.gov (United States)

    Vivio, Francesco; Fanelli, Pierluigi; Ferracci, Michele

    2018-03-01

    In the aeronautical and automotive industries the use of rivets for applications requiring several joining points is now very common. In spite of its very simple shape, a riveted junction has many contact surfaces and stress concentrations that make the local stiffness very difficult to calculate. To overcome this difficulty, finite element models with very dense meshes are commonly used for single-joint analysis, because accuracy is crucial for a correct structural analysis. However, when several riveted joints are present, the simulation becomes computationally too heavy, and significant restrictions to joint modelling are usually introduced, sacrificing the accuracy of the local stiffness evaluation. In this paper, we test the accuracy of a rivet finite element presented in previous works by the authors. The structural behaviour of a lap joint specimen with a riveted joint is simulated numerically and compared to experimental measurements. The Rivet Element, based on a closed-form solution of a reference theoretical model of the rivet joint, simulates the local and overall stiffness of the junction while combining high accuracy with a low number of degrees of freedom. The performance of the Rivet Element is compared with that of a nonlinear FE model of the rivet, built with solid elements and a dense mesh, and with experimental data. The promising results reported indicate that the Rivet Element is able to simulate, with great accuracy, actual structures with several rivet connections.

  20. Cadastral Database Positional Accuracy Improvement

    Science.gov (United States)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates to the absolute position in a specific coordinate system and to the relation to neighbouring features. With the growth of spatially based technology, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), a PAI campaign is inevitable, especially for legacy cadastral databases. Integration of a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely merging both datasets will distort the relative geometry; the improved dataset should be further treated to minimize inherent errors and to fit the new, accurate dataset. The main focus of this study is to describe a method of angular-based Least Squares Adjustment (LSA) for the PAI process of a legacy dataset. The existing high-accuracy dataset known as the National Digital Cadastral Database (NDCDB) is then used as a benchmark to validate the results. It was found that the proposed technique is well suited to positional accuracy improvement of legacy spatial datasets.
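
    The adjustment idea can be sketched with a plain four-parameter (Helmert) similarity transformation fitted by least squares between synthetic legacy and reference coordinates; the angular-based LSA of the paper is more elaborate, so the coordinates and parameters below are purely illustrative assumptions:

        # Minimal sketch: four-parameter similarity transformation by least squares.
        import numpy as np

        rng = np.random.default_rng(3)
        ref = rng.uniform(0.0, 1000.0, (20, 2))                 # accurate coordinates (e.g. GNSS)
        theta, scale, shift = np.deg2rad(0.05), 1.0002, np.array([4.0, -6.0])
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        legacy = (ref - shift) @ np.linalg.inv(scale * R).T + rng.normal(0.0, 0.05, (20, 2))

        # model: ref = [[a, -b], [b, a]] @ legacy + [tx, ty]; unknowns a, b, tx, ty
        A = np.zeros((2 * len(legacy), 4))
        A[0::2, 0], A[0::2, 1], A[0::2, 2] = legacy[:, 0], -legacy[:, 1], 1.0
        A[1::2, 0], A[1::2, 1], A[1::2, 3] = legacy[:, 1],  legacy[:, 0], 1.0
        l = ref.reshape(-1)
        a, b, tx, ty = np.linalg.lstsq(A, l, rcond=None)[0]
        print("scale:", np.hypot(a, b), "rotation [deg]:", np.degrees(np.arctan2(b, a)))
        print("translation:", tx, ty)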

  1. High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer

    CERN Multimedia

    2002-01-01

    The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nucleosynthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency ν_c = qB/(2πm).
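
    As a quick, hedged illustration of the relation this record relies on: in a Penning trap the cyclotron frequency ties the measured frequency to the ion mass through ν_c = qB/(2πm). The field strength, charge state and frequency in the snippet below are illustrative placeholders, not values from the experiment.

      # Minimal sketch: deduce an ion mass from a measured cyclotron frequency,
      # m = q*B / (2*pi*nu_c).  All numerical inputs here are made up.
      import math

      E_CHARGE = 1.602176634e-19      # C, elementary charge
      AMU = 1.66053906660e-27         # kg, atomic mass unit

      def mass_from_cyclotron(nu_c_hz, b_tesla, charge_state=1):
          """Return the ion mass in atomic mass units for a measured cyclotron frequency."""
          q = charge_state * E_CHARGE
          m_kg = q * b_tesla / (2.0 * math.pi * nu_c_hz)
          return m_kg / AMU

      # Example with placeholder values: a singly charged ion in a 5.9 T trap
      # with a measured cyclotron frequency of about 1.0 MHz.
      print(mass_from_cyclotron(nu_c_hz=1.0e6, b_tesla=5.9))   # roughly 90 u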

  2. Numerical simulation of water quality in Yangtze Estuary

    Directory of Open Access Journals (Sweden)

    Xi Li

    2009-12-01

    Full Text Available In order to monitor water quality in the Yangtze Estuary, water samples were collected and field observations of currents and velocity stratification were carried out using a shipboard acoustic Doppler current profiler (ADCP). Results for two representative variables, the temporal and spatial variation of a new point-source sewage discharge as manifested by chemical oxygen demand (COD) and the initial water quality distribution as manifested by dissolved oxygen (DO), were obtained by applying the Environmental Fluid Dynamics Code (EFDC) with solutions for hydrodynamics during tides. The numerical results were compared with field data, and the field data verified the numerical application: the model is an effective tool for water quality simulation. For the point-source discharge, the COD concentration was simulated with an initial in-river value of zero. The simulated increments and distribution of COD in the water show acceptable agreement with field data. The concentration of DO is much higher in the North Branch than in the South Branch due to consumption of oxygen in the South Branch resulting from the discharge of sewage from Shanghai. The DO concentration is greater in the surface layer than in the bottom layer. The DO concentration is low in areas with a depth of less than 20 m, and high in areas between the 20-m and 30-m isobaths. It is concluded that the numerical model is valuable for simulating water quality in the case of a specific point-source pollutant discharge. The EFDC model is also of satisfactory accuracy in water quality simulation of the Yangtze Estuary.

  3. Measurement and numerical simulation of high intensity focused ultrasound field in water

    Science.gov (United States)

    Lee, Kang Il

    2017-11-01

    In the present study, the acoustic field of a high intensity focused ultrasound (HIFU) transducer in water was measured by using a commercially available needle hydrophone intended for HIFU use. To validate the results of hydrophone measurements, numerical simulations of HIFU fields were performed by integrating the axisymmetric Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation from the frequency-domain perspective with the help of a MATLAB-based software package developed for HIFU simulation. Quantitative values for the focal waveforms, the peak pressures, and the size of the focal spot were obtained in various regimes of linear, quasilinear, and nonlinear propagation up to the source pressure levels when the shock front was formed in the waveform. The numerical results with the HIFU simulator solving the KZK equation were compared with the experimental data and found to be in good agreement. This confirms that the numerical simulation based on the KZK equation is capable of capturing the nonlinear pressure field of therapeutic HIFU transducers well enough to make it suitable for HIFU treatment planning.

  4. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    Science.gov (United States)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists of reducing a given initial value problem defined over some interval to a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
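
    The sketch below does not reproduce the paper's inversion formulas; it only illustrates the subinterval idea, with a truncated Taylor expansion standing in for any short-time approximation of an oscillatory solution: a single expansion is useless after a few cycles, while restarting the same approximation on a sequence of subintervals, each initialized with the end state of the previous one, keeps it accurate over many cycles.

      from math import factorial, pi

      def taylor_advance(y0, v0, h, order=10):
          # Truncated Taylor update for the oscillator y'' = -y; its derivatives at any
          # point cycle with period 4: (y, v, -y, -v, y, ...).
          derivs = [y0, v0, -y0, -v0]
          y_new = sum(derivs[k % 4] * h**k / factorial(k) for k in range(order + 1))
          v_new = sum(derivs[(k + 1) % 4] * h**k / factorial(k) for k in range(order + 1))
          return y_new, v_new

      T = 20.0 * pi                                  # ten periods; the exact solution is cos(t)

      # (a) one expansion over the whole interval: hopeless after a few cycles
      y_single, _ = taylor_advance(1.0, 0.0, T)

      # (b) the same order-10 approximation restarted on 200 subintervals,
      #     each starting from the end state of the previous one
      y, v = 1.0, 0.0
      for _ in range(200):
          y, v = taylor_advance(y, v, T / 200)

      print("exact value            :", 1.0)         # cos(20*pi) = 1
      print("single long expansion  :", y_single)    # astronomically wrong
      print("restarted subintervals :", y)           # accurate to many digits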

  5. Accuracy optimization of high-speed AFM measurements using Design of Experiments

    DEFF Research Database (Denmark)

    Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard

    2010-01-01

    Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement results are influenced by a number of scan settings parameters defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, and scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of the information from collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied...

  6. Direct numerical simulation of bluff-body-stabilized premixed flames

    KAUST Repository

    Arias, Paul G.

    2014-01-10

    To enable high fidelity simulation of combustion phenomena in realistic devices, an embedded boundary method is implemented into direct numerical simulations (DNS) of reacting flows. One of the additional numerical issues associated with reacting flows is the stable treatment of the embedded boundaries in the presence of multicomponent species and reactions. The implemented method is validated in two test configurations: a pre-mixed hydrogen/air flame stabilized in a backward-facing step configuration, and reactive flows around a square prism. The former is of interest in practical gas turbine combustor applications in which the thermo-acoustic instabilities are a strong concern, and the latter serves as a good model problem to capture the vortex shedding behind a bluff body. In addition, a reacting flow behind the square prism serves as a model for the study of flame stabilization in a micro-channel combustor. The present study utilizes fluid-cell reconstruction methods in order to capture important flame-to-solid wall interactions that are important in confined multicomponent reacting flows. Results show that the DNS with embedded boundaries can be extended to more complex geometries without loss of accuracy and the high fidelity simulation data can be used to develop and validate turbulence and combustion models for the design of practical combustion devices.

  7. A highly accurate spectral method for the Navier–Stokes equations in a semi-infinite domain with flexible boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Matsushima, Toshiki; Ishioka, Keiichi, E-mail: matsushima@kugi.kyoto-u.ac.jp, E-mail: ishioka@gfd-dennou.org [Graduate School of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan)

    2017-04-15

    This paper presents a spectral method for numerically solving the Navier–Stokes equations in a semi-infinite domain bounded by a flat plane: the aim is to obtain high accuracy with flexible boundary conditions. The proposed use is for numerical simulations of small-scale atmospheric phenomena near the ground. We introduce basis functions that fit the semi-infinite domain, and an integral condition for vorticity is used to reduce the computational cost when solving the partial differential equations that appear when the viscosity term is treated implicitly. Furthermore, in order to ensure high accuracy, two iteration techniques are applied when solving the system of linear equations and in determining boundary values. This significantly reduces numerical errors, and the proposed method enables high-resolution numerical experiments. This is demonstrated by numerical experiments showing the collision of a vortex ring into a wall; these were performed using numerical models based on the proposed method. It is shown that the time evolution of the flow field is successfully obtained not only near the boundary, but also in a region far from the boundary. The applicability of the proposed method and the integral condition is discussed. (paper)

  8. The accuracy of nurse performance of the triage process in a tertiary ...

    African Journals Online (AJOL)

    The accuracy of nurse performance of the triage process in a tertiary hospital emergency department in Gauteng Province, South Africa. ... discriminator use, numerical miscalculations and other human errors. Quality control and quality assurance measures must target training in these areas to minimise mis-triage in the ED.

  9. Numerical Simulation of Transitional, Hypersonic Flows using a Hybrid Particle-Continuum Method

    Science.gov (United States)

    Verhoff, Ashley Marie

    Analysis of hypersonic flows requires consideration of multiscale phenomena due to the range of flight regimes encountered, from rarefied conditions in the upper atmosphere to fully continuum flow at low altitudes. At transitional Knudsen numbers there are likely to be localized regions of strong thermodynamic nonequilibrium effects that invalidate the continuum assumptions of the Navier-Stokes equations. Accurate simulation of these regions, which include shock waves, boundary and shear layers, and low-density wakes, requires a kinetic theory-based approach where no prior assumptions are made regarding the molecular distribution function. Because of the nature of these types of flows, there is much to be gained in terms of both numerical efficiency and physical accuracy by developing hybrid particle-continuum simulation approaches. The focus of the present research effort is the continued development of the Modular Particle-Continuum (MPC) method, where the Navier-Stokes equations are solved numerically using computational fluid dynamics (CFD) techniques in regions of the flow field where continuum assumptions are valid, and the direct simulation Monte Carlo (DSMC) method is used where strong thermodynamic nonequilibrium effects are present. Numerical solutions of transitional, hypersonic flows are thus obtained with increased physical accuracy relative to CFD alone, and improved numerical efficiency is achieved in comparison to DSMC alone because this more computationally expensive method is restricted to those regions of the flow field where it is necessary to maintain physical accuracy. In this dissertation, a comprehensive assessment of the physical accuracy of the MPC method is performed, leading to the implementation of a non-vacuum supersonic outflow boundary condition in particle domains, and more consistent initialization of DSMC simulator particles along hybrid interfaces. The relative errors between MPC and full DSMC results are greatly reduced as a

  10. Numerical simulations of stripping effects in high-intensity hydrogen ion linacs

    Directory of Open Access Journals (Sweden)

    J.-P. Carneiro

    2009-04-01

    Full Text Available Numerical simulations of H^{-} stripping losses from blackbody radiation, electromagnetic fields, and residual gas have been implemented into the beam dynamics code TRACK. Estimates of the stripping losses along two high-intensity H^{-} linacs are presented: the Spallation Neutron Source linac currently being operated at Oak Ridge National Laboratory and an 8 GeV superconducting linac currently being designed at Fermi National Accelerator Laboratory.

  11. Influence of accuracy of thermal property data of a phase change material on the result of a numerical model of a packed bed latent heat storage with spheres

    Energy Technology Data Exchange (ETDEWEB)

    Arkar, C.; Medved, S. [University of Ljubljana, Faculty of Mechanical Engineering, Askerceva 6, 1000 Ljubljana (Slovenia)

    2005-11-01

    With the integration of latent-heat thermal energy storage (LHTES) in building services, solar energy and the coldness of ambient air can be efficiently used to reduce the energy used for heating and cooling and to improve the level of living comfort. For this purpose, a cylindrical LHTES containing spheres filled with paraffin was developed. For proper modelling of the LHTES thermal response, the thermal properties of the phase change material (PCM) must be accurately known. This article presents the influence of the accuracy of the thermal property data of the PCM on the predicted thermal response of the LHTES. A packed-bed numerical model was adapted to take into account the non-uniformity of the PCM's porosity and the fluid's velocity. Both are consequences of a small tube-to-sphere diameter ratio, which is characteristic of the developed LHTES. The numerical model can also take into account the PCM's temperature-dependent thermal properties. The temperature distribution of the latent heat of the paraffin (RT20) used in the experiment, in the form of apparent heat capacity, was determined using a differential scanning calorimeter (DSC) at different heating and cooling rates. A comparison of the numerical and experimental results confirmed our hypothesis about the important role that the PCM's thermal properties play, especially during slow-running processes, which are characteristic of our application.
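
    As a minimal, hedged illustration of the apparent-heat-capacity representation mentioned above, the sketch below spreads the latent heat over a Gaussian band around the melting temperature; the shape and every number are assumptions for illustration and are not the DSC-derived RT20 data.

      import numpy as np

      c_p   = 2.0e3      # J/(kg K), sensible heat capacity (illustrative)
      L     = 1.4e5      # J/kg, latent heat (illustrative)
      T_m   = 22.0       # degC, nominal melting temperature (illustrative)
      sigma = 1.5        # K, assumed width of the melting range

      def c_apparent(T):
          """Apparent heat capacity: c_p plus the latent heat spread over the melting band."""
          gauss = np.exp(-0.5 * ((T - T_m) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
          return c_p + L * gauss

      # Consistency check: integrating (c_app - c_p) over temperature must give back L,
      # which is what a well-constructed apparent-heat-capacity curve guarantees.
      T = np.linspace(T_m - 10.0 * sigma, T_m + 10.0 * sigma, 20001)
      dT = T[1] - T[0]
      print(((c_apparent(T) - c_p) * dT).sum())      # approximately 1.4e5 J/kg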

  12. Social Power Increases Interoceptive Accuracy

    Directory of Open Access Journals (Sweden)

    Mehrad Moeini-Jazani

    2017-08-01

    Full Text Available Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in the perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy is dependent on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power also predicts interoceptive accuracy similar to, and independent of, how their situationally induced feeling of power does. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective, a psychophysiological account, on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research.

  13. Theoretical and numerical study of highly anisotropic turbulent flows

    NARCIS (Netherlands)

    Biferale, L.; Daumont, I.; Lanotte, A.; Toschi, F.

    2004-01-01

    We present a detailed numerical study of anisotropic statistical fluctuations in stationary, homogeneous turbulent flows. We address both problems of intermittency in anisotropic sectors, and the relative importance of isotropic and anisotropic fluctuations at different scales on a direct numerical

  14. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics is evaluated over a huge data

  15. Behavioral modeling of SRIM tables for numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Martinie, S., E-mail: sebastien.martinie@cea.fr; Saad-Saoud, T.; Moindjie, S.; Munteanu, D.; Autran, J.L., E-mail: jean-luc.autran@univ-amu.fr

    2014-03-01

    Highlights: • Behavioral modeling of SRIM data is performed on the basis of power polynomial fitting functions. • Fast and continuous numerical functions are proposed for the stopping power and projected range. • Functions have been successfully tested for a wide variety of ions and targets. • Typical accuracies below the percent have been obtained in the range 1 keV–1 GeV. - Abstract: This work describes a simple way to implement SRIM stopping power and range tabulated data in the form of fast and continuous numerical functions for intensive simulation. We provide here the methodology of this behavioral modeling as well as the details of the implementation and some numerical examples for ions in silicon target. Developed functions have been successfully tested and used for the simulation of soft errors in microelectronics circuits.
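
    A minimal sketch of the behavioural-modelling idea described above: tabulated (energy, stopping power) points are fitted with a low-order polynomial in log-log space so that the table becomes a fast, continuous function. The sample points below are invented placeholders, not actual SRIM output for silicon.

      import numpy as np

      energy_mev   = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])   # placeholder grid
      stopping_tab = np.array([0.12,  0.45, 1.6, 3.9,  2.1,   0.7,    0.25])  # arbitrary units

      # Fit a 4th-order polynomial to the data in log-log coordinates.
      coeffs = np.polyfit(np.log10(energy_mev), np.log10(stopping_tab), deg=4)

      def stopping_power(e_mev):
          """Continuous surrogate for the table: evaluate the fitted log-log polynomial."""
          return 10.0 ** np.polyval(coeffs, np.log10(e_mev))

      # The surrogate can now be called at any energy inside the fitted range, e.g. deep
      # inside a transport or soft-error simulation loop, without table interpolation.
      print(stopping_power(np.array([0.05, 5.0, 50.0])))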

  16. Behavioral modeling of SRIM tables for numerical simulation

    International Nuclear Information System (INIS)

    Martinie, S.; Saad-Saoud, T.; Moindjie, S.; Munteanu, D.; Autran, J.L.

    2014-01-01

    Highlights: • Behavioral modeling of SRIM data is performed on the basis of power polynomial fitting functions. • Fast and continuous numerical functions are proposed for the stopping power and projected range. • Functions have been successfully tested for a wide variety of ions and targets. • Typical accuracies below the percent have been obtained in the range 1 keV–1 GeV. - Abstract: This work describes a simple way to implement SRIM stopping power and range tabulated data in the form of fast and continuous numerical functions for intensive simulation. We provide here the methodology of this behavioral modeling as well as the details of the implementation and some numerical examples for ions in silicon target. Developed functions have been successfully tested and used for the simulation of soft errors in microelectronics circuits

  17. On the numerical simulation of population dynamics with density-dependent migrations and the Allee effects

    International Nuclear Information System (INIS)

    Sweilam, H N; Khader, M M; Al-Bar, F R

    2008-01-01

    In this paper, the variational iteration method (VIM) and the Adomian decomposition method (ADM) are presented for the numerical simulation of a population dynamics model with density-dependent migrations and the Allee effects. The convergence of the ADM is proved for the model problem. The results obtained by these methods are compared to the exact solution. It is found that these methods always converge to the right solutions with high accuracy. Furthermore, the VIM needs relatively less computational work than the ADM.

  18. High Fidelity, Numerical Investigation of Cross Talk in a Multi-Qubit Xmon Processor

    Science.gov (United States)

    Najafi-Yazdi, Alireza; Kelly, Julian; Martinis, John

    Unwanted electromagnetic interference between qubits, transmission lines, flux lines and other elements of a superconducting quantum processor poses a challenge in engineering such devices. This problem is exacerbated as the number of qubits is scaled up. High fidelity, massively parallel computational toolkits, which can simulate the 3D electromagnetic environment and all features of the device, are instrumental in addressing this challenge. In this work, we numerically investigated the crosstalk between various elements of a multi-qubit quantum processor designed and tested by the Google team. The processor consists of 6 superconducting Xmon qubits with flux lines and gate lines. The device also includes a Purcell filter for readout. The simulations are carried out with a high fidelity, massively parallel EM solver. We will present our findings regarding the sources of crosstalk in the device, as well as the numerical model setup and a comparison with available experimental data.

  19. Methodology for GPS Synchronization Evaluation with High Accuracy

    OpenAIRE

    Li Zan; Braun Torsten; Dimitrova Desislava

    2015-01-01

    Clock synchronization in the order of nanoseconds is one of the critical factors for time based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper we are particularly interested in GPS based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...

  20. Methodology for GPS Synchronization Evaluation with High Accuracy

    OpenAIRE

    Li, Zan; Braun, Torsten; Dimitrova, Desislava Cvetanova

    2015-01-01

    Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. O...

  1. GPU based numerical simulation of core shooting process

    Directory of Open Access Journals (Sweden)

    Yi-zhong Zhang

    2017-11-01

    Full Text Available The core shooting process is the most widely used technique to make sand cores and plays an important role in the quality of sand cores. Although numerical simulation can hopefully optimize the core shooting process, research on numerical simulation of this process is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process has been developed and has achieved good agreement with in-situ experiments. To match the needs of engineering applications, a graphics processing unit (GPU) has also been used to improve the calculation efficiency. The parallel algorithm based on the Compute Unified Device Architecture (CUDA) platform can significantly decrease computing time through a multi-threaded GPU. In this work, the program accelerated by the CUDA parallelization method was developed, and the accuracy of the calculations was ensured by comparison with in-situ experimental results photographed by a high-speed camera. The design and optimization of the parallel algorithm are discussed. The simulation result of a sand core test piece indicated the improvement of the calculation efficiency by the GPU. The developed program has also been validated by in-situ experiments with a transparent core box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation result remained quite consistent with the experimental data. The GPU parallelization method successfully solves the problem of low computational efficiency of the 3D sand shooting simulation program, and thus the developed GPU program is appropriate for engineering applications.

  2. Numerical computations of interior transmission eigenvalues for scattering objects with cavities

    International Nuclear Information System (INIS)

    Peters, Stefan; Kleefeld, Andreas

    2016-01-01

    In this article we extend the inside-outside duality for acoustic transmission eigenvalue problems by allowing scattering objects that may contain cavities. In this context we provide the functional analytical framework necessary to transfer the techniques that have been used in Kirsch and Lechleiter (2013 Inverse Problems, 29 104011) to derive the inside-outside duality. Additionally, extensive numerical results are presented to show that we are able to successfully detect interior transmission eigenvalues with the inside-outside duality approach for a variety of obstacles with and without cavities in three dimensions. In this context, we also discuss the advantages and disadvantages of the inside-outside duality approach from a numerical point of view. Furthermore we derive the integral equations necessary to extend the algorithm in Kleefeld (2013 Inverse Problems, 29 104012) to compute highly accurate interior transmission eigenvalues for scattering objects with cavities, which we will then use as reference values to examine the accuracy of the inside-outside duality algorithm. (paper)

  3. A high-accuracy optical linear algebra processor for finite element applications

    Science.gov (United States)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
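
    The "multiplication by digital convolution" mentioned in the report can be sketched in software as follows: the digit sequences of the two operands are convolved, and a single carry-propagation pass turns the raw digit products into the digits of the result. This is only a conceptual sketch of the encoding idea, not of the optical architecture itself.

      import numpy as np

      def multiply_by_convolution(a, b, base=10):
          """Multiply two non-negative integers by convolving their digit sequences."""
          da = [int(d) for d in str(a)[::-1]]   # least-significant digit first
          db = [int(d) for d in str(b)[::-1]]
          raw = np.convolve(da, db)             # digit products of the result, before carries
          digits, carry = [], 0
          for r in raw:                         # carry propagation
              carry, digit = divmod(int(r) + carry, base)
              digits.append(digit)
          while carry:
              carry, digit = divmod(carry, base)
              digits.append(digit)
          return int("".join(str(d) for d in digits[::-1]))

      print(multiply_by_convolution(1234, 5678), 1234 * 5678)   # both print 7006652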

  4. Numerical modeling of suspended sediment tansfers at the catchment scale with TELEMAC

    Science.gov (United States)

    Taccone, Florent; Antoine, Germain; Delestre, Olivier; Goutal, Nicole

    2017-04-01

    In mountainous regions, the filling of reservoirs is an important issue in terms of efficiency and environmental acceptability for producing hydro-electricity. Thus, the modelling of sediment transfers on highly erodible watersheds is a key challenge from both economic and scientific points of view. Sediment transfers at the watershed scale involve different local flow regimes, due to the complex topography of the field and the time and space variability of the meteorological conditions, as well as several physical processes, because of the heterogeneity of the soil composition and cover. A physically based modelling approach, associated with a fine discretization of the domain, provides an explicit representation of the hydraulic and sedimentary variables and gives river managers the opportunity to simulate the global effects of local solutions for decreasing erosion. On the other hand, this approach is time consuming and needs both a detailed data set for validation and robust numerical schemes for simulating various hydraulic and sediment transport conditions. The erosion processes being heavily reliant on the flow characteristics, this paper focuses on a robust and accurate numerical resolution of the Shallow Water equations using TELEMAC 2D (www.opentelemac.org). One of the main difficulties is to have a numerical scheme able to represent the hydraulic transfers correctly, preserving the positivity of the water depths, dealing with the wet/dry interface and being well-balanced. Few schemes verifying these properties exist, and their accuracy still needs to be evaluated in the case of rain-induced runoff on steep slopes. First, a straight channel test case with a variable slope (Kirstetter et al., 2015) is used to qualify the properties of several Finite Volume numerical schemes. For this test case, a steady rain applied on a dry domain has been performed experimentally in the laboratory, and this configuration gives an analytical solution of the Shallow

  5. Numerical evaluation of methods for computing tomographic projections

    International Nuclear Information System (INIS)

    Zhuang, W.; Gopal, S.S.; Hebert, T.J.

    1994-01-01

    Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
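
    A minimal sketch of the sub-pixel, pixel-driven forward projection trade-off compared above, assuming a simple parallel-beam geometry and nearest-bin deposition; increasing the subdivision improves accuracy at the cost of more work.

      import numpy as np

      def pixel_driven_projection(image, theta, n_bins, subdiv=1):
          """Parallel-beam forward projection at angle theta (radians).

          Each pixel is split into subdiv x subdiv sub-pixels; every sub-pixel deposits its
          share of the pixel value into the detector bin its centre projects onto.
          """
          ny, nx = image.shape
          proj = np.zeros(n_bins)
          cos_t, sin_t = np.cos(theta), np.sin(theta)
          offs = (np.arange(subdiv) + 0.5) / subdiv - 0.5   # sub-pixel centre offsets
          for iy in range(ny):
              for ix in range(nx):
                  val = image[iy, ix] / subdiv**2
                  if val == 0.0:
                      continue
                  for oy in offs:
                      for ox in offs:
                          x = ix - (nx - 1) / 2.0 + ox
                          y = iy - (ny - 1) / 2.0 + oy
                          t = x * cos_t + y * sin_t          # position along the detector
                          b = int(round(t + (n_bins - 1) / 2.0))
                          if 0 <= b < n_bins:
                              proj[b] += val
          return proj

      img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0      # a simple square phantom
      coarse = pixel_driven_projection(img, np.pi / 6, 96, subdiv=1)
      fine   = pixel_driven_projection(img, np.pi / 6, 96, subdiv=4)  # smoother, more accurate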

  6. High-accuracy measurement of ship velocities by DGPS; DGPS ni yoru sensoku keisoku no koseidoka ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1996-04-10

    The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement incurred by GPS positioning alone. Through two rounds of marine observations towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out both positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods were examined through comparison among them, and the accuracy of the measured ship velocities was considered. In the DGPS measurement, both the translocation method and the interference positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in shallow sea areas of less than 350 m depth. As a result of these marine observations, it was confirmed that accuracy equivalent to that of direct measurement by the bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.

  7. Accuracy of cell calculation methods used for analysis of high conversion light water reactor lattice

    International Nuclear Information System (INIS)

    Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi

    1990-01-01

    Validation tests were made of the accuracy of the cell calculation methods used in analyses of the tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for lattices taken from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (a hexagonal or cylindrical cell) and the boundary condition (mirror or white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous-energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation became worse when the neutron spectrum became harder. It was also concluded that the cylindrical cell model with the white boundary condition was not as suitable for MOX-fuelled lattices as for UO2-fuelled lattices. (author)

  8. Estimation of state and material properties during heat-curing molding of composite materials using data assimilation: A numerical study

    Directory of Open Access Journals (Sweden)

    Ryosuke Matsuzaki

    2018-03-01

    Full Text Available Accurate simulations of carbon fiber-reinforced plastic (CFRP molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult. Keywords: Engineering, Materials science, Applied mathematics
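
    A heavily simplified sketch of the surface-to-interior estimation idea: a 1D heat-conduction slab stands in for the mold, synthetic noisy surface-temperature readings are generated with a "true" diffusivity, and a crude least-squares scan over candidate diffusivities stands in for the paper's data assimilation. The model and all numbers are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)
      nx, dx, dt, n_steps = 20, 0.3e-3, 0.1, 2400      # 6 mm slab, 240 s of heating

      def surface_history(alpha):
          """Temperature at the sensed surface (right end) for a given thermal diffusivity."""
          T = np.full(nx, 20.0)
          hist = np.empty(n_steps)
          for i in range(n_steps):
              T[0] = 180.0                              # heater side held at 180 degC
              lap = np.zeros_like(T)
              lap[1:-1] = T[2:] - 2.0 * T[1:-1] + T[:-2]
              lap[-1] = T[-2] - T[-1]                   # insulated sensed surface (zero flux)
              T = T + alpha * dt / dx**2 * lap          # explicit update (stable for these values)
              hist[i] = T[-1]
          return hist

      alpha_true = 1.5e-7                               # m^2/s, the "unknown" material property
      obs = surface_history(alpha_true) + rng.normal(0.0, 0.3, n_steps)   # noisy sensor data

      # Crude stand-in for the assimilation step: pick the diffusivity whose simulated
      # surface history best matches the measurements.
      candidates = np.linspace(0.5e-7, 3.0e-7, 51)
      misfit = [np.sum((surface_history(a) - obs) ** 2) for a in candidates]
      print("true diffusivity     :", alpha_true)
      print("estimated diffusivity:", candidates[int(np.argmin(misfit))])   # at or near 1.5e-7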

  9. Numerical solution of the Navier--Stokes equations at high Reynolds numbers

    International Nuclear Information System (INIS)

    Shestakov, A.I.

    1974-01-01

    A numerical method is presented which is designed to solve the Navier-Stokes equations for two-dimensional, incompressible flow. The method is intended for use on problems with high Reynolds numbers for which calculations via finite difference methods have been unattainable or unreliable. The proposed scheme is a hybrid utilizing a time-splitting finite difference method in areas away from the boundaries. In areas neighboring the boundaries, the equations of motion are solved by the newly proposed vortex method by Chorin. The major accomplishment of the new scheme is that it contains a simple way for merging the two methods at the interface of the two subdomains. The proposed algorithm is designed for use on the time-dependent equations but can be used on steady state problems as well. The method is tested on the popular, time-independent, square cavity problem, an example of a separated flow with closed streamlines. Numerical results are presented for a Reynolds number of 10³. (auth)

  10. Direct numerical simulation of MHD heat transfer in high Reynolds number turbulent channel flows for Prandtl number of 25

    International Nuclear Information System (INIS)

    Yamamoto, Yoshinobu; Kunugi, Tomoaki

    2015-01-01

    Graphical abstract: - Highlights: • For the first time, the MHD heat transfer DNS database corresponding to the typical nondimensional parameters of the fusion blanket design using molten salt was established. • An MHD heat transfer correlation was proposed and about 20% heat transfer degradation was evaluated under the design conditions. • The contribution of the turbulent diffusion to heat transfer increases drastically with increasing Hartmann number. - Abstract: The high-Prandtl-number passive scalar transport in a turbulent channel flow with an imposed wall-normal magnetic field is investigated through large-scale direct numerical simulation (DNS). All essential turbulence scales of velocities and temperature are resolved by using 2048 × 870 × 1024 computational grid points in the streamwise, vertical, and spanwise directions. The heat transfer phenomena for a Prandtl number of 25 were observed under the following flow conditions: a bulk Reynolds number of 14,000 and Hartmann numbers of up to 28. These values are equivalent to the typical nondimensional parameters of the fusion blanket design proposed by Wong et al. As a result, a high-accuracy DNS database for the verification of magnetohydrodynamic turbulent heat transfer models was established for the first time, and it was confirmed that the heat transfer correlation for a Prandtl number of 5.25 proposed by Yamamoto and Kunugi was applicable to the Prandtl number of 25 used in this study.

  11. Direct numerical simulation of MHD heat transfer in high Reynolds number turbulent channel flows for Prandtl number of 25

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Yoshinobu, E-mail: yamamotoy@yamanashi.ac.jp [Department of Mechanical Systems Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu 400-8511 (Japan); Kunugi, Tomoaki [Department of Nuclear Engineering, Kyoto University Yoshida, Sakyo, Kyoto 606-8501 (Japan)

    2015-01-15

    Graphical abstract: - Highlights: • For the first time, the MHD heat transfer DNS database corresponding to the typical nondimensional parameters of the fusion blanket design using molten salt was established. • An MHD heat transfer correlation was proposed and about 20% heat transfer degradation was evaluated under the design conditions. • The contribution of the turbulent diffusion to heat transfer increases drastically with increasing Hartmann number. - Abstract: The high-Prandtl-number passive scalar transport in a turbulent channel flow with an imposed wall-normal magnetic field is investigated through large-scale direct numerical simulation (DNS). All essential turbulence scales of velocities and temperature are resolved by using 2048 × 870 × 1024 computational grid points in the streamwise, vertical, and spanwise directions. The heat transfer phenomena for a Prandtl number of 25 were observed under the following flow conditions: a bulk Reynolds number of 14,000 and Hartmann numbers of up to 28. These values are equivalent to the typical nondimensional parameters of the fusion blanket design proposed by Wong et al. As a result, a high-accuracy DNS database for the verification of magnetohydrodynamic turbulent heat transfer models was established for the first time, and it was confirmed that the heat transfer correlation for a Prandtl number of 5.25 proposed by Yamamoto and Kunugi was applicable to the Prandtl number of 25 used in this study.

  12. Cost-effective improvements of a rotating platform by integration of a high-accuracy inclinometer and encoders for attitude evaluation

    International Nuclear Information System (INIS)

    Wen, Chenyang; He, Shengyang; Hu, Peida; Bu, Changgen

    2017-01-01

    Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost, two-axis, rotating platform in terms of attitude accuracy. (paper)

  13. Accuracy assessment of high-rate GPS measurements for seismology

    Science.gov (United States)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  14. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    Directory of Open Access Journals (Sweden)

    Francisco J Valverde-Albacete

    Full Text Available The most widespread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing the classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient the transmission of information from the input to the output set of classes is. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of the classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind-reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based on accuracy, choosing more meaningful and interpretable classifiers.
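
    A small sketch of the paradox described above (not the paper's exact EMA or NIT formulas): a majority-class predictor can reach higher accuracy than an informative classifier while transferring zero information about the true class, which the mutual information of the confusion matrix exposes.

      import numpy as np

      def accuracy(cm):
          return np.trace(cm) / cm.sum()

      def mutual_information(cm):
          """Mutual information (bits) between true and predicted labels of a confusion matrix."""
          p = cm / cm.sum()
          px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
          nz = p > 0
          return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

      # 90% of samples belong to class 0.  Always predicting class 0 gives 90% accuracy
      # but transfers no information; a weaker-looking classifier (85%) transfers some.
      always_majority = np.array([[90, 0],
                                  [10, 0]])
      informative     = np.array([[80, 10],
                                  [ 5,  5]])

      for name, cm in [("always-majority", always_majority), ("informative", informative)]:
          print(name, "accuracy =", accuracy(cm),
                "mutual information =", round(mutual_information(cm), 3))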

  15. Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields

    Science.gov (United States)

    Santillan, Alfredo; Hernandez--Cervantes, Liliana; Gonzalez--Ponce, Alejandro; Kim, Jongsoo

    The numerical simulations associated with the interaction of High Velocity Clouds (HVC) with the magnetized galactic interstellar medium (ISM) are a powerful tool for describing the evolution of the interaction of these objects in our Galaxy. In this work we present a new project referred to as Theoretical Virtual i Observatories. It is oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool that consists of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this website the user can make use of the existing numerical simulations from the database or run a new simulation by introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and the HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open-source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.

  16. Development of high velocity gas gun with a new trigger system-numerical analysis

    Science.gov (United States)

    Husin, Z.; Homma, H.

    2018-02-01

    In the development of high-performance armor vests, well-controlled experiments using bullet speeds of more than 900 m/s need to be carried out. After reviewing the trigger systems used for high velocity gas guns, this research aims to develop a new trigger system that can realize precise and reproducible impact tests at impact velocities of more than 900 m/s. The new trigger system developed here is called a projectile trap. The projectile trap is placed between a reservoir and a barrel and has the two functions of a sealing disk and of triggering. Polyamide-imide was selected as the trap material, and the dimensions of the projectile trap were determined by numerical analysis for several levels of launching pressure to change the projectile velocity. The numerical analysis results show that the projectile trap designed here can operate reasonably and that the stresses caused during the launching operation are less than the material strength. This means the projectile trap can be reused for subsequent shots.

  17. Preliminary Study of 1D Thermal-Hydraulic System Analysis Code Using the Higher-Order Numerical Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Woong; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of)

    2016-05-15

    The existing nuclear system analysis codes such as RELAP5, TRAC, MARS and SPACE use a first-order numerical scheme in both space and time discretization. However, the first-order scheme is highly diffusive and less accurate due to its first-order truncation error. Hence, a numerical diffusion problem, which smooths gradients in regions where they should be steep, can occur during an analysis, which often yields less conservative predictions than reality. Therefore, the first-order scheme is not always adequate in many applications such as boron solute transport. RELAP7, an advanced nuclear reactor system safety analysis code using second-order numerical schemes in the temporal and spatial discretizations, has been under development by INL (Idaho National Laboratory) since 2011. Therefore, for better predictive performance in nuclear reactor system safety, a more accurate nuclear reactor system analysis code is also needed in Korea to follow the global trend of nuclear safety analysis. Thus, this study evaluates the feasibility of applying a higher-order numerical scheme to a next-generation nuclear system analysis code to provide a basis for developing a better nuclear system analysis code. With the spatial second-order scheme, the accuracy is enhanced and the numerical diffusion problem is alleviated, but it exhibits a significantly lower maximum Courant limit and a numerical dispersion issue that produces spurious oscillations and non-physical results. If the spatial scheme is first order, the temporal second-order scheme provides almost the same result as the temporal first-order scheme. However, when the temporal second-order scheme and the spatial second-order scheme are applied together, the numerical dispersion can occur more severely. For a more in-depth study, the verification and validation of the NTS code built in MATLAB will be conducted further and expanded to handle two
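
    A minimal 1D linear-advection sketch of the two failure modes contrasted above: a first-order upwind scheme smears a sharp front (numerical diffusion), while a second-order scheme, here Lax-Wendroff as a generic example rather than the schemes of the codes named above, produces spurious oscillations (numerical dispersion).

      import numpy as np

      # Advect a sharp concentration front with speed a on a periodic 1D grid.
      nx, a, cfl, n_steps = 200, 1.0, 0.5, 160
      dx = 1.0 / nx
      dt = cfl * dx / a

      x = np.arange(nx) * dx
      u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)

      def upwind(u):                     # first order: diffusive but monotone
          return u - cfl * (u - np.roll(u, 1))

      def lax_wendroff(u):               # second order: sharper but dispersive
          return (u - 0.5 * cfl * (np.roll(u, -1) - np.roll(u, 1))
                    + 0.5 * cfl**2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

      u1, u2 = u0.copy(), u0.copy()
      for _ in range(n_steps):
          u1, u2 = upwind(u1), lax_wendroff(u2)

      exact = np.roll(u0, int(round(a * n_steps * dt / dx)))
      print("upwind       max:", round(float(u1.max()), 3), "(no overshoot, smeared front)")
      print("Lax-Wendroff max:", round(float(u2.max()), 3), "(overshoot above 1: spurious oscillations)")
      print("L1 errors (upwind, LW):", round(float(np.abs(u1 - exact).sum()), 2),
            round(float(np.abs(u2 - exact).sum()), 2))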

  18. The effect of numerical techniques on differential equation based chaotic generators

    KAUST Repository

    Zidan, Mohammed A.; Radwan, Ahmed G.; Salama, Khaled N.

    2012-01-01

    In this paper, we study the effect of the numerical solution accuracy on the digital implementation of differential chaos generators. Four systems are built on a Xilinx Virtex 4 FPGA using Euler, mid-point, and Runge-Kutta fourth order techniques
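
    A small sketch of why the choice of fixed-step integrator matters for a digital chaos generator: the Lorenz system is used here as a generic stand-in for the paper's chaotic systems, and Euler, midpoint and fourth-order Runge-Kutta trajectories started from the same state drift apart as their different truncation errors are amplified by the chaotic dynamics.

      import numpy as np

      def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def euler_step(s, h):
          return s + h * lorenz(s)

      def midpoint_step(s, h):
          return s + h * lorenz(s + 0.5 * h * lorenz(s))

      def rk4_step(s, h):
          k1 = lorenz(s)
          k2 = lorenz(s + 0.5 * h * k1)
          k3 = lorenz(s + 0.5 * h * k2)
          k4 = lorenz(s + h * k3)
          return s + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      h, n = 0.01, 1000
      steppers = {"euler": euler_step, "midpoint": midpoint_step, "rk4": rk4_step}
      states = {name: np.array([1.0, 1.0, 1.0]) for name in steppers}

      for i in range(1, n + 1):
          for name in states:
              states[name] = steppers[name](states[name], h)
          if i % 250 == 0:   # the x-coordinates separate as integration errors are amplified
              print(i, {name: round(float(s[0]), 3) for name, s in states.items()})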

  19. Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems

    Science.gov (United States)

    Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.

    2015-12-01

    Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm with high accuracy and precision, from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent, high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.

  20. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    Science.gov (United States)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and is in principle able to penetrate the gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remain critical parameters because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. The comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  1. [Numerical simulation of the effect of virtual stent release pose on the expansion results].

    Science.gov (United States)

    Li, Jing; Peng, Kun; Cui, Xinyang; Fu, Wenyu; Qiao, Aike

    2018-04-01

    Current finite element analyses of vascular stent expansion do not take into account the effect of the stent release pose on the expansion results. In this study, the stent and vessel models were established with Pro/E. Five finite element assembly models were constructed in ABAQUS: a 0-degree model without eccentricity, a 3-degree model without eccentricity, a 5-degree model without eccentricity, a 0-degree model with axial eccentricity, and a 0-degree model with radial eccentricity. These models were divided into two groups of experiments for numerical simulation with respect to angle and eccentricity. Mechanical parameters such as the foreshortening rate, radial recoil rate and dog-boning rate were calculated. The influence of angle and eccentricity on the numerical simulation was obtained by comparative analysis. The calculation results showed that the residual stenosis rates were 38.3%, 38.4%, 38.4%, 35.7% and 38.2% for the five models, respectively. The results indicate that the release pose has little effect on the numerical simulation results, so it can be neglected when high accuracy of the result is not required, and the basic model, the 0-degree model without eccentricity, is feasible for numerical simulation.

  2. Accuracy of applicator tip reconstruction in MRI-guided interstitial 192Ir-high-dose-rate brachytherapy of liver tumors

    International Nuclear Information System (INIS)

    Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens

    2015-01-01

    Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192Ir high-dose-rate (HDR) BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192Ir-HDR-BT treatment planning datasets of 45 patients employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ⩾72% of cases. 16/129 applicators showed deviation between the MRI/CT acquisitions that was not time dependent (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of reconstruction errors. In the follow-up data the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo.

  3. STTR Phase I: Low-Cost, High-Accuracy, Whole-Building Carbon Dioxide Monitoring for Demand Control Ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Hallstrom, Jason; Ni, Zheng Richard

    2018-05-15

    This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or “gateway”) to direct digital control services was also explored. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft – meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.

  4. Multi-scale modelling and numerical simulation of electronic kinetic transport

    International Nuclear Information System (INIS)

    Duclous, R.

    2009-11-01

    This research thesis, at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, with a view to the processes that assemble the fuel to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and the methods to implement, couple and validate them, the author focuses on the collective aspects related to the free-streaming electron transport equation in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. He then investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Dealing with the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast Monte Carlo multi-grid solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock ignition scheme, and the development and validation of a reduced electron transport angular model. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms.

  5. Efficient numerical simulation of heat storage in subsurface georeservoirs

    Science.gov (United States)

    Boockmeyer, A.; Bauer, S.

    2015-12-01

    The transition of the German energy market towards renewable energy sources, e.g. wind or solar power, requires energy storage technologies to compensate for their fluctuating production. Large amounts of energy could be stored in georeservoirs such as porous formations in the subsurface. One possibility here is to store heat with high temperatures of up to 90°C through borehole heat exchangers (BHEs), since more than 80 % of the total energy consumption in German households is used for heating and hot water supply. Within the ANGUS+ project, potential environmental impacts of such heat storages are assessed and quantified. Numerical simulations are performed to predict storage capacities, storage cycle times, and induced effects. For simulation of these highly dynamic storage sites, detailed high-resolution models are required. We set up a model that accounts for all components of the BHE and verified it using experimental data. The model ensures accurate simulation results but also leads to large numerical meshes and thus high simulation times. In this work, we therefore present a numerical model for each type of BHE (single U, double U and coaxial) that reduces the number of elements and the simulation time significantly for use in larger scale simulations. The numerical model includes all BHE components and represents the temporal and spatial temperature distribution with an accuracy of less than 2% deviation from the fully discretized model. By changing the BHE geometry and using equivalent parameters, the simulation time is reduced by a factor of ~10 for single U-tube BHEs, ~20 for double U-tube BHEs and ~150 for coaxial BHEs. Results of a sensitivity study that quantifies the effects of different design and storage formation parameters on temperature distribution and storage efficiency for heat storage using multiple BHEs are then shown. It is found that storage efficiency strongly depends on the number of BHEs composing the storage site, their distance and

  6. Experimental Preparation and Numerical Simulation of High Thermal Conductive Cu/CNTs Nanocomposites

    Directory of Open Access Journals (Sweden)

    Muhsan Ali Samer

    2014-07-01

    Full Text Available Due to the rapid growth of high performance electronic devices accompanied by overheating problems, a heat-dissipating nanocomposite material having ultra-high thermal conductivity and a low coefficient of thermal expansion was proposed. In this work, a nanocomposite material made of copper (Cu) reinforced by multi-walled carbon nanotubes (CNTs) up to 10 vol. % was prepared, and its thermal behaviour was measured experimentally and evaluated using numerical simulation. In order to numerically predict the thermal behaviour of the Cu/CNTs composites, three different prediction methods were used. The results showed that the rule-of-mixtures method gives the highest thermal conductivity for all predicted composites. In contrast, the prediction model which takes into account the interface thermal resistance between the CNTs and the copper particles gives the lowest thermal conductivity, which is considered the closest to the experimental measurements. The experimentally measured thermal conductivities showed a remarkable increase after adding 5 vol.% CNTs and were higher than the thermal conductivities predicted via the Nan models, indicating that the improved powder injection molding technique used to produce the Cu/CNTs nanocomposites has overcome the challenges assumed in the mathematical models.
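
    To make the rule-of-mixtures versus interface-aware comparison concrete, the sketch below evaluates the two classical mixing bounds; it is a simplified illustration with assumed conductivities, not the Nan-type model actually used in the paper:

    ```python
    def rule_of_mixtures(k_m, k_f, vf):
        """Parallel (upper-bound) estimate of composite thermal conductivity."""
        return (1.0 - vf) * k_m + vf * k_f

    def inverse_rule_of_mixtures(k_m, k_f, vf):
        """Series (lower-bound) estimate."""
        return 1.0 / ((1.0 - vf) / k_m + vf / k_f)

    # Illustrative values only: copper ~400 W/m/K, an effective CNT
    # conductivity of ~3000 W/m/K, 5 vol.% CNT loading.
    k_cu, k_cnt, vf = 400.0, 3000.0, 0.05
    print("upper bound (rule of mixtures):", rule_of_mixtures(k_cu, k_cnt, vf))
    print("lower bound (series model)    :", inverse_rule_of_mixtures(k_cu, k_cnt, vf))
    # Models that additionally account for the CNT/Cu interface thermal
    # resistance (e.g. Nan-type formulations) predict values below the upper
    # bound and, as the abstract notes, closer to the measurements.
    ```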

  7. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    Science.gov (United States)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or from poor accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that strikes an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have lower resource usage and also higher matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on the improved algorithms was implemented using a Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  8. Interobserver variability and accuracy of high-definition endoscopic diagnosis for gastric intestinal metaplasia among experienced and inexperienced endoscopists.

    Science.gov (United States)

    Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-05-01

    Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing gastric IM. The aims of the study were to evaluate the interobserver variation in diagnosing IM with high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for diagnosis of gastric IM by visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38) and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy for gastric IM.
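
    For readers unfamiliar with the κ values quoted above, the sketch below computes Cohen's kappa for two raters on made-up binary judgements; the study pooled five raters per group, for which a multi-rater statistic such as Fleiss' kappa would be used instead:

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters giving binary (0/1) judgements."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        po = np.mean(r1 == r2)                         # observed agreement
        p_yes = np.mean(r1) * np.mean(r2)              # chance agreement on "yes"
        p_no = (1 - np.mean(r1)) * (1 - np.mean(r2))   # chance agreement on "no"
        pe = p_yes + p_no
        return (po - pe) / (1.0 - pe)

    # Toy ratings for 10 cases (1 = intestinal metaplasia suspected):
    rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    rater_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
    print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")   # ~0.40 for this toy data
    ```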

  9. On the numerical stability analysis of pipelined Krylov subspace methods

    Czech Academy of Sciences Publication Activity Database

    Carson, E.T.; Rozložník, Miroslav; Strakoš, Z.; Tichý, P.; Tůma, M.

    submitted 2017 (2018) R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : Krylov subspace methods * the conjugate gradient method * numerical stability * inexact computations * delay of convergence * maximal attainable accuracy * pipelined Krylov subspace methods * exascale computations

  10. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
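
    As a concrete illustration of the class of schemes discussed, the sketch below implements the widely used three-stage strong-stability-preserving Runge-Kutta method in Shu-Osher form and checks its third-order convergence on a toy ODE; the report derives its own family of third-order methods, which need not coincide with this particular one:

    ```python
    import numpy as np

    def ssp_rk3_step(f, u, t, dt):
        """One step of the three-stage SSP Runge-Kutta method (Shu-Osher form)."""
        u1 = u + dt * f(t, u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(t + 0.5 * dt, u2))

    # Convergence check on u' = -u, u(0) = 1, exact solution exp(-t).
    for n in (20, 40, 80):
        dt, u, t = 1.0 / n, 1.0, 0.0
        for _ in range(n):
            u = ssp_rk3_step(lambda t, y: -y, u, t, dt)
            t += dt
        print(f"n = {n:3d}  error at t=1: {abs(u - np.exp(-1.0)):.2e}")
    # Halving dt should reduce the error by roughly a factor of eight (3rd order).
    ```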

  11. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    Science.gov (United States)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-09-01

    The current research on processing large fabrication holes on complex spatial curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet the continuous cutting requirements with variable pass conditions because of their fixed configuration, and cannot realize high-precision processing as the accuracy theory has not been studied adequately. This paper deals with the structure design and accuracy prediction technology of novel machine tools for solving the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are derived, and a metamorphic CNC flame cutting machine designed through the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, and error models considering their combined effects, are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted for an eccentric hole with bevels on a cylindrical surface. The results of the cutting pass contour and the kinematic error interval, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective although the interval is within a 'large' range. The results also show the matching between the metamorphic principle and variable working tasks, and the mapping between the original design parameters and the kinematic errors of the machines. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for accuracy analysis of machine tools.

  12. Experimental and Numerical Investigation of Thermoacoustic Sources Related to High-Frequency Instabilities

    Directory of Open Access Journals (Sweden)

    Mathieu Zellhuber

    2014-03-01

    Full Text Available Flame dynamics related to high-frequency instabilities in gas turbine combustors are investigated using experimental observations and numerical simulations. Two different combustor types are studied, a premix swirl combustor (experiment) and a generic reheat combustor (simulation). In both cases, a very similar dynamic behaviour of the reaction zone is observed, with the appearance of transverse displacement and coherent flame wrinkling. From these observations, a model for the thermoacoustic feedback linked to transverse modes is proposed. The model splits heat release rate fluctuations into distinct contributions that are related to flame displacement and variations of the mass burning rate. The decomposition procedure is applied to the numerical data and successfully verified by comparing a reconstructed Rayleigh index with the directly computed value. It thus allows the relative importance of various feedback mechanisms to be quantified for a given setup.

  13. Numerical simulation and experimental research of the integrated high-power LED radiator

    Science.gov (United States)

    Xiang, J. H.; Zhang, C. L.; Gan, Z. J.; Zhou, C.; Chen, C. G.; Chen, S.

    2017-01-01

    Thermal management has become an urgent problem to be solved with the increasing power and integration level of LED (light emitting diode) chips. In order to eliminate the contact resistance of the radiator, this paper presented an integrated high-power LED radiator based on phase-change heat transfer, which realized a seamless connection between the vapor chamber and the cooling fins. The radiator was optimized by combining numerical simulation and experimental research. The effects of the chamber diameter and the fin parameters on the heat dissipation performance were analyzed. The numerical simulation results were compared with values measured in experiments. The results showed that the fin thickness, fin number, fin height and chamber diameter were the factors affecting radiator performance, in decreasing order of importance.

  14. Enhancement accuracy of approximated solutions of the nonlinear singular integral equations of Chew-Low type

    International Nuclear Information System (INIS)

    Zhidkov, E.P.; Nguen Mong; Khoromskij, B.N.

    1979-01-01

    Ways of enhancing the accuracy of approximate solutions of the Chew-Low type equation are considered. Difference schemes are proposed which allow one to obtain an expansion of the solution in powers of the lattice step. On the basis of this expansion, the approximate solutions are refined by the Richardson method. In addition, an iteration process is constructed which leads directly to a solution of enhanced accuracy. The efficiency of the proposed methods is illustrated by numerical examples
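
    The Richardson step mentioned above is generic: given two approximations computed with steps h and h/2 and a known leading error order p, one eliminates the leading term. The sketch below illustrates the idea on a simple finite-difference derivative, not on the Chew-Low equations themselves:

    ```python
    import math

    def richardson(A_h, A_h2, p):
        """Richardson extrapolation for approximations with leading error O(h^p)."""
        return (2 ** p * A_h2 - A_h) / (2 ** p - 1)

    # Forward-difference derivative of sin at x = 1 (leading error is O(h), so p = 1).
    def fd(h, x=1.0):
        return (math.sin(x + h) - math.sin(x)) / h

    h = 0.1
    extrapolated = richardson(fd(h), fd(h / 2), p=1)
    print("error, plain step h:", abs(fd(h) - math.cos(1.0)))
    print("error, extrapolated:", abs(extrapolated - math.cos(1.0)))
    # The extrapolated value is second-order accurate, noticeably better
    # than either raw finite-difference value at this step size.
    ```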

  15. The numerical dynamic for highly nonlinear partial differential equations

    Science.gov (United States)

    Lafon, A.; Yee, H. C.

    1992-01-01

    Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.

  16. Development of Large Scale Bed Forms in the Sea –2DH Numerical Modeling

    DEFF Research Database (Denmark)

    Margalit, Jonatan; Fuhrman, David R.

    Large repetitive patterns on the sea bed are commonly observed in sandy areas. The formation of these bed forms has been studied extensively in the literature using linear stability analyses, commonly conducted analytically and with simplifications in the governing equations. This work presents a shallow water equation model that is used to numerically simulate the morphodynamics of the water-bed system. The model includes separate formulations for bed load and suspended load, featuring bed load correction due to a sloping bed and modelled helical flow effects. Horizontal gradients are computed with spectral accuracy, which proves highly efficient for the analysis. Numerical linear stability analysis is used to identify the likely emergence of dominant finite-sized bed forms as a function of the governing parameters. These are then used for interpretation of the results of a long time morphological...

  17. Modified sine bar device measures small angles with high accuracy

    Science.gov (United States)

    Thekaekara, M.

    1968-01-01

    Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.

  18. Matter power spectrum and the challenge of percent accuracy

    OpenAIRE

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2015-01-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day $N$-body methods, identifying main potential error sources from the set-up of initial conditions to...

  19. Systematic review of discharge coding accuracy

    Science.gov (United States)

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  20. Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model

    International Nuclear Information System (INIS)

    Pope, Michael A.; Mousseau, Vincent A.

    2009-01-01

    The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately as the work load of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle this larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to 1st order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than a stability-based time step restriction like the material Courant. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient.

  1. Accuracy and Efficiency of a Coupled Neutronics and Thermal Hydraulics Model

    International Nuclear Information System (INIS)

    Vincent A. Mousseau; Michael A. Pope

    2007-01-01

    The accuracy requirements for modern nuclear reactor simulation are steadily increasing due to the cost and regulation of relevant experimental facilities. Because of the increase in the cost of experiments and the decrease in the cost of simulation, simulation will play a much larger role in the design and licensing of new nuclear reactors. Fortunately as the work load of simulation increases, there are better physics models, new numerical techniques, and more powerful computer hardware that will enable modern simulation codes to handle the larger workload. This manuscript will discuss a numerical method where the six equations of two-phase flow, the solid conduction equations, and the two equations that describe neutron diffusion and precursor concentration are solved together in a tightly coupled, nonlinear fashion for a simplified model of a nuclear reactor core. This approach has two important advantages. The first advantage is a higher level of accuracy. Because the equations are solved together in a single nonlinear system, the solution is more accurate than the traditional 'operator split' approach where the two-phase flow equations are solved first, the heat conduction is solved second and the neutron diffusion is solved third, limiting the temporal accuracy to 1st order because the nonlinear coupling between the physics is handled explicitly. The second advantage of the method described in this manuscript is that the time step control in the fully implicit system can be based on the timescale of the solution rather than a stability-based time step restriction like the material Courant. Results are presented from a simulated control rod movement and a rod ejection that address temporal accuracy for the fully coupled solution and demonstrate how the fastest timescale of the problem can change between the state variables of neutronics, conduction and two-phase flow during the course of a transient
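
    The "tightly coupled, nonlinear" solve described above can be illustrated on a toy scale: instead of advancing each physics separately, one backward-Euler step is taken by applying Newton's method to the full coupled residual. The sketch below uses a made-up two-equation system and coefficients, purely to show the structure of such a solve, not the six-equation flow/conduction/neutronics system of the manuscript:

    ```python
    import numpy as np

    def coupled_backward_euler_step(u, dt, tol=1e-12, max_iter=20):
        """One backward-Euler step for a toy coupled system
             du1/dt = -k1*u1 + c*u2,   du2/dt = -k2*u2 + c*u1**2,
        solved with Newton's method on the full residual (no operator split)."""
        k1, k2, c = 10.0, 1.0, 0.5      # made-up coefficients
        def residual(v):
            return np.array([
                v[0] - u[0] - dt * (-k1 * v[0] + c * v[1]),
                v[1] - u[1] - dt * (-k2 * v[1] + c * v[0] ** 2),
            ])
        def jacobian(v):
            return np.array([
                [1.0 + dt * k1,         -dt * c],
                [-dt * c * 2.0 * v[0],  1.0 + dt * k2],
            ])
        v = u.copy()
        for _ in range(max_iter):
            r = residual(v)
            if np.linalg.norm(r) < tol:
                break
            v = v - np.linalg.solve(jacobian(v), r)
        return v

    u = np.array([1.0, 0.5])
    for _ in range(5):
        u = coupled_backward_euler_step(u, dt=0.1)
    print("state after 5 fully implicit steps:", u)
    ```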

  2. Accuracy Assessment and Analysis for GPT2

    Directory of Open Access Journals (Sweden)

    YAO Yibin

    2015-07-01

    Full Text Available GPT (global pressure and temperature) is a global empirical model usually used to provide temperature and pressure for the determination of tropospheric delay. There are some weaknesses in GPT, and these have been addressed with a new empirical model named GPT2, which not only improves the accuracy of temperature and pressure, but also provides specific humidity, water vapor pressure, mapping function coefficients and other tropospheric parameters; no accuracy analysis of GPT2 had been made until now. In this paper, high-precision meteorological data from ECMWF and NOAA were used to test and analyze the accuracy of the temperature, pressure and water vapor pressure given by GPT2. The testing results show that the mean bias of temperature is -0.59℃ and the average RMS is 3.82℃; the absolute values of the average bias of pressure and water vapor pressure are less than 1 mb; GPT2 pressure has an average RMS of 7 mb, and water vapor pressure no more than 3 mb. The accuracy differs with latitude, and all parameters show obvious seasonality. In conclusion, the GPT2 model has high accuracy and stability on a global scale.
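
    The bias and RMS statistics quoted above are straightforward to reproduce once model and reference values are paired; a minimal sketch, with made-up numbers standing in for GPT2 and reanalysis temperatures, is:

    ```python
    import numpy as np

    def bias_and_rms(model, reference):
        """Bias and RMS of model values against reference (e.g. reanalysis) data."""
        diff = np.asarray(model, float) - np.asarray(reference, float)
        return diff.mean(), np.sqrt(np.mean(diff ** 2))

    # Made-up temperatures (deg C), purely illustrative:
    gpt2_t = [14.2, 15.1, 13.8, 16.0, 14.9]
    ref_t  = [14.8, 15.6, 14.5, 16.4, 15.3]
    b, rms = bias_and_rms(gpt2_t, ref_t)
    print(f"bias = {b:+.2f} degC, RMS = {rms:.2f} degC")
    ```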

  3. Spatio-Temporal Analysis of the Accuracy of Tropical Multisatellite Precipitation Analysis 3B42 Precipitation Data in Mid-High Latitudes of China

    Science.gov (United States)

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

    Satellite-based precipitation data have contributed greatly to quantitatively forecasting precipitation, and provides a potential alternative source for precipitation data allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for The Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high latitude areas beyond the TRMM satellites latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998–2012. Comparative analysis at three timescales (daily, monthly and annual scale) indicates that adoption of a monthly adjustment significantly improved correlation at a larger timescale increasing from 0.63 to 0.95; TMPA data always exhibits a slight overestimation that is most serious at a daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across all seasons. Generally, TMPA data performs best in summer, but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggest that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter with poor scores of accuracy indices. Also, it is clear that the calibration can significantly improve precipitation estimates, the overestimation by TMPA in TRMM-covered area is about a third as much as that in no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are

  4. Spatio-temporal analysis of the accuracy of tropical multisatellite precipitation analysis 3B42 precipitation data in mid-high latitudes of China.

    Directory of Open Access Journals (Sweden)

    Yancong Cai

    Full Text Available Satellite-based precipitation data have contributed greatly to quantitatively forecasting precipitation, and provides a potential alternative source for precipitation data allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for The Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high latitude areas beyond the TRMM satellites latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly and annual scale) indicates that adoption of a monthly adjustment significantly improved correlation at a larger timescale increasing from 0.63 to 0.95; TMPA data always exhibits a slight overestimation that is most serious at a daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across all seasons. Generally, TMPA data performs best in summer, but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggest that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter with poor scores of accuracy indices. Also, it is clear that the calibration can significantly improve precipitation estimates, the overestimation by TMPA in TRMM-covered area is about a third as much as that in no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these

  5. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances despite moderate wear and tear. The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low level control architectures

  6. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The results of the high-order schemes match well with the Buckley-Leverett (BL) analytical solution without any spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative study of numerical examples of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
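
    As a one-dimensional illustration of the SUPERBEE-limited high-resolution idea (not the 2D two-phase reservoir simulator of the study), the sketch below advects a square pulse with a classical flux-limited scheme; the limiter keeps the front sharp without the over- and undershoots of an unlimited second-order scheme:

    ```python
    import numpy as np

    def superbee(r):
        """SUPERBEE flux limiter."""
        return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                          np.minimum(r, 2.0)))

    def advect_step(u, nu):
        """One step of a flux-limited scheme for u_t + a u_x = 0 with a > 0,
        periodic boundaries; nu = a*dt/dx is the Courant number."""
        du = np.roll(u, -1) - u                            # u[i+1] - u[i]
        safe = np.where(np.abs(du) > 1e-12, du, 1e-12)
        r = (u - np.roll(u, 1)) / safe                     # smoothness ratio
        flux = u + 0.5 * (1.0 - nu) * superbee(r) * du     # flux/a at face i+1/2
        return u - nu * (flux - np.roll(flux, 1))

    n, nu = 200, 0.5
    u = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)
    for _ in range(int(n / nu)):       # advect once around the periodic domain
        u = advect_step(u, nu)
    print("min/max after one period:", u.min(), u.max())   # bounds preserved, no over/undershoot
    ```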

  7. Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.

    Science.gov (United States)

    Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua

    2011-05-15

    High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time-consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.

  8. High Accuracy mass Measurement of the very Short-Lived Halo Nuclide $^{11}$Li

    CERN Multimedia

    Le scornet, G

    2002-01-01

    The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with ab initio calculations, require the tightening of the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suiting the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.

  9. A student's guide to numerical methods

    CERN Document Server

    Hutchinson, Ian H

    2015-01-01

    This concise, plain-language guide for senior undergraduates and graduate students aims to develop intuition, practical skills and an understanding of the framework of numerical methods for the physical sciences and engineering. It provides accessible self-contained explanations of mathematical principles, avoiding intimidating formal proofs. Worked examples and targeted exercises enable the student to master the realities of using numerical techniques for common needs such as solution of ordinary and partial differential equations, fitting experimental data, and simulation using particle and Monte Carlo methods. Topics are carefully selected and structured to build understanding, and illustrate key principles such as: accuracy, stability, order of convergence, iterative refinement, and computational effort estimation. Enrichment sections and in-depth footnotes form a springboard to more advanced material and provide additional background. Whether used for self-study, or as the basis of an accelerated introdu...

  10. The streamline upwind Petrov-Galerkin stabilising method for the numerical solution of highly advective problems

    Directory of Open Access Journals (Sweden)

    Carlos Humberto Galeano Urueña

    2009-05-01

    Full Text Available This article describes the streamline upwind Petrov-Galerkin (SUPG) method as a stabilisation technique for resolving the diffusion-advection-reaction equation by finite elements. The first part of the article gives a short analysis of the importance of this type of differential equation in modelling physical phenomena in multiple fields. A one-dimensional description of the SUPG method is then given and extended to two and three dimensions. The outcome of a strongly advective experiment of high numerical complexity is presented. The results show how the implemented version of the SUPG technique allowed stabilised approximations in space, even for high Peclet numbers. Additional graphs of the numerical experiments presented here can be downloaded from www.gnum.unal.edu.co.
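
    A minimal sketch of the one-dimensional SUPG idea (constant coefficients, linear elements, and the usual "optimal" stabilisation parameter; these are assumptions of this illustration, not code from the article) is:

    ```python
    import numpy as np

    def supg_1d(n_el=10, a=1.0, k=0.01):
        """Steady 1D advection-diffusion  a u' - k u'' = 0,  u(0)=0, u(1)=1,
        with linear finite elements and SUPG stabilisation."""
        h = 1.0 / n_el
        pe = a * h / (2.0 * k)                                   # element Peclet number
        tau = (h / (2.0 * a)) * (1.0 / np.tanh(pe) - 1.0 / pe)   # "optimal" tau
        # For linear elements the SUPG term acts like extra diffusion tau*a^2.
        k_eff = k + tau * a ** 2
        Ke = (k_eff / h) * np.array([[1.0, -1.0], [-1.0, 1.0]]) \
             + (a / 2.0) * np.array([[-1.0, 1.0], [-1.0, 1.0]])
        n = n_el + 1
        K, rhs = np.zeros((n, n)), np.zeros(n)
        for e in range(n_el):
            K[e:e + 2, e:e + 2] += Ke
        K[0, :], K[0, 0], rhs[0] = 0.0, 1.0, 0.0                 # u(0) = 0
        K[-1, :], K[-1, -1], rhs[-1] = 0.0, 1.0, 1.0             # u(1) = 1
        return np.linalg.solve(K, rhs)

    u = supg_1d()
    x = np.linspace(0.0, 1.0, len(u))
    exact = np.expm1(x / 0.01) / np.expm1(1.0 / 0.01)
    print("max nodal error:", np.max(np.abs(u - exact)))   # nodally (almost) exact
    ```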

  11. Numerical simulations of novel high-power high-brightness diode laser structures

    Science.gov (United States)

    Boucke, Konstantin; Rogg, Joseph; Kelemen, Marc T.; Poprawe, Reinhart; Weimann, Guenter

    2001-07-01

    One of the key topics in today's semiconductor laser development activities is increasing the brightness of high-power diode lasers. Although structures showing increased brightness have been developed, specific drawbacks of these structures leave a strong demand for the investigation of alternative concepts. Especially for the investigation of fundamentally novel structures, easy-to-use and fast simulation tools are essential to avoid unnecessary, cost- and time-consuming experiments. A diode laser simulation tool based on finite difference representations of the Helmholtz equation in 'wide-angle' approximation and the carrier diffusion equation has been developed. An optimized numerical algorithm leads to short execution times of a few seconds per resonator round-trip on a standard PC. After each round-trip, characteristics such as optical output power, beam profile and beam parameters are calculated. A graphical user interface allows online monitoring of the simulation results. The simulation tool is used to investigate a novel high-power, high-brightness diode laser structure, the so-called 'Z-Structure'. In this structure an increased brightness is achieved by reducing the divergence angle of the beam by angular filtering: the round-trip path of the beam is folded twice using internal total reflection at surfaces defined by a small index step in the semiconductor material, forming a stretched 'Z'. The sharp decrease of the reflectivity for angles of incidence above the angle of total reflection leads to a narrowing of the angular spectrum of the beam. The simulations of the 'Z-Structure' indicate an increase of the beam quality by a factor of five to ten compared to standard broad-area lasers.

  12. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  13. Towards high fidelity numerical wave tanks for modelling coastal and ocean engineering processes

    Science.gov (United States)

    Cozzuto, G.; Dimakopoulos, A.; de Lataillade, T.; Kees, C. E.

    2017-12-01

    With the increasing availability of computational resources, the engineering and research community is gradually moving towards using high fidelity Computational Fluid Dynamics (CFD) models to perform numerical tests for improving the understanding of physical processes pertaining to wave propagation and interaction with the coastal environment and morphology, either physical or man-made. It is therefore important to be able to reproduce in these models the conditions that drive these processes. So far, the norm in CFD models is to use regular (linear or nonlinear) waves for performing numerical tests; however, only random waves exist in nature. In this work, we will initially present the verification and validation of numerical wave tanks based on Proteus, an open-source computational toolkit based on finite element analysis, with respect to the generation, propagation and absorption of random sea states comprising long non-repeating wave sequences. Statistical and spectral processing of the results demonstrates that the methodologies employed (including relaxation zone methods and moving wave paddles) are capable of producing results of similar quality to the wave tanks used in laboratories (Figure 1). Subsequently, case studies of modelling complex processes relevant to coastal defences and floating structures, such as sliding and overturning of composite breakwaters and heave and roll response of floating caissons, are presented. Figure 1: Wave spectra in the numerical wave tank (coloured symbols), compared against the JONSWAP distribution
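
    The JONSWAP comparison mentioned above (and in the figure caption) uses the standard spectral form; a minimal sketch is given below, with the zeroth-moment normalisation convention being an assumption of this illustration rather than taken from the paper:

    ```python
    import numpy as np

    def jonswap(f, hs, tp, gamma=3.3):
        """JONSWAP variance density spectrum, rescaled so that the zeroth
        spectral moment matches m0 = Hs^2/16 (a common engineering convention)."""
        g = 9.81
        fp = 1.0 / tp
        sigma = np.where(f <= fp, 0.07, 0.09)
        peak = gamma ** np.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
        shape = g ** 2 * (2 * np.pi) ** -4 * f ** -5 * np.exp(-1.25 * (fp / f) ** 4) * peak
        df = f[1] - f[0]
        return shape * (hs ** 2 / 16.0) / (np.sum(shape) * df)

    f = np.linspace(0.02, 1.0, 2000)                    # Hz, avoid f = 0
    S = jonswap(f, hs=2.0, tp=8.0)
    hs_check = 4.0 * np.sqrt(np.sum(S) * (f[1] - f[0]))
    print(f"recovered Hs = {hs_check:.3f} m (target 2.0 m)")
    ```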

  14. Numerical study of Taylor bubbles with adaptive unstructured meshes

    Science.gov (United States)

    Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry

    2014-11-01

    The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  15. Numerical analysis of energy density and particle density in high energy heavy-ion collisions

    International Nuclear Information System (INIS)

    Fu Yuanyong; Lu Zhongdao

    2004-01-01

    The energy density and particle density in high energy heavy-ion collisions are calculated separately with an infinite series expansion method and with Gauss-Laguerre quadrature formulas for numerical integration, and the results of the two methods are compared; the higher-order terms and linear terms in the series expansion are also compared. The results show that the Gauss-Laguerre formulas are a good method for calculations of high energy heavy-ion collisions. (author)
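
    Gauss-Laguerre quadrature approximates integrals of the form ∫₀^∞ f(x) e^(-x) dx by a weighted sum over a few nodes; the generic sketch below (not the specific thermal integrals of the paper) shows the mechanics using NumPy's built-in nodes and weights:

    ```python
    import numpy as np
    from numpy.polynomial.laguerre import laggauss

    # Gauss-Laguerre quadrature: int_0^inf f(x) exp(-x) dx ~= sum_i w_i f(x_i).
    # Toy check: int_0^inf x^3 exp(-x) dx = 3! = 6.
    for n in (2, 4, 8):
        x, w = laggauss(n)
        print(f"n = {n}: {np.sum(w * x ** 3):.12f}")
    # An n-point rule is exact for polynomials up to degree 2n - 1,
    # so even n = 2 reproduces 6 up to rounding.
    ```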

  16. Numerical experiment on finite element method for matching data

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Kumakura, Toshimasa; Yoshimura, Koichi.

    1993-03-01

    Numerical experiments are presented on the finite element method of Pletzer and Dewar for the matching data of an ordinary differential equation with regular singular points, using a model equation. Matching data play an important role in non-ideal MHD stability analysis of a magnetically confined plasma. In the Pletzer-Dewar method, the Frobenius series for the 'big solution', the fundamental solution which is not square-integrable at the regular singular point, is prescribed. The experiments include studies of the convergence rate of the matching data obtained by the finite element method and of the effect on the results of truncating the Frobenius series at a finite number of terms. The present study shows that the finite element method is an effective method for obtaining the matching data with high accuracy. (author)

  17. Accuracy assessment of cadastral maps using high resolution aerial photos

    Directory of Open Access Journals (Sweden)

    Alwan Imzahim

    2018-01-01

    Full Text Available A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq / Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clarke 1880, while the reference system widely used for navigation purposes (GPS and GNSS) uses the World Geodetic System 1984 (WGS84) as its base reference datum. The objective of this paper is to produce a cadastral map at scale 1:500 (metric scale) by using aerial photographs from 2009 with a high ground spatial resolution of 10 cm, referenced to the WGS84 system. The accuracy assessment for the cadastral map updating approach for urban large scale cadastral maps (1:500-1:1000) was ± 0.115 meters, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.

  18. Analysis and Application of High Resolution Numerical Perturbation Algorithm for Convective-Diffusion Equation

    International Nuclear Information System (INIS)

    Gao Zhi; Shen Yi-Qing

    2012-01-01

    The high resolution numerical perturbation (NP) algorithm is analyzed and tested using various convective-diffusion equations. The NP algorithm is constructed by splitting the second order central difference schemes of both the convective and diffusion terms of the convective-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using a power series of the grid interval and by eliminating the truncation errors of the modified differential equation. The important property of upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation-free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second order central scheme

  19. A high-order discontinuous Galerkin method for wave propagation through coupled elastic-acoustic media

    International Nuclear Information System (INIS)

    Wilcox, Lucas C.; Stadler, Georg; Burstedde, Carsten; Ghattas, Omar

    2010-01-01

    We introduce a high-order discontinuous Galerkin (dG) scheme for the numerical solution of three-dimensional (3D) wave propagation problems in coupled elastic-acoustic media. A velocity-strain formulation is used, which allows for the solution of the acoustic and elastic wave equations within the same unified framework. Careful attention is directed at the derivation of a numerical flux that preserves high-order accuracy in the presence of material discontinuities, including elastic-acoustic interfaces. Explicit expressions for the 3D upwind numerical flux, derived as an exact solution for the relevant Riemann problem, are provided. The method supports h-non-conforming meshes, which are particularly effective at allowing local adaptation of the mesh size to resolve strong contrasts in the local wavelength, as well as dynamic adaptivity to track solution features. The use of high-order elements controls numerical dispersion, enabling propagation over many wave periods. We prove consistency and stability of the proposed dG scheme. To study the numerical accuracy and convergence of the proposed method, we compare against analytical solutions for wave propagation problems with interfaces, including Rayleigh, Lamb, Scholte, and Stoneley waves as well as plane waves impinging on an elastic-acoustic interface. Spectral rates of convergence are demonstrated for these problems, which include a non-conforming mesh case. Finally, we present scalability results for a parallel implementation of the proposed high-order dG scheme for large-scale seismic wave propagation in a simplified earth model, demonstrating high parallel efficiency for strong scaling to the full size of the Jaguar Cray XT5 supercomputer.

  20. Application of large-eddy simulation to pressurized thermal shock: Assessment of the accuracy

    International Nuclear Information System (INIS)

    Loginov, M.S.; Komen, E.M.J.; Hoehne, T.

    2011-01-01

    Highlights: → We compare large-eddy simulation with experiment on the single-phase pressurized thermal shock problem. → Three test cases are considered, they cover the entire range of mixing patterns. → The accuracy of the flow mixing in the reactor pressure vessel is assessed qualitatively and quantitatively. - Abstract: Pressurized Thermal Shock (PTS) is identified as one of the safety issues where Computational Fluid Dynamics (CFD) can bring real benefits. The turbulence modeling may impact the overall accuracy of the calculated thermal loads on the vessel walls, therefore advanced methods for turbulent flows are required. The feasibility and mesh resolution of LES for single-phase PTS are assessed earlier in a companion paper. The current investigation deals with the accuracy of the LES approach with respect to the experiment. Experimental data from the Rossendorf Coolant Mixing (ROCOM) facility is used as a basis for validation. Three test cases with different flow rates are considered. They correspond to a buoyancy-driven, a momentum-driven, and a transitional coolant mixing pattern in the downcomer. Time- and frequency-domain analysis are employed for comparison of the numerical and experimental data. The investigation shows a good qualitative prediction of the bulk flow patterns. The fluctuations are modeled correctly. A conservative estimate of the temperature drop near the wall can be obtained from the numerical results with a safety factor of 1.1-1.3. In general, the current LES gives a realistic and reliable description of the considered coolant mixing experiments. The accuracy of the prediction is definitely improved with respect to earlier CFD simulations.

  1. Interethnic differences in the accuracy of anthropometric indicators of obesity in screening for high risk of coronary heart disease

    Science.gov (United States)

    Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE

    2009-01-01

    Background Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States and their accuracy to screen for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective To compare the accuracy and to define ethnic and gender-specific optimal cut points for body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used in screening for high risk of CHD in the Latin-American and the US populations. Methods We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operator characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). Optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m2). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion WHR is the most accurate anthropometric indicator to screen for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
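
    The accuracy comparison above rests on the area under the ROC curve and on an optimal cut point. A minimal sketch with made-up data is shown below; note that it uses Youden's J to pick the cut point, whereas the cited study used a misclassification-cost term:

    ```python
    import numpy as np

    def roc_auc(score, label):
        """AUC via the rank (Mann-Whitney) formulation (assumes no tied scores)."""
        score, label = np.asarray(score, float), np.asarray(label, int)
        order = np.argsort(score)
        ranks = np.empty(len(score))
        ranks[order] = np.arange(1, len(score) + 1)
        n_pos, n_neg = label.sum(), (1 - label).sum()
        return (ranks[label == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    def youden_cut_point(score, label):
        """Cut point maximising sensitivity + specificity - 1 (Youden's J)."""
        best_c, best_j = None, -1.0
        for c in np.unique(score):
            pred = score >= c
            sens = np.mean(pred[label == 1])
            spec = np.mean(~pred[label == 0])
            if sens + spec - 1.0 > best_j:
                best_j, best_c = sens + spec - 1.0, c
        return best_c, best_j

    # Made-up waist-to-hip ratios and high-CHD-risk labels (illustrative only):
    whr   = np.array([0.84, 0.88, 0.90, 0.93, 0.95, 0.97, 1.00, 1.02])
    risky = np.array([0,    0,    0,    1,    0,    1,    1,    1   ])
    print("AUC                :", roc_auc(whr, risky))
    print("cut point, Youden J:", youden_cut_point(whr, risky))
    ```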

  2. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  3. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  4. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture...

  5. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    Science.gov (United States)

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by a between-subjects factorial design involving accuracy motivation (incentive or no incentive) and peer-performance anchor (95%, 55%, or no anchor). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. The accuracy incentive increased the anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation can improve metacomprehension accuracy in spite of the anchoring effect, but if the anchoring effect is too strong, it can overpower the motivation effect. The implications of the findings are discussed.

  6. Numerical Simulation of Oil Jet Lubrication for High Speed Gears

    Directory of Open Access Journals (Sweden)

    Tommaso Fondelli

    2015-01-01

    Full Text Available The Geared Turbofan technology is one of the most promising engine configurations to significantly reduce the specific fuel consumption. In this architecture, a power epicyclical gearbox is interposed between the fan and the low pressure spool. Thanks to the gearbox, the fan and the low pressure spool can turn at different speeds, leading to a higher engine bypass ratio. Therefore the gearbox efficiency becomes a key parameter for such technology. Further improvement of efficiency can be achieved by developing a physical understanding of the fluid dynamic losses within the transmission system. These losses are mainly related to viscous effects and are directly connected to the lubrication method. In this work, the oil injection losses have been studied by means of CFD simulations. A numerical study of a single oil jet impinging on a single high speed gear has been carried out using the VOF method. The aim of this analysis is to evaluate the resistant torque due to the oil jet lubrication, correlating the torque data with the oil-gear interaction phases. URANS calculations have been performed using an adaptive meshing approach as a way of significantly reducing the simulation costs. A global sensitivity analysis of the adopted models has been carried out and a numerical setup has been defined.

  7. The diagnostic test accuracy of ultrasound for the detection of lateral epicondylitis: a systematic review and meta-analysis.

    Science.gov (United States)

    Latham, S K; Smith, T O

    2014-05-01

    The purpose of this study was to determine the diagnostic test accuracy of ultrasound for the detection of lateral epicondylitis. An electronic search of databases registering published (MEDLINE, EMBASE, CINAHL, AMED, Cochrane Library, ScienceDirect) and unpublished literature was conducted to January 2013. All diagnostic accuracy studies that compared the accuracy of ultrasound (index test) with a reference standard for lateral epicondylitis were included. The methodological quality of each of the studies was appraised using the QUADAS tool. When appropriate, a pooled sensitivity and specificity analysis was conducted. Ten studies investigating 711 participants and 1077 elbows were included in this review. Ultrasound had variable sensitivity and specificity (sensitivity: 64%-100%; specificity: 36%-100%). The available literature had modest methodological quality, and was limited in terms of sample sizes and blinding between index and reference test results. There is evidence to support the use of ultrasound in the detection of lateral epicondylitis. However, its accuracy appears to be highly dependent on numerous variables, such as operator experience, equipment and stage of pathology. Judgement should be used when considering the benefit of ultrasound for use in clinical practice. Further research assessing variables such as transducer frequency independently is specifically warranted. Level II. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  8. Multigrid solution of the convection-diffusion equation with high-Reynolds number

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jun [George Washington Univ., Washington, DC (United States)

    1996-12-31

    A fourth-order compact finite difference scheme is employed with the multigrid technique to solve the variable coefficient convection-diffusion equation with high-Reynolds number. Scaled inter-grid transfer operators and potential on vectorization and parallelization are discussed. The high-order multigrid method is unconditionally stable and produces solution of 4th-order accuracy. Numerical experiments are included.
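
    A minimal two-grid sketch for the 1D model problem -u'' = f, intended only to illustrate the multigrid idea behind the record above; the paper itself combines multigrid with a fourth-order compact scheme for the variable-coefficient convection-diffusion equation, which is not reproduced here. Grid sizes and smoother parameters are assumptions.

```python
# Two-grid correction cycle for -u'' = f on (0,1), u(0)=u(1)=0, second-order differences.
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # Weighted Jacobi smoother (simultaneous update; the RHS is evaluated first)
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                                  # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((u.size + 1) // 2)
    rc[1:-1] = 0.25 * (r[1:-3:2] + 2.0 * r[2:-1:2] + r[3::2])      # full-weighting restriction
    nc, hc = rc.size - 1, 2.0 * h
    A = (np.diag(2.0 * np.ones(nc - 1)) - np.diag(np.ones(nc - 2), 1)
         - np.diag(np.ones(nc - 2), -1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                        # exact coarse-grid solve
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec) # linear prolongation
    return jacobi(u, f, h, sweeps=3)                               # post-smoothing

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```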

  9. The effect of numerical techniques on differential equation based chaotic generators

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-29

    In this paper, we study the effect of the numerical solution accuracy on the digital implementation of differential chaos generators. Four systems are built on a Xilinx Virtex 4 FPGA using the Euler, mid-point, and fourth-order Runge-Kutta techniques. The twelve implementations are compared based on FPGA area used, maximum throughput, maximum Lyapunov exponent, and autocorrelation confidence region. Based on circuit performance and the chaotic response of the different implementations, it was found that the less complicated numerical solutions have a better chaotic response and higher throughput.
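
    A minimal sketch (not the FPGA implementation) of the three integration techniques compared above, applied to the Lorenz system as a stand-in continuous-time chaos generator; step size and initial state are assumptions.

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def euler(f, v, h):
    return v + h * f(v)

def midpoint(f, v, h):
    return v + h * f(v + 0.5 * h * f(v))

def rk4(f, v, h):
    k1 = f(v)
    k2 = f(v + 0.5 * h * k1)
    k3 = f(v + 0.5 * h * k2)
    k4 = f(v + h * k3)
    return v + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

h, steps = 0.01, 5000
for name, step in [("Euler", euler), ("mid-point", midpoint), ("RK4", rk4)]:
    v = np.array([1.0, 1.0, 1.0])
    for _ in range(steps):
        v = step(lorenz, v, h)
    print(f"{name:9s} state after {steps} steps: {v}")
```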

  10. Numerical solutions of stochastic Lotka-Volterra equations via operational matrices

    Directory of Open Access Journals (Sweden)

    F. Hosseini Shekarabi

    2016-03-01

    Full Text Available In this paper, an efficient and convenient method for the numerical solution of the stochastic Lotka-Volterra dynamical system is proposed. Here, we consider block pulse functions and their operational matrices of integration. An illustrative example is included to demonstrate the procedure and accuracy of the operational matrices based on block pulse functions.
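
    A minimal sketch of the operational matrix of integration for block pulse functions, the building block mentioned above; it only illustrates how the matrix converts coefficients of a signal into coefficients of its running integral, not the stochastic Lotka-Volterra solver of the paper. The test function and number of pulses are assumptions.

```python
import numpy as np

def block_pulse_integration_matrix(m, T=1.0):
    """Operational matrix P with  integral_0^t Phi(s) ds  ~=  P Phi(t)."""
    h = T / m
    P = np.triu(np.full((m, m), h), k=1)   # h above the diagonal
    np.fill_diagonal(P, h / 2.0)           # h/2 on the diagonal
    return P

m, T = 32, 1.0
t_mid = (np.arange(m) + 0.5) * T / m       # midpoints of the block pulses
c = np.cos(t_mid)                          # block pulse coefficients of cos(t)
P = block_pulse_integration_matrix(m, T)

# If f(t) ~= c . Phi(t), the coefficients of its running integral are P.T @ c.
approx = P.T @ c
print("max error vs sin(t):", np.abs(approx - np.sin(t_mid)).max())
```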

  11. Numerical Study on Several Stabilized Finite Element Methods for the Steady Incompressible Flow Problem with Damping

    Directory of Open Access Journals (Sweden)

    Jilian Wu

    2013-01-01

    Full Text Available We discuss several stabilized finite element methods, namely the penalty, regular, multiscale enrichment, and local Gauss integration methods, for the steady incompressible flow problem with damping based on the lowest equal-order finite element space pair. Then we give numerical comparisons between them in three numerical examples, which show that the local Gauss integration method has good stability, efficiency, and accuracy properties and that it is better than the others for the steady incompressible flow problem with damping on the whole. However, to our surprise, the regular method spends less CPU time and has better accuracy properties when the Crout solver is used.

  12. Assessment of high precision, high accuracy Inductively Coupled Plasma-Optical Emission Spectroscopy to obtain concentration uncertainties less than 0.2% with variable matrix concentrations

    International Nuclear Information System (INIS)

    Rabb, Savelas A.; Olesik, John W.

    2008-01-01

    The ability to obtain high precision, high accuracy measurements in samples with complex matrices using High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy (HP-ICP-OES) was investigated. The Common Analyte Internal Standard (CAIS) procedure was incorporated into the High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy method to correct for matrix-induced changes in emission intensity ratios. Matrix matching and standard addition approaches to minimize matrix-induced errors when using High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy were also assessed. The High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy method was tested with synthetic solutions in a variety of matrices, alloy standard reference materials and geological reference materials

  13. The accuracy of {sup 68}Ga-PSMA PET/CT in primary lymph node staging in high-risk prostate cancer

    Energy Technology Data Exchange (ETDEWEB)

    Oebek, Can; Doganca, Tuenkut [Acibadem Taksim Hospital, Department of Urology, Istanbul (Turkey); Demirci, Emre [Sisli Etfal Training and Research Hospital, Department of Nuclear Medicine, Istanbul (Turkey); Ocak, Meltem [Istanbul University, Faculty of Pharmacy, Department of Pharmaceutical Technology, Istanbul (Turkey); Kural, Ali Riza [Acibadem University, Department of Urology, Istanbul (Turkey); Yildirim, Asif [Istanbul Medeniyet University, Department of Urology, Istanbul (Turkey); Yuecetas, Ugur [Istanbul Training and Research Hospital, Department of Urology, Istanbul (Turkey); Demirdag, Cetin [Istanbul University, Cerrahpasa School of Medicine, Department of Urology, Istanbul (Turkey); Erdogan, Sarper M. [Istanbul University, Cerrahpasa School of Medicine, Department of Public Health, Istanbul (Turkey); Kabasakal, Levent [Istanbul University, Cerrahpasa School of Medicine, Department of Nuclear Medicine, Istanbul (Turkey); Collaboration: Members of Urooncology Association, Turkey

    2017-10-15

    To assess the diagnostic accuracy of {sup 68}Ga-PSMA PET in predicting lymph node (LN) metastases in primary N staging in high-risk and very high-risk nonmetastatic prostate cancer in comparison with morphological imaging. This was a multicentre trial of the Society of Urologic Oncology in Turkey in conjunction with the Nuclear Medicine Department of Cerrahpasa School of Medicine, Istanbul University. Patients were accrued from eight centres. Patients with high-risk and very high-risk disease scheduled to undergo surgical treatment with extended LN dissection between July 2014 and October 2015 were included. Either MRI or CT was used for morphological imaging. PSMA PET/CT was performed and evaluated at a single centre. Sensitivity, specificity and accuracy were calculated for the detection of lymphatic metastases by PSMA PET/CT and morphological imaging. Kappa values were calculated to evaluate the correlation between the numbers of LN metastases detected by PSMA PET/CT and by histopathology. Data on 51 eligible patients are presented. The sensitivity, specificity and accuracy of PSMA PET in detecting LN metastases in the primary setting were 53%, 86% and 76%, and increased to 67%, 88% and 81% in the subgroup of patients with ≥15 LNs removed. Kappa values for the correlation between imaging and pathology were 0.41 for PSMA PET and 0.18 for morphological imaging. PSMA PET/CT is superior to morphological imaging for the detection of metastatic LNs in patients with primary prostate cancer. Surgical dissection remains the gold standard for precise lymphatic staging. (orig.)
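
    A minimal sketch of how sensitivity, specificity, accuracy and Cohen's kappa follow from a 2x2 comparison of imaging against the histopathology reference; the counts below are hypothetical illustrative numbers chosen to roughly match the reported percentages, not the study data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (acc - p_chance) / (1.0 - p_chance)
    return sens, spec, acc, kappa

# Hypothetical counts for 51 patients
sens, spec, acc, kappa = diagnostic_metrics(tp=9, fp=5, fn=8, tn=29)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%} kappa={kappa:.2f}")
```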

  14. Discussion of various flow calculation methods in high-speed centrifuges

    International Nuclear Information System (INIS)

    Louvet, P.; Cortet, C.

    1979-01-01

    The flow in high-speed centrifuges for the separation of uranium isotopes has been studied within the framework of linearized theory for many years. Three different methods have been derived for viscous compressible flow with small Ekman numbers and high Mach numbers: - numerical solution of the flow equation by the finite element method and Gaussian elimination (Centaure code), - boundary layer theory using matched asymptotic expansions, - the so-called eigenfunction method, slightly modified. The mathematical assumptions, the ease and the accuracy of the computations are compared. Numerical applications are performed successively for thermal countercurrent centrifuges with or without injections.

  15. Time domain numerical calculations of the short electron bunch wakefields in resistive structures

    Energy Technology Data Exchange (ETDEWEB)

    Tsakanian, Andranik

    2010-10-15

    The acceleration of electron bunches with very small longitudinal and transverse phase space volume is one of the most pressing challenges for the future International Linear Collider and high brightness X-Ray Free Electron Lasers. Exact knowledge of the wake fields generated by ultra-short electron bunches during their interaction with surrounding structures is a very important issue to prevent beam quality degradation and to optimize the facility performance. High accuracy time domain numerical calculations play a decisive role in the correct evaluation of the wake fields in advanced accelerators. The thesis is devoted to the development of a new longitudinally dispersion-free 3D hybrid numerical scheme in the time domain for wake field calculation of ultra-short bunches in structures with walls of finite conductivity. The basic approaches used in the thesis to solve the problem are the following. For materials with high but finite conductivity, the model of plane wave reflection from a conducting half-space is used. It is shown that in the conducting half-space the field components perpendicular to the interface can be neglected. The electric tangential component on the surface contributes to the tangential magnetic field in the lossless area just before the boundary layer. For highly conducting media, the task is reduced to a 1D electromagnetic problem in the metal, and the so-called 1D conducting line model can be applied instead of a full 3D space description. Further, a TE/TM ("transverse electric - transverse magnetic") splitting implicit numerical scheme, along with the 1D conducting line model, is applied to develop a new longitudinally dispersion-free hybrid numerical scheme in the time domain. The stability of the new hybrid numerical scheme in vacuum, conductor and boundary cells is studied. The convergence of the new scheme is analyzed by comparison with well-known analytical solutions. The wakefield calculations for a number of

  16. Interobserver Variability and Accuracy of High-Definition Endoscopic Diagnosis for Gastric Intestinal Metaplasia among Experienced and Inexperienced Endoscopists

    Science.gov (United States)

    Hyun, Yil Sik; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo

    2013-01-01

    Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing gastric IM. The aims of the study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for a diagnostic inquiry of gastric IM through visual inspection. The interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to assess the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38), and it was also "poor" (κ = 0.33) among the inexperienced endoscopists. The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable in the diagnosis of IM, all areas suspicious for gastric IM should be biopsied. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy for gastric IM. PMID:23678267

  17. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
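
    A minimal sketch of one of the numerical error sources discussed above: the linearized least-squares step solved via the normal equations versus a QR factorization on a badly conditioned design matrix. This is only an illustration of how finite precision enters the solution, not the GRACE processing chain; the matrix and parameter sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
A = np.vander(t, 12, increasing=True)      # ill-conditioned mapping matrix
x_true = rng.standard_normal(12)
b = A @ x_true                             # noise-free synthetic observations

# Normal equations: squares the condition number
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization: works with the original condition number
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("condition number of A:", np.linalg.cond(A))
print("parameter error, normal equations:", np.linalg.norm(x_ne - x_true))
print("parameter error, QR factorization:", np.linalg.norm(x_qr - x_true))
```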

  18. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of these parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly related to floodplain elevation differences and vertical roughness within grid cells, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models have recently been developed for open channel flows but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.

  19. High-accuracy waveforms for binary black hole inspiral, merger, and ringdown

    International Nuclear Information System (INIS)

    Scheel, Mark A.; Boyle, Michael; Chu, Tony; Matthews, Keith D.; Pfeiffer, Harald P.; Kidder, Lawrence E.

    2009-01-01

    The first spectral numerical simulations of 16 orbits, merger, and ringdown of an equal-mass nonspinning binary black hole system are presented. The numerical phase errors accumulated through ringdown by the gravitational waveforms from these simulations are quantified; the final black hole mass is M_f/M = 0.95162 ± 0.00002, and the final black hole spin is S_f/M_f^2 = 0.68646 ± 0.00004.

  20. The effect of high-resolution orography on numerical modelling of atmospheric flow: a preliminary experiment

    International Nuclear Information System (INIS)

    Scarani, C.; Tampieri, F.; Tibaldi, S.

    1983-01-01

    The effect of increasing the resolution of the topography in numerical weather prediction models is assessed. Different numerical experiments have been performed, referring to a case of cyclogenesis in the lee of the Alps. From the comparison, it appears that the lower atmospheric levels are better described by the model with higher-resolution topography; runs with comparable horizontal resolution but smoother topography appear to be less satisfactory in this respect. It also turns out that the vertical propagation of the signal due to the front-mountain interaction is faster in the high-resolution experiment.

  1. Numerical Prediction of Springback Shape of Severely Bent Sheet Metal

    International Nuclear Information System (INIS)

    Iwata, Noritoshi; Murata, Atsunobu; Yogo, Yasuhiro; Tsutamori, Hideo; Niihara, Masatomo; Ishikura, Hiroshi; Umezu, Yasuyoshi

    2007-01-01

    In sheet metal forming simulation, the widely used shell element is assumed to be in a plane stress state based on the Mindlin-Reissner theory. Numerical prediction with the conventional shell element is not accurate when the bending radius is small compared to the sheet thickness. The main reason is that the strain and stress formulation of the conventional shell element does not fit the actual phenomenon. In order to precisely predict the springback of a severely bent sheet, a measurement method for through-thickness strain has been proposed. The strain was formulated based on measurement results and on calculation results from solid elements. The through-thickness stress distribution was formulated based on equilibrium. A shell element based on these formulations was newly introduced into the FEM code. The accuracy of this method's prediction of the springback shape for two bending processes has been confirmed. As a result, it was found that the springback shape even in severe bending can be predicted with high accuracy. Moreover, the calculation time with the proposed shell element is about twice that of the conventional shell element, and has been shortened to about 1/20 of that of a solid element.

  2. Modeling and numerical simulations of the influenced Sznajd model

    Science.gov (United States)

    Karan, Farshad Salimi Naneh; Srinivasan, Aravinda Ramakrishnan; Chakraborty, Subhadeep

    2017-08-01

    This paper investigates the effects of independent nonconformists, or influencers, on the behavioral dynamics of a population of agents interacting with each other based on the Sznajd model. The system is modeled on a complete graph using the master equation. The resulting equation has been solved numerically. The accuracy of the mathematical model and its corresponding assumptions has been validated by numerical simulations. Regions of initial magnetization have been found from which the system converges to one of two unique steady-state PDFs, depending on the distribution of influencers. The scaling property and entropy of the stationary system in the presence of varying levels of influence are presented and discussed.
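
    A minimal agent-based sketch of Sznajd-type dynamics on a complete graph with a fixed set of influencers that never change opinion. It is a Monte Carlo counterpart to the master-equation treatment described above, not the paper's model or code; population size, influencer fraction and update rule details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000
frac_influencers = 0.05
opinions = rng.choice([-1, 1], size=N, p=[0.4, 0.6])   # initial magnetization ~0.2
influencers = rng.random(N) < frac_influencers
opinions[influencers] = 1                              # influencers all hold opinion +1

for _ in range(200 * N):
    # Pick a pair and a target at random; an agreeing pair convinces the target
    i, j, k = rng.integers(0, N, size=3)
    if opinions[i] == opinions[j] and not influencers[k]:
        opinions[k] = opinions[i]

print("final magnetization:", opinions.mean())
```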

  3. Preliminary analysis of four numerical models for calculating the mesoscale transport of Kr-85

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W; Cooper, R E [Du Pont de Nemours (E.I.) and Co., Aiken, SC (USA). Savannah River Lab.

    1983-01-01

    A performance study of four numerical algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids has been made. Dispersion from point and distributed sources and a simulation of a continuous source are compared with analytical solutions to assess relative accuracy. Model predictions are then compared with actual measurements of Kr-85 emitted from the Savannah River Plant (SRP). The particle-in-cell and method of moments algorithms exhibit superior accuracy in modeling single source releases. For modeling distributed sources, algorithms based on the pseudospectral and finite element interpolation concepts exhibit comparable accuracy. The method of moments is felt to be the best overall performer, although all the models appear to be relatively close in accuracy.

  4. On Numerical Stability in Large Scale Linear Algebraic Computations

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Liesen, J.

    2005-01-01

    Roč. 85, č. 5 (2005), s. 307-325 ISSN 0044-2267 R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : linear algebraic systems * eigenvalue problems * convergence * numerical stability * backward error * accuracy * Lanczos method * conjugate gradient method * GMRES method Subject RIV: BA - General Mathematics Impact factor: 0.351, year: 2005

  5. Numerical solution of modified differential equations based on symmetry preservation.

    Science.gov (United States)

    Ozbenli, Ersin; Vedula, Prakash

    2017-12-01

    In this paper, we propose a method to construct invariant finite-difference schemes for solution of partial differential equations (PDEs) via consideration of modified forms of the underlying PDEs. The invariant schemes, which preserve Lie symmetries, are obtained based on the method of equivariant moving frames. While it is often difficult to construct invariant numerical schemes for PDEs due to complicated symmetry groups associated with cumbersome discrete variable transformations, we note that symmetries associated with more convenient transformations can often be obtained by appropriately modifying the original PDEs. In some cases, modifications to the original PDEs are also found to be useful in order to avoid trivial solutions that might arise from particular selections of moving frames. In our proposed method, modified forms of PDEs can be obtained either by addition of perturbation terms to the original PDEs or through defect correction procedures. These additional terms, whose primary purpose is to enable symmetries with more convenient transformations, are then removed from the system by considering moving frames for which these specific terms go to zero. Further, we explore selection of appropriate moving frames that result in improvement in accuracy of invariant numerical schemes based on modified PDEs. The proposed method is tested using the linear advection equation (in one- and two-dimensions) and the inviscid Burgers' equation. Results obtained for these test cases indicate that numerical schemes derived from the proposed method perform significantly better than existing schemes not only by virtue of improvement in numerical accuracy but also due to preservation of qualitative properties or symmetries of the underlying differential equations.
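
    A minimal sketch of a conventional baseline for the first test case mentioned above: a first-order upwind scheme for the 1D linear advection equation with periodic boundaries. It is not the paper's invariant scheme, only the kind of standard scheme such methods are compared against; grid size, CFL number and initial profile are assumptions.

```python
import numpy as np

a, L, nx = 1.0, 1.0, 200                # advection speed, domain length, cells
dx = L / nx
dt = 0.5 * dx / a                       # CFL number 0.5
x = np.arange(nx) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)     # initial Gaussian profile
u0 = u.copy()

nsteps = int(round(L / (a * dt)))       # advect once around the periodic domain
for _ in range(nsteps):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference for a > 0

print("L2 error after one period:", np.sqrt(dx) * np.linalg.norm(u - u0))
```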

  6. Classification Accuracy Increase Using Multisensor Data Fusion

    Science.gov (United States)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may provide wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion approaches of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows for different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  7. The ripple electromagnetic calculation: accuracy demand and possible responses

    International Nuclear Information System (INIS)

    Cocilovo, V.; Ramogida, G.; Formisano, A.; Martone, R.; Portone, A.; Roccella, M.; Roccella, R.

    2006-01-01

    Due to a number of causes (the finite number of toroidal field coils or the presence of concentrated blocks of magnetic materials, such as the neutral beam shielding), the actual magnetic configuration in a Tokamak differs from the desired one. For example, a ripple is added to the ideal axisymmetric toroidal field, impacting the equilibrium and stability of the plasma column; as a further example, the magnetic field outside the plasma affects the operation of a number of critical components, including the diagnostic system and the neutral beam. Therefore the actual magnetic field has to be suitably calculated and its shape controlled within the required limits. Due to the complexity of its design, the problem is quite critical for the ITER project. In this paper the problem is discussed from both the mathematical and the numerical point of view. In particular, a complete formulation is proposed, taking into account both the presence of nonlinear magnetic materials and the fully 3D geometry. Then the quality requirements are discussed, including the accuracy of the calculations and the spatial resolution. As a consequence, numerical tools able to fulfil the quality needs while requiring a reasonable computational burden are considered. In particular, possible tools based on numerical FEM schemes are considered; in addition, in spite of the presence of nonlinear materials, the practical possibility of using Biot-Savart based approaches as cross-check tools is also discussed. The paper also analyses possible geometrical simplifications able to make the actual calculation feasible while guaranteeing the required accuracy. Finally, the characteristics required for a correction system able to effectively counteract the magnetic field degradation are presented. A number of examples are also reported and commented on. (author)

  8. Numerical Solutions for Nonlinear High Damping Rubber Bearing Isolators: Newmark's Method with Newton-Raphson Iteration Revisited

    Science.gov (United States)

    Markou, A. A.; Manolis, G. D.

    2018-03-01

    Numerical methods for the solution of dynamical problems in engineering go back to 1950. The most famous and widely-used time stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that was used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low friction sliding bearings is described by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark's time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
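
    A minimal sketch of Newmark's average-acceleration method (beta = 1/4, gamma = 1/2) for a linear single-degree-of-freedom system m*a + c*v + k*u = p(t). For the trilinear hysteretic and friction models of the paper the restoring force is nonlinear and a Newton-Raphson iteration is required within each time step; that part is omitted here. The mass, damping, stiffness and load values are assumptions.

```python
import numpy as np

m, c, k = 1.0, 0.05, 40.0                 # mass, damping, stiffness (assumed)
beta, gamma = 0.25, 0.5                   # average-acceleration Newmark parameters
dt, nsteps = 0.01, 1000
p = lambda t: np.sin(2.0 * np.pi * t)     # external load

u, v = 0.0, 0.0
a = (p(0.0) - c * v - k * u) / m          # initial acceleration from equilibrium
keff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)

for n in range(nsteps):
    t1 = (n + 1) * dt
    rhs = (p(t1)
           + m * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
           + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                  + dt * (0.5 * gamma / beta - 1.0) * a))
    u_new = rhs / keff
    v_new = (gamma / (beta * dt) * (u_new - u)
             + (1.0 - gamma / beta) * v + dt * (1.0 - 0.5 * gamma / beta) * a)
    a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    u, v, a = u_new, v_new, a_new

print("displacement at t = 10 s:", u)
```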

  9. A three axis turntable's online initial state measurement method based on the high-accuracy laser gyro SINS

    Science.gov (United States)

    Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu

    2016-10-01

    As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of the INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. Therefore, it is relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high precision attitude information provided by the laser gyro strapdown inertial navigation system (SINS) after fine alignment, we can use it as the attitude reference for the initial state measurement of the three-axis turntable. Based on the principle that the fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.

  10. Measurement of shape mapping accuracy of a flaccid membrane of a heart assist pump

    Directory of Open Access Journals (Sweden)

    Wojciech Sulej

    2017-12-01

    Full Text Available The paper presents research results which are a continuation of work on the use of image processing techniques to determine the membrane shape of an artificial ventricle. The studies focused on developing a technique for measuring the accuracy of the membrane shape mapping. This is important in view of ensuring the required accuracy of determining the instantaneous stroke volume of a controlled pneumatic artificial ventricle. Experiments were carried out on models of convex, concave, and flat membranes. The purpose of the research was to obtain a numerical indicator which will be used to evaluate options for improving the mapping technique of the membrane shape. Keywords: accuracy measurement, membrane shape mapping, optical sensor

  11. Numerical Study on Critical Wedge Angle of Cellular Detonation Reflections

    International Nuclear Information System (INIS)

    Gang, Wang; Kai-Xin, Liu; De-Liang, Zhang

    2010-01-01

    The critical wedge angle (CWA) for the transition from regular reflection (RR) to Mach reflection (MR) of a cellular detonation wave is studied numerically by an improved space-time conservation element and solution element method together with a two-step chemical reaction model. The accuracy of this numerical approach is verified by simulating cellular detonation reflections at a 19.3° wedge. The planar and cellular detonation reflections over 45°–55° wedges are also simulated. When the cellular detonation wave passes over a 50° wedge, numerical results show a new phenomenon in which RR and MR occur alternately. The transition process between RR and MR is investigated with local pressure contours. Numerical analysis shows that the cellular structure is the essential reason for the new phenomenon and that the CWA of detonation reflection is not a single angle but an angle range. (fundamental areas of phenomenology (including applications))

  12. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Science.gov (United States)

    Zhu, Xiangbin; Qiu, Huiling

    2016-01-01

    Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially for healthcare services. In order to improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses a significantly smaller number of features, and the overall accuracy is clearly improved.

  13. High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.

    Directory of Open Access Journals (Sweden)

    Xiangbin Zhu

    Full Text Available Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially for healthcare services. In order to improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses a significantly smaller number of features, and the overall accuracy is clearly improved.

  14. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    Science.gov (United States)

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  15. Three-dimensional holographic optical manipulation through a high-numerical-aperture soft-glass multimode fibre

    Science.gov (United States)

    Leite, Ivo T.; Turtaev, Sergey; Jiang, Xin; Šiler, Martin; Cuschieri, Alfred; Russell, Philip St. J.; Čižmár, Tomáš

    2018-01-01

    Holographic optical tweezers (HOT) hold great promise for many applications in biophotonics, allowing the creation and measurement of minuscule forces on biomolecules, molecular motors and cells. Geometries used in HOT currently rely on bulk optics, and their exploitation in vivo is compromised by the optically turbid nature of tissues. We present an alternative HOT approach in which multiple three-dimensional (3D) traps are introduced through a high-numerical-aperture multimode optical fibre, thus enabling an equally versatile means of manipulation through channels having cross-section comparable to the size of a single cell. Our work demonstrates real-time manipulation of 3D arrangements of micro-objects, as well as manipulation inside otherwise inaccessible cavities. We show that the traps can be formed over fibre lengths exceeding 100 mm and positioned with nanometric resolution. The results provide the basis for holographic manipulation and other high-numerical-aperture techniques, including advanced microscopy, through single-core-fibre endoscopes deep inside living tissues and other complex environments.

  16. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    Science.gov (United States)

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using the power of symbolic computation, 'Bruno force', named after Bruno Buchberger, who introduced the Gröbner basis. In the method, objective functions combining symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
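
    A minimal sketch of symbolic elimination with a Gröbner basis in SymPy. It only illustrates the algebraic-elimination idea behind the symbolic-computation step described above; the paper's workflow uses differential elimination on systems of ODEs, which is a more general operation than this polynomial example, and the small "reaction" system below is entirely hypothetical.

```python
from sympy import symbols, groebner

x, y, k1, k2 = symbols('x y k1 k2')

# Hypothetical steady-state relations of a small cascade
system = [k1 * x - k2 * y,        # production of y balanced by its decay
          x + y - 1]              # conservation of total amount

# A lexicographic Groebner basis with x ranked first eliminates x,
# leaving a relation involving only y, k1 and k2.
G = groebner(system, x, y, k1, k2, order='lex')
print(G)
```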

  17. Numerical method for partial equilibrium flow

    International Nuclear Information System (INIS)

    Ramshaw, J.D.; Cloutman, L.D. (Los Alamos, New Mexico 87545)

    1981-01-01

    A numerical method is presented for chemically reactive fluid flow in which equilibrium and nonequilibrium reactions occur simultaneously. The equilibrium constraints on the species concentrations are established by a quadratic iterative procedure. If the equilibrium reactions are uncoupled and of second or lower order, the procedure converges in a single step. In general, convergence is most rapid when the reactions are weakly coupled. This can frequently be achieved by a judicious choice of the independent reactions. In typical transient calculations, satisfactory accuracy has been achieved with about five iterations per time step

  18. High-accuracy contouring using projection moiré

    Science.gov (United States)

    Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.

    2005-09-01

    Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this fact, and the extensive number of papers that have been published on this topic, the use of shadow moiré as an accurate tool that can compete with alternative devices poses many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments of the projection moiré method are presented. Comparisons between the results obtained with the projection method and the results obtained by mechanical devices that operate with contact probes are presented. These results show that the use of projection moiré makes it possible to achieve the same accuracy that current mechanical touch probe devices can provide.

  19. Numerical simulations of highly buoyant flows in the Castel Giorgio - Torre Alfina deep geothermal reservoir

    Science.gov (United States)

    Volpi, Giorgio; Crosta, Giovanni B.; Colucci, Francesca; Fischer, Thomas; Magri, Fabien

    2017-04-01

    Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. However, its present utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. This is mainly due to the uncertainties associated with it, for example the lack of appropriate computational tools necessary to perform effective analyses. The aim of the present study is to build an accurate 3D numerical model to simulate the exploitation of the deep geothermal reservoir of Castel Giorgio - Torre Alfina (central Italy), and to compare the results and performance of parallel simulations performed with TOUGH2 (Pruess et al. 1999), FEFLOW (Diersch 2014) and the open source software OpenGeoSys (Kolditz et al. 2012). Detailed geological, structural and hydrogeological data, available for the selected area since the early 1970s, show that Castel Giorgio - Torre Alfina is a potential geothermal reservoir with high thermal characteristics (120 °C - 150 °C) and fluids, such as pressurized water and gas (mainly CO2), hosted in a carbonate formation. Our two-step simulations first recreate the undisturbed natural state of the system and then perform a predictive analysis of the industrial exploitation process. The three codes showed strong numerical accuracy, which was verified by comparing the simulated and measured temperature and pressure values of the geothermal wells in the area. The results of our simulations demonstrate the sustainability of the investigated geothermal field for the development of a 5 MW pilot plant with total reinjection of the fluids into the original formation. From the thermal point of view, a very efficient buoyant circulation inside the geothermal system has been observed, allowing the reservoir to support the hypothesis of a 50-year production time with a flow rate of 1050 t

  20. Accuracy, convergence and stability of finite element CFD algorithms

    International Nuclear Information System (INIS)

    Baker, A.J.; Iannelli, G.S.; Noronha, W.P.

    1989-01-01

    The requirement for artificial dissipation is well understood for shock-capturing CFD procedures in aerodynamics. However, numerical diffusion is widely utilized across the board in Navier-Stokes CFD algorithms, ranging from incompressible through supersonic flow applications. The Taylor weak statement (TWS) theory is applicable to any conservation law system containing an evolutionary component, wherein the analytical modifications become functionally dependent on the Jacobian of the corresponding equation system flux vector. The TWS algorithm is developed for a range of fluid mechanics conservation law systems including the incompressible Navier-Stokes, depth-averaged free surface hydrodynamic Navier-Stokes, and compressible Euler and Navier-Stokes equations. This paper presents the TWS statement for this problem class range and highlights the important theoretical issues of accuracy, convergence and stability. Numerical results for a variety of benchmark problems are presented to document key features. 8 refs

  1. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

    In the digital low level RF (LLRF) system of a circular (particle) accelerator, the RF field signal is usually down converted to a fixed intermediate frequency (IF). The ratio of IF and sampling frequency determines the processing required, and differs in various LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency and algorithm. A new RF detection method based on a double heterodyne architecture for wide IF range has been developed, which achieves the high accuracy requirement of modern LLRF. In this paper, the relation of IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift for 16 h IF detection are inhibited by the amplitude and phase calibrations. (authors)
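
    A minimal sketch (not the LLRF firmware described above) of digital I/Q demodulation of a sampled intermediate-frequency signal to recover its amplitude and phase. The sampling frequency, IF and averaging window are assumptions; the IF-to-sampling-frequency ratio is chosen so that an integer number of IF periods fits into the window, which lets a simple average act as the low-pass filter.

```python
import numpy as np

fs = 80e6                      # sampling frequency [Hz] (assumed)
f_if = 10e6                    # intermediate frequency: fs/f_if = 8 samples per period
n = np.arange(4096)            # averaging window = 512 full IF periods
amp_true, phase_true = 0.73, np.deg2rad(37.0)

signal = amp_true * np.cos(2.0 * np.pi * f_if / fs * n + phase_true)
signal += 1e-3 * np.random.default_rng(4).standard_normal(n.size)   # measurement noise

# Mix down with a digital local oscillator, then average over integer periods
lo = np.exp(-1j * 2.0 * np.pi * f_if / fs * n)
iq = np.mean(signal * lo)      # equals (A/2) * exp(i*phase) up to noise

print("amplitude:", 2.0 * np.abs(iq), "phase [deg]:", np.degrees(np.angle(iq)))
```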

  2. Numerical simulation of proton exchange membrane fuel cells at high operating temperature

    Science.gov (United States)

    Peng, Jie; Lee, Seung Jae

    A three-dimensional, single-phase, non-isothermal numerical model for proton exchange membrane (PEM) fuel cell at high operating temperature (T ≥ 393 K) was developed and implemented into a computational fluid dynamic (CFD) code. The model accounts for convective and diffusive transport and allows predicting the concentration of species. The heat generated from electrochemical reactions, entropic heat and ohmic heat arising from the electrolyte ionic resistance were considered. The heat transport model was coupled with the electrochemical and mass transport models. The product water was assumed to be vaporous and treated as ideal gas. Water transportation across the membrane was ignored because of its low water electro-osmosis drag force in the polymer polybenzimidazole (PBI) membrane. The results show that the thermal effects strongly affect the fuel cell performance. The current density increases with the increasing of operating temperature. In addition, numerical prediction reveals that the width and distribution of gas channel and current collector land area are key optimization parameters for the cell performance improvement.

  3. Numerical simulation of proton exchange membrane fuel cells at high operating temperature

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Jie; Lee, Seung Jae [Energy Lab, Samsung Advanced Institute of Technology, Mt. 14-1 Nongseo-Dong, Giheung-Gu, Yongin-Si, Gyeonggi-Do 446-712 (Korea, Republic of)

    2006-11-22

    A three-dimensional, single-phase, non-isothermal numerical model for proton exchange membrane (PEM) fuel cell at high operating temperature (T>=393K) was developed and implemented into a computational fluid dynamic (CFD) code. The model accounts for convective and diffusive transport and allows predicting the concentration of species. The heat generated from electrochemical reactions, entropic heat and ohmic heat arising from the electrolyte ionic resistance were considered. The heat transport model was coupled with the electrochemical and mass transport models. The product water was assumed to be vaporous and treated as ideal gas. Water transportation across the membrane was ignored because of its low water electro-osmosis drag force in the polymer polybenzimidazole (PBI) membrane. The results show that the thermal effects strongly affect the fuel cell performance. The current density increases with the increasing of operating temperature. In addition, numerical prediction reveals that the width and distribution of gas channel and current collector land area are key optimization parameters for the cell performance improvement. (author)

  4. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    International Nuclear Information System (INIS)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook; Kim, Woo Youn

    2015-01-01

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set indicates that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes induced by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward the complete basis set limit by simply decreasing the scaling factor, regardless of the system.

  5. A numerical study on the mechanical properties and the processing behaviour of composite high strength steels

    Energy Technology Data Exchange (ETDEWEB)

    Muenstermann, Sebastian [RWTH Aachen (Germany). Dept. of Ferrous Metallurgy]; Vajragupta, Napat [RWTH Aachen (Germany). Materials Mechanics Group]; Weisgerber, Bernadette [ThyssenKrupp Steel Europe AG (Germany). Patent Dept.]; Kern, Andreas [ThyssenKrupp Steel Europe AG (Germany). Dept. of Quality Affairs]

    2013-06-01

    The demand for lightweight construction in mechanical and civil engineering has strongly promoted the development of high strength steels with excellent damage tolerance. Nowadays, the requirements from mechanical and civil engineering are even more challenging, as gradients in mechanical properties are demanded increasingly often for components that are utilized close to the limit state of load bearing capacity. A metallurgical solution to this demand is given by composite rolling processes, in which components with different chemical compositions are joined and develop special properties after heat treatment. These are currently being evaluated in order to verify that structural steels with the desired gradients in mechanical properties can be processed. A numerical study was performed to predict strength and toughness properties, as well as the processing behaviour, using finite element (FE) simulations with damage mechanics approaches. To determine the mechanical properties, simulations of tensile specimens, SENB samples, and a mobile crane were carried out for different configurations of composite-rolled materials made of high strength structural steels. As a parameter study, both the geometrical and the metallurgical configurations of the composite-rolled steels were modified: the thickness of each steel layer and the materials configuration were varied. In this way, a numerical procedure to define optimum tailored configurations of high strength steels could be established.

  6. Solutions manual to accompany An introduction to numerical methods and analysis

    CERN Document Server

    Epperson, James F

    2014-01-01

    A solutions manual to accompany An Introduction to Numerical Methods and Analysis, Second Edition An Introduction to Numerical Methods and Analysis, Second Edition reflects the latest trends in the field, includes new material and revised exercises, and offers a unique emphasis on applications. The author clearly explains how to both construct and evaluate approximations for accuracy and performance, which are key skills in a variety of fields. A wide range of higher-level methods and solutions, including new topics such as the roots of polynomials, sp

  7. Numerical modeling and validation of helium jet impingement cooling of high heat flux divertor components

    International Nuclear Information System (INIS)

    Koncar, Bostjan; Simonovski, Igor; Norajitra, Prachai

    2009-01-01

    Numerical analyses of jet impingement cooling presented in this paper were performed as part of helium-cooled divertor studies for the post-ITER generation of fusion reactors. The cooling ability of a divertor cooled by multiple helium jets was analysed. Thermal-hydraulic characteristics and temperature distributions in the solid structures were predicted for the reference geometry of one cooling finger. To assess numerical errors, different meshes (hexahedral, tetrahedral, tetra-prism) and discretisation schemes were used. The temperatures in the solid structures decrease with finer mesh and higher order discretisation and converge towards finite values. Numerical simulations were validated against high heat flux experiments performed at the Efremov Institute, St. Petersburg. The predicted design parameters show reasonable agreement with measured data. The calculated maximum thimble temperature was below the tile-thimble brazing temperature, indicating good heat removal capability of the reference divertor design. (author)

  8. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    Science.gov (United States)

    Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana

    2015-01-01

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide a large-scale spatial representation of ETref, which is essential for regional-scale water resources management. Data used in the development of the NOAA daily ETref maps are derived from observations over surfaces that differ from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which introduces an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for the TXHPET locations and compared against ground measurements on reference grass surfaces. The NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS-modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to the NLDAS-modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of the NOAA ETref maps.

  9. Accuracy of thick-walled hollows during piercing on three-high mill

    International Nuclear Information System (INIS)

    Potapov, I.N.; Romantsev, B.A.; Shamanaev, V.I.; Popov, V.A.; Kharitonov, E.A.

    1975-01-01

    The results of investigations concerning the accuracy of the geometrical dimensions of thick-walled sleeves produced by piercing on a 100-ton MISiS three-high (trio) screw rolling mill are presented for three schemes of fixing and centering the rod. The use of a spherical thrust journal for the rod and of a long centering bushing makes it possible to reduce the non-uniformity of the sleeve wall thickness by 30-50%. It is established that thick-walled sleeves with accurate geometrical dimensions (wall thickness non-uniformity below 10%) can be produced if the sleeve-mandrel-rod system is highly rigid and the rod has a two- to three-fold stability margin over a length equal to that of the sleeve being pierced. It is expedient to carry out the piercing with increased feed angles (14-16 deg). Blanks were made from steel 12Kh1MF.

  10. Numerical method for the eigenvalue problem and the singular equation by using the multi-grid method and application to ordinary differential equation

    International Nuclear Information System (INIS)

    Kanki, Takashi; Uyama, Tadao; Tokuda, Shinji.

    1995-07-01

    In the numerical method to compute the matching data which are necessary for resistive MHD stability analyses, it is required to solve the eigenvalue problem and the associated singular equation. An iterative method is developed to solve the eigenvalue problem and the singular equation. In this method, the eigenvalue problem is replaced with an equivalent nonlinear equation and a singular equation is derived from Newton's method for the nonlinear equation. The multi-grid method (MGM), a high speed iterative method, can be applied to this method. The convergence of the eigenvalue and the eigenvector, and the CPU time in this method are investigated for a model equation. It is confirmed from the numerical results that this method is effective for solving the eigenvalue problem and the singular equation with numerical stability and high accuracy. It is shown by improving the MGM that the CPU time for this method is 50 times shorter than that of the direct method. (author)
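    To make the recasting of the eigenvalue problem as a nonlinear equation concrete, the sketch below applies Newton's method to the bordered system F(x, λ) = (A x − λ x, cᵀx − 1) = 0 for a small dense matrix. It is a minimal illustration only: the linear solves use plain dense algebra rather than the multi-grid acceleration described in the report, and the matrix, normalization vector and starting guess are made up.

    ```python
    import numpy as np

    def newton_eigenpair(A, x0, lam0, tol=1e-12, maxit=50):
        """Newton iteration for the bordered system
           F(x, lam) = [A x - lam x ; c^T x - 1] = 0,
        one standard way to recast an eigenvalue problem as a nonlinear equation.
        Plain dense linear algebra replaces the multi-grid inner solver here."""
        n = A.shape[0]
        c = x0 / np.dot(x0, x0)              # normalization functional (assumed choice)
        x, lam = x0.astype(float).copy(), float(lam0)
        for _ in range(maxit):
            F = np.concatenate([A @ x - lam * x, [c @ x - 1.0]])
            if np.linalg.norm(F) < tol:
                break
            J = np.zeros((n + 1, n + 1))     # Jacobian of the bordered system
            J[:n, :n] = A - lam * np.eye(n)
            J[:n, n] = -x
            J[n, :n] = c
            delta = np.linalg.solve(J, -F)
            x, lam = x + delta[:n], lam + delta[n]
        return lam, x

    A = np.diag([1.0, 3.0, 6.0]) + 0.1 * np.ones((3, 3))   # toy symmetric matrix
    lam, x = newton_eigenpair(A, x0=np.array([1.0, 0.2, 0.1]), lam0=1.0)
    print("converged eigenvalue:", lam, " reference:", np.linalg.eigvals(A))
    ```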

  11. Numerically stable, scalable formulas for parallel and online computation of higher-order multivariate central moments with arbitrary weights

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)]; Terriberry, Timothy B. [Xiph.Org Foundation, Arlington, VA (United States)]; Kolla, Hemanth [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)]; Bennett, Janine [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)]

    2016-03-29

    Formulas for incremental or parallel computation of second order central moments have long been known, and recent extensions of these formulas to univariate and multivariate moments of arbitrary order have been developed. Formulas such as these are of key importance in scenarios where incremental results are required and in parallel and distributed systems where communication costs are high. We survey these recent results and improve them with arbitrary-order, numerically stable one-pass formulas, which we further extend with weighted and compound variants. We also develop a generalized correction factor for standard two-pass algorithms that enables the maintenance of accuracy over nearly the full representable range of the input, avoiding the need for extended-precision arithmetic. We then empirically examine algorithm correctness for pairwise update formulas up to order four, as well as condition numbers and relative error bounds for eight different central moment formulas, each up to degree six, to address the trade-offs between numerical accuracy and speed of the various algorithms. Finally, we demonstrate the use of the most elaborate of the above formulas, employing the compound moments, in a practical large-scale scientific application.
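    The lowest-order special case of such formulas is the numerically stable one-pass update of a weighted mean and second central moment, together with the pairwise merge used for parallel accumulation. The sketch below shows that special case only (a Welford/West-style update and a Chan-style merge); it does not reproduce the paper's arbitrary-order or compound variants, and the test data are synthetic.

    ```python
    import numpy as np

    def update(state, x, w):
        """One-pass, numerically stable update of the weighted mean and of
        M2 = sum_i w_i (x_i - mean)^2 (Welford/West-style).  state = (W, mean, M2)."""
        W, mean, M2 = state
        W_new = W + w
        delta = x - mean
        mean = mean + (w / W_new) * delta
        M2 = M2 + w * delta * (x - mean)        # uses the updated mean
        return (W_new, mean, M2)

    def merge(a, b):
        """Pairwise merge of two partial results (Chan-style), the building block
        for parallel or distributed accumulation."""
        Wa, ma, M2a = a
        Wb, mb, M2b = b
        W = Wa + Wb
        delta = mb - ma
        return (W, ma + delta * Wb / W, M2a + M2b + delta**2 * Wa * Wb / W)

    rng = np.random.default_rng(0)
    x = rng.normal(5.0, 2.0, 10_000)
    w = rng.uniform(0.5, 1.5, 10_000)

    left = right = (0.0, 0.0, 0.0)
    for xi, wi in zip(x[:5000], w[:5000]):
        left = update(left, xi, wi)
    for xi, wi in zip(x[5000:], w[5000:]):
        right = update(right, xi, wi)

    W, mean, M2 = merge(left, right)
    print("one-pass :", mean, M2 / W)
    print("reference:", np.average(x, weights=w),
          np.average((x - np.average(x, weights=w))**2, weights=w))
    ```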

  12. Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas

    Science.gov (United States)

    Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi

    2016-06-01

    The interferometric coherence map derived from the cross-correlation of two co-registered complex synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications it can act as an independent information source or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there is considerably less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types, yet the properties of the coherence may differ for different resolutions and scene types. In this paper we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
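    A minimal version of such a model comparison is sketched below: candidate distributions are fitted by maximum likelihood to the coherence samples of one labelled class and ranked by log-likelihood and Kolmogorov-Smirnov statistic. The coherence values are synthetic stand-ins; real work would extract the pixels of each labelled region from the TanDEM-X coherence map.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic coherence samples in (0, 1) standing in for one labelled class
    # (e.g. "buildings"); real pixels would come from the InSAR coherence map.
    rng = np.random.default_rng(1)
    coh = rng.beta(5.0, 2.0, size=5000)

    candidates = {
        "gaussian": stats.norm,
        "rayleigh": stats.rayleigh,
        "weibull":  stats.weibull_min,
        "beta":     stats.beta,
        "nakagami": stats.nakagami,
    }

    for name, dist in candidates.items():
        if name == "beta":
            params = dist.fit(coh, floc=0, fscale=1)   # support fixed to (0, 1)
        elif name == "gaussian":
            params = dist.fit(coh)
        else:
            params = dist.fit(coh, floc=0)             # support starting at 0
        ll = np.sum(dist.logpdf(coh, *params))
        ks = stats.kstest(coh, dist.cdf, args=params).statistic
        print(f"{name:9s} log-likelihood = {ll:9.1f}   KS statistic = {ks:.4f}")
    ```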

  13. MODEL ACCURACY COMPARISON FOR HIGH RESOLUTION INSAR COHERENCE STATISTICS OVER URBAN AREAS

    Directory of Open Access Journals (Sweden)

    Y. Zhang

    2016-06-01

    Full Text Available The interferometric coherence map derived from the cross-correlation of two co-registered complex synthetic aperture radar (SAR) images is a reflection of the imaged targets. In many applications it can act as an independent information source or provide information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there is considerably less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, the existing work on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types, yet the properties of the coherence may differ for different resolutions and scene types. In this paper we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. The experiments on TanDEM-X data show that the Beta model performs better than the other distributions.

  14. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
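    For an action with only nearest-neighbour couplings, the recursive integration reduces the d-dimensional integral to repeated one-dimensional Gauss quadratures, i.e. to powers of a small "transfer matrix". The sketch below illustrates this for a toy periodic plane-rotor chain, which is closely related to, but not claimed to be identical with, the models treated in the paper; the kernel, coupling and dimension are assumed examples.

    ```python
    import numpy as np

    def rni_trace(kernel, d, m, a=-np.pi, b=np.pi):
        """Recursive numerical integration for a periodic nearest-neighbour chain:
           Z = ∫ dx_1 ... dx_d  prod_i K(x_i, x_{i+1}),  with x_{d+1} = x_1.
        An m-point Gauss-Legendre rule turns the d-dimensional integral into the
        trace of the m x m transfer matrix T raised to the d-th power."""
        t, w = np.polynomial.legendre.leggauss(m)
        x = 0.5 * (b - a) * t + 0.5 * (b + a)      # map nodes to [a, b]
        w = 0.5 * (b - a) * w
        sw = np.sqrt(w)
        T = sw[:, None] * kernel(x[:, None], x[None, :]) * sw[None, :]
        return np.trace(np.linalg.matrix_power(T, d))

    # Toy Boltzmann weight with nearest-neighbour coupling (assumed example).
    beta = 1.0
    kernel = lambda x, y: np.exp(beta * np.cos(x - y))

    for m in (4, 8, 16, 32):      # the error drops rapidly (roughly exponentially) in m
        print(m, rni_trace(kernel, d=10, m=m))
    ```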

  15. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany)]; Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics]; Hartung, Tobias [King's College, London (United Kingdom). Dept. of Mathematics]; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik]

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  16. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  17. High construal level can help negotiators to reach integrative agreements: The role of information exchange and judgement accuracy.

    Science.gov (United States)

    Wening, Stefanie; Keith, Nina; Abele, Andrea E

    2016-06-01

    In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.

  18. Numerical investigation of the inverse blackbody radiation problem

    International Nuclear Information System (INIS)

    Xin Tan, Guo-zhen Yang, Ben-yuan Gu

    1994-01-01

    A numerical algorithm for the inverse blackbody radiation problem, which is the determination of the temperature distribution of a thermal radiator (TDTR) from its total radiated power spectrum (TRPS), is presented, based on the general theory of amplitude-phase retrieval. With application of this new algorithm, the ill-posed nature of the Fredholm equation of the first kind can be largely overcome and a convergent solution to high accuracy can be obtained. By incorporation of the hybrid input-output algorithm into our algorithm, the convergent process can be substantially expedited and the stagnation problem of the solution can be averted. From model calculations it is found that the new algorithm can also provide a robust reconstruction of the TDTR from the noise-corrupted data of the TRPS. Therefore the new algorithm may offer a useful approach to solving the ill-posed inverse problem. 18 refs., 9 figs
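    For readers unfamiliar with the problem, the sketch below discretizes the Fredholm equation of the first kind W(ν) = ∫ a(T) B(ν, T) dT with the Planck kernel and recovers a synthetic temperature distribution using plain Tikhonov regularization. This is a textbook substitute used only to illustrate the ill-posedness; it is not the amplitude-phase-retrieval/hybrid input-output algorithm of the paper, and the grids, the test distribution and the regularization weight are assumed.

    ```python
    import numpy as np

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    def planck(nu, T):
        """Planck spectral radiance B(nu, T)."""
        return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

    # Discretize  W(nu) = ∫ a(T) B(nu, T) dT  on assumed grids.
    T = np.linspace(300.0, 2000.0, 120)              # temperature grid [K]
    nu = np.linspace(1e13, 3e14, 200)                # frequency grid [Hz]
    dT = T[1] - T[0]
    K = planck(nu[:, None], T[None, :]) * dT         # kernel matrix

    a_true = np.exp(-((T - 1200.0) / 150.0) ** 2)    # synthetic TDTR
    rng = np.random.default_rng(0)
    W = K @ a_true
    W_noisy = W * (1 + 0.01 * rng.standard_normal(W.size))

    # Tikhonov-regularized least squares (textbook substitute, assumed weight).
    lam = 1e-3 * np.linalg.norm(K, 2) ** 2
    a_rec = np.linalg.solve(K.T @ K + lam * np.eye(T.size), K.T @ W_noisy)
    print("relative reconstruction error:",
          np.linalg.norm(a_rec - a_true) / np.linalg.norm(a_true))
    ```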

  19. High-accuracy alignment based on atmospherical dispersion - technological approaches and solutions for the dual-wavelength transmitter

    International Nuclear Information System (INIS)

    Burkhard, Boeckem

    1999-01-01

    In the course of the progressive development of sophisticated geodetic systems utilizing electromagnetic waves in the visible or near-IR range, a more detailed knowledge of the propagation medium, and with it solutions to atmospherically induced limitations, will become important. An alignment system based on atmospheric dispersion, called a dispersometer, is a metrological solution to the atmospherically induced limitations in high-accuracy optical alignment and direction observations. In the dispersometer we use the dual-wavelength method for dispersive air to obtain refraction-compensated angle measurements, the detrimental impact of atmospheric turbulence notwithstanding. The principle of the dual-wavelength method utilizes atmospheric dispersion, i.e. the wavelength dependence of the refractive index. The difference angle between two light beams of different wavelengths, called the dispersion angle Δβ, is to first approximation proportional to the refraction angle: β_IR ≈ ν(β_blue − β_IR) = ν Δβ. This equation implies that the dispersion angle has to be measured at least 42 times more accurately than the desired accuracy of the refraction angle for the wavelengths used in the present dispersometer. This required accuracy constitutes one major difficulty for the instrumental performance in applying the dispersion effect. However, the dual-wavelength method can only be used successfully in an optimized transmitter-receiver combination. Beyond the above-mentioned resolution requirement for the detector, major difficulties in the instrumental realization arise from the availability of a suitable dual-wavelength laser light source, laser light modulation with a very high extinction ratio, and coaxial emission of mono-mode radiation at both wavelengths. Therefore, this paper focuses on the solutions for the dual-wavelength transmitter, introducing a new hardware approach and a complete re-design of the dual-wavelength transmitter concept proposed in [1].
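    A one-line consequence of the relation above: for ν ≈ 42 the dispersion angle must be resolved ν times more finely than the target refraction-angle accuracy. The snippet below only makes that scaling explicit; the target accuracy is an assumed illustrative number, not a specification from the paper.

    ```python
    # beta_IR ≈ nu * (beta_blue - beta_IR) = nu * dbeta, with nu ≈ 42 for this
    # wavelength pair, so the required dispersion-angle resolution is the target
    # refraction-angle accuracy divided by nu.
    nu = 42.0
    target_refraction_accuracy_arcsec = 0.1          # assumed target
    required_dispersion_accuracy = target_refraction_accuracy_arcsec / nu
    print(f"required dispersion-angle resolution ≈ {required_dispersion_accuracy * 1000:.1f} mas")
    ```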

  20. What do we mean by accuracy in geomagnetic measurements?

    Science.gov (United States)

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  1. Non-hydrostatic semi-elastic hybrid-coordinate SISL extension of HIRLAM. Part II: numerical testing

    OpenAIRE

    Rõõm, Rein; Männik, Aarne; Luhamaa, Andres; Zirk, Marko

    2007-01-01

    The semi-implicit semi-Lagrangian (SISL), two-time-level, non-hydrostatic numerical scheme, based on the non-hydrostatic, semi-elastic pressure-coordinate equations, is tested in model experiments with flow over given orography (elliptical hill, mountain ridge, system of successive ridges) in a rectangular domain with emphasis on the numerical accuracy and non-hydrostatic effect presentation capability. Comparison demonstrates good (in strong primary wave generation) to satisfactory (in weak ...

  2. FY 1991 report on the survey of geothermal development promotion. Attached data. Electromagnetic exploration (High accuracy MT method) (No.38 - West area of Mt. Aso); Chinetsu kaihatsu sokushin chosa chijo chosa hokokusho futai shiryo. 1991 nendo chinetsu kaihatsu sokushin chosa - Denji tansa (Koseido MT ho) hokokusho (No.38 Asosan seibu chiiki - Tenpu shiryo)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1991-12-01

    As a part of the survey of geothermal development promotion in FY 1991, electromagnetic exploration by the high accuracy MT method was conducted to acquire information on the geothermal structure in the west area of Mt. Aso, Kumamoto Prefecture. The detailed data were arranged as data attached to the report on the electromagnetic exploration. The attached data include the results of the 1D analysis (measured/analysed ρa-F charts and analytic structure drawings), the results of the 1D analysis (numerical list of the apparent resistivity analytic values and inverse analytic values), and the numerical list of the measured apparent resistivity values. (NEDO)

  3. Procedure to determine the two channel timing measurement accuracy and precision of a digital oscilloscope

    International Nuclear Information System (INIS)

    Johnson, M.; Matulik, M.

    1994-01-01

    The digital oscilloscope allows one to make numerous timing measurements, but just how good are those measurements? This document describes a procedure which can be used to determine the accuracy and precision to which a digital oscilloscope can make various two channel timing measurements

  4. Infinite occupation number basis of bosons: Solving a numerical challenge

    Science.gov (United States)

    Geißler, Andreas; Hofstetter, Walter

    2017-06-01

    In any bosonic lattice system, which is not dominated by local interactions and thus "frozen" in a Mott-type state, numerical methods have to cope with the infinite size of the corresponding Hilbert space even for finite lattice sizes. While it is common practice to restrict the local occupation number basis to Nc lowest occupied states, the presence of a finite condensate fraction requires the complete number basis for an exact representation of the many-body ground state. In this work we present a truncation scheme to account for contributions from higher number states. By simply adding a single coherent-tail state to this common truncation, we demonstrate increased numerical accuracy and the possible increase in numerical efficiency of this method for the Gutzwiller variational wave function and within dynamical mean-field theory.
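    The sketch below only illustrates the underlying truncation problem: the weight of a coherent state, the idealized local state of a perfect condensate, that is captured by a number basis cut off at Nc. The paper's remedy of adding a single coherent-tail state to the truncated basis is not reproduced here, and the value of the order parameter is an assumed example.

    ```python
    import numpy as np
    from scipy.special import gammaln

    def coherent_coeffs(alpha, n_max):
        """Number-state coefficients <n|alpha> of a coherent state up to n_max."""
        n = np.arange(n_max + 1)
        log_c = -0.5 * alpha**2 + n * np.log(alpha) - 0.5 * gammaln(n + 1)
        return np.exp(log_c)

    alpha = 2.0                              # mean local occupation <n> = alpha^2 = 4
    for n_c in (4, 6, 8, 12, 16):
        c = coherent_coeffs(alpha, n_c)
        norm_kept = np.sum(c**2)             # probability weight inside the truncation
        n_kept = np.sum(np.arange(n_c + 1) * c**2)
        print(f"Nc = {n_c:2d}   captured norm = {norm_kept:.6f}   <n> = {n_kept:.4f} (exact 4.0)")
    ```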

  5. Numerical simulation of the effects of variation of angle of attack and sweep angle on vortex breakdown over delta wings

    Science.gov (United States)

    Ekaterinaris, J. A.; Schiff, Lewis B.

    1990-01-01

    In the present investigation of the vortical flowfield structure over delta wings at high angles of attack, three-dimensional Navier-Stokes numerical simulations were conducted to predict the complex leeward flowfield characteristics; these encompass leading-edge separation, secondary separation, and vortex breakdown. Attention is given to the effect on solution accuracy of circumferential grid-resolution variations in the vicinity of the wing leading edge, as well as to the effect of turbulence modeling on the solutions. When a critical angle of attack was reached, bubble-type vortex breakdown was found. With a further increase in angle of attack, a change from bubble-type to spiral-type vortex breakdown was predicted by the numerical solution.

  6. High-order accurate numerical algorithm for three-dimensional transport prediction

    Energy Technology Data Exchange (ETDEWEB)

    Pepper, D W [Savannah River Lab., Aiken, SC]; Baker, A J

    1980-01-01

    The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First-estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. The predicted ground-level air concentration 56 km from the release point is within 38% of the experimentally measured value.
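    The fractional-step idea is to advance advection and diffusion in separate sub-steps. The sketch below does this for a 1D advection-diffusion equation, using first-order upwind and explicit central-difference sub-steps as simple stand-ins for the report's method-of-moments and cubic-spline operators; the grid, wind speed and diffusivity are arbitrary illustrative values.

    ```python
    import numpy as np

    # Fractional-step (operator-splitting) solution of  c_t + u c_x = D c_xx  in 1D
    # with periodic boundaries.
    nx, L = 200, 10_000.0            # grid cells, domain length [m] (assumed)
    dx = L / nx
    u, D = 5.0, 50.0                 # wind speed [m/s], diffusivity [m^2/s] (assumed)
    dt = min(0.5 * dx / u, 0.25 * dx**2 / D)

    x = (np.arange(nx) + 0.5) * dx
    c = np.exp(-((x - 2000.0) / 200.0) ** 2)   # initial plume

    for _ in range(500):
        # Step 1: advection (first-order upwind, u > 0).
        c = c - u * dt / dx * (c - np.roll(c, 1))
        # Step 2: diffusion (explicit central differences).
        c = c + D * dt / dx**2 * (np.roll(c, -1) - 2 * c + np.roll(c, 1))

    print("plume peak:", c.max(), "  total mass:", np.sum(c) * dx)
    ```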

  7. High diagnostic accuracy of the Sysmex XT-2000iV delta total nucleated cells on effusions for feline infectious peritonitis.

    Science.gov (United States)

    Giordano, Alessia; Stranieri, Angelica; Rossi, Gabriele; Paltrinieri, Saverio

    2015-06-01

    The ΔWBC (the ratio between the DIFF and BASO counts of the Sysmex XT-2000iV), hereafter defined as ΔTNC (total nucleated cells), is high in effusions due to feline infectious peritonitis (FIP), as cells are entrapped in fibrin clots formed in the BASO reagent. Similar clots form in the Rivalta's test, a method with high diagnostic accuracy for FIP. The objective of this study was to determine the diagnostic accuracy for FIP and the optimal cutoff of the ΔTNC. After a retrospective search of our database, the DIFF and BASO counts and the ΔTNC from cats with and without FIP were compared with each other. Sensitivity, specificity, and positive and negative likelihood ratios (LR+, LR-) were calculated. A ROC curve was designed to determine the cutoff giving the best sensitivity and specificity. Effusions from 20 FIP and 31 non-FIP cats were analyzed. The ΔTNC was significantly higher in FIP cats than in non-FIP cats, and a ΔTNC > 2.5 had 100% specificity. The ΔTNC has a high diagnostic accuracy for FIP-related effusions by providing an estimate of precipitable proteins, as the Rivalta's test does, in addition to the cell count. As fibrin clots result in falsely low BASO counts, the ΔTNC is preferable to the WBC count generated by the BASO channel alone in suspected FIP effusions. © 2015 American Society for Veterinary Clinical Pathology.

  8. High-order FDTD methods via derivative matching for Maxwell's equations with material interfaces

    International Nuclear Information System (INIS)

    Zhao Shan; Wei, G.W.

    2004-01-01

    This paper introduces a series of novel hierarchical implicit derivative matching methods to restore the accuracy of high-order finite-difference time-domain (FDTD) schemes of computational electromagnetics (CEM) with material interfaces in one (1D) and two spatial dimensions (2D). By making use of fictitious points, systematic approaches are proposed to locally enforce the physical jump conditions at material interfaces in a preprocessing stage, to arbitrarily high orders of accuracy in principle. While often limited by numerical instability, orders up to 16 and 12 are achieved, respectively, in 1D and 2D. Detailed stability analyses are presented for the present approach to examine the upper limit in constructing embedded FDTD methods. As natural generalizations of the high-order FDTD schemes, the proposed derivative matching methods automatically reduce to the standard FDTD schemes when material interfaces are absent. An interesting feature of the present approach is that it encompasses a variety of schemes of different orders in a single code. Another feature is that it can be robustly combined with other high-accuracy time-domain approaches, such as the multiresolution time-domain method and the local spectral time-domain method, to cope with material interfaces. Numerical experiments on both 1D and 2D problems are carried out to test the convergence, examine the stability, assess the efficiency, and explore the limitations of the proposed methods. It is found that, operating at their best capacity, the proposed high-order schemes can be over 2000 times more efficient than their fourth-order versions in 2D. In conclusion, the present work indicates that the proposed hierarchical derivative matching methods might lead to practical high-order schemes for the numerical solution of the time-domain Maxwell's equations with material interfaces.
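    For orientation, the sketch below is the standard second-order 1D FDTD (Yee) update in normalized units, i.e. the baseline scheme that the hierarchical derivative-matching methods extend to high order across interfaces; no material interface or jump-condition treatment is included, and the grid, source and Courant factor are arbitrary.

    ```python
    import numpy as np

    # Minimal standard 1D FDTD (Yee) update in vacuum with normalized units
    # (eps0 = mu0 = c0 = 1); interface handling is deliberately omitted.
    nx, nt = 400, 600
    dx = 1.0
    dt = 0.5 * dx                      # Courant factor 0.5

    Ez = np.zeros(nx)
    Hy = np.zeros(nx - 1)

    for n in range(nt):
        Hy += dt / dx * (Ez[1:] - Ez[:-1])
        Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])
        Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source

    print("peak |Ez| after propagation:", np.abs(Ez).max())
    ```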

  9. "Dilute-and-inject" multi-target screening assay for highly polar doping agents using hydrophilic interaction liquid chromatography high resolution/high accuracy mass spectrometry for sports drug testing.

    Science.gov (United States)

    Görgens, Christian; Guddat, Sven; Orlovius, Anne-Katrin; Sigmund, Gerd; Thomas, Andreas; Thevis, Mario; Schänzer, Wilhelm

    2015-07-01

    In the field of LC-MS, reversed phase liquid chromatography is the predominant method of choice for the separation of prohibited substances from various classes in sports drug testing. However, highly polar and charged compounds still represent a challenging task in liquid chromatography due to their difficult chromatographic behavior using reversed phase materials. A very promising approach for the separation of hydrophilic compounds is hydrophilic interaction liquid chromatography (HILIC). Despite its great potential and versatile advantages for the separation of highly polar compounds, HILIC is up to now not very common in doping analysis, although most manufacturers offer a variety of HILIC columns in their portfolio. In this study, a novel multi-target approach based on HILIC high resolution/high accuracy mass spectrometry is presented to screen for various polar stimulants, stimulant sulfo-conjugates, glycerol, AICAR, ethyl glucuronide, morphine-3-glucuronide, and myo-inositol trispyrophosphate after direct injection of diluted urine specimens. The usage of an effective online sample cleanup and a zwitterionic HILIC analytical column in combination with a new generation Hybrid Quadrupol-Orbitrap® mass spectrometer enabled the detection of highly polar analytes without any time-consuming hydrolysis or further purification steps, far below the required detection limits. The methodology was fully validated for qualitative and quantitative (AICAR, glycerol) purposes considering the parameters specificity; robustness (rRT  0.99); intra- and inter-day precision at low, medium, and high concentration levels (CV < 20%); limit of detection (stimulants and stimulant sulfo-conjugates < 10 ng/mL; norfenefrine; octopamine < 30 ng/mL; AICAR < 10 ng/mL; glycerol 100 μg/mL; ETG < 100 ng/mL); accuracy (AICAR 103.8-105.5%, glycerol 85.1-98.3% at three concentration levels) and ion suppression/enhancement effects.

  10. A contrastive study on the influences of radial and three-dimensional satellite gravity gradiometry on the accuracy of the Earth's gravitational field recovery

    International Nuclear Information System (INIS)

    Zheng Wei; Hsu Hou-Tse; Zhong Min; Yun Mei-Juan

    2012-01-01

    The accuracy of the Earth's gravitational field measured by the gravity field and steady-state ocean circulation explorer (GOCE) up to degree 250, as influenced by the radial gravity gradient V_zz and the three-dimensional gravity gradient V_ij from satellite gravity gradiometry (SGG), is contrastively demonstrated based on an analytical error model and on numerical simulation, respectively. Firstly, new analytical error models of the cumulative geoid height influenced by the radial gravity gradient V_zz and by the three-dimensional gravity gradient V_ij are established. Up to degree 250, the GOCE cumulative geoid height error measured by the radial gravity gradient V_zz is about 2.5 times higher than that measured by the three-dimensional gravity gradient V_ij. Secondly, the Earth's gravitational field from GOCE, complete up to degree 250, is recovered by numerical simulation using the radial gravity gradient V_zz and the three-dimensional gravity gradient V_ij, respectively. The results show that when the measurement error of the gravity gradient is 3 × 10^−12 /s², the cumulative geoid height errors using the radial gravity gradient V_zz and the three-dimensional gravity gradient V_ij are 12.319 cm and 9.295 cm at degree 250, respectively. The accuracy of the cumulative geoid height using the three-dimensional gravity gradient V_ij is improved by 30%-40% on average, compared with that using the radial gravity gradient V_zz, up to degree 250. Finally, by mutual verification of the analytical error model and the numerical simulation, the accuracies of the Earth's gravitational field recovery based on the radial and the three-dimensional gravity gradients show no substantial difference in order of magnitude. Therefore, it is feasible to develop in advance a radial cold-atom interferometric gradiometer with a measurement accuracy of 10^−13 /s² to 10^−15 /s² for precisely producing the next-generation GOCE Follow-On Earth gravity field.

  11. Accuracy Assessment of Different Digital Surface Models

    Directory of Open Access Journals (Sweden)

    Ugur Alganci

    2018-03-01

    Full Text Available Digital elevation models (DEMs), which can occur in the form of digital surface models (DSMs) or digital terrain models (DTMs), are widely used as important geospatial information sources for various remote sensing applications, including the precise orthorectification of high-resolution satellite images, 3D spatial analyses, multi-criteria decision support systems, and deformation monitoring. The accuracy of DEMs has direct impacts on specific calculations and process chains; therefore, it is important to select the most appropriate DEM by considering the aim, accuracy requirement, and scale of each study. In this research, DSMs obtained from a variety of satellite sensors were compared to analyze their accuracy and performance. For this purpose, freely available Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m, Shuttle Radar Topography Mission (SRTM) 30 m, and Advanced Land Observing Satellite (ALOS) 30 m resolution DSM data were obtained. Additionally, 3 m and 1 m resolution DSMs were produced from tri-stereo images from the SPOT 6 and Pleiades high-resolution (PHR 1A) satellites, respectively. Elevation reference data provided by the General Command of Mapping, the national mapping agency of Turkey—produced from 30 cm spatial resolution stereo aerial photos, with a 5 m grid spacing and ±3 m or better overall vertical accuracy at the 90% confidence interval (CI)—were used to perform accuracy assessments. Gross errors and water surfaces were removed from the reference DSM. The relative accuracies of the different DSMs were tested using a different number of checkpoints determined by different methods. In the first method, 25 checkpoints were selected from bare lands to evaluate the accuracies of the DSMs on terrain surfaces. In the second method, 1000 randomly selected checkpoints were used to evaluate the methods’ accuracies for the whole study area. In addition to the control point approach, vertical cross
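    Checkpoint-based vertical accuracy assessment of the kind described above usually reduces to computing error statistics of DSM-minus-reference elevation differences. The sketch below computes bias, RMSE and a 90% linear error from synthetic placeholder arrays; a real assessment would sample the DSM raster at the surveyed checkpoints instead.

    ```python
    import numpy as np

    # Placeholder checkpoint data; real work would read the DSM and the reference
    # checkpoints from GIS files and sample the raster at the checkpoint locations.
    rng = np.random.default_rng(3)
    z_ref = rng.uniform(800.0, 1200.0, 1000)                  # checkpoint elevations [m]
    z_dsm = z_ref + rng.normal(0.5, 2.8, z_ref.size)          # sampled DSM elevations [m]

    dz = z_dsm - z_ref
    bias = dz.mean()
    rmse = np.sqrt(np.mean(dz**2))
    le90 = np.percentile(np.abs(dz), 90)                      # 90% linear error
    print(f"bias = {bias:.2f} m, RMSE = {rmse:.2f} m, LE90 = {le90:.2f} m")
    ```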

  12. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interests between an analyst and a covered company do not bias forecast accuracy.

  13. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    Science.gov (United States)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in in-house simulators. Nevertheless, while the TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them handles trap-assisted generation-recombination (GR) noise accurately. In order to overcome this problem, we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the available simulators, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the SCILAB mathematical software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green-function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.
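    As background for the quantity being simulated, fluctuations of the carrier number trapped on a single level produce a Lorentzian GR spectrum, S(f) = 4⟨ΔN²⟩τ / (1 + (2πfτ)²). The snippet below simply evaluates this textbook expression for assumed trap parameters; building the full device-level spectrum from Langevin sources and Green-function responses, which is the paper's contribution, is not reproduced here.

    ```python
    import numpy as np

    # Single-level GR (Lorentzian) noise spectrum with assumed trap parameters.
    var_dN = 1e4            # variance of the trapped-carrier fluctuation (assumed)
    tau = 1e-6              # trap time constant [s] (assumed)
    f = np.logspace(2, 8, 7)
    S = 4 * var_dN * tau / (1 + (2 * np.pi * f * tau) ** 2)
    for fi, si in zip(f, S):
        print(f"f = {fi:10.0f} Hz   S = {si:.3e}")
    ```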

  14. A numerical integration approach suitable for simulating PWR dynamics using a microcomputer system

    International Nuclear Information System (INIS)

    Zhiwei, L.; Kerlin, T.W.

    1983-01-01

    It is attractive to use microcomputer systems to simulate nuclear power plant dynamics for the purpose of teaching and/or control system design. An analysis and a comparison of the feasibility of existing numerical integration methods have been made. The criteria for choosing the integration step with various numerical integration methods, including the matrix exponential method, are derived. In order to speed up the simulation, an approach is presented using the Newton recursion calculus, which avoids convergence limitations in choosing the integration step size; the step size is then limited by accuracy considerations. The advantages of this method have been demonstrated through a case study using a CBM model 8032 microcomputer to simulate a reduced-order linear PWR model under various perturbations. It has been proven theoretically and practically that the Runge-Kutta method and the Adams-Moulton method are not feasible. The matrix exponential method has good accuracy and fairly good speed. The Newton recursion method can save 3/4 to 4/5 of the computation time compared with the matrix exponential method, with reasonable accuracy. This method can be expanded to deal with nonlinear nuclear power plant models and higher order models as well.
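    For a linear plant model, the matrix exponential approach amounts to an exact discrete update, which is what allows large steps where explicit Runge-Kutta integration would go unstable. The sketch below shows that update for a made-up stiff 2×2 system, not a PWR model; the step size, input and matrices are illustrative only.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Exact zero-order-hold update for  x' = A x + B u  with constant u over dt:
    #   x(t+dt) = expm(A dt) x(t) + A^{-1} (expm(A dt) - I) B u
    A = np.array([[-1000.0,  0.0],
                  [    1.0, -0.1]])       # stiff toy system (assumed, not a PWR model)
    B = np.array([1.0, 0.0])
    u = 1.0
    dt = 0.05                             # far beyond the explicit stability limit ~2/1000

    Phi = expm(A * dt)
    Gamma = np.linalg.solve(A, (Phi - np.eye(2)) @ B)

    x = np.zeros(2)
    for _ in range(600):                  # simulate 30 s of transient
        x = Phi @ x + Gamma * u
    print("state after 30 s:", x, "  steady state:", -np.linalg.solve(A, B * u))
    ```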

  15. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    Science.gov (United States)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

    Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts, with the aim of identifying mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs, solving the fluid dynamics of the two interacting magmas and evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments were performed using a centrifuge, with basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) are on the order of tens of minutes. These results are in perfect agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that the integration of numerical simulations and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for the preparation of hazard mitigation during volcanic unrest.
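    The geochronometer rests on fitting an exponential decay of the concentration variance, σ²(t) = σ²₀ exp(−R t), to time-series experiments and then inverting it for the mixing time of a natural sample. The sketch below does exactly that with made-up numbers; the times, variances and natural-sample variance are placeholders, not the Campi Flegrei data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic "experimental" concentration variances at successive mixing times.
    rng = np.random.default_rng(0)
    t_exp = np.array([0.0, 5.0, 10.0, 20.0, 40.0])                     # minutes
    var_exp = np.exp(-0.08 * t_exp) * (1 + 0.05 * rng.standard_normal(5))

    decay = lambda t, v0, R: v0 * np.exp(-R * t)
    (v0, R), _ = curve_fit(decay, t_exp, var_exp, p0=(1.0, 0.1))

    var_natural = 0.15          # variance measured in erupted products (assumed)
    t_mixing = np.log(v0 / var_natural) / R
    print(f"decay rate R = {R:.3f} 1/min, inferred mixing-to-eruption time ≈ {t_mixing:.1f} min")
    ```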

  16. Numerical performance of the parabolized ADM formulation of general relativity

    International Nuclear Information System (INIS)

    Paschalidis, Vasileios; Hansen, Jakob; Khokhlov, Alexei

    2008-01-01

    In a recent paper [Vasileios Paschalidis, Phys. Rev. D 78, 024002 (2008).], the first coauthor presented a new parabolic extension (PADM) of the standard 3+1 Arnowitt, Deser, Misner (ADM) formulation of the equations of general relativity. By parabolizing first-order ADM in a certain way, the PADM formulation turns it into a well-posed system which resembles the structure of mixed hyperbolic-second-order parabolic partial differential equations. The surface of constraints of PADM becomes a local attractor for all solutions and all possible well-posed gauge conditions. This paper describes a numerical implementation of PADM and studies its accuracy and stability in a series of standard numerical tests. Numerical properties of PADM are compared with those of standard ADM and its hyperbolic Kidder, Scheel, Teukolsky (KST) extension. The PADM scheme is numerically stable, convergent, and second-order accurate. The new formulation has better control of the constraint-violating modes than ADM and KST.

  17. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.; Motwalli, Olaa Amin; Oliva, Romina; Jankovic, Boris R.; Medvedeva, Yulia; Ashoor, Haitham; Essack, Magbubah; Gao, Xin; Bajic, Vladimir B.

    2018-01-01

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.

  18. A novel method for improved accuracy of transcription factor binding site prediction

    KAUST Repository

    Khamis, Abdullah M.

    2018-03-20

    Identifying transcription factor (TF) binding sites (TFBSs) is important in the computational inference of gene regulation. Widely used computational methods of TFBS prediction based on position weight matrices (PWMs) usually have high false positive rates. Moreover, computational studies of transcription regulation in eukaryotes frequently require numerous PWM models of TFBSs due to a large number of TFs involved. To overcome these problems we developed DRAF, a novel method for TFBS prediction that requires only 14 prediction models for 232 human TFs, while at the same time significantly improves prediction accuracy. DRAF models use more features than PWM models, as they combine information from TFBS sequences and physicochemical properties of TF DNA-binding domains into machine learning models. Evaluation of DRAF on 98 human ChIP-seq datasets shows on average 1.54-, 1.96- and 5.19-fold reduction of false positives at the same sensitivities compared to models from HOCOMOCO, TRANSFAC and DeepBind, respectively. This observation suggests that one can efficiently replace the PWM models for TFBS prediction by a small number of DRAF models that significantly improve prediction accuracy. The DRAF method is implemented in a web tool and in a stand-alone software freely available at http://cbrc.kaust.edu.sa/DRAF.
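    For context, the PWM-based scanning that DRAF-type models improve upon scores each sequence window with summed log-odds and reports windows above a threshold; its weakness is the high false positive rate noted above. The sketch below shows that baseline with a made-up 6-bp motif and a toy sequence; it is not DRAF, and the matrix is not from HOCOMOCO or TRANSFAC.

    ```python
    import numpy as np

    alphabet = "ACGT"
    # Toy count matrix (rows A, C, G, T; columns are motif positions).
    counts = np.array([[8, 1, 9, 0, 2, 7],
                       [1, 0, 0, 1, 6, 1],
                       [0, 9, 1, 0, 1, 1],
                       [1, 0, 0, 9, 1, 1]], dtype=float)
    freqs = (counts + 0.25) / (counts.sum(axis=0) + 1.0)   # pseudocounts
    pwm = np.log2(freqs / 0.25)                            # log-odds vs uniform background

    def scan(seq, pwm, threshold=6.0):
        """Return (position, score) for every window scoring above threshold."""
        w = pwm.shape[1]
        idx = {c: i for i, c in enumerate(alphabet)}
        hits = []
        for p in range(len(seq) - w + 1):
            score = sum(pwm[idx[c], j] for j, c in enumerate(seq[p:p + w]))
            if score >= threshold:
                hits.append((p, round(float(score), 2)))
        return hits

    print(scan("TTAGATCATTTAGCTAACAGATCAGG", pwm))
    ```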

  19. A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.

    Science.gov (United States)

    Ling, Hong; Luo, Ercang; Dai, Wei

    2006-12-22

    Thermoacoustic prime movers can generate pressure oscillations without any moving parts through the self-excited thermoacoustic effect. The details of a numerical simulation methodology for thermoacoustic engines are presented in the paper. First, a four-port network method is used to build the transcendental equation of the complex frequency as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE). The comparison shows that the numerical simulation agrees with the experimental results with acceptable accuracy.
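    The core numerical task in such a scheme is finding a root of a transcendental equation in the complex frequency, whose real part then indicates growth (self-excitation) or decay. The sketch below applies a complex Newton iteration with a numerical derivative to a toy delayed-feedback characteristic equation; the equation and all parameter values are invented stand-ins, not the four-port network model of the paper.

    ```python
    import numpy as np

    # Toy transcendental characteristic equation f(s) = 0 with delayed feedback,
    # standing in for the four-port-network criterion: Re(s) > 0 means growth.
    omega0, zeta, g, tau = 2 * np.pi * 100.0, 0.01, 3e4, 7.5e-3   # assumed numbers
    f = lambda s: s**2 + 2 * zeta * omega0 * s + omega0**2 - g * np.exp(-s * tau)

    def newton_complex(f, s0, tol=1e-9, maxit=100):
        """Newton iteration in the complex plane with a central-difference derivative."""
        s = s0
        for _ in range(maxit):
            h = 1e-6 * (abs(s) + 1.0)
            df = (f(s + h) - f(s - h)) / (2 * h)
            step = f(s) / df
            s -= step
            if abs(step) < tol * abs(s):
                break
        return s

    s = newton_complex(f, s0=1.0 + 1j * omega0)
    print(f"growth rate = {s.real:.3f} 1/s, frequency = {s.imag / (2 * np.pi):.2f} Hz")
    ```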

  20. New perspectives for high accuracy SLR with second generation geodesic satellites

    Science.gov (United States)

    Lund, Glenn

    1993-01-01

    This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performances by means of alternative satellite concepts and an optimized 2-color system tradeoff. Long distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short pulse (less than 50 ps) dual wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors) the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus, exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several 10's of cm in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse, thus, results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot to shot basis, be accurately and unambiguously correlated to the satellite center of mass. This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return