High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh-number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors, Geophysical Journal International © 2012 RAS.
Furukawa, Masaru; Ohkawa, Yushiro; Matsuyama, Akinobu
2016-01-01
A high-accuracy numerical integration algorithm for charged particle motion is developed. The algorithm is based on Hamiltonian mechanics and operator decomposition. It is constructed to be time-reversal symmetric, and its order of accuracy can be increased to any order by a recurrence formula. One of its advantages is that it is an explicit method. An effective way to decompose the time evolution operator is examined; the Poisson tensor is decomposed and non-canonical variables are adopted. The algorithm is extended to the case of time-dependent fields by introducing an extended phase space. Numerical tests showing the performance of the algorithm are presented: one is pure cyclotron motion over a long time period, and the other is charged particle motion in a rapidly oscillating field. (author)
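The two ingredients the abstract names — a time-reversal-symmetric splitting step and a recurrence that raises its order — can be illustrated on a toy Hamiltonian. The sketch below is an assumption for illustration only (a harmonic oscillator with a Strang kick-drift-kick step and Yoshida's triple-jump composition), not the authors' actual Poisson-tensor decomposition in non-canonical variables.

```python
import math

def strang_step(q, p, dt):
    # Time-reversal-symmetric kick-drift-kick step for H = (p^2 + q^2) / 2.
    # Second-order accurate; composing it with reversed dt undoes the step exactly.
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    return q, p

def yoshida4_step(q, p, dt):
    # Recurrence raising the order: three symmetric 2nd-order substeps with
    # weights w1, w0, w1 cancel the leading error term, giving 4th order.
    c = 2.0 ** (1.0 / 3.0)
    w1 = 1.0 / (2.0 - c)
    w0 = -c / (2.0 - c)
    q, p = strang_step(q, p, w1 * dt)
    q, p = strang_step(q, p, w0 * dt)
    q, p = strang_step(q, p, w1 * dt)
    return q, p
```

The same recurrence can be applied again to the fourth-order step to reach sixth order, mirroring the "any order" claim; the scheme stays explicit throughout.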
Review of The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing
Bailey, David
2005-01-01
In the January 2002 edition of SIAM News, Nick Trefethen announced the '$100, 100-Digit Challenge'. In that note he presented ten easy-to-state but hard-to-solve problems of numerical analysis, and challenged readers to find each answer to ten-digit accuracy. Trefethen closed with the enticing comment: 'Hint: They're hard. If anyone gets 50 digits in total, I will be impressed.' This challenge obviously struck a chord in hundreds of numerical mathematicians worldwide, as 94 teams from 25 nations later submitted entries. Many of these submissions exceeded the target of 50 correct digits; in fact, 20 teams achieved a perfect score of 100 correct digits. Trefethen had offered $100 for the best submission. Given the overwhelming response, a generous donor (William Browning, founder of Applied Mathematics, Inc.) provided additional funds to provide a $100 award to each of the 20 winning teams. Soon after the results were out, four participants, each from a winning team, got together and agreed to write a book about the problems and their solutions. The team is truly international: Bornemann is from Germany, Laurie is from South Africa, Wagon is from the USA, and Waldvogel is from Switzerland. This book provides some mathematical background for each problem, and then shows in detail how each of them can be solved. In fact, multiple solution techniques are mentioned in each case. The book describes how to extend these solutions to much larger problems and much higher numeric precision (hundreds or thousands of digits of accuracy). The authors also show how to compute error bounds for the results, so that one can say with confidence that one's results are accurate to the level stated. Numerous numerical software tools are demonstrated in the process, including the commercial products Mathematica, Maple and Matlab. Computer programs that perform many of the algorithms mentioned in the book are provided, both in an appendix to the book and on a website. In the process, the
Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.
2013-09-18
The LU factorization is an important numerical algorithm for solving systems of linear equations in science and engineering and is a characteristic of many dense linear algebra computations. For example, it has become the de facto numerical algorithm implemented within the LINPACK benchmark to rank the most powerful supercomputers in the world, collected by the TOP500 website. Multicore processors continue to present challenges to the development of fast and robust numerical software due to the increasing levels of hardware parallelism and widening gap between core and memory speeds. In this context, the difficulty in developing new algorithms for the scientific community resides in the combination of two goals: achieving high performance while maintaining the accuracy of the numerical algorithm. This paper proposes a new approach for computing the LU factorization in parallel on multicore architectures, which not only improves the overall performance but also sustains the numerical quality of the standard LU factorization algorithm with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization due to its memory-bound characteristic as well as the atomicity of selecting the appropriate pivots. Our approach uses a parallel fine-grained recursive formulation of the panel factorization step and implements the update of the trailing submatrix with the tile algorithm. Based on conflict-free partitioning of the data and lockless synchronization mechanisms, our implementation lets the overall computation flow naturally without contention. The dynamic runtime system called QUARK is then able to schedule tasks with heterogeneous granularities and to transparently introduce algorithmic lookahead. The performance results of our implementation are competitive compared to the currently available software packages and libraries. For example
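As context for the panel-factorization discussion above, the numerical baseline whose quality the parallel formulation aims to preserve is plain LU with partial pivoting. A minimal serial, unblocked sketch (an illustrative reference implementation, not the tile/QUARK algorithm the abstract describes):

```python
def lu_partial_pivot(A):
    """Factor a square matrix (list of lists) so that P*A = L*U.

    Returns (perm, LU): perm is the row permutation, and LU packs L
    (strict lower triangle, unit diagonal implied) and U (upper triangle).
    """
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: move the largest-magnitude entry of column k
        # (on or below the diagonal) to the diagonal for stability.
        piv = max(range(k, n), key=lambda i: abs(LU[i][k]))
        if piv != k:
            LU[k], LU[piv] = LU[piv], LU[k]
            perm[k], perm[piv] = perm[piv], perm[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]  # multiplier, stored in L's slot
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]  # trailing-submatrix update
    return perm, LU
```

Note how the trailing-submatrix update (the inner double loop) dominates the flop count and parallelizes well, while the pivot search serializes the panel — exactly the bottleneck the paper's recursive panel formulation targets.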
Zhao, Y; Zimmermann, E; Wolters, B; Van Waasen, S; Huisman, J A; Treichel, A; Kemna, A
2013-01-01
Electrical impedance tomography (EIT) is gaining importance in the field of geophysics and there is increasing interest for accurate borehole EIT measurements in a broad frequency range (mHz to kHz) in order to study subsurface properties. To characterize weakly polarizable soils and sediments with EIT, high phase accuracy is required. Typically, long electrode cables are used for borehole measurements. However, this may lead to undesired electromagnetic coupling effects associated with the inductive coupling between the double wire pairs for current injection and potential measurement and the capacitive coupling between the electrically conductive shield of the cable and the electrically conductive environment surrounding the electrode cables. Depending on the electrical properties of the subsurface and the measured transfer impedances, both coupling effects can cause large phase errors that have typically limited the frequency bandwidth of field EIT measurements to the mHz to Hz range. The aim of this paper is to develop numerical corrections for these phase errors. To this end, the inductive coupling effect was modeled using electronic circuit models, and the capacitive coupling effect was modeled by integrating discrete capacitances in the electrical forward model describing the EIT measurement process. The correction methods were successfully verified with measurements under controlled conditions in a water-filled rain barrel, where a high phase accuracy of 0.8 mrad in the frequency range up to 10 kHz was achieved. The corrections were also applied to field EIT measurements made using a 25 m long EIT borehole chain with eight electrodes and an electrode separation of 1 m. The results of a 1D inversion of these measurements showed that the correction methods increased the measurement accuracy considerably. It was concluded that the proposed correction methods enlarge the bandwidth of the field EIT measurement system, and that accurate EIT measurements can now
Huang, Wei-Ren; Huang, Shih-Pu; Tsai, Tsung-Yueh; Lin, Yi-Jyun; Yu, Zong-Ru; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Young, Hong-Tsu
2017-09-01
Spherical lenses introduce spherical aberration and reduce optical performance; in practice, optical systems therefore apply a combination of spherical lenses for aberration correction, which increases the volume of the optical system. In modern optical systems, aspherical lenses have been widely used because of their high optical performance with fewer optical components. However, aspherical surfaces cannot be fabricated by the traditional full-aperture polishing process due to their varying curvature. Sub-aperture computer numerical control (CNC) polishing has been adopted for aspherical surface fabrication in recent years. However, CNC polishing normally introduces mid-spatial frequency (MSF) error, and MSF surface texture decreases the optical performance of high precision optical systems, especially for short-wavelength applications. Based on a bonnet-polishing CNC machine, this study focuses on the relationship between MSF surface texture and CNC polishing parameters, which include feed rate, head speed, track spacing and path direction. Power spectral density (PSD) analysis is used to judge the MSF level caused by those polishing parameters. The test results show that controlling the removal depth of a single polishing path through the feed rate, and avoiding same-direction polishing paths for higher total removal depth, can efficiently reduce the MSF error. To verify the polishing parameters, we divided a correction polishing process into several polishing runs with different path directions. Compared to a one-shot polishing run, the multi-direction path polishing plan produced better surface quality on the optics.
On the Numerical Accuracy of Spreadsheets
Alejandro C. Frery
2010-10-01
This paper discusses the numerical precision of five spreadsheets (Calc, Excel, Gnumeric, NeoOffice and Oleo) running on two hardware platforms (i386 and amd64) and on three operating systems (Windows Vista, Ubuntu Intrepid and Mac OS Leopard). The methodology consists of checking the number of correct significant digits returned by each spreadsheet when computing the sample mean, standard deviation, first-order autocorrelation, F statistic in ANOVA tests, linear and nonlinear regression and distribution functions. A discussion of the algorithms for pseudorandom number generation provided by these platforms is also conducted. We conclude that there is no safe choice among the spreadsheets assessed here: they all fail in nonlinear regression and they are not suited for Monte Carlo experiments.
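The "number of correct significant digits" used as the metric in assessments like this is commonly computed as the log relative error (LRE), and the sample-statistic failures typically trace back to catastrophic cancellation in one-pass formulas. The sketch below is an illustrative reconstruction of that methodology, not the authors' code:

```python
import math

def lre(computed, certified):
    # Log relative error: roughly the number of correct significant digits,
    # capped at the ~15-16 digits of IEEE double precision.
    if computed == certified:
        return 15.0
    return min(15.0, -math.log10(abs(computed - certified) / abs(certified)))

def variance_one_pass(xs):
    # Textbook sum-of-squares formula: subject to catastrophic cancellation
    # when the data carry a large common offset.
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return (sxx - sx * sx / n) / (n - 1)

def variance_two_pass(xs):
    # Subtract the mean first: numerically stable.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)
```

On data such as xs = [1e8 + i for i in (1.0, 2.0, 3.0)], whose sample variance is exactly 1, the two-pass formula stays accurate to nearly full precision while the one-pass formula typically loses most of its digits — the same failure mode the paper probes in spreadsheet statistics functions.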
Shin, J. K.; Choi, Y. D.
1992-01-01
The QUICKER scheme has several attractive properties. However, under highly convective conditions it produces overshoots, and possibly some oscillations, on each side of steps in the dependent variable when the flow is convected at an angle oblique to the grid line. Fortunately, it is possible to modify the QUICKER scheme using non-linear and linear functional relationships. Details of the development of the polynomial upwinding scheme are given in this paper, where it is seen that this non-linear scheme also has third-order accuracy. This polynomial upwinding scheme is used as the basis for the SHARPER and SMARTER schemes. Another revised scheme (QUICKUP) was developed by partial modification of the QUICKER scheme using the CDS and UPWIND schemes. These revised schemes are tested on well-known benchmark flows: two-dimensional pure convection across an oblique step, lid-driven cavity flow and buoyancy-driven cavity flow. Among the schemes that remain absolutely monotonic, without overshoot and oscillation, the QUICKUP scheme is the most accurate. In high-Reynolds-number lid-driven cavity flow, the SMARTER and SHARPER schemes retain lower computational cost than the QUICKER and QUICKUP schemes, but their computed velocity values are lower than those predicted by the QUICKER scheme, which is strongly affected by overshoot and undershoot. In buoyancy-driven cavity flow, the SMARTER, SHARPER and QUICKUP schemes also give acceptable results. (Author)
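The QUICK family of schemes that QUICKER, SHARPER, SMARTER and QUICKUP build on interpolates a cell-face value from two upstream nodes and one downstream node with a quadratic fit. A minimal sketch of the generic uniform-grid QUICK interpolation (an illustration of the underlying stencil, not the paper's specific variants):

```python
def quick_face(phi_U, phi_C, phi_D):
    """QUICK face value between nodes C and D on a uniform grid.

    phi_U: far upstream node, phi_C: upstream node, phi_D: downstream node.
    Quadratic upstream-weighted interpolation: exact for quadratic profiles
    (hence third-order truncation error), but not monotone near steps --
    which is why limited variants like SHARP/SMART-type schemes exist.
    """
    return (6.0 * phi_C + 3.0 * phi_D - phi_U) / 8.0
```

For phi(x) = x^2 sampled at x = -1, 0, 1 the face value at x = 0.5 comes out exactly 0.25, reflecting the quadratic exactness behind the scheme's third-order accuracy; near a step, the same stencil overshoots, which is the behavior the revised schemes suppress.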
On mesh refinement and accuracy of numerical solutions
Zhou, Hong; Peters, Maria; van Oosterom, Adriaan
1993-01-01
This paper investigates mesh refinement and its relation to the accuracy of the boundary element method (BEM) and the finite element method (FEM). To this end an isotropic homogeneous spherical volume conductor, for which the analytical solution is available, was used. The numerical results
Numerical accuracy of real inversion formulas for the Laplace transform
Masol, V.; Teugels, J.L.
2008-01-01
In this paper we investigate and compare a number of real inversion formulas for the Laplace transform. The focus is on the accuracy and applicability of the formulas for numerical inversion. In this contribution, we study the performance of the formulas for measures concentrated on a positive
Boyle, Michael; Brown, Duncan A; Pekowsky, Larne
2009-01-01
We study the effectiveness of stationary-phase approximated post-Newtonian waveforms currently used by ground-based gravitational-wave detectors to search for the coalescence of binary black holes, by comparing them to an accurate waveform obtained from numerical simulation of an equal-mass non-spinning binary black hole inspiral, merger and ringdown. We perform this study for the initial- and advanced-LIGO detectors. We find that overlaps between the templates and signal can be improved by integrating the matched filter to higher frequencies than currently used. We propose simple analytic frequency cutoffs for both initial and advanced LIGO, which achieve nearly optimal matches and can easily be extended to unequal-mass, spinning systems. We also find that templates that include terms in the phase evolution up to 3.5 post-Newtonian (pN) order are nearly always better, and rarely significantly worse, than the 2.0 pN templates currently in use. For initial LIGO we recommend a strategy using templates that include a recently introduced pseudo-4.0 pN term in the low-mass (M ≤ 35 M☉) region, and 3.5 pN templates allowing unphysical values of the symmetric mass ratio η above this. This strategy always achieves overlaps within 0.3% of the optimum for the data used here. For advanced LIGO we recommend a strategy using 3.5 pN templates up to M = 12 M☉, 2.0 pN templates up to M = 21 M☉, pseudo-4.0 pN templates up to 65 M☉, and 3.5 pN templates with unphysical η for higher masses. This strategy always achieves overlaps within 0.7% of the optimum for advanced LIGO.
High current high accuracy IGBT pulse generator
Nesterov, V.V.; Donaldson, A.R.
1995-05-01
A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.
Zhou, Xiafeng, E-mail: zhou-xf11@mails.tsinghua.edu.cn; Guo, Jiong, E-mail: guojiong12@tsinghua.edu.cn; Li, Fu, E-mail: lifu@tsinghua.edu.cn
2015-12-15
Highlights: • NEMs are innovatively applied to solve convection diffusion equation. • Stability, accuracy and numerical diffusion for NEM are analyzed for the first time. • Stability and numerical diffusion depend on the NEM expansion order and its parity. • NEMs have higher accuracy than both second order upwind and QUICK scheme. • NEMs with different expansion orders are integrated into a unified discrete form. - Abstract: The traditional finite difference method or finite volume method (FDM or FVM) is used for HTGR thermal-hydraulic calculation at present. However, both FDM and FVM require fine mesh sizes to achieve the desired precision and thus result in a limited efficiency. Therefore, a more efficient and accurate numerical method needs to be developed. The nodal expansion method (NEM) can achieve high accuracy even on coarse meshes in reactor physics analysis, so that the number of spatial meshes and the computational cost can be largely decreased. Because of its higher efficiency and accuracy, NEM can be innovatively applied to thermal-hydraulic calculation. In this paper, NEMs with different orders of basis functions are successfully developed and applied to the multi-dimensional steady convection diffusion equation. Numerical results show that NEMs with third or higher order basis functions can track the reference solutions very well and are superior to the second order upwind scheme and the QUICK scheme. However, false diffusion and unphysical oscillation behavior are discovered for NEMs. To explain the reasons for the above-mentioned behaviors, the stability, accuracy and numerical diffusion properties of NEM are analyzed by Fourier analysis, and by comparing with exact solutions of the difference and differential equations. The theoretical analysis results show that the accuracy of NEM increases with the expansion order. However, the stability and numerical diffusion properties depend not only on the order of basis functions but also on the parity of
Learning linear spatial-numeric associations improves accuracy of memory for numbers
Clarissa Ann Thompson
2016-01-01
Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children's representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of the children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in the development of numeric recall accuracy.
High Accuracy Transistor Compact Model Calibrations
Hembree, Charles E. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Mar, Alan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Robertson, Perry J. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of uncertainties in those margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
When Lagrangian stochastic models for turbulent dispersion are applied to complex flows, some type of ad hoc intervention is almost always necessary to eliminate unphysical behavior in the numerical solution. This paper discusses numerical considerations when solving the Langevin-based particle velo...
High accuracy FIONA-AFM hybrid imaging
Fronczek, D.N.; Quammen, C.; Wang, H.; Kisker, C.; Superfine, R.; Taylor, R.; Erie, D.A.; Tessmer, I.
2011-01-01
Multi-protein complexes are ubiquitous and play essential roles in many biological mechanisms. Single molecule imaging techniques such as electron microscopy (EM) and atomic force microscopy (AFM) are powerful methods for characterizing the structural properties of multi-protein and multi-protein-DNA complexes. However, a significant limitation to these techniques is the ability to distinguish different proteins from one another. Here, we combine high resolution fluorescence microscopy and AFM (FIONA-AFM) to allow the identification of different proteins in such complexes. Using quantum dots as fiducial markers in addition to fluorescently labeled proteins, we are able to align fluorescence and AFM information to ≥8 nm accuracy. This accuracy is sufficient to identify individual fluorescently labeled proteins in most multi-protein complexes. We investigate the limitations of localization precision and accuracy in fluorescence and AFM images separately and their effects on the overall registration accuracy of FIONA-AFM hybrid images. This combination of the two orthogonal techniques (FIONA and AFM) opens a wide spectrum of possible applications to the study of protein interactions, because AFM can yield high resolution (5-10 nm) information about the conformational properties of multi-protein complexes and the fluorescence can indicate spatial relationships of the proteins in the complexes. -- Research highlights: → Integration of fluorescent signals in AFM topography with high (<10 nm) accuracy. → Investigation of limitations and quantitative analysis of fluorescence-AFM image registration using quantum dots. → Fluorescence center tracking and display as localization probability distributions in AFM topography (FIONA-AFM). → Application of FIONA-AFM to a biological sample containing damaged DNA and the DNA repair proteins UvrA and UvrB conjugated to quantum dots.
High energy gravitational scattering: a numerical study
Marchesini, Giuseppe
2008-01-01
The S-matrix in gravitational high energy scattering is computed from the region of large impact parameters b down to the regime where classical gravitational collapse is expected to occur. By solving the equation of an effective action introduced by Amati, Ciafaloni and Veneziano we find that the perturbative expansion around the leading eikonal result diverges at a critical value, signalling the onset of a new regime. We then discuss the main features of our explicitly unitary S-matrix down to the Schwarzschild radius R = 2G s^(1/2), where it diverges at a critical value b ~ 2.22 R of the impact parameter. The nature of the singularity is studied with particular attention to the scaling behaviour of various observables at the transition. The numerical approach is validated by reproducing the known exact solution in the axially symmetric case to high accuracy.
Busck, Jens; Heiselberg, Henning
2004-01-01
We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 mm at 10 m and 1 cm at 100 m. We use a high-sensitivity, fast, intensified CCD camera and a passively Q-switched 32.4 kHz pulsed green Nd:YAG laser at 532 nm. The CCD has 752x582 pixels. Camera...
Increased-accuracy numerical modeling of electron-optical systems with space-charge
Sveshnikov, V.
2011-01-01
This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.
High accuracy satellite drag model (HASDM)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Fast and High Accuracy Wire Scanner
Koujili, M; Koopman, J; Ramos, D; Sapinski, M; De Freitas, J; Ait Amira, Y; Djerdir, A
2009-01-01
Scanning of a high intensity particle beam imposes challenging requirements on a Wire Scanner system. It is expected to reach a scanning speed of 20 m/s with a position accuracy of the order of 1 μm. In addition a timing accuracy better than 1 millisecond is needed. The adopted solution consists of a fork holding a wire rotating by a maximum of 200°. Fork, rotor and angular position sensor are mounted on the same axis and located in a chamber connected to the beam vacuum. The requirements imply the design of a system with extremely low vibration, vacuum compatibility, and radiation and temperature tolerance. The adopted solution consists of a rotary brushless synchronous motor with the permanent magnet rotor installed inside the vacuum chamber and the stator installed outside. The accurate position sensor will be mounted on the rotary shaft inside the vacuum chamber, and has to resist a bake-out temperature of 200°C and ionizing radiation up to a dozen kGy/year. A digital feedback controller allows maxi...
Determination of Solution Accuracy of Numerical Schemes as Part of Code and Calculation Verification
Blottner, F.G.; Lopez, A.R.
1998-10-01
This investigation is concerned with the accuracy of numerical schemes for solving partial differential equations used in science and engineering simulation codes. Richardson extrapolation methods for steady and unsteady problems with structured meshes are presented as part of the verification procedure to determine code and calculation accuracy. The local truncation error determination of a numerical difference scheme is shown to be a significant component of the verification procedure as it determines the consistency of the numerical scheme, the order of the numerical scheme, and the restrictions on the mesh variation with a non-uniform mesh. Generation of a series of co-located, refined meshes with the appropriate variation of mesh cell size is investigated and is another important component of the verification procedure. The importance of mesh refinement studies is shown to be more significant than just a procedure to determine solution accuracy. It is suggested that mesh refinement techniques can be developed to determine consistency of numerical schemes and to determine if governing equations are well posed. The present investigation provides further insight into the conditions and procedures required to effectively use Richardson extrapolation with mesh refinement studies to achieve confidence that simulation codes are producing accurate numerical solutions.
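The mesh-refinement procedure described above can be sketched numerically: given solutions on three systematically refined meshes, the observed order of accuracy and a Richardson-extrapolated estimate of the exact solution follow directly. The helper names and the manufactured error model below are illustrative, not taken from the report.

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed order of accuracy p from solutions on three
    systematically refined meshes with refinement ratio r."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_fine, f_medium, p, r=2.0):
    """Richardson-extrapolated estimate of the mesh-converged solution."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Manufactured example: a second-order scheme with f(h) = 1 + 0.5*h**2.
h = 0.1
f1 = 1 + 0.5*(4*h)**2   # coarse mesh
f2 = 1 + 0.5*(2*h)**2   # medium mesh
f3 = 1 + 0.5*h**2       # fine mesh

p = observed_order(f1, f2, f3)              # recovers the order 2
exact = richardson_extrapolate(f3, f2, p)   # recovers the exact value 1
```

If the observed order p disagrees with the formal order of the scheme, the verification procedure flags an inconsistency in the code or in the mesh sequence.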
Miller, Mark
2005-01-01
I discuss the accuracy requirements on numerical relativity calculations of inspiraling compact object binaries whose extracted gravitational waveforms are to be used as templates for matched filtering signal extraction and physical parameter estimation in modern interferometric gravitational wave detectors. Using a post-Newtonian point particle model for the premerger phase of the binary inspiral, I calculate the maximum allowable errors for the mass and relative velocity and positions of the binary during numerical simulations of the binary inspiral. These maximum allowable errors are compared to the errors of state-of-the-art numerical simulations of multiple-orbit binary neutron star calculations in full general relativity, and are found to be smaller by several orders of magnitude. A post-Newtonian model for the error of these numerical simulations suggests that adaptive mesh refinement coupled with second-order accurate finite difference codes will not be able to robustly obtain the accuracy required for reliable gravitational wave extraction on Terabyte-scale computers. I conclude that higher-order methods (higher-order finite difference methods and/or spectral methods) combined with adaptive mesh refinement and/or multipatch technology will be needed for robustly accurate gravitational wave extraction from numerical relativity calculations of binary coalescence scenarios.
Testing the accuracy and stability of spectral methods in numerical relativity
Boyle, Michael; Lindblom, Lee; Pfeiffer, Harald P.; Scheel, Mark A.; Kidder, Lawrence E.
2007-01-01
The accuracy and stability of the Caltech-Cornell pseudospectral code is evaluated using the Kidder, Scheel, and Teukolsky (KST) representation of the Einstein evolution equations. The basic 'Mexico City tests' widely adopted by the numerical relativity community are adapted here for codes based on spectral methods. Exponential convergence of the spectral code is established, apparently limited only by numerical roundoff error or by truncation error in the time integration. A general expression for the growth of errors due to finite machine precision is derived, and it is shown that this limit is achieved here for the linear plane-wave test.
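The exponential ("spectral") convergence established in this test, limited only by roundoff, can be illustrated with a toy Fourier spectral derivative. The example below is a standard demonstration on a smooth periodic function, not the KST evolution system itself.

```python
import numpy as np

def spectral_derivative_error(N):
    """Max pointwise error of the Fourier spectral derivative of exp(sin x)
    computed from N equispaced samples on [0, 2*pi)."""
    x = 2*np.pi*np.arange(N)/N
    u = np.exp(np.sin(x))
    ik = 1j * np.fft.fftfreq(N, d=1.0/N)      # i times integer wavenumbers
    du = np.real(np.fft.ifft(ik * np.fft.fft(u)))
    return np.max(np.abs(du - np.cos(x)*u))   # exact derivative: cos(x) e^{sin x}

errors = {N: spectral_derivative_error(N) for N in (8, 16, 32, 64)}
# The error falls off exponentially with N until machine roundoff is reached.
```

In contrast, a p-th order finite difference scheme gains only a fixed factor 2^p per doubling of N, which is why the spectral code's convergence curve flattens out only at the roundoff floor.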
Hybrid RANS-LES using high order numerical methods
Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael
2017-11-01
Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separation flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.
Electron ray tracing with high accuracy
Saito, K.; Okubo, T.; Takamoto, K.; Uno, Y.; Kondo, M.
1986-01-01
An electron ray tracing program is developed to investigate the overall geometrical and chromatic aberrations in electron optical systems. The program also computes aberrations due to manufacturing errors in lenses and deflectors. Computation accuracy is improved by (1) calculating electrostatic and magnetic scalar potentials using the finite element method with third-order isoparametric elements, and (2) solving the modified ray equation which the aberrations satisfy. Computation accuracy of 4 nm is achieved for calculating optical properties of the system with an electrostatic lens
Numerical models for high beta magnetohydrodynamic flow
Brackbill, J.U.
1987-01-01
The fundamentals of numerical magnetohydrodynamics for highly conducting, high-beta plasmas are outlined. The discussions emphasize the physical properties of the flow, and how elementary concepts in numerical analysis can be applied to the construction of finite difference approximations that capture these features. The linear and nonlinear stability of explicit and implicit differencing in time is examined, the origin and effect of numerical diffusion in the calculation of convective transport is described, and a technique for maintaining solenoidality in the magnetic field is developed. Many of the points are illustrated by numerical examples. The techniques described are applicable to the time-dependent, high-beta flows normally encountered in magnetically confined plasmas, plasma switches, and space and astrophysical plasmas. 40 refs
Pradipto; Purqon, Acep
2017-07-01
The Lattice Boltzmann Method (LBM) is a novel method for simulating fluid dynamics. Nowadays, applications of LBM range from incompressible flow and flow in porous media to microflows. The most common collision model for LBM is BGK, with a constant single relaxation time τ. However, BGK suffers from numerical instabilities, which can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes have been implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for which simulations converge. The accuracy analysis was done by comparing the velocity profiles with the benchmark results of Ghia et al. and by calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK, with similar accuracy. The maximum Reynolds numbers that converge are 3200 for BGK and 7500 for MRT, respectively.
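For reference, a single-node BGK collision on the D2Q9 lattice looks as follows: the distributions relax toward a local equilibrium with one relaxation time τ, which is the scheme whose instabilities motivate MRT. This is a minimal sketch of the collision operator only, not the cavity solver used in the study.

```python
import numpy as np

# D2Q9 lattice: standard weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Second-order equilibrium distribution for density rho and velocity u."""
    cu = c @ u
    return w * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def bgk_collide(f, tau):
    """Single-relaxation-time (BGK) collision at one lattice node."""
    rho = f.sum()
    u = (f[:, None] * c).sum(axis=0) / rho
    return f - (f - equilibrium(rho, u)) / tau

# Start from a slightly perturbed equilibrium and apply one collision.
rng = np.random.default_rng(0)
f = equilibrium(1.0, np.array([0.05, 0.02])) * (1 + 0.01*rng.standard_normal(9))
f_new = bgk_collide(f, tau=0.8)
# The collision conserves mass and momentum exactly at the node.
```

MRT replaces the scalar 1/τ with a matrix that relaxes each moment at its own rate, which is what buys the extra stability reported above.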
Numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp
Hara, Tatsuhiko; Nishimura, Naoki
2011-12-01
We perform numerical experiments to investigate the accuracy of broad-band moment magnitude, Mwp. We conduct these experiments by measuring Mwp from synthetic seismograms and comparing the resulting values to the moment magnitudes used in the calculation of the synthetic seismograms. In the numerical experiments using point sources, we have found that there is a significant dependence of Mwp on focal mechanisms, and that depth phases have a large impact on Mwp estimates, especially for large shallow earthquakes. Numerical experiments using line sources suggest that the effects of source finiteness and rupture propagation on Mwp estimates are on the order of 0.2 magnitude units for vertical fault planes with pure dip-slip mechanisms and 45° dipping fault planes with pure dip-slip (thrust) mechanisms, but that the dependence is small for strike-slip events on a vertical fault plane. Numerical experiments for huge thrust faulting earthquakes on a fault plane with a shallow dip angle suggest that the Mwp estimates do not saturate in the moment magnitude range between 8 and 9, although they are underestimates. Our results are consistent with previous studies that compared Mwp estimates to moment magnitudes calculated from seismic moment tensors obtained by analyses of observed data.
Liang, Fayun; Chen, Haibing; Huang, Maosong
2017-07-01
To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time domain numerical analysis were implemented on the OpenSees simulation platform. The standard mesh of the three-dimensional soil column was chosen so as to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils with different viscous damping matrices according to the shear velocities, as the soil properties were significantly different. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.
High accuracy in silico sulfotransferase models.
Cook, Ian; Wang, Ting; Falany, Charles N; Leyh, Thomas S
2013-11-29
Predicting enzymatic behavior in silico is an integral part of our efforts to understand biology. Hundreds of millions of compounds lie in targeted in silico libraries waiting for their metabolic potential to be discovered. In silico "enzymes" capable of accurately determining whether compounds can inhibit or react is often the missing piece in this endeavor. This problem has now been solved for the cytosolic sulfotransferases (SULTs). SULTs regulate the bioactivities of thousands of compounds--endogenous metabolites, drugs and other xenobiotics--by transferring the sulfuryl moiety (SO3) from 3'-phosphoadenosine 5'-phosphosulfate to the hydroxyls and primary amines of these acceptors. SULT1A1 and 2A1 catalyze the majority of sulfation that occurs during human Phase II metabolism. Here, recent insights into the structure and dynamics of SULT binding and reactivity are incorporated into in silico models of 1A1 and 2A1 that are used to identify substrates and inhibitors in a structurally diverse set of 1,455 high value compounds: the FDA-approved small molecule drugs. The SULT1A1 models predict 76 substrates. Of these, 53 were known substrates. Of the remaining 23, 21 were tested, and all were sulfated. The SULT2A1 models predict 22 substrates, 14 of which are known substrates. Of the remaining 8, 4 were tested, and all are substrates. The models proved to be 100% accurate in identifying substrates and made no false predictions at Kd thresholds of 100 μM. In total, 23 "new" drug substrates were identified, and new linkages to drug inhibitors are predicted. It now appears to be possible to accurately predict Phase II sulfonation in silico.
High accuracy autonomous navigation using the global positioning system (GPS)
Truong, Son H.; Hart, Roger C.; Shoan, Wendy C.; Wood, Terri; Long, Anne C.; Oza, Dipak H.; Lee, Taesul
1997-01-01
The application of global positioning system (GPS) technology to the improvement of the accuracy and economy of spacecraft navigation is reported. High-accuracy autonomous navigation algorithms are currently being qualified in conjunction with the GPS attitude determination flyer (GADFLY) experiment for the small satellite technology initiative Lewis spacecraft. Preflight performance assessments indicated that these algorithms are able to provide a real time total position accuracy of better than 10 m and a velocity accuracy of better than 0.01 m/s, with selective availability at typical levels. It is expected that the position accuracy will be improved to 2 m if corrections are provided by the GPS wide area augmentation system.
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units
Qingzhong Cai
2016-06-01
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future with a predicted accuracy of 5 × 10−6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and higher order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
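The idea behind Kalman-filter-based calibration can be reduced to a one-state toy problem: estimating a constant gyro bias from noisy rate readings taken while the sensor is nominally at rest. The numbers and setup below are hypothetical; the paper's filter couples 51 states and many more error terms.

```python
import numpy as np

# Hypothetical stationary test: the gyro should read zero, so the readings
# are the (unknown) constant bias plus measurement noise.
rng = np.random.default_rng(1)
true_bias = 0.03                                      # deg/h, assumed value
meas = true_bias + 0.5 * rng.standard_normal(2000)    # noisy rate readings

x, P = 0.0, 1.0    # state estimate and its variance (vague prior)
R = 0.25           # measurement noise variance
for z in meas:
    K = P / (P + R)          # Kalman gain (constant state: no predict step)
    x = x + K * (z - x)      # measurement update
    P = (1 - K) * P          # posterior variance shrinks with each update

# x converges toward the true bias as measurements accumulate.
```

The full calibration problem works the same way, except the state vector stacks all scale factor, misalignment, g-sensitivity and lever arm errors, and the turntable maneuvers are chosen so that every state becomes observable.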
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using k-mer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
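The fast distance estimation by k-mer counting can be sketched as follows: two sequences that share many length-k substrings are likely closely related, and no alignment is needed to see it. The distance formula here is a simplified stand-in, not MUSCLE's exact definition.

```python
from collections import Counter

def kmer_distance(seq_a, seq_b, k=3):
    """Crude k-mer counting distance between two sequences, in the spirit
    of MUSCLE's alignment-free distance estimation (illustrative formula)."""
    count_a = Counter(seq_a[i:i+k] for i in range(len(seq_a) - k + 1))
    count_b = Counter(seq_b[i:i+k] for i in range(len(seq_b) - k + 1))
    shared = sum(min(count_a[m], count_b[m]) for m in count_a)
    return 1.0 - shared / min(sum(count_a.values()), sum(count_b.values()))

d_close = kmer_distance("MKVLITGAGSGIG", "MKVLITGAGSGLG")  # one substitution
d_far = kmer_distance("MKVLITGAGSGIG", "QQWERTYPASDFG")    # unrelated sequence
# Similar sequences give a much smaller distance than unrelated ones.
```

Because counting k-mers is linear in sequence length, such distances can be computed for thousands of sequences far faster than pairwise alignments, which is what makes the progressive stage scale.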
Evaluate the accuracy of the numerical solution of hydrogeological problems of mass transfer
Yevhrashkina G.P.
2014-12-01
In hydrogeological tasks that quantify aquifer pollution, errors begin to accumulate from the moment the regime observation network, the source of information on groundwater pollution, is organized, and they affect the migration options used in later prognosis calculations. An optimal element of the regime observation network should consist of three drill holes along the groundwater flow at equal distances from one another, and three drill holes transverse to the flow, also at equal distances. If the placement of the observation drill holes coincides with the streamline along which the direct migration task will then be solved, the error will be minimal. The theoretical basis and results of numerical experiments are presented to assess the accuracy of direct predictive tasks for the planned migration of groundwater in the zone of full water saturation. For the vadose zone, we consider problems of vertical salt and moisture transport. All studies were performed by comparing the results of fundamental and approximate solutions over a wide range of process characteristics, which are discussed in relation to the ecological and hydrogeological conditions of mining regions, using the example of the Western Donbass.
Ko, P.; Kurosawa, S.
2014-03-01
The understanding and accurate prediction of the flow behaviour related to cavitation and pressure fluctuation in a Kaplan turbine are important to design work that enhances turbine performance, including the elongation of the operational life span and the improvement of turbine efficiency. In this paper, a high accuracy prediction method for turbine and cavitation performance, based on the entire flow passage of a Kaplan turbine, is presented and evaluated. The two-phase flow field is predicted by solving Reynolds-averaged Navier-Stokes equations with a volume of fluid method tracking the free surface, combined with a Reynolds stress model. The growth and collapse of cavitation bubbles are modelled by the modified Rayleigh-Plesset equation. The prediction accuracy is evaluated by comparing with model test results for an Ns 400 Kaplan model turbine. As a result, the experimentally measured data, including turbine efficiency, cavitation performance, and pressure fluctuation, are accurately predicted. Furthermore, the cavitation occurrence on the runner blade surface and its influence on the hydraulic loss of the flow passage are discussed. The evaluated prediction method for the turbine flow and performance is introduced to facilitate future design and research work on Kaplan type turbines.
High accuracy wavelength calibration for a scanning visible spectrometer
Scotti, Filippo; Bell, Ronald E. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2010-10-15
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ≈0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (≈0.005 Å) is possible, allowing absolute velocity measurements within ≈0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
Taylor bubbles at high viscosity ratios: experiments and numerical simulations
Hewakandamby, Buddhika; Hasan, Abbas; Azzopardi, Barry; Xie, Zhihua; Pain, Chris; Matar, Omar
2015-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube, often occurring in gas-liquid slug flows in many industrial applications, particularly oil and gas production. The objective of this study is to investigate the fluid dynamics of a three-dimensional Taylor bubble rising in highly viscous silicone oil in a vertical pipe. An adaptive unstructured mesh modelling framework is adopted here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a 'volume of fluid'-type method for interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Experimental results for the Taylor bubble shape and rise velocity are presented, together with numerical results for the dynamics of the bubbles. A comparison of the simulation predictions with experimental data available in the literature is also presented to demonstrate the capabilities of our numerical method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
NINJA: Java for High Performance Numerical Computing
José E. Moreira
2002-01-01
When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science
Pop, Florin
2014-01-01
Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
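The building block behind several of the surveyed methods is plain Monte Carlo integration: an integral is estimated from random samples, with a statistical error shrinking as 1/√n regardless of dimension. The example below is generic, not taken from the paper.

```python
import random

random.seed(42)

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] from n uniform samples,
    returning the estimate and its standard error."""
    samples = [f(random.uniform(a, b)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean)**2 for s in samples) / (n - 1)
    return (b - a) * mean, (b - a) * (var / n) ** 0.5

# Toy integrand with known answer: integral of x^2 over [0, 1] is 1/3.
est, err = mc_integrate(lambda x: x*x, 0.0, 1.0, 100_000)
```

In HEP event generators the same recipe is applied in high-dimensional phase space, where the dimension-independent 1/√n error is exactly what makes Monte Carlo the only practical option.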
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper a method is presented for the fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation enabling displacement of characteristic points from the optical measurement to their matches from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
High-accuracy measurements of the normal specular reflectance
Voarino, Philippe; Piombini, Herve; Sabary, Frederic; Marteau, Daniel; Dubard, Jimmy; Hameury, Jacques; Filtz, Jean Remy
2008-01-01
The French Laser Megajoule (LMJ) is designed and constructed by the French Commissariat à l'Énergie Atomique (CEA). Its amplifying section needs highly reflective multilayer mirrors for the flash lamps. To monitor and improve the coating process, the reflectors have to be characterized to high accuracy. The described spectrophotometer is designed to measure normal specular reflectance with high repeatability by using a small spot size of 100 μm. Results are compared with ellipsometric measurements. The instrument can also perform spatial characterization to detect coating nonuniformity.
High accuracy 3D electromagnetic finite element analysis
Nelson, E.M.
1996-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed
Why is a high accuracy needed in dosimetry
Lanzl, L.H.
1976-01-01
Dose and exposure intercomparisons on a national or international basis have become an important component of quality assurance in the practice of good radiotherapy. A high degree of accuracy in γ and x radiation dosimetry is essential in our international society, where medical information is so readily exchanged and used. The value of accurate dosimetry lies mainly in the avoidance of complications in normal tissue and in achieving an optimal degree of tumor control.
Computer modeling of oil spill trajectories with a high accuracy method
Garcia-Martinez, Reinaldo; Flores-Tovar, Henry
1999-01-01
This paper proposes a high accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories that minimises these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite the lack of adequate field information, model results compare well with observations in the impacted area. (Author)
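The difference between Euler and fourth-order Runge-Kutta particle tracking is easy to demonstrate in a solid-body rotation field, where exact trajectories are circles and Euler's spurious outward drift mimics the artificial dispersion mentioned above. The field and step size below are illustrative, not from the OilTrack model.

```python
import math

def velocity(x, y):
    """Illustrative non-uniform velocity field: solid-body rotation,
    whose exact particle paths are circles of constant radius."""
    return -y, x

def step_euler(x, y, dt):
    u, v = velocity(x, y)
    return x + dt*u, y + dt*v

def step_rk4(x, y, dt):
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
    k3 = velocity(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
    k4 = velocity(x + dt*k3[0], y + dt*k3[1])
    return (x + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
            y + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)

dt = 0.05
n = int(2*math.pi/dt)            # roughly one full revolution
xe, ye = 1.0, 0.0                # Euler particle
xr, yr = 1.0, 0.0                # RK4 particle
for _ in range(n):
    xe, ye = step_euler(xe, ye, dt)
    xr, yr = step_rk4(xr, yr, dt)

drift_euler = abs(math.hypot(xe, ye) - 1.0)  # spurious outward dispersion
drift_rk4 = abs(math.hypot(xr, yr) - 1.0)    # orders of magnitude smaller
```

A particle cloud advected with Euler in such a field slowly expands even though the flow is non-divergent, which is exactly the mechanism that inflates predicted spill areas.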
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
Achieving High Accuracy in Calculations of NMR Parameters
Faber, Rasmus
... quantum chemical methods have been developed, the calculation of NMR parameters with quantitative accuracy is far from trivial. In this thesis I address some of the issues that make accurate calculation of NMR parameters so challenging, with the main focus on SSCCs. High accuracy quantum chemical ..., but no programs were available to perform such calculations. As part of this thesis the CFOUR program has therefore been extended to allow the calculation of SSCCs using the CC3 method. CC3 calculations of SSCCs have then been performed for several molecules, including some difficult cases. These results show ... vibrations must be included. The calculation of vibrational corrections to NMR parameters has been reviewed as part of this thesis. A study of the basis set convergence of vibrational corrections to nuclear shielding constants has also been performed. The basis set error in vibrational correction ...
A high accuracy land use/cover retrieval system
Alaa Hefnawy
2012-03-01
The effects of spatial resolution on the accuracy of mapping land use/cover types have received increasing attention as a large number of multi-scale earth observation data become available. Although many methods of semi-automated image classification of remotely sensed data have been established to improve the accuracy of land use/cover classification during the past 40 years, most of them were employed in single-resolution image classification, which led to unsatisfactory results. In this paper, we propose a multi-resolution fast adaptive content-based retrieval system for satellite images. In the proposed system, we apply a super-resolution technique to Landsat-TM images to obtain a high resolution dataset. The human-computer interactive system is based on a modified radial basis function for retrieval of satellite database images. We apply a backpropagation supervised artificial neural network classifier to both the multi-resolution and single-resolution datasets. The results show significantly improved land use/cover classification accuracy for the multi-resolution approach compared with the single-resolution approach.
Two high accuracy digital integrators for Rogowski current transducers
Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua
2014-01-01
Rogowski current transducers have been widely used in AC current measurement, but their accuracy is mainly limited by their analog integrators, which suffer from poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output, because the DC component in the original signal accumulates and leads to output DC drift; unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, in place of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient are applied to improve the Al-Alaoui integrator, changing its DC response and producing an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
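The DC-drift problem and the role of an attenuation coefficient can be illustrated with a minimal sketch. This is not the paper's Al-Alaoui/PID design, only the underlying idea: a plain trapezoidal integrator drifts when the input carries a DC bias, while a "leaky" variant with attenuation coefficient alpha < 1 bleeds the bias off.

```python
import math

def trapezoid_integrator(x, dt, alpha=1.0):
    # Discrete trapezoidal integrator; alpha < 1 adds an attenuation
    # coefficient that bleeds off accumulated DC so the output cannot drift.
    y, prev, out = 0.0, 0.0, []
    for v in x:
        y = alpha * y + 0.5 * dt * (v + prev)
        prev = v
        out.append(y)
    return out

# Rogowski-style test: the coil output is proportional to di/dt, so for a
# current i(t) = sin(w t) the sensed signal is w cos(w t); a constant offset
# models sensor bias in the sensed signal.
fs, f, offset = 20000.0, 50.0, 0.2
w, dt = 2.0 * math.pi * f, 1.0 / 20000.0
sig = [w * math.cos(w * n * dt) + offset for n in range(40000)]  # 2 seconds

plain = trapezoid_integrator(sig, dt)               # DC accumulates: drifts
leaky = trapezoid_integrator(sig, dt, alpha=0.999)  # drift suppressed

# Error against the ideal current sin(w t) over the final 50 Hz cycle:
err_plain = max(abs(plain[n] - math.sin(w * n * dt)) for n in range(39600, 40000))
err_leaky = max(abs(leaky[n] - math.sin(w * n * dt)) for n in range(39600, 40000))
```

The leaky integrator trades a small gain/phase deviation at 50 Hz for bounded DC response, which is the same trade the paper's more sophisticated design manages explicitly.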
High Accuracy Piezoelectric Kinemometer; Cinemometro piezoelectrico de alta exactitud (VUAE)
Jimenez Martinez, F. J.; Frutos, J. de; Pastor, C.; Vazquez Rodriguez, M.
2012-07-01
We have developed a portable, computerized, low-consumption measurement system called the High Accuracy Piezoelectric Kinemometer (VUAE). The high accuracy obtained by the VUAE makes it suitable for producing reference measurements for systems that measure vehicle speed; the VUAE can therefore be used as reference equipment to estimate the error of installed kinemometers. The VUAE was built with n (n=2) pairs of ultrasonic transmitter-receivers (E-Rult). The transmitters of the n E-Rult pairs generate n ultrasonic barriers, and the receivers pick up the echoes when a vehicle crosses the barriers. Digital processing of the echo signals yields usable signals; cross-correlation techniques then provide a highly exact estimate of the vehicle speed from the logged interception times and the distance between the n ultrasonic barriers. VUAE speed measurements were compared to a reference speed system based on piezoelectric cables. (Author) 11 refs.
High accuracy 3D electromagnetic finite element analysis
Nelson, E.M.
1997-01-01
A high accuracy 3D electromagnetic finite element field solver employing quadratic hexahedral elements and quadratic mixed-order one-form basis functions will be described. The solver is based on an object-oriented C++ class library. Test cases demonstrate that frequency errors less than 10 ppm can be achieved using modest workstations, and that the solutions have no contamination from spurious modes. The role of differential geometry and geometrical physics in finite element analysis will also be discussed. copyright 1997 American Institute of Physics
Reactions, accuracy and response complexity of numerical typing on touch screens.
Lin, Cheng-Jhe; Wu, Changxu
2013-01-01
Touch screens are popular nowadays, as seen on public kiosks, industrial control panels and personal mobile devices. Numerical typing is one frequent task performed on touch screens, but this task is subject to human errors and slow responses. This study aims to find innate differences between touch screens and standard physical keypads in the context of numerical typing by eliminating confounding issues. The effects of precise visual feedback and of the urgency of numerical typing were also investigated. The results showed that touch screens were as accurate as physical keyboards, but responses were indeed executed more slowly on touch screens, as signified by both pre-motor reaction time and reaction time. Provision of precise visual feedback caused more errors, and no interaction between devices and urgency was found on reaction time. To improve the usability of touch screens, designers should focus more on reducing response complexity and be cautious about the use of visual feedback. The study revealed that slower responses on touch screens involve more complex human cognition to formulate motor responses. Attention should be given to designing precise visual feedback appropriately so that distractions or visual resource competition can be avoided to improve human performance on touch screens.
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 x 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
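The mismatch diagnostic used above can be sketched in its simplest form. This assumes flat (white) noise and optimizes only over relative time shifts via circular cross-correlation; the paper's noise-weighted inner products and full extrapolation pipeline are not reproduced.

```python
import numpy as np

def mismatch(h1, h2):
    # Flat-noise match between two waveforms, maximized over relative time
    # shifts using the FFT-based circular cross-correlation. Real detector
    # noise weighting and coalescence-phase optimization are omitted.
    corr = np.fft.ifft(np.fft.fft(h1) * np.conj(np.fft.fft(h2)))
    match = np.max(np.abs(corr))
    match /= np.sqrt(np.sum(np.abs(h1) ** 2) * np.sum(np.abs(h2) ** 2))
    return 1.0 - match

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
chirp = np.sin(2 * np.pi * 30 * t**2) * np.exp(-((t - 0.7) / 0.2) ** 2)
shifted = np.roll(chirp, 17)  # identical waveform, shifted in time
other = np.sin(2 * np.pi * 40 * t**2) * np.exp(-((t - 0.7) / 0.2) ** 2)

print(mismatch(chirp, shifted))  # time shift is optimized away
print(mismatch(chirp, other))    # genuinely different waveform
```

A pure time shift yields essentially zero mismatch, while a waveform with a different chirp rate does not, which is the sense in which mismatch isolates genuine modeling error from trivial alignment differences.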
Meng, Xiaojing; Wang, Yi; Liu, Tiening; Xing, Xiao; Cao, Yingxue; Zhao, Jiangping
2016-01-01
Highlights:
• The effects of radiation on predictive accuracy in numerical simulations were studied.
• A scaled experimental model with a high-temperature heat source was set up.
• Simulation results with and without a radiation model were discussed.
• The buoyancy force and the ventilation rate were investigated.
Abstract: This paper investigates the effects of radiation on predictive accuracy in numerical simulations of industrial buildings. A scaled experimental model with a high-temperature heat source is set up and the buoyancy-driven natural ventilation performance is presented. Besides predicting ventilation performance in an industrial building, the scaled model is also used to generate data to validate the numerical simulations; the simulation results show good agreement with the experimental data. The effects of radiation on predictive accuracy are studied for both a pure convection model and a combined convection and radiation model. Detailed results are discussed regarding the temperature and velocity distributions, the buoyancy force, and the ventilation rate. The temperature and velocity distributions through the middle plane are presented for both models; the overall temperature and velocity magnitudes predicted for pure convection are significantly greater than those for the combined convection and radiation model. In addition, the Grashof number and the ventilation rate are investigated; both are greater for the pure convection model than for the combined convection and radiation model.
Implementation and assessment of high-resolution numerical methods in TRACE
Wang, Dean, E-mail: wangda@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley RD 6167, Oak Ridge, TN 37831 (United States); Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G. [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)
2013-10-15
Highlights:
• Study and implement high-resolution numerical methods for two-phase flow.
• They can achieve better numerical accuracy than the 1st-order upwind scheme.
• They are of great numerical robustness and efficiency.
• Great applications for BWR stability analysis and boron injection.
Abstract: The 1st-order upwind differencing numerical scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax-Wendroff and Beam-Warming) can effectively reduce numerical diffusion but often produce spurious oscillations for steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes such as nonlinear flux limiters have been developed and successfully applied in numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study on the implementation and assessment of six nonlinear flux limiters in TRACE: MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment focuses on numerical stability, convergence, and accuracy of the flux limiters and their applicability to boiling water reactor (BWR) stability analysis. It is found that VA and MUSCL work best among the six flux limiters: both not only have better numerical accuracy than the 1st-order upwind scheme but also preserve great robustness and efficiency.
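The flux-limiter idea can be sketched for 1D linear advection: the Van Leer limiter phi(r) = (r + |r|)/(1 + |r|) blends the diffusive 1st-order upwind flux with the oscillatory Lax-Wendroff flux so the scheme stays sharp without overshooting. This is the generic textbook construction, not TRACE's two-phase implementation.

```python
import numpy as np

def van_leer(r):
    # Van Leer limiter: phi(r) = (r + |r|) / (1 + |r|); zero for r <= 0.
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect(u, c, steps, limiter):
    # 1D linear advection, periodic boundaries, Courant number 0 < c <= 1.
    # Flux = 1st-order upwind plus a limited anti-diffusive correction that
    # recovers Lax-Wendroff where the solution is smooth (phi -> 1).
    for _ in range(steps):
        du = np.roll(u, -1) - u                        # u[i+1] - u[i]
        du_up = u - np.roll(u, 1)                      # u[i]   - u[i-1]
        r = np.where(du != 0, du_up / np.where(du == 0, 1.0, du), 0.0)
        flux = u + 0.5 * (1.0 - c) * limiter(r) * du   # F at i+1/2
        u = u - c * (flux - np.roll(flux, 1))
    return u

N = 200
x = np.arange(N) / N
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square wave (steep gradients)
c = 0.5                                          # Courant number
u_vl = advect(u0.copy(), c, 200, van_leer)                    # flux-limited
u_up = advect(u0.copy(), c, 200, lambda r: np.zeros_like(r))  # pure upwind
```

After 200 steps the exact solution is the square wave shifted by 100 cells; the limited scheme tracks it with far less smearing than pure upwind while remaining bounded in [0, 1], i.e., free of the spurious oscillations that unlimited 2nd-order schemes produce.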
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.
Read-only high accuracy volume holographic optical correlator
Zhao, Tian; Li, Jingming; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2011-10-01
A read-only volume holographic correlator (VHC) is proposed. After all of the correlation database pages are recorded by angular multiplexing, a stand-alone read-only high accuracy VHC is separated from the VHC recording facilities, which include the high-power laser and the angular multiplexing system. The stand-alone VHC has its own low-power readout laser and a very compact and simple structure. Because two different lasers are employed for recording and readout, the optical alignment tolerance of the laser illumination on the SLM is very tight. The two-dimensional angular tolerance is analyzed based on the theoretical model of the volume holographic correlator. An experimental demonstration of the proposed read-only VHC is introduced and discussed.
Brohi Ali Anwar
2017-01-01
The entropy production in a 2-D heat transfer system has been analyzed systematically using the finite volume method, to develop new criteria for numerical simulation of multidimensional systems with the aid of CFD codes. The steady-state heat conduction problem has been investigated for entropy production, and the entropy production profile has been calculated based upon the current approach. The results for 2-D heat conduction show that the entropy production profile agrees well with the exact solution, and that the current approach is effective for measuring the accuracy and stability of numerical simulations of heat transfer problems.
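The bookkeeping being checked can be sketched in 1D, where pure conduction gives a local volumetric entropy generation rate s_gen = k (dT/dx)^2 / T^2 whose integral must equal the entropy flux difference q (1/Tc - 1/Th). This is the generic textbook balance, not the paper's 2-D finite-volume code.

```python
import numpy as np

# 1D steady conduction across a slab: T(x) is linear, the heat flux
# q = k * (Th - Tc) / L is constant, and the local volumetric entropy
# generation rate is s_gen = k * (dT/dx)^2 / T^2.
k, L, Th, Tc = 1.0, 1.0, 400.0, 300.0
n = 1000
x = np.linspace(0.0, L, n + 1)
T = Th + (Tc - Th) * x / L
dTdx = np.gradient(T, x)
s_gen = k * dTdx**2 / T**2

# Trapezoidal integration of the local rate over the slab:
total = float(np.sum(0.5 * (s_gen[1:] + s_gen[:-1]) * np.diff(x)))

# Second-law check: the integrated production must equal q * (1/Tc - 1/Th).
q = k * (Th - Tc) / L
print(total, q * (1.0 / Tc - 1.0 / Th))
```

Agreement between the integrated local rate and the boundary entropy-flux balance is exactly the kind of consistency test the paper proposes as an accuracy criterion for numerical heat-transfer solutions.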
Cullum, J. [IBM T.J. Watson Research Center, Yorktown Heights, NY (United States)
1994-12-31
Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
Synchrotron accelerator technology for proton beam therapy with high accuracy
Hiramoto, Kazuo
2009-01-01
Proton beam therapy was applied at the beginning to head and neck cancers, but it has now been extended to prostate, lung and liver cancers, so the need for a pencil beam scanning method is increasing. With this method the dose concentration property of the proton beam is further intensified. The Hitachi group supplied a pencil beam scanning therapy system, the first of its kind, to the M. D. Anderson Hospital in the United States, and it has been operational since May 2008. The Hitachi group has been developing its proton therapy system toward high-accuracy proton therapy that concentrates the dose in diseased parts located at various depths, sometimes with complicated shapes. The author describes here the synchrotron accelerator technology that is an important element of the proton therapy system. (K.Y.)
High-accuracy critical exponents for O(N) hierarchical 3D sigma models
Godina, J. J.; Li, L.; Meurice, Y.; Oktay, M. B.
2006-01-01
The critical exponent γ and its subleading exponent Δ in the 3D O(N) Dyson hierarchical model for N up to 20 are calculated with high accuracy. We calculate the critical temperatures for the measure δ(φ⃗·φ⃗ − 1). We extract the first coefficients of the 1/N expansion from our numerical data. We show that the leading and subleading exponents agree with the Polchinski equation and the equivalent Litim equation, in the local potential approximation, to at least 4 significant digits.
Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)
1988-07-01
A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using a Legendre expansion in the transverse coordinates, higher order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming a Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first order approximation yields the same order of accuracy as standard nodal methods with quadratic leakage approximation, while the second order reaches the reference solution. (author)
Fission product model for BWR analysis with improved accuracy in high burnup
Ikehara, Tadashi; Yamamoto, Munenari; Ando, Yoshihira
1998-01-01
A new fission product (FP) chain model has been studied for use in BWR lattice calculations. In establishing the model, two requirements, accuracy in predicting burnup reactivity and ease of practical application, are considered simultaneously. The resultant FP model consists of 81 explicit FP nuclides and two lumped pseudo-nuclides whose absorption cross sections are independent of burnup history and fuel composition. For verification, extensive numerical tests covering a wide range of operational conditions and fuel compositions have been carried out. The results indicate that the estimated errors in burnup reactivity are within 0.1%Δk for exposures up to 100 GWd/t. It is concluded that the present model offers a high degree of accuracy for FP representation in BWR lattice calculations. (author)
Numerical Model of High Strength Concrete
Wang, R. Z.; Wang, C. Y.; Lin, Y. L.
2018-03-01
The purpose of this paper is to present a three-dimensional constitutive model based on the concept of equivalent uniaxial strain. Closed Menetrey-Willam (CMW) failure surfaces, which combine the Menetrey-Willam meridian with a cap model, are introduced. The Saenz stress-strain model is applied and adjusted by the ultimate strength parameters from the CMW failure surface to reflect the latest stress or strain condition. High strength concrete (HSC) under tri-axial non-proportional loading is considered, and the model gives good predictions.
Shim S.M.
2012-01-01
The performance of a CO2 absorber column using mono-ethanolamine (MEA) solution as a chemical solvent is predicted by a one-dimensional (1-D) rate-based model in the present study. The 1-D mass and heat balance equations of the vapor and liquid phases are coupled with an interfacial mass transfer model and a vapor-liquid equilibrium model. Two-film theory is used to estimate the mass transfer between the vapor and liquid films. Chemical reactions in the MEA-CO2-H2O system are considered to predict the equilibrium pressure of CO2 in the MEA solution. The mathematical and reaction kinetics models used in this work are calculated using an in-house code. The numerical results are validated by comparing the simulation results with experimental and simulation data given in the literature. The performance of the CO2 absorber column is evaluated by the 1-D rate-based model using various reaction rate coefficients suggested by different researchers. When the ratio of liquid to gas mass flow rate is about 8.3, 6.6, 4.5 and 3.1, the errors in CO2 loading and CO2 removal efficiency using the reaction rate coefficients of Aboudheir et al. are within about 4.9% and 5.2%, respectively. Therefore, among the various reaction rate coefficients used in this study, that suggested by Aboudheir et al. is appropriate for predicting the performance of a CO2 absorber column using MEA solution. [Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2011-0017220).]
High accuracy magnetic field mapping of the LEP spectrometer magnet
Roncarolo, F
2000-01-01
The Large Electron Positron accelerator (LEP) is a storage ring which has been operated since 1989 at the European Laboratory for Particle Physics (CERN), located in the Geneva area. It is intended to experimentally verify the Standard Model theory and in particular to measure with high accuracy the masses of the electro-weak force bosons. Electrons and positrons are accelerated inside the LEP ring in opposite directions and forced to collide at four locations once they reach an energy high enough for the experimental purposes. During head-to-head collisions the leptons lose all their energy and a huge amount of energy is concentrated in a small region. In this condition the energy is quickly converted into other particles which tend to move away from the interaction point. The higher the energy of the leptons before the collisions, the higher the mass of the particles that can escape. At LEP four large experimental detectors are accommodated. All detectors are multi-purpose detectors covering a solid angle of alm...
Accuracy assessment of high-rate GPS measurements for seismology
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
Accuracy assessment of cadastral maps using high resolution aerial photos
Alwan Imzahim
2018-01-01
A cadastral map is a map that shows the boundaries and ownership of land parcels. Some cadastral maps show additional details, such as survey district names, unique identifying numbers for parcels, certificate of title numbers, positions of existing structures, section or lot numbers and their respective areas, adjoining and adjacent street names, selected boundary dimensions and references to prior maps. In Iraq / Baghdad Governorate, the main problem is that the cadastral maps are georeferenced to a local geodetic datum known as Clark 1880, while the systems widely used for navigation purposes (GPS and GNSS) use the World Geodetic System 1984 (WGS84) as their base reference datum. The objective of this paper is to produce a cadastral map at scale 1:500 (metric scale) referenced to WGS84, using 2009 aerial photographs with a high ground spatial resolution of 10 cm. The accuracy assessment of the updating approach for urban large-scale cadastral maps (1:500-1:1000) was ±0.115 meters, which complies with the American Society for Photogrammetry and Remote Sensing (ASPRS) standards.
Determination of UAV position using high accuracy navigation platform
Ireneusz Kubicki
2016-07-01
The choice of navigation system for a mini UAV is very important for its application and exploitation, particularly when a synthetic aperture radar installed on board requires highly precise information about the object's position. The exemplary solution of such a system presented here draws attention to possible problems associated with the choice of appropriate technology, sensors, and devices, or with a complete navigation system. Position and spatial orientation errors of the measurement platform influence the obtained SAR imaging. Both turbulence and maneuvers performed during flight change the position of the airborne object, resulting in deterioration or loss of SAR images. Consequently, it is necessary to reduce or eliminate the impact of sensor errors on the UAV position accuracy, seeking compromise solutions between newer, better technologies and improvements in software. Keywords: navigation systems, unmanned aerial vehicles, sensor integration
Modified sine bar device measures small angles with high accuracy
Thekaekara, M.
1968-01-01
Modified sine bar device measures small angles with enough accuracy to calibrate precision optical autocollimators. The sine bar is a massive bar of steel supported by two cylindrical rods at one end and one at the other.
Wang, Yi
2016-07-21
Velocity of fluid flow in underground porous media is 6-12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in this kind of simulation, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accuracy methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accuracy finite difference method is confirmed to have a numerical error as low as 10^-5%, while the high-accuracy numerical integration method has a numerical error of around 0%. Thus, the approach combining the high-accuracy finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of general full-tensor permeability, such as the maximum and minimum permeability components, principal direction and anisotropy ratio. Copyright © Global-Science Press 2016.
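The accuracy gap between standard and higher-order finite differences can be sketched on a single derivative. This is illustrative only; the paper's schemes for permeability tensors are more involved.

```python
import math

def d1_second(f, x, h):
    # Standard 2nd-order central difference, error O(h^2).
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_fourth(f, x, h):
    # 4th-order central difference, error O(h^4):
    # (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h)
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

f, x, h = math.sin, 1.0, 1e-2
exact = math.cos(1.0)                      # d/dx sin(x) = cos(x)
err2 = abs(d1_second(f, x, h) - exact)
err4 = abs(d1_fourth(f, x, h) - exact)
print(err2, err4)
```

At the same grid spacing the 4th-order stencil is several orders of magnitude more accurate, which is the mechanism behind the near-zero errors quoted above.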
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
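The adaptive-step explicit idea can be sketched on a hypothetical one-bucket model dS/dt = P - kS: Heun's 2nd-order method carries an embedded 1st-order Euler estimate whose difference drives the step-size controller. This is in the spirit of, not identical to, the schemes compared in the paper; all names and constants are illustrative.

```python
import math

def dSdt(S, P, k):
    # Hypothetical single linear reservoir: storage S, rain P, outflow k*S.
    return P - k * S

def integrate_adaptive(S0, P, k, t_end, tol=1e-8):
    # Explicit Heun (2nd order) with embedded Euler (1st order); their
    # difference estimates the local error and controls the step size.
    t, S, dt = 0.0, S0, 1e-3
    while t < t_end - 1e-12:
        dt = min(dt, t_end - t)
        f0 = dSdt(S, P, k)
        S_euler = S + dt * f0
        f1 = dSdt(S_euler, P, k)
        S_heun = S + 0.5 * dt * (f0 + f1)
        err = abs(S_heun - S_euler)     # local error estimate
        if err <= tol:                  # accept the step
            t, S = t + dt, S_heun
        # Standard controller: grow/shrink dt, with safety factor and caps.
        dt *= min(2.0, max(0.2, 0.9 * math.sqrt(tol / (err + 1e-30))))
    return S

S0, P, k, T = 0.0, 2.0, 0.5, 10.0
approx = integrate_adaptive(S0, P, k, T)
exact = P / k + (S0 - P / k) * math.exp(-k * T)  # analytic solution
print(approx, exact)
```

Because the step size adapts to the error estimate rather than being fixed, the solver avoids the large fixed-step truncation errors that the paper shows can distort the posterior parameter surface.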
Measurement system with high accuracy for laser beam quality.
Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming
2015-05-20
Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of a mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measurement deviation of the M2 factor is less than 0.6%.
NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.
LUCCIO, A.; D' IMPERIO, N.; MALITSKY, N.
2005-09-12
Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.
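As a generic illustration of the kind of Laplace-Poisson solve with walls mentioned above, here is a minimal Dirichlet Poisson solver on the unit square; the Jacobi iteration is for exposition only and is not the solver used in SIMBAD.

```python
import numpy as np

def solve_poisson_dirichlet(f, n_iter=6000):
    """Solve u_xx + u_yy = f on the unit square with u = 0 on the walls
    (a Dirichlet stand-in for conducting boundaries), by Jacobi iteration
    on a uniform (n x n) grid."""
    n = f.shape[0]
    h = 1.0 / (n - 1)
    u = np.zeros_like(f)
    for _ in range(n_iter):
        # each interior point becomes the average of its neighbors minus h^2 f / 4
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h * h * f[1:-1, 1:-1])
    return u
```

With the manufactured solution u = sin(pi x) sin(pi y), the right-hand side is f = -2 pi^2 sin(pi x) sin(pi y), which gives a direct accuracy check.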
Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena
2009-01-30
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients
Iyengar, S.S.; Morgan-Hughes, G.; Ukoumunne, O.; Clayton, B.; Davies, E.J.; Nikolaou, V.; Hyde, C.J.; Shore, A.C.; Roobottom, C.A.
2016-01-01
Aim: To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and a high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Materials and methods: Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥50%) stenosis and secondarily as the presence of severe (≥70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. Results: No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3% for severe stenosis, on a per-patient basis. Conclusion: The combination of HD-CTCA and experienced readers applied to a high-risk population results in high diagnostic accuracy comparable to ICA. Modern-generation CT systems in experienced hands might be considered for an expanded role. - Highlights: • Diagnostic accuracy of High-Definition CT Angiography (HD-CTCA) has been assessed. • Invasive Coronary angiography (ICA) is the reference standard. • Diagnostic accuracy of HD-CTCA is comparable to ICA. • Diagnostic accuracy is not affected by coronary calcium or stents. • HD-CTCA provides a non-invasive alternative in high-risk patients.
Zeng, Zhaoli; Qu, Xueming; Tan, Yidong; Tan, Runtao; Zhang, Shulian
2015-06-29
A simple, high-accuracy self-mixing interferometer based on single high-order orthogonally polarized feedback effects is presented. The single high-order feedback effect is realized when a dual-frequency laser reflects numerous times in a Fabry-Perot cavity and then returns to the laser resonator along the same route. In this case, two orthogonally polarized feedback fringes with nanoscale resolution are obtained. This self-mixing interferometer has the advantage of higher sensitivity to weak signals than a conventional interferometer. In addition, the two orthogonally polarized fringes are useful for discriminating the moving direction of the measured object. An experiment measuring a 2.5 nm step was conducted, which shows great potential for nanometrology.
High-accuracy user identification using EEG biometrics.
Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip
2016-08-01
We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
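The pipeline of a dimensionality reduction technique followed by a classification algorithm can be sketched generically; the PCA-plus-nearest-centroid pairing below is one hypothetical combination, not necessarily the one the authors found best.

```python
import numpy as np

def pca_fit(X, n_comp):
    """Fit PCA via SVD; returns the mean and the top principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp]

def pca_transform(X, mu, comps):
    """Project samples onto the retained principal directions."""
    return (X - mu) @ comps.T

def centroid_fit(Z, labels):
    """One centroid per class (here: per user) in the reduced space."""
    return {c: Z[labels == c].mean(axis=0) for c in np.unique(labels)}

def centroid_predict(Z, centroids):
    """Assign each sample to the nearest class centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(Z - centroids[c], axis=1)
                      for c in classes], axis=1)
    return np.array([classes[i] for i in dists.argmin(axis=1)])
```

On synthetic, well-separated "epoch features" this pipeline identifies classes essentially perfectly; real ERP data is far noisier, which is why the paper compares many such pairings.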
High Accuracy Nonlinear Control and Estimation for Machine Tool Systems
Papageorgiou, Dimitrios
Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear. The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that the quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low-level control architectures...
Methodology for GPS Synchronization Evaluation with High Accuracy
Li, Zan; Braun, Torsten; Dimitrova, Desislava
2015-01-01
Clock synchronization in the order of nanoseconds is one of the critical factors for time-based localization. Currently used time synchronization methods are developed for the more relaxed needs of network operation. Their usability for positioning should be carefully evaluated. In this paper, we are particularly interested in GPS-based time synchronization. To judge its usability for localization, we need a method that can evaluate the achieved time synchronization with nanosecond accuracy. Ou...
Takayama, T., E-mail: takayama@yz.yamagata-u.ac.j [Faculty of Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan); Kamitani, A.; Tanaka, A. [Graduate School of Science and Engineering, Yamagata University, 4-3-16, Johnan, Yonezawa, Yamagata 992-8510 (Japan)
2010-11-01
Influence of the magnet position on the determination of the distribution of the critical current density in a high-temperature superconducting (HTS) thin film has been investigated numerically. For this purpose, a numerical code has been developed for analyzing the shielding current density in a HTS sample. By using the code, the permanent magnet method is reproduced. The results of computations show that, even if the center of the permanent magnet is located near the film edge, the maximum repulsive force is roughly proportional to the critical current density. This means that the distribution of the critical current density in the HTS film can be estimated from the proportionality constants determined by using the relations between the maximum repulsive force and the critical current density.
Integral equation models for image restoration: high accuracy methods and fast algorithms
Lu, Yao; Shen, Lixin; Xu, Yuesheng
2010-01-01
Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models, and hence inevitably impose bottleneck model errors. We propose to work directly with continuous models for image restoration, aiming to suppress the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms with high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images.
Numerical evaluation of high energy particle effects in magnetohydrodynamics
White, R.B.; Wu, Y.
1994-03-01
The interaction of high energy ions with magnetohydrodynamic modes is analyzed. A numerical code is developed which evaluates the contribution of the high energy particles to mode stability using orbit averaging of motion, in either analytic or numerically generated equilibria, through Hamiltonian guiding center equations. A dispersion relation is then used to evaluate the effect of the particles on the linear mode. Generic behavior of the solutions of the dispersion relation is discussed, and the dominant contributions of different components of the particle distribution function are identified. The numerical convergence of Monte-Carlo simulations is analyzed. The resulting code, ORBIT, provides an accurate means of comparing experimental results with the predictions of kinetic magnetohydrodynamics. The method can be extended to include self-consistent modification of the particle orbits by the mode, and hence the full nonlinear dynamics of the coupled system.
Highly uniform parallel microfabrication using a large numerical aperture system
Zhang, Zi-Yu; Su, Ya-Hui, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [School of Electrical Engineering and Automation, Anhui University, Hefei 230601 (China); Zhang, Chen-Chu; Hu, Yan-Lei; Wang, Chao-Wei; Li, Jia-Wen; Chu, Jia-Ru; Wu, Dong, E-mail: ustcsyh@ahu.edu.cn, E-mail: dongwu@ustc.edu.cn [CAS Key Laboratory of Mechanical Behavior and Design of Materials, Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026 (China)
2016-07-11
In this letter, we report an improved algorithm to produce accurate phase patterns for generating highly uniform diffraction-limited multifocal arrays in a large numerical aperture objective system. It is shown that, based on the original diffraction integral, the uniformity of the diffraction-limited focal arrays can be improved from ∼75% to >97%, owing to the critical consideration of the aperture function and the apodization effect associated with a large numerical aperture objective. The experimental results, e.g., 3 × 3 arrays of squares and triangles and seven microlens arrays with high uniformity, further verify the advantage of the improved algorithm. This algorithm enables laser parallel processing technology to realize uniform microstructures and functional devices in microfabrication systems with a large numerical aperture objective.
Qian, Shaoxiang, E-mail: qian.shaoxiang@jgc.com [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kanamaru, Shinichiro [EN Technology Center, Process Technology Division, JGC Corporation, 2-3-1 Minato Mirai, Nishi-ku, Yokohama 220-6001 (Japan); Kasahara, Naoto [Nuclear Engineering and Management, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)
2015-07-15
Highlights: • Numerical methods for accurate prediction of thermal loading were proposed. • Predicted fluid temperature fluctuation (FTF) intensity is close to the experiment. • Predicted structure temperature fluctuation (STF) range is close to the experiment. • Predicted peak frequencies of FTF and STF also agree well with the experiment. • CFD results show the proposed numerical methods are of sufficiently high accuracy. - Abstract: Temperature fluctuations generated by the mixing of hot and cold fluids at a T-junction, which is widely used in nuclear power and process plants, can cause thermal fatigue failure. The conventional methods for evaluating thermal fatigue tend to provide insufficient accuracy, because they were developed based on limited experimental data and a simplified one-dimensional finite element analysis (FEA). CFD/FEA coupling analysis is expected as a useful tool for the more accurate evaluation of thermal fatigue. The present paper aims to verify the accuracy of proposed numerical methods of simulating fluid and structure temperature fluctuations at a T-junction for thermal fatigue evaluation. The dynamic Smagorinsky model (DSM) is used for large eddy simulation (LES) sub-grid scale (SGS) turbulence model, and a hybrid scheme (HS) is adopted for the calculation of convective terms in the governing equations. Also, heat transfer between fluid and structure is calculated directly through thermal conduction by creating a mesh with near wall resolution (NWR) by allocating grid points within the thermal boundary sub-layer. The simulation results show that the distribution of fluid temperature fluctuation intensity and the range of structure temperature fluctuation are remarkably close to the experimental results. Moreover, the peak frequencies of power spectrum density (PSD) of both fluid and structure temperature fluctuations also agree well with the experimental results. Therefore, the numerical methods used in the present paper are
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, it is difficult to obtain such a set of testing points in areas where field measurement is difficult and high-accuracy reference data are scarce, and therefore difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing images and the expansion of their scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images, using testing points with different accuracy and reliability drawn from high-accuracy reference data and field measurements. The new method solves the horizontal accuracy detection of orthophoto images in these difficult areas and provides a basis for delivering reliable orthophoto images to users.
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase I
National Aeronautics and Space Administration — NASA's future science and exploratory missions will require much lighter, smaller, and longer life rate sensors that can provide high accuracy navigational...
High Accuracy Positioning using Jet Thrusters for Quadcopter
Pi ChenHuan
2018-01-01
A quadcopter is equipped with four additional jet thrusters on its horizontal plane, perpendicular to each other, in order to improve the maneuverability and positioning accuracy of the quadcopter. A dynamic model of the quadcopter with jet thrusters is derived, and two controllers are implemented in simulation: one is a dual-loop state feedback controller for pose control, and the other is an auxiliary jet thruster controller for accurate positioning. Step response simulations showed that the jet thrusters can control the quadcopter with less overshoot than the conventional approach. Over a 10 s loiter simulation with disturbance, the quadcopter with jet thrusters shows an 85% decrease in the RMS error under horizontal disturbance compared to a conventional quadcopter with only a dual-loop state feedback controller. The jet thruster controller thus shows potential for more accurate quadcopter positioning.
High-accuracy contouring using projection moiré
Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.
2005-09-01
Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this fact, and the extensive number of papers that have been published on the topic, the use of shadow moiré as an accurate tool that can compete with alternative devices poses many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper some recent developments of the projection moiré method are presented, along with comparisons between the results obtained with the projection method and those obtained by mechanical devices that operate with contact probes. These results show that the use of projection moiré makes it possible to achieve the same accuracy that current mechanical touch probe devices provide.
Numerical solution of High-kappa model of superconductivity
Karamikhova, R. [Univ. of Texas, Arlington, TX (United States)
1996-12-31
We present formulation and finite element approximations of High-kappa model of superconductivity which is valid in the high κ, high magnetic field setting and accounts for applied magnetic field and current. Major part of this work deals with steady-state and dynamic computational experiments which illustrate our theoretical results numerically. In our experiments we use Galerkin discretization in space along with Backward-Euler and Crank-Nicolson schemes in time. We show that for moderate values of κ, steady states of the model system, computed using the High-kappa model, are virtually identical with results computed using the full Ginzburg-Landau (G-L) equations. We illustrate numerically optimal rates of convergence in space and time for the L² and H¹ norms of the error in the High-kappa solution. Finally, our numerical approximations demonstrate some well-known experimentally observed properties of high-temperature superconductors, such as appearance of vortices, effects of increasing the applied magnetic field and the sample size, and the effect of applied constant current.
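The optimal-in-time convergence rates reported for the Backward-Euler and Crank-Nicolson schemes can be checked on a scalar model problem (a generic linear test equation, not the G-L system itself):

```python
import math

def backward_euler(lmbda, y0, T, n):
    """y' = lmbda * y with the first-order implicit (backward) Euler scheme."""
    h, y = T / n, y0
    for _ in range(n):
        y = y / (1.0 - h * lmbda)
    return y

def crank_nicolson(lmbda, y0, T, n):
    """Same problem with the second-order Crank-Nicolson scheme."""
    h, y = T / n, y0
    for _ in range(n):
        y = y * (1.0 + 0.5 * h * lmbda) / (1.0 - 0.5 * h * lmbda)
    return y

def observed_order(method, lmbda=-1.0, y0=1.0, T=1.0, n=64):
    """Estimate the temporal convergence order by halving the step size."""
    exact = y0 * math.exp(lmbda * T)
    e1 = abs(method(lmbda, y0, T, n) - exact)
    e2 = abs(method(lmbda, y0, T, 2 * n) - exact)
    return math.log2(e1 / e2)
```

The observed orders come out close to 1 for Backward-Euler and 2 for Crank-Nicolson, matching the theoretical rates the abstract refers to.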
A new adaptive GMRES algorithm for achieving high accuracy
Sosonkina, M.; Watson, L.T.; Kapania, R.K. [Virginia Polytechnic Inst., Blacksburg, VA (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)
1996-12-31
GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size, or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm's performance. An adaptive version of GMRES(k), which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem, is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described first, followed by pseudocode.
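A minimal restarted GMRES(k) with modified Gram-Schmidt and a crude adaptation rule can be sketched as follows; the stall-detection criterion used here to grow k is a simplified stand-in for the paper's convergence-rate estimates.

```python
import numpy as np

def gmres_restarted(A, b, x0, k=5, k_max=40, tol=1e-8, max_restarts=100):
    """GMRES(k) with modified Gram-Schmidt orthogonalization and a crude
    increase-only rule: double k whenever a restart cycle reduces the
    residual by less than a factor of 2 (a stand-in for convergence-rate
    estimation)."""
    x = x0.astype(float).copy()
    r = b - A @ x
    beta = np.linalg.norm(r)
    for _ in range(max_restarts):
        if beta < tol:
            break
        n = b.size
        Q = np.zeros((n, k + 1))
        H = np.zeros((k + 1, k))
        Q[:, 0] = r / beta
        m = k
        for j in range(k):                      # Arnoldi process
            w = A @ Q[:, j]
            for i in range(j + 1):              # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:             # lucky breakdown
                m = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        x = x + Q[:, :m] @ y                    # update the iterate
        r = b - A @ x
        beta_new = np.linalg.norm(r)
        if beta_new > 0.5 * beta:               # stalled: enlarge the subspace
            k = min(2 * k, k_max)
        beta = beta_new
    return x
```

On a diagonally dominant test matrix, a small restart value already converges quickly; the adaptation only kicks in when cycles stall.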
Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine
Spodniak Miroslav
2017-01-01
This article describes an approximate numerical approach for estimating the low cycle fatigue of a high pressure turbine disc for the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method, carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics that are available for this particular high pressure engine turbine. The method described here enables relatively fast and economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimate depends on the accuracy of the required input data for the particular investigated object.
A generalized polynomial chaos based ensemble Kalman filter with high accuracy
Li Jia; Xiu Dongbin
2009-01-01
As one of the most adopted sequential data assimilation methods in many areas, especially those involving complex nonlinear dynamics, the ensemble Kalman filter (EnKF) has been under extensive investigation regarding its properties and efficiency. Compared to other variants of the Kalman filter (KF), EnKF is straightforward to implement, as it employs random ensembles to represent solution states. This, however, introduces sampling errors that affect the accuracy of EnKF in a negative manner. Though sampling errors can be easily reduced by using a large number of samples, in practice this is undesirable as each ensemble member is a solution of the system of state equations and can be time consuming to compute for large-scale problems. In this paper we present an efficient EnKF implementation via generalized polynomial chaos (gPC) expansion. The key ingredients of the proposed approach involve (1) solving the system of stochastic state equations via the gPC methodology to gain efficiency; and (2) sampling the gPC approximation of the stochastic solution with an arbitrarily large number of samples, at virtually no additional computational cost, to drastically reduce the sampling errors. The resulting algorithm thus achieves a high accuracy at reduced computational cost, compared to the classical implementations of EnKF. Numerical examples are provided to verify the convergence property and accuracy improvement of the new algorithm. We also prove that for linear systems with Gaussian noise, the first-order gPC Kalman filter method is equivalent to the exact Kalman filter.
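For reference, the classical perturbed-observation EnKF analysis step, whose Monte Carlo sampling error the gPC variant is designed to reduce, can be written compactly:

```python
import numpy as np

def enkf_analysis(X, H, y, R, rng):
    """One perturbed-observation EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; H: linear observation operator;
    y: observation vector; R: observation error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # observed anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + R           # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                # state-observation covariance
    K = P_xy @ np.linalg.inv(P_yy)               # sample Kalman gain
    # perturbed observations, one realization per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(y.size), R, size=n_ens).T
    return X + K @ (Y - HX)                      # updated (analysis) ensemble
```

For a scalar Gaussian case (prior N(0, 1), observation 1 with unit noise) the analysis ensemble approaches the exact posterior N(0.5, 0.5) only as the ensemble grows, which is precisely the sampling error the gPC-based sampling targets.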
M. Boumaza
2015-07-01
Transient convection heat transfer is of fundamental interest in many industrial and environmental situations, as well as in electronic devices and the security of energy systems. Transient fluid flow problems are among the more difficult to analyze, and yet are very often encountered in modern day technology. The main objective of this research project is to carry out a theoretical and numerical analysis of transient convective heat transfer in vertical flows, when the thermal field is due to different kinds of variation, in time and space, of some boundary conditions, such as wall temperature or wall heat flux. This is achieved through the development of a mathematical model and its solution by suitable numerical methods, as well as by performing various sensitivity analyses. These objectives are met through a theoretical investigation of the effects of wall and fluid axial conduction, physical properties and heat capacity of the pipe wall on transient downward mixed convection in a circular duct experiencing a sudden change in the applied heat flux on the outside surface of a central zone.
European Workshop on High Order Nonlinear Numerical Schemes for Evolutionary PDEs
Beaugendre, Héloïse; Congedo, Pietro; Dobrzynski, Cécile; Perrier, Vincent; Ricchiuto, Mario
2014-01-01
This book collects papers presented during the European Workshop on High Order Nonlinear Numerical Methods for Evolutionary PDEs (HONOM 2013) that was held at INRIA Bordeaux Sud-Ouest, Talence, France in March, 2013. The central topic is high order methods for compressible fluid dynamics. In the workshop, and in this proceedings, greater emphasis is placed on the numerical than the theoretical aspects of this scientific field. The range of topics is broad, extending through algorithm design, accuracy, large scale computing, complex geometries, discontinuous Galerkin, finite element methods, Lagrangian hydrodynamics, finite difference methods and applications and uncertainty quantification. These techniques find practical applications in such fields as fluid mechanics, magnetohydrodynamics, nonlinear solid mechanics, and others for which genuinely nonlinear methods are needed.
Compact, High Accuracy CO2 Monitor, Phase I
National Aeronautics and Space Administration — This Small Business Innovative Research Phase I proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...
Compact, High Accuracy CO2 Monitor, Phase II
National Aeronautics and Space Administration — This Small Business Innovative Research Phase II proposal seeks to develop a low cost, robust, highly precise and accurate CO2 monitoring system. This system will...
Topics in the numerical simulation of high temperature flows
Cheret, R.; Dautray, R.; Desgraz, J.C.; Mercier, B.; Meurant, G.; Ovadia, J.; Sitt, B.
1984-06-01
In the fields of inertial confinement fusion, astrophysics, detonation, and other high energy phenomena, one has to deal with multifluid flows involving high temperatures, high speeds and strong shocks initiated e.g. by chemical reactions or even by thermonuclear reactions. The simulation of multifluid flows is reviewed: first come Lagrangian methods, which have been successfully applied in the past. Then we describe our experience with newer adaptive mesh methods, originally designed to increase the accuracy of Lagrangian methods. Finally, some facts about Eulerian methods are recalled, with emphasis on the EAD scheme, which has been recently extended to the elasto-plastic case. High temperature flows are then considered, described by the equations of radiation hydrodynamics. We show how conservation of energy can be preserved while solving the radiative transfer equation via the Monte Carlo method. For detonation, we discuss models introduced to describe the initiation of detonation in heterogeneous explosives. Finally, we say a few words about the instability of these flows.
High-accuracy Subdaily ERPs from the IGS
Ray, J. R.; Griffiths, J.
2012-04-01
Since November 2000 the International GNSS Service (IGS) has published Ultra-rapid (IGU) products for near real-time (RT) and true real-time applications. They include satellite orbits and clocks, as well as Earth rotation parameters (ERPs) for a sliding 48-hr period. The first day of each update is based on the most recent GPS and GLONASS observational data from the IGS hourly tracking network. At the time of release, these observed products have an initial latency of 3 hr. The second day of each update consists of predictions. So the predictions between about 3 and 9 hr into the second half are relevant for true RT uses. Originally updated twice daily, the IGU products since April 2004 have been issued every 6 hr, at 3, 9, 15, and 21 UTC. Up to seven Analysis Centers (ACs) contribute to the IGU combinations. Two sets of ERPs are published with each IGU update, observed values at the middle epoch of the first half and predicted values at the middle epoch of the second half. The latency of the near RT ERPs is 15 hr while the predicted ERPs, based on projections of each AC's most recent determinations, are issued 9 hr ahead of their reference epoch. While IGU ERPs are issued every 6 hr, each set represents an integrated estimate over the surrounding 24 hr. So successive values are temporally correlated with about 75% of the data being common; this fact should be taken into account in user assimilations. To evaluate the accuracy of these near RT and predicted ERPs, they have been compared to the IGS Final ERPs, available about 11 to 17 d after data collection. The IGU products improved dramatically in the earlier years but since about 2008.0 the performance has been stable and excellent. During the last three years, RMS differences for the observed IGU ERPs have been about 0.036 mas and 0.0101 ms for each polar motion component and LOD respectively. (The internal precision of the reference IGS ERPs over the same period is about 0.016 mas for polar motion and 0
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also to cause numerical instability. Although for certain complex geometries the use of high-aspect ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of a gradient computation involving a triangular element hinges on the value of its shape factor Γ := sin²α₁ + sin²α₂ + sin²α₃, where α₁, α₂ and α₃ are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Γ decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computations associated with a high-aspect ratio triangular grid is given. Also included is a brief discussion on the extension of the current mathematical framework to the
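The shape factor and its stated extremes are easy to check numerically; the degree-based interface below is an illustrative convenience.

```python
import math

def shape_factor(a1_deg, a2_deg, a3_deg):
    """Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3) for the internal angles
    of a triangle, given here in degrees for convenience."""
    assert abs(a1_deg + a2_deg + a3_deg - 180.0) < 1e-9
    return sum(math.sin(math.radians(a)) ** 2 for a in (a1_deg, a2_deg, a3_deg))
```

The equilateral triangle attains the maximum 9/4, a highly obtuse sliver drives Γ far below 1, while a thin isosceles (high-aspect ratio but non-obtuse) triangle keeps Γ near 2, illustrating the point that high aspect ratio need not mean accuracy loss.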
High-Accuracy Measurements of Total Column Water Vapor From the Orbiting Carbon Observatory-2
Nelson, Robert R.; Crisp, David; Ott, Lesley E.; O'Dell, Christopher W.
2016-01-01
Accurate knowledge of the distribution of water vapor in Earth's atmosphere is of critical importance to both weather and climate studies. Here we report on measurements of total column water vapor (TCWV) from hyperspectral observations of near-infrared reflected sunlight over land and ocean surfaces from the Orbiting Carbon Observatory-2 (OCO-2). These measurements are an ancillary product of the retrieval algorithm used to measure atmospheric carbon dioxide concentrations, with information coming from three highly resolved spectral bands. Comparisons to high-accuracy validation data, including ground-based GPS and microwave radiometer data, demonstrate that OCO-2 TCWV measurements have maximum root-mean-square deviations of 0.9-1.3 mm. Our results indicate that OCO-2 is the first space-based sensor to accurately and precisely measure the two most important greenhouse gases, water vapor and carbon dioxide, at high spatial resolution (1.3 × 2.3 km²) and that OCO-2 TCWV measurements may be useful in improving numerical weather predictions and reanalysis products.
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Mol, Pieter; Krabbe, Hans G.; de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background: Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior
Innovative Fiber-Optic Gyroscopes (FOGs) for High Accuracy Space Applications, Phase II
National Aeronautics and Space Administration — This project aims to develop a compact, highly innovative Inertial Reference/Measurement Unit (IRU/IMU) that pushes the state-of-the-art in high accuracy performance...
Numerical optimization of circulation control airfoil at high subsonic speed
Tai, T. C.; Kidwell, G. H., Jr.
1984-01-01
A numerical procedure for optimizing the design of a circulation control airfoil for use at high subsonic speeds is presented. The procedure consists of an optimization scheme coupled with a viscous potential flow analysis for the blowing jet. The desired airfoil is defined by a combination of three baseline shapes (cambered ellipse and cambered ellipse with drooped and spiraled trailing edges). The coefficients of these shapes are used as design variables in the optimization process. Under the constraints of lift augmentation and lift-to-drag ratios, the airfoil, optimized at free-stream Mach 0.54 and alpha = -2 degrees, can be characterized as a cambered ellipse with a drooped trailing edge. Experimental tests support the performance improvement predicted by the numerical optimization.
Cardoso, Ricardo Lopes; Leite, Rodrigo Oliveira; de Aquino, André Carlos Busanelli
2016-01-01
Previous research supports the view that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Moreover, the literature shows that different types of graphical information can help or harm the decision-making accuracy of accountants and financial analysts. We conducted a 4×2 mixed-design experiment to examine the effects of numerical information disclosure on financial analysts' accuracy, and investigated the role of overconfidence in decision making. Results show that, compared to text, column graphs enhanced decision-making accuracy, followed by line graphs. No difference was found between tabular and textual disclosure. Overconfidence harmed accuracy, and both genders behaved overconfidently. Additionally, the type of disclosure (text, table, line graph and column graph) did not affect the overconfidence of individuals, providing evidence that overconfidence is a personal trait. This study makes three contributions. First, it provides evidence from a larger sample (295 financial analysts, rather than a smaller sample of students) that graphs are relevant decision aids in tasks involving the interpretation of numerical information. Second, it uses text as a baseline comparison to test how different forms of information disclosure (line and column graphs, and tables) can enhance the understandability of information. Third, it brings an internal factor into this process: overconfidence, a personal trait that harms individuals' decision making. At the end of this paper several research paths are highlighted to further study the effect of internal factors (personal traits) on financial analysts' accuracy in decisions regarding numerical information presented in graphical form. In addition, we offer suggestions concerning practical implications for professional accountants, auditors, financial analysts and standard setters.
Mengaldo, Gianmarco; De Grazia, Daniele; Moura, Rodrigo C.; Sherwin, Spencer J.
2018-04-01
This study focuses on the dispersion and diffusion characteristics of high-order energy-stable flux reconstruction (ESFR) schemes via the spatial eigensolution analysis framework proposed in [1]. The analysis is performed for five ESFR schemes, where the parameter 'c' dictating the properties of the specific scheme recovered is chosen such that it spans the entire class of ESFR methods, also referred to as VCJH schemes, proposed in [2]. In particular, we used five values of 'c', two that correspond to its lower and upper bounds and the others that identify three schemes that are linked to common high-order methods, namely the ESFR recovering two versions of discontinuous Galerkin methods and one recovering the spectral difference scheme. The performance of each scheme is assessed when using different numerical intercell fluxes (e.g. different levels of upwinding), ranging from "under-" to "over-upwinding". In contrast to the more common temporal analysis, the spatial eigensolution analysis framework adopted here allows one to grasp crucial insights into the diffusion and dispersion properties of FR schemes for problems involving non-periodic boundary conditions, typically found in open-flow problems, including turbulence, unsteady aerodynamics and aeroacoustics.
A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips
Guanyi Sun
2011-01-01
Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work is aimed at delivering high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator can attain a simulation speed no more than a factor of 35 slower than hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, which in the past could only be attained by cycle-accurate models.
Impact of a highly detailed emission inventory on modeling accuracy
Taghavi, M.; Cautenet, S.; Arteta, J.
2005-03-01
During the Expérience sur Site pour COntraindre les Modèles de Pollution atmosphérique et de Transport d'Emissions (ESCOMPTE) campaign (June 10 to July 14, 2001), two pollution events observed during an intensive measurement period (IOP2a and IOP2b) were simulated. The comprehensive Regional Atmospheric Modeling System (RAMS), version 4.3, coupled online with a chemical module including 29 species, is used to follow the chemistry of a polluted zone over Southern France. This online method takes advantage of a parallel code running on the powerful SGI 3800 computer. Runs are performed with two emission inventories: the Emission Pre Inventory (EPI) and the Main Emission Inventory (MEI). The latter is more recent and has a higher resolution. The redistribution of simulated chemical species (ozone and nitrogen oxides) is compared with aircraft and surface station measurements for both runs at regional scale. We show that the MEI inventory is more efficient than the EPI in retrieving the redistribution of chemical species in space (three dimensions) and time. At surface stations, MEI is superior especially for primary species like nitrogen oxides. The ozone pollution peaks obtained from an inventory such as EPI have a large uncertainty. To understand the realistic geographical distribution of pollutants and to obtain a good order of magnitude of ozone concentration (in space and time), a high-resolution inventory like MEI is necessary. Coupling RAMS-Chemistry with MEI provides a very efficient tool able to simulate pollution plumes even in a region with complex circulations, such as the ESCOMPTE zone.
Switched-capacitor techniques for high-accuracy filter and ADC design
Quinn, P.J.; Roermund, van A.H.M.
2007-01-01
Switched capacitor (SC) techniques are well proven to be excellent candidates for implementing critical analogue functions with high accuracy, surpassing other analogue techniques when embedded in mixed-signal CMOS VLSI. Conventional SC circuits are primarily limited in accuracy by a) capacitor
Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.
Petrinović, Davor; Brezović, Marko
2011-04-01
We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
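The flavour of such a piecewise-cubic phase-to-sinusoid converter can be sketched with a plain Hermite spline table. This is an illustrative stand-in, not the paper's spectrally derived closed-form coefficients: each segment is a cubic matching sin and its derivative at the knots, and the worst-case amplitude error of a 64-segment table is checked numerically:

```python
import math

def make_psac(segments):
    """Piecewise-cubic phase-to-sine table over one period [0, 2*pi):
    Hermite cubics matching sin and its (scaled) derivative at the knots."""
    h = 2 * math.pi / segments
    # Store (value, h * derivative) at each knot
    knots = [(math.sin(i * h), h * math.cos(i * h)) for i in range(segments + 1)]
    def eval_phase(phi):
        i = min(int(phi / h), segments - 1)
        t = phi / h - i                      # local coordinate in [0, 1]
        y0, m0 = knots[i]
        y1, m1 = knots[i + 1]
        h00 = (1 + 2*t) * (1 - t)**2         # Hermite basis polynomials
        h10 = t * (1 - t)**2
        h01 = t**2 * (3 - 2*t)
        h11 = t**2 * (t - 1)
        return h00*y0 + h10*m0 + h01*y1 + h11*m1
    return eval_phase

psac = make_psac(64)
err = max(abs(psac(k * 2 * math.pi / 4096) - math.sin(k * 2 * math.pi / 4096))
          for k in range(4096))
print(err)  # worst-case error of the 64-segment table
```

Because the cubic Hermite error scales as h⁴, even a modest 64-entry table keeps the amplitude error well below one part per million, which is the property the paper exploits for high-resolution DDS output.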
High accuracy laboratory spectroscopy to support active greenhouse gas sensing
Long, D. A.; Bielska, K.; Cygan, A.; Havey, D. K.; Okumura, M.; Miller, C. E.; Lisak, D.; Hodges, J. T.
2011-12-01
Recent carbon dioxide (CO2) remote sensing missions have set precision targets as demanding as 0.25% (1 ppm) in order to elucidate carbon sources and sinks [1]. These ambitious measurement targets will require the most precise body of spectroscopic reference data ever assembled. Active sensing missions will be especially susceptible to subtle line shape effects as the narrow bandwidth of these measurements will greatly limit the number of spectral transitions which are employed in retrievals. In order to assist these remote sensing missions we have employed frequency-stabilized cavity ring-down spectroscopy (FS-CRDS) [2], a high-resolution, ultrasensitive laboratory technique, to measure precise line shape parameters for transitions of O2, CO2, and other atmospherically-relevant species within the near-infrared. These measurements have led to new HITRAN-style line lists for both 16O2 [3] and rare isotopologue [4] transitions in the A-band. In addition, we have performed detailed line shape studies of CO2 transitions near 1.6 μm under a variety of broadening conditions [5]. We will address recent measurements in these bands as well as highlight recent instrumental improvements to the FS-CRDS spectrometer. These improvements include the use of the Pound-Drever-Hall locking scheme, a high bandwidth servo which enables measurements to be made at rates greater than 10 kHz [6]. In addition, an optical frequency comb will be utilized as a frequency reference, which should allow for transition frequencies to be measured with uncertainties below 10 kHz (3×10⁻⁷ cm⁻¹). [1] C. E. Miller, D. Crisp, P. L. DeCola, S. C. Olsen, et al., J. Geophys. Res.-Atmos. 112, D10314 (2007). [2] J. T. Hodges, H. P. Layer, W. W. Miller, G. E. Scace, Rev. Sci. Instrum. 75, 849-863 (2004). [3] D. A. Long, D. K. Havey, M. Okumura, C. E. Miller, et al., J. Quant. Spectrosc. Radiat. Transfer 111, 2021-2036 (2010). [4] D. A. Long, D. K. Havey, S. S. Yu, M. Okumura, et al., J. Quant. Spectrosc
The accuracy of QCD perturbation theory at high energies
Dalla Brida, Mattia; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-01-01
We discuss the determination of the strong coupling $\alpha_\mathrm{\overline{MS}}(m_\mathrm{Z})$ or equivalently the QCD $\Lambda$-parameter. Its determination requires the use of perturbation theory in $\alpha_s(\mu)$ in some scheme $s$ and at some energy scale $\mu$. The higher the scale $\mu$, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the $\Lambda$-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to $\alpha_s = 0.1$ and below. We find that perturbation theory is very accurate there, yielding a three percent error in the $\Lambda$-parameter, while data around $\alpha_s \approx 0.2$ are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
Numerical simulation of realistic high-temperature superconductors
1997-01-01
One of the main obstacles in the development of practical high-temperature superconducting (HTS) materials is dissipation, caused by the motion of magnetic flux quanta called vortices. Numerical simulations provide a promising new approach for studying these vortices. By exploiting the extraordinary memory and speed of massively parallel computers, researchers can obtain the extremely fine temporal and spatial resolution needed to model complex vortex behavior. The results may help identify new mechanisms to increase current-carrying capabilities and to predict the performance characteristics of HTS materials intended for industrial applications.
A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers
Rao, Yun-jiang; Huang, Shang-lian; Li, Ping; Wen, Yu-mei; Tang, Jun
1990-02-01
In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new-type three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a special single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system has more advantages than those previously reported.
Accuracy Assessment for the Three-Dimensional Coordinates by High-Speed Videogrammetric Measurement
Xianglei Liu
2018-01-01
The high-speed CMOS camera is a new kind of transducer for videogrammetric measurement of the displacement of a high-speed shaking-table structure. The purpose of this paper is to validate the three-dimensional coordinate accuracy of the shaking-table structure acquired from the presented high-speed videogrammetric measuring system. In the paper, all of the key intermediate links are discussed, including the high-speed CMOS videogrammetric measurement system, the layout of the control network, the elliptical target detection, and the accuracy validation of the final 3D spatial results. The accuracy analysis shows that submillimeter accuracy can be achieved for the final three-dimensional spatial coordinates, which certifies that the proposed high-speed videogrammetric technique is a viable alternative to traditional transducer techniques for monitoring the dynamic response of a shaking-table structure.
Lucas, P.; Van Zuijlen, A.H.; Bijl, H.
2009-01-01
Mesh adaptation is a fairly established tool to obtain numerically accurate solutions for flow problems. Computational efficiency is, however, not always guaranteed for the adaptation strategies found in the literature. Typically, excessive mesh growth diminishes the potential efficiency gain. This
High-precision numerical integration of equations in dynamics
Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.
2018-05-01
An important requirement for the process of solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, the motion of cosmic robotic systems, is high accuracy at large time intervals. One of the most effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
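For a right-hand side that is polynomial in the unknowns, the Taylor coefficients follow from a simple recurrence. The sketch below is an illustration of the idea (not the paper's algorithm) for y' = y², whose exact solution 1/(1 - t) makes the accuracy easy to check:

```python
def taylor_step(c0, order, dt):
    """One Taylor-method step for y' = y^2 (a polynomial right-hand side).
    Coefficients of y(t) = sum c_k t^k follow from the Cauchy product:
    (k + 1) c_{k+1} = sum_{j=0..k} c_j c_{k-j}."""
    c = [c0]
    for k in range(order):
        c.append(sum(c[j] * c[k - j] for j in range(k + 1)) / (k + 1))
    # Horner evaluation of the Taylor polynomial at dt
    y = 0.0
    for ck in reversed(c):
        y = y * dt + ck
    return y

y = taylor_step(1.0, 15, 0.1)       # y(0) = 1, order 15, step 0.1
print(y, 1 / (1 - 0.1))             # compare against the exact solution
```

Because the recurrence yields coefficients to arbitrary order, the truncation error per step can be driven far below what a fixed-order Runge-Kutta method achieves at the same step size, which is the point made in the abstract.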
A High-Accuracy Linear Conservative Difference Scheme for Rosenau-RLW Equation
Jinsong Hu
2013-01-01
We study the initial-boundary value problem for the Rosenau-RLW equation. We propose a three-level linear finite difference scheme with theoretical accuracy O(τ² + h⁴). The scheme simulates two conservative properties of the original problem well. The existence and uniqueness of the difference solution, and a priori estimates in the infinity norm, are obtained. Furthermore, we analyze the convergence and stability of the scheme by the energy method. Finally, numerical experiments demonstrate the theoretical results.
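The O(τ² + h⁴) claim rests on fourth-order spatial differencing. As a generic illustration (this is a standard fourth-order central stencil, not the paper's Rosenau-RLW scheme), the following sketch verifies the fourth-order convergence rate on a periodic grid:

```python
import math

def d2_fourth_order(u, h):
    """Fourth-order central approximation of u'' on a periodic grid:
    u'' ~ (-u[i-2] + 16 u[i-1] - 30 u[i] + 16 u[i+1] - u[i+2]) / (12 h^2)."""
    n = len(u)
    return [(-u[(i - 2) % n] + 16 * u[(i - 1) % n] - 30 * u[i]
             + 16 * u[(i + 1) % n] - u[(i + 2) % n]) / (12 * h * h)
            for i in range(n)]

def max_err(n):
    """Max error of the stencil for u = sin(x) on [0, 2*pi), where u'' = -sin(x)."""
    h = 2 * math.pi / n
    u = [math.sin(i * h) for i in range(n)]
    d2 = d2_fourth_order(u, h)
    return max(abs(d2[i] + math.sin(i * h)) for i in range(n))

print(max_err(32), max_err(64))  # halving h shrinks the error roughly 16-fold
```

The factor-of-16 error reduction under grid halving is the signature of the h⁴ convergence order claimed for the scheme's spatial discretization.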
High-accuracy determination for optical indicatrix rotation in ferroelectric DTGS
Kushnir, O. S.; Bevz, O. A.; Vlokh, O. G.
2000-01-01
Optical indicatrix rotation in deuterated ferroelectric triglycine sulphate is studied with a high-accuracy null-polarimetric technique. The behaviour of the effect in the ferroelectric phase is attributed to quadratic spontaneous electrooptics.
Applying recursive numerical integration techniques for solving high dimensional integrals
Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan
2016-11-01
The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
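The core of the RNI idea, applying an efficient one-dimensional Gauss rule dimension by dimension, can be sketched as below. This is an illustrative naive nesting with O(n^d) cost; the paper's actual method exploits the product structure of the Boltzmann weights to avoid that exponential blow-up:

```python
import numpy as np

def rni(f, dim, n=8):
    """Nested numerical integration over the unit cube [0,1]^dim,
    applying a 1D n-point Gauss-Legendre rule dimension by dimension."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w           # rescale weights accordingly
    def integrate(level, prefix):
        if level == dim:
            return f(prefix)
        return sum(wi * integrate(level + 1, prefix + [xi])
                   for xi, wi in zip(x, w))
    return integrate(0, [])

# 5-dimensional test integrand with known value (e - 1)^5
val = rni(lambda xs: np.exp(sum(xs)), dim=5)
print(val, (np.e - 1) ** 5)
```

Even this naive nesting shows the key property the abstract highlights: the Gauss rule's error decays far faster in the number of points per dimension than the 1/√N scaling of Monte Carlo sampling.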
Zhong Xiaolin; Tatineni, Mahidhar
2003-01-01
The direct numerical simulation of receptivity, instability and transition of hypersonic boundary layers requires high-order accurate schemes because lower-order schemes do not have an adequate accuracy level to compute the large range of time and length scales in such flow fields. The main limiting factor in the application of high-order schemes to practical boundary-layer flow problems is the numerical instability of high-order boundary closure schemes on the wall. This paper presents a family of high-order non-uniform-grid finite difference schemes with stable boundary closures for the direct numerical simulation of hypersonic boundary-layer transition. By using an appropriate grid stretching, and clustering grid points near the boundary, high-order schemes with stable boundary closures can be obtained. The order of the schemes ranges from first-order at the lowest to the global spectral collocation method at the highest. The accuracy and stability of the new high-order numerical schemes are tested by numerical simulations of the linear wave equation and two-dimensional incompressible flat-plate boundary layer flows. The high-order non-uniform-grid schemes (up to 11th order) are subsequently applied to the simulation of the receptivity of a hypersonic boundary layer to free-stream disturbances over a blunt leading edge. The steady and unsteady results show that the new high-order schemes are stable and are able to produce high accuracy for computations of the nonlinear two-dimensional Navier-Stokes equations for wall-bounded supersonic flow.
Theoretical and numerical study of highly anisotropic turbulent flows
Biferale, L.; Daumont, I.; Lanotte, A.; Toschi, F.
2004-01-01
We present a detailed numerical study of anisotropic statistical fluctuations in stationary, homogeneous turbulent flows. We address both problems of intermittency in anisotropic sectors, and the relative importance of isotropic and anisotropic fluctuations at different scales on a direct numerical
Experimental and numerical studies of high-velocity impact fragmentation
Kipp, M.E.; Grady, D.E.; Swegle, J.W.
1993-08-01
Developments are reported in both experimental and numerical capabilities for characterizing the debris spray produced in penetration events. We have performed a series of high-velocity experiments specifically designed to examine the fragmentation of the projectile during impact. High-strength, well-characterized steel spheres (6.35 mm diameter) were launched with a two-stage light-gas gun to velocities in the range of 3 to 5 km/s. Normal impact with PMMA plates, thicknesses of 0.6 to 11 mm, applied impulsive loads of various amplitudes and durations to the steel sphere. Multiple flash radiography diagnostics and recovery techniques were used to assess size, velocity, trajectory and statistics of the impact-induced fragment debris. Damage modes to the primary target plate (plastic) and to a secondary target plate (aluminum) were also evaluated. Dynamic fragmentation theories, based on energy-balance principles, were used to evaluate local material deformation and fracture state information from CTH, a three-dimensional Eulerian solid dynamics shock wave propagation code. The local fragment characterization of the material defines a weighted fragment size distribution, and the sum of these distributions provides a composite particle size distribution for the steel sphere. The calculated axial and radial velocity changes agree well with experimental data, and the calculated fragment sizes are in qualitative agreement with the radiographic data. A secondary effort involved the experimental and computational analyses of normal and oblique copper ball impacts on steel target plates. High-resolution radiography and witness plate diagnostics provided impact motion and statistical fragment size data. CTH simulations were performed to test computational models and numerical methods.
Numerical Simulation of Oil Jet Lubrication for High Speed Gears
Tommaso Fondelli
2015-01-01
The Geared Turbofan technology is one of the most promising engine configurations to significantly reduce specific fuel consumption. In this architecture, a power epicyclic gearbox is interposed between the fan and the low-pressure spool. Thanks to the gearbox, the fan and low-pressure spool can turn at different speeds, leading to a higher engine bypass ratio. The gearbox efficiency therefore becomes a key parameter for such technology. Further improvement of efficiency can be achieved by developing a physical understanding of fluid dynamic losses within the transmission system. These losses are mainly related to viscous effects and are directly connected to the lubrication method. In this work, the oil injection losses have been studied by means of CFD simulations. A numerical study of a single oil jet impinging on a single high-speed gear has been carried out using the VOF method. The aim of this analysis is to evaluate the resistant torque due to oil jet lubrication, correlating the torque data with the oil-gear interaction phases. URANS calculations have been performed using an adaptive meshing approach as a way of significantly reducing the simulation costs. A global sensitivity analysis of the adopted models has been carried out and a numerical setup has been defined.
The numerical dynamic for highly nonlinear partial differential equations
Lafon, A.; Yee, H. C.
1992-01-01
Problems associated with the numerical computation of highly nonlinear equations in computational fluid dynamics are set forth and analyzed in terms of the potential ranges of spurious behaviors. A reaction-convection equation with a nonlinear source term is employed to evaluate the effects related to spatial and temporal discretizations. The discretization of the source term is described according to several methods, and the various techniques are shown to have a significant effect on the stability of the spurious solutions. Traditional linearized stability analyses cannot provide the level of confidence required for accurate fluid dynamics computations, and the incorporation of nonlinear analysis is proposed. Nonlinear analysis based on nonlinear dynamical systems complements the conventional linear approach and is valuable in the analysis of hypersonic aerodynamics and combustion phenomena.
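A classic minimal example of the spurious behaviour discussed above (my own illustration, not taken from the paper) is explicit Euler applied to the logistic source term u' = u(1 - u): for large time steps the discrete map admits stable period-doubled orbits that the continuous problem does not possess, exactly the kind of artifact linearized stability analysis misses:

```python
def euler_logistic(u0, dt, steps):
    """Explicit Euler for u' = u(1 - u). For dt < 2 the steady state u = 1
    is stable; for larger dt the discrete map develops spurious
    periodic (and eventually chaotic) solutions."""
    u = u0
    for _ in range(steps):
        u = u + dt * u * (1 - u)
    return u

# Small step: iterates converge to the true steady state u = 1
print(euler_logistic(0.5, 0.1, 2000))
# Large step: iterates settle onto a spurious periodic orbit away from u = 1
a = euler_logistic(0.5, 2.5, 2000)
b = euler_logistic(0.5, 2.5, 2001)
print(a, b)
```

The large-step iterates are bounded and perfectly "stable" in the numerical sense, yet they represent no solution of the underlying ODE, which is why the abstract argues for nonlinear dynamical-systems analysis on top of linearized stability theory.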
Numerical modelling of flow pattern for high swirling flows
Parra Teresa
2015-01-01
This work focuses on the interaction of two coaxial swirling jets. High-swirl burners are suitable for lean flames and produce low emissions. Computational Fluid Dynamics has been used to study the isothermal behaviour of two confined jets whose setup and operating conditions are those of the benchmark of Roback and Johnson. The numerical model uses a Total Variation Diminishing (TVD) scheme, and PISO is used for pressure-velocity coupling. Transient analysis allows identification of the non-axisymmetric region of reverse flow. The center of instantaneous azimuthal velocities is not located on the axis of the chamber. Temporal sampling shows that this center spins around the axis of the device, forming the precessing vortex core (PVC), whose Strouhal numbers are above two for swirl numbers of one. Strong swirl numbers are precursors of large vortex breakdown. Conical diffusers reduce the secondary flows associated with boundary-layer separation.
Tong, Vivian, E-mail: v.tong13@imperial.ac.uk [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Jiang, Jun [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom); Wilkinson, Angus J. [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Britton, T. Ben [Department of Materials, Imperial College London, Prince Consort Road, London SW7 2AZ (United Kingdom)
2015-08-15
High-resolution, cross-correlation-based electron backscatter diffraction (EBSD) measures the variation of elastic strains and lattice rotations from a reference state. Regions near grain boundaries are often of interest, but overlap of patterns from the two grains could reduce the accuracy of the cross-correlation analysis. To explore this concern, patterns from the interior of two grains have been mixed to simulate the interaction volume crossing a grain boundary, so that the effect on the accuracy of the cross-correlation results can be tested. It was found that the accuracy of HR-EBSD strain measurements performed in a FEG-SEM on zirconium remains good until the incident beam is less than 18 nm from a grain boundary. A simulated microstructure was used to measure how often pattern overlap occurs at any given EBSD step size, and a simple relation was found linking the probability of overlap with step size. - Highlights: • Pattern overlap occurs at grain boundaries and reduces HR-EBSD accuracy. • A test is devised to measure the accuracy of HR-EBSD in the presence of overlap. • High-pass filters can sometimes, but not generally, improve HR-EBSD measurements. • Accuracy of HR-EBSD remains high until the reference pattern intensity is <72%. • 9% of points near a grain boundary will have significant error for a 200 nm step size in Zircaloy-4.
Zhao, Dan; Wang, Xiao; Mu, Jie; Li, Zhilin; Zuo, Yanlei; Zhou, Song; Zhou, Kainan; Zeng, Xiaoming; Su, Jingqin; Zhu, Qihua
2017-02-01
The grating tiling technology is one of the most effective means of increasing the aperture of gratings. The line-density error (LDE) between sub-gratings degrades the performance of tiled gratings, so high-accuracy measurement and compensation of the LDE are important for improving the output pulse characteristics of the tiled-grating compressor. In this paper, the influence of the LDE on the output pulses of the tiled-grating compressor is quantitatively analyzed by means of numerical simulation, and the output beam drift and output pulse broadening resulting from the LDE are presented. Based on the numerical results, we propose a compensation method that reduces the degradation of the tiled-grating compressor by applying an angular tilt error and a longitudinal piston error at the same time. Moreover, a monitoring system is set up to measure the LDE between sub-gratings accurately, and the dispersion variation due to the LDE is also demonstrated based on spatial-spectral interference. In this way, we can realize high-accuracy measurement and compensation of the LDE, which provides an efficient way to guide the adjustment of tiled gratings.
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy, and hence demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether the accuracy needed for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each work-flow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated through drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging. In this set-up, errors of [Formula: see text] and [Formula: see text] were achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu
2015-07-01
The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet the increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance computing code for massively parallel computers, which greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, implemented the data initialization and exchange between the computing nodes, and implemented the core solving module using a hybrid of parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results successfully demonstrate the enhanced performance of THC-MP on parallel computing facilities.
High speed numerical integration algorithm using FPGA
Conventionally, numerical integration algorithms are executed in software and are time-consuming to accomplish. Field Programmable Gate Arrays (FPGAs) can be used as a much faster, very efficient and reliable alternative for implementing numerical integration algorithms. This paper proposes a hardware implementation of four ...
Adaptive sensor-based ultra-high accuracy solar concentrator tracker
Brinkley, Jordyn; Hassanzadeh, Ali
2017-09-01
Conventional solar trackers use information about the sun's position, obtained either by direct sensing or by GPS. Our method instead uses the shading of the receiver. This, coupled with a nonimaging optics design, allows us to achieve ultra-high concentration. Incorporating a sensor-based shadow tracking method with a two-stage concentrating solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.
Functional knowledge transfer for high-accuracy prediction of under-studied biological processes.
Christopher Y Park
A key challenge in genetics is identifying the functional roles of genes in pathways. Numerous functional genomics techniques (e.g. machine learning) that predict protein function have been developed to address this question. These methods generally build from existing annotations of genes to pathways and thus are often unable to identify additional genes participating in processes that are not already well studied. Many of these processes are well studied in some organism, but not necessarily in an investigator's organism of interest. Sequence-based search methods (e.g. BLAST) have been used to transfer such annotation information between organisms. We demonstrate that functional genomics can complement traditional sequence similarity to improve the transfer of gene annotations between organisms. Our method transfers annotations only when functionally appropriate, as determined by genomic data, and can be used with any prediction algorithm to combine transferred gene function knowledge with organism-specific high-throughput data to enable accurate function prediction. We show that diverse state-of-the-art machine learning algorithms leveraging functional knowledge transfer (FKT) dramatically improve their accuracy in predicting gene-pathway membership, particularly for processes with little experimental knowledge in an organism. We also show that our method compares favorably to annotation transfer by sequence similarity. Next, we deploy FKT with a state-of-the-art SVM classifier to predict novel genes for 11,000 biological processes across six diverse organisms and expand the coverage of accurate function predictions to processes that are often ignored because of a dearth of annotated genes in an organism. Finally, we perform in vivo experimental investigation in Danio rerio and confirm the regulatory role of our top predicted novel gene, wnt5b, in leftward cell migration during heart development. FKT is immediately applicable to many bioinformatics
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
2017-02-01
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
A qualitative numerical study of high dimensional dynamical systems
Albers, David James
Since Poincaré, the father of modern mathematical dynamical systems, much effort has been exerted to achieve a qualitative understanding of the physical world via a qualitative understanding of the functions we use to model the physical world. In this thesis, we construct a numerical framework suitable for a qualitative, statistical study of dynamical systems using the space of artificial neural networks. We analyze the dynamics along intervals in parameter space, separating the set of neural networks into roughly four regions: the fixed point to the first bifurcation; the route to chaos; the chaotic region; and a transition region between chaos and finite-state neural networks. The study is primarily with respect to high-dimensional dynamical systems. We make the following general conclusions as the dimension of the dynamical system is increased: the probability of the first bifurcation being of type Neimark-Sacker is greater than ninety percent; the most probable route to chaos is via a cascade of bifurcations of high-period periodic orbits, quasi-periodic orbits, and 2-tori; there exists an interval of parameter space such that hyperbolicity is violated on a countable, Lebesgue measure 0, "increasingly dense" subset; chaos is much more likely to persist with respect to parameter perturbation in the chaotic region of parameter space as the dimension is increased; moreover, as the number of positive Lyapunov exponents is increased, the likelihood that any significant portion of these positive exponents can be perturbed away decreases with increasing dimension. The maximum Kaplan-Yorke dimension and the maximum number of positive Lyapunov exponents increases linearly with dimension. The probability of a dynamical system being chaotic increases exponentially with dimension. The results with respect to the first bifurcation and the route to chaos comment on previous results of Newhouse, Ruelle, Takens, Broer, Chenciner, and Iooss. Moreover, results regarding the high
Accuracy of hiatal hernia detection with esophageal high-resolution manometry
Weijenborg, P. W.; van Hoeij, F. B.; Smout, A. J. P. M.; Bredenoord, A. J.
2015-01-01
The diagnosis of a sliding hiatal hernia is classically made with endoscopy or barium esophagogram. Spatial separation of the lower esophageal sphincter (LES) and diaphragm, the hallmark of hiatal hernia, can also be observed on high-resolution manometry (HRM), but the diagnostic accuracy of this
Gnad, Florian; de Godoy, Lyris M F; Cox, Jürgen
2009-01-01
Protein phosphorylation is a fundamental regulatory mechanism that affects many cell signaling processes. Using high-accuracy MS and stable isotope labeling by amino acids in cell culture (SILAC), we provide a global view of the Saccharomyces cerevisiae phosphoproteome, containing 3620 phosphorylation sites ma...
High accuracy positioning using carrier-phases with the opensource GPSTK software
Salazar Hernández, Dagoberto José; Hernández Pajares, Manuel; Juan Zornoza, José Miguel; Sanz Subirana, Jaume
2008-01-01
The objective of this work is to show how, using a proper GNSS data management strategy combined with the flexibility provided by the open-source GPS Toolkit (GPSTk), it is possible to easily develop both simple code-based processing strategies and basic high-accuracy carrier-phase positioning techniques like Precise Point Positioning (PPP
Very high-accuracy calibration of radiation pattern and gain of a near-field probe
Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav
2014-01-01
In this paper, very high-accuracy calibration of the radiation pattern and gain of a near-field probe is described. An open-ended waveguide near-field probe has been used in a recent measurement of the C-band Synthetic Aperture Radar (SAR) Antenna Subsystem for the Sentinel 1 mission of the Europ...
From journal to headline: the accuracy of climate science news in Danish high quality newspapers
Vestergård, Gunver Lystbæk
2011-01-01
analysis to examine the accuracy of Danish high quality newspapers in quoting scientific publications from 1997 to 2009. Out of 88 articles, 46 contained inaccuracies, though the majority were found to be insignificant and random. The study concludes that Danish broadsheet newspapers are ‘moderately...
Technics study on high accuracy crush dressing and sharpening of diamond grinding wheel
Jia, Yunhai; Lu, Xuejun; Li, Jiangang; Zhu, Lixin; Song, Yingjie
2011-05-01
Mechanical grinding of artificial diamond grinding wheels is the traditional wheel dressing process, in which the rotation speed and infeed depth of the tool wheel are the main process parameters. Suitable process parameters for high-accuracy crush dressing of metal-bonded and resin-bonded diamond grinding wheels were obtained through a large number of experiments on a super-hard material wheel-dressing grinding machine and through analysis of the grinding force. At the same time, the effects of machine sharpening and sprinkle-granule sharpening were compared. These analyses and experiments provide practical guidance for high-accuracy crush dressing of artificial diamond grinding wheels.
High accuracy interface characterization of three phase material systems in three dimensions
Jørgensen, Peter Stanley; Hansen, Karin Vels; Larsen, Rasmus
2010-01-01
Quantification of interface properties such as two-phase boundary area and triple-phase boundary length is important in the characterization of many material microstructures, in particular for solid oxide fuel cell electrodes. Three-dimensional images of these microstructures can be obtained by tomography schemes such as focused ion beam serial sectioning or micro-computed tomography. We present a high-accuracy method of calculating two-phase surface areas and triple-phase length of triple-phase systems from subvoxel accuracy segmentations of the constituent phases. The method performs a three-phase polygonization of the interface boundaries which results in a non-manifold mesh of connected faces. We show how the triple-phase boundaries can be extracted as connected curve loops without branches. The accuracy of the method is analyzed by calculations on geometrical primitives...
Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation
Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter
1996-01-01
The development of three-dimensional automotive devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned with high accuracy, reliability and automation, by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end-position of the system is controlled by a hard/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.
Arkar, C.; Medved, S. [University of Ljubljana, Faculty of Mechanical Engineering, Askerceva 6, 1000 Ljubljana (Slovenia)
2005-11-01
With the integration of latent-heat thermal energy storage (LHTES) in building services, solar energy and the coldness of ambient air can be efficiently used to reduce the energy used for heating and cooling and to improve the level of living comfort. For this purpose, a cylindrical LHTES containing spheres filled with paraffin was developed. For the proper modelling of the LHTES thermal response the thermal properties of the phase change material (PCM) must be accurately known. This article presents the influence of the accuracy of thermal property data of the PCM on the result of the prediction of the LHTES's thermal response. A packed bed numerical model was adapted to take into account the non-uniformity of the PCM's porosity and the fluid's velocity. Both are the consequence of a small tube-to-sphere diameter ratio, which is characteristic of the developed LHTES. The numerical model can also take into account the PCM's temperature-dependent thermal properties. The temperature distribution of the latent heat of the paraffin (RT20) used in the experiment in the form of apparent heat capacity was determined using a differential scanning calorimeter (DSC) at different heating and cooling rates. A comparison of the numerical and experimental results confirmed our hypothesis relating to the important role that the PCM's thermal properties play, especially during slow running processes, which are characteristic for our application.
T. Chourushi
2017-01-01
Viscoelastic fluids, due to their non-linear nature, play an important role in process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though they look simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are considered far from accurate, as the first-order upwind-differencing scheme (UDS) is often employed to improve the stability of the algorithm. To elude this effect, some works have been reported in the past where high-resolution schemes (HRS) were employed and the Deborah number was varied. However, these works are limited to creeping flows and do not detail any information on the numerical stability of HRS. Hence, this article presents a numerical study of high-shearing contraction flows, where the stability of HRS is addressed in reference to fluid elasticity. Results suggest that all HRS show some order of undue oscillations in flow variable profiles, measured along vertical lines placed near the contraction region in the upstream section of the domain, at an elasticity number E≈5. Furthermore, by varying E, a clear relationship between the numerical stability of HRS and E was obtained, which states that the order of undue oscillations in flow variable profiles is directly proportional to E.
High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air
Cees van der Geld
2010-08-01
An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2% RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
High accuracy digital aging monitor based on PLL-VCO circuit
Zhang Yuejun; Jiang Zhidi; Wang Pengjun; Zhang Xuelong
2015-01-01
As the manufacturing process is scaled down to the nanoscale, the aging phenomenon significantly affects the reliability and lifetime of integrated circuits. Consequently, the precise measurement of digital CMOS aging is a key aspect of nanoscale aging-tolerant circuit design. This paper proposes a high accuracy digital aging monitor using a phase-locked loop and voltage-controlled oscillator (PLL-VCO) circuit. The proposed monitor eliminates the circuit self-aging effect owing to the characteristic of the PLL, whose frequency is unaffected by the circuit aging phenomenon. The PLL-VCO monitor is implemented in TSMC low power 65 nm CMOS technology, and its area occupies 303.28 × 298.94 μm². After accelerated aging tests, the experimental results show that the PLL-VCO monitor improves accuracy under high temperature by 2.4% and under high voltage by 18.7%. (semiconductor integrated circuits)
High accuracy acoustic relative humidity measurement in duct flow with air.
van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees
2010-01-01
An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.
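The line-averaged velocity and temperature measurement rests on standard counter-propagating transit-time relations for a pair of ultrasonic transducers. A minimal sketch under ideal-gas assumptions, with illustrative names and dry-air property values (the paper's RH inference, which exploits the humidity dependence of the gas properties, is not reproduced here):

```python
import math

def transit_time_flow(L, theta_rad, t_down, t_up):
    """Line-averaged gas velocity and speed of sound from downstream/upstream
    ultrasonic transit times over a path of length L inclined at theta to the
    flow (standard transit-time relations; geometry names are illustrative)."""
    v = (L / (2.0 * math.cos(theta_rad))) * (1.0 / t_down - 1.0 / t_up)
    c = (L / 2.0) * (1.0 / t_down + 1.0 / t_up)
    return v, c

def temperature_from_c(c, gamma=1.4, R=8.314, M=0.02897):
    """Ideal-gas temperature [K] from speed of sound; gamma and molar mass M
    here are dry-air values, whereas a humid mixture shifts both (which is
    what links the acoustic measurement to relative humidity)."""
    return c * c * M / (gamma * R)

# Synthetic check: transit times generated from a known velocity and
# speed of sound are recovered exactly by the relations above.
L, theta = 0.1, math.pi / 4
c_true, v_true = 343.0, 10.0
t_down = L / (c_true + v_true * math.cos(theta))
t_up = L / (c_true - v_true * math.cos(theta))
v_est, c_est = transit_time_flow(L, theta, t_down, t_up)
```

The difference of the inverse transit times isolates the flow velocity, while their sum isolates the speed of sound, so one transducer pair yields both quantities simultaneously, as the abstract describes.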
A proposal for limited criminal liability in high-accuracy endoscopic sinus surgery.
Voultsos, P; Casini, M; Ricci, G; Tambone, V; Midolo, E; Spagnolo, A G
2017-02-01
The aim of the present study is to propose legal reform limiting surgeons' criminal liability in high-accuracy and high-risk surgery such as endoscopic sinus surgery (ESS). The study includes a review of the medical literature, focusing on identifying and examining reasons why ESS carries a very high risk of serious complications related to inaccurate surgical manoeuvres, and reviewing British and Italian legal theory and case-law on medical negligence, especially with regard to Italian Law 189/2012 (the so-called "Balduzzi" Law). It was found that serious complications due to inaccurate surgical manoeuvres may occur in ESS regardless of the skill, experience and prudence/diligence of the surgeon. Subjectivity should be essential to medical negligence, especially regarding high-accuracy surgery. Italian Law 189/2012 represents a good basis for the limitation of criminal liability resulting from inaccurate manoeuvres in high-accuracy surgery such as ESS. It is concluded that ESS surgeons should be relieved of criminal liability in cases of simple/ordinary negligence where guidelines have been observed. © Copyright by Società Italiana di Otorinolaringologia e Chirurgia Cervico-Facciale, Rome, Italy.
Mazaheri, Alireza; Ricchiuto, Mario; Nishikawa, Hiroaki
2016-01-01
In this paper, we introduce a new hyperbolic first-order system for general dispersive partial differential equations (PDEs). We then extend the proposed system to general advection-diffusion-dispersion PDEs. We apply the fourth-order RD scheme of Ref. 1 to the proposed hyperbolic system, and solve time-dependent dispersive equations, including the classical two-soliton KdV and a dispersive shock case. We demonstrate that the predicted results, including the gradient and Hessian (second derivative), are in a very good agreement with the exact solutions. We then show that the RD scheme applied to the proposed system accurately captures dispersive shocks without numerical oscillations. We also verify that the solution, gradient and Hessian are predicted with equal order of accuracy.
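The idea of recasting higher-order terms as a first-order hyperbolic system can be illustrated on plain diffusion, following Nishikawa's first-order hyperbolic diffusion system (a sketch of the underlying construction only; the paper's system extends this to dispersive, third-order terms):

```latex
% Diffusion equation u_t = \nu u_{xx} rewritten as a first-order
% hyperbolic relaxation system with an auxiliary variable p \approx u_x:
\begin{aligned}
  \partial_t u &= \nu\, \partial_x p, \\
  \partial_t p &= \frac{1}{T_r}\left( \partial_x u - p \right).
\end{aligned}
% In the relaxed limit p \to \partial_x u the original diffusion
% equation is recovered; T_r is a free relaxation time.
```

Because the reformulated system is first order, the gradient (and, for the dispersive extension, the Hessian) is carried as a solution variable, which is why the scheme can deliver it with the same order of accuracy as the solution itself.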
High-accuracy energy formulas for the attractive two-site Bose-Hubbard model
Ermakov, Igor; Byrnes, Tim; Bogoliubov, Nikolay
2018-02-01
The attractive two-site Bose-Hubbard model is studied within the framework of the analytical solution obtained by the application of the quantum inverse scattering method. The structure of the ground and excited states is analyzed in terms of solutions of Bethe equations, and an approximate solution for the Bethe roots is given. This yields approximate formulas for the ground-state energy and for the first excited-state energy. The obtained formulas work with remarkable precision for a wide range of parameters of the model, and are confirmed numerically. An expansion of the Bethe state vectors into a Fock space is also provided for evaluation of expectation values, although this does not have accuracy similar to that of the energies.
Accuracy and repeatability positioning of high-performance lathe for non-circular turning
Majda Paweł
2017-11-01
This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe’s numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.
Accuracy and repeatability positioning of high-performance lathe for non-circular turning
Majda, Paweł; Powałka, Bartosz
2017-11-01
This paper presents research on the accuracy and repeatability of CNC axis positioning in an innovative lathe with an additional Xs axis. This axis is used to perform movements synchronized with the angular position of the main drive, i.e. the spindle, and with the axial feed along the Z axis. This enables the one-pass turning of non-circular surfaces, rope and trapezoidal threads, as well as the surfaces of rotary tools such as a gear cutting hob, etc. The paper presents and discusses the interpretation of results and the calibration effects of positioning errors in the lathe's numerical control system. Finally, it shows the geometric characteristics of the rope thread turned at various spindle speeds, including before and after-correction of the positioning error of the Xs axis.
Selle, L.; Ferret, B. [Universite de Toulouse, INPT, UPS, IMFT, Institut de Mecanique des Fluides de Toulouse (France); CNRS, IMFT, Toulouse (France); Poinsot, T. [Universite de Toulouse, INPT, UPS, IMFT, Institut de Mecanique des Fluides de Toulouse (France); CNRS, IMFT, Toulouse (France); CERFACS, Toulouse (France)
2011-01-15
Measuring the velocities of premixed laminar flames with precision remains a controversial issue in the combustion community. This paper studies the accuracy of such measurements in two-dimensional slot burners and shows that while methane/air flame speeds can be measured with reasonable accuracy, the method may lack precision for other mixtures such as hydrogen/air. Curvature at the flame tip, strain on the flame sides and local quenching at the flame base can modify local flame speeds and require corrections which are studied using two-dimensional DNS. Numerical simulations also provide stretch, displacement and consumption flame speeds along the flame front. For methane/air flames, DNS show that the local stretch remains small so that the local consumption speed is very close to the unstretched premixed flame speed. The only correction needed to correctly predict flame speeds in this case is due to the finite aspect ratio of the slot used to inject the premixed gases which induces a flow acceleration in the measurement region (this correction can be evaluated from velocity measurement in the slot section or from an analytical solution). The method is applied to methane/air flames with and without water addition and results are compared to experimental data found in the literature. The paper then discusses the limitations of the slot-burner method to measure flame speeds for other mixtures and shows that it is not well adapted to mixtures with a Lewis number far from unity, such as hydrogen/air flames. (author)
Kong, Xiangxue; Tang, Lei; Ye, Qiang; Huang, Wenhua; Li, Jianyi
2017-11-01
Accurate and safe posterior thoracic pedicle insertion (PTPI) remains a challenge. Patient-specific drill templates (PDTs) created by rapid prototyping (RP) can assist in posterior thoracic pedicle insertion, but pose biocompatibility risks. The aims of this study were to develop alternative PDTs with computer numerical control (CNC) and assess their feasibility and accuracy in assisting PTPI. Preoperative CT images of 31 cadaveric thoracic vertebrae were obtained and the optimal pedicle screw trajectories were planned. The PDTs with optimal screw trajectories were randomly assigned to be designed and manufactured by CNC or RP in each vertebra. With the guidance of the CNC- or RP-manufactured PDTs, the appropriate screws were inserted into the pedicles. Postoperative CT scans were performed to analyze any deviations at the entry point and midpoint of the pedicles. The CNC group showed significantly shorter manufacturing time and lower cost than the RP group (P < 0.05). The screw positions were grade 0 in 90.3% and grade 1 in 9.7% of the cases in the CNC group, and grade 0 in 93.5% and grade 1 in 6.5% of the cases in the RP group (P = 0.641). CNC-manufactured PDTs are viable for assisting in PTPI with good feasibility and accuracy.
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
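The gene-masking idea can be sketched as a binary-encoded genetic algorithm wrapped around a classifier. The sketch below is a minimal illustration, not the paper's implementation: a toy nearest-centroid classifier and synthetic data stand in for the classifiers and datasets used in the study, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Nearest-centroid training accuracy on the masked feature subset
    (a toy stand-in for the classifiers used in the paper)."""
    if not mask.any():
        return 0.0
    Xm = X[:, mask]
    centroids = np.stack([Xm[y == c].mean(axis=0) for c in np.unique(y)])
    pred = ((Xm[:, None, :] - centroids[None]) ** 2).sum(-1).argmin(axis=1)
    return float((pred == y).mean())

def gene_masking(X, y, pop=24, gens=30, p_mut=0.05):
    """Evolve binary feature masks with a simple genetic algorithm:
    tournament selection, uniform crossover, bit-flip mutation."""
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.5
    best_mask, best_fit = population[0].copy(), -1.0
    for _ in range(gens):
        fits = np.array([fitness(m, X, y) for m in population])
        i = int(fits.argmax())
        if fits[i] > best_fit:
            best_fit, best_mask = float(fits[i]), population[i].copy()
        # binary tournament selection of the next generation's parents
        a, b = rng.integers(0, pop, pop), rng.integers(0, pop, pop)
        parents = population[np.where(fits[a] >= fits[b], a, b)]
        # uniform crossover between paired parents, then bit-flip mutation
        cross = rng.random((pop, n)) < 0.5
        children = np.where(cross, parents, parents[::-1])
        population = children ^ (rng.random((pop, n)) < p_mut)
    return best_mask, best_fit

# Toy data: 2 informative dimensions among 20, two classes
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 20))
X[:, 0] += 3.0 * y
X[:, 1] -= 3.0 * y
mask, acc = gene_masking(X, y)
```

The evolved mask both selects the features used for classification and, as the abstract notes, flags which features contribute most, since non-contributing dimensions tend to be masked out.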
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Z-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
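The arbitrary-frequency-resolution idea can be illustrated directly: the sketch below evaluates the finite Fourier transform of sampled data at user-chosen frequencies by plain summation. This is a minimal stand-in, not the paper's method; the paper additionally uses cubic interpolation of the samples for accuracy and the chirp Z-transform to compute the same sums efficiently on a uniform frequency grid.

```python
import numpy as np

def finite_fourier(x, dt, freqs_hz):
    """Finite Fourier transform X(f) = dt * sum_n x[n] * exp(-2j*pi*f*n*dt),
    evaluated at arbitrary frequencies by direct summation."""
    x = np.asarray(x)
    n = np.arange(len(x))
    f = np.atleast_1d(np.asarray(freqs_hz, dtype=float))[:, None]
    return dt * (x[None, :] * np.exp(-2j * np.pi * f * n * dt)).sum(axis=1)

# Arbitrary resolution: probe around a 7 Hz tone sampled at 100 Hz,
# even though 7 Hz does not fall on a standard FFT bin for 256 samples.
fs, f0 = 100.0, 7.0
t = np.arange(256) / fs
x = np.sin(2 * np.pi * f0 * t)
X = finite_fourier(x, 1 / fs, [5.0, 7.0, 9.0])
```

At FFT bin frequencies f = k*fs/N this reduces exactly to dt times the standard DFT, so the gain over a plain FFT is purely the freedom to place evaluation frequencies wherever the data demand detail.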
High-Accuracy Spherical Near-Field Measurements for Satellite Antenna Testing
Breinbjerg, Olav
2017-01-01
The spherical near-field antenna measurement technique is unique in combining several distinct advantages and it generally constitutes the most accurate technique for experimental characterization of radiation from antennas. From the outset in 1970, spherical near-field antenna measurements have matured into a well-established technique that is widely used for testing antennas for many wireless applications. In particular, for high-accuracy applications, such as remote sensing satellite missions in ESA's Earth Observation Programme with uncertainty requirements at the level of 0.05 dB - 0.10 dB, the spherical near-field antenna measurement technique is generally superior. This paper addresses the means to achieving high measurement accuracy; these include the measurement technique per se, its implementation in terms of proper measurement procedures, the use of uncertainty estimates, as well as facility...
A New Approach to High-accuracy Road Orthophoto Mapping Based on Wavelet Transform
Ming Yang
2011-12-01
Full Text Available Existing orthophoto maps based on satellite and aerial photography are not precise enough for road marking. This paper proposes a new approach to high-accuracy orthophoto mapping. The approach uses inverse perspective transformation to process the image information and generate orthophoto fragments. An offline interpolation algorithm processes the location information: it combines dead reckoning with EKF location estimates, and uses the result to transform the fragments into the global coordinate system. Finally, a wavelet transform divides the image into two frequency bands, which are fused separately with a weighted median algorithm. Experimental results show that maps produced with this method have high accuracy.
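The inverse perspective step amounts to mapping pixel coordinates onto the ground plane with a 3x3 homography. A minimal sketch; the matrix values below are placeholders (a real H comes from camera calibration and extrinsics), not the paper's calibration:

```python
import numpy as np

def inverse_perspective(H, pts):
    """Map pixel coordinates to ground-plane coordinates with a 3x3
    homography H (projective division by the third component)."""
    pts = np.asarray(pts, float)
    hom = np.c_[pts, np.ones(len(pts))] @ H.T   # to homogeneous, apply H
    return hom[:, :2] / hom[:, 2:3]             # perspective division

# Placeholder homography: pure scale + translation, for illustration only
H = np.array([[0.02, 0.0, -5.0],
              [0.0, 0.02, -3.0],
              [0.0, 0.0, 1.0]])
ground = inverse_perspective(H, [[250.0, 150.0], [400.0, 150.0]])
```

Each warped fragment produced this way can then be placed in the global frame using the interpolated vehicle pose, as the abstract describes.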
Identification and delineation of areas flood hazard using high accuracy of DEM data
Riadi, B.; Barus, B.; Widiatmaka; Yanuar, M. J. P.; Pramudya, B.
2018-05-01
Flood incidents that often occur in Karawang regency need to be mitigated. Expectations rest on technologies that can predict, anticipate and reduce disaster risks. Flood modeling techniques using Digital Elevation Model (DEM) data can be applied in mitigation activities. High-accuracy DEM data used in modeling will result in better flood models. Processing high-accuracy DEM data yields information about surface morphology which can be used to identify indications of flood hazard areas. The purpose of this study was to identify and delineate flood hazard areas by mapping wetland areas using DEM data and Landsat-8 images. High-resolution TerraSAR-X data are used to detect wetlands from the landscape, while land cover is identified from Landsat image data. The Topographic Wetness Index (TWI) method is used to detect and identify wetland areas from the DEM data, while land cover is analyzed with the Tasseled Cap Transformation (TCT) method. The TWI modeling yields information about land with flood potential. Overlaying the TWI map with the land cover map shows that, in Karawang regency, the areas most vulnerable to flooding are rice fields. The spatial accuracy of the flood hazard area in this study was 87%.
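The TWI used here reduces to one formula, TWI = ln(a / tan(beta)), where a is the specific catchment area and beta the local slope. A minimal numpy sketch on synthetic grids (the cell size and accumulation values are assumptions, not the study's data):

```python
import numpy as np

def twi(flow_acc_cells, slope_rad, cell_size=1.0, eps=1e-6):
    """Topographic Wetness Index: TWI = ln(a / tan(beta)).
    flow_acc_cells is a flow-accumulation grid in cells (from any
    D8-style routing); a is the contributing area per cell."""
    a = (flow_acc_cells + 1.0) * cell_size        # cells -> contributing area
    return np.log(a / (np.tan(slope_rad) + eps))  # eps guards flat cells

# Synthetic example: wetness rises with accumulation, falls with slope
acc = np.array([[0.0, 10.0], [100.0, 1000.0]])
slope = np.deg2rad(np.array([[20.0, 20.0], [2.0, 2.0]]))
wetness = twi(acc, slope, cell_size=30.0)
```

High-TWI cells (large upslope area, gentle slope) are the wetland / flood-potential candidates that the study overlays with land cover.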
Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt
2018-03-01
Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries’ orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched filtering search of the repeated burst and eccentric inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 {M}ȯ –30 {M}ȯ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ∼129 (38) and ∼2 (11) for an initially highly eccentric binary assuming an initial pericenter distance of 20 M tot (10 M tot).
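The Fisher matrix technique used above estimates parameter measurement accuracy from the sensitivity of the signal model to each parameter. A toy sketch with a simple sinusoid standing in for the eccentric waveform (the model, noise level and parameters are illustrative assumptions, not the paper's):

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma, h=1e-6):
    """F_ij = sum_t (dm/dtheta_i)(dm/dtheta_j) / sigma^2 for white noise;
    derivatives by central finite differences.  The parameter covariance
    (Cramer-Rao bound) is the inverse of F."""
    theta = np.asarray(theta, float)
    grads = []
    for i in range(len(theta)):
        dp = theta.copy(); dm = theta.copy()
        dp[i] += h; dm[i] -= h
        grads.append((model(dp, t) - model(dm, t)) / (2 * h))
    G = np.array(grads)
    return G @ G.T / sigma**2

# Toy signal model: amplitude A and frequency f of a sinusoid
def model(theta, t):
    A, f = theta
    return A * np.sin(2 * np.pi * f * t)

t = np.linspace(0.0, 10.0, 1000, endpoint=False)
sigma = 0.1
F = fisher_matrix(model, [1.0, 2.3], t, sigma)
errors = np.sqrt(np.diag(np.linalg.inv(F)))   # 1-sigma parameter accuracies
```

For this model the amplitude error should approach the analytic white-noise value sigma*sqrt(2/N), which is a useful self-check on the finite-difference Fisher matrix.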
Numerical Analysis of Film Cooling at High Blowing Ratio
El-Gabry, Lamyaa; Heidmann, James; Ameri, Ali
2009-01-01
Computational Fluid Dynamics is used in the analysis of a film cooling jet in crossflow. Predictions of film effectiveness are compared with experimental results for a circular jet at blowing ratios ranging from 0.5 to 2.0. Film effectiveness is a surface quantity which alone is insufficient in understanding the source and finding a remedy for shortcomings of the numerical model. Therefore, in addition, comparisons are made to flow field measurements of temperature along the jet centerline. These comparisons show that the CFD model is accurately predicting the extent and trajectory of the film cooling jet; however, there is a lack of agreement in the near-wall region downstream of the film hole. The effects of main stream turbulence conditions, boundary layer thickness, turbulence modeling, and numerical artificial dissipation are evaluated and found to have an insufficient impact in the wake region of separated films (i.e. cannot account for the discrepancy between measured and predicted centerline fluid temperatures). Analyses of low and moderate blowing ratio cases are carried out and results are in good agreement with data.
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Guanwu Zhou
2014-07-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed, and the hardware to implement the system is fabricated. Then, a LabVIEW program is developed which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. After calibration on the computer, the implementation of the algorithm is ported to a microcontroller unit (MCU). Practical pressure measurement experiments are carried out to verify the system's performance. Temperature compensation is achieved over the interval from −40 to 85 °C. The compensated sensor is aimed at pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its better generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and improve to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C, respectively, after compensation. The experimental results demonstrate that the proposed system meets the temperature compensation and high accuracy requirements of the sensor.
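The ELM calibration core is small: a random, fixed hidden layer plus output weights solved in closed form by least squares, which is what gives it the fast training the abstract cites. A sketch on synthetic drift data (the network size and the drift surface are assumptions, not the sensor data):

```python
import numpy as np

rng = np.random.default_rng(42)

class ELM:
    """Extreme learning machine: the hidden layer is random and fixed;
    only the linear output weights are solved, in closed form."""
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def _h(self, X):
        return np.tanh(X @ self.Win + self.bias)

    def fit(self, X, y):
        self.Win = rng.standard_normal((X.shape[1], self.n_hidden))
        self.bias = rng.standard_normal(self.n_hidden)
        H = self._h(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form solve
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# Synthetic "drift" surface: output depends nonlinearly on (pressure, temperature)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2
elm = ELM(n_hidden=80).fit(X, y)
mse = np.mean((elm.predict(X) - y) ** 2)
```

Because training is one linear solve, recalibrating a batch of sensors is cheap, which matches the batch-compensation argument in the abstract.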
Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning
Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes
2014-05-01
In the past few years the need for high-accuracy positioning has been increasing, and spatial technologies have been widely used to meet it. GNSS (Global Navigation Satellite System) has revolutionized geodetic positioning activities. Among the existing methods one can emphasize Precise Point Positioning (PPP) and network-based positioning. But to get high accuracy with these methods, especially in real time, appropriate atmospheric modeling (ionosphere and troposphere) is indispensable. For the troposphere there are empirical models (for example Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are desired, these models may not be appropriate for the Brazilian reality. Numerical Weather Prediction (NWP) models can minimize this limitation. In Brazil, CPTEC/INPE (Center for Weather Prediction and Climate Studies / National Institute for Space Research) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. The main goal of this paper is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km model) in PPP and network-based positioning. For PPP, data from dozens of stations over the Brazilian territory were used, including the Amazon forest. The results obtained with the NWP model were compared with the Hopfield model, and the NWP model presented the best results in all experiments. For network-based positioning, data from the GNSS/SP Network in São Paulo State, Brazil, were used. This network presents the best configuration in the country for this kind of positioning, and is currently composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/).
Frouzakis, C. E.; Boulouchos, K.
2005-12-15
This comprehensive, illustrated final report for the Swiss Federal Office of Energy (SFOE) covers the work done at the Swiss Federal Institute of Technology in Zurich on the numerical simulation of combustion processes at high Reynolds numbers. The authors note that, with appropriately extensive computational effort, results can be obtained that demonstrate a high degree of accuracy. A large part of the project work was devoted to the development of algorithms for the simulation of combustion processes. Application work is also discussed, with research on combustion stability being carried on. The direct numerical simulation (DNS) methods used are described and co-operation with other institutes is noted. The results of experimental work are compared with those provided by simulation and are discussed in detail. Conclusions and an outlook round off the report.
Numerical determination of injector design for high beam quality
Boyd, J.K.
1985-01-01
The performance of a free electron laser strongly depends on the electron beam quality or brightness. The electron beam is transported into the free electron laser after it has been accelerated to the desired energy. Typically the maximum beam brightness produced by an accelerator is constrained by the beam brightness delivered by the accelerator injector. Thus it is important to design the accelerator injector to yield the required electron beam brightness. The DPC (Darwin Particle Code) computer code has been written to numerically model accelerator injectors. DPC solves for the transport of a beam from emission through acceleration up to the full energy of the injector. The relativistic force equation is solved to determine particle orbits. Field equations are solved for self-consistent electric and magnetic fields in the Darwin approximation. DPC has been used to investigate the beam quality consequences of A-K gap, accelerating stress, electrode configuration and axial magnetic field profile.
Numerical simulation of high Reynolds number bubble motion
McLaughlin, J.B. [Clarkson Univ., Potsdam, NY (United States)
1995-12-31
This paper presents the results of numerical simulations of bubble motion. All the results are for single bubbles in unbounded fluids. The liquid phase is quiescent except for the motion created by the bubble, which is axisymmetric. The main focus of the paper is on bubbles that are of order 1 mm in diameter in water. Of particular interest is the effect of surfactant molecules on bubble motion. Results for the "insoluble surfactant" model will be presented. These results extend research by other investigators to finite Reynolds numbers. The results indicate that, by assuming complete coverage of the bubble surface, one obtains good agreement with experimental observations of bubble motion in tap water. The effect of surfactant concentration on the separation angle is discussed.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Zheng You
2013-04-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method prove adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-10-12
In order to improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derived the general formula for an arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, a compensation filter was added: the compensated CIC filter's passband is flatter, its transition band steeper, and its stopband attenuation higher. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a 125 MHz system clock, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
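For reference, the structure of a plain N-stage CIC interpolator (comb stages at the low input rate, zero-stuffing by R, integrators at the high output rate) can be sketched as below. This is the textbook serial form, not the paper's parallel FPGA decomposition:

```python
import numpy as np

def cic_interp(x, R, N):
    """Reference N-stage CIC interpolator: N comb (differentiator) stages
    at the low input rate, zero-stuffing by R, then N integrator stages
    at the high output rate."""
    x = np.asarray(x, float)
    for _ in range(N):                          # comb stages: y[n] = x[n] - x[n-1]
        x = np.concatenate(([x[0]], np.diff(x)))
    up = np.zeros(len(x) * R)                   # zero-stuff to the output rate
    up[::R] = x
    for _ in range(N):                          # integrator stages: running sums
        up = np.cumsum(up)
    return up

y = cic_interp(np.ones(20), R=4, N=2)           # unit-step response
```

With N = 1 the cascade degenerates to a sample-and-hold; for a unit step the output settles to R^(N-1), which is the gain a compensation stage (like the one the paper adds) must account for along with the droopy passband.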
High-accuracy determination of the neutron flux at n_TOF
Barbagallo, M.; Colonna, N.; Mastromarco, M.; Meaze, M.; Tagliente, G.; Variale, V. [Sezione di Bari, INFN, Bari (Italy); Guerrero, C.; Andriamonje, S.; Boccone, V.; Brugger, M.; Calviani, M.; Cerutti, F.; Chin, M.; Ferrari, A.; Kadi, Y.; Losito, R.; Versaci, R.; Vlachoudis, V. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Tsinganis, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); National Technical University of Athens (NTUA), Athens (Greece); Tarrio, D.; Duran, I.; Leal-Cidoncha, E.; Paradela, C. [Universidade de Santiago de Compostela, Santiago (Spain); Altstadt, S.; Goebel, K.; Langer, C.; Reifarth, R.; Schmidt, S.; Weigand, M. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (Germany); Andrzejewski, J.; Marganiec, J.; Perkowski, J. [Uniwersytet Lodzki, Lodz (Poland); Audouin, L.; Leong, L.S.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Becares, V.; Cano-Ott, D.; Garcia, A.R.; Gonzalez-Romero, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Becvar, F.; Krticka, M.; Kroll, J.; Valenta, S. [Charles University, Prague (Czech Republic); Belloni, F.; Fraval, K.; Gunsing, F.; Lampoudis, C.; Papaevangelou, T. [Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Berthoumieux, E.; Chiaveri, E. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Commissariata l' Energie Atomique (CEA) Saclay - Irfu, Gif-sur-Yvette (France); Billowes, J.; Ware, T.; Wright, T. [University of Manchester, Manchester (United Kingdom); Bosnar, D.; Zugec, P. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Calvino, F.; Cortes, G.; Gomez-Hornillos, M.B.; Riego, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Carrapico, C.; Goncalves, I.F.; Sarmento, R.; Vaz, P. 
[Universidade Tecnica de Lisboa, Instituto Tecnologico e Nuclear, Instituto Superior Tecnico, Lisboa (Portugal); Cortes-Giraldo, M.A.; Praena, J.; Quesada, J.M.; Sabate-Gilarte, M. [Universidad de Sevilla, Sevilla (Spain); Diakaki, M.; Karadimos, D.; Kokkoris, M.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Domingo-Pardo, C.; Giubrone, G.; Tain, J.L. [CSIC-Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Kivel, N.; Schumann, D.; Steinegger, P. [Paul Scherrer Institut, Villigen PSI (Switzerland); Dzysiuk, N.; Mastinu, P.F. [Laboratori Nazionali di Legnaro, INFN, Rome (Italy); Eleftheriadis, C.; Manousos, A. [Aristotle University of Thessaloniki, Thessaloniki (Greece); Ganesan, S.; Gurusamy, P.; Saxena, A. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Griesmayer, E.; Jericha, E.; Leeb, H. [Technische Universitaet Wien, Atominstitut, Wien (AT); Hernandez-Prieto, A. [European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Jenkins, D.G.; Vermeulen, M.J. [University of York, Heslington, York (GB); Kaeppeler, F. [Institut fuer Kernphysik, Karlsruhe Institute of Technology, Campus Nord, Karlsruhe (DE); Koehler, P. [Oak Ridge National Laboratory (ORNL), Oak Ridge (US); Lederer, C. [Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE); University of Vienna, Faculty of Physics, Vienna (AT); Massimi, C.; Mingrone, F.; Vannini, G. [Universita di Bologna (IT); INFN, Sezione di Bologna, Dipartimento di Fisica, Bologna (IT); Mengoni, A.; Ventura, A. [Agenzia nazionale per le nuove tecnologie, l' energia e lo sviluppo economico sostenibile (ENEA), Bologna (IT); Milazzo, P.M. [Sezione di Trieste, INFN, Trieste (IT); Mirea, M. [Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Mondalaers, W.; Plompen, A.; Schillebeeckx, P. 
[Institute for Reference Materials and Measurements, European Commission JRC, Geel (BE); Pavlik, A.; Wallner, A. [University of Vienna, Faculty of Physics, Vienna (AT); Rauscher, T. [University of Basel, Department of Physics and Astronomy, Basel (CH); Roman, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Horia Hulubei National Institute of Physics and Nuclear Engineering - IFIN HH, Bucharest - Magurele (RO); Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Laboratori Nazionali del Gran Sasso dell' INFN, Assergi (AQ) (IT); Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (CH); Johann-Wolfgang-Goethe Universitaet, Frankfurt (DE)
2013-12-15
The neutron flux of the n_TOF facility at CERN was measured, after installation of the new spallation target, with four different systems based on three neutron-converting reactions, which represent accepted cross-section standards in different energy regions. A careful comparison and combination of the different measurements allowed us to reach an unprecedented accuracy on the energy dependence of the neutron flux in the very wide range (thermal to 1 GeV) that characterizes the n_TOF neutron beam. This is a prerequisite for the high accuracy of cross section measurements at n_TOF. An unexpected anomaly in the neutron-induced fission cross section of 235U is observed in the energy region between 10 and 30 keV, hinting at a possible overestimation of this important cross section, well above currently assigned uncertainties. (orig.)
High Accuracy Attitude Control System Design for Satellite with Flexible Appendages
Wenya Zhou
2014-01-01
In order to realize high-accuracy attitude control of a satellite with flexible appendages, an attitude control system consisting of a controller and a structural filter was designed. When a low-order vibration frequency of the flexible appendages approaches the bandwidth of the attitude control system, the vibration signal enters the control loop through the measurement device and degrades the accuracy or even the stability. In order to reduce this impact, the structural filter is designed to reject the vibration of the flexible appendages. Considering the potential problem of in-orbit frequency variation of the flexible appendages, a design method for an adaptive notch filter is proposed based on in-orbit identification technology. Finally, simulation results are given to demonstrate the feasibility and effectiveness of the proposed design techniques.
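The structural-filter idea can be illustrated with a second-order IIR notch placed at the flexible-mode frequency: zeros on the unit circle kill the mode, poles just inside set the notch width. The mode frequency, sample rate, and pole radius below are illustrative assumptions, not the paper's identified values:

```python
import numpy as np

def notch_coeffs(f0, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at the flexible-mode
    frequency f0, poles at radius r just inside, which sets the notch width."""
    w0 = 2 * np.pi * f0 / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
    return b, a

def iir_filter(b, a, x):
    """Direct-form I difference equation."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

fs = 1000.0
t = np.arange(0.0, 2.0, 1 / fs)
mode = np.sin(2 * np.pi * 50.0 * t)      # assumed 50 Hz flexible-mode vibration
command = np.sin(2 * np.pi * 2.0 * t)    # low-frequency attitude signal
b, a = notch_coeffs(50.0, fs)
out = iir_filter(b, a, mode + command)
```

An adaptive version, as in the abstract, would re-run `notch_coeffs` with an f0 updated from in-orbit identification; the low-frequency attitude signal passes nearly unchanged while the mode is suppressed.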
High-accuracy defect sizing for CRDM penetration adapters using the ultrasonic TOFD technique
Atkinson, I.
1995-01-01
Ultrasonic time-of-flight diffraction (TOFD) is the preferred technique for critical sizing of through-wall oriented defects in a wide range of components, primarily because it is intrinsically more accurate than amplitude-based techniques. For the same reason, TOFD is the preferred technique for sizing the cracks in control rod drive mechanism (CRDM) penetration adapters, which have been the subject of much recent attention. Once the considerable problem of restricted access for the UT probes has been overcome, this inspection lends itself to very high accuracy defect sizing using TOFD. In qualification trials under industrial conditions, depth sizing to an accuracy of ≤ 0.5 mm has been routinely achieved throughout the full wall thickness (16 mm) of the penetration adapters, using only a single probe pair and without recourse to signal processing. (author)
High accuracy of family history of melanoma in Danish melanoma cases
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-01-01
The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high-risk country for melanoma. In the last two decades multiple public campaigns have sought to increase awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma...
Frey, Bradley J.; Leviton, Douglas B.
2005-01-01
The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.
Numerical Solution of Hamilton-Jacobi Equations in High Dimension
2012-11-23
Falcone, Maurizio (Dipartimento di Matematica, SAPIENZA - Università di Roma, P. Aldo Moro 2, 00185 Roma); AFOSR contract no. FA9550-10-1-0029.
Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji
2009-01-01
The authors develop a numerical code based on the local discontinuous Galerkin method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest possible stage. The local discontinuous Galerkin method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed numerical code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The results of the simulations show that the new code gives highly accurate numerical solutions. (author)
High-Accuracy Elevation Data at Large Scales from Airborne Single-Pass SAR Interferometry
Guy Jean-Pierre Schumann
2016-01-01
Digital elevation models (DEMs) are essential data sets for disaster risk management and humanitarian relief services as well as many environmental process models. At present, on the one hand, globally available DEMs only meet basic requirements, and for many services and modeling studies they are not of high enough spatial resolution and lack vertical accuracy. On the other hand, LiDAR DEMs offer very high spatial resolution and great vertical accuracy, but acquisition operations can be very costly for spatial scales larger than a couple of hundred square km, and they also have severe limitations in wetland areas and under cloudy and rainy conditions. The ideal situation would thus be a DEM technology that allows larger spatial coverage than LiDAR without compromising resolution and vertical accuracy, while still performing under some adverse weather conditions and at a reasonable cost. In this paper, we present a novel single-pass InSAR technology for airborne vehicles that is cost-effective and can generate DEMs with a vertical error of around 0.3 m at an average spatial resolution of 3 m. To demonstrate this capability, we compare a sample single-pass InSAR Ka-band DEM of the California Central Valley from the NASA/JPL airborne GLISTIN-A to a high-resolution LiDAR DEM. We also perform a simple sensitivity analysis of floodplain inundation. Based on the findings of our analysis, we argue that this type of technology can and should be used to replace large regions of globally available lower resolution DEMs, particularly in coastal, delta and floodplain areas where a high number of assets, habitats and lives are at risk from natural disasters. We conclude with a discussion of requirements, advantages and caveats in terms of instrument and data processing.
Jeong, Chang-Joon; Okumura, Keisuke; Ishiguro, Yukio; Tanaka, Ken-ichi
1990-01-01
Validation tests were made for the accuracy of cell calculation methods used in analyses of tight lattices of a mixed-oxide (MOX) fuel core in a high conversion light water reactor (HCLWR). A series of cell calculations was carried out for the lattices referred from an international HCLWR benchmark comparison, with emphasis placed on the resonance calculation methods: the NR and IR approximations, and the collision probability method with ultra-fine energy groups. Verification was also performed for the geometrical modelling (a hexagonal/cylindrical cell) and the boundary condition (mirror/white reflection). In the calculations, important reactor physics parameters, such as the neutron multiplication factor, the conversion ratio and the void coefficient, were evaluated using the above methods for various HCLWR lattices with different moderator-to-fuel volume ratios, fuel materials and fissile plutonium enrichments. The calculated results were compared with each other, and the accuracy and applicability of each method were clarified by comparison with continuous energy Monte Carlo calculations. It was verified that the accuracy of the IR approximation became worse when the neutron spectrum became harder. It was also concluded that the cylindrical cell model with the white boundary condition was not as suitable for MOX fuelled lattices as for UO2 fuelled lattices. (author)
Accuracy of High-Resolution Ultrasonography in the Detection of Extensor Tendon Lacerations.
Dezfuli, Bobby; Taljanovic, Mihra S; Melville, David M; Krupinski, Elizabeth A; Sheppard, Joseph E
2016-02-01
Lacerations of the extensor mechanism are usually diagnosed clinically. Ultrasound (US) has been a growing diagnostic tool for tendon injuries since the 1990s. To date, no publication has established the accuracy and reliability of US in the evaluation of extensor mechanism lacerations in the hand. The purpose of this study is to determine the accuracy of US in detecting extensor tendon injuries in the hand. Sixteen fingers and 4 thumbs in 4 fresh-frozen and thawed cadaveric hands were used. Sixty-eight 0.5-cm transverse skin lacerations were created. Twenty-seven extensor tendons were sharply transected. The remaining skin lacerations were used as sham dissection controls. One US technologist and one fellowship-trained musculoskeletal radiologist performed real-time dynamic US studies in and out of a water bath. A second fellowship-trained musculoskeletal radiologist subsequently reviewed the static US images. Dynamic and static US interpretation accuracy was assessed using dissection as "truth." All 27 extensor tendon lacerations and all controls were identified correctly with dynamic imaging as either injury models with a transected extensor tendon or sham controls with intact extensor tendons (sensitivity = 100%, specificity = 100%, positive predictive value = 1.0; all significantly greater than chance). Static imaging had a sensitivity of 85%, specificity of 89%, and accuracy of 88% (all significantly greater than chance). The results of dynamic real-time versus static US imaging were clearly different but did not reach statistical significance. Diagnostic US is a very accurate noninvasive study that can identify extensor mechanism injuries. In clinically suspected cases of acute extensor tendon injury, scanning by high-frequency US can aid and/or confirm the diagnosis, with dynamic imaging providing added value compared to static imaging. Ultrasonography, to aid in the diagnosis of extensor mechanism lacerations, can be successfully used in a reliable and
Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth
Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus
2013-03-01
Silicone based impression-taking of prepared teeth followed by plaster casting is well-established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single element transducer were separated, and corresponding 3D surface models were calculated for both the fundamental and the 2nd harmonic. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound based tooth digitization can be an alternative to optical impression-taking.
Kaus, M; Steinmeier, R; Sporer, T; Ganslandt, O; Fahlbusch, R
1997-12-01
This study was designed to determine and evaluate the different system-inherent sources of erroneous target localization of a light-emitting diode (LED)-based neuronavigation system (StealthStation, Stealth Technologies, Boulder, CO). The localization accuracy was estimated by applying a high-precision mechanical micromanipulator to move and exactly locate (+/- 0.1 micron) the pointer at multiple positions in the physical three-dimensional space. The localization error was evaluated by calculating the spatial distance between the (known) LED positions and the LED coordinates measured by the neuronavigator. The results are based on a study of approximately 280,000 independent coordinate measurements. The maximum localization error detected was 0.55 +/- 0.29 mm, with the z direction (distance to the camera array) being the most erroneous coordinate. Minimum localization error was found at a distance of 1400 mm from the central camera (optimal measurement position). Additional error due to 1) mechanical vibrations of the camera tripod (+/- 0.15 mm) and the reference frame (+/- 0.08 mm) and 2) extrapolation of the pointer tip position from the LED coordinates of at least +/- 0.12 mm were detected, leading to a total technical error of 0.55 +/- 0.64 mm. Based on this technical accuracy analysis, a set of handling recommendations is proposed, leading to an improved localization accuracy. The localization error could be reduced by 0.3 +/- 0.15 mm by correct camera positioning (1400 mm distance) plus 0.15 mm by vibration-eliminating fixation of the camera. Correct handling of the probe during the operation may improve the accuracy by up to 0.1 mm.
Climate change and high-resolution whole-building numerical modelling
Blocken, B.J.E.; Briggen, P.M.; Schellen, H.L.; Hensen, J.L.M.
2010-01-01
This paper briefly discusses the need for high-resolution whole-building numerical modelling in the context of climate change. High-resolution whole-building numerical modelling can be used for detailed analysis of the potential consequences of climate change on buildings and to evaluate remedial
An angle encoder for super-high resolution and super-high accuracy using SelfA
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-06-01
Angular measurement technology at high resolution for applications such as hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error, and it had been assumed impossible to achieve accuracy below 0.1″ in angular measurement or control after installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al 1998 Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for rotary encoders have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and highly accurate angular verification. We apply these technologies to the development of a rotary encoder that meets the requirement not only of super-high accuracy but also of super-high resolution. This paper presents the development of an encoder with 2^21 = 2 097 152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by the National Institute of Advanced Industrial Science and Technology (AIST). In addition, this paper introduces the development of a rotary encoder that guarantees ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period.
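As an illustrative cross-check of the signal period quoted above (a sketch; the function name is ours, not the paper's):

```python
# Illustrative arithmetic only: the signal period implied by an N-bit
# rotary encoder, expressed in arcseconds.
def encoder_period_arcsec(bits: int) -> float:
    # one full rotation = 360 deg = 1 296 000 arcsec, split into 2**bits counts
    return 360.0 * 3600.0 / (2 ** bits)

p21 = encoder_period_arcsec(21)   # ~0.62 arcsec, matching the 2^21-count encoder
p33 = encoder_period_arcsec(33)   # the much finer period at 2^33 counts
```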
Ultra-high accuracy optical testing: creating diffraction-limited short-wavelength optical systems
Goldberg, Kenneth A.; Naulleau, Patrick P.; Rekawa, Senajith B.; Denham, Paul E.; Liddle, J. Alexander; Gullikson, Eric M.; Jackson, KeithH.; Anderson, Erik H.; Taylor, John S.; Sommargren, Gary E.; Chapman,Henry N.; Phillion, Donald W.; Johnson, Michael; Barty, Anton; Soufli,Regina; Spiller, Eberhard A.; Walton, Christopher C.; Bajt, Sasa
2005-08-03
Since 1993, research in the fabrication of extreme ultraviolet (EUV) optical imaging systems, conducted at Lawrence Berkeley National Laboratory (LBNL) and Lawrence Livermore National Laboratory (LLNL), has produced the highest resolution optical systems ever made. We have pioneered the development of ultra-high-accuracy optical testing and alignment methods, working at extreme ultraviolet wavelengths, and pushing wavefront-measuring interferometry into the 2-20-nm wavelength range (60-600 eV). These coherent measurement techniques, including lateral shearing interferometry and phase-shifting point-diffraction interferometry (PS/PDI), have achieved RMS wavefront measurement accuracies of 0.5–1 Å and better for primary aberration terms, enabling the creation of diffraction-limited EUV optics. The measurement accuracy is established using careful null-testing procedures, and has been verified repeatedly through high-resolution imaging. We believe these methods are broadly applicable to the advancement of short-wavelength optical systems including space telescopes, microscope objectives, projection lenses, synchrotron beamline optics, diffractive and holographic optics, and more. Measurements have been performed on a tunable undulator beamline at LBNL's Advanced Light Source (ALS), optimized for high coherent flux; although many of these techniques should be adaptable to alternative ultraviolet, EUV, and soft x-ray light sources. To date, we have measured nine prototype all-reflective EUV optical systems with NA values between 0.08 and 0.30 (f/6.25 to f/1.67). These projection-imaging lenses were created for the semiconductor industry's advanced research in EUV photolithography, a technology slated for introduction in 2009-13. This paper reviews the methods used and our program's accomplishments to date.
Sabchevski, S; Zhelyazkov, I; Benova, E; Atanassov, V; Dankov, P; Thumm, M; Arnold, A; Jin, J; Rzesnicki, T
2006-01-01
Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes can provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they have been treated, and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, and their advantages and drawbacks, for solving QO related problems.
Habibi, M.; Oloumi, M.; Hosseinkhani, H.; Magidi, S. [Plasma and Fusion Research School, Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of)
2015-10-15
A highly nonlinear parabolic partial differential equation that models the electron heat transfer process in laser inertial fusion has been solved numerically. The strong temperature dependence of the electron thermal conductivity and heat loss term (Bremsstrahlung emission) makes this a highly nonlinear process. In this case, an efficient numerical method is developed for the energy transport mechanism from the region of energy deposition into the ablation surface by a combination of the Crank-Nicolson scheme and the Newton-Raphson method. The quantitative behavior of the electron temperature and the comparison between analytic and numerical solutions are also investigated. For more clarification, the accuracy and conservation of energy in the computations are tested. The numerical results can be used to evaluate the nonlinear electron heat conduction, considering the released energy of the laser pulse at the Deuterium-Tritium (DT) targets and preheating by heat conduction ahead of a compression shock in the inertial confinement fusion (ICF) approach. (copyright 2015 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
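The Crank-Nicolson/Newton-Raphson combination described above can be sketched on a model problem. This is a minimal illustration, not the authors' code: it assumes a 1-D nonlinear heat equation u_t = (k(u) u_x)_x with a Spitzer-like conductivity k(u) = u^2.5 and fixed boundary values, and uses a dense finite-difference Jacobian for the Newton step.

```python
import numpy as np

def residual(u_new, u_old, dt, dx):
    """Crank-Nicolson residual for u_t = (k(u) u_x)_x with k(u) = u**2.5."""
    def flux_div(u):
        k = u ** 2.5
        kf = 0.5 * (k[1:] + k[:-1])       # conductivity averaged to cell faces
        q = kf * np.diff(u) / dx          # heat flux at faces
        div = np.zeros_like(u)
        div[1:-1] = np.diff(q) / dx       # interior divergence; ends held fixed
        return div
    return u_new - u_old - 0.5 * dt * (flux_div(u_new) + flux_div(u_old))

def cn_newton_step(u_old, dt, dx, tol=1e-10, max_iter=20):
    """Advance one implicit time step, solving the nonlinear system by Newton."""
    u = u_old.copy()
    for _ in range(max_iter):
        r = residual(u, u_old, dt, dx)
        n = u.size
        J = np.zeros((n, n))
        eps = 1e-7
        for j in range(n):                # finite-difference Jacobian (demo-sized)
            up = u.copy()
            up[j] += eps
            J[:, j] = (residual(up, u_old, dt, dx) - r) / eps
        du = np.linalg.solve(J, -r)
        u += du
        if np.max(np.abs(du)) < tol:
            break
    return u
```

For a small grid with a hot spot in the middle, each step smooths the temperature peak while the fixed boundary values are preserved, which is the qualitative behavior the abstract describes for the electron heat-transfer problem.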
The use of high accuracy NAA for the certification of NIST botanical standard reference materials
Becker, D.A.; Greenberg, R.R.; Stone, S.F.
1992-01-01
Neutron activation analysis is one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). NAA competes favorably with all other techniques because of its unique capability for high accuracy, even at very low concentrations, for many elements. In this paper, instrumental and radiochemical NAA results are described for 25 elements in two new NIST SRMs, SRM 1515 (Apple Leaves) and SRM 1547 (Peach Leaves), and are compared to the certified values for 19 elements in these two new botanical reference materials. (author)
High-accuracy mass determination of unstable nuclei with a Penning trap mass spectrometer
2002-01-01
The mass of a nucleus is its most fundamental property. A systematic study of nuclear masses as a function of neutron and proton number allows the observation of collective and single-particle effects in nuclear structure. Accurate mass data are the most basic test of nuclear models and are essential for their improvement. This is especially important for the astrophysical study of nucleosynthesis. In order to achieve the required high accuracy, the mass of ions captured in a Penning trap is determined via their cyclotron frequency $\nu_c = qB/(2\pi m)$.
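The Penning-trap principle above rests on the cyclotron relation ν_c = qB/(2πm): measuring ν_c at a known field B yields the mass m. A minimal illustration (the 6 T field is an assumed example value, not taken from the abstract):

```python
import math

# Cyclotron frequency of a charged particle in a magnetic field B:
# nu_c = q * B / (2 * pi * m). Penning-trap mass spectrometry inverts
# this relation to obtain m from a measured nu_c.
def cyclotron_frequency(q: float, B: float, m: float) -> float:
    return q * B / (2.0 * math.pi * m)

Q_E = 1.602176634e-19      # elementary charge, C
M_P = 1.67262192369e-27    # proton mass, kg
f_proton = cyclotron_frequency(Q_E, 6.0, M_P)   # a proton in an assumed 6 T trap
```

For the proton at 6 T this gives roughly 91 MHz; heavier ions such as the radionuclides discussed above have proportionally lower frequencies.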
A new ultra-high-accuracy angle generator: current status and future direction
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations of full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize the propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better, and to offer the immense advantage of a highly automatable and customizable product to the commercial market.
High Accuracy, Miniature Pressure Sensor for Very High Temperatures, Phase I
National Aeronautics and Space Administration — SiWave proposes to develop a compact, low-cost MEMS-based pressure sensor for very high temperatures and low pressures in hypersonic wind tunnels. Most currently...
Junya Lv
2017-01-01
The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play critical roles in process design and optimization. In this investigation, the true stress-strain data of an Inconel 718 superalloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1153–1353 K and strain rate range of 0.01–10 s−1 on a Gleeble 3500 testing machine (DSI, St. Paul, DE, USA). The constitutive relationship was then modeled by an optimally-constructed and well-trained back-propagation artificial neural network (ANN). Evaluation of the ANN model revealed admirable performance in characterizing and predicting the flow behaviors of the Inconel 718 superalloy. Consequently, the developed ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the continuous mapping relationship among temperature, strain rate, strain and stress. Finally, the constructed ANN was implemented in a finite element solver through the interface of the "URPFLO" subroutine to simulate the isothermal compression tests. The results show that integrating the finite element method with the ANN model can significantly improve the accuracy of numerical simulations of hot forming processes.
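As a toy illustration of the back-propagation ANN approach described above (synthetic stand-in data, an assumed one-hidden-layer network, and ad hoc hyperparameters; not the paper's model or training setup):

```python
import numpy as np

# Fit a one-hidden-layer tanh network by plain back-propagation to a
# synthetic stand-in "flow stress" surface sigma(strain, log rate, T).
rng = np.random.default_rng(1)
X = rng.random((200, 3))                                   # normalized inputs
y = (np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] - X[:, 2])[:, None]  # stand-in stress

W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # gradients of the mean-squared error, back-propagated layer by layer
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, such a network provides the kind of continuous input-to-stress mapping that can be queried from a finite element solver at arbitrary temperature, strain-rate and strain values.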
Numerical simulation of SU(2)c high density state
Muroya, Shin; Nakamura, Atsushi; Nonaka, Chiho
2003-01-01
We report a study of the high baryon number density system using two-color lattice QCD with Wilson fermions [1]. First we investigate thermodynamical quantities such as the Polyakov line, gluon energy density, and baryon number density in the (κ, μ) plane, where κ and μ are the hopping parameter and chemical potential, respectively. Then we calculate propagators of meson (q̄Γq) and baryon (qΓq) states in addition to the potential between quark lines. (author)
Numerical description of creep of highly creep resistant alloys
Preussler, T.
1991-01-01
Creep tests have been performed on a series of highly creep resistant materials for gas turbines and related applications to obtain better creep data up to long-term behaviour. The investigations were performed on selected individual materials in the range of the main applications, down to strains and stresses relevant to design, and reached test durations of 25 000 to 60 000 h. Continuing earlier research, creep equations for a selection of characteristic individual materials have been improved and partly newly developed on the basis of a differentiated evaluation. The individual materials are one melt each of IN-738 LC, IN-939, IN-100, FSX-414 and Inconel 617. The differentiated evaluation is based on the elastoplastic behaviour from the hot tensile test, the creep behaviour from uninterrupted and interrupted creep tests, and the contraction behaviour from the annealing test. The creep equations developed describe the high-temperature deformation behaviour, taking into account primary, secondary and partly tertiary creep as functions of temperature, stress and time. These equations are valid for the whole application range of the respective material. (orig.)
Numerical study on wake characteristics of high-speed trains
Yao, Shuan-Bao; Sun, Zhen-Xu; Guo, Di-Long; Chen, Da-Wei; Yang, Guo-Wei
2013-12-01
Intensive turbulence exists in the wakes of high speed trains, and the aerodynamic performance of the trailing car can deteriorate rapidly due to the complicated features of the vortices in the wake zone. As a result, the safety and amenity of high speed trains face a great challenge. This paper considers mainly the mechanism of vortex formation and evolution in the train flow field. A real CRH2 model is studied, with a leading car, a middle car and a trailing car included. Different running speeds and cross wind conditions are considered, and the approaches of unsteady Reynolds-averaged Navier-Stokes (URANS) and detached eddy simulation (DES) are utilized, respectively. Results reveal that DES has a better capability of capturing small eddies than URANS, while for large eddies the two approaches perform almost the same. In conditions without cross winds, two large vortex streets stretch from the train nose and interact strongly with each other in the wake zone. With the reinforcement of the ground, a complicated wake vortex system is generated and becomes strengthened as the running speed increases; however, the locations of flow separation on the train surface and the separation mechanism remain unchanged. In conditions with cross winds, three large vortices develop along the leeward side of the train, among which the weakest one has no obvious influence on the wake flow while the other two stretch to the tail of the train and combine with the helical vortices in the train wake. Thus, optimization of the aerodynamic performance of the trailing car should aim at reducing the intensity of the wake vortex system.
C. G. Nunalee
2015-08-01
Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using three terrain height data sets: the default global 30 s United States Geological Survey data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM) data set, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
A high accuracy algorithm of displacement measurement for a micro-positioning stage
Xiang Zhang
2017-05-01
A high accuracy displacement measurement algorithm for a two-degrees-of-freedom compliant precision micro-positioning stage is proposed based on the computer micro-vision technique. The algorithm consists of an integer-pixel and a subpixel matching procedure. A series of simulations is conducted to verify the proposed method. The results show that the proposed algorithm possesses the advantages of high precision and stability; the resolution can theoretically attain 0.01 pixel. In addition, the computation time is reduced by a factor of about 6.7 compared with the classical normalized cross correlation algorithm. To validate the practical performance of the proposed algorithm, a laser interferometer measurement system (LIMS) is built. The experimental results demonstrate that the algorithm has better adaptability than the LIMS.
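A minimal sketch of the two-stage idea described above: integer-pixel matching by normalized cross-correlation, followed by parabolic sub-pixel refinement of the correlation peak. The details are assumed for illustration; the abstract does not specify the paper's exact sub-pixel procedure.

```python
import numpy as np

def ncc(image, template):
    """Normalized cross-correlation of a template over all integer offsets."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tnorm
            out[i, j] = (wc * t).sum() / denom if denom > 0 else 0.0
    return out

def subpixel_peak(c):
    """Refine the integer argmax of a correlation map by a 1-D parabola per axis."""
    i, j = np.unravel_index(np.argmax(c), c.shape)
    di = dj = 0.0
    if 0 < i < c.shape[0] - 1:
        d = c[i - 1, j] - 2.0 * c[i, j] + c[i + 1, j]
        if d != 0.0:
            di = 0.5 * (c[i - 1, j] - c[i + 1, j]) / d
    if 0 < j < c.shape[1] - 1:
        d = c[i, j - 1] - 2.0 * c[i, j] + c[i, j + 1]
        if d != 0.0:
            dj = 0.5 * (c[i, j - 1] - c[i, j + 1]) / d
    return i + di, j + dj
```

Tracking a stage displacement then amounts to matching a fiducial template between consecutive frames and reading off the sub-pixel peak shift.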
Prediction of novel pre-microRNAs with high accuracy through boosting and SVM.
Zhang, Yuanwei; Yang, Yifan; Zhang, Huan; Jiang, Xiaohua; Xu, Bo; Xue, Yu; Cao, Yunxia; Zhai, Qian; Zhai, Yong; Xu, Mingqing; Cooke, Howard J; Shi, Qinghua
2011-05-15
High-throughput deep-sequencing technology has generated an unprecedented number of expressed short sequence reads, presenting not only an opportunity but also a challenge for the prediction of novel microRNAs. To verify the existence of candidate microRNAs, we have to show that these short sequences can be processed from candidate pre-microRNAs. However, it is laborious and time consuming to verify these using existing experimental techniques. Therefore, here, we describe a new method, miRD, which is constructed using two feature selection strategies based on support vector machines (SVMs) and a boosting method. It is a high-efficiency tool for novel pre-microRNA prediction with accuracy up to 94.0% among different species. miRD is implemented in PHP/PERL+MySQL+R and can be freely accessed at http://mcg.ustc.edu.cn/rpg/mird/mird.php.
High Accuracy mass Measurement of the very Short-Lived Halo Nuclide $^{11}$Li
Le scornet, G
2002-01-01
The archetypal halo nuclide $^{11}$Li has now attracted a wealth of experimental and theoretical attention. The most outstanding property of this nuclide, its extended radius that makes it as big as $^{48}$Ca, is highly dependent on the binding energy of the two neutrons forming the halo. New-generation experiments using radioactive beams with elastic proton scattering, knock-out and transfer reactions, together with ab initio calculations, require tightening the constraint on the binding energy. Good metrology also requires confirmation of the sole existing precision result to guard against a possible systematic deviation (or mistake). We propose a high accuracy mass determination of $^{11}$Li, a particularly challenging task due to its very short half-life of 8.6 ms, but one perfectly suited to the MISTRAL spectrometer, now commissioned at ISOLDE. We request 15 shifts of beam time.
Treatment accuracy of hypofractionated spine and other highly conformal IMRT treatments
Sutherland, B.; Hanlon, P.; Charles, P.
2011-01-01
Spinal cord metastases pose difficult challenges for radiation treatment due to tight dose constraints and a concave PTV. This project aimed to thoroughly test the treatment accuracy of the Eclipse Treatment Planning System (TPS) for highly modulated IMRT treatments, in particular of the thoracic spine, using an Elekta Synergy Linear Accelerator. The increased understanding obtained through different quality assurance techniques allowed recommendations to be made for treatment site commissioning with improved accuracy at the Princess Alexandra Hospital (PAH). Three thoracic spine IMRT plans at the PAH were used for data collection. Complex phantom models were built using CT data, and fields simulated using Monte Carlo modelling. The simulated dose distributions were compared with the TPS using gamma analysis and DVH comparison. High resolution QA was done for all fields using the MatriXX ion chamber array, the MapCHECK2 diode array (shifted), and the EPID to determine a procedure for commissioning new treatment sites. Basic spine simulations found the TPS overestimated absorbed dose to bone; however, within the spinal cord there was good agreement. High resolution QA found the average gamma pass rate of the fields to be 99.1% for MatriXX, 96.5% for MapCHECK2 (shifted) and 97.7% for EPID. Preliminary results indicate agreement between the TPS and delivered dose distributions higher than previously believed for the investigated IMRT plans. The poor resolution of the MatriXX and normalisation issues with MapCHECK2 lead to a probable recommendation of the EPID for future IMRT commissioning, due to its high resolution and the minimal setup required.
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Accessing or acquiring high quality, low-cost topographic data has never been easier, due to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy, or covering large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate the models' performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modelled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, comprising both high-resolution time series and long-term temporal coverage, significantly improved calibration routines and refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical
High Accuracy, High Energy He-ERD Analysis of H, D, and T
Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.
1999-01-01
A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems, extending to depths of several μm, is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV ⁴He²⁺ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of ⁴He²⁺ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for a laboratory recoil angle of 30°.
PACMAN Project: A New Solution for the High-accuracy Alignment of Accelerator Components
Mainaud Durand, Helene; Buzio, Marco; Caiazza, Domenico; Catalán Lasheras, Nuria; Cherif, Ahmed; Doytchinov, Iordan; Fuchs, Jean-Frederic; Gaddi, Andrea; Galindo Munoz, Natalia; Gayde, Jean-Christophe; Kamugasa, Solomon; Modena, Michele; Novotny, Peter; Russenschuck, Stephan; Sanz, Claude; Severino, Giordana; Tshilumba, David; Vlachakis, Vasileios; Wendt, Manfred; Zorzetti, Silvia
2016-01-01
The beam alignment requirements for the next generation of lepton colliders have become increasingly challenging. As an example, the alignment requirements for the three major collider components of the CLIC linear collider are as follows. Before the first beam circulates, the Beam Position Monitors (BPM), Accelerating Structures (AS) and quadrupoles will have to be aligned to within 10 μm of a straight line over 200 m long segments, along the 20 km of linacs. PACMAN is a study on Particle Accelerator Components' Metrology and Alignment to the Nanometre scale. It is an Innovative Doctoral Program, funded by the EU and hosted by CERN, providing high quality training to 10 Early Stage Researchers working towards a PhD thesis. The technical aim of the project is to improve the alignment accuracy of the CLIC components by developing new methods and tools that address several steps of alignment simultaneously, to gain time and accuracy. The tools and methods developed will be validated on a test bench. This paper pr...
High Accuracy Mass Measurement of the Dripline Nuclides $^{12,14}$Be
2002-01-01
State-of-the-art three-body nuclear models that describe halo nuclides require the binding energy of the halo neutron(s) as a critical input parameter. In the case of $^{14}$Be, the uncertainty of this quantity is currently far too large (130 keV), inhibiting efforts at detailed theoretical description. A high accuracy, direct mass determination of $^{14}$Be (as well as of $^{12}$Be, to obtain the two-neutron separation energy) is therefore required. The measurement can be performed with the MISTRAL spectrometer, which is presently the only possible solution given the required accuracy (10 keV) and short half-life (4.5 ms). Having achieved a 5 keV uncertainty for the mass of $^{11}$Li (8.6 ms), MISTRAL has proved the feasibility of such measurements. Since the current ISOLDE production rate of $^{14}$Be is only about 10/s, the installation of a beam cooler is underway in order to improve MISTRAL transmission. The projected improvement of an order of magnitude (in each transverse direction) will make this measureme...
Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains
Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana
2015-01-01
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from the North American Land Data Assimilation System (NLDAS). These data provide large-scale spatial representation of ETref, which is essential for regional scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass, ETos) or tall (alfalfa, ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to the NLDAS modeled air temperature and wind speed data, or an adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.
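The abstract suggests a bias correction without specifying its form. A minimal additive mean-bias adjustment, one common choice, could look like the following sketch; the variable names and data are illustrative only:

```python
import numpy as np

def mean_bias_correction(modeled, observed):
    """Shift the modeled series so its mean matches the observations.
    This additive form is only one simple option; the study merely notes
    that some bias correction may be needed."""
    return modeled - (np.mean(modeled) - np.mean(observed))

# Illustrative daily ETos values (mm/day): the model runs ~1.4 mm/day high,
# mimicking the overestimation reported for the NOAA maps
observed = np.array([4.0, 5.2, 6.1, 5.5])
modeled = observed + 1.4
corrected = mean_bias_correction(modeled, observed)
print(corrected.mean() - observed.mean())  # residual mean bias is ~0
```

In practice such a correction would be fitted on a calibration period (e.g. against the TXHPET stations) and then applied to the gridded product, and could equally be applied to the NLDAS temperature and wind inputs rather than to ETref itself.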
High Accuracy Beam Current Monitor System for CEBAF's Experimental Hall A
J. Denard; A. Saha; G. Lavessiere
2001-01-01
The CEBAF accelerator delivers continuous wave (CW) electron beams to three experimental halls. In Hall A, all experiments require continuous, non-invasive current measurements, and a few experiments require an absolute accuracy of 0.2% in the current range from 1 to 180 μA. A Parametric Current Transformer (PCT), manufactured by Bergoz, has an accurate and stable sensitivity of 4 μA/V, but its offset drifts at the μA level over time, precluding its direct use for continuous measurements. Two cavity monitors are calibrated against the PCT with at least 50 μA of beam current. The calibration procedure suppresses the error due to the PCT's offset drifts by turning the beam on and off, which is invasive to the experiment. One of the goals of the system is to minimize the calibration time without compromising the measurement's accuracy. The linearity of the cavity monitors is a critical parameter for transferring the accurate calibration done at high currents over the whole dynamic range. A method for accurately measuring this linearity is described.
Eisenberger, Ute; Wüthrich, Rudolf P; Bock, Andreas; Ambühl, Patrice; Steiger, Jürg; Intondi, Allison; Kuranoff, Susan; Maier, Thomas; Green, Damian; DiCarlo, Lorenzo; Feutren, Gilles; De Geest, Sabina
2013-08-15
This open-label, single-arm exploratory study evaluated the accuracy of the Ingestible Sensor System (ISS), a novel technology for directly assessing the ingestion of oral medications and treatment adherence. ISS consists of an ingestible event marker (IEM), a microsensor that becomes activated in gastric fluid, and an adhesive personal monitor (APM) that detects IEM activation. In this study, the IEM was combined with enteric-coated mycophenolate sodium (ECMPS). Twenty stable adult kidney transplant recipients received IEM-ECMPS for a mean of 9.2 weeks, totaling 1227 cumulative days. Eight patients prematurely discontinued treatment due to ECMPS gastrointestinal symptoms (n=2), skin intolerance to the APM (n=2), and insufficient system usability (n=4). Rash or erythema due to the APM was reported in 7 (37%) patients, all during the first month of use. No serious or severe adverse events and no rejection episodes were reported. IEM detection accuracy was 100% over 34 directly observed ingestions; Taking Adherence was 99.4% over a total of 2824 prescribed IEM-ECMPS ingestions. ISS could accurately detect the ingestion of two IEM-ECMPS capsules taken at the same time (detection rate of 99.3%, n=2376). ISS is a promising new technology that provides highly reliable measurements of the intake, and timing of intake, of drugs that are combined with the IEM.
Wolfgang Peter Fendler
Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. ¹²³I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels and blood levels of neuron-specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combining the TLCRR and NSE criteria reduced false negative findings from 11 and 9, respectively, to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Strong ¹²³I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH.
Enhancing the Accuracy of Advanced High Temperature Mechanical Testing through Thermography
Jonathan Jones
2018-03-01
This paper describes the advantages and enhanced accuracy that thermography provides to high temperature mechanical testing. The technique is used not only to monitor but also to control test specimen temperatures, where the infra-red technique enables accurate, non-invasive control of rapid thermal cycling for non-metallic materials. Isothermal and dynamic waveforms are applied over a 200–800 °C temperature range to pre-oxidised and coated specimens to assess the capability of the technique. This application shows thermography to be accurate to within ±2 °C of thermocouples, a standardised measurement technique. This work demonstrates the superior visibility of test temperatures, previously unobtainable with conventional thermocouples or even more modern pyrometers, that thermography can deliver. As a result, the speed and accuracy of thermal profiling, thermal gradient measurement and cold/hot spot identification using the technique have increased significantly, to the point where temperature can now be controlled by averaging over a specified area. The increased visibility of specimen temperatures has revealed additional previously unknown effects, such as thermocouple shadowing, preferential crack tip heating within an induction coil, and the fundamental response times of individual measurement techniques, which are investigated further.
An output amplitude configurable wideband automatic gain control with high gain step accuracy
He Xiaofeng; Ye Tianchun; Mo Taishan; Ma Chengyan
2012-01-01
An output-amplitude-configurable wideband automatic gain control (AGC) with high gain step accuracy for a GNSS receiver is presented. The amplitude of the AGC is configurable in order to cooperate with baseband chips to achieve interference suppression and to be compatible with different full-range ADCs. Moreover, gain-boosting technology is introduced and the circuit is improved to increase the step accuracy. A zero, formed by the source feedback resistance and the source capacitance, is introduced to compensate for the pole. The AGC is fabricated in a 0.18 μm CMOS process. It shows a 62 dB gain control range in 1 dB steps with a gain error of less than 0.2 dB. The AGC provides a 3 dB bandwidth larger than 80 MHz, the overall current consumption is less than 1.8 mA, and the die area is 800 × 300 μm².
High accuracy of family history of melanoma in Danish melanoma cases.
Wadt, Karin A W; Drzewiecki, Krzysztof T; Gerdes, Anne-Marie
2015-12-01
The incidence of melanoma in Denmark has increased immensely over the last 10 years, making Denmark a high risk country for melanoma. In the last two decades multiple public campaigns have sought to increase awareness of melanoma. Family history of melanoma is a known major risk factor, but previous studies have shown that self-reported family history of melanoma is highly inaccurate. These studies are 15 years old, and we wanted to examine whether a higher awareness of melanoma has increased the accuracy of self-reported family history of melanoma. We examined the family history of 181 melanoma probands who reported 199 cases of melanoma in relatives, of which 135 cases were in first degree relatives. We confirmed the diagnosis of melanoma in 77% of all relatives, and in 83% of first degree relatives. In the 181 probands we validated the negative family history of melanoma in 748 first degree relatives and found only one case of melanoma that was not reported, in a family with three melanoma cases. Melanoma patients in Denmark report family history of melanoma in first and second degree relatives with a high level of accuracy, with a true positive predictive value between 77 and 87%. In 99% of probands reporting a negative family history of melanoma in first degree relatives this information is correct. In clinical practice we recommend that melanoma diagnoses in relatives should be verified if possible, but even unverified reported melanoma cases in relatives should be included in the indication for genetic testing and the assessment of melanoma risk in the family.
High accuracy Primary Reference gas Mixtures for high-impact greenhouse gases
Nieuwenkamp, Gerard; Zalewska, Ewelina; Pearce-Hill, Ruth; Brewer, Paul; Resner, Kate; Mace, Tatiana; Tarhan, Tanil; Zellweger, Christophe; Mohn, Joachim
2017-04-01
Climate change, due to increased man-made emissions of greenhouse gases, poses one of the greatest risks to society worldwide. High-impact greenhouse gases (CO2, CH4 and N2O) and indirect drivers of global warming (e.g. CO) are measured by the global monitoring stations for greenhouse gases, operated and organized by the World Meteorological Organization (WMO). Reference gases for the calibration of analyzers have to meet a very challenging low level of measurement uncertainty to comply with the Data Quality Objectives (DQOs) set by the WMO. Within the framework of the European Metrology Research Programme (EMRP), a project to improve the metrology for high-impact greenhouse gases was granted (HIGHGAS, June 2014-May 2017). As a result of the HIGHGAS project, primary reference gas mixtures in cylinders for ambient levels of CO2, CH4, N2O and CO in air have been prepared with unprecedentedly low uncertainties, typically 3-10 times lower than previously achieved by the national metrology institutes (NMIs). To accomplish these low uncertainties in the reference standards, a number of preparation and analysis steps have been studied and improved. The purity analysis of the parent gases had to be performed with lower detection limits than previously achievable. For example, to achieve an uncertainty of 2·10⁻⁹ mol/mol (absolute) on the amount fraction for N2O, the detection limit for the N2O analysis in the parent gases has to be in the sub-nmol/mol domain. Results of an OPO-CRDS analyzer set-up in the 5 µm wavelength domain, with a 200·10⁻¹² mol/mol detection limit for N2O, will be presented. The adsorption effects of greenhouse gas components at cylinder surfaces are critical, and have been studied for different cylinder passivation techniques. Results of a two-year stability study will be presented. The fitness-for-purpose of the reference materials was studied with respect to possible variation in isotopic composition between the reference material and the sample. Measurement results for a suite of CO2 in air
Chourushi, T.
2017-01-01
Viscoelastic fluids, due to their non-linear nature, play an important role in process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though they look simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are still considered far from accurate, as first-order upwin...
Accuracy optimization of high-speed AFM measurements using Design of Experiments
Tosello, Guido; Marinello, F.; Hansen, Hans Nørgaard
2010-01-01
Atomic Force Microscopy (AFM) is being increasingly employed in industrial micro/nano manufacturing applications and integrated into production lines. In order to achieve reliable process and product control at high measuring speed, instrument optimization is needed. Quantitative AFM measurement results are influenced by a number of scan settings parameters defining topography sampling and measurement time: resolution (number of profiles and points per profile), scan range and direction, scanning force and speed. Such parameters influence lateral and vertical accuracy and, eventually, the estimated dimensions of measured features. The definition of scan settings is based on a comprehensive optimization that targets maximization of information from collected data and minimization of measurement uncertainty and scan time. The Design of Experiments (DOE) technique is proposed and applied...
Recent high-accuracy measurements of the 1S0 neutron-neutron scattering length
Howell, C.R.; Chen, Q.; Gonzalez Trotter, D.E.; Salinas, F.; Crowell, A.S.; Roper, C.D.; Tornow, W.; Walter, R.L.; Carman, T.S.; Hussein, A.; Gibbs, W.R.; Gibson, B.F.; Morris, C.; Obst, A.; Sterbenz, S.; Whitton, M.; Mertens, G.; Moore, C.F.; Whiteley, C.R.; Pasyuk, E.; Slaus, I.; Tang, H.; Zhou, Z.; Gloeckle, W.; Witala, H.
2000-01-01
This paper reports two recent high-accuracy determinations of the ¹S₀ neutron-neutron scattering length, a_nn. One was done at the Los Alamos National Laboratory using the π⁻d capture reaction to produce two neutrons with low relative momentum. The neutron-deuteron (nd) breakup reaction was used in the other measurement, which was conducted at the Triangle Universities Nuclear Laboratory. The results from the two determinations were consistent with each other and with previous values obtained using the π⁻d capture reaction. The value obtained from the nd breakup measurements is a_nn = -18.7 ± 0.1 (statistical) ± 0.6 (systematic) fm, and the value from the π⁻d capture experiment is a_nn = -18.50 ± 0.05 ± 0.53 fm. The recommended value is a_nn = -18.5 ± 0.3 fm. (author)
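For illustration, a naive inverse-variance combination of the two quoted results can be computed as below. This simple average treats the measurements as independent and ignores any correlated systematics, so it does not have to reproduce the authors' recommended -18.5 ± 0.3 fm:

```python
import math

def inverse_variance_average(measurements):
    """Combine independent measurements given as (value, stat, syst) tuples,
    adding statistical and systematic errors in quadrature for each one."""
    weights = [1.0 / (stat ** 2 + syst ** 2) for _, stat, syst in measurements]
    mean = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# nd breakup:   -18.7  +/- 0.1  (stat) +/- 0.6  (syst) fm
# pi-d capture: -18.50 +/- 0.05 (stat) +/- 0.53 (syst) fm
mean, err = inverse_variance_average([(-18.7, 0.1, 0.6), (-18.50, 0.05, 0.53)])
print(round(mean, 2), round(err, 2))  # → -18.59 0.4
```

The naive result sits close to the recommended value; the slightly smaller quoted uncertainty of 0.3 fm presumably reflects the authors' own treatment of the error budget.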
High accuracy amplitude and phase measurements based on a double heterodyne architecture
Zhao Danyang; Wang Guangwei; Pan Weimin
2015-01-01
In the digital low-level RF (LLRF) system of a circular particle accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs between LLRF systems. It is generally desirable to design a universally compatible architecture for different IFs with no change to the sampling frequency or algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which achieves the high accuracy required of modern LLRF systems. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibrations. (authors)
Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei
2015-01-01
Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency.
Accuracy and high-speed technique for autoprocessing of Young's fringes
Chen, Wenyi; Tan, Yushan
1991-12-01
In this paper, an accurate, high-speed method for automatic processing of Young's fringes is proposed. A group of 1-D sampled intensity values along three or more different directions is taken from the Young's fringes, and the fringe spacing along each direction is obtained by a 1-D FFT. The two directions with the smaller fringe spacings are selected from all directions, and the accurate fringe spacings along these two directions are obtained using the orthogonal coherent phase detection technique (OCPD). The actual spacing and angle of the Young's fringes can therefore be calculated. The principle of OCPD is introduced in detail, and the accuracy of the method is evaluated theoretically and experimentally.
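The coarse FFT step of such a scheme can be sketched as follows. This reproduces only the 1-D FFT spacing estimate, not the OCPD refinement, and the sample profile is synthetic:

```python
import numpy as np

def fringe_spacing(profile):
    """Estimate the spacing (in samples) of a sinusoidal fringe profile from
    the dominant peak of its 1-D FFT: spacing = N / frequency index."""
    profile = profile - profile.mean()          # remove the DC term
    spectrum = np.abs(np.fft.rfft(profile))
    k = np.argmax(spectrum[1:]) + 1             # skip the zero-frequency bin
    return len(profile) / k

# Example: 512 samples of a fringe pattern with a 32-sample period
x = np.arange(512)
profile = 1.0 + np.cos(2 * np.pi * x / 32.0)
print(fringe_spacing(profile))  # → 32.0
```

The FFT bin spacing limits this coarse estimate to a relative resolution of about 1/N, which is why a phase-detection refinement such as OCPD is needed along the two selected directions.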
Bouchaib Benzehaf
2016-11-01
The present study aims to longitudinally depict the dynamic and interactive development of Complexity, Accuracy, and Fluency (CAF) in multilingual learners' L2 and L3 writing. The data sources include free writing tasks written in L2 French and L3 English by 45 high school participants over a period of four semesters. CAF dimensions are measured using a variation of Hunt's T-units (1964). Analysis of the quantitative data obtained suggests that CAF measures develop differently in learners' L2 French and L3 English: they increase more persistently in L3 English, and they display the characteristics of a dynamic, non-linear system characterized by ups and downs, particularly in L2 French. In light of the results, we suggest more and denser longitudinal data to explore the nature of the interactions between these dimensions in foreign language development, particularly at the individual level.
Accuracy of thick-walled hollows during piercing on three-high mill
Potapov, I.N.; Romantsev, B.A.; Shamanaev, V.I.; Popov, V.A.; Kharitonov, E.A.
1975-01-01
The results of investigations are presented concerning the accuracy of the geometrical dimensions of thick-walled sleeves produced by piercing on a 100-ton MISiS three-high screw rolling mill with three schemes of fixing and centering the rod. The use of a spherical thrust journal for the rod and of a long centering bushing makes it possible to diminish the non-uniformity of the wall thickness of the sleeves by 30-50%. It is established that thick-walled sleeves with accurate geometrical dimensions (wall-thickness non-uniformity of less than 10%) can be produced if the sleeve-mandrel-rod system is highly rigid and the rod has a two- to three-fold stability margin over a length equal to that of the sleeve being pierced. Piercing is best carried out at increased feed angles (14-16°). Blanks were made from 12Kh1MF steel.
Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro
2014-05-01
Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that the enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke in which the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated the cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was promoted across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance than age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying change in artistic style due to focal prefrontal lesions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Innovative High-Accuracy Lidar Bathymetric Technique for the Frequent Measurement of River Systems
Gisler, A.; Crowley, G.; Thayer, J. P.; Thompson, G. S.; Barton-Grimley, R. A.
2015-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm, with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for understanding how rivers evolve over many timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
Innovative Technique for High-Accuracy Remote Monitoring of Surface Water
Gisler, A.; Barton-Grimley, R. A.; Thayer, J. P.; Crowley, G.
2016-12-01
Lidar (light detection and ranging) provides absolute depth and topographic mapping capability compared to other remote sensing methods, which is useful for mapping rapidly changing environments such as riverine systems and agricultural waterways. The effectiveness of current lidar bathymetric systems is limited by the difficulty in unambiguously identifying backscattered lidar signals from the water surface versus the bottom, limiting their depth resolution to 0.3-0.5 m. Additionally, these are large, bulky systems that are constrained to expensive aircraft-mounted platforms and use waveform-processing techniques requiring substantial computation time. These restrictions are prohibitive for many potential users. A novel lidar device has been developed that allows for non-contact measurements of water depth down to 1 cm, with high accuracy and precision from shallow to deep water, allowing for shoreline charting, measuring water volume, mapping bottom topology, and identifying submerged objects. The scalability of the technique opens up the possibility of handheld or UAS-mounted lidar bathymetric systems, which provides for potential applications currently unavailable to the community. The high laser pulse repetition rate allows for very fine horizontal resolution, while the photon-counting technique permits real-time depth measurement and object detection. The enhanced measurement capability, portability, scalability, and relatively low cost create the opportunity to perform frequent high-accuracy monitoring and measuring of aquatic environments, which is crucial for monitoring water resources on fast timescales. Results from recent campaigns measuring water depth in flowing creeks and murky ponds will be presented, which demonstrate that the method is not limited by rough water surfaces and can map underwater topology through moderately turbid water.
High-accuracy continuous airborne measurements of greenhouse gases (CO2 and CH4) during BARCA
Chen, H.; Winderlich, J.; Gerbig, C.; Hoefer, A.; Rella, C. W.; Crosson, E. R.; van Pelt, A. D.; Steinbach, J.; Kolle, O.; Beck, V.; Daube, B. C.; Gottlieb, E. W.; Chow, V. Y.; Santoni, G. W.; Wofsy, S. C.
2009-12-01
High-accuracy continuous measurements of greenhouse gases (CO2 and CH4) during the BARCA (Balanço Atmosférico Regional de Carbono na Amazônia) phase B campaign in Brazil in May 2009 were accomplished using a newly available analyzer based on the cavity ring-down spectroscopy (CRDS) technique. This analyzer was flown without a drying system or any in-flight calibration gases. Water vapor corrections associated with dilution and pressure-broadening effects for CO2 and CH4 were derived from laboratory experiments employing measurements of water vapor by the CRDS analyzer. Before the campaign, the stability of the analyzer was assessed by laboratory tests under simulated flight conditions. During the campaign, a comparison of CO2 measurements between the CRDS analyzer and a nondispersive infrared (NDIR) analyzer on board the same aircraft showed a mean difference of 0.22±0.09 ppm for all flights over the Amazon rain forest. At the end of the campaign, the CO2 concentrations of the synthetic calibration gases used by the NDIR analyzer were determined by the CRDS analyzer. After correcting for the isotope and pressure-broadening effects that resulted from the compositional differences between synthetic and ambient air, and applying those concentrations as calibrated values of the calibration gases to reprocess the CO2 measurements made by the NDIR, the mean difference between the CRDS and the NDIR during BARCA was reduced to 0.05±0.09 ppm, with a mean standard deviation of 0.23±0.05 ppm. The results clearly show that the CRDS is sufficiently stable to be used in flight without drying the air or calibrating in flight, and that the water corrections are fully adequate for high-accuracy continuous airborne measurements of CO2 and CH4.
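A water vapour correction of the kind described above is commonly expressed as a quadratic in the reported H2O fraction that maps the wet-air reading back to a dry mole fraction. The sketch below assumes that functional form; the coefficient values are illustrative placeholders, not the calibrated ones derived in the study.

```python
# Sketch of an empirical water-vapour (dilution + pressure-broadening)
# correction for a wet-air CO2 reading. Coefficients are assumed values
# for illustration, not the study's laboratory-derived ones.

A = -0.0120     # linear term per % H2O (assumed)
B = -2.674e-4   # quadratic term per %^2 H2O (assumed)

def co2_dry(co2_wet_ppm, h2o_percent):
    """Dry-air CO2 mole fraction recovered from the wet measurement."""
    return co2_wet_ppm / (1.0 + A * h2o_percent + B * h2o_percent**2)

# 2.5% water vapour in the cell inflates the correction to >10 ppm,
# far above the 0.05 ppm agreement quoted above, hence its importance.
corrected = co2_dry(390.0, 2.5)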
High numerical aperture imaging by using multimode fibers with micro-fabricated optics
Bianchi, Silvio; Rajamanickam, V.; Ferrara, Lorenzo; Di Fabrizio, Enzo M.; Di Leonardo, Roberto; Liberale, Carlo
2014-01-01
Controlling light propagation into multimode optical fibers through spatial light modulators provides highly miniaturized endoscopes and optical micromanipulation probes. We increase the numerical aperture up to nearly 1 by means of micro-optics fabricated on the fiber end.
Component-oriented approach to the development and use of numerical models in high energy physics
Amelin, N.S.; Komogorov, M.Eh.
2002-01-01
We discuss the main concepts of a component approach to the development and use of numerical models in high energy physics. This approach is realized in the NiMax software system. The discussed concepts are illustrated by numerous examples from user sessions with the system. In the appendix we describe the physics and numerical algorithms of the model components used to simulate hadronic and nuclear collisions at high energies. These components are members of hadronic application modules that have been developed with the help of the NiMax system. This report serves as an early release of the NiMax manual, aimed mainly at model component users.
Lenarcic, M; Eichhorn, M; Schoder, S J; Bauer, Ch
2015-01-01
In this work the incompressible turbulent flow in a high-head Francis turbine under steady operating conditions is investigated using the open source CFD software package FOAM-extend-3.1. By varying computational domains (cyclic model, full model), coupling methods between stationary and rotating frames (mixing-plane, frozen-rotor) and turbulence models (k-ω SST, k-ε), numerical flow simulations are performed at the best efficiency point as well as at operating points in part load and high load. The discretization is adjusted according to the y+ criterion with mean y+ > 30. A grid independence study quantifies the discretization error and the corresponding computational costs for the appropriate simulations, reaching a GCI < 1% for the chosen grid. Specific quantities such as efficiency, head and runner shaft torque, as well as static pressure and velocity components, are computed and compared with experimental data and a commercial code. Focusing on the computed results for integral quantities and static pressures, the highest level of accuracy is obtained using FOAM in combination with the full-model discretization, the mixing-plane coupling method and the k-ω SST turbulence model. The corresponding relative deviations in efficiency reach Δη_rel ∼ 7% at part load, Δη_rel ∼ 0.5% at the best efficiency point and Δη_rel ∼ 5.6% at high load. The computed static pressures deviate from the measurements by a maximum of Δp_rel = 9.3% at part load, Δp_rel = 4.3% at the best efficiency point and Δp_rel = 6.7% at high load. The commercial code in turn yields slightly better predictions for the velocity components in the draft tube cone, reaching good accordance with the measurements at part load. Although FOAM also shows adequate correspondence to the experimental data at part load, local effects near the runner hub are captured less accurately at the best efficiency point and at high load. Nevertheless, FOAM is a reasonable alternative to commercial code.
Esterhazy, Sofi; Schneider, Felix; Schöberl, Joachim; Perugia, Ilaria; Bokelmann, Götz
2016-04-01
Research on purely numerical methods for modeling seismic waves has intensified over the last decades. This development is mainly driven by the fact that, on the one hand, exact analytic solutions do not exist for subsurface models of interest in exploration and global seismology, while, on the other hand, retrieving full seismic waveforms is important for gaining insight into spectral characteristics and for the interpretation of seismic phases and amplitudes. Furthermore, computational capacity has dramatically increased in the recent past, so that it has become worthwhile to perform computations for large-scale problems such as those arising in computational seismology. Algorithms based on the Finite Element Method (FEM) are becoming increasingly popular for the propagation of acoustic and elastic waves in geophysical models, as they provide more geometrical flexibility in terms of complexity as well as heterogeneity of the materials. In particular, we want to demonstrate the benefit of high-order FEMs, as they also provide better control of the accuracy. Our computations are done with the parallel Finite Element Library NGSOLVE on top of the automatic 2D/3D mesh generator NETGEN (http://sourceforge.net/projects/ngsolve/). Further, we are interested in the generation of synthetic seismograms, including direct, refracted and converted waves, in correlation with the presence of an underground cavity, and in the detailed simulation of the comprehensive wave field inside and around such a cavity as would be created by a nuclear explosion. The motivation for this application comes from the need to find evidence of nuclear tests, which are forbidden by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). With this approach it is possible for us to investigate the wave field over a large bandwidth of wave numbers. This in turn will help to provide a better understanding of the characteristic signatures of an underground cavity, improve the protocols for
Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method
Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio
Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are its orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations it undergoes during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, defining a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in fields such as machine learning and bioinformatics, and generally in any field requiring an evaluation of the performance of a learning algorithm (e.g. in geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which are able to overcome the most remarkable drawbacks of the commonly used method (Hold-Out Validation, HOV), based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs, Ground Control Points) and the second is used to validate the model itself (CPs, Check Points). In fact the HOV is generally not reliable, and it is not applicable when only a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
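The LOOCV scheme described above can be sketched generically: each ground point serves once as a check point while all the others fit the model, so every point contributes to validation without sacrificing any from the fit. The toy 1-D linear fit below stands in for the rigorous orientation model; all names and data are illustrative.

```python
# Minimal sketch of Leave-One-Out Cross-Validation. The "model" is a
# toy least-squares line; in the paper it would be the rigorous HRSI
# orientation model. Data and names are illustrative assumptions.

def fit_line(pts):
    """Ordinary least-squares fit y = m*x + c."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

def loocv_rmse(pts):
    """Each point is held out once as a check point; the rest fit the model."""
    sq = 0.0
    for i in range(len(pts)):
        train = pts[:i] + pts[i + 1:]     # leave point i out
        m, c = fit_line(train)
        x, y = pts[i]
        sq += (y - (m * x + c)) ** 2      # residual at the held-out point
    return (sq / len(pts)) ** 0.5

points = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 2.9), (4, 4.0)]
rmse = loocv_rmse(points)
```

Unlike hold-out validation, no fixed GCP/CP split is needed, which is exactly what makes the method usable when few ground points are available.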
A new device for liver cancer biomarker detection with high accuracy
Shuaipeng Wang
2015-06-01
A novel cantilever array-based biosensor was batch-fabricated with IC-compatible MEMS technology for precise liver cancer biomarker detection. A micro-cavity was designed in the free end of the cantilever for local antibody immobilization, so that adsorption of the cancer biomarker is localized in the micro-cavity and the adsorption-induced spring-constant (k) variation is dramatically reduced in comparison to that caused by adsorption over the whole lever. The cantilever is piezoelectrically driven into vibration, which is piezoresistively sensed by a Wheatstone bridge. These structural features offer several advantages: high sensitivity, high throughput, high mass detection accuracy, and small volume. In addition, an analytical model has been established to eliminate the effect of adsorption-induced lever stiffness change and has been applied to precise mass detection of the cancer biomarker AFP; the detected AFP antigen mass (7.6 pg/ml) is quite close to the calculated one (5.5 pg/ml), two orders of magnitude better than the value obtained with a fully antibody-immobilized cantilever sensor. These approaches will promote real application of cantilever sensors in the early diagnosis of cancer.
Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan
2018-05-15
Endovascular interventional surgery (EIS) is performed in a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce the radiation dose to surgeons and the physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allows the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons and two twist switches (to control the catheter manipulators) are presented. The entire robotic system is built on a master-slave control structure communicating over a CAN (Controller Area Network) bus; the slave side of the system shows highly accurate control over velocity and displacement with a PID control method. The robotic system was tested and passed in vitro and animal experiments. In the functionality evaluation, the manipulators were able to complete interventional surgical motions both independently and cooperatively. Robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system is able to imitate the movements of surgeons and to accomplish axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
A Robust High-Accuracy Ultrasound Indoor Positioning System Based on a Wireless Sensor Network.
Qi, Jun; Liu, Guo-Ping
2017-11-06
This paper describes the development and implementation of a robust high-accuracy ultrasonic indoor positioning system (UIPS). The UIPS consists of several wireless ultrasonic beacons in the indoor environment. Each of them has a fixed and known position coordinate and can collect all the transmissions from the target node or emit ultrasonic signals. Every wireless sensor network (WSN) node has two communication modules: one is WiFi, which transmits the data to the server, and the other is the radio frequency (RF) module, which is only used for time synchronization between different nodes, with accuracy up to 1 μs. The distance between a beacon and the target node is calculated by measuring the time-of-flight (TOF) of the ultrasonic signal, and the position of the target is then computed from these distances and the coordinates of the beacons. TOF estimation is the most important technique in the UIPS. A new time-domain method to extract the envelope of the ultrasonic signals is presented in order to estimate the TOF. This method, with the envelope detection filter, estimates the value from the sampled values on both sides based on the least squares method (LSM). The simulation results show that the method can achieve envelope detection with a good filtering effect by means of the LSM. The precision and variance reach 0.61 mm and 0.23 mm, respectively, in pseudo-range measurements with the UIPS. A maximum location error of 10.2 mm is achieved in positioning experiments with a moving robot, when the UIPS works on the line-of-sight (LOS) signal.
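The final step described above, computing the target position from beacon coordinates and TOF-derived distances, is standard trilateration. The sketch below shows the common linearized 2-D form (subtracting the first range equation from the others removes the quadratic terms); it is a generic illustration, not code from the UIPS, and the geometry is assumed.

```python
# Generic linearized 2-D trilateration from three beacon ranges.
# Beacon layout and ranges below are illustrative assumptions.

def trilaterate(beacons, dists):
    """Solve for (x, y): subtracting the first range equation from the
    other two yields a 2x2 linear system in the target coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21     # nonzero if beacons are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Three beacons on a 4 m grid; exact ranges to a target at (1.0, 2.0)
pos = trilaterate([(0, 0), (4, 0), (0, 4)],
                  [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```

In practice the ranges carry the sub-millimetre pseudo-range errors quoted above, and more than three beacons are combined in a least-squares sense.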
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) a slant optical axis (misalignment of the camera's optical axis and the object surface) and (2) out-of-plane motions (translations and rotations) of the specimen. These introduce measurement errors into the 2D DIC results, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step determines the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using a coordinate transform algorithm; (3) the three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. The results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method gives good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
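A back-of-envelope calculation shows why 2D DIC needs such compensation: under a pinhole model, a pure out-of-plane translation toward the camera changes the magnification and appears as a fictitious in-plane strain of roughly -dz/z. The sketch below uses assumed, illustrative numbers.

```python
# Why out-of-plane motion corrupts 2-D DIC: a translation dz toward the
# camera at stand-off z mimics an in-plane strain of about -dz/z
# (pinhole-camera approximation). Numbers are illustrative assumptions.

def apparent_strain(dz_m, object_distance_m):
    """First-order strain error caused by an out-of-plane translation."""
    return -dz_m / object_distance_m

# Translation of 1/200 of the object distance (the bound quoted above),
# at an assumed 0.5 m stand-off: ~5000 microstrain of fictitious strain.
err = apparent_strain(0.5 / 200, 0.5)
```

Uncompensated, this error is some 500 times larger than the 10 με accuracy the compensated method reports, which motivates tracking the specimen with the rigid marker panel.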
Nelson, E.M.
1993-12-01
Some two-dimensional finite element electromagnetic field solvers are described and tested. For TE and TM modes in homogeneous cylindrical waveguides and monopole modes in homogeneous axisymmetric structures, the solvers find approximate solutions to a weak formulation of the wave equation. Second-order isoparametric lagrangian triangular elements represent the field. For multipole modes in axisymmetric structures, the solver finds approximate solutions to a weak form of the curl-curl formulation of Maxwell's equations. Second-order triangular edge elements represent the radial (ρ) and axial (z) components of the field, while a second-order lagrangian basis represents the azimuthal (φ) component of the field weighted by the radius ρ. A reduced set of basis functions is employed for elements touching the axis. With this basis the spurious modes of the curl-curl formulation have zero frequency, so spurious modes are easily distinguished from non-static physical modes. Tests on an annular ring, a pillbox and a sphere indicate the solutions converge rapidly as the mesh is refined. Computed eigenvalues with relative errors of less than a few parts per million are obtained. Boundary conditions for symmetric, periodic and symmetric-periodic structures are discussed and included in the field solver. Boundary conditions for structures with inversion symmetry are also discussed. Special corner elements are described and employed to improve the accuracy of cylindrical waveguide and monopole modes with singular fields at sharp corners. The field solver is applied to three problems: (1) cross-field amplifier slow-wave circuits, (2) a detuned disk-loaded waveguide linear accelerator structure and (3) a 90 degrees overmoded waveguide bend. The detuned accelerator structure is a critical application of this high accuracy field solver. To maintain low long-range wakefields, tight design and manufacturing tolerances are required
Model Accuracy Comparison for High Resolution Insar Coherence Statistics Over Urban Areas
Zhang, Yue; Fu, Kun; Sun, Xian; Xu, Guangluan; Wang, Hongqi
2016-06-01
The interferometric coherence map derived from the cross-correlation of two co-registered complex synthetic aperture radar (SAR) images reflects the imaged targets. In many applications it can act as an independent information source, or give additional information complementary to the intensity image. In particular, the statistical properties of the coherence are of great importance in land cover classification, segmentation and change detection. However, compared to the amount of work on the statistical characteristics of SAR intensity, there has been considerably less research on interferometric SAR (InSAR) coherence statistics. To our knowledge, all existing work that focuses on InSAR coherence statistics models the coherence with a Gaussian distribution, with no discrimination between data resolutions or scene types. But the properties of the coherence may differ with data resolution and scene type. In this paper, we investigate the coherence statistics of high resolution data over urban areas by comparing the accuracy of several typical statistical models. Four typical land classes, including buildings, trees, shadow and roads, are selected as representatives of urban areas. First, several regions are selected manually from the coherence map and labelled with their corresponding classes. Then we model the statistics of the pixel coherence for each type of region with different models, including Gaussian, Rayleigh, Weibull, Beta and Nakagami. Finally, we evaluate the model accuracy for each type of region. Experiments on TanDEM-X data show that the Beta model performs better than the other distributions.
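The model-comparison step can be sketched generically: fit each candidate distribution to the coherence samples of one land class and rank the fits, e.g. by log-likelihood. The sketch below shows only two of the candidates, a method-of-moments Beta fit against a Gaussian; the coherence values are synthetic and all names are illustrative.

```python
# Sketch of comparing candidate distributions for coherence samples.
# Only Beta (method of moments) vs. Gaussian are shown; the sample is
# synthetic and the variable names are illustrative assumptions.
import math

def beta_mom(xs):
    """Method-of-moments estimates (alpha, beta) for a Beta distribution."""
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    k = m * (1 - m) / v - 1          # common factor from moment matching
    return m * k, (1 - m) * k

def beta_loglik(xs, a, b):
    lnB = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return sum((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - lnB
               for x in xs)

def gauss_loglik(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum(-0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)
               for x in xs)

# Synthetic coherence values in (0, 1), skewed toward high coherence
sample = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74, 0.59, 0.77, 0.69, 0.64]
a, b = beta_mom(sample)
ll_beta = beta_loglik(sample, a, b)
ll_gauss = gauss_loglik(sample)
```

The Beta family is a natural candidate here because, unlike the Gaussian, its support matches the coherence's bounded range (0, 1).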
The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images
Wang, Y.; Hu, C.; Xia, G.; Xue, H.
2018-04-01
The registration of terrestrial laser point clouds and close-range images is the key to high-precision 3D reconstruction of cultural relic objects. Given the current requirement for high texture resolution in the cultural relic field, registering point cloud and image data for object reconstruction leads to a one-point-cloud-to-multiple-images problem. In current commercial software, the registration of the two kinds of data is realized by manually dividing the point cloud data, manually matching point cloud and image data, and manually selecting corresponding 2D points in the image and the point cloud; this process not only greatly reduces working efficiency but also affects the precision of the registration, causing texture seams in the colored point cloud. To solve these problems, this paper takes the whole object image as intermediate data and uses matching technology to realize an automatic one-to-one correspondence between the point cloud and multiple images. Matching between the point cloud's central-projection reflection-intensity image and the optical image is applied to automatically match feature points of the same name, and the Rodrigues-matrix spatial similarity transformation model with iterative weight selection is used to realize automatic high-accuracy registration of the two kinds of data. This method is expected to serve high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.
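The building block of the spatial similarity transformation mentioned above is a rotation (here built with the Rodrigues formula) combined with a scale and a translation, mapping point-cloud coordinates into the image frame. The sketch below is a generic illustration with assumed parameters, not the paper's estimation procedure.

```python
# Sketch of a spatial similarity transformation: Rodrigues rotation,
# uniform scale, and translation. Parameters are illustrative assumptions.
import math

def rodrigues(axis, angle):
    """3x3 rotation matrix from a unit axis and an angle (Rodrigues formula)."""
    x, y, z = axis
    c, s, t = math.cos(angle), math.sin(angle), 1 - math.cos(angle)
    return [[t*x*x + c,   t*x*y - s*z, t*x*z + s*y],
            [t*x*y + s*z, t*y*y + c,   t*y*z - s*x],
            [t*x*z - s*y, t*y*z + s*x, t*z*z + c]]

def similarity(p, R, scale, tvec):
    """q = scale * R @ p + t, applied componentwise."""
    return [scale * sum(R[i][j] * p[j] for j in range(3)) + tvec[i]
            for i in range(3)]

R = rodrigues((0.0, 0.0, 1.0), math.pi / 2)     # 90 deg about the z axis
q = similarity([1.0, 0.0, 0.0], R, 2.0, [0.0, 0.0, 0.5])
```

In the registration itself, the seven parameters (angle/axis, scale, translation) would be estimated by least squares from matched feature points, with iterative reweighting to suppress mismatches.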
Volpi, Giorgio; Crosta, Giovanni B.; Colucci, Francesca; Fischer, Thomas; Magri, Fabien
2017-04-01
Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. However, its present utilization is incommensurate with the enormous amount of energy available underneath the surface of the earth. This is mainly due to the uncertainties associated with it, for example the lack of appropriate computational tools necessary to perform effective analyses. The aim of the present study is to build an accurate 3D numerical model to simulate the exploitation process of the deep geothermal reservoir of Castel Giorgio - Torre Alfina (central Italy), and to compare the results and performance of parallel simulations performed with TOUGH2 (Pruess et al. 1999), FEFLOW (Diersch 2014) and the open source software OpenGeoSys (Kolditz et al. 2012). Detailed geological, structural and hydrogeological data, available for the selected area since the early 70s, show that Castel Giorgio - Torre Alfina is a potential geothermal reservoir with high thermal characteristics (120 °C - 150 °C) and fluids such as pressurized water and gas, mainly CO2, hosted in a carbonate formation. Our two-step simulations first recreate the undisturbed natural state of the system and then perform a predictive analysis of the industrial exploitation process. The three adopted codes showed strong numerical accuracy, which was verified by comparing the simulated and measured temperature and pressure values of the geothermal wells in the area. The results of our simulations demonstrate the sustainability of the investigated geothermal field for the development of a 5 MW pilot plant with total fluid reinjection into the original formation. From the thermal point of view, a very efficient buoyant circulation inside the geothermal system has been observed, allowing the reservoir to support the hypothesis of a 50-year production time with a flow rate of 1050 t
Numerical simulations of novel high-power high-brightness diode laser structures
Boucke, Konstantin; Rogg, Joseph; Kelemen, Marc T.; Poprawe, Reinhart; Weimann, Guenter
2001-07-01
One of the key topics in today's semiconductor laser development is increasing the brightness of high-power diode lasers. Although structures showing increased brightness have been developed, specific drawbacks of these structures leave a strong demand for the investigation of alternative concepts. Especially for the investigation of fundamentally novel structures, easy-to-use and fast simulation tools are essential to avoid unnecessary, costly and time-consuming experiments. A diode laser simulation tool based on finite difference representations of the Helmholtz equation in the 'wide-angle' approximation and the carrier diffusion equation has been developed. An optimized numerical algorithm leads to short execution times of a few seconds per resonator round trip on a standard PC. After each round trip, characteristics such as optical output power, beam profile and beam parameters are calculated. A graphical user interface allows online monitoring of the simulation results. The simulation tool is used to investigate a novel high-power, high-brightness diode laser structure, the so-called 'Z-Structure'. In this structure an increased brightness is achieved by reducing the divergence angle of the beam through angular filtering: the round-trip path of the beam is folded twice using internal total reflection at surfaces defined by a small index step in the semiconductor material, forming a stretched 'Z'. The sharp decrease of the reflectivity for angles of incidence above the angle of total reflection leads to a narrowing of the angular spectrum of the beam. The simulations of the 'Z-Structure' indicate an increase of the beam quality by a factor of five to ten compared to standard broad-area lasers.
McEvoy, Sinead; Lavelle, Lisa; Kilcoyne, Aoife; McCarthy, Colin; Dodd, Jonathan D.; DeJong, Pim A.; Loeve, Martine; Tiddens, Harm A.W.M.; McKone, Edward; Gallagher, Charles G.
2012-01-01
To determine the diagnostic accuracy of high-resolution computed tomography (HRCT) for the detection of nontuberculous mycobacterium infection (NTM) in adult cystic fibrosis (CF) patients. Twenty-seven CF patients with sputum-culture-proven NTM (NTM+) underwent HRCT. An age, gender and spirometrically matched group of 27 CF patients without NTM (NTM-) was included as controls. Images were randomly and blindly analysed by two readers in consensus and scored using a modified Bhalla scoring system. Significant differences were seen between NTM (+) and NTM (-) patients in the severity of the bronchiectasis subscore [45 % (1.8/4) vs. 35 % (1.4/4), P = 0.029], collapse/consolidation subscore [33 % (1.3/3) vs. 15 % (0.6/3)], tree-in-bud/centrilobular nodules subscore [43 % (1.7/3) vs. 25 % (1.0/3), P = 0.002] and the total CT score [56 % (18.4/33) vs. 46 % (15.2/33), P = 0.002]. Binary logistic regression revealed BMI, peribronchial thickening, collapse/consolidation and tree-in-bud/centrilobular nodules to be predictors of NTM status (R² = 0.43). Receiver-operator curve analysis of the regression model showed an area under the curve of 0.89, P < 0.0001. In adults with CF, seven or more bronchopulmonary segments showing tree-in-bud/centrilobular nodules on HRCT is highly suggestive of NTM colonisation. (orig.)
Rigorous Training of Dogs Leads to High Accuracy in Human Scent Matching-To-Sample Performance.
Sophie Marchal
Human scent identification is based on a matching-to-sample task in which trained dogs are required to compare a scent sample collected from an object found at a crime scene to that of a suspect. Based on dogs' greater olfactory ability to detect and process odours, this method has been used in forensic investigations to identify the odour of a suspect at a crime scene. The excellent reliability and reproducibility of the method largely depend on rigour in dog training. The present study describes the various steps of training that lead to high sensitivity scores, with dogs matching samples with 90% efficiency when the complexity of the scent presented in the sample is similar to that presented in the lineups, and specificity reaching a ceiling, with no false alarms, in human scent matching-to-sample tasks. This high level of accuracy ensures reliable results in judicial human scent identification tests. Our data should also convince law enforcement authorities to use these results as official forensic evidence when dogs are trained appropriately.
Takase, Kazuyuki
1994-11-01
The turbulent heat transfer of a fuel rod with three-dimensional trapezoidal spacer ribs for high-temperature gas-cooled reactors was analyzed numerically using the k-ε turbulence model, and investigated experimentally using a simulated fuel rod under helium gas conditions with a maximum outlet temperature of 1000°C and a pressure of 4 MPa. From the experimental results, it was found that the turbulent heat transfer coefficients of the fuel rod were 18 to 80% higher than those of a concentric smooth annulus in the region of Reynolds numbers exceeding 2000. On the other hand, the predicted average Nusselt number of the fuel rod agreed well with the heat transfer correlation obtained from the experimental data within a relative error of 10% for Reynolds numbers of more than 5000. It was verified that the numerical analysis results had sufficient accuracy. Furthermore, the numerical prediction could clarify quantitatively the effects of heat transfer augmentation by the spacer rib and the axial velocity increase due to the reduction in the annular channel cross-section. (author)
Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets
Fathizadeh, M.
1991-01-01
The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high-gain current loop is utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and to investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and tracking error of ±5×10⁻⁴ was needed. 3 refs., 5 figs
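The ripple and tracking tolerances quoted above are relative to the flat-top current, so a quick sketch of the arithmetic (values taken from the abstract) gives the absolute error budgets:

```python
I_PEAK = 1000.0        # A, flat-top current (from the abstract)
RIPPLE_REL = 2e-4      # +/- relative current-ripple specification
TRACKING_REL = 5e-4    # +/- relative tracking-error specification

ripple_abs = RIPPLE_REL * I_PEAK      # allowed ripple in amps
tracking_abs = TRACKING_REL * I_PEAK  # allowed tracking error in amps
print(ripple_abs, tracking_abs)       # i.e. ±0.2 A ripple, ±0.5 A tracking error
```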
High accuracy line positions of the ν₁ fundamental band of ¹⁴N₂¹⁶O
Alsaif, Bidoor
2018-03-08
The ν₁ fundamental band of N₂O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν₁ band of N₂O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹ or 144 kHz). The ν₁ transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν₃ and ν₃−ν₁ transitions.
Arnoldi, E.; Ramos-Duran, L.; Abro, J.A.; Costello, P.; Zwerner, P.L.; Schoepf, U.J.; Nikolaou, K.; Reiser, M.F.
2010-01-01
The purpose of this study was to evaluate the diagnostic performance of coronary CT angiography (coronary CTA) using prospective ECG triggering (PT) for the detection of significant coronary artery stenosis compared to invasive coronary angiography (ICA). A total of 20 patients underwent coronary CTA with PT using a 128-slice CT scanner (Definition™ AS+, Siemens) and ICA. All coronary CTA studies were evaluated for significant coronary artery stenoses (≥50% luminal narrowing) by 2 observers in consensus using the AHA 15-segment model. Findings in CTA were compared to those in ICA. Coronary CTA using PT had 88% vs. 100% sensitivity, 95% vs. 88% specificity, 80% vs. 92% positive predictive value and 97% vs. 100% negative predictive value for diagnosing significant coronary artery stenosis in comparison to ICA, on per-segment and per-patient analysis, respectively. The mean effective radiation dose-equivalent of CTA was 2.6±1 mSv. Coronary CTA using PT enables non-invasive diagnosis of significant coronary artery stenosis with high diagnostic accuracy in comparison to ICA and is associated with comparably low radiation exposure. (orig.)
High accuracy line positions of the ν₁ fundamental band of ¹⁴N₂¹⁶O
AlSaif, Bidoor; Lamperti, Marco; Gatti, Davide; Laporta, Paolo; Fermann, Martin; Farooq, Aamir; Lyulin, Oleg; Campargue, Alain; Marangoni, Marco
2018-05-01
The ν₁ fundamental band of N₂O is examined by a novel spectrometer that relies on the frequency locking of an external-cavity quantum cascade laser around 7.8 μm to a near-infrared Tm-based frequency comb at 1.9 μm. Due to the large tunability, nearly 70 lines in the 1240–1310 cm⁻¹ range of the ν₁ band of N₂O, from P(40) to R(31), are for the first time measured with an absolute frequency calibration and an uncertainty from 62 to 180 kHz, depending on the line. Accurate values of the spectroscopic constants of the upper state are derived from a fit of the line centers (rms ≈ 4.8 × 10⁻⁶ cm⁻¹ or 144 kHz). The ν₁ transitions presently measured in a Doppler regime validate high accuracy predictions based on sub-Doppler measurements of the ν₃ and ν₃−ν₁ transitions.
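The rms quoted above in wavenumbers converts to a frequency by multiplying by the speed of light (ν = c·σ̃). A one-line sanity check of the "≈ 4.8 × 10⁻⁶ cm⁻¹ or 144 kHz" equivalence, assuming only the defined value of c:

```python
C_CM_PER_S = 2.99792458e10   # speed of light in cm/s (exact by definition)

def wavenumber_to_hz(sigma_cm1):
    """Convert a spectroscopic wavenumber (cm^-1) to a frequency (Hz)."""
    return sigma_cm1 * C_CM_PER_S

rms_hz = wavenumber_to_hz(4.8e-6)
print(round(rms_hz / 1e3))   # ~144 kHz, matching the abstract
```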
On the impact of improved dosimetric accuracy on head and neck high dose rate brachytherapy.
Peppa, Vasiliki; Pappas, Eleftherios; Major, Tibor; Takácsi-Nagy, Zoltán; Pantelis, Evaggelos; Papagiannis, Panagiotis
2016-07-01
To study the effect of finite patient dimensions and tissue heterogeneities in head and neck high dose rate brachytherapy. The current practice of TG-43 dosimetry was compared to patient-specific dosimetry obtained using Monte Carlo simulation for a sample of 22 patient plans. The dose distributions were compared in terms of percentage dose differences as well as differences in dose volume histogram and radiobiological indices for the target and organs at risk (mandible, parotids, skin, and spinal cord). Noticeable percentage differences exist between TG-43 and patient-specific dosimetry, mainly at low dose points. Expressed as fractions of the planning aim dose, percentage differences are within 2% with a general TG-43 overestimation except for the spine. These differences are consistent, resulting in statistically significant differences of dose volume histogram and radiobiology indices. Absolute differences of these indices are however too small to warrant clinical importance in terms of tumor control or complication probabilities. The introduction of dosimetry methods characterized by improved accuracy is a valuable advancement. It does not appear however to influence dose prescription or call for amendment of clinical recommendations for the mobile tongue, base of tongue, and floor of mouth patient cohort of this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Design and simulation of high accuracy power supplies for injector synchrotron dipole magnets
Fathizadeh, M.
1991-01-01
The ring magnet of the injector synchrotron consists of 68 dipole magnets. These magnets are connected in series and are energized from two feed points 180 degrees apart by two identical 12-phase power supplies. The current in the magnet will be raised linearly to about the 1 kA level, and after a small transition period (1 ms to 10 ms typical) the current will be reduced to below the injection level of 60 A. The repetition time for the current waveform is 500 ms. A relatively fast voltage loop along with a high-gain current loop is utilized to control the current in the magnet with the required accuracy. Only one regulator circuit is used to control the firing pulses of the two sets of identical 12-phase power supplies. Pspice software was used to design and simulate the power supply performance under ramping and to investigate the effect of current changes on the utility voltage and input power factor. A current ripple of ±2×10⁻⁴ and tracking error of ±5×10⁻⁴ was needed.
Quantitative accuracy of serotonergic neurotransmission imaging with high-resolution ¹²³I SPECT
Kuikka, J.T.
2004-01-01
Aim: Serotonin transporter (SERT) imaging can be used to study the role of regional abnormalities of neurotransmitter release in various mental disorders and to study the mechanism of action of therapeutic drugs or drugs of abuse. We examine the quantitative accuracy and reproducibility that can be achieved with high-resolution SPECT of serotonergic neurotransmission. Method: Binding potential (BP) of a ¹²³I-labeled tracer specific for midbrain SERT was assessed in 20 healthy persons. The effects of scatter, attenuation, partial volume, misregistration and statistical noise were estimated using phantom and human studies. Results: Without any correction, BP was underestimated by 73%. The partial volume error was the major component of this underestimation, whereas the most critical error for reproducibility was misplacement of the region of interest (ROI). Conclusion: Proper ROI registration and the use of a multiple-head gamma camera with transmission-based scatter correction produce more reliable results. However, due to the small dimensions of the midbrain SERT structures and the poor spatial resolution of SPECT, the improvement without partial volume correction is not great enough to restore the estimate of BP to the true one. (orig.)
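The statement that BP was "underestimated by 73%" implies that only 27% of the true value is recovered; a partial-volume correction divides the measurement by that recovery coefficient. A minimal sketch (the measured BP value below is hypothetical, not from the study):

```python
def recover_bp(bp_measured, recovery_coefficient):
    """Partial-volume correction: divide the measured binding potential by
    the recovery coefficient (the fraction of the true signal recovered)."""
    return bp_measured / recovery_coefficient

# If BP is underestimated by 73%, only 27% of the true value is measured:
bp_true_estimate = recover_bp(bp_measured=0.54, recovery_coefficient=0.27)
print(bp_true_estimate)  # ≈ 2.0, i.e. the measurement scaled back up
```

As the abstract notes, such a correction presumes an accurate estimate of the recovery coefficient, which is hard to obtain for structures as small as the midbrain SERT region.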
High Accuracy Ground-based near-Earth-asteroid Astrometry using Synthetic Tracking
Zhai, Chengxing; Shao, Michael; Saini, Navtej; Sandhu, Jagmit; Werne, Thomas; Choi, Philip; Ely, Todd A.; Jacobs, Christopher S.; Lazio, Joseph; Martin-Mur, Tomas J.; Owen, William M.; Preston, Robert; Turyshev, Slava; Michell, Adam; Nazli, Kutay; Cui, Isaac; Monchama, Rachel
2018-01-01
Accurate astrometry is crucial for determining the orbits of near-Earth asteroids (NEAs). Further, the future of deep space high data rate communications is likely to be optical communications, such as the Deep Space Optical Communications package that is part of the baseline payload for the planned Psyche Discovery mission to the Psyche asteroid. We have recently upgraded our instrument on the Pomona College 1 m telescope, at JPL's Table Mountain Facility, for conducting synthetic tracking by taking many short exposure images. These images can then be combined in post-processing to track both the asteroid and reference stars to yield accurate astrometry. Utilizing the precision of the current and future Gaia data releases, the JPL-Pomona College effort is now demonstrating precision astrometry on NEAs, which is likely to be of considerable value for cataloging NEAs. Further, treating NEAs as proxies of future spacecraft that carry optical communication lasers, our results serve as a measure of the astrometric accuracy that could be achieved for future plane-of-sky optical navigation.
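Synthetic tracking co-adds many short exposures after shifting each frame to undo the asteroid's known motion, so the signal grows linearly with frame count while uncorrelated noise grows only as its square root. A toy 1-D sketch (the drift rate, source position and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, width = 50, 64
rate = 1                      # assumed drift: 1 pixel per frame
frames = []
for i in range(n_frames):
    f = rng.normal(0.0, 1.0, width)        # read/photon noise, sigma = 1
    f[(10 + i * rate) % width] += 1.0      # faint moving source, SNR ~ 1 per frame
    frames.append(f)

# Synthetic tracking: undo the known motion before co-adding the frames
stack = sum(np.roll(f, -i * rate) for i, f in enumerate(frames))
print(int(np.argmax(stack)))  # the source stands out at its start pixel, 10
```

In the stacked frame the source amplitude is 50 while the noise standard deviation is only √50 ≈ 7, so the detection is unambiguous even though no single frame shows it clearly.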
Optimal design of a high accuracy photoelectric auto-collimator based on position sensitive detector
Yan, Pei-pei; Yang, Yong-qing; She, Wen-ji; Liu, Kai; Jiang, Kai; Duan, Jing; Shan, Qiusha
2018-02-01
A high-accuracy photoelectric autocollimator based on a position sensitive detector (PSD) was designed as an integrated structure comprising a light source, an optical lens group, the PSD sensor, and its hardware and software processing system. A telephoto objective type was chosen in the design, which effectively reduces the length, weight and volume of the optical system; simulation-based design and analysis of the autocollimator optics were also carried out. The technical indicators presented in this paper are: measuring resolution better than 0.05″; field of view 2ω = 0.4° × 0.4°; measuring range ±5′; full-range measurement error less than 0.2″; and measuring distance 10 m, applicable to small-angle precision measurement environments. Aberration analysis indicates that the MTF is close to the diffraction limit and the spot in the spot diagram is much smaller than the Airy disk. Through optimization of the opto-mechanical structure, the total length of the telephoto lens is only 450 mm, so the autocollimator is notably compact while image quality is maintained.
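In an autocollimator, a mirror tilt of θ displaces the return spot on the detector by roughly 2fθ, since reflection doubles the beam deviation. A sketch of that conversion, assuming an illustrative 450 mm effective focal length (the abstract quotes only the telephoto tube length, so this value is an assumption):

```python
import math

ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

def psd_displacement_mm(angle_arcsec, focal_length_mm):
    """Return-spot displacement on the PSD for a mirror tilt of
    'angle_arcsec'; the factor of 2 comes from reflection doubling
    the angular deviation of the beam."""
    return 2.0 * focal_length_mm * angle_arcsec * ARCSEC

# Displacement the PSD must resolve for the quoted 0.05" resolution:
d_um = psd_displacement_mm(0.05, 450.0) * 1000.0
print(f"{d_um:.4f} um")  # a sub-micron displacement
```

This shows why a PSD (with effectively continuous position readout) is attractive here: the 0.05″ resolution target corresponds to a spot movement of only a few tenths of a micron.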
Global communication schemes for the numerical solution of high-dimensional PDEs
Hupp, Philipp; Heene, Mario; Jacob, Riko
2016-01-01
The numerical treatment of high-dimensional partial differential equations is among the most compute-hungry problems and is in urgent need of current and future high-performance computing (HPC) systems. It is thus also facing the grand challenges of exascale computing such as the requirement...
Reduced Set of Virulence Genes Allows High Accuracy Prediction of Bacterial Pathogenicity in Humans
Iraola, Gregorio; Vazquez, Gustavo; Spangenberg, Lucía; Naya, Hugo
2012-01-01
Although there have been great advances in understanding bacterial pathogenesis, there is still a lack of integrative information about what makes a bacterium a human pathogen. The advent of high-throughput sequencing technologies has dramatically increased the amount of completed bacterial genomes, for both known human pathogenic and non-pathogenic strains; this information is now available to investigate genetic features that determine pathogenic phenotypes in bacteria. In this work we determined presence/absence patterns of different virulence-related genes among more than finished bacterial genomes from both human pathogenic and non-pathogenic strains, belonging to different taxonomic groups (i.e., Actinobacteria, Gammaproteobacteria, Firmicutes, etc.). An accuracy of 95% using a cross-fold validation scheme with in-fold feature selection is obtained when classifying human pathogens and non-pathogens. A reduced subset of highly informative genes () is presented and applied to an external validation set. The statistical model was implemented in the BacFier v1.0 software (freely available at ), which displays not only the prediction (pathogen/non-pathogen) and an associated probability for pathogenicity, but also the presence/absence vector for the analyzed genes, so it is possible to decipher the subset of virulence genes responsible for the classification on the analyzed genome. Furthermore, we discuss the biological relevance for bacterial pathogenesis of the core set of genes, corresponding to eight functional categories, all with evident and documented association with the phenotypes of interest. Also, we analyze which functional categories of virulence genes were more distinctive for pathogenicity in each taxonomic group, which seems to be a completely new kind of information and could lead to important evolutionary conclusions. PMID:22916122
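The presence/absence classification idea can be illustrated with a toy nearest-neighbour scheme on binary gene vectors; note this is a deliberately simplified stand-in for illustration only, not the statistical model implemented in BacFier:

```python
def hamming(a, b):
    """Number of differing positions between two binary vectors."""
    return sum(x != y for x, y in zip(a, b))

def predict(genomes, labels, query):
    """1-nearest-neighbour on gene presence/absence vectors."""
    i = min(range(len(genomes)), key=lambda j: hamming(genomes[j], query))
    return labels[i]

# Toy presence/absence vectors for a handful of virulence genes
# (hypothetical data, chosen only to show the classification idea):
genomes = [(1, 1, 1, 0), (1, 1, 0, 0), (0, 0, 0, 1), (0, 1, 0, 0)]
labels = ["pathogen", "pathogen", "non-pathogen", "non-pathogen"]

# Leave-one-out cross-validation accuracy on the toy data:
hits = sum(predict(genomes[:i] + genomes[i+1:], labels[:i] + labels[i+1:], g) == l
           for i, (g, l) in enumerate(zip(genomes, labels)))
print(hits / len(genomes))
```

The study's cross-fold validation with in-fold feature selection follows the same leave-data-out principle, but with a probabilistic model and explicit selection of the most informative genes inside each fold.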
Kim, Ji Hyun; Kim, Sung Eun; Cho, Yu Kyung; Lim, Chul-Hyun; Park, Moo In; Hwang, Jin Won; Jang, Jae-Sik; Oh, Minkyung
2018-01-30
Although high-resolution manometry (HRM) has the advantage of visual intuitiveness, its diagnostic validity remains under debate. The aim of this study was to evaluate the diagnostic accuracy of HRM for esophageal motility disorders. Six staff members and 8 trainees were recruited for the study. In total, 40 patients enrolled in manometry studies at 3 institutes were selected. Captured images of 10 representative swallows and a single swallow in analyzing mode in both high-resolution pressure topography (HRPT) and conventional line tracing formats were provided with calculated metrics. Assessments of esophageal motility disorders showed fair agreement for HRPT and moderate agreement for conventional line tracing (κ = 0.40 and 0.58, respectively). With the HRPT format, the k value was higher in category A (esophagogastric junction [EGJ] relaxation abnormality) than in categories B (major body peristalsis abnormalities with intact EGJ relaxation) and C (minor body peristalsis abnormalities or normal body peristalsis with intact EGJ relaxation). The overall exact diagnostic accuracy for the HRPT format was 58.8% and rater's position was an independent factor for exact diagnostic accuracy. The diagnostic accuracy for major disorders was 63.4% with the HRPT format. The frequency of major discrepancies was higher for category B disorders than for category A disorders (38.4% vs 15.4%; P < 0.001). The interpreter's experience significantly affected the exact diagnostic accuracy of HRM for esophageal motility disorders. The diagnostic accuracy for major disorders was higher for achalasia than distal esophageal spasm and jackhammer esophagus.
DIRECT GEOREFERENCING: A NEW STANDARD IN PHOTOGRAMMETRY FOR HIGH ACCURACY MAPPING
A. Rizaldy
2012-07-01
Direct georeferencing is a new method in photogrammetry, especially in the digital camera era. Theoretically, this method does not require ground control points (GCP) or aerial triangulation (AT) to process aerial photography into ground coordinates. Compared with the old method, it has three main advantages at the same accuracy: faster data processing, a simpler workflow and lower project cost. Direct georeferencing uses two devices, GPS and IMU: the GPS records the camera coordinates (X, Y, Z) and the IMU records the camera orientation (omega, phi, kappa). Both are merged into the exterior orientation (EO) parameters. These parameters are required for the next steps in photogrammetric projects, such as stereocompilation, DSM generation, orthorectification and mosaicking. The accuracy of this method was tested on a topographic map project in Medan, Indonesia. The large-format digital camera Ultracam X from Vexcel was used, while the GPS/IMU was the IGI AeroControl. Nineteen independent check points (ICP) were used to determine the accuracy. Horizontal accuracy is 0.356 meters and vertical accuracy is 0.483 meters. Data with this accuracy can be used for a 1:2,500 map scale project.
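The EO attitude angles (omega, phi, kappa) enter the collinearity equations through a rotation matrix. A sketch under one common photogrammetric convention, R = Rx(ω)·Ry(φ)·Rz(κ) (angle and axis conventions vary between software packages, so treat this as illustrative):

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Build R = Rx(omega) @ Ry(phi) @ Rz(kappa) from attitude angles in
    radians -- one common convention for the exterior-orientation rotation
    used in the photogrammetric collinearity equations."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    rx = [[1, 0, 0], [0, co, -so], [0, so, co]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]
    mul = lambda a, b: [[sum(a[i][k] * b[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(mul(rx, ry), rz)

# Zero attitude angles give the identity rotation (level, north-aligned camera):
print(rotation_matrix(0.0, 0.0, 0.0))
```

With R in hand, each image ray is rotated into the ground frame and intersected from the GPS-recorded camera position, which is what removes the need for GCPs in the ideal case.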
Numerical Analysis on the High-Strength Concrete Beams Ultimate Behaviour
Smarzewski, Piotr; Stolarski, Adam
2017-10-01
Development of technologies of high-strength concrete (HSC) beam production, with the aim of creating a secure and durable material, is closely linked with numerical models of real objects. Three-dimensional nonlinear finite element models of reinforced high-strength concrete beams with a complex geometry have been investigated in this study. The numerical analysis is performed using the ANSYS finite element package. The arc-length (A-L) parameters and the adaptive descent (AD) parameters are used with the Newton-Raphson method to trace the complete load-deflection curves. Experimental and finite element modelling results are compared graphically and numerically. Comparison of these results indicates the correctness of the failure criteria assumed for the high-strength concrete and the steel reinforcement. The results of numerical simulation are sensitive to the modulus of elasticity and the shear transfer coefficient for an open crack assigned to high-strength concrete. The full nonlinear load-deflection curves at mid-span of the beams, the development of strain in compressive concrete and the development of strain in the tensile bar are in good agreement with the experimental results. Numerical results for smeared crack patterns agree qualitatively with the test data as to location, direction and distribution. The model was capable of predicting the introduction and propagation of flexural and diagonal cracks. It was concluded that the finite element model successfully captured the inelastic flexural behaviour of the beams to failure.
Konovalov, N.V.
The accuracy of the calculation of the characteristics of a radiation field in a plane layer, obtained by solving the transfer equation, is investigated as a function of the error in the specification of the scattering indicatrix. It is shown that a small error in the specification of the indicatrix can lead to a large error in the solution at large optical depths. An estimate is given for the region of optical thicknesses for which the emission field can be determined with a sufficient degree of accuracy from the transfer equation with a known error in the specification of the indicatrix. For an estimation of the error involved in various numerical methods, and also for a determination of the region of their applicability, the results of calculations of problems with a strongly anisotropic indicatrix are given.
Towards Building Reliable, High-Accuracy Solar Irradiance Database For Arid Climates
Munawwar, S.; Ghedira, H.
2012-12-01
Middle East's growing interest in renewable energy has led to increased activity in solar technology development with the recent commissioning of several utility-scale solar power projects and many other commercial installations across the Arabian Peninsula. The region, lying in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m2, is also one of the most promising candidates for solar energy deployment. However, it is not the availability of resource, but its characterization and reasonably accurate assessment that determines the application potential. Solar irradiance, magnitude and variability inclusive, is the key input in assessing the economic feasibility of a solar system. The accuracy of such data is of critical importance for realistic on-site performance estimates. This contribution aims to identify the key stages in developing a robust solar database for desert climate by focusing on the challenges that an arid environment presents to parameterization of solar irradiance attenuating factors. Adjustments are proposed based on the currently available resource assessment tools to produce high quality data for assessing bankability. Establishing and maintaining ground solar irradiance measurements is an expensive affair and fairly limited in time (recently operational) and space (fewer sites) in the Gulf region. Developers within solar technology industry, therefore, rely on solar radiation models and satellite-derived data for prompt resource assessment needs. It is imperative that such estimation tools are as accurate as possible. While purely empirical models have been widely researched and validated in the Arabian Peninsula's solar modeling history, they are known to be intrinsically site-specific. A primal step to modeling is an in-depth understanding of the region's climate, identifying the key players attenuating radiation and their appropriate characterization to determine solar irradiance. Physical approach
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
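The "fully diagonal" property comes from the fact that the fractional Laplacian has symbol |k|^α in Fourier space, so the linear substep of a split-step scheme can be solved exactly there. A minimal 1-D sketch of that substep (periodic domain assumed; the nonlinear substep, which is applied pointwise in physical space, is omitted):

```python
import numpy as np

def linear_substep(u, dt, alpha, L=2 * np.pi):
    """Exact Fourier-space solve of u_t = -(-Laplacian)^(alpha/2) u on a
    periodic interval of length L. The fractional Laplacian is diagonal in
    Fourier space with symbol |k|^alpha, which is why split-step spectral
    schemes for such equations are 'fully diagonal and easy to code'."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers for L = 2*pi
    return np.fft.ifft(np.exp(-np.abs(k) ** alpha * dt) * np.fft.fft(u))

# Sanity check: for alpha = 2 a single Fourier mode e^{ix} must decay like e^{-t}
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.exp(1j * x)
u1 = linear_substep(u, dt=0.1, alpha=2.0)
print(np.max(np.abs(u1 - np.exp(-0.1) * u)))  # agreement to machine precision
```

A full FGLE solver would alternate this exact linear solve with a pointwise update for the nonlinear Ginzburg-Landau terms, composed to the desired temporal order (e.g. Strang splitting for order two).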
R.K. Mohanty
2014-01-01
In this paper, we report new three-level implicit super-stable methods of order two in time and four in space for the solution of hyperbolic damped wave equations in one, two and three space dimensions subject to given appropriate initial and Dirichlet boundary conditions. We use uniform grid points in both the time and space directions. Our methods behave like fourth-order accurate when the grid size in the time direction is directly proportional to the square of the grid size in the space direction. The proposed methods are super-stable. The resulting system of algebraic equations is solved by the Gauss elimination method. We discuss new alternating direction implicit (ADI) methods for two- and three-dimensional problems. Numerical results and the graphical representation of the numerical solution are presented to illustrate the accuracy of the proposed methods.
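Implicit and ADI schemes of this kind reduce each sweep to a tridiagonal linear system, for which Gauss elimination specialises to the Thomas algorithm. A sketch (the 4×4 model system below is illustrative, not taken from the paper):

```python
def thomas(a, b, c, d):
    """Gauss elimination specialised to tridiagonal systems (Thomas algorithm):
    a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand
    side; a[0] and c[-1] are unused. Each ADI sweep of an implicit scheme
    reduces to solves of exactly this form, at O(n) cost per sweep."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D model problem: -x_{i-1} + 2 x_i - x_{i+1} = d_i with d = [1, 0, 0, 1];
# the exact solution is [1, 1, 1, 1] up to rounding.
x = thomas([0, -1, -1, -1], [2, 2, 2, 2], [-1, -1, -1, 0], [1, 0, 0, 1])
print(x)
```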
The research of digital circuit system for high accuracy CCD of portable Raman spectrometer
Yin, Yu; Cui, Yongsheng; Zhang, Xiuda; Yan, Huimin
2013-08-01
Raman spectroscopy is widely used because it can identify various types of molecular structures and materials. The portable Raman spectrometer has become a hot direction in spectrometer development for its convenience in handheld operation and real-time detection, in which it is superior to traditional Raman spectrometers of heavy weight and bulky size. However, there is still a gap in measurement sensitivity between portable and traditional devices. A portable Raman spectrometer with Shell-Isolated Nanoparticle-Enhanced Raman Spectroscopy (SHINERS) technology can enhance the Raman signal significantly, by several orders of magnitude, giving consideration to both measurement sensitivity and mobility. This paper proposes the design and implementation of the driver and digital circuit for a high accuracy CCD sensor, the core part of a portable spectrometer. The main target of the design is to reduce the dark-current generation rate and increase signal sensitivity during long integration times and in weak-signal environments. A back-thinned CCD image sensor from Hamamatsu Corporation is used for its high sensitivity, low noise and large dynamic range. To maximize this CCD sensor's performance while minimizing the overall size of the device to meet the project indicators, we delicately designed a peripheral circuit for the CCD sensor, mainly composed of multi-voltage circuits, sequential generation circuits, driving circuits and A/D conversion parts. As the most important power supply circuit, the multi-voltage circuits with 12 independent voltages are designed around reference power supply ICs and set to the specified voltage values by amplifiers making up low-pass filters, which allows the user to obtain a highly stable and accurate voltage with low noise. Moreover, to make the design easy to debug, a CPLD is selected to generate the sequential signals. The A/D converter chip consists of a correlated
In-depth, high-accuracy proteomics of sea urchin tooth organic matrix
Mann, Matthias
2008-12-01
Background: The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results: We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell) and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell) and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion: This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly
Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.
Andre F Marquand
Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level, but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions; and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum; and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.
Nabavizadeh, S.A.; Assadsangabi, R.; Hajmomenian, M.; Vossough, A. [Perelman School of Medicine of the University of Pennsylvania, Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA (United States); Santi, M. [Perelman School of Medicine of the University of Pennsylvania, Department of Pathology, Children's Hospital of Philadelphia, Philadelphia, PA (United States)
2015-05-01
Pilomyxoid astrocytoma (PMA) is a relatively new tumor entity added to the 2007 WHO Classification of tumors of the central nervous system. The goal of this study is to utilize arterial spin labeling (ASL) perfusion imaging to differentiate PMA from pilocytic astrocytoma (PA). Pulsed ASL and conventional MRI sequences of patients with PMA and PA over the past 5 years were retrospectively evaluated. Patients with a history of radiation or treatment with anti-angiogenic drugs were excluded. A total of 24 patients (9 PMA, 15 PA) were included. There were statistically significant differences between PMA and PA in mean tumor/gray matter (GM) cerebral blood flow (CBF) ratios (1.3 vs 0.4, p < 0.001) and maximum tumor/GM CBF ratio (2.3 vs 1, p < 0.001). The area under the receiver operating characteristic (ROC) curve for differentiation of PMA from PA was 0.91 using mean tumor CBF, 0.95 using mean tumor/GM CBF ratios, and 0.89 using maximum tumor/GM CBF. Using a threshold value of 0.91, the mean tumor/GM CBF ratio diagnosed PMA with 77% sensitivity and 100% specificity; a threshold value of 0.7 provided 88% sensitivity and 86% specificity. There was no statistically significant difference between the two tumors in enhancement pattern (p = 0.33), internal architecture (p = 0.15), or apparent diffusion coefficient (ADC) values (p = 0.07). ASL imaging has high accuracy in differentiating PMA from PA. The results of this study may have important applications in prognostication and treatment planning, especially in patients with less accessible tumors such as hypothalamic-chiasmatic gliomas.
Haynie, A.; Min, T.-J.; Luan, L.; Mu, W.; Ketterson, J. B.
2009-01-01
We describe an extension of the total-internal-reflection microscopy technique that permits direct in-plane distance measurements with high accuracy (<10 nm) over a wide range of separations. This high position accuracy arises from the creation of a standing evanescent wave and the ability to sweep the nodal positions (intensity minima of the standing wave) in a controlled manner via both the incident angle and the relative phase of the incoming laser beams. Some control over the vertical resolution is available through the ability to scan the incoming angle and with it the evanescent penetration depth.
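The geometry described above admits a quick back-of-the-envelope check. The sketch below uses the standard two-beam evanescent-field relations (node spacing Λ = λ/(2 n₁ sin θ) and the 1/e intensity penetration depth); the wavelength, refractive indices, and angle are illustrative values, not taken from the paper:

```python
import math

def fringe_period(wavelength, n1, theta_deg):
    # Node spacing of the standing evanescent wave formed by two
    # counter-propagating beams at incidence angle theta:
    # Lambda = wavelength / (2 * n1 * sin(theta))
    return wavelength / (2 * n1 * math.sin(math.radians(theta_deg)))

def penetration_depth(wavelength, n1, n2, theta_deg):
    # 1/e intensity decay depth of the evanescent field:
    # d = wavelength / (4 * pi * sqrt(n1^2 sin^2(theta) - n2^2))
    s = (n1 * math.sin(math.radians(theta_deg))) ** 2 - n2 ** 2
    return wavelength / (4 * math.pi * math.sqrt(s))

# Illustrative numbers: 532 nm light, glass/water interface, 70 deg incidence
period = fringe_period(532e-9, 1.52, 70)            # node spacing, ~186 nm
depth = penetration_depth(532e-9, 1.52, 1.33, 70)   # penetration depth, ~81 nm
```

Sweeping the incidence angle shifts both quantities, which is the control knob the abstract describes for scanning nodal positions and vertical resolution.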
Andrea Lani
2006-01-01
Object-oriented platforms developed for the numerical solution of PDEs must combine flexibility and reusability, in order to ease the integration of new functionalities and algorithms. When designing such frameworks, built-in support for high performance should be provided and enforced transparently, especially in parallel simulations. The paper presents solutions developed to effectively tackle these and other more specific problems (data handling and storage, implementation of physical models and numerical methods) that have arisen in the development of COOLFluiD, an environment for PDE solvers. Particular attention is devoted to describing a data storage facility, highly suitable for both serial and parallel computing, and to discussing the application of two design patterns, Perspective and Method-Command-Strategy, that support extensibility and run-time flexibility in the implementation of physical models and generic numerical algorithms, respectively.
High-accuracy dosimetry study for intensity-modulated radiation therapy(IMRT) commissioning
Jeong, Hae Sun
2010-02-01
Intensity-modulated radiation therapy (IMRT), an advanced modality of high-precision radiotherapy, allows for an increase in dose to the tumor volume without increasing the dose to nearby critical organs. In order to successfully achieve the treatment, intensive dosimetry with accurate dose verification is necessary. Dosimetry for IMRT, however, is a challenging task due to dosimetrically unfavorable phenomena such as dramatic changes of the dose at the field boundaries, disequilibrium of the electrons, non-uniformity between the detector and the phantom materials, and distortion of scanner-read doses. In the present study, therefore, a LEGO-type multi-purpose dosimetry phantom was developed and used for studies on dose measurements and correction. Phantom materials for muscle, fat, bone, and lung tissue were selected after considering mass density, atomic composition, effective atomic number, and photon interaction coefficients. The phantom also includes dosimeter holders for several different types of detectors including films, which accommodates the construction of different phantom designs as necessary. In order to evaluate its performance, the developed phantom was tested by measuring the point dose and the percent depth dose (PDD) for small fields under several heterogeneous conditions. However, the measurements with the two types of dosimeter did not agree well for field sizes less than 1 × 1 cm² in muscle and bone, and less than 3 × 3 cm² in air cavity. Thus, it was recognized that several studies on small-field dosimetry and correction methods for the calculation with a PMCEPT code are needed. The under-estimated values from the ion chamber were corrected with a convolution method employed to eliminate the volume effect of the chamber. As a result, the discrepancies between the EBT film and the ion chamber measurements were significantly decreased, from 14% to 1% (1 × 1 cm²), 10% to 1% (0.7 × 0.7 cm²), and 42% to 7% (0.5 × 0.5 cm²).
High-accuracy waveforms for binary black hole inspiral, merger, and ringdown
Scheel, Mark A.; Boyle, Michael; Chu, Tony; Matthews, Keith D.; Pfeiffer, Harald P.; Kidder, Lawrence E.
2009-01-01
The first spectral numerical simulations of 16 orbits, merger, and ringdown of an equal-mass nonspinning binary black hole system are presented. Gravitational waveforms from these simulations have accumulated numerical phase errors through ringdown of ≲0.1 radian. The final black hole mass is M_f/M = 0.95162 ± 0.00002, and the final black hole spin is S_f/M_f² = 0.68646 ± 0.00004.
Crittenden, P. E.; Balachandar, S.
2018-03-01
The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM⁺-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions; Sedov's similarity solution for point- or line-source explosions; and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.
Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.
2009-01-01
We report a detailed study of Eulerian and Lagrangian statistics from high-resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data set...
Analysis of the plasmodium falciparum proteome by high-accuracy mass spectrometry
Lasonder, Edwin; Ishihama, Yasushi; Andersen, Jens S
2002-01-01
We report a high-accuracy (average deviation less than 0.02 Da at 1,000 Da) mass spectrometric proteome analysis of selected stages of the human malaria parasite Plasmodium falciparum. The analysis revealed 1,289 proteins of which 714 proteins were identified in asexual blood stages, 931 in gametocytes and 645 in gametes. The last...
High-accuracy interferometric measurements of flatness and parallelism of a step gauge
Kruger, OA
2001-01-01
The most commonly used method in the calibration of step gauges is the coordinate measuring machine (CMM), equipped with a laser interferometer for the highest accuracy. This paper describes a modification to a length-bar measuring machine...
Scarani, C.; Tampieri, F.; Tibaldi, S.
1983-01-01
The effect of increasing the resolution of the topography in models of numerical weather prediction is assessed. Different numerical experiments have been performed, referring to a case of cyclogenesis in the lee of the Alps. From the comparison, it appears that the lower atmospheric levels are better described by the model with higher-resolution topography; comparable horizontal-resolution runs with smoother topography appear to be less satisfactory in this respect. It also turns out that the vertical propagation of the signal due to the front-mountain interaction is faster in the high-resolution experiment.
[Accuracy of placenta accreta prenatal diagnosis by ultrasound and MRI in a high-risk population].
Daney de Marcillac, F; Molière, S; Pinton, A; Weingertner, A-S; Fritz, G; Viville, B; Roedlich, M-N; Gaudineau, A; Sananes, N; Favre, R; Nisand, I; Langer, B
2016-02-01
The main objective was to compare the accuracy of ultrasonography and MRI for antenatal diagnosis of placenta accreta. Secondary objectives were to specify the most common sonographic and MRI signs associated with diagnosis of placenta accreta. This retrospective study used data collected from all potential cases of placenta accreta (patients with an anterior placenta praevia and a history of scarred uterus) admitted from 01/2010 to 12/2014 in a level III maternity unit in Strasbourg, France. High-risk patients underwent both antenatal ultrasonography and MRI. Sonographic signs registered were: abnormal placental lacunae, increased vascularity on color Doppler, absence of the retroplacental clear space, and interrupted bladder line. MRI signs registered were: abnormal uterine bulging, intraplacental bands of low signal intensity on T2-weighted images, increased vascularity, heterogeneous signal of the placenta on T2-weighted images, interrupted bladder line, and protrusion of the placenta into the cervix. Diagnosis of placenta accreta was confirmed histologically after hysterectomy, or clinically in case of successful conservative treatment. Twenty-two potential cases of placenta accreta were referred to our center and underwent both ultrasonography and MRI. All cases of placenta accreta had a placenta praevia associated with a history of scarred uterus. Sensitivity and specificity were, respectively, 0.92 and 0.67 for ultrasonography, and 0.84 and 0.78 for MRI, without significant difference (p>0.05). The most relevant signs associated with diagnosis of placenta accreta on ultrasonography were increased vascularity on color Doppler (sensitivity 0.85/specificity 0.78), abnormal placental lacunae (sensitivity 0.92/specificity 0.55) and loss of the retroplacental clear space (sensitivity 0.76/specificity 1.0). The most relevant signs on MRI were: abnormal uterine bulging (sensitivity 0.92/specificity 0.89), dark intraplacental bands on T2-weighted images (sensitivity 0.83/specificity 0.80) or...
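The sensitivity and specificity figures quoted in abstracts like this one come from a standard 2×2 contingency table. A minimal sketch, using hypothetical counts rather than the study's actual data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 contingency table."""
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a small high-risk cohort (not the study's data)
sens, spec, ppv, npv = diagnostic_metrics(tp=12, fp=3, fn=1, tn=6)
```

With such small cohorts a single reclassified case moves these ratios by several percentage points, which is why the paper reports no significant difference between the two modalities.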
Iannicelli, Elsa; Di Renzo, Sara; Ferri, Mario; Pilozzi, Emanuela; Di Girolamo, Marco; Sapori, Alessandra; Ziparo, Vincenzo; David, Vincenzo
2014-01-01
To evaluate the accuracy of magnetic resonance imaging (MRI) with lumen distention for rectal cancer staging and circumferential resection margin (CRM) involvement prediction. Seventy-three patients with primary rectal cancer underwent high-resolution MRI with a phased-array coil performed using 60-80 mL room air rectal distention, 1-3 weeks before surgery. MRI results were compared to postoperative histopathological findings. The overall MRI T staging accuracy was calculated. For CRM involvement prediction and N staging, the accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were assessed for each T stage. The agreement between MRI and histological results was assessed using weighted-kappa statistics. The overall MRI accuracy for T staging was 93.6% (k = 0.85). The accuracy, sensitivity, specificity, PPV and NPV for each T stage were as follows: 91.8%, 86.2%, 95.5%, 92.6% and 91.3% for the group ≤ T2; 90.4%, 94.6%, 86.1%, 87.5% and 94% for T3; 98.6%, 85.7%, 100%, 100% and 98.5% for T4, respectively. The predictive CRM accuracy was 94.5% (k = 0.86); the sensitivity, specificity, PPV and NPV were 89.5%, 96.3%, 89.5%, and 96.3%, respectively. The N staging accuracy was 68.49% (k = 0.4). MRI performed with rectal lumen distention has proved to be an effective technique both for rectal cancer staging and for predicting CRM involvement.
Numerical analysis of energy density and particle density in high energy heavy-ion collisions
Fu Yuanyong; Lu Zhongdao
2004-01-01
Energy density and particle density in high energy heavy-ion collisions are calculated with the infinite series expansion method and with Gauss-Laguerre formulas for numerical integration separately, and the results of the two methods are compared; the higher-order terms and linear terms in the series expansion are also compared. The results show that the Gauss-Laguerre formulas are a good method for calculations in high energy heavy-ion collisions. (author)
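Gauss-Laguerre quadrature, mentioned above, evaluates integrals of the form ∫₀^∞ e^(−x) f(x) dx with a handful of nodes, which is why it suits the Boltzmann-weighted integrals of thermal models. A minimal sketch; the monomial integrand is purely illustrative and not the paper's actual integrand:

```python
import numpy as np

# Gauss-Laguerre nodes/weights approximate integrals of the form
#   int_0^inf exp(-x) f(x) dx  ~=  sum_i w_i * f(x_i)
nodes, weights = np.polynomial.laguerre.laggauss(8)

# Check against a known value: int_0^inf exp(-x) x^3 dx = 3! = 6
approx = np.sum(weights * nodes ** 3)
print(approx)  # ~6.0
```

An 8-point rule is exact for polynomial integrands up to degree 15, so this check recovers 6 to machine precision; smooth non-polynomial integrands converge rapidly as the node count grows.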
Koncar, Bostjan; Simonovski, Igor; Norajitra, Prachai
2009-01-01
Numerical analyses of jet impingement cooling presented in this paper were performed as a part of helium-cooled divertor studies for post-ITER generation of fusion reactors. The cooling ability of divertor cooled by multiple helium jets was analysed. Thermal-hydraulic characteristics and temperature distributions in the solid structures were predicted for the reference geometry of one cooling finger. To assess numerical errors, different meshes (hexagonal, tetra, tetra-prism) and discretisation schemes were used. The temperatures in the solid structures decrease with finer mesh and higher order discretisation and converge towards finite values. Numerical simulations were validated against high heat flux experiments, performed at Efremov Institute, St. Petersburg. The predicted design parameters show reasonable agreement with measured data. The calculated maximum thimble temperature was below the tile-thimble brazing temperature, indicating good heat removal capability of reference divertor design. (author)
SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation
Euiho Kim
2017-09-01
In the Federal Aviation Administration's (FAA) performance-based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability in the event of a Global Navigation Satellite System (GNSS) outage. However, the main drawback of DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance-measuring accuracy of over 100 m. This paper introduces a method to improve DME distance-measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed using Genetic Algorithms and is less susceptible to multipath effects, so that the ranging error is reduced by 36.0-77.3% compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.
Automatic J–A Model Parameter Tuning Algorithm for High Accuracy Inrush Current Simulation
Xishan Wen
2017-04-01
Inrush current simulation plays an important role in many tasks of the power system, such as power transformer protection. However, the accuracy of inrush current simulation can hardly be ensured. In this paper, a Jiles–Atherton (J–A) theory based model is proposed to simulate the inrush current of power transformers. The characteristics of the inrush current curve are analyzed, and the results show that the entire inrush current curve can be well characterized by the crest values of the first two cycles. With comprehensive consideration of both the features of the inrush current curve and the J–A parameters, an automatic J–A parameter estimation algorithm is proposed. The proposed algorithm obtains more reasonable J–A parameters, which improves the accuracy of simulation. Experimental results have verified the efficiency of the proposed algorithm.
Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori
2011-01-01
In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of only the 6D couch and revealed the deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications
Van-Tang PHAM; Dinh-Chinh NGUYEN; Quang-Huy TRAN; Duc-Trinh CHU; Duc-Tan TRAN
2015-01-01
Magnetic compass sensors for angle measurement have a wide area of application, such as positioning, robotics, and landslide monitoring. However, one of the most significant factors affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stabilization schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to adapt to the variation of the temperature. The se...
New perspectives for high accuracy SLR with second generation geodesic satellites
Lund, Glenn
1993-01-01
This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performance by means of alternative satellite concepts and an optimized two-color system tradeoff. Long-distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short-arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short-pulse (less than 50 ps) dual-wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors), the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several tens of centimeters in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse thus results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot-to-shot basis, be accurately and unambiguously correlated to the satellite center of mass. This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return...
A high-accuracy optical linear algebra processor for finite element applications
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate-bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
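The "multiplication by digital convolution" idea can be illustrated in a few lines: the digit sequences of the operands are convolved (the operation an optical correlator performs natively), and the carries are resolved afterwards. A base-10 sketch for clarity; the actual processor would encode data in a machine-specific radix:

```python
import numpy as np

def multiply_by_convolution(a, b):
    """Multiply non-negative integers by convolving their digit sequences."""
    da = [int(c) for c in str(a)][::-1]  # least-significant digit first
    db = [int(c) for c in str(b)][::-1]
    conv = np.convolve(da, db)           # digit products, carries not yet resolved
    # Resolving carries is just evaluating the result polynomial at the radix
    return sum(int(c) * 10 ** p for p, c in enumerate(conv))

print(multiply_by_convolution(1234, 5678))  # 7006652
```

The convolution output digits can exceed 9 before carry resolution, which is exactly the dynamic-range-for-accuracy trade the abstract describes: each optical channel only needs to represent small digit products rather than full-precision values.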
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient in some applications, especially for healthcare services. To improve accuracy, it is necessary to develop a novel method which takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method contains coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data compared with locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses a significantly smaller number of features, and the overall accuracy has been obviously improved.
Zhao, Ying; Pang, Xiaodan; Deng, Lei
2011-01-01
A novel approach for broadband microwave frequency measurement employing a single-drive dual-parallel Mach-Zehnder modulator is proposed and experimentally demonstrated. Based on bias manipulations of the modulator, a conventional frequency-to-power mapping technique is developed by performing a ... 10⁻³ relative error. This high-accuracy frequency measurement technique is a promising candidate for high-speed electronic warfare and defense applications.
Mark Lyons
2013-06-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and the player's achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of players' achievement goal indicators. Future research is required to explore the effects of fatigue on...
High accuracy prediction of beta-turns and their types using propensities and multiple alignments.
Fuchs, Patrick F J; Alix, Alain J P
2005-06-01
We have developed a method that predicts both the presence and the type of beta-turns, using a straightforward approach based on propensities and multiple alignments. The propensities were calculated classically, but the way of using them for prediction was completely new: starting from a tetrapeptide sequence on which one wants to evaluate the presence of a beta-turn, the propensity for a given residue is modified by taking into account all the residues present in the multiple alignment at this position. A score is then evaluated by weighting these propensities using position-specific score matrices generated by PSI-BLAST. The introduction of secondary structure information predicted by PSIPRED or SSPRO2, as well as taking into account the residues flanking the tetrapeptide, greatly improved the accuracy. The latter, evaluated on a database of 426 reference proteins (previously used in other studies) by a sevenfold cross-validation, gave very good results, with a Matthews correlation coefficient (MCC) of 0.42 and an overall prediction accuracy of 74.8%; this places our method among the best ones. A jackknife test was also done, which gave results within the same range. This shows that it is possible to reach neural-network accuracy with considerably less computational cost and complexity. Furthermore, propensities remain excellent descriptors of amino acid tendencies to belong to beta-turns, which can be useful for peptide or protein engineering and design. For beta-turn type prediction, we reached the best accuracy ever published in terms of MCC (except for the irregular type IV): in the range of 0.25-0.30 for types I, II, and I' and 0.13-0.15 for types VIII, II', and IV. To our knowledge, our method is the only one available on the Web that predicts types I' and II'. The accuracy evaluated on two larger databases of 547 and 823 proteins was not improved significantly. All of this was implemented into a Web server called COUDES (French acronym
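The propensity-weighted scoring idea described in the abstract above can be sketched as follows. This is a minimal illustration, not the COUDES implementation: the propensity values, the PSSM layout (one residue-to-weight dict per tetrapeptide position), and the log-average scoring rule are all illustrative assumptions.

```python
# Hypothetical sketch of profile-weighted beta-turn propensity scoring.
# Propensity values below are illustrative, NOT the published ones.
import math

PROPENSITY = {"G": 1.6, "P": 1.5, "N": 1.4, "D": 1.3, "S": 1.2,
              "A": 0.9, "L": 0.6, "V": 0.5, "I": 0.5, "F": 0.7}

def turn_score(tetrapeptide, pssm):
    """Average log-propensity over the four positions, each position's
    propensity weighted by its alignment profile (dict residue -> weight)."""
    total = 0.0
    for pos, _ in enumerate(tetrapeptide):
        column = pssm[pos]
        weight_sum = sum(column.values())
        # Profile-weighted propensity at this position.
        p = sum(w * PROPENSITY.get(res, 1.0)
                for res, w in column.items()) / weight_sum
        total += math.log(p)
    return total / len(tetrapeptide)

# Degenerate profiles containing only the query residue itself.
pssm_gpns = [{r: 1.0} for r in "GPNS"]
pssm_lvif = [{r: 1.0} for r in "LVIF"]
```

With these toy values, a turn-favouring tetrapeptide such as GPNS scores above zero while a hydrophobic stretch such as LVIF scores below it, which is the kind of separation a propensity score is meant to provide.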
Wybranski, Christian; Eberhardt, Benjamin; Fischbach, Katharina; Fischbach, Frank; Walke, Mathias; Hass, Peter; Röhl, Friedrich-Wilhelm; Kosiek, Ortrud; Kaiser, Mandy; Pech, Maciej; Lüdemann, Lutz; Ricke, Jens
2015-01-01
Background and purpose: To evaluate the reconstruction accuracy of brachytherapy (BT) applicator tips in vitro and in vivo in MRI-guided 192Ir high-dose-rate (HDR) BT of inoperable liver tumors. Materials and methods: The reconstruction accuracy of plastic BT applicators, visualized by nitinol inserts, was assessed in MRI phantom measurements and in MRI 192Ir-HDR-BT treatment planning datasets of 45 patients, employing CT co-registration and vector decomposition. Conspicuity, short-term dislocation, and reconstruction errors were assessed in the clinical data. The clinical effect of applicator reconstruction accuracy was determined in follow-up MRI data. Results: Applicator reconstruction accuracy was 1.6 ± 0.5 mm in the phantom measurements. In the clinical MRI datasets applicator conspicuity was rated good/optimal in ≥72% of cases. 16/129 applicators showed no time-dependent deviation between MRI and CT acquisition (p > 0.1). Reconstruction accuracy was 5.5 ± 2.8 mm, and the average image co-registration error was 3.1 ± 0.9 mm. Vector decomposition revealed no preferred direction of reconstruction errors. In the follow-up data the deviation between the planned dose distribution and the irradiation effect was 6.9 ± 3.3 mm, matching the mean co-registration error (6.5 ± 2.5 mm; p > 0.1). Conclusion: Applicator reconstruction accuracy in vitro conforms to the AAPM TG 56 standard. Nitinol inserts are feasible for applicator visualization and yield good conspicuity in MRI treatment planning data. No preferred direction of reconstruction errors was found in vivo.
Horizontal Positional Accuracy of Google Earth's High-Resolution Imagery Archive
David Potere
2008-12-01
Google Earth now hosts high-resolution imagery that spans twenty percent of the Earth's landmass and more than a third of the human population. This contemporary high-resolution archive represents a significant, rapidly expanding, cost-free and largely unexploited resource for scientific inquiry. To increase the scientific utility of this archive, we address horizontal positional accuracy (georegistration) by comparing Google Earth with Landsat GeoCover scenes over a global sample of 436 control points located in 109 cities worldwide. Landsat GeoCover is an orthorectified product with a known absolute positional accuracy of less than 50 meters root-mean-squared error (RMSE). Relative to Landsat GeoCover, the 436 Google Earth control points have a positional accuracy of 39.7 meters RMSE (error magnitudes range from 0.4 to 171.6 meters). The control points derived from satellite imagery have an accuracy of 22.8 meters RMSE, which is significantly more accurate than the 48 control points based on aerial photography (41.3 meters RMSE; t-test p-value < 0.01). The accuracy of control points in more-developed countries is 24.1 meters RMSE, which is significantly more accurate than the control points in developing countries (44.4 meters RMSE; t-test p-value < 0.01). These findings indicate that Google Earth high-resolution imagery has a horizontal positional accuracy that is sufficient for assessing moderate-resolution remote sensing products across most of the world's peri-urban areas.
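The accuracy measure used in the study above, RMSE of planar offsets between control points and their orthorectified reference positions, can be sketched as follows. The coordinates are synthetic, not the study's data.

```python
# Sketch of a horizontal positional accuracy (RMSE) computation between
# image-derived control points and reference positions. Data are synthetic.
import math

def offset(p, q):
    """Planar distance in metres between a point and its reference."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rmse(errors_m):
    """Root-mean-squared error of a list of positional offsets in metres."""
    return math.sqrt(sum(e * e for e in errors_m) / len(errors_m))

# Two hypothetical control points (easting, northing in metres).
measured  = [(1000.0, 2000.0), (1500.0, 2500.0)]
reference = [(1003.0, 2004.0), (1500.0, 2512.0)]
errors = [offset(p, q) for p, q in zip(measured, reference)]
print(round(rmse(errors), 2))  # → 9.19
```

The per-group comparisons in the abstract (satellite vs. aerial, developed vs. developing) are then t-tests on exactly these offset samples.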
High accuracy mapping with cartographic assessment for a fixed-wing remotely piloted aircraft system
Alves Júnior, Leomar Rufino; Ferreira, Manuel Eduardo; Côrtes, João Batista Ramos; de Castro Jorge, Lúcio André
2018-01-01
The lack of updated maps at large representation scales has encouraged the use of remotely piloted aircraft systems (RPAS) to generate maps for a wide range of professionals. However, some questions arise: do the orthomosaics generated by these systems have the cartographic precision required to use them? Which problems can be identified in stitching orthophotos to generate orthomosaics? To answer these questions, an aerophotogrammetric survey was conducted in an environmental conservation unit in the city of Goiânia. The flight plan was set up using the E-motion software, provided by Sensefly, a Swiss manufacturer of the RPAS Swinglet CAM used in this work. The camera installed in the RPAS was the Canon IXUS 220 HS, with a 12.1-megapixel complementary metal oxide semiconductor sensor of 1/2.3-inch format (4000 × 3000 pixels) and horizontal and vertical pixel sizes of 1.54 μm. Using the orthophotos, four orthomosaics were generated in the Pix4D mapper software. The first orthomosaic was generated without using control points. The other three mosaics were generated using 4, 8, and 16 premarked ground control points. To check the precision and accuracy of the orthomosaics, 46 premarked targets were uniformly distributed in the block. The three-dimensional (3-D) coordinates of the premarked targets were read on the orthomosaic and compared with the coordinates obtained by the geodetic survey real-time kinematic positioning method using the global navigation satellite system receiver signals. The cartographic accuracy standard was evaluated by the discrepancies between these coordinates. The bias was analyzed by the Student's t test and the accuracy by the chi-square probability, considering the orthomosaic at a scale of 1:250, in which 90% of the points tested must have a planimetric error within the tolerance for that scale; for the orthomosaic generated without control points the achievable scale was 10-fold smaller (1:3000).
High-Accuracy Near-Surface Large-Eddy Simulation with Planar Topography
2015-08-03
parabolic profile characteristic of laminar Newtonian channel flow. Thus, although the LES equation contains no true frictional term, the numerical LES... at least five times larger than the spurious viscous length scale. The inequality (30) is satisfied when..., so that the first and second criteria are met... lines of constant slope. However, the variation is not so great as to obscure the strong inverse relationship between the slopes
High-accuracy resolver-to-digital conversion via phase locked loop based on PID controller
Li, Yaoling; Wu, Zhong
2018-03-01
The problem of resolver-to-digital conversion (RDC) is transformed into the problem of angle tracking control, and a phase locked loop (PLL) method based on PID controller is proposed in this paper. This controller comprises a typical PI controller plus an incomplete differential which can avoid the amplification of higher-frequency noise components by filtering the phase detection error with a low-pass filter. Compared with conventional ones, the proposed PLL method makes the converter a system of type III and thus the conversion accuracy can be improved. Experimental results demonstrate the effectiveness of the proposed method.
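The angle-tracking loop described above can be sketched as a discrete simulation: the phase-detector error sin(theta − phi) drives a PI term plus an "incomplete" (low-pass-filtered) derivative, and the controller output rate is integrated into the tracked angle. All gains, the sample rate, and the test trajectory are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a PLL angle tracker with PI + filtered-derivative control.
# Gains and time constants below are illustrative, not the paper's design.
import math

def track(theta_of_t, kp=200.0, ki=8000.0, kd=0.05, tau=0.002,
          dt=1e-4, steps=2000):
    phi, integ, d_state, prev_err = 0.0, 0.0, 0.0, 0.0
    for n in range(steps):
        err = math.sin(theta_of_t(n * dt) - phi)   # phase-detector output
        integ += err * dt                          # integral term
        # Incomplete differential: raw derivative passed through a
        # first-order low-pass filter so high-frequency noise is not amplified.
        d_raw = (err - prev_err) / dt
        d_state += dt / tau * (d_raw - d_state)
        prev_err = err
        rate = kp * err + ki * integ + kd * d_state
        phi += rate * dt                           # integrate tracked angle
    return phi

# Track a constant-speed shaft, theta(t) = 50 rad/s * t, for 0.2 s.
final = track(lambda t: 50.0 * t)
```

Because the loop integrator plus the PI controller give two open-loop integrations, the tracked angle converges to the ramp input with vanishing steady-state error, which is the behaviour a higher-type converter is claimed to improve on.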
KLEIN: Coulomb functions for real lambda and positive energy to high accuracy
Barnett, A.R.
1981-01-01
KLEIN computes relativistic Schroedinger (Klein-Gordon) equation solutions, i.e. Coulomb functions for real lambda > -1: Fsub(lambda)(eta,x), Gsub(lambda)(eta,x), F'sub(lambda)(eta,x) and G'sub(lambda)(eta,x) for real kappa > 0 and real eta in the range -10^4 <= eta <= 10^4. Hence it is also suitable for Bessel and spherical Bessel functions. Accuracies are in the range 10^-14 to 10^-16 in the oscillating region, and approximately 10^-30 on an extended precision compiler. The program is suitable for generating Klein-Gordon wavefunctions for matching in pion and kaon physics. (orig.)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus difference (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
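The focus-metric step described above, scoring reconstruction slices and keeping the sharpest one, can be sketched with a simple sum-modulus-difference measure. The metric form (sum of absolute differences between adjacent pixels) and the toy images are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: pick the best-focused slice by a sum-modulus-
# difference (SMD) style sharpness metric. Images are synthetic nested lists.

def smd(img):
    """Sum of absolute differences between horizontally and vertically
    adjacent pixels; larger values indicate a sharper (in-focus) slice."""
    h, w = len(img), len(img[0])
    s = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                s += abs(img[y][x + 1] - img[y][x])
            if y + 1 < h:
                s += abs(img[y + 1][x] - img[y][x])
    return s

sharp   = [[0, 9, 0], [9, 0, 9], [0, 9, 0]]   # high local contrast
blurred = [[4, 5, 4], [5, 4, 5], [4, 5, 4]]   # same pattern, low contrast
best = max([sharp, blurred], key=smd)
```

Scanning this metric over the stack of reconstruction slices and taking the per-region maximum is what yields the depth estimate.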
Carlos Humberto Galeano Urueña
2009-05-01
This article describes the streamline upwind Petrov-Galerkin (SUPG) method, a stabilisation technique for solving the diffusion-advection-reaction equation by finite elements. The first part of the article gives a short analysis of the importance of this type of differential equation in modelling physical phenomena in multiple fields. A one-dimensional description of the SUPG method is then given and extended to two and three dimensions. The outcome of a strongly advective experiment of high numerical complexity is presented. The results show how the implemented version of the SUPG technique allowed stabilised approximations in space, even for high Peclet numbers. Additional graphs of the numerical experiments presented here can be downloaded from www.gnum.unal.edu.co.
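In one dimension, the SUPG stabilisation commonly uses the element Peclet number Pe = |a| h / (2 k) and the parameter tau = (h / (2 |a|)) (coth Pe − 1/Pe). A minimal sketch of that standard formula (the particular coefficient values are illustrative):

```python
# One-dimensional SUPG stabilization parameter for advection-diffusion:
#   Pe  = |a| h / (2 k)
#   tau = (h / (2 |a|)) * (coth(Pe) - 1/Pe)
import math

def supg_tau(a, k, h):
    """Stabilization time scale for advection speed a, diffusivity k, mesh size h."""
    pe = abs(a) * h / (2.0 * k)          # element Peclet number
    xi = 1.0 / math.tanh(pe) - 1.0 / pe  # optimal upwind function
    return h / (2.0 * abs(a)) * xi

# Strongly advective element (high Peclet): tau approaches h / (2 |a|).
tau = supg_tau(a=1.0, k=1e-4, h=0.1)
```

For high Peclet numbers the upwind function tends to 1, so tau tends to h/(2|a|); for diffusion-dominated elements it vanishes, recovering the plain Galerkin method.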
Numerical and experimental study of blowing jet on a high lift airfoil
Bobonea, A.; Pricop, M. V.
2013-10-01
Active manipulation of separated flows over airfoils at moderate and high angles of attack in order to improve efficiency or performance has been the focus of a number of numerical and experimental investigations for many years. One of the main methods used in active flow control is the usage of blowing devices with constant and pulsed blowing. Through CFD simulation over a 2D high-lift airfoil, this study is trying to highlight the impact of pulsed blowing over its aerodynamic characteristics. The available wind tunnel data from INCAS low speed facility are also beneficial for the validation of the numerical analysis. This study intends to analyze the impact of the blowing jet velocity and slot geometry on the efficiency of an active flow control.
Development of high velocity gas gun with a new trigger system-numerical analysis
Husin, Z.; Homma, H.
2018-02-01
In development of high performance armor vests, we need to carry out well controlled experiments using bullet speed of more than 900 m/sec. After reviewing trigger systems used for high velocity gas guns, this research intends to develop a new trigger system, which can realize precise and reproducible impact tests at impact velocity of more than 900 m/sec. A new trigger system developed here is called a projectile trap. A projectile trap is placed between a reservoir and a barrel. A projectile trap has two functions of a sealing disk and triggering. Polyamidimide is selected for the trap material and dimensions of the projectile trap are determined by numerical analysis for several levels of launching pressure to change the projectile velocity. Numerical analysis results show that projectile trap designed here can operate reasonably and stresses caused during launching operation are less than material strength. It means a projectile trap can be reused for the next shooting.
High Fidelity, Numerical Investigation of Cross Talk in a Multi-Qubit Xmon Processor
Najafi-Yazdi, Alireza; Kelly, Julian; Martinis, John
Unwanted electromagnetic interference between qubits, transmission lines, flux lines and other elements of a superconducting quantum processor poses a challenge in engineering such devices. This problem is exacerbated when scaling up the number of qubits. High fidelity, massively parallel computational toolkits, which can simulate the 3D electromagnetic environment and all features of the device, are instrumental in addressing this challenge. In this work, we numerically investigated the crosstalk between various elements of a multi-qubit quantum processor designed and tested by the Google team. The processor consists of 6 superconducting Xmon qubits with flux lines and gate lines, and also includes a Purcell filter for readout. The simulations are carried out with a high fidelity, massively parallel EM solver. We will present our findings regarding the sources of crosstalk in the device, as well as the numerical model setup and a comparison with available experimental data.
Numerical simulations of stripping effects in high-intensity hydrogen ion linacs
J.-P. Carneiro
2009-04-01
Numerical simulations of H^{-} stripping losses from blackbody radiation, electromagnetic fields, and residual gas have been implemented into the beam dynamics code TRACK. Estimates of the stripping losses along two high-intensity H^{-} linacs are presented: the Spallation Neutron Source linac currently being operated at Oak Ridge National Laboratory and an 8 GeV superconducting linac currently being designed at Fermi National Accelerator Laboratory.
Automated aberration correction of arbitrary laser modes in high numerical aperture systems
Hering, Julian; Waller, Erik H.; Freymann, Georg von
2016-01-01
Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well defined doughnut modes, bottlebeams or multi foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser-modes in a high numerical aperture...
2015-09-01
A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel sprays... for diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios
High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow
Savel'ev, A. D.
2018-02-01
On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 103 to 107. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.
Numerical aspects of the modelling of the local effects of a high level waste repository
Ferreri, J.C.; Ventura, M.A.
1985-01-01
The numerical approximations adopted for the development of computational models for predicting the effects of the emplacement of a high-level waste repository are reviewed. The problems considered include: the thermal history of the rocky mass constituting the burial medium, the flow of underground water, and the associated migration of radionuclides in the same medium. Results associated with the verification of the implemented codes are presented. Their limitations and advantages are discussed. (Author)
Numerical simulation of transient, incongruent vaporization induced by high power laser
Tsai, C.H.
1981-01-01
A mathematical model and numerical calculations were developed to solve the heat and mass transfer problems specifically for uranium oxide subject to laser irradiation. The model can easily be modified for other heat sources and/or other materials. In the uranium-oxygen system, oxygen is the preferentially vaporizing component, and as a result of the finite mobility of oxygen in the solid, an oxygen deficiency is set up near the surface. Because of the bivariant behavior of uranium oxide, the heat transfer problem and the oxygen diffusion problem are coupled, and a numerical method of simultaneously solving the two boundary value problems is studied. The temperature dependence of the thermal properties and oxygen diffusivity, as well as the highly ablative effect at the surface, leads to considerable non-linearities in both the governing differential equations and the boundary conditions. Based on earlier work done in this laboratory by Olstad and Olander on iron and on zirconium hydride, the generality of the problem is expanded and the efficiency of the numerical scheme is improved. The finite difference method, along with some advanced numerical techniques, is found to be an efficient way to solve this problem.
Remote Numerical Simulations of the Interaction of High Velocity Clouds with Random Magnetic Fields
Santillan, Alfredo; Hernandez--Cervantes, Liliana; Gonzalez--Ponce, Alejandro; Kim, Jongsoo
Numerical simulations of the interaction of High Velocity Clouds (HVC) with the magnetized galactic interstellar medium (ISM) are a powerful tool for describing the evolution of these objects in our Galaxy. In this work we present a new project, referred to as Theoretical Virtual Observatories, oriented toward performing numerical simulations in real time through a Web page. This is a powerful astrophysical computational tool consisting of an intuitive graphical user interface (GUI) and a database produced by numerical calculations. On this Website the user can make use of the existing numerical simulations from the database or run a new simulation, introducing initial conditions such as temperatures, densities, velocities, and magnetic field intensities for both the ISM and the HVC. The prototype is programmed using Linux, Apache, MySQL, and PHP (LAMP), based on the open source philosophy. All simulations were performed with the MHD code ZEUS-3D, which solves the ideal MHD equations by finite differences on a fixed Eulerian mesh. Finally, we present typical results that can be obtained with this tool.
Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.
2010-01-01
Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but overwhelm the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
Towards high fidelity numerical wave tanks for modelling coastal and ocean engineering processes
Cozzuto, G.; Dimakopoulos, A.; de Lataillade, T.; Kees, C. E.
2017-12-01
With the increasing availability of computational resources, the engineering and research community is gradually moving towards using high fidelity Computational Fluid Dynamics (CFD) models to perform numerical tests for improving the understanding of physical processes pertaining to wave propagation and interaction with the coastal environment and morphology, either physical or man-made. It is therefore important to be able to reproduce in these models the conditions that drive these processes. So far, the norm in CFD models is to use regular (linear or nonlinear) waves for performing numerical tests; however, only random waves exist in nature. In this work, we will initially present the verification and validation of numerical wave tanks based on Proteus, an open-source computational toolkit based on finite element analysis, with respect to the generation, propagation and absorption of random sea states comprising long non-repeating wave sequences. Statistical and spectral processing of the results demonstrates that the methodologies employed (including relaxation zone methods and moving wave paddles) are capable of producing results of similar quality to the wave tanks used in laboratories (Figure 1). Subsequently, case studies of modelling complex processes relevant to coastal defences and floating structures, such as sliding and overturning of composite breakwaters and the heave and roll response of floating caissons, are presented. Figure 1: Wave spectra in the numerical wave tank (coloured symbols), compared against the JONSWAP distribution
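The JONSWAP target spectrum against which the tank spectra are compared can be sketched from its standard definition; the Phillips constant, peak-enhancement factor, and peak frequency below are illustrative defaults, not the study's sea-state parameters.

```python
# Hedged sketch of the JONSWAP variance density S(f), used as the target
# shape when validating random-wave generation. Parameters are illustrative.
import math

def jonswap(f, fp, alpha=0.0081, gamma=3.3, g=9.81):
    """JONSWAP spectral density for frequency f (Hz), peak frequency fp (Hz)."""
    sigma = 0.07 if f <= fp else 0.09            # spectral width parameter
    r = math.exp(-((f - fp) ** 2) / (2.0 * sigma ** 2 * fp ** 2))
    pm = (alpha * g ** 2 * (2.0 * math.pi) ** -4 * f ** -5
          * math.exp(-1.25 * (fp / f) ** 4))     # Pierson-Moskowitz base
    return pm * gamma ** r                        # peak enhancement

fp = 0.1  # peak frequency, Hz
spectrum = {f: jonswap(f, fp) for f in (0.05, 0.1, 0.2)}
```

Binning the simulated free-surface record into such a density and overlaying the target curve is the comparison shown in Figure 1 of the abstract.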
Muenstermann, Sebastian [RWTH Aachen (Germany). Dept. of Ferrous Metallurgy; Vajragupta, Napat [RWTH Aachen (Germany). Materials Mechanics Group; Weisgerber, Bernadette [ThyssenKrupp Steel Europe AG (Germany). Patent Dept.; Kern, Andreas [ThyssenKrupp Steel Europe AG (Germany). Dept. of Quality Affairs
2013-06-01
The demand for lightweight construction in mechanical and civil engineering has strongly promoted the development of high-strength steels with excellent damage tolerance. Nowadays, the requirements from mechanical and civil engineering are even more challenging, as gradients in mechanical properties are demanded increasingly often for components that are utilized close to the limit state of load-bearing capacity. A metallurgical solution to this demand is given by composite rolling processes, in which components with different chemical compositions are joined and develop special properties after heat treatment. These are currently being evaluated in order to verify that structural steels with the desired gradients in mechanical properties can be processed. A numerical study was performed aiming to predict strength and toughness properties, as well as the processing behaviour, using finite element (FE) simulations with damage mechanics approaches. For the determination of mechanical properties, simulations of a tensile specimen, an SENB sample, and a mobile crane were carried out for different configurations of composite-rolled materials made of high-strength structural steels. As a parameter study, both the geometrical and the metallurgical configurations of the composite-rolled steels were modified: the thickness of each steel layer and the materials configuration were varied. In this way, a numerical procedure to define optimum tailored configurations of high-strength steels could be established.
Ribeiro, Felipe Lopes; Pinto, Joao Pedro C.T.A.
2013-01-01
The most popular concept for the 4th-generation Very High Temperature Reactor (VHTR) uses a graphite-moderated, helium-cooled core with an outlet gas temperature of approximately 1000 deg C. The high output temperature allows the use of process heat and the production of hydrogen through the thermochemical iodine-sulfur process, as well as highly efficient electricity generation. There are two concepts for the VHTR core: the prismatic block and the pebble bed core. The prismatic block core has two popular concepts for the fuel element: multi-hole and annular. In the multi-hole fuel element, prismatic graphite blocks contain cylindrical flow channels where the helium coolant flows, removing heat from cylindrical fuel rods positioned in the graphite. On the other hand, the annular-type fuel element has annular channels around the fuel. This paper presents numerical evaluations of prismatic multi-hole and annular VHTR fuel elements and compares the results for these assemblies. In this study the analyses were performed using the CFD code ANSYS CFX 14.0. The simulations were made on 1/12 fuel element models. A numerical validation was performed through an energy balance, where the theoretical and the numerically generated heat were compared for each model. (author)
Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping
Gangchen Hua
2017-01-01
In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrectly matched visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method can effectively correct the typical false-positive localizations of previous methods, thus gaining better recall ratios and better precision.
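The "dispersion of correctly matched visual words" idea above can be sketched as a spread statistic over matched feature coordinates: matches spread across the whole image support a loop-closure hypothesis more strongly than matches clustered in one corner. The RMS-from-centroid form and the point sets are illustrative assumptions, not the paper's exact measure.

```python
# Hypothetical sketch: dispersion of matched visual-word coordinates as a
# confidence cue for loop-closure detection. Points are synthetic.
import math

def dispersion(points):
    """RMS distance of matched-feature coordinates from their centroid."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points)
                     / len(points))

spread    = [(0, 0), (640, 0), (0, 480), (640, 480)]   # corners of the frame
clustered = [(10, 10), (12, 11), (11, 13), (13, 12)]   # one small patch
```

A likelihood term weighted by such a statistic would down-rank candidate closures whose supporting matches all come from a single ambiguous region.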
The use of high accuracy NAA for the certification of NIST Standard Reference Materials
Becker, D.A.; Greenberg, R.R.; Stone, S.
1991-01-01
Neutron activation analysis (NAA) is only one of many analytical techniques used at the National Institute of Standards and Technology (NIST) for the certification of NIST Standard Reference Materials (SRMs). We compete daily against all of the other available analytical techniques in terms of accuracy, precision, and the cost required to obtain that requisite accuracy and precision. Over the years, the authors have found that NAA can and does compete favorably with these other techniques because of its unique capabilities for redundancy and quality assurance. Good examples are the two new NIST leaf SRMs, Apple Leaves (SRM 1515) and Peach Leaves (SRM 1547). INAA was used to measure the homogeneity of 12 elements in 15 samples of each material at the 100 mg sample size. In addition, instrumental and radiochemical NAA combined for 27 elemental determinations, out of a total of 54 elemental determinations made on each material with all NIST techniques combined. This paper describes the NIST NAA procedures used in these analyses, the quality assurance techniques employed, and the analytical results for the 24 elements determined by NAA in these new botanical SRMs. The NAA results are also compared to the final certified values for these SRMs.
High-accuracy 3-D modeling of cultural heritage: the digitizing of Donatello's "Maddalena".
Guidi, Gabriele; Beraldin, J Angelo; Atzeni, Carlo
2004-03-01
Three-dimensional digital modeling of Heritage works of art through optical scanners has been demonstrated in recent years with results of exceptional interest. However, the routine application of three-dimensional (3-D) modeling to Heritage conservation still requires the systematic investigation of a number of technical problems. In this paper, the acquisition process of the 3-D digital model of the Maddalena by Donatello, a wooden statue representing one of the major masterpieces of the Italian Renaissance, which was swept away by the Florence flood of 1966 and subsequently restored, is described. The paper reports all the steps of the acquisition procedure, from project planning to the solution of the various problems due to range camera calibration and to non-optically-cooperative material. Since the scientific focus is centered on the overall dimensional accuracy of the 3-D model, a methodology for its quality control is described. This control has demonstrated how, in some situations, ICP-based alignment can lead to incorrect results. To circumvent this difficulty we propose an alignment technique based on the fusion of ICP with close-range digital photogrammetry and a non-invasive procedure in order to generate a final accurate model. Finally, detailed results are presented, demonstrating the improvement of the final model and how the proposed sensor fusion ensures a pre-specified level of accuracy.
Vision-based algorithms for high-accuracy measurements in an industrial bakery
Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao
2002-02-01
This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.
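The perimeter definition used above, the convex hull of a cross-section's profile points, can be sketched with the standard monotone-chain hull algorithm. The sample points are synthetic, not bakery data.

```python
# Sketch: perimeter of the convex hull of 2-D cross-section points,
# via the standard monotone-chain (Andrew) algorithm. Data are synthetic.
import math

def convex_hull(points):
    """Return hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                     # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):           # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def perimeter(points):
    hull = convex_hull(points)
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

# Unit square with an interior point: the hull ignores the interior point.
print(perimeter([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))  # → 4.0
```

In the measuring system the input points would come from the triangulated laser-line profile; the hull makes the measure insensitive to small concavities in the crust.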
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
cross sections, and can generate working maps across a broad range of scales, for any selected area, overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases the accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration
2016-03-01
Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence come from solving Einstein's equations numerically, without approximations; however, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms, although the work of Nitz et al. (2013) demonstrated that there is disagreement between these models. We present a careful follow-up study of the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter biases. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.
Hyun, Yil Sik; Han, Dong Soo; Bae, Joong Ho; Park, Hye Sun; Eun, Chang Soo
2013-05-01
Accurate diagnosis of gastric intestinal metaplasia (IM) is important; however, conventional endoscopy is known to be an unreliable modality for diagnosing it. The aims of this study were to evaluate the interobserver variation in diagnosing IM by high-definition (HD) endoscopy and the diagnostic accuracy of this modality for IM among experienced and inexperienced endoscopists. Fifty selected cases, imaged with HD endoscopy, were sent to five experienced and five inexperienced endoscopists for a visual diagnosis of gastric IM. Interobserver agreement between endoscopists was evaluated to verify the diagnostic reliability of HD endoscopy in diagnosing IM, and the diagnostic accuracy, sensitivity, and specificity were evaluated to establish the validity of HD endoscopy in diagnosing IM. Interobserver agreement among the experienced endoscopists was "poor" (κ = 0.38), as it was among the inexperienced endoscopists (κ = 0.33). The diagnostic accuracy of the experienced endoscopists was superior to that of the inexperienced endoscopists (P = 0.003). Since diagnosis through visual inspection is unreliable for IM, all areas suspicious for gastric IM should be considered for biopsy. Furthermore, endoscopic experience and education are needed to raise the diagnostic accuracy of gastric IM.
López, R., E-mail: ralope1@ing.uc3m.es; Lecuona, A., E-mail: lecuona@ing.uc3m.es; Nogueira, J., E-mail: goriba@ing.uc3m.es; Vereda, C., E-mail: cvereda@ing.uc3m.es
2017-03-15
Highlights: • A two-phase-flow numerical algorithm with high-order temporal schemes is proposed. • The transient solution route depends on the temporal high-order scheme employed. • The ESDIRK scheme exhibits high computational performance for two-phase-flow events. • Computational implementation of the ESDIRK scheme is straightforward. - Abstract: An extension to 1-D transient two-phase flows of the SIMPLE-ESDIRK method, initially developed for incompressible viscous flows by Ijaz, is presented. This extension is motivated by the high temporal order of accuracy demanded to cope with fast phase-change events. The methodology is suitable for boiling heat exchangers, solar thermal receivers, etc. The solution methodology consists of a finite volume staggered-grid discretization of the governing equations in which the transient terms are treated with the explicit-first-stage singly diagonally implicit Runge-Kutta (ESDIRK) method, which is suitable for the stiff differential equations present in instant boiling or condensation processes. It is combined with the semi-implicit method for pressure-linked equations (SIMPLE) for the calculation of the pressure field. The case of study consists of the numerical reproduction of the Bartolomei upward boiling pipe flow experiment. The steady-state validation of the numerical algorithm is made against these experimental results and against well-known numerical results for that experiment. In addition, a detailed study reveals the benefits of 3rd- and 4th-order schemes over the first-order backward Euler method, with emphasis on the behaviour when the system is subjected to periodic square-wave wall-heat disturbances, concluding that the ESDIRK method offers remarkable accuracy and computational advantages in two-phase calculations.
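ESDIRK schemes, as named above, have an explicit first stage and a common diagonal coefficient on the implicit stages. As an illustration only, using the standard TR-BDF2 tableau rather than the paper's SIMPLE-ESDIRK coupling, one step on a stiff scalar problem might be sketched as:

```python
import numpy as np

# TR-BDF2 written as a 2nd-order ESDIRK scheme: explicit first stage,
# identical diagonal gamma on the implicit stages. Tableau values are the
# standard Hosea-Shampine ones, not taken from the paper above.
GAMMA = 1.0 - np.sqrt(2.0) / 2.0
A = np.array([[0.0, 0.0, 0.0],
              [GAMMA, GAMMA, 0.0],
              [np.sqrt(2.0)/4, np.sqrt(2.0)/4, GAMMA]])
B = A[2]                      # stiffly accurate: b equals the last row
C = A.sum(axis=1)

def esdirk_step(f, dfdy, t, y, h):
    """One ESDIRK step for a scalar ODE; implicit stages solved by Newton."""
    k = np.zeros(3)
    k[0] = f(t, y)                       # explicit first stage
    for i in (1, 2):
        rhs = y + h * (A[i, :i] @ k[:i])
        Y = y                            # Newton for Y = rhs + h*GAMMA*f(t_i, Y)
        for _ in range(8):
            g = Y - rhs - h * GAMMA * f(t + C[i]*h, Y)
            Y -= g / (1.0 - h * GAMMA * dfdy(t + C[i]*h, Y))
        k[i] = f(t + C[i]*h, Y)
    return y + h * (B @ k)

# Stiff test problem (Prothero-Robinson): y' = lam*(y - cos t) - sin t,
# y(0) = 1, exact solution y = cos t even for very negative lam.
lam = -1e4
f = lambda t, y: lam * (y - np.cos(t)) - np.sin(t)
dfdy = lambda t, y: lam
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = esdirk_step(f, dfdy, t, y, h)
    t += h
print(abs(y - np.cos(t)))   # small: the scheme stays stable on a stiff problem
```

The explicit first stage is what makes the k-coefficients reusable across stages cheaply, which is one reason the paper reports an easy implementation.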
Yamamoto, Yoshinobu; Kunugi, Tomoaki
2015-01-01
Graphical abstract: - Highlights: • For the first time, an MHD heat transfer DNS database corresponding to the typical nondimensional parameters of a fusion blanket design using molten salt was established. • An MHD heat transfer correlation was proposed, and about 20% heat transfer degradation was evaluated under the design conditions. • The contribution of turbulent diffusion to heat transfer increases drastically with increasing Hartmann number. - Abstract: The high-Prandtl-number passive scalar transport of a turbulent channel flow with an imposed wall-normal magnetic field is investigated through large-scale direct numerical simulation (DNS). All essential turbulence scales of the velocities and temperature are resolved by using 2048 × 870 × 1024 computational grid points in the streamwise, vertical, and spanwise directions. The heat transfer phenomena for a Prandtl number of 25 were observed under the following flow conditions: a bulk Reynolds number of 14,000 and Hartmann numbers of up to 28. These values are equivalent to the typical nondimensional parameters of the fusion blanket design proposed by Wong et al. As a result, a high-accuracy DNS database for the verification of magnetohydrodynamic turbulent heat transfer models was established for the first time, and it was confirmed that the heat transfer correlation for a Prandtl number of 5.25 proposed by Yamamoto and Kunugi is applicable to the Prandtl number of 25 used in this study.
A high-order solver for aerodynamic flow simulations and comparison of different numerical schemes
Mikhaylov, Sergey; Morozov, Alexander; Podaruev, Vladimir; Troshin, Alexey
2017-11-01
An implementation of a high-order-accuracy Discontinuous Galerkin method is presented. Reconstruction is done for the conservative variables, and gradients are calculated using the BR2 method. Coordinate transformations are performed with serendipity elements. In computations with schemes of order higher than 2, the curvature of the mesh lines is taken into account. A comparison with finite volume methods is performed, including a WENO method with linear weights and a single quadrature point on each cell side. Results are presented for the following classical tests: subsonic flow around a circular cylinder in an ideal gas, convection of a two-dimensional isentropic vortex, and decay of the Taylor-Green vortex.
Thermal Stability of Magnetic Compass Sensor for High Accuracy Positioning Applications
Van-Tang PHAM
2015-12-01
Magnetic compass sensors for angle measurement have a wide area of application, including positioning, robotics and landslide monitoring. However, one of the main phenomena affecting the accuracy of a magnetic compass sensor is temperature. This paper presents two thermal stabilization schemes for improving the performance of a magnetic compass sensor. The first scheme uses a feedforward structure to adjust the angle output of the compass sensor to the variation of temperature. The second scheme increases both the temperature working range and the steady-state error performance of the sensor. In this scheme, the temperature of the sensor is held at a fixed value (e.g. 25 °C) by a PID (proportional-integral-derivative) controller and a heating/cooling generator. Many experimental scenarios have been implemented to confirm the effectiveness of these solutions.
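The second scheme, holding the sensor at a fixed temperature with a PID controller, can be sketched with a toy first-order thermal plant. The plant constants, gains and anti-windup clamp below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of the second scheme: a PID loop holding the sensor at a
# 25 C setpoint. Plant model, gains and clamp are illustrative assumptions.
def simulate(setpoint=25.0, ambient=5.0, steps=2000, dt=0.1,
             kp=8.0, ki=2.0, kd=0.5):
    temp = ambient                       # sensor starts at ambient temperature
    integral, prev_err = 0.0, setpoint - ambient
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        integral = max(-25.0, min(25.0, integral))      # anti-windup clamp
        deriv = (err - prev_err) / dt
        power = kp * err + ki * integral + kd * deriv   # heater/cooler drive
        power = max(-50.0, min(50.0, power))            # actuator saturation
        # First-order thermal plant: heat input vs. leakage toward ambient.
        temp += dt * (0.05 * power - 0.02 * (temp - ambient))
        prev_err = err
    return temp

final = simulate()
print(final)  # settles near the 25 C setpoint
```

The integral term is what removes the steady-state offset caused by heat leakage to ambient; without the clamp, saturation of the heater would cause a large windup overshoot.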
A method of high accuracy clock synchronization by frequency following with VCXO
Ma Yichao; Wu Jie; Zhang Jie; Song Hongzhi; Kong Yang
2011-01-01
In this paper, the principle of the IEEE 1588 synchronization protocol is analyzed, and the factors that affect the accuracy of synchronization are summarized. Using a hardware timer in a microcontroller, we record the exact time at which a packet is sent or received; in this way, synchronization of the distributed clocks can reach 1 μs. Another method to improve the precision of synchronization is to replace the traditional fixed-frequency crystal of the slave device, which needs to follow the master clock, with an adjustable VCXO. This makes it possible to finely tune the frequency of the distributed clocks and reduce clock drift, which greatly benefits clock synchronization. A test measurement shows that the synchronization of distributed clocks can be better than 10 ns using this method, which is more accurate than the software-only approach.
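The IEEE 1588 synchronization analyzed above rests on a four-timestamp exchange from which offset and path delay are computed. A sketch of the textbook formulas (not the authors' VCXO loop) is:

```python
# Standard IEEE 1588 two-step exchange (textbook formulas, not the authors'
# firmware): t1 = master send, t2 = slave receive, t3 = slave send,
# t4 = master receive; the path delay is assumed symmetric.
def ptp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Simulated exchange: slave is 250 ns ahead, symmetric 100 ns path.
true_offset, path = 250e-9, 100e-9
t1 = 1.000000000
t2 = t1 + path + true_offset     # arrival stamped by the (offset) slave clock
t3 = t2 + 20e-6                  # slave responds after a processing delay
t4 = (t3 - true_offset) + path   # back on the master clock
off, d = ptp_offset_delay(t1, t2, t3, t4)
print(off, d)
```

Hardware timestamping, as in the paper, matters because any software latency between the wire event and the timestamp enters these differences directly as error.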
Numerical simulations on a high-temperature particle moving in coolant
Li Xiaoyan; Shang Zhi; Xu Jijun
2006-01-01
This study considers the coupling effect between film boiling heat transfer and evaporation drag around a hot particle in cold liquid. Taking the momentum and energy equations of the vapor film into account, a transient single-particle model under FCI conditions has been established. Numerical simulations of a high-temperature particle moving in coolant have been performed using the Gear algorithm. An adaptive dynamic boundary method is adopted during the simulation to match the moving boundary caused by changes in the vapor film. Based on the method presented above, the transient process of high-temperature particles moving in coolant can be simulated. The experimental results prove the validity of the HPMC model.
Mathieu Zellhuber
2014-03-01
Flame dynamics related to high-frequency instabilities in gas turbine combustors are investigated using experimental observations and numerical simulations. Two different combustor types are studied: a premix swirl combustor (experiment) and a generic reheat combustor (simulation). In both cases, a very similar dynamic behaviour of the reaction zone is observed, with the appearance of transverse displacement and coherent flame wrinkling. From these observations, a model for the thermoacoustic feedback linked to transverse modes is proposed. The model splits heat release rate fluctuations into distinct contributions related to flame displacement and variations of the mass burning rate. The decomposition procedure is applied to the numerical data and successfully verified by comparing a reconstructed Rayleigh index with the directly computed value. It thus allows the relative importance of various feedback mechanisms to be quantified for a given setup.
Gao Zhi; Shen Yi-Qing
2012-01-01
The high-resolution numerical perturbation (NP) algorithm is analyzed and tested using various convection-diffusion equations. The NP algorithm is constructed by splitting the second-order central difference schemes of both the convective and diffusion terms of the convection-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using a power series in the grid interval and by eliminating the truncation errors of the modified differential equation. The important property of upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second-order central scheme.
Wen Zheng; Liu Yu; Yang Wenjiang; Qiu Ming
2007-01-01
In this paper, we present a study of the quasi-static and dynamic behaviour of high-Tc superconductors (HTS hereafter) using a model suspension vibration testing system based on the magnetic launch assistance concept. The stiffness and damping of the levitation system under specified vibration circumstances were calculated using harmonic response analysis and the half-power points method. The equation of motion of the suspension system is also presented, in an attempt to analyse and predict the mechanical characteristics of HTS under dynamic conditions. The suspension motion behaviour obtained by numerical calculation is compared with experimental results. Experimental technique combined with numerical simulation is a useful tool for measuring and analysing motion-dependent magnetic forces for the prediction and control of suspension systems.
Numerical simulation and experimental research of the integrated high-power LED radiator
Xiang, J. H.; Zhang, C. L.; Gan, Z. J.; Zhou, C.; Chen, C. G.; Chen, S.
2017-01-01
Thermal management has become an urgent problem with the increasing power and integration of LED (light emitting diode) chips. In order to eliminate the contact resistance of the radiator, this paper presents an integrated high-power LED radiator based on phase-change heat transfer, which realizes a seamless connection between the vapor chamber and the cooling fins. The radiator was optimized by combining numerical simulation with experimental research. The effects of the chamber diameter and the fin parameters on heat dissipation performance were analyzed, and the numerical simulation results were compared with experimentally measured values. The results showed that the fin thickness, fin number, fin height and chamber diameter were the factors affecting radiator performance, in descending order of influence.
Hallstrom, Jason; Ni, Zheng Richard
2018-05-15
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. Early exploration of a bridge (or "gateway") to direct digital control services was also undertaken. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as of the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5% and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology provides a foundation for achieving whole-building CO2 sensing at a price point below $0.066/sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
Development of numerical simulation technology for high resolution thermal hydraulic analysis
Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.
2012-04-01
A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increased by advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code and/or a three-dimensional component that can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, focusing mainly on the numerical solution method and its verification and validation. It was shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach to the validation and improvement of the physical models is still needed.
Zhou, Dan; Niu, Jiqiang
2017-01-01
Trains with different numbers of cars running in the open air were simulated using the delayed detached-eddy simulation (DDES). The numbers of cars included in the simulation were 3, 4, 5 and 8. The aim of this study was to investigate how train length influences the boundary layer, the wake flow, the surface pressure, the aerodynamic drag and the friction drag. To verify the accuracy of the mesh and methods, the drag coefficients from the numerical simulation of trains with 3 cars were compared with those from the wind tunnel test, and agreement was obtained. The results show that the boundary layer is thicker and the wake vortices are less symmetric as the train length increases. As a result, train length greatly affects pressure: comparing trains of 3 cars with trains of 8 cars, the upper surface pressure of the tail car is reduced by 2.9%, the side surface pressure by 8.3% and the underside surface pressure by 19.7%. In addition, train length also has a significant effect on the friction drag coefficient and the drag coefficient. The friction drag coefficient of each car in a configuration decreases along the length of the train. In a comparison between trains of 3 cars and trains of 8 cars, the friction drag coefficient of the tail car is reduced by 8.6% and the drag coefficient of the tail car by 3.7%.
Gorroño, Javier; Banks, Andrew C.; Fox, Nigel P.; Underwood, Craig
2017-08-01
Optical earth observation (EO) satellite sensors generally suffer from drifts and biases relative to their pre-launch calibration, caused by launch and/or time in the space environment. This places a severe limitation on the fundamental reliability and accuracy that can be assigned to satellite-derived information, and is particularly critical for long-time-base studies for climate change and for enabling interoperability and Analysis Ready Data. The proposed TRUTHS (Traceable Radiometry Underpinning Terrestrial and Helio-Studies) mission is explicitly designed to address this issue by re-calibrating itself directly to a primary standard of the international system of units (SI) in orbit, and then by extending this SI traceability to other sensors through in-flight cross-calibration using a selection of Committee on Earth Observation Satellites (CEOS) recommended test sites. Where the characteristics of the sensor under test allow, this will result in a significant improvement in accuracy. This paper describes a set of tools, algorithms and methodologies that have been developed and used in order to estimate the radiometric uncertainty achievable for an indicative target sensor through in-flight cross-calibration using a well-calibrated hyperspectral SI-traceable reference sensor with observational characteristics such as those of TRUTHS. In this study, the Multi-Spectral Imager (MSI) of Sentinel-2 and the Landsat-8 Operational Land Imager (OLI) are evaluated as examples; however, the analysis is readily translatable to larger-footprint sensors such as the Sentinel-3 Ocean and Land Colour Instrument (OLCI) and the Visible Infrared Imaging Radiometer Suite (VIIRS). This study considers the criticality of the instrumental and observational characteristics on pixel-level reflectance factors within a defined spatial region of interest (ROI) within the target site. It quantifies the main uncertainty contributors in the spectral, spatial, and temporal domains. The resultant tool
Improvements in numerical modelling of highly injected crystalline silicon solar cells
Altermatt, P.P. [University of New South Wales, Centre for Photovoltaic Engineering, 2052 Sydney (Australia); Sinton, R.A. [Sinton Consulting, 1132 Green Circle, 80303 Boulder, CO (United States); Heiser, G. [University of NSW, School of Computer Science and Engineering, 2052 Sydney (Australia)
2001-01-01
We numerically model crystalline silicon concentrator cells with the inclusion of band gap narrowing (BGN) caused by injected free carriers. In previous studies, the revised room-temperature value of the intrinsic carrier density, n_i = 1.00x10^10 cm^-3, was inconsistent with the other material parameters of highly injected silicon. In this paper, we show that high-injection experiments can be described consistently with the revised value of n_i if free-carrier-induced BGN is included, and that such BGN is an important effect in silicon concentrator cells. The new model presented here significantly improves the ability to model highly injected silicon cells with a high level of precision.
The high accuracy data processing system of laser interferometry signals based on MSP430
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually carry three errors: zero-offset error, unequal-amplitude error and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved based on the Heydemann error compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the calculation time, exploiting the special characteristic that only one item of data changes in each fitting operation. A real-time, dynamic compensation circuit is then designed. With the MSP430 microchip as the core of the hardware system, the two input signals carrying the three errors are digitized by the AD7862. After data processing with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit. The pulses from this circuit are counted by the timer of the microchip, and, from the pulse count and software subdivision, the final result is shown on an LED display. The algorithm and circuit were used to test a laser interferometer with 8 times optical path difference, and a measuring accuracy of 12-14 nm was achieved.
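The Heydemann compensation mentioned above fits an ellipse to the two quadrature signals and inverts the offset, amplitude and phase errors. A generic sketch (our variable names; the paper's incremental fitting optimization is not reproduced) is:

```python
import numpy as np

def heydemann_correct(x, y):
    """Fit a*x^2 + b*xy + c*y^2 + d*x + e*y = 1 to the Lissajous figure,
    undo offset, amplitude and quadrature-phase errors, return the phase."""
    M = np.column_stack([x*x, x*y, y*y, x, y])
    a, b, c, d, e = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    x0, y0 = np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])   # ellipse centre
    G = 1.0 - (a*x0*x0 + b*x0*y0 + c*y0*y0 + d*x0 + e*y0)     # centred constant
    al, be, ga = a/G, b/G, c/G
    sphi = -be / (2.0 * np.sqrt(al*ga))            # sin of quadrature error
    cphi = np.sqrt(1.0 - sphi*sphi)
    A, B = 1.0/(cphi*np.sqrt(al)), 1.0/(cphi*np.sqrt(ga))      # amplitudes
    xc = (x - x0) / A                              # ideal cosine channel
    yc = ((y - y0)/B - xc*sphi) / cphi             # ideal sine channel
    return np.arctan2(yc, xc)

# Synthetic quadrature signals with all three error types present.
th = np.linspace(0, 2*np.pi, 400, endpoint=False)
x = 1.00*np.cos(th) + 0.05                    # offset error
y = 0.80*np.sin(th + 0.1) - 0.03              # unequal amplitude + phase error
phase = heydemann_correct(x, y)
err = np.angle(np.exp(1j*(phase - th)))       # wrap-safe residual
print(np.abs(err).max())                      # ≈ 0 after compensation
```

The recovered phase is then subdivided into displacement counts; the paper's contribution is making this fit cheap enough to run in real time on the MSP430.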
A new phase-shift microscope designed for high accuracy stitching interferometry
Thomasset, Muriel; Idir, Mourad; Polack, François; Bray, Michael; Servant, Jean-Jacques
2013-01-01
Characterizing nanofocusing X-ray mirrors for the forthcoming nano-imaging beamlines of synchrotron light sources motivates the development of new instruments with improved performance. The sensitivity and accuracy goal is now fixed well under the nm level and, at the same time, the spatial frequency range of the measurement should be pushed toward 50 mm^-1. The SOLEIL synchrotron facility has therefore undertaken to equip itself with an interferential microscope suitable for stitching interferometry at this performance level. In order to keep control of the whole metrology chain, it was decided to build a custom instrument in partnership with two small optics companies, EOTECH and MBO. The new instrument is a Michelson micro-interferometer equipped with a custom-designed telecentric objective. It achieves the large depth of focus needed for performing reliable calibrations and measurements. The concept was validated with a pre-development set-up, delivered in July 2010, which showed a static repeatability below 1 nm PV despite a non-thermally-stabilized environment. The final instrument was delivered early this year and installed inside SOLEIL's controlled-environment facility, where thorough characterization tests are under way. The latest test results and first stitching measurements are presented.
Experimental study of very low permeability rocks using a high accuracy permeameter
Larive, Elodie
2002-01-01
The measurement of fluid flow through 'tight' rocks is important to provide a better understanding of the physical processes involved in several industrial and natural problems. These include deep nuclear waste repositories, the management of aquifers and of gas, petroleum or geothermal reservoirs, and earthquake prevention. The major part of this work consisted of the design, construction and use of an elaborate experimental apparatus allowing laboratory permeability (fluid flow) measurements of very low permeability rocks, on samples at a centimetric scale, to constrain their hydraulic behaviour under realistic in-situ conditions. The high-accuracy permeameter allows the use of several measurement methods: the steady-state flow method, the transient pulse method, and the sinusoidal pore pressure oscillation method. Measurements were made with the pore pressure oscillation method, using different waveform periods, at several pore and confining pressure conditions, on different materials. The permeability of one natural standard, Westerly granite, and of an artificial one, a micro-porous cement, were measured, and the results obtained agreed with previous measurements made on these materials, showing the reliability of the permeameter. A study of a Yorkshire sandstone shows a relationship between rock microstructure, permeability anisotropy and thermal cracking. Microstructure, porosity and permeability concepts, and laboratory permeability measurement specifications are presented, the permeameter is described, and the permeability results obtained on the investigated materials are reported.
Demonstrating High-Accuracy Orbital Access Using Open-Source Tools
Gilbertson, Christian; Welch, Bryan
2017-01-01
Orbit propagation is fundamental to almost every space-based analysis. Currently, many system analysts use commercial software to predict the future positions of orbiting satellites. This is one of many capabilities that can be replicated, with great accuracy, without using expensive, proprietary software. NASA's SCaN (Space Communication and Navigation) Center for Engineering, Networks, Integration, and Communications (SCENIC) project plans to provide its analysis capabilities using a combination of internal and open-source software, allowing for a much greater measure of customization and flexibility while reducing recurring software license costs. MATLAB and the open-source Orbit Determination Toolbox created by Goddard Space Flight Center (GSFC) were utilized to develop tools with the capability to propagate orbits, perform line-of-sight (LOS) availability analyses, and visualize the results. The developed programs are modular and can be applied for mission planning and viability analysis in a variety of Solar System applications. The tools can perform two-body and N-body orbit propagation, find inter-satellite and satellite-to-ground-station LOS access (accounting for blocking by intermediate oblate spheroid bodies, geometric restrictions of the antenna field of view (FOV), and relativistic corrections), and create animations of planetary movement, satellite orbits, and LOS accesses. The code is the basis for SCENIC's broad analysis capabilities, including dynamic link analysis, dilution-of-precision navigation analysis, and orbital availability calculations.
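Two-body propagation, the simplest capability listed above, can be sketched with a fixed-step RK4 integrator; this is a generic illustration, not code from the Orbit Determination Toolbox:

```python
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2, Earth's standard gravitational parameter

def two_body(_, s):
    """Point-mass two-body acceleration; s = [x, y, z, vx, vy, vz] in km, km/s."""
    r = s[:3]
    return np.concatenate([s[3:], -MU_EARTH * r / np.linalg.norm(r)**3])

def rk4_step(f, t, s, h):
    """Classical 4th-order Runge-Kutta step."""
    k1 = f(t, s)
    k2 = f(t + h/2, s + h/2*k1)
    k3 = f(t + h/2, s + h/2*k2)
    k4 = f(t + h, s + h*k3)
    return s + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Circular orbit at 7000 km radius; propagate one orbital period.
r0 = 7000.0
v0 = np.sqrt(MU_EARTH / r0)
state = np.array([r0, 0.0, 0.0, 0.0, v0, 0.0])
period = 2*np.pi*np.sqrt(r0**3 / MU_EARTH)
h = period / 5000
for _ in range(5000):
    state = rk4_step(two_body, 0.0, state, h)
print(np.linalg.norm(state[:3]) - r0)  # radius error after one period (small)
```

Production propagators add perturbations (oblateness, drag, third bodies) and adaptive step control, but the structure is the same.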
A study for high accuracy measurement of residual stress by deep hole drilling technique
Kitano, Houichi; Okano, Shigetaka; Mochizuki, Masahito
2012-08-01
The deep hole drilling technique (DHD) has received much attention in recent years as a method for measuring through-thickness residual stresses. However, some accuracy problems occur when residual stresses are evaluated by the DHD technique. One reason is that the traditional DHD evaluation formula assumes the plane stress condition; another is that the effects of the plastic deformation produced in the drilling process and of the deformation produced in the trepanning process are ignored. In this study, a modified evaluation formula, applicable to the plane strain condition, is proposed. In addition, a new procedure is proposed which can account for the effects of the deformation produced in the DHD process, by investigating those effects in detail with finite element (FE) analysis. The evaluation results obtained by the new procedure are then compared with those obtained by the traditional DHD procedure by FE analysis. As a result, the new procedure evaluates the residual stress fields better than the traditional DHD procedure when the measured object is thick enough that the stress condition can be assumed to be plane strain, as in the model used in this study.
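The evaluation step in DHD can be illustrated as a least-squares inversion of diametral distortions measured at several angles around the reference hole. The kernel below uses the textbook plane-stress Kirsch-solution form purely for illustration; actual DHD analyses, including the paper's plane-strain modification, use calibrated coefficients:

```python
import numpy as np


def distortion_kernel(theta):
    """Plane-stress Kirsch-type kernel relating in-plane stresses
    (sx, sy, txy) to the normalized diametral distortion of the
    reference hole at angle theta (illustrative form only)."""
    return np.array([1.0 + 2.0 * np.cos(2 * theta),
                     1.0 - 2.0 * np.cos(2 * theta),
                     4.0 * np.sin(2 * theta)])


def invert_stresses(thetas, distortions, E):
    """Least-squares fit of (sx, sy, txy) from diametral distortions
    measured at several angles around the reference hole."""
    A = np.array([-distortion_kernel(t) / E for t in thetas])
    sol, *_ = np.linalg.lstsq(A, distortions, rcond=None)
    return sol


# Self-consistency check: generate distortions from a known stress state
# with the same kernel, then recover it.
E = 200e9                                   # Pa, steel-like modulus
true = np.array([150e6, -50e6, 30e6])       # sx, sy, txy in Pa
thetas = np.linspace(0, np.pi, 9, endpoint=False)
d = np.array([-distortion_kernel(t) @ true / E for t in thetas])
recovered = invert_stresses(thetas, d, E)
```

The over-determined fit (9 angles, 3 unknowns) is what gives the method robustness to measurement noise on individual diameters.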
On a novel low cost high accuracy experimental setup for tomographic particle image velocimetry
Discetti, Stefano; Ianiro, Andrea; Astarita, Tommaso; Cardone, Gennaro
2013-01-01
This work deals with the critical aspects related to cost reduction of a Tomo-PIV setup and with the bias errors introduced in the velocity measurements by the coherent motion of the ghost particles. The proposed solution consists of using two independent imaging systems composed of three (or more) low speed single frame cameras, which can be up to ten times cheaper than double-shutter cameras with the same image quality. Each imaging system is used to reconstruct a particle distribution in the same measurement region, relative to the first and the second exposure, respectively. The reconstructed volumes are then interrogated by cross-correlation in order to obtain the measured velocity field, as in the standard tomographic PIV implementation. Moreover, unlike standard tomographic PIV, the ghost particle distributions of the two exposures are uncorrelated, since their spatial distribution depends on the camera orientation. For this reason, the proposed solution promises more accurate results, without the bias effect of the coherent ghost particle motion. Guidelines for the implementation and the application of the present method are proposed. The performance is assessed with a parametric study on synthetic experiments. The proposed low cost system produces a much lower modulation with respect to an equivalent three-camera system. Furthermore, the potential accuracy improvement using the Motion Tracking Enhanced MART (Novara et al 2010 Meas. Sci. Technol. 21 035401) is much higher than in the case of the standard implementation of tomographic PIV.
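The interrogation-by-cross-correlation step works the same way in miniature as in the full method: the displacement is the location of the peak of the 3D cross-correlation of the two reconstructed volumes. A minimal FFT-based sketch (integer-voxel accuracy only, no sub-voxel peak fit or window deformation):

```python
import numpy as np


def correlation_displacement(vol_a, vol_b):
    """Integer-voxel displacement between two reconstructed particle
    volumes, from the peak of their FFT-based cross-correlation."""
    fa = np.fft.fftn(vol_a)
    fb = np.fft.fftn(vol_b)
    corr = np.real(np.fft.ifftn(np.conj(fa) * fb))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (periodic wrap-around).
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, vol_a.shape))


# Synthetic check: a random particle field shifted by (2, -3, 1) voxels.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
shifted = np.roll(vol, shift=(2, -3, 1), axis=(0, 1, 2))
dx = correlation_displacement(vol, shifted)
```

In a real interrogation the volumes are split into small windows and this correlation is evaluated per window to build the velocity field.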
Numerical simulation of thermal loading produced by shaped high power laser onto engine parts
Song Hongwei; Li Shaoxia; Zhang Ling; Yu Gang; Zhou Liang; Tan Jiansong
2010-01-01
Recently a new method for simulating the thermal loading on pistons of diesel engines was reported. The spatially shaped high power laser is employed as the heat source, and some preliminary experimental and numerical work was carried out. In this paper, a further effort was made to extend this simulation method to some other important engine parts such as cylinder heads. The incident Gaussian beam was transformed into concentric multi-circular patterns of specific intensity distributions, with the aid of diffractive optical elements (DOEs). By incorporating the appropriate repetitive laser pulses, the designed transient temperature fields and thermal loadings in the engine parts could be simulated. Thermal-structural numerical models for pistons and cylinder heads were built to predict the transient temperature and thermal stress. The models were also employed to find the optimal intensity distributions of the transformed laser beam that could produce the target transient temperature fields. Comparison of experimental and numerical results demonstrated that this systematic approach is effective in simulating the thermal loading on the engine parts.
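The transient heating produced by a repetitive laser pulse can be mimicked in one dimension with an explicit finite-difference conduction model. The material values below are roughly steel-like and purely illustrative, not the paper's thermal-structural model:

```python
import numpy as np


def heated_slab(q_flux, t_pulse, t_total, L=0.01, nx=50,
                k=50.0, rho=7800.0, cp=460.0):
    """Explicit finite-difference temperature rise in a 1D slab whose
    front face receives a laser heat flux for t_pulse seconds
    (insulated back face; steel-like property values)."""
    alpha = k / (rho * cp)                # thermal diffusivity, m^2/s
    dx = L / (nx - 1)
    dt = 0.4 * dx ** 2 / alpha            # stable: dt < dx^2 / (2 alpha)
    T = np.zeros(nx)                      # temperature rise above ambient
    t = 0.0
    while t < t_total:
        q = q_flux if t < t_pulse else 0.0
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (
            T[2:] - 2 * T[1:-1] + T[:-2])
        # Half-cell boundary nodes: imposed flux at x=0, insulated at x=L.
        Tn[0] = T[0] + 2 * alpha * dt / dx ** 2 * (T[1] - T[0] + q * dx / k)
        Tn[-1] = T[-1] + 2 * alpha * dt / dx ** 2 * (T[-2] - T[-1])
        T, t = Tn, t + dt
    return T


# 1 MW/m^2 applied for 0.5 s, observed at t = 1.0 s.
T = heated_slab(q_flux=1e6, t_pulse=0.5, t_total=1.0)
```

The flux boundary treatment conserves energy, so the mean temperature rise can be checked against q·t_pulse/(ρ·cp·L), about 14 K here.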
Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code
Paolucci, Roberto; Stupazzini, Marco
2008-01-01
Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motion in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.
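Although the study uses a 3D spectral-element code, the underlying physics can be illustrated with a second-order finite-difference scheme for the 1D scalar wave equation. This is a toy stand-in only; the velocity, domain and pulse below are invented:

```python
import numpy as np


def propagate_wave(nx=400, c=3000.0, L=4000.0, nt=300):
    """Second-order finite-difference solution of the 1D scalar wave
    equation u_tt = c^2 u_xx with fixed (u = 0) ends: a toy stand-in
    for 3D spectral-element propagation codes such as GeoELSE."""
    dx = L / (nx - 1)
    dt = 0.5 * dx / c                              # CFL number 0.5
    x = np.linspace(0, L, nx)
    u = np.exp(-((x - L / 4) / 100.0) ** 2)        # Gaussian pulse
    u_prev = u.copy()                              # zero initial velocity
    for _ in range(nt):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + (c * dt / dx) ** 2
                        * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return x, u


x, u = propagate_wave()
```

With zero initial velocity the pulse splits into two half-amplitude waves; after t = nt·dt ≈ 0.5 s the right-going half sits near x ≈ 2500 m, a simple analytic check on the scheme.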
Measurement and numerical simulation of high intensity focused ultrasound field in water
Lee, Kang Il
2017-11-01
In the present study, the acoustic field of a high intensity focused ultrasound (HIFU) transducer in water was measured by using a commercially available needle hydrophone intended for HIFU use. To validate the results of hydrophone measurements, numerical simulations of HIFU fields were performed by integrating the axisymmetric Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation from the frequency-domain perspective with the help of a MATLAB-based software package developed for HIFU simulation. Quantitative values for the focal waveforms, the peak pressures, and the size of the focal spot were obtained in various regimes of linear, quasilinear, and nonlinear propagation up to the source pressure levels when the shock front was formed in the waveform. The numerical results with the HIFU simulator solving the KZK equation were compared with the experimental data and found to be in good agreement. This confirms that the numerical simulation based on the KZK equation is capable of capturing the nonlinear pressure field of therapeutic HIFU transducers well enough to make it suitable for HIFU treatment planning.
Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.
2014-01-01
A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.
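The detection-to-track association at the heart of the segmentation can be sketched with a greedy nearest-neighbour linker that follows fibre centres from slice to slice. The paper's Global Nearest Neighbor architecture with a Kalman filter is considerably more robust; this is a simplified illustration:

```python
import numpy as np


def link_fibers(slices, max_dist=3.0):
    """Greedy nearest-neighbour linking of fibre-centre detections
    across successive image slices. `slices` is a list of (n_i, 2)
    arrays of centre coordinates, one array per slice."""
    tracks = [[tuple(p)] for p in slices[0]]
    for detections in slices[1:]:
        unused = list(range(len(detections)))
        for track in tracks:
            if not unused:
                break
            last = np.array(track[-1])
            dists = [np.linalg.norm(detections[j] - last) for j in unused]
            jbest = int(np.argmin(dists))
            if dists[jbest] < max_dist:
                track.append(tuple(detections[unused[jbest]]))
                unused.pop(jbest)
    return tracks


# Two straight "fibres" drifting slowly in x across 10 slices.
slices = [np.array([[0.0 + 0.1 * z, 0.0], [10.0 - 0.1 * z, 5.0]])
          for z in range(10)]
tracks = link_fibers(slices)
```

Each track is the reconstructed centreline of one fibre; stitching, smoothing and overlap removal would operate on these polylines.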
Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan
2013-03-01
Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in in-house simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them simulates trap-assisted GR noise accurately. In order to overcome this problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the mathematical SCILAB software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low-frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.
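Trap-assisted GR noise has the classic Lorentzian spectrum S(f) ∝ 4Nτ/(1 + (2πfτ)²). The sketch below just evaluates that form; the amplitude units are arbitrary and the coupling to the device's Green-function responses, which is the paper's actual contribution, is omitted:

```python
import numpy as np


def gr_noise_psd(f, n_traps, tau):
    """Lorentzian spectral density of trap-assisted generation-
    recombination noise: S(f) = 4 * N * tau / (1 + (2*pi*f*tau)^2),
    in arbitrary amplitude units."""
    return 4.0 * n_traps * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)


f = np.logspace(0, 8, 200)            # 1 Hz .. 100 MHz
s = gr_noise_psd(f, n_traps=1e4, tau=1e-5)
```

The spectrum is flat up to the corner frequency 1/(2πτ), here about 16 kHz, and falls as 1/f² beyond it; superposing several such Lorentzians with different τ is what produces 1/f-like low-frequency noise.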
Tak, Nam-il; Kim, Min-Hwan; Lee, Won Jae
2008-01-01
The complex geometry of the hexagonal fuel blocks of the prismatic fuel assembly in a very high temperature reactor (VHTR) hinders accurate evaluation of the temperature profile within the fuel assembly without elaborate numerical calculations. Therefore, simplified models such as a unit cell model have been widely applied in the analysis and design of prismatic VHTRs, since they are considered effective approaches for reducing the computational effort. In a prismatic VHTR, however, the simplified models can consider neither the heat transfer within a fuel assembly nor the coolant flow through the bypass gap between fuel assemblies, both of which may significantly affect the maximum fuel temperature. In this paper, a three-dimensional computational fluid dynamics (CFD) analysis has been carried out on a typical fuel assembly of a prismatic VHTR. Thermal behaviour and heat transfer within the fuel assembly are intensively investigated using the CFD solutions. In addition, the accuracy of the unit cell approach is assessed against the CFD solutions. Two example situations are illustrated to demonstrate the deficiency of the unit cell model caused by neglecting the effects of the bypass gap flow and the radial power distribution within the fuel assembly.
Numerical analysis of the slipstream development around a high-speed train in a double-track tunnel.
Fu, Min; Li, Peng; Liang, Xi-Feng
2017-01-01
Analysis of the slipstream development around high-speed trains in tunnels provides references for assessing the transient gust loads on trackside workers and trackside furniture in tunnels. This paper focuses on the computational analysis of the slipstream caused by high-speed trains passing through double-track tunnels with a cross-sectional area of 100 m². Three-dimensional unsteady compressible Reynolds-averaged Navier-Stokes equations and a realizable k-ε turbulence model were used to describe the airflow characteristics around a high-speed train in the tunnel. The moving boundary problem was treated using sliding mesh technology. Three cases were simulated in this paper, covering two tunnel lengths and two different configurations of the train. The train speed in all three cases was 250 km/h. The accuracy of the numerical method was validated against experimental data from full-scale tests, and reasonable consistency was obtained. The results show that the flow field around the high-speed train can be divided into three distinct regions: the region in front of the train nose, the annular region and the wake region. The slipstream development along the two sides of the train is unbalanced and is offset toward the narrow side in the double-track tunnels. Due to the piston effect, the slipstream has a larger peak value in the tunnel than in open air. The tunnel length, train length and length ratio affect the slipstream velocities; in particular, the velocities increase with longer trains. Moreover, the propagation of pressure waves also induces slipstream fluctuations: substantial velocity fluctuations mainly occur in front of the train, and weaken as the amplitude of the pressure wave decreases.
Zhong Ting
2009-01-01
Starting from the concepts of statistical symmetry we consider different aspects of the connections between nonlinear dynamics and high energy physics. We pay special attention to the interplay between number theory and dynamics. We subsequently utilize the so obtained insight to compute vital constants relevant to the program of grand unification and quantum gravity.
Ostoich, Christopher Mark
due to a dome-induced horseshoe vortex scouring the panel's surface. Comparisons with reduced-order models of heat transfer indicate that they perform with varying levels of accuracy around some portions of the geometry while completely failing to predict significant heat loads in regions where the dome-influenced flow impacts the ceramic panel. Cumulative effects of flow-thermal coupling at later simulation times on the reduction of panel drag and surface heat transfer are quantified. The second fluid-structure study investigates the interaction between a thin metallic panel and a Mach 2.25 turbulent boundary layer with an initial momentum thickness Reynolds number of 1200. A transient, non-linear, large-deformation, 3D finite element solver is developed to compute the dynamic response of the panel. The solver is coupled at the fluid-structure interface with the compressible Navier-Stokes solver, the latter of which is used for a direct numerical simulation of the turbulent boundary layer. In this approach, no simplifying assumptions regarding the structural solution or turbulence modeling are made, in order to get detailed solution data. It is found that the thin panel state evolves into a flutter-type response characterized by high-amplitude, high-frequency oscillations into the flow. The oscillating panel disturbs the supersonic flow by introducing compression waves, modifying the turbulence, and generating fluctuations in the power exiting the top of the flow domain. The work in this thesis serves as a step forward in structural response prediction in high-speed flows. The results demonstrate the ability of high-fidelity numerical approaches to serve as a guide for reduced-order model improvement, as well as to provide accurate and detailed solution data in scenarios where experimental approaches are difficult or impossible.
ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000
2000-01-01
ISPA offers: ten times better resolution than Anger cameras; high-efficiency single-gamma counting; and noise reduction by sensitivity to gamma energy ... for Single Photon Emission Computed Tomography (SPECT).
Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug
2016-01-01
We explore the use of a modal expansion technique, the Fourier modal method (FMM), for investigating the optical properties of vertical cavities employing high-contrast gratings (HCGs). Three techniques for determining the resonance frequency and quality factor (Q-factor) of a cavity mode are compared......, the scattering losses of several HCG-based vertical cavities with in-plane heterostructures, which have promising prospects for fundamental physics studies and on-chip laser applications, are investigated. This type of parametric study of 3D structures would be numerically very demanding using spatial
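Of the techniques compared for extracting a cavity mode's Q-factor, the simplest conceptually treats the resonance as a complex pole of the scattering problem and takes Q = Re(ω)/(2|Im(ω)|). A minimal sketch with invented numbers (not values from the paper):

```python
import numpy as np


def quality_factor(omega_complex):
    """Q-factor of a cavity mode from its complex resonance frequency:
    Q = Re(omega) / (2 |Im(omega)|)."""
    return omega_complex.real / (2.0 * abs(omega_complex.imag))


# A mode at 193.5 THz (1550 nm) with a 100 MHz FWHM linewidth,
# i.e. omega = omega0 - i*gamma/2 with gamma = 2*pi*100e6 rad/s.
omega = 2 * np.pi * (193.5e12 - 0.5j * 100e6)
q = quality_factor(omega)
```

This reproduces the familiar Q = f0/Δf, here 193.5 THz / 100 MHz ≈ 1.9 million; the same formula applies whether the pole is found by FMM, FDTD ringdown fitting, or any other solver.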
State of the art in numerical simulation of high head Francis turbines
Trivedi Chirag
2016-01-01
The Francis-99 test case consists of a high head Francis turbine model whose geometry, together with meshes and detailed experimental measurements, is freely available at www.francis-99.org. Three workshops were initially planned to exchange experience on numerical investigations of the test case concerning steady-state operating conditions, transient operating conditions and fluid-structure analysis. The first workshop was held in Trondheim, Norway, in December 2014. Some results of the 14 contributions are presented. They concern the influence of the near-wall space discretization and of turbulence modelling on capturing hydraulic efficiency, torque, pressure and velocity with a good uncertainty at three operating conditions.
Gated viewing and high-accuracy three-dimensional laser radar
Busck, Jens; Heiselberg, Henning
2004-01-01
, a high PRF of 32 kHz, and a high-speed camera with gate times down to 200 ps and delay steps down to 100 ps. The electronics and the software also allow for gated viewing with automatic gain control versus range, whereby foreground backscatter can be suppressed. We describe our technique for the rapid...
McDonell, Vincent; Hill, Scott; Akbari, Amin; McDonell, Vincent
2011-09-30
As simulation capability improves exponentially with increasingly more cost effective CPUs and hardware, it can be used "routinely" for engineering applications. Many commercial products are available and they are marketed as increasingly powerful and easy to use. The question remains as to the overall accuracy of results obtained. To support the validation of the CFD, a hierarchical experiment was established in which the type of fuel injection (radial, axial) as well as level of swirl (non-swirling, swirling) could be systematically varied. The effort was limited to time-efficient approaches (i.e., generally RANS approaches), although a limited assessment of time-resolved methods (i.e., unsteady RANS and LES) was considered. Careful measurements of the flowfield velocity and fuel concentration were made using both intrusive and non-intrusive methods. This database was then used as the basis for the assessment of the CFD approach. The numerical studies were carried out with a statistically based matrix. As a result, the effect of turbulence model, fuel type, axial plane, turbulent Schmidt number, and injection type could be studied using analysis of variance. The results for the non-swirling cases could be analyzed as planned, and demonstrate that turbulence model selection, turbulence Schmidt number, and the type of injection will strongly influence the agreement with measured values. Interestingly, the type of fuel used (either hydrogen or methane) has no influence on the accuracy of the simulations. For axial injection, the selection of proper turbulence Schmidt number is important, whereas for radial injection, the results are relatively insensitive to this parameter. In general, it was found that the nature of the flowfield influences the performance of the predictions. This result implies that it is difficult to establish a priori the "best" simulation approach to use. However, the insights from the relative orientation of the jet and flow do offer some
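The analysis-of-variance step used to study the effect of turbulence model and the other factors can be sketched for a single factor. The error values below are invented for illustration:

```python
import numpy as np


def one_way_anova(*groups):
    """F statistic for a one-way analysis of variance: ratio of
    between-group to within-group mean squares."""
    all_data = np.concatenate(groups)
    grand = all_data.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_data) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)


# Hypothetical RMS velocity errors (m/s) from three turbulence models.
k_eps = np.array([0.42, 0.45, 0.40, 0.44])
k_omega = np.array([0.31, 0.29, 0.33, 0.30])
rsm = np.array([0.28, 0.27, 0.30, 0.26])
f_stat = one_way_anova(k_eps, k_omega, rsm)
```

A large F relative to the F(2, 9) reference distribution indicates that model choice matters; the study's full matrix extends this to several crossed factors.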
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji
2017-01-01
In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, because (a) accurate solution of a high-Reynolds number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those in a direction parallel to the wall, such that the use of grid cells with extremely high aspect ratio (10^3 to 10^6) may become mandatory, and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations on triangular/tetrahedral grids tends to deteriorate rapidly as the cell aspect ratio increases, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurate resolution of high-Reynolds number flow fields near a solid wall, as will be presented in the main paper, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells. By avoiding the pitfalls identified by the 2D framework and its 3D extension, it has been shown numerically.
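The aspect-ratio-driven loss of gradient accuracy can be demonstrated on a single sliver triangle: the gradient of the linear interpolant of a smooth quadratic field diverges from the true centroid gradient as the cell flattens. This is a toy 2D illustration of the effect, not the CESE framework itself:

```python
import numpy as np


def linear_gradient(verts, vals):
    """Gradient of the linear interpolant over a triangle, from its
    three vertex values (solves the 2x2 edge system)."""
    e = np.array([verts[1] - verts[0], verts[2] - verts[0]])
    d = np.array([vals[1] - vals[0], vals[2] - vals[0]])
    return np.linalg.solve(e, d)


def gradient_error(aspect_ratio):
    """Error of the reconstructed gradient of u = x^2 + y^2 on a sliver
    triangle of height 1/aspect_ratio, measured at the centroid."""
    t = 1.0 / aspect_ratio
    verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, t]])
    vals = verts[:, 0] ** 2 + verts[:, 1] ** 2
    g = linear_gradient(verts, vals)
    exact = 2.0 * verts.mean(axis=0)      # grad(x^2 + y^2) = (2x, 2y)
    return np.linalg.norm(g - exact)


errors = [gradient_error(ar) for ar in (1.0, 10.0, 100.0, 1000.0)]
```

The error grows roughly linearly with aspect ratio here, which is the deterioration the paper's mathematical framework is built to explain and circumvent.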
Yamaguchi, S; Koterayama, W [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics
1996-04-10
The differential global positioning system (DGPS) can eliminate most of the errors in ship velocity measurement by GPS positioning alone. Through two rounds of marine observations by towing an observation robot in summer 1995, the authors attempted high-accuracy measurement of ship velocities by DGPS, and also carried out positioning by GPS alone and measurement using the bottom track of an ADCP (acoustic Doppler current profiler). In this paper, the results obtained by these measurement methods are compared, and the accuracy of the measured ship velocities is considered. In the DGPS measurements, both the translocation method and the interference positioning method were used. The ADCP mounted on the observation robot allowed measurement of the velocity of the current meter itself by its bottom track in shallow sea areas of less than 350 m depth. As a result of these marine observations, it was confirmed that accuracy equivalent to that of direct measurement by bottom track can be obtained by DGPS. 3 refs., 5 figs., 1 tab.
Wening, Stefanie; Keith, Nina; Abele, Andrea E
2016-06-01
In negotiations, a focus on interests (why negotiators want something) is key to integrative agreements. Yet, many negotiators spontaneously focus on positions (what they want), with suboptimal outcomes. Our research applies construal-level theory to negotiations and proposes that a high construal level instigates a focus on interests during negotiations which, in turn, positively affects outcomes. In particular, we tested the notion that the effect of construal level on outcomes was mediated by information exchange and judgement accuracy. Finally, we expected the mere mode of presentation of task material to affect construal levels and manipulated construal levels using concrete versus abstract negotiation tasks. In two experiments, participants negotiated in dyads in either a high- or low-construal-level condition. In Study 1, high-construal-level dyads outperformed dyads in the low-construal-level condition; this main effect was mediated by information exchange. Study 2 replicated both the main and mediation effects using judgement accuracy as mediator and additionally yielded a positive effect of a high construal level on a second, more complex negotiation task. These results not only provide empirical evidence for the theoretically proposed link between construal levels and negotiation outcomes but also shed light on the processes underlying this effect. © 2015 The British Psychological Society.
Neutrino mass from cosmology: impact of high-accuracy measurement of the Hubble constant
Sekiguchi, Toyokazu [Institute for Cosmic Ray Research, University of Tokyo, Kashiwa 277-8582 (Japan); Ichikawa, Kazuhide [Department of Micro Engineering, Kyoto University, Kyoto 606-8501 (Japan); Takahashi, Tomo [Department of Physics, Saga University, Saga 840-8502 (Japan); Greenhill, Lincoln, E-mail: sekiguti@icrr.u-tokyo.ac.jp, E-mail: kazuhide@me.kyoto-u.ac.jp, E-mail: tomot@cc.saga-u.ac.jp, E-mail: greenhill@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2010-03-01
Non-zero neutrino mass would affect the evolution of the Universe in observable ways, and a strong constraint on the mass can be achieved using combinations of cosmological data sets. We focus on the power spectrum of cosmic microwave background (CMB) anisotropies, the Hubble constant H₀, and the length scale for baryon acoustic oscillations (BAO) to investigate the constraint on the neutrino mass, m_ν. We analyze data from multiple existing CMB studies (WMAP5, ACBAR, CBI, BOOMERANG, and QUAD), a recent measurement of H₀ (SHOES) with about two times lower uncertainty (5%) than previous estimates, and recent treatments of BAO from the Sloan Digital Sky Survey (SDSS). We obtained an upper limit of m_ν < 0.2 eV (95% C.L.) for a flat ΛCDM model. This is a 40% reduction in the limit derived from previous H₀ estimates and one-third lower than can be achieved with extant CMB and BAO data. We also analyze the impact of smaller uncertainty on measurements of H₀, as may be anticipated in the near term, in combination with CMB data from the Planck mission and BAO data from the SDSS/BOSS program. We demonstrate the possibility of a 5σ detection for a fiducial neutrino mass of 0.1 eV, or a 95% upper limit of 0.04 eV for a fiducial of m_ν = 0 eV. These constraints are about 50% better than those achieved without the external constraint. We further investigate the impact on modeling where the dark-energy equation of state is constant but not necessarily -1, or where a non-flat universe is allowed. In these cases, the next-generation accuracies of Planck, BOSS, and a 1% measurement of H₀ would all be required to obtain the limit m_ν < 0.05-0.06 eV (95% C.L.) for the fiducial of m_ν = 0 eV. The independence of systematics argues for pursuit of both BAO and H₀ measurements.
Challenges in high accuracy surface replication for micro optics and micro fluidics manufacture
Tosello, Guido; Hansen, Hans Nørgaard; Calaon, Matteo
2014-01-01
Patterning the surface of polymer components with microstructured geometries is employed in optical and microfluidic applications. Mass fabrication of polymer micro structured products is enabled by replication technologies such as injection moulding. Micro structured tools are also produced...... by replication technologies such as nickel electroplating. All replication steps are enabled by a high precision master and high reproduction fidelity to ensure that the functionalities associated with the design are transferred to the final component. Engineered surface micro structures can be either...
Experimental Preparation and Numerical Simulation of High Thermal Conductive Cu/CNTs Nanocomposites
Muhsan Ali Samer
2014-07-01
Due to the rapid growth of high-performance electronic devices accompanied by overheating problems, a heat-dissipating nanocomposite material having ultra-high thermal conductivity and a low coefficient of thermal expansion was proposed. In this work, a nanocomposite made of copper (Cu) reinforced by multi-walled carbon nanotubes (CNTs) up to 10 vol.% was prepared, and its thermal behaviour was measured experimentally and evaluated using numerical simulation. To numerically predict the thermal behaviour of the Cu/CNT composites, three different prediction methods were applied. The results showed that the rule-of-mixtures method yields the highest thermal conductivity for all predicted composites. In contrast, the prediction model that takes into account the interface thermal resistance between the CNTs and the copper particles shows the lowest thermal conductivity, which is the closest to the experimental measurements. The experimentally measured thermal conductivities showed a remarkable increase after adding 5 vol.% CNTs and were higher than the thermal conductivities predicted via Nan's models, indicating that the improved powder injection moulding technique used to produce the Cu/CNT nanocomposites has overcome the challenges assumed in the mathematical models.
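The ordering reported above (rule of mixtures highest, interface-resistance model lowest) can be reproduced with textbook estimates. The interfacial correction kp/(1 + kp·r_bd/a) is a common first-order stand-in for the full Nan et al. CNT model used in the paper, and all property values below are nominal, not the paper's:

```python
def rule_of_mixtures(km, kp, f):
    """Parallel (upper-bound) estimate: k = f*kp + (1 - f)*km."""
    return f * kp + (1.0 - f) * km


def maxwell(km, kp, f):
    """Maxwell-Garnett estimate for dilute spherical inclusions of
    conductivity kp at volume fraction f in a matrix km."""
    return km * (kp + 2 * km + 2 * f * (kp - km)) / (
        kp + 2 * km - f * (kp - km))


def maxwell_with_interface(km, kp, f, r_bd, a):
    """Same estimate with the inclusion conductivity degraded by an
    interfacial (Kapitza) resistance r_bd acting over radius a; the
    kp/(1 + kp*r_bd/a) reduction is an illustrative first-order
    correction, not the full Nan et al. model."""
    kp_eff = kp / (1.0 + kp * r_bd / a)
    return maxwell(km, kp_eff, f)


KM_CU, KP_CNT = 400.0, 3000.0        # W/(m K), nominal Cu and CNT values
f = 0.05                             # 5 vol.% CNT
k_rom = rule_of_mixtures(KM_CU, KP_CNT, f)
k_mx = maxwell(KM_CU, KP_CNT, f)
k_int = maxwell_with_interface(KM_CU, KP_CNT, f, r_bd=1e-8, a=1e-8)
```

With a strong interfacial resistance the effective filler conductivity collapses, and the composite estimate can even fall below that of pure copper, consistent with the interface-aware model being the lowest of the three.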
David Palko
2008-01-01
A numerical investigation of the heat transfer deterioration (HTD) phenomenon is performed using the low-Re k-ω turbulence model. Steady-state Reynolds-averaged Navier-Stokes equations are solved together with equations for the transport of enthalpy and turbulence. The equations are solved for supercritical water flow at different pressures, using water properties from the standard IAPWS (International Association for the Properties of Water and Steam) tables. All cases are extensively validated against experimental data. The influence of buoyancy on the HTD is demonstrated for different mass flow rates in the heated pipes. The numerical results prove that the RANS low-Re turbulence modeling approach is fully capable of simulating heat transfer in pipes with water flow at supercritical pressures. A study of the buoyancy influence shows that for low coolant mass flow rates, the influence of buoyancy forces on the heat transfer in heated pipes is significant. For high flow rates, the buoyancy influence can be neglected, and there are clearly other mechanisms causing the decrease in heat transfer at high coolant flow rates.
Nakajima, Daiki; Kikuchi, Tatsuya; Natsui, Shungo; Sakaguchi, Norihito; Suzuki, Ryosuke O.
2015-11-01
The formation behavior of anodic alumina nanofibers via anodizing in a concentrated pyrophosphoric acid under various conditions was investigated using electrochemical measurements and SEM/TEM observations. Pyrophosphoric acid anodizing at 293 K resulted in the formation of numerous anodic alumina nanofibers on an aluminum substrate through a thin barrier oxide and honeycomb oxide with narrow walls. However, long-term anodizing led to the chemical dissolution of the alumina nanofibers. The density of the anodic alumina nanofibers decreased as the applied voltage increased in the 10-75 V range. However, active electrochemical dissolution of the aluminum substrate occurred at a higher voltage of 90 V. Low temperature anodizing at 273 K resulted in the formation of long alumina nanofibers measuring several micrometers in length, even though a long processing time was required due to the low current density during the low temperature anodizing. In contrast, high temperature anodizing easily resulted in the formation and chemical dissolution of alumina nanofibers. The structural nanofeatures of the anodic alumina nanofibers were controlled by choosing of the appropriate electrochemical conditions, and numerous high-aspect-ratio alumina nanofibers (>100) can be successfully fabricated. The anodic alumina nanofibers consisted of a pure amorphous aluminum oxide without anions from the employed electrolyte.
Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth
2009-02-01
The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.
Herrera, VM; Casas, JP; Miranda, JJ; Perel, P; Pichardo, R; González, A; Sanchez, JR; Ferreccio, C; Aguilera, X; Silva, E; Oróstegui, M; Gómez, LF; Chirinos, JA; Medina-Lezama, J; Pérez, CM; Suárez, E; Ortiz, AP; Rosero, L; Schapochnik, N; Ortiz, Z; Ferrante, D; Diaz, M; Bautista, LE
2009-01-01
Background: Cut points for defining obesity have been derived from mortality data among Whites from Europe and the United States, and their accuracy in screening for high risk of coronary heart disease (CHD) in other ethnic groups has been questioned. Objective: To compare the accuracy of, and to define ethnic- and gender-specific optimal cut points for, body mass index (BMI), waist circumference (WC) and waist-to-hip ratio (WHR) when they are used to screen for high risk of CHD in the Latin-American and US populations. Methods: We estimated the accuracy and optimal cut points for BMI, WC and WHR to screen for CHD risk in Latin Americans (n=18 976), non-Hispanic Whites (Whites; n=8956), non-Hispanic Blacks (Blacks; n=5205) and Hispanics (n=5803). High risk of CHD was defined as a 10-year risk ≥20% (Framingham equation). The area under the receiver operating characteristic curve (AUC) and the misclassification-cost term were used to assess accuracy and to identify optimal cut points. Results: WHR had the highest AUC in all ethnic groups (from 0.75 to 0.82) and BMI had the lowest (from 0.50 to 0.59). The optimal cut point for BMI was similar across ethnic/gender groups (27 kg/m²). In women, cut points for WC (94 cm) and WHR (0.91) were consistent by ethnicity. In men, cut points for WC and WHR varied significantly with ethnicity: from 91 cm in Latin Americans to 102 cm in Whites, and from 0.94 in Latin Americans to 0.99 in Hispanics, respectively. Conclusion: WHR is the most accurate anthropometric indicator for screening for high risk of CHD, whereas BMI is almost uninformative. The same BMI cut point should be used in all men and women. Unique cut points for WC and WHR should be used in all women, but ethnic-specific cut points seem warranted among men. PMID:19238159
Calaon, Matteo; Tosello, Guido; Elsborg, René
2016-01-01
The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility of developing a micro-manufacturing platform for micro assembly injection moulding with real-time process/product monitoring and metrology. The study represents a new concept, yet to be developed, with great potential for high-precision mass manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance...
Wen Wan Xin
2002-01-01
The energy resolution and time resolution of two newly made φ75 × 100 BGO detectors for high-energy gamma rays were measured with ¹³⁷Cs and ⁶⁰Co sources. The two characteristic high-energy gamma rays emitted from thermal neutron capture on germanium in the BGO crystal were used for the energy calibration of the gamma spectra. The intrinsic photopeak efficiency and the single and double escape probabilities of the BGO detectors in the photon energy range of 4-30 MeV were numerically calculated with the GEANT code. The real count response and count ratio for uniformly distributed incident photons in the energy range of 0-30 MeV were also calculated. The distortion of the gamma spectra caused by photon energy loss extending to lower energies in the detection medium is discussed.
Numerical investigation of heat transfer in high-temperature gas-cooled reactors
Chen, G.; Anghaie, S. [Univ. of Florida, Gainesville, FL (United States)]
1995-09-01
This paper proposes a computational model for analysis of flow and heat transfer in high-temperature gas-cooled reactors. The formulation of the problem is based on using the axisymmetric, thin layer Navier-Stokes equations. A hybrid implicit-explicit method based on a finite volume approach is used to numerically solve the governing equations. A fast converging scheme is developed to accelerate the Gauss-Seidel iterative method for problems involving the wall heat flux boundary condition. Several cases are simulated and results of temperature and pressure distribution in the core are presented. Results of a parametric analysis for the assessment of the impact of power density on the convective heat transfer rate and wall temperature are discussed. A comparative analysis is conducted to identify the Nusselt number correlation that best fits the physical conditions of the high-temperature gas-cooled reactors.
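The Gauss-Seidel iteration that the accelerated scheme above builds on can be sketched as follows. This is a minimal baseline in Python with an arbitrary diagonally dominant example matrix; the paper's convergence-acceleration step and boundary-condition treatment are not reproduced.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10_000):
    """Plain Gauss-Seidel iteration for A x = b (A diagonally dominant)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use the freshest values for components < i (the Gauss-Seidel update)
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Illustrative system, not taken from the paper.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b)
```

Acceleration schemes like the one in the paper typically wrap such an inner sweep with an extrapolation or relaxation factor to reduce the iteration count.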
Numerical evaluation of electromagnetic force induced in high Tc superconductor with grain boundary
Hashizume, Hidetoshi; Toda, Saburo; Maeda, Koutaro
1996-01-01
After the high-Tc superconducting materials were discovered, their superconducting characteristics were improved to the point that the critical current density became comparable with that of metal alloy superconductors. Alongside this progress, applying the materials to generate levitation forces in combination with permanent magnets has been considered. In this case, it becomes very important to evaluate the electromagnetic force quantitatively when designing such devices. Some studies have used numerical analysis to evaluate the force, with the grain boundary either ignored or treated as nonconducting. In real materials, however, part of the screening current can pass through the grain boundary. In this paper, therefore, a two-dimensional electromagnetic analysis was performed with a new method of treating the grain boundaries, and their effect on the levitation force is discussed
Song, J.L.; Li, Y.T.; Liu, Z.Q.; Fu, J.H.; Ting, K.L.
2009-01-01
To overcome the disadvantages of conventional bar cutting technology, such as low cutting speed, inferior section quality and high processing cost, a novel precision bar cutting technology has been proposed and its cutting mechanism analyzed. Finite element numerical simulation of the bar cutting process under different working conditions has been carried out with DEFORM. The stress and strain fields at different cutting speeds, the variation curves of the cutting force and appropriate cutting parameters have been obtained. Scanning electron microscopy analysis of the cut surface showed that the finite element simulation results are correct and that better cutting quality can be obtained with the developed bar cutting technology and equipment based on high speed and a restrained state
Numerical study of droplet evaporation in coupled high-temperature and electrostatic fields
Ziwen Zuo
2015-03-01
The evaporation of a sessile water droplet under coupled electrostatic and high-temperature fields is studied numerically. The leaky dielectric model and a boiling-point evaporation model are used to calculate the electric force and the heat and mass transfer. The free surface is captured using the volume-of-fluid method, accounting for the variable surface tension and the transition of physical properties across the interface. The flow behavior and temperature evolution under different applied fields are predicted. The results show that in the coupled fields the external electrostatic field restrains the flow inside the droplet and maintains a steady circulation. The flow velocity is reduced by the interaction between the electric body force and the force caused by the temperature gradient; the lower flow velocity in turn reduces the heat transfer from the air into the droplet, so the evaporation rate of the droplet in the high-temperature field is decreased.
Experimental and numerical study of plastic shear instability under high-speed loading conditions
Sokovikov, Mikhail; Chudinov, Vasiliy; Bilalov, Dmitry; Oborin, Vladimir; Uvarov, Sergey; Plekhov, Oleg; Terekhina, Alena; Naimark, Oleg
2014-01-01
The behavior of specimens dynamically loaded during split Hopkinson (Kolsky) bar tests in a regime close to simple shear conditions was studied. The lateral surface of the specimens was investigated in real time with the aid of a high-speed infrared camera (CEDIP Silver 450M). The temperature field distributions obtained at different times made it possible to trace the evolution of plastic strain localization. The process of target perforation involving plug formation and ejection was examined using the high-speed infrared camera and a VISAR velocity measurement system. The microstructure of the tested specimens was analyzed using an optical interferometer-profilometer and a scanning electron microscope. The development of plastic shear instability regions has been simulated numerically.
High Accuracy Three-dimensional Simulation of Micro Injection Moulded Parts
Tosello, Guido; Costa, F. S.; Hansen, Hans Nørgaard
2011-01-01
Micro injection moulding (μIM) is the key replication technology for high precision manufacturing of polymer micro products. Data analysis and simulations on micro-moulding experiments have been conducted during the present validation study. Detailed information about the μIM process was gathered...
Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei
2015-01-01
Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high-latitude areas beyond the TRMM satellites' latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998–2012. Comparative analysis at three timescales (daily, monthly and annual) indicates that adoption of a monthly adjustment significantly improved the correlation at larger timescales, increasing it from 0.63 to 0.95. TMPA data always exhibit a slight overestimation, which is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of the TMPA data varies across seasons: it generally performs best in summer but worst in winter, which is likely associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of the accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades, and that the data are more reliable in humid areas than in arid regions. Special attention should therefore be paid to its application in arid areas and in winter, where the accuracy indices score poorly. It is also clear that calibration can significantly improve the precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are
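The comparison statistics used above (correlation and percentage bias between satellite estimates and gauge observations) follow standard conventions; a minimal sketch assuming those common definitions rather than the paper's exact formulas:

```python
import numpy as np

def accuracy_indices(satellite, gauge):
    """Correlation coefficient and relative bias (%) of satellite precipitation
    estimates against gauge observations (standard definitions; assumed, not
    taken verbatim from the paper)."""
    s = np.asarray(satellite, float)
    g = np.asarray(gauge, float)
    r = np.corrcoef(s, g)[0, 1]                     # Pearson correlation
    bias_pct = 100.0 * (s.sum() - g.sum()) / g.sum()  # total relative bias
    return r, bias_pct

# Toy illustration: a satellite series that systematically overestimates
# the gauge by 20% correlates perfectly but carries a 20% bias.
gauge = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sat = 1.2 * gauge
r, bias = accuracy_indices(sat, gauge)
```

A "monthly adjustment" of the kind described above would rescale the satellite series so that its monthly totals match a reference, driving the bias term toward zero.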
Numerical study of the propagation of high power microwave pulses in air breakdown environment
Kim, J.; Kuo, S.P.
1992-01-01
A theoretical model based on a set of two modal equations has been developed to describe self-consistently the propagation of an intense microwave pulse in an air breakdown environment. It includes Poynting's equation for the continuity of the power flux of the pulse and the rate equation for the electron density. A forward-wave approximation is used to simplify Poynting's equation, and a semi-empirical formula for the ionization frequency as a function of the wave field amplitude is adopted for this model. In order to improve the numerical efficiency of the model, in terms of the required computation time and the available subroutines for numerical analysis of pulse propagation over a long distance, a transformation to the frame of the local time of the pulse is introduced. The effect of the space-time dependence of the group velocity of the pulse is included in this properly designed transformation, and the inhomogeneity of the background pressure is also preserved in the model. The resultant equations reduce to forms that can be solved directly by an available ODE-solver subroutine. In this work, a comprehensive numerical analysis of the propagation of a high-power microwave pulse through the atmosphere is performed. It is shown that the pulse energy can be severely attenuated by the self-generated plasma. The aim of the present study is therefore to identify the optimum parameters of the pulse, including its power, frequency, shape and length, so that the energy loss of the pulse before reaching its destination is minimized. The conditions for maximizing the ionization at a designated region in the upper atmosphere will also be determined
Museum genomics: low-cost and high-accuracy genetic data from historical specimens.
Rowe, Kevin C; Singhal, Sonal; Macmanes, Matthew D; Ayroles, Julien F; Morelli, Toni Lyn; Rubidge, Emily M; Bi, Ke; Moritz, Craig C
2011-11-01
Natural history collections are unparalleled repositories of geographical and temporal variation in faunal conditions. Molecular studies offer an opportunity to uncover much of this variation; however, genetic studies of historical museum specimens typically rely on extracting highly degraded and chemically modified DNA samples from skins, skulls or other dried samples. Despite this limitation, obtaining short fragments of DNA sequences using traditional PCR amplification of DNA has been the primary method for genetic study of historical specimens. Few laboratories have succeeded in obtaining genome-scale sequences from historical specimens and then only with considerable effort and cost. Here, we describe a low-cost approach using high-throughput next-generation sequencing to obtain reliable genome-scale sequence data from a traditionally preserved mammal skin and skull using a simple extraction protocol. We show that single-nucleotide polymorphisms (SNPs) from the genome sequences obtained independently from the skin and from the skull are highly repeatable compared to a reference genome. © 2011 Blackwell Publishing Ltd.
Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system
Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.
2017-11-01
Currently, most climatic systems are stationary and operate in the design mode only. At the same time, many modern industrial sites require constant or periodic changes in the technological process, so that for about 80% of the time the industrial site does not need the ventilation system to run in the design mode, while high precision of the climatic parameters must still be maintained. When climatic systems serving several rooms in parallel are not in constant use, balancing the duct system becomes a problem. To solve it, an algorithm for quantity regulation with minimal changes was created. Dynamic duct system: a parallel control system of the air balance with high precision of the climatic parameters was developed. The algorithm maintains a constant pressure in the main duct under varying air flows, so the terminal devices have only one regulation parameter, the flap opening area. The precision of regulation increases, and the climatic system maintains temperature and humidity with high precision (0.5 °C for temperature, 5% for relative humidity). Result: The research was carried out in the CFD package PHOENICS. Results for the air velocity and pressure in the duct for different operation modes were obtained, as was an equation for the air valve positions for different room climate parameters. The energy saving potential of the dynamic duct system was calculated for different types of rooms.
Burkhard, Boeckem
1999-01-01
In the course of the progressive development of sophisticated geodetic systems utilizing electromagnetic waves in the visible or near-IR range, a more detailed knowledge of the propagation medium and, at the same time, solutions to atmospherically induced limitations will become important. An alignment system based on atmospheric dispersion, called a dispersometer, is a metrological solution to the atmospherically induced limitations in optical alignment and high-accuracy direction observations. In the dispersometer we use the dual-wavelength method for dispersive air to obtain refraction-compensated angle measurements, the detrimental impact of atmospheric turbulence notwithstanding. The dual-wavelength method utilizes atmospheric dispersion, i.e. the wavelength dependence of the refractive index. The difference angle between two light beams of different wavelengths, called the dispersion angle Δβ, is to first approximation proportional to the refraction angle: β_IR ≈ ν(β_blue − β_IR) = ν·Δβ. This equation implies that the dispersion angle has to be measured at least 42 times more accurately than the desired accuracy of the refraction angle for the wavelengths used in the present dispersometer. This required accuracy constitutes one major difficulty in exploiting the dispersion effect instrumentally. Moreover, the dual-wavelength method can only be used successfully in an optimized transmitter-receiver combination. Beyond the above-mentioned resolution requirement for the detector, major difficulties in the instrumental realization arise from the availability of a suitable dual-wavelength laser light source, laser light modulation with a very high extinction ratio, and coaxial emission of mono-mode radiation at both wavelengths. Therefore, this paper focuses on the solutions for the dual-wavelength transmitter, introducing a new hardware approach and a complete re-design of the conception proposed in [1] of the dual
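The proportionality between the refraction angle and the dispersion angle, and the factor-of-42 accuracy requirement it implies, can be illustrated in a few lines. This is only a numerical sketch; the value ν = 42 is the factor quoted in the abstract for this wavelength pair.

```python
NU = 42.0  # dispersion factor for the wavelength pair used (from the abstract)

def refraction_angle(delta_beta, nu=NU):
    """Refraction angle estimated from the measured dispersion angle,
    beta_IR ~ nu * (beta_blue - beta_IR) = nu * delta_beta."""
    return nu * delta_beta

# An error of 0.1 (in any angular unit) in the dispersion angle is amplified
# 42-fold in the refraction angle, hence the stringent detector requirement.
error_in_refraction = refraction_angle(0.1)
```

This amplification is why the paper treats the detector resolution, rather than the optics, as the dominant instrumental difficulty.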
High-accuracy phase-field models for brittle fracture based on a new family of degradation functions
Sargado, Juan Michael; Keilegavlen, Eirik; Berre, Inga; Nordbotten, Jan Martin
2018-02-01
Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients. These in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations that enforce stress equilibrium and govern phase-field evolution. These equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
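The role of the degradation function can be illustrated with the classical quadratic choice mentioned above. The paper's parametric family is not reproduced here, so a generic power-law family stands in as a purely hypothetical example satisfying the same end-point constraints.

```python
import numpy as np

def g_quadratic(d):
    """Classical quadratic degradation function g(d) = (1 - d)^2,
    the form most often used in the phase-field literature."""
    return (1.0 - d) ** 2

def g_power(d, p=3.0):
    """Hypothetical power-law family g(d) = (1 - d)^p standing in for the
    paper's parametric family (which differs in detail)."""
    return (1.0 - d) ** p

# Any admissible degradation function must leave the undamaged material at
# full stiffness, g(0) = 1, and the fully broken material at zero, g(1) = 0,
# and decrease monotonically in between.
d = np.linspace(0.0, 1.0, 101)
stiffness_quadratic = g_quadratic(d)
```

The paper's criticism of the quadratic form is precisely that its shape near d = 0 degrades stiffness too early; a steeper family delays that loss, preserving linear elastic response before fracture.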
Peng, Junzheng; Wang, Qingquan; Peng, Xiang; Yu, Yingjie
2015-11-01
Stitching interferometry is a common method for measuring the figure error of high numerical aperture optics. However, subaperture measurement usually requires a fringe-nulling routine, thus making the stitching procedure complex and time-consuming. The challenge when measuring a surface without a fringe-nulling routine is that the rays no longer perpendicularly hit the surface. This violation of the null-test condition can lead to high fringe density and introduce high-order misalignment aberrations into the measurement result. This paper demonstrates that the high-order misalignment aberrations can be characterized by low-order misalignment aberrations; then, an efficient method is proposed to separate the high-order misalignment aberrations from subaperture data. With the proposed method, the fringe-nulling routine is not required. Instead, the subaperture data is measured under a nonzero fringe pattern. Then, all possible misalignment aberrations are removed with the proposed method. Finally, the full aperture map is acquired by connecting all subaperture data together. Experimental results showing the feasibility of the proposed procedure are presented.
Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.
2018-03-01
Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet packet transformation (WPT) de-noising and neural network (NN) prediction was proposed to improve the estimation of the dynamic behavior of structures for the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and of the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
Numerical Study on Mass Transfer of a Vapor Bubble Rising in Very High Viscous Fluid
T. Kunugi
2014-09-01
This study focused on bubble rising behavior in molten glass, which is important for improving the efficiency of bubble removal from the melt. Gas species present in a bubble can be transferred into the molten glass through the bubble interface by mass transfer, which may cause the bubble to contract. In order to understand bubble rising behavior with contraction caused by mass transfer through the bubble interface in a very high viscosity fluid such as molten glass, a bubble contraction model has been developed. Direct numerical simulations based on the MARS (Multi-interface Advection and Reconstruction Solver), coupled with the mass transfer equation and the bubble contraction model, were performed for a bubble rising in a very high viscosity fluid. Here, the working fluids were water vapor as the gas species and molten glass as the very high viscosity fluid. The jump conditions at the bubble interface for the mass transfer were also examined, and the influence of bubble contraction on the rise was compared with that in water as a normal-viscosity fluid. The numerical simulations showed that the bubble rising behavior is strongly affected not only by the viscosity of the working fluid but also by the bubble contraction due to the mass transfer through the bubble interface.
Numerical simulation of proton exchange membrane fuel cells at high operating temperature
Peng, Jie; Lee, Seung Jae [Energy Lab, Samsung Advanced Institute of Technology, Mt. 14-1 Nongseo-Dong, Giheung-Gu, Yongin-Si, Gyeonggi-Do 446-712 (Korea, Republic of)]
2006-11-22
A three-dimensional, single-phase, non-isothermal numerical model for proton exchange membrane (PEM) fuel cell at high operating temperature (T ≥ 393 K) was developed and implemented into a computational fluid dynamic (CFD) code. The model accounts for convective and diffusive transport and allows predicting the concentration of species. The heat generated from electrochemical reactions, entropic heat and ohmic heat arising from the electrolyte ionic resistance were considered. The heat transport model was coupled with the electrochemical and mass transport models. The product water was assumed to be vaporous and treated as ideal gas. Water transportation across the membrane was ignored because of its low water electro-osmosis drag force in the polymer polybenzimidazole (PBI) membrane. The results show that the thermal effects strongly affect the fuel cell performance. The current density increases with the increasing of operating temperature. In addition, numerical prediction reveals that the width and distribution of gas channel and current collector land area are key optimization parameters for the cell performance improvement.
Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor
2018-03-01
In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period of 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using the high-resolution (500 m grid length) mesoscale numerical weather model and model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, even the winter events. This indicates that the Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.
Numerical Model of Fluid Flow through Heterogeneous Rock for High Level Radioactive Waste Disposal
Shirai, M.; Chiba, R.; Takahashi, T.; Hashida, T.; Fomin, S.; Chugunov, V.; Niibori, Y.
2007-01-01
An international consensus has emerged that deep geological disposal on land is one of the most appropriate means of managing high-level radioactive waste (HLW). Because fluid transport is slow and the radioactive elements are dangerous, experiments spanning thousands of years are impossible; instead, a numerical model of the natural barrier, such as the fractured underground rock, must be considered. Field observations reveal that an equation with a fractional derivative describes the relevant physical phenomena more appropriately than an equation based on Fick's law, so non-Fickian diffusion in the inhomogeneous underground appears to be important in the assessment of HLW disposal. A solute transport equation with a fractional derivative has been suggested and discussed in the literature; however, no attempt had been made to apply this equation to modeling HLW disposal while accounting for radioactive decay. In this study, we suggest a novel fractional advection-diffusion equation which accounts for the effect of radioactive disintegration and for interactions between major macro pores and fractal micro pores. This model is fundamentally different from previously proposed HLW models, particularly in its use of a fractional derivative. Breakthrough curves numerically obtained with the present model are presented for a variety of rock types with respect to some important nuclides. The calculations showed that, over longer distances, our model tends to be more conservative than the conventional Fickian model, and it can therefore be considered safer
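For contrast with the fractional model described above, the conventional Fickian advection-diffusion equation with a first-order radioactive decay term can be discretized in a few lines. This is an explicit upwind sketch with illustrative (not site-specific) parameter values; the paper's fractional-derivative formulation is not reproduced.

```python
import numpy as np

def advect_diffuse_decay(c0, v, D, lam, dx, dt, steps):
    """Explicit upwind scheme for the classical (Fickian) transport equation
       dc/dt = -v dc/dx + D d2c/dx2 - lam*c
    with a fixed inlet concentration. The paper replaces the Fickian diffusion
    term with a fractional derivative; this is the conventional baseline."""
    c = np.asarray(c0, float).copy()
    for _ in range(steps):
        adv = -v * (c - np.roll(c, 1)) / dx                        # upwind, v > 0
        dif = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
        c = c + dt * (adv + dif - lam * c)
        c[0], c[-1] = c0[0], 0.0      # fixed inlet, zero outlet boundary
    return c

# Illustrative parameters chosen to satisfy the explicit stability limits
# v*dt/dx <= 1 and D*dt/dx**2 <= 1/2.
c0 = np.zeros(100); c0[0] = 1.0
c = advect_diffuse_decay(c0, v=1.0, D=0.1, lam=0.01, dx=1.0, dt=0.5, steps=200)
```

A fractional-derivative model replaces the local second-difference `dif` with a weighted sum over many upstream cells, which is what produces the heavier breakthrough-curve tails the abstract refers to.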
Numerical solution of the Navier--Stokes equations at high Reynolds numbers
Shestakov, A.I.
1974-01-01
A numerical method is presented which is designed to solve the Navier-Stokes equations for two-dimensional, incompressible flow. The method is intended for problems with high Reynolds numbers for which calculations via finite difference methods have been unattainable or unreliable. The proposed scheme is a hybrid utilizing a time-splitting finite difference method in areas away from the boundaries. In areas neighboring the boundaries, the equations of motion are solved by the vortex method newly proposed by Chorin. The major accomplishment of the new scheme is that it contains a simple way of merging the two methods at the interface of the two subdomains. The proposed algorithm is designed for the time-dependent equations but can be used on steady-state problems as well. The method is tested on the popular, time-independent, square cavity problem, an example of a separated flow with closed streamlines. Numerical results are presented for a Reynolds number of 10³. (auth)
WATERLOOP V2/64: A highly parallel machine for numerical computation
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high-performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to approach the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple-processor architecture which attempts to solve the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, which is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
COMSOL-PHREEQC: a tool for high performance numerical simulation of reactive transport phenomena
Nardi, Albert; Vries, Luis Manuel de; Trinchero, Paolo; Idiart, Andres; Molinero, Jorge
2012-01-01
Document available in extended abstract form only. Comsol Multiphysics (COMSOL, from now on) is a powerful finite element software environment for the modelling and simulation of a large number of physics-based systems. The user can apply variables, expressions or numbers directly to solid and fluid domains, boundaries, edges and points, independently of the computational mesh; COMSOL then internally compiles a set of equations representing the entire model. The availability of extremely powerful pre- and post-processors makes COMSOL a numerical platform well known and extensively used in many branches of science and engineering. PHREEQC, on the other hand, is a freely available computer program for simulating chemical reactions and transport processes in aqueous systems. It is perhaps the most widely used geochemical code in the scientific community and is openly distributed. The program is based on equilibrium chemistry of aqueous solutions interacting with minerals, gases, solid solutions, exchangers, and sorption surfaces, but also includes the capability to model kinetic reactions, with rate equations that are user-specified in a very flexible way by means of Basic statements written directly in the input file. Here we present COMSOL-PHREEQC, a software interface able to communicate and couple these two powerful simulators by means of a Java interface. The methodology is based on the Sequential Non-Iterative Approach (SNIA), where PHREEQC is compiled as a dynamic subroutine (IPhreeqc) that is called by the interface to solve the geochemical system at every element of the COMSOL finite element mesh. The numerical tool has been extensively verified by comparison with the computed results of 1D, 2D and 3D benchmark examples solved with other reactive transport simulators. COMSOL-PHREEQC is parallelized, so that CPU time can be highly optimized on multi-core processors or clusters. Fully 3D, detailed reactive transport problems can then be readily simulated.
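The SNIA coupling described above can be sketched in a few lines: solve transport first, then apply chemistry to each cell independently. The toy below substitutes exact first-order decay for the full PHREEQC chemistry call, and a 1-D periodic upwind grid for the COMSOL finite element transport step; all names and parameters are illustrative assumptions.

```python
import math

def snia_step(c, velocity, dt, h, lam):
    """One SNIA (sequential non-iterative) operator-splitting step:
    step 1 transports the concentrations, step 2 applies the chemistry
    cell by cell (here a toy exact first-order decay, standing in for
    the per-element geochemical solve that IPhreeqc performs)."""
    n = len(c)
    cfl = velocity * dt / h
    # step 1: explicit upwind advection on a periodic 1-D grid
    transported = [c[i] - cfl * (c[i] - c[i - 1]) for i in range(n)]
    # step 2: "chemistry" applied independently in every cell
    return [ci * math.exp(-lam * dt) for ci in transported]
```

Because the two operators are applied sequentially rather than iterated to consistency, SNIA incurs a splitting error of order Δt, which is why the time step must be kept small relative to the reaction rates.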
Picatoste Ruilope, Ricardo; Masi, Alessandro
Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators of the Large Hadron Collider, the largest particle accelerator, at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and also prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring step loss in the motors to be prevented and maintenance to be foreseen in case of mechanical degradation. To make this possible, an approach is proposed for applying an Extended Kalman Filter to a sensorless stepper motor drive when the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly...
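The estimator referred to above follows the standard predict/update cycle. As a hedged illustration (not the paper's motor model), here is that skeleton on a linear 2-state position/velocity system with the 2x2 algebra written out; an EKF has the same structure, with the transition and measurement matrices replaced by Jacobians of the nonlinear motor model at each step.

```python
def kf_step(x, P, z, dt, q, r):
    """One predict/update cycle of a 2-state (position, velocity) Kalman
    filter with a constant-velocity model and a scalar position
    measurement z. q is the process-noise variance, r the
    measurement-noise variance; both are illustrative tuning knobs."""
    # predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
    xp = [x[0] + dt * x[1], x[1]]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # update with H = [1, 0]: gain K = P H^T / (H P H^T + R)
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    y = z - xp[0]                        # innovation
    x_new = [xp[0] + k0 * y, xp[1] + k1 * y]
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new
```

Fed with position measurements alone, the filter reconstructs the unmeasured velocity state, which is the essence of sensorless operation: the "sensor" is the model plus whatever electrical quantities are actually observable at the drive end of the cable.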
High accuracy injection circuit for the calibration of a large pixel sensor matrix
Quartieri, E.; Comotti, D.; Manghisoni, M.
2013-01-01
Semiconductor pixel detectors, for particle tracking and vertexing in high energy physics experiments as well as for X-ray imaging, in particular at synchrotron light sources and XFELs, require a large-area sensor matrix. This work discusses the design and characterization of a high-linearity, low-dispersion injection circuit to be used for pixel-level calibration of detector readout electronics in a large pixel sensor matrix. The circuit provides a useful tool for characterizing the readout electronics of the pixel cell unit for both monolithic active pixel sensors and hybrid pixel detectors. In the latter case, the circuit allows for precise analogue testing of the readout channel already at the chip level, when no sensor is connected; moreover, it provides a simple means of calibrating the readout electronics once the detector has been connected to the chip. Two injection techniques are provided by the circuit: one for a charge-sensitive amplification readout channel and the other for a transresistance readout channel. The aim of the paper is to describe the architecture and design guidelines of the calibration circuit, which has been implemented in a 130 nm CMOS technology. Experimental results for the proposed injection circuit are presented in terms of linearity and dispersion.
Melendez, J; Hogeweg, L; Sánchez, C I; Philipsen, R H H M; Aldridge, R W; Hayward, A C; Abubakar, I; van Ginneken, B; Story, A
2018-05-01
Tuberculosis (TB) screening programmes can be optimised by reducing the number of chest radiographs (CXRs) requiring interpretation by human experts. Our objective was to evaluate the performance of computerised detection software in triaging CXRs in a high-throughput digital mobile TB screening programme. A retrospective evaluation of the software was performed on a database of 38 961 postero-anterior CXRs from unique individuals seen between 2005 and 2010, 87 of whom were diagnosed with TB. The software generated a TB likelihood score for each CXR, which was compared with a reference standard for notified active pulmonary TB using receiver operating characteristic (ROC) curve and localisation ROC (LROC) curve analyses. On ROC curve analysis, software specificity was 55.71% (95%CI 55.21-56.20) and negative predictive value was 99.98% (95%CI 99.95-99.99), at a sensitivity of 95%. The area under the ROC curve was 0.90 (95%CI 0.86-0.93). Results of the LROC curve analysis were similar. The software could identify more than half of the normal images in a TB screening setting while maintaining high sensitivity, and may therefore be used for triage.
Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu
2016-10-01
As an indispensable piece of equipment in inertial technology testing, the three-axis turntable is widely used in the calibration of various types of inertial navigation system (INS). In order to ensure the calibration accuracy of an INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test procedure is complex and inefficient. It is therefore relatively difficult for manufacturers of inertial measurement equipment to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by a laser gyro strapdown inertial navigation system (SINS) after fine alignment, it can be used as the attitude reference for the initial state measurement of a three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.
High accuracy velocity control method for the french moving-coil watt balance
Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick
2004-01-01
We describe a novel method of velocity control dedicated to the French moving-coil watt balance. In this project, a coil has to move in a magnetic field at a velocity of 2 mm s⁻¹ with a relative uncertainty of 10⁻⁹ over 60 mm. Our method is based on the use of a heterodyne Michelson interferometer, a two-level translation stage, and a homemade high-frequency phase-shifting electronic circuit. To quantify the stability of the velocity, the output of the interferometer is sent to a frequency counter and the Doppler frequency shift is recorded. The Allan standard deviation has been used to calculate the stability, and a σ_y(τ) of about 2.2×10⁻⁹ over 400 s has been obtained.
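The stability figure quoted above is an Allan deviation of the recorded Doppler-shift samples. A minimal non-overlapping estimator (our sketch, not the authors' analysis code) looks like this:

```python
import math

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of a series of fractional-frequency
    samples y, at averaging factor m (tau = m * sample interval):
    sigma_y^2(tau) = <(ybar_{i+1} - ybar_i)^2> / 2 over adjacent
    m-sample block averages."""
    n = len(y) // m
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    diff2 = sum((means[i + 1] - means[i]) ** 2 for i in range(n - 1))
    return math.sqrt(diff2 / (2.0 * (n - 1)))
```

Unlike the ordinary standard deviation, this two-sample statistic stays finite in the presence of slow frequency drift, which is why it is the standard measure for velocity and frequency stability over long averaging times.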
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using a designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the fast Fourier transform, this scheme, which combines the DFT with the Rife algorithm and Fourier coefficient interpolation, requires no multiplications and only half as many additions. Experimentally, with a sampling frequency of 10 MHz, real-time frequency measurements of intermediate-frequency, narrowband signals have a mean squared measurement error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and low computational cost.
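The Rife refinement mentioned above interpolates between the DFT peak bin and its larger neighbour. A minimal sketch (our illustration of the classic Rife two-point estimator, not the paper's recursive sigma-delta implementation) is:

```python
import cmath

def dft(x):
    # direct O(n^2) DFT; fine for a short illustrative record
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def rife_frequency(x, fs):
    """Estimate the frequency of a single tone: locate the DFT magnitude
    peak k, then refine with Rife's two-point interpolation
    delta = |X(k+1)| / (|X(k)| + |X(k+1)|) toward the larger neighbour."""
    n = len(x)
    mags = [abs(v) for v in dft(x)]
    k = max(range(n), key=mags.__getitem__)
    right, left = mags[(k + 1) % n], mags[(k - 1) % n]
    if right >= left:
        delta = right / (mags[k] + right)
    else:
        delta = -left / (mags[k] + left)
    return (k + delta) * fs / n
```

The interpolation recovers the fractional bin offset that a bare DFT peak search quantizes away, so the frequency error drops from half a bin width to a small fraction of it.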
A high-accuracy image registration algorithm using phase-only correlation for dental radiographs
Ito, Koichi; Nikaido, Akira; Aoki, Takafumi; Kosuge, Eiko; Kawamata, Ryota; Kashima, Isamu
2008-01-01
Dental radiographs have been used for the accurate assessment and treatment of dental diseases. Nonlinear deformation between two dental radiographs may be observed even if they are taken from the same oral region of the subject. For an accurate diagnosis, complete geometric registration between radiographs is required. This paper presents an efficient dental radiograph registration algorithm using the Phase-Only Correlation (POC) function. The use of the phase components of the 2D (two-dimensional) discrete Fourier transforms of dental radiograph images makes it possible to achieve highly robust image registration and recognition. Experimental evaluation using a dental radiograph database indicates that the proposed algorithm exhibits efficient recognition performance even for distorted radiographs. (author)
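The POC function itself is compact: normalize the cross spectrum to unit magnitude, inverse-transform, and read the displacement off the correlation peak. The paper works with 2-D radiographs; the 1-D sketch below (our illustration) only recovers a circular shift, but the phase-normalization step is the same.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def poc_shift(x1, x2):
    """Estimate the circular shift between two 1-D signals from the peak
    of the phase-only correlation: conj(F1)*F2 normalized to unit
    magnitude, then inverse-transformed; the argmax is the shift."""
    n = len(x1)
    f1, f2 = dft(x1), dft(x2)
    cross = [f1[k].conjugate() * f2[k] for k in range(n)]
    r = [c / abs(c) if abs(c) > 1e-12 else 0j for c in cross]
    # inverse DFT of the normalized cross spectrum
    poc = [sum(r[k] * cmath.exp(2j * cmath.pi * k * t / n)
               for k in range(n)).real / n for t in range(n)]
    return max(range(n), key=poc.__getitem__)
```

Discarding the magnitudes is what gives POC its robustness: intensity differences between the two radiographs cancel out, and a pure translation produces a sharp delta-like peak.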
Burress, Jacob; Bethea, Donald; Troub, Brandon
2017-05-01
The accurate measurement of adsorbed gas up to high pressures (~100 bar) is critical for the development of new materials for adsorbed gas storage. The typical Sieverts-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combined gravimetric and volumetric system for methane sorption measurements on samples between ~0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest-uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and with the literature.
Bartram, Jason C; Thewlis, Dominic; Martin, David T; Norton, Kevin I
2017-10-16
With knowledge of an individual's critical power (CP) and W′, the SKIBA 2 model provides a framework with which to track W′ balance during intermittent high-intensity work bouts. There are concerns that the time constant controlling the recovery rate of W′ (τ_W′) may require refinement to enable effective use in an elite population. Four elite endurance cyclists completed an array of intermittent exercise protocols to volitional exhaustion. Each protocol lasted approximately 3.5-6 minutes and featured a range of recovery intensities, set in relation to each athlete's CP (D_CP). Using the framework of the SKIBA 2 model, the τ_W′ values were modified for each protocol to achieve an accurate W′ at volitional exhaustion. Modified τ_W′ values were compared to equivalent SKIBA 2 τ_W′ values to assess the difference in recovery rates for this population. Plotting modified τ_W′ values against D_CP showed the adjusted relationship between work rate and recovery rate. Comparing modified τ_W′ values against the SKIBA 2 τ_W′ values showed a negative bias of 112±46 s (mean±95% CL), suggesting athletes recovered W′ faster than predicted by SKIBA 2 (p=0.0001). The modified τ_W′ to D_CP relationship was best described by a power function: τ_W′ = 2287.2·D_CP^−0.688 (R² = 0.433). The current SKIBA 2 model is not appropriate for use in elite cyclists, as it underpredicts the recovery rate of W′. The modified τ_W′ equation presented will require validation, but appears more appropriate for high-performance athletes. Individual τ_W′ relationships may be necessary in order to maximise the model's validity.
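The fitted power function above slots directly into a W′-balance tracker: linear depletion above CP, exponential recovery toward full W′ below CP. The sketch below is our illustration of that bookkeeping with the abstract's modified time constant; the CP, W′ and power values in the usage are invented, not the athletes' data.

```python
import math

def tau_wprime(dcp):
    # modified recovery time constant fitted in the study:
    # tau_W' = 2287.2 * D_CP^-0.688, with D_CP = CP - recovery power (W)
    return 2287.2 * dcp ** -0.688

def wprime_balance(power, cp, wprime, dt=1.0):
    """Track W' balance over a power series sampled every dt seconds:
    deplete by (P - CP)*dt above CP, recover exponentially toward the
    full W' below CP with the modified time constant."""
    bal, out = wprime, []
    for p in power:
        if p >= cp:
            bal -= (p - cp) * dt
        else:
            bal = wprime - (wprime - bal) * math.exp(-dt / tau_wprime(cp - p))
        out.append(bal)
    return out
```

Because τ_W′ shrinks as D_CP grows, easier recovery intervals refill W′ faster, which is exactly the behaviour the modified fit says SKIBA 2 underestimates for elite riders.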
Good, Ryan J; Leroue, Matthew K; Czaja, Angela S
2018-06-07
Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. We therefore sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort (n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and a negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
Automatic camera to laser calibration for high accuracy mobile mapping systems using INS
Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta
2013-09-01
A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high-resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The calibration methods, nevertheless, are often poorly documented, are almost always time-consuming, demand expert knowledge and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high-quality external calibration of a pinhole camera to a laser scanner which is automatic, easy to perform, robust and foolproof. The method presented here uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are each calibrated in relation to the INS; the transformation from camera to laser then contains the accumulated error of each sensor in relation to the INS. Here, the calibration of the camera is performed in relation to the laser frame, using the time synchronization between the sensors for data association. In this study, the use of the inertial relative movement is explored to collect more useful calibration data. This results in a better intersensor calibration, allowing better coloring of the point clouds and a more accurate depth mask for images, especially on the edges of objects in the scene.
Han, Song; Zhang, Wei; Zhang, Jie
2017-09-01
A fast sweeping method (FSM) determines the first-arrival traveltimes of seismic waves by sweeping the velocity model in different directions while applying a local solver. It is an efficient way to numerically solve Hamilton-Jacobi equations for traveltime calculations. In this study, we develop an improved FSM to calculate the first-arrival traveltimes of quasi-P (qP) waves in 2-D tilted transversely isotropic (TTI) media. A local solver utilizes the coupled slowness surface of qP and quasi-SV (qSV) waves to form a quartic equation, and solves it numerically to obtain the possible traveltimes of the qP-wave. The proposed quartic solver utilizes Fermat's principle to limit the range of the possible solution, then uses the bisection procedure to efficiently determine the real roots. With causality enforced during sweeping, our FSM converges in a few iterations, with the exact number depending on the complexity of the velocity model. To improve the accuracy, we employ high-order finite difference schemes and derive the second-order formulae. There is no weak-anisotropy assumption, and no approximation is made to the complex slowness surface of the qP-wave. In comparison to traveltimes calculated by a horizontal slowness shooting method, the validity and accuracy of our FSM are demonstrated.
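The sweep-plus-local-solver structure described above is easiest to see in the isotropic case. The sketch below is our minimal illustration with the classic Godunov upwind local solver for |∇T| = 1/v on a uniform grid; the paper's anisotropic solver replaces the quadratic local update with the numerically solved quartic, but the sweeping logic is the same.

```python
import math

def fast_sweep_eikonal(n, h, v, src):
    """First-arrival traveltimes for |grad T| = 1/v on an n-by-n grid with
    constant speed v and grid step h, via Gauss-Seidel sweeps in four
    alternating directions with the Godunov upwind local solver."""
    inf = float("inf")
    T = [[inf] * n for _ in range(n)]
    si, sj = src
    T[si][sj] = 0.0
    f = h / v                                   # slowness times grid step
    orders = [(range(n), range(n)),
              (range(n - 1, -1, -1), range(n)),
              (range(n), range(n - 1, -1, -1)),
              (range(n - 1, -1, -1), range(n - 1, -1, -1))]
    for _ in range(4):                          # repeat sweeps until converged
        for irange, jrange in orders:
            for i in irange:
                for j in jrange:
                    if (i, j) == (si, sj):
                        continue
                    a = min(T[i - 1][j] if i > 0 else inf,
                            T[i + 1][j] if i < n - 1 else inf)
                    b = min(T[i][j - 1] if j > 0 else inf,
                            T[i][j + 1] if j < n - 1 else inf)
                    if min(a, b) == inf:
                        continue
                    if abs(a - b) >= f:         # one-sided update
                        t = min(a, b) + f
                    else:                       # two-sided quadratic update
                        t = 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))
                    if t < T[i][j]:             # causality: never increase T
                        T[i][j] = t
    return T
```

Each of the four sweep orderings captures characteristics travelling in one quadrant of directions, which is why a constant-velocity model converges after a single pass over all four.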
Analysis of high accuracy, quantitative proteomics data in the MaxQB database.
Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias
2012-03-01
MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low-quality peptide identifications, to control for the inflation in false-positive identifications when combining data sets, and to integrate quantitative data. Although the contamination with low-quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high-resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project-specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins showed a negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database
Automated aberration correction of arbitrary laser modes in high numerical aperture systems.
Hering, Julian; Waller, Erik H; Von Freymann, Georg
2016-12-12
Controlling the point spread function in three-dimensional laser lithography is crucial for fabricating structures with the highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing to create well defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator-based automated aberration compensation to optimize arbitrary laser modes in a high numerical aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for the amplitude and phase of the pupil function, our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography, applications such as optical tweezers and microscopy might also benefit from the presented method.
Numerical solutions to the critical state in a magnet-high temperature superconductor interaction
Ruiz-Alonso, D; Coombs, T A; Campbell, A M [Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ (United Kingdom)
2005-02-01
This paper presents an algorithm to simulate the electromagnetic behaviour of devices containing high temperature superconductors in axially symmetric problems. The numerical method is built on the finite element method. The electromagnetic properties of HTSCs are described through the critical-state model. Measurements of the axial force between a permanent magnet and a melt-textured YBCO puck are obtained in order to validate the method. This simple system is modelled so that the proposed method obtains the current distribution and electromagnetic fields in the HTSC. The forces in the interaction between the magnet and the HTSC puck can then be calculated. A comparison between experimental and simulation results shows good agreement. The simplification of using the critical-state model and ignoring flux creep in this type of configuration is also explored.
Numerical modeling of heat outflux from a vitrified high level waste
Aravind, Arun; Jayaraj, Aparna; Seshadri, H.; Balasubramaniyan, V.
2018-01-01
Heat-generating vitrified high-level waste is initially stored in an interim storage facility with adequate cooling for a sufficient period of time, and is then proposed to be disposed of in deep geological repositories. Heat flux from the waste form can cause thermomechanical changes within the disposal module and in the surrounding rock, and may change the permeability of rock fractures over time. It is essential to study the long-term performance of a deep geological repository to build confidence in the design and overall operation of the disposal facility. In this study a numerical model was developed to study the temperature distribution in the waste matrix as well as the heat outflux to the surrounding rock matrix.
Numerical study of the Columbia high-beta device: Torus-II
Izzo, R.
1981-01-01
The ionization, heating and subsequent long-time-scale behavior of the helium plasma in the Columbia fusion device, Torus-II, are studied. The purpose of this work is to perform numerical simulations while maintaining a high level of interaction with experimentalists. The device is operated as a toroidal z-pinch to prepare the gas for heating. This ionization of helium is studied using a zero-dimensional, two-fluid code. It is essentially an energy balance calculation that follows the development of the various charge states of the helium and any impurities (primarily silicon and oxygen) that are present. The code is an atomic physics model of Torus-II. In addition to ionization, we include three-body and radiative recombination processes.
Annealing of ion irradiated high TC Josephson junctions studied by numerical simulations
Sirena, M.; Matzen, S.; Bergeal, N.; Lesueur, J.; Faini, G.; Bernard, R.; Briatico, J.; Crete, D. G.
2009-01-01
Recently, annealing of ion-irradiated high-T_c Josephson junctions (JJs) has been studied experimentally with the aim of improving their reproducibility. Here we present numerical simulations, based on random-walk and Monte Carlo calculations, of the evolution of JJ characteristics such as the transition temperature T_c′ and its spread ΔT_c′, and compare them with experimental results on junctions irradiated with 100 and 150 keV oxygen ions and annealed at low temperatures (below 80 °C). We have successfully used a vacancy-interstitial annihilation mechanism to describe the evolution of T_c′ and the homogeneity of a JJ array, analyzing the evolution of the mean defect density and its distribution width. The annealing first increases the spread in T_c′ for short annealing times, due to the stochastic nature of the process, but tends to reduce it for longer times, which is interesting for technological applications
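The vacancy-interstitial annihilation picture above lends itself to a very small Monte Carlo toy. The sketch below is our own illustration, not the authors' simulation: defects hop on a 1-D ring, and a vacancy-interstitial pair sharing a site annihilates, so the surviving defect density (a stand-in for the irradiation-induced T_c′ suppression) decays stochastically.

```python
import random

def anneal(n_pairs=50, size=100, steps=200, seed=1):
    """Toy random-walk model of vacancy-interstitial annihilation during
    low-temperature annealing: each defect hops one site per step on a
    ring of `size` sites; one V-I pair annihilates per shared site.
    Returns the surviving vacancy count after each step."""
    rng = random.Random(seed)
    vac = [rng.randrange(size) for _ in range(n_pairs)]
    inter = [rng.randrange(size) for _ in range(n_pairs)]
    history = []
    for _ in range(steps):
        for defects in (vac, inter):
            for k in range(len(defects)):
                defects[k] = (defects[k] + rng.choice((-1, 1))) % size
        for site in set(vac) & set(inter):   # annihilate one pair per site
            vac.remove(site)
            inter.remove(site)
        history.append(len(vac))
    return history
```

Run over an ensemble of junctions, the same mechanism reproduces the qualitative behaviour reported above: the spread of the surviving density first grows (each junction's annihilation history is random) and then shrinks as all junctions approach low defect counts.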
Numerical quantification and minimization of perimeter losses in high-efficiency silicon solar cells
Altermatt, P.P.; Heiser, Gernot; Green, M.A. [New South Wales Univ., Kensington, NSW (Australia)
1996-09-01
This paper presents a quantitative analysis of perimeter losses in high-efficiency silicon solar cells. A new method of numerical modelling is used, which provides the means to simulate a full-sized solar cell, including its perimeter region. We analyse the reduction in efficiency due to perimeter losses as a function of the distance between the active cell area and the cut edge. It is shown how the optimum distance depends on whether the cells in the panel are shingled or not. The simulations also indicate that passivating the cut-face with a thermal oxide does not increase cell efficiency substantially. Therefore, doping schemes for the perimeter domain are suggested in order to increase efficiency levels above present standards. Finally, perimeter effects in cells that remain embedded in the wafer during the efficiency measurement are outlined. (author)
A Numerical Study on the Impeller Meridional Curvature of High Pressure Multistage Pump
Kim, Deok Su; Jean, Sang Gyu; Mamatov, Sanjar [Hyosung Goodsprings, Inc., Busan (Korea, Republic of); Park, Warn Gyu [Pusan Nat’l Univ., Busan (Korea, Republic of)
2017-07-15
This paper presents the hydraulic design of the impeller and radial diffuser of a high-pressure multistage pump for reverse osmosis. The flow distribution and hydraulic performance of the meridional design of the impeller were analyzed numerically. Optimization was conducted based on the response surface method by varying the hub and shroud meridional curvatures while keeping the impeller outlet diameter, outlet width, and eye diameter constant. The analysis of head and efficiency as the impeller meridional profile was varied showed that the angle of the front shroud near the impeller outlet (ε_Ds) had the largest effect on head increase, while the hub inlet length (d_1i) and shroud curvature (R_ds) had the largest effect on efficiency. From the meridional profile variation, an increase in efficiency of approximately 0.5% was observed compared with the base model (case 25).
Determination of the QCD Λ-parameter and the accuracy of perturbation theory at high energies
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2016-04-01
We discuss the determination of the strong coupling α_MS(m_Z), or equivalently the QCD Λ-parameter. Its determination requires the use of perturbation theory in α_s(μ) in some scheme s and at some energy scale μ. The higher the scale μ, the more accurate perturbation theory becomes, owing to asymptotic freedom. As one step in our computation of the Λ-parameter in three-flavor QCD, we perform lattice computations in a scheme which allows us to non-perturbatively reach very high energies, corresponding to α_s = 0.1 and below. We find that (continuum) perturbation theory is very accurate there, yielding a three percent error in the Λ-parameter, while data around α_s ∼ 0.2 are clearly insufficient to quote such a precision. It is important to realize that these findings are expected to be generic, as our scheme has advantageous properties regarding the applicability of perturbation theory.
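The connection between α_s(μ) and Λ asserted above is simplest at one loop. The sketch below is an illustration only: the precision quoted in the abstract rests on higher-loop running and non-perturbative lattice input, not on this leading-order formula.

```python
import math

def alpha_s_one_loop(mu, lam, nf=3):
    """One-loop running coupling alpha_s(mu) for nf flavours, given the
    Lambda parameter: alpha_s = 1 / (b0 * ln(mu^2 / Lambda^2)),
    with b0 = (33 - 2 nf) / (12 pi)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu ** 2 / lam ** 2))

def lambda_from_alpha(mu, alpha, nf=3):
    # invert the one-loop formula: Lambda = mu * exp(-1 / (2 b0 alpha))
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return mu * math.exp(-1.0 / (2.0 * b0 * alpha))
```

The exponential sensitivity of Λ to 1/α_s in the inversion is what makes small coupling values so valuable: a fixed relative error in α_s translates into a much larger relative error in Λ at α_s ∼ 0.2 than at α_s ∼ 0.1.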
High-accuracy X-ray detector calibration based on cryogenic radiometry
Krumrey, M.; Cibik, L.; Müller, P.
2010-06-01
Cryogenic electrical substitution radiometers (ESRs) are absolute thermal detectors, based on the equivalence of electrical power and radiant power. Their core piece is a cavity absorber, which is typically made of copper to achieve a short response time. At higher photon energies, the use of copper prevents the operation of ESRs due to increasing transmittance. A new absorber design for hard X-rays has been developed at the laboratory of the Physikalisch-Technische Bundesanstalt (PTB) at the electron storage ring BESSY II. The Monte Carlo simulation code Geant4 was applied to optimize its absorptance for photon energies of up to 60 keV. The measurement of the radiant power of monochromatized synchrotron radiation was achieved with relative standard uncertainties of less than 0.2 %, covering the entire photon energy range of three beamlines from 50 eV to 60 keV. Monochromatized synchrotron radiation of high spectral purity is used to calibrate silicon photodiodes against the ESR for photon energies up to 60 keV with relative standard uncertainties below 0.3 %. For some silicon photodiodes, the photocurrent is not linear with the incident radiant power.
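The electrical-substitution principle underlying an ESR can be stated in a few lines: the radiant power equals the reduction in electrical heater power needed to hold the cavity absorber at constant temperature, and independent relative standard uncertainties combine in quadrature. A minimal sketch (the variable names and the example numbers are illustrative, not PTB's actual data reduction):

```python
def radiant_power(u_closed, i_closed, u_open, i_open):
    """Electrical substitution: heater power with the shutter closed
    (no radiation) minus heater power with the shutter open."""
    return u_closed * i_closed - u_open * i_open

def combined_rel_uncertainty(*rel_uncertainties):
    """Combine independent relative standard uncertainties in quadrature."""
    return sum(u * u for u in rel_uncertainties) ** 0.5
```

With heater voltage/current pairs measured in both shutter states, the watt-level electrical quantities are traceable to electrical standards, which is what makes the sub-0.2 % radiant-power uncertainties quoted above possible.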
High-accuracy local positioning network for the alignment of the Mu2e experiment.
Hejdukova, Jana B. [Czech Technical Univ., Prague (Czech Republic)]
2017-06-01
This diploma thesis describes the establishment of a high-precision local positioning network and accelerator alignment for the Mu2e physics experiment. Establishing the new network consists of a few steps: design of the network, pre-analysis, installation works, measurement of the network and adjustment. The adjustment was performed using two approaches: a geodetic approach, which takes the Earth's curvature into account, and a metrological approach, which uses a pure 3D Cartesian system. The two approaches are compared and evaluated in the results, and the differences are checked against expectations. The effect of the Earth's curvature was found to be significant for this kind of network and should not be neglected. The measurements were obtained with an Absolute Tracker AT401, a Leica DNA03 leveling instrument and a DMT Gyromat 2000 gyrotheodolite. The coordinates of the points of the reference network were determined by the Least Squares Method, and the overall view is attached as Annexes.
Afzal, F.; Raza, S.; Shafique, M.
2017-01-01
Objective: To determine the diagnostic accuracy of chest x-ray in interstitial lung disease (ILD) as confirmed by high resolution computed tomography (HRCT) of the chest. Study Design: A cross-sectional validation study. Place and Duration of Study: Department of Diagnostic Radiology, Combined Military Hospital Rawalpindi, from Oct 2013 to Apr 2014. Material and Method: A total of 137 patients with clinical suspicion of ILD aged 20-50 years of both genders were included in the study. Patients with a previous histopathological diagnosis, patients already on treatment and pregnant females were excluded. All patients had a chest x-ray and then HRCT. The x-ray and HRCT findings were recorded as presence or absence of ILD. Results: Mean age was 40.21 ± 4.29 years. Out of 137 patients, 79 (57.66 percent) were males and 58 (42.34 percent) were females, a male to female ratio of 1.36:1. Chest x-ray detected ILD in 80 (58.39 percent) patients, of whom 72 (true positive) had ILD and 8 (false positive) had no ILD on HRCT. Overall sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of chest x-ray in diagnosing ILD were 80.0 percent, 82.98 percent, 90.0 percent, 68.42 percent and 81.02 percent respectively. Conclusion: This study concluded that chest x-ray is a simple, non-invasive, economical and readily available alternative to HRCT, with an acceptable diagnostic accuracy of 81 percent in the diagnosis of ILD. (author)
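The reported figures follow from the implied 2×2 confusion matrix (72 true positives and 8 false positives are stated; the 80% sensitivity then implies 18 false negatives, leaving 39 true negatives out of 137). A quick check:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts implied by the abstract above.
metrics = diagnostic_metrics(tp=72, fp=8, fn=18, tn=39)
```

Evaluating `metrics` reproduces the quoted 80.0, 82.98, 90.0, 68.42 and 81.02 percent.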
Uskul, Ayse K; Paulmann, Silke; Weick, Mario
2016-02-01
Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (c) 2016 APA, all rights reserved.
Schaffer, L.; Burns, J. A.
1994-01-01
We use a combination of analytical and numerical methods to investigate the dynamics of charged dust grains in planetary magnetospheres. Our emphasis is on obtaining results valid for particles that are not necessarily dominated by either gravitational or electromagnetic forces. A Hamiltonian formulation of the problem yields exact results, for all values of charge-to-mass ratio, when we introduce two constraints: particles remain in the equatorial plane and the magnetic field is taken as axially symmetric. In particular, we obtain locations of equilibrium points, the frequencies of stable periodic orbits, the topology of separatrices in phase space, and the rate of longitudinal drift. These results are significant for specific applications: motion in the nearly aligned dipolar field of Saturn, and the trajectories of arbitrarily charged particles in complex magnetic fields for limited periods of time after ejection from parent bodies. Since the model is restrictive, we also use numerical integrations of the full three-dimensional equations of motion and illustrate under what conditions the constrained problem yields reasonable results. We show that a large fraction of the intermediately charged and highly charged (gyrating) particles will always be lost to a planet's atmosphere within a few hundred hours, for motion through tilted-dipole magnetic fields. We find that grains must have a very high charge-to-mass ratio in order to be mirrored back to the ring plane. Thus, except perhaps at Saturn where the dipole tilt is very small, the likely inhabitants of the dusty ring systems are those particles that are either nearly Keplerian (weakly charged) grains or grains whose charges place them in the lower end of the intermediate charge zone. Finally, we demonstrate the effect of plasma drag on the orbits of gyrating particles to be a rapid decrease in gyroradius followed by a slow radial evolution of the guiding center.
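Numerical integration of charged-grain trajectories of this kind is commonly done with a Boris-type push, which handles the velocity-dependent Lorentz force stably; the sketch below is a generic integrator (not the authors' code), where a constant acceleration such as gravity can be folded into the E term:

```python
def cross(a, b):
    """Cross product of two 3-vectors represented as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(x, v, q_over_m, E, B, dt):
    """One Boris step for dv/dt = (q/m)(E + v x B), dx/dt = v.
    The magnetic rotation exactly preserves |v| when E = 0."""
    h = 0.5 * q_over_m * dt
    v_minus = tuple(v[i] + h * E[i] for i in range(3))     # half electric kick
    t = tuple(h * B[i] for i in range(3))                  # rotation vector
    t2 = sum(c * c for c in t)
    v_prime = tuple(v_minus[i] + cross(v_minus, t)[i] for i in range(3))
    s = tuple(2.0 * c / (1.0 + t2) for c in t)
    v_plus = tuple(v_minus[i] + cross(v_prime, s)[i] for i in range(3))
    v_new = tuple(v_plus[i] + h * E[i] for i in range(3))  # second half kick
    x_new = tuple(x[i] + v_new[i] * dt for i in range(3))
    return x_new, v_new
```

For pure gyromotion (uniform B, no E) the push conserves speed to machine precision over long runs, which is what makes such schemes suitable for the few-hundred-hour integrations discussed above.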
Numerical evaluation of flow through a prismatic very high temperature gas-cooled reactor
Barros Filho, Jose A.; Santos, Andre A.C.; Navarro, Moyses A.; Ribeiro, Felipe Lopes
2011-01-01
The High-Temperature Gas-cooled Reactor (HTGR) is a Next Generation nuclear system with a good chance of serving as an energy source in the near future, owing to its potential to supply hydrogen without greenhouse gas emissions. Recently, improvements in the HTGR design led to the Very High Temperature Reactor (VHTR) concept, in which the outlet temperature of the coolant gas reaches 1000 deg C, increasing the efficiency of hydrogen and electricity generation. Among the core concepts emerging in VHTR development stands out the prismatic block, which uses coated fuel microspheres named TRISO pressed into cylinders and assembled in hexagonal graphite blocks stacked to form columns. The graphite blocks contain flow channels around the fuel cylinders for the helium coolant. In this study an analysis is performed with the CFD code CFX 13.0 on a prismatic fuel assembly in order to investigate its thermo-fluid dynamic performance. The simulations were made on a 1/12 fuel element model of the GT-MHR design, which was developed by General Atomics. A numerical mesh verification process based on the Grid Convergence Index (GCI) was performed using five progressively refined meshes to assess the numerical uncertainty of the simulation and determine adequate mesh parameters. An analysis was also performed to evaluate different methods of defining the inlet and outlet boundary conditions: simulations of models with and without inlet and outlet plena were compared, showing that the presence of the plena gives a more realistic flow distribution. (author)
Warwick R Adams
Parkinson's Disease (PD) is a progressive neurodegenerative movement disease affecting over 6 million people worldwide. Loss of dopamine-producing neurons results in a range of both motor and non-motor symptoms; however, there is currently no definitive test for PD by non-specialist clinicians, especially in the early disease stages where the symptoms may be subtle and poorly characterised. This results in a high misdiagnosis rate (up to 25%) by non-specialists, and people can have the disease for many years before diagnosis. There is a need for a more accurate, objective means of early detection, ideally one which can be used by individuals in their home setting. In this investigation, keystroke timing information from 103 subjects (comprising 32 with mild PD severity and the remainder non-PD controls) was captured as they typed on a computer keyboard over an extended period, and showed that PD affects various characteristics of hand and finger movement and that these can be detected. A novel methodology was used to classify the subjects' disease status, utilising a combination of many keystroke features analysed by an ensemble of machine learning classification models. When applied to two separate participant groups, this approach was able to successfully discriminate between early-PD subjects and controls with 96% sensitivity, 97% specificity and an AUC of 0.98. The technique does not require any specialised equipment or medical supervision, and does not rely on the experience and skill of the practitioner. Regarding more general application, it currently does not incorporate a second cardinal disease symptom, so it may not differentiate PD from similar movement-related disorders.
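The ensemble-classification idea described above can be illustrated with a toy majority-vote sketch over keystroke-timing features. The feature names and thresholds below are hypothetical, purely for illustration; the study itself learned an ensemble of machine learning models from many such features:

```python
def majority_vote(classifiers, sample):
    """Ensemble prediction: each classifier votes 0 or 1; the majority wins."""
    votes = sum(clf(sample) for clf in classifiers)
    return 1 if votes * 2 > len(classifiers) else 0

# Hypothetical keystroke-timing features (seconds) with illustrative thresholds.
ensemble = [
    lambda s: 1 if s["hold_mean"] > 0.11 else 0,       # mean key-hold time
    lambda s: 1 if s["hold_var"] > 4e-4 else 0,        # hold-time variance
    lambda s: 1 if s["latency_mean"] > 0.25 else 0,    # mean inter-key latency
]
```

Combining several weak, noisy per-feature decisions is what lets such an approach reach the sensitivity/specificity figures reported, without any single feature being diagnostic on its own.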
Koopman, Richelle J; Kochendorfer, Karl M; Moore, Joi L; Mehr, David R; Wakefield, Douglas S; Yadamsuren, Borchuluun; Coberly, Jared S; Kruse, Robin L; Wakefield, Bonnie J; Belden, Jeffery L
2011-01-01
We compared use of a new diabetes dashboard screen with use of a conventional approach of viewing multiple electronic health record (EHR) screens to find data needed for ambulatory diabetes care. We performed a usability study, including a quantitative time study and qualitative analysis of information-seeking behaviors. While being recorded with Morae Recorder software and "think-aloud" interview methods, 10 primary care physicians first searched their EHR for 10 diabetes data elements using a conventional approach for a simulated patient, and then using a new diabetes dashboard for another. We measured time, number of mouse clicks, and accuracy. Two coders analyzed think-aloud and interview data using grounded theory methodology. The mean time needed to find all data elements was 5.5 minutes using the conventional approach vs 1.3 minutes using the diabetes dashboard, with fewer mouse clicks and fewer errors using the dashboard. The dashboard improves both the efficiency and accuracy of acquiring data needed for high-quality diabetes care. Usability analysis tools can provide important insights into the value of optimizing physician use of health information technologies.
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
Inertial navigation systems are core components of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, errors caused by the misalignment angles and the scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements of high-accuracy calibration for a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS, together with the corresponding self-calibration procedure. The results of a self-calibration simulation experiment proved that the scheme can estimate all the errors in the calibration error model: the calibration precision for the inertial sensors' scale factor error is less than 1 ppm and for the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Numerical Modeling of MILD Combustion at High Pressure to Predict the Optimal Operating Conditions
Vanteru, Mahendra Reddy; Roberts, William L.
2017-01-01
This chapter presents numerical simulations of MILD combustion at high pressure. The influence of oxidizer preheat, oxidizer dilution and operating pressure on the stabilization of MILD combustion is presented. Three preheat temperatures (1100, 1300 and 1500 K) and three dilution levels (3, 6 and 9% O2) are simulated over operating pressures from 1 atm to 16 atm. A classical jet-in-hot-coflow burner is considered for this study. A total of 45 cases are simulated and analyzed. Essential characteristics of MILD combustion, i.e., maximum temperature (Tmax), temperature rise (ΔT) and temperature distributions, are analyzed. The distributions of OH and CO emissions are also studied and presented. Well-stabilized MILD combustion is observed for all cases except two cases with high preheat (1500 K). Peak temperature is observed to decrease with increasing operating pressure for a given level of preheat and dilution. OH mass fraction is reduced with increasing pressure. The CO emissions show little sensitivity to operating pressure; however, CO mass fraction is slightly higher at 1 atm than at 4 to 16 atm. Since the residence time of the reactants increases as the operating pressure increases, well-stabilized MILD combustion is observed for all highly diluted, low-preheat cases (3% O2 and 1100 K).
Nandi, S.; Layns, A. L.; Goldberg, M.; Gambacorta, A.; Ling, Y.; Collard, A.; Grumbine, R. W.; Sapper, J.; Ignatov, A.; Yoe, J. G.
2017-12-01
This work describes the end-to-end operational implementation of high priority products from National Oceanic and Atmospheric Administration's (NOAA) operational polar-orbiting satellite constellation, including the Suomi National Polar-orbiting Partnership (S-NPP) and the Joint Polar Satellite System initial satellite (JPSS-1), into numerical weather prediction and earth systems models. Development and evaluation needed for the initial implementations of VIIRS Environmental Data Records (EDR) for Sea Surface Temperature ingestion in the Real-Time Global Sea Surface Temperature Analysis (RTG), and of Polar Winds assimilated in the National Weather Service (NWS) Global Forecast System (GFS), is presented. These implementations ensure continuity of data in these models in the event of loss of legacy sensor data. Also discussed is the accelerated operational implementation of Advanced Technology Microwave Sounder (ATMS) Temperature Data Records (TDR) and Cross-track Infrared Sounder (CrIS) Sensor Data Records, identified as Key Performance Parameters (KPPs) by the National Weather Service. Operational use of S-NPP data after its launch on 28 October 2011 took more than one year, owing to the learning curve and the development needed for full exploitation of new remote sensing capabilities. Today, ATMS and CrIS data positively impact weather forecast accuracy. For NOAA's JPSS initial satellite (JPSS-1), scheduled for launch in late 2017, we identify the scope and timelines for pre-launch and post-launch activities needed to efficiently transition these capabilities into operations. As part of these alignment efforts, operational readiness for KPPs will be possible as soon as 90 days after launch. The schedule acceleration is possible because of the experience with S-NPP. The NOAA operational polar-orbiting satellite constellation provides continuity and enhancement of earth systems observations out to 2036. Program best practices and lessons learned will inform future implementation for the follow-on JPSS-3 and -4.
Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.
2014-12-01
This work presents a method for evaluating the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This is done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth, color camera was mobile (installed in a car) but operated from a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time stamped, allowing comparison of events between the cameras and the LLS. A RAMMER sensor is basically composed of a computer, a Phantom version 9.1 high-speed camera and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the GPS-surveyed position of the triangulated object and the result of the visual triangulation method. Lightning return stroke positions estimated with the visual triangulation method were compared with LLS locations; differences between solutions were not greater than 1.8 km.
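The visual triangulation step can be sketched as the intersection of two bearing rays from known camera positions. The following generic 2-D version (not the authors' exact procedure) solves the small linear system for the crossing point:

```python
import math

def triangulate(p1, theta1, p2, theta2):
    """Intersect two 2-D bearing rays p_i + t_i * (cos(theta_i), sin(theta_i)).
    Angles are in radians, measured from the x-axis; positions are (x, y)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule.
    det = d1[1] * d2[0] - d1[0] * d2[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are (nearly) parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (ry * d2[0] - rx * d2[1]) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

With more than two cameras, the same idea generalizes to a least-squares intersection of several rays, which improves robustness against small bearing errors.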
Rabb, Savelas A.; Olesik, John W.
2008-01-01
The ability to obtain high precision, high accuracy measurements in samples with complex matrices using High Performance Inductively Coupled Plasma-Optical Emission Spectroscopy (HP-ICP-OES) was investigated. The Common Analyte Internal Standard (CAIS) procedure was incorporated into the HP-ICP-OES method to correct for matrix-induced changes in emission intensity ratios. Matrix matching and standard addition approaches to minimize matrix-induced errors when using HP-ICP-OES were also assessed. The HP-ICP-OES method was tested with synthetic solutions in a variety of matrices, alloy standard reference materials and geological reference materials.
Görgens, Christian; Guddat, Sven; Thomas, Andreas; Wachsmuth, Philipp; Orlovius, Anne-Katrin; Sigmund, Gerd; Thevis, Mario; Schänzer, Wilhelm
2016-11-30
So far, in sports drug testing, compounds of different classes are processed and measured using different screening procedures. The constantly increasing number of samples in doping analysis, as well as the large number of substances with doping-related pharmacological effects, requires the development of even more powerful assays than those already employed in sports drug testing, indispensably with reduced sample preparation. The analysis of native urine samples after direct injection provides a promising analytical approach with broad applicability to many different compounds and their metabolites, without time-consuming sample preparation. In this study, a novel multi-target approach based on liquid chromatography and high resolution/high accuracy mass spectrometry is presented to screen for more than 200 analytes of various classes of doping agents far below the detection limits required in sports drug testing. Classic groups of drugs such as diuretics, stimulants, β2-agonists, narcotics and anabolic androgenic steroids, as well as various newer target compounds such as hypoxia-inducible factor (HIF) stabilizers, selective androgen receptor modulators (SARMs), selective estrogen receptor modulators (SERMs), plasma volume expanders and other doping-related compounds listed in the 2016 WADA prohibited list, were implemented. As a main achievement, growth hormone releasing peptides, which chemically belong to the group of small peptides, could also be implemented. The assay was characterized concerning linearity (0.99), limit of detection (0.1-25 ng/mL; 3'OH-stanozolol glucuronide: 50 pg/mL; dextran/HES: 10 μg/mL) and matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Vujadinovic, Mirjam; Vukovic, Ana; Cvetkovic, Bojan; Pejanovic, Goran; Nickovic, Slobodan; Djurdjevic, Vladimir; Rajkovic, Borivoj; Djordjevic, Marija
2015-04-01
In May 2014 the west Balkan region was affected by catastrophic floods in Serbia, Bosnia and Herzegovina and eastern parts of Croatia. Observed precipitation amounts were extremely high, at many stations the largest ever recorded. In the period from the 12th to the 18th of May, most of Serbia received between 50 and 100 mm of rainfall, while the western parts of the country, which were affected the most, had over 200 mm of rainfall, locally even more than 300 mm. This very intense precipitation came when the soil was already saturated after a very wet period during the second half of April and the beginning of May, when most of Serbia received between 120 and 170 mm of rainfall. New abundant precipitation on already saturated soil increased surface and underground water flow and caused floods, soil erosion and landslides. High water levels, most of them record breaking, were measured on the Sava, Drina, Dunav, Kolubara, Ljig, Ub, Toplica, Tamnava, Jadar, Zapadna Morava, Velika Morava, Mlava and Pek rivers. Overall, two cities and 17 municipalities were severely affected by the floods, 32,000 people were evacuated from their homes, and 51 died. Material damage to infrastructure, the energy power system, crops, livestock and houses is estimated at more than 2 billion euro. Although the operational numerical weather forecast gave a generally good precipitation prediction, flood forecasting in this case was mainly done through expert judgment rather than dynamic hydrological modeling. We applied an integrated atmospheric-hydrologic modelling system to some of the most impacted catchments in order to simulate the hydrological response in a timely manner and examine its potential as a flood warning system. The system is based on the Non-hydrostatic Multiscale Model NMMB, a numerical weather prediction model that can be used on a broad range of spatial and temporal scales; its non-hydrostatic module allows high horizontal resolution and the resolving of cloud systems as well as large-scale systems.
Li, Yongkai; Yi, Ming; Zou, Xiufen
2014-01-01
To gain insights into the mechanisms of cell fate decision in a noisy environment, the effects of intrinsic and extrinsic noises on cell fate are explored at the single cell level. Specifically, we theoretically define the impulse of Cln1/2 as an indication of cell fates. The strong dependence between the impulse of Cln1/2 and cell fates is exhibited. Based on the simulation results, we illustrate that increasing intrinsic fluctuations causes the parallel shift of the separation ratio of Whi5P but that increasing extrinsic fluctuations leads to the mixture of different cell fates. Our quantitative study also suggests that the strengths of intrinsic and extrinsic noises around an approximate linear model can ensure a high accuracy of cell fate selection. Furthermore, this study demonstrates that the selection of cell fates is an entropy-decreasing process. In addition, we reveal that cell fates are significantly correlated with the range of entropy decreases. PMID:25042292
Yao, W.E.; Hershkowitz, N.; Intrator, T.
1985-01-01
The floating potential of the emissive probe has been used to directly measure the plasma potential. The authors have recently presented another method for directly indicating the plasma potential with a differential emissive probe. In this paper they describe the effects of probe size, plasma density and plasma potential fluctuation on plasma potential measurements, and give methods for reducing errors. A control system with fast time response (≅ 20 μs) and high accuracy (of the order of the probe temperature T_w/e) for maintaining a differential emissive probe at plasma potential has been developed. It can be operated in pulsed discharge plasma to measure the dynamic characteristics of the plasma potential. A solid state optical coupler is employed to improve circuit performance. This system was tested experimentally by measuring the plasma potential in an argon plasma device and on the Phaedrus tandem mirror.
A high-accuracy extraction of the isoscalar πN scattering length from pionic deuterium data
Phillips, Daniel R.; Baru, Vadim; Hanhart, Christoph; Nogga, Andreas; Hoferichter, Martin; Kubis, Bastian
2010-01-01
We present a high-accuracy calculation of the π⁻d scattering length using chiral perturbation theory up to order (M_π/m_p)^(7/2). For the first time isospin-violating corrections are included consistently. The resulting value of a_(π⁻d) has a theoretical uncertainty of a few percent. We use it, together with data on pionic deuterium and pionic hydrogen atoms, to extract the isoscalar and isovector pion-nucleon scattering lengths from a combined analysis, and obtain a⁺ = (7.9±3.2)·10⁻³ M_π⁻¹ and a⁻ = (86.3±1.0)·10⁻³ M_π⁻¹.
Paulsen, P.J.; Beary, E.S.
1996-01-01
At NIST (National Institute of Standards and Technology), ICP-MS ID (inductively coupled plasma mass spectrometry with isotope dilution) has been used to certify a wide range of elements in a variety of materials with high accuracy. Both the chemical preparation and the instrumental procedures are simpler than with other isotope dilution mass spectrometric techniques. The ICP-MS has picogram/mL detection limits for most elements using fixed operating parameters. Chemical separations are required only to remove an interference (from molecular ions as well as isobaric atoms) or to pre-concentrate the analyte. For example, chemical separations were required for the analysis of SRM 2711, Montana II Soil, but not for boron in peach leaves, SRM 1547. (3 refs., 3 tabs., 2 figs.)
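The textbook single-dilution relation behind such isotope dilution certifications expresses the analyte amount in terms of the measured isotope-amount ratio of the sample-spike blend. A minimal sketch (symbols are generic, not NIST's working equations):

```python
def idms_amount(n_spike, a_x, b_x, a_s, b_s, r_blend):
    """Moles of analyte in the sample from single isotope dilution.
    a_*/b_* are the amount fractions of isotopes A and B in the sample (x)
    and the spike (s); r_blend is the measured ratio n(A)/n(B) in the blend.
    Derived from r_blend = (a_x*n_x + a_s*n_s) / (b_x*n_x + b_s*n_s)."""
    return n_spike * (a_s - r_blend * b_s) / (r_blend * b_x - a_x)
```

Because only an isotope ratio of the blend needs to be measured, the result is insensitive to analyte loss after spike equilibration, which is a key reason for the high accuracy of the method.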
Two scale damage model and related numerical issues for thermo-mechanical high cycle fatigue
Desmorat, R.; Kane, A.; Seyedi, M.; Sermage, J.P.
2007-01-01
Based on the idea that fatigue damage is localized at the microscopic scale, a scale smaller than the mesoscopic one of the Representative Volume Element (RVE), a three-dimensional two-scale damage model has been proposed for High Cycle Fatigue applications. It is extended here to aniso-thermal cases and then to thermo-mechanical fatigue. The modeling consists in the micro-mechanical analysis of a weak micro-inclusion subjected to plasticity and damage, embedded in an elastic meso-element (the RVE of continuum mechanics). Considering plasticity coupled with damage equations at the micro-scale, together with the Eshelby-Kröner localization law, allows computing the value of microscopic damage up to failure for any kind of loading: 1D or 3D, cyclic or random, isothermal or aniso-thermal, mechanical, thermal or thermo-mechanical. A robust numerical scheme is proposed in order to make the computations fast. A post-processor for damage and fatigue (DAMAGE-2005) has been developed; it applies to complex thermo-mechanical loadings. Examples are given of the representation by the two-scale damage model of physical phenomena related to High Cycle Fatigue, such as the mean stress effect and the non-linear accumulation of damage. Examples of thermal and thermo-mechanical fatigue, as well as complex applications to a real-size test structure subjected to thermo-mechanical fatigue, are detailed. (authors)
Wang, Hui; Magnain, Caroline; Sakadžić, Sava; Fischl, Bruce; Boas, David A
2017-12-01
Quantification of tissue optical properties with optical coherence tomography (OCT) has proven to be useful in evaluating structural characteristics and pathological changes. Previous studies primarily used an exponential model to analyze low numerical aperture (NA) OCT measurements and obtain the total attenuation coefficient for biological tissue. In this study, we develop a systematic method that includes the confocal parameter for modeling the depth profiles of high NA OCT, when the confocal parameter cannot be ignored. This approach enables us to quantify tissue optical properties with higher lateral resolution. The model parameter predictions for the scattering coefficients were tested with calibrated microsphere phantoms. The application of the model to human brain tissue demonstrates that the scattering and back-scattering coefficients each provide unique information, allowing us to differentially identify laminar structures in primary visual cortex and distinguish various nuclei in the midbrain. The combination of the two optical properties greatly enhances the power of OCT to distinguish intricate structures in the human brain beyond what is achievable with measured OCT intensity information alone, and therefore has the potential to enable objective evaluation of normal brain structure as well as pathological conditions in brain diseases. These results represent a promising step for enabling the quantification of tissue optical properties from high NA OCT.
Quality and sensitivity of high-resolution numerical simulation of urban heat islands
Li, Dan; Bou-Zeid, Elie
2014-05-01
High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias in the simulated surface temperatures. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).
Goyal, Rahul; Trivedi, Chirag; Kumar Gandhi, Bhupendra; Cervantes, Michel J.
2017-07-01
Hydraulic turbines are operated over an extended operating range to meet real-time electricity demand. Turbines operated at part load experience flow parameters far from the design values. This results in unstable flow conditions in the runner and draft tube, developing low-frequency, high-amplitude pressure pulsations. The unsteady pressure pulsations affect the dynamic stability of the turbine and cause additional fatigue. The work presented in this paper discusses the flow field investigation of a high head model Francis turbine at part load: 50% of the rated load. Numerical simulation of the complete turbine has been performed. Unsteady pressure pulsations in the vaneless space, runner, and draft tube are investigated and validated with available experimental data. Detailed analyses of the rotor-stator interaction and the draft tube flow field are performed and discussed. The analysis shows the presence of a rotating vortex rope in the draft tube at a frequency of 0.3 times the runner rotational frequency. The frequency of the vortex rope precession, which causes severe fluctuations and vibrations in the draft tube, is predicted within 3.9% of the experimentally measured value. The vortex rope produces pressure pulsations that propagate through the system; their frequency is also perceived in the runner and upstream of the runner.
Numerical Study of High-Speed Droplet Impact on Surfaces and its Physical Cleaning Effects
Kondo, Tomoki; Ando, Keita
2015-11-01
Spurred by the demand for cleaning techniques of low environmental impact, one favors physical cleaning that does not rely on any chemicals. One of the promising candidates is based on water jets that often undergo fission into droplet fragments and collide with target surfaces to which contaminant particles (often micron-sized or even smaller) stick. Hydrodynamic forces (e.g., shearing and lifting) arising from the droplet impact will play a role in removing the particles, but the detailed mechanism is still unknown. To explore the role of high-speed droplet impact in physical cleaning, we solve the compressible Navier-Stokes equations with a finite volume method that is designed to capture both shocks and material interfaces in an accurate and robust manner. Water hammer and shear flow accompanying high-speed droplet impact at a rigid wall are simulated to evaluate the lifting force and rotating torque, which are relevant to the application of particle removal. For the simulation, we use the numerical code recently developed by the Computational Flow Group led by Tim Colonius at Caltech. The first author thanks Jomela Meng for her help in handling the code during his stay at Caltech.
Generalized MHD for numerical stability analysis of high-performance plasmas in tokamaks
Mikhailovskii, A.B.
1998-01-01
provide a basis for development of generalized MHD codes for numerical stability analysis of high-performance plasmas in tokamaks. (author)
Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.
2010-09-01
The high proportion of the total primary energy consumption by buildings has increased the public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch) predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity of the high-resolution limited area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists in minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and therefore, are particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability in part caused by small scale and fast changing cloud formation and dissolution processes being only partially represented in the COSMO-7 grid point predictions. On the
Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira
2010-01-01
SERAPHIM is a computer program for the simulation of compressible multiphase flow involving the sodium-water chemical reaction under a tube failure accident in a steam generator of sodium-cooled fast reactors. In this study, numerical analysis of highly underexpanded air jets into air or into water was performed as part of the validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method considering compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate multiphase flow including supersonic gaseous jets. In the case of the air jet into air, the calculated pressure, the shape of the jet and the location of the Mach disk agreed with existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into water was also reproduced successfully by the proposed numerical method. (author)
Abou Chakra, Charbel; Somma, Janine; Elali, Taha; Drapeau, Laurent
2017-04-01
Climate change and its negative impact on water resources are well described. For countries like Lebanon, undergoing a major population rise and already facing decreasing precipitation, effective water resources management is crucial. Continuous and systematic monitoring over long periods of time is therefore an important activity for investigating drought risk scenarios for the Lebanese territory. Snow cover on the Lebanese mountains is the most important water reserve. Consequently, systematic observation of snow cover dynamics plays a major role in supporting hydrologic research with accurate data on snow cover volumes over the melting season. For the last 20 years, few studies have been conducted on the Lebanese snow cover. They focused on estimating the snow cover surface using remote sensing and terrestrial measurements, without obtaining accurate maps for the sampled locations. Indeed, estimates of both snow cover area and volume are difficult owing to the very high variability of snow accumulation and the topographic heterogeneity of the slopes of the Lebanese mountain chains. Therefore, measuring the snow cover relief in its three-dimensional aspect and computing its Digital Elevation Model is essential to estimate snow cover volume. Despite the need to cover the whole Lebanese territory, we favored an experimental terrestrial topographic site approach owing to the cost of high resolution satellite imagery, its limited accessibility and its acquisition restrictions. It is also most challenging to model snow cover at the national scale. We therefore selected a representative witness sinkhole located at Ouyoun el Siman to undertake systematic and continuous observations based on a topographic approach using a total station. After four years of continuous observations, we established the relation between the snow melt rate, the date of total melting and the discharges of neighboring springs. Consequently, we are able to forecast, early in the season, dates of total snowmelt and springs low
Xia, Wei; Li, Chuncheng; Hao, Hui; Wang, Yiping; Ni, Xiaoqi; Guo, Dongmei; Wang, Ming
2018-02-01
A novel position-sensitive Fabry-Perot interferometer was constructed with direct phase modulation by a built-in electro-optic modulator. Pure sinusoidal phase modulation of the light was produced, and the first harmonic of the interference signal was extracted to dynamically maintain the interferometer phase to the most sensitive point of the interferogram. Therefore, the minute vibration of the object was coded on the variation of the interference signal and could be directly retrieved by the output voltage of a photodetector. The operating principle and the signal processing method for active feedback control of the interference phase have been demonstrated in detail. The developed vibration sensor was calibrated through a high-precision piezo-electric transducer and tested by a nano-positioning stage under a vibration magnitude of 60 nm and a frequency of 300 Hz. The active phase-tracking method of the system provides high immunity against environmental disturbances. Experimental results show that the proposed interferometer can effectively reconstruct tiny vibration waveforms with subnanometer resolution, paving the way for high-accuracy vibration sensing, especially for micro-electro-mechanical systems/nano-electro-mechanical systems and ultrasonic devices.
High-Accuracy Tidal Flat Digital Elevation Model Construction Using TanDEM-X Science Phase Data
Lee, Seung-Kuk; Ryu, Joo-Hyung
2017-01-01
This study explored the feasibility of using TanDEM-X (TDX) interferometric observations of tidal flats for digital elevation model (DEM) construction. Our goal was to generate high-precision DEMs in tidal flat areas, because accurate intertidal zone data are essential for monitoring coastal environment sand erosion processes. To monitor dynamic coastal changes caused by waves, currents, and tides, very accurate DEMs with high spatial resolution are required. The bi- and monostatic modes of the TDX interferometer employed during the TDX science phase provided a great opportunity for highly accurate intertidal DEM construction using radar interferometry with no time lag (bistatic mode) or an approximately 10-s temporal baseline (monostatic mode) between the master and slave synthetic aperture radar image acquisitions. In this study, DEM construction in tidal flat areas was first optimized based on the TDX system parameters used in various TDX modes. We successfully generated intertidal zone DEMs with 57-m spatial resolutions and interferometric height accuracies better than 0.15 m for three representative tidal flats on the west coast of the Korean Peninsula. Finally, we validated these TDX DEMs against real-time kinematic-GPS measurements acquired in two tidal flat areas; the correlation coefficient was 0.97 with a root mean square error of 0.20 m.
Hu, C.R.
1998-01-01
A fundamental topological consequence of the unconventional (i.e., non-s-wave) pairing symmetry of high-Tc superconductors (HTSCs) is the existence of midgap (quasi-particle) states (MSs) bound to surfaces, interfaces and other locations. This prediction by the author has most likely solved a decade-old puzzle, viz., the ubiquitous observation of a zero-bias conductance peak (ZBCP) in tunneling experiments performed on HTSCs. There are also numerous other novel consequences of these MSs, predicted by various researchers, including a new Josephson critical current term; an (already observed) low-temperature splitting of the ZBCP, due possibly to a spontaneous breaking of time-reversal symmetry at a sample surface; a new explanation of the paramagnetic Meissner effect; and a giant magnetic moment. Here the author reviews the physical origin of the MSs, the several extensions of the original idea, and the many novel consequences of these MSs, some of which have been investigated quantitatively and some others only deduced in qualitative terms so far.
Lorrette, Ch.
2007-04-01
This work is an original contribution to the study of the thermal behaviour of thermo-structural composite materials. It aims to develop a methodology, with a new experimental device, for thermal characterization adapted to this type of material, and to model the heat transfer by conduction within these heterogeneous media. The first part deals with prediction of the effective thermal conductivity of stratified composite materials in the three space directions. For that, a multi-scale model using a rigorous morphological analysis of the structure and the elementary properties is proposed and implemented. The second part deals with thermal characterization at high temperature. It shows how to estimate the thermal effusivity and the thermal conductivity simultaneously. The present method is based on the observation of the heating of a plane sample submitted to a continuous excitation generated by the Joule effect. Heat transfer is modelled with the quadrupole formalism; temperature is measured on two sides of the sample. The development of both resistive probes for excitation and linear probes for temperature measurements enables the thermal properties to be measured up to 1000 °C. Finally, experimental and numerical application examples allow a review of the obtained results. (author)
Numerical Investigation of Double-Cone Flows with High Enthalpy Effects
Nompelis, I.; Candler, G. V.
2009-01-01
A numerical study of shock/shock and shock/boundary layer interactions generated by a double-cone model that is placed in a hypersonic free-stream is presented. Computational results are compared with the experimental measurements made at the CUBRC LENS facility for nitrogen flows at high enthalpy conditions. The CFD predictions agree well with surface pressure and heat-flux measurements for all but one of the double-cone cases that have been studied by the authors. Unsteadiness is observed in computations of one of the LENS cases; however, for this case the experimental measurements show that the flowfield is steady. To understand this discrepancy, several double-cone experiments performed in two different facilities with both air and nitrogen as the working gas are examined in the present study. Computational results agree well with measurements made in both the AEDC Tunnel 9 and the CUBRC LENS facility for double-cone flows at low free-stream Reynolds numbers where the flow is steady. It is shown that at higher free-stream pressures the double-cone simulations develop instabilities that result in an unsteady separation.
Radionuclide Transport in Fractured Rock: Numerical Assessment for High Level Waste Repository
Claudia Siqueira da Silveira
2013-01-01
Deep and stable geological formations with low permeability have been considered for the definitive repository of high level waste. A common problem is the modeling of radionuclide migration in a fractured medium. Initially, we considered a system consisting of a rock matrix with a single planar fracture in water-saturated porous rock. Transport in the fracture is assumed to obey an advection-diffusion equation, while molecular diffusion is considered the dominant mechanism of transport in the porous matrix. The partial differential equations describing the movement of radionuclides were discretized by finite difference methods, namely, the fully explicit, fully implicit, and Crank-Nicolson schemes. The convective term was discretized by the following numerical schemes: backward differences, centered differences, and forward differences. The model was validated using an analytical solution found in the literature. Finally, we carried out a simulation with relevant spent fuel nuclide data for a system consisting of a horizontal fracture and a vertical fracture, assessing the performance of a hypothetical repository inserted into the host rock. We analysed the performance of the expanded bentonite at the beginning of the fracture, quantified the radionuclide release from a borehole, and estimated the effective dose to an adult from ingestion of well water during one year.
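One combination named above, Crank-Nicolson time stepping with backward-difference (upwind) advection, can be sketched for 1D transport along the fracture as below. All parameter values and boundary conditions (unit concentration at the inlet, zero at the outlet) are illustrative, not taken from the paper.

```python
import numpy as np

def crank_nicolson_fracture(n, v, D, dx, dt, steps):
    """Crank-Nicolson stepping with upwind advection for
    dc/dt + v dc/dx = D d2c/dx2 along a single fracture.

    Dirichlet conditions: c = 1 at the inlet (constant source), c = 0 at
    the outlet. Illustrative parameters only.
    """
    a = D * dt / dx**2    # diffusion number
    cr = v * dt / dx      # Courant number (backward-difference advection)
    L = np.zeros((n, n))  # spatial operator, dt-scaled
    for i in range(1, n - 1):
        L[i, i - 1] = a + cr
        L[i, i] = -2.0 * a - cr
        L[i, i + 1] = a
    A = np.eye(n) - 0.5 * L   # implicit half of Crank-Nicolson
    B = np.eye(n) + 0.5 * L   # explicit half
    for M in (A, B):          # boundary rows hold the Dirichlet values fixed
        M[0, :] = 0.0; M[-1, :] = 0.0
        M[0, 0] = M[-1, -1] = 1.0
    c = np.zeros(n)
    c[0] = 1.0
    for _ in range(steps):
        c = np.linalg.solve(A, B @ c)
    return c

profile = crank_nicolson_fracture(n=101, v=1e-5, D=1e-9, dx=0.01, dt=1000.0, steps=700)
```

With these (advection-dominated) numbers the front traverses the 1 m domain several times over, so the profile relaxes toward a quasi-steady state that is near 1 everywhere except a thin exit layer; a production code would use a sparse tridiagonal solver instead of the dense solve.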
Numerical simulation of nonequilibrium flow in high-enthalpy shock tunnel
Kaneko, M.; Men'shov, I.; Nakamura, Y.
2005-03-01
The flow field of a nozzle starting process with thermal and chemical nonequilibrium has been simulated. This flow is produced in high enthalpy impulse facilities such as the free-piston shock tunnel. The governing equations are the axisymmetric, compressible Navier-Stokes equations. In this study, Park's two-temperature model, where air consists of five species, is used for defining the thermodynamic properties of air as the driven gas. The numerical scheme employed here is a hybrid of explicit and implicit methods, developed in our laboratory, along with AUSM+ to evaluate the inviscid fluxes. In the present simulation, the Mach number of the incident shock wave is set at M_s = 10.0, corresponding to a specific enthalpy h_0 of 12 MJ/kg. The results clearly show the complicated thermal and chemical nonequilibrium flow field around the end of the shock tube section and at the nozzle inlet during the initial stage of the nozzle starting process. They also suggest that the phenomenon of nozzle melting might be associated with a flow separation at the nozzle inlet.
Jiang Lei
2015-01-01
Direct numerical simulation (DNS) of a round jet in crossflow based on the lattice Boltzmann method (LBM) is carried out on a multi-GPU cluster. The data-parallel SIMT (single instruction, multiple thread) characteristic of the GPU matches the parallelism of the LBM well, which leads to the high efficiency of the GPU-based LBM solver. With the present GPU settings (6 Nvidia Tesla K20M), the present DNS simulation can be completed in several hours. A grid system of 1.5 × 10^8 points is adopted and the largest jet Reynolds number reaches 3000. The jet-to-free-stream velocity ratio is set as 3.3. The jet is orthogonal to the mainstream flow direction. The validated code shows good agreement with experiments. Vortical structures of the CRVP, shear-layer vortices and horseshoe vortices are presented and analyzed based on velocity fields and vorticity distributions. Turbulent statistical quantities of the Reynolds stress are also displayed. Coherent structures are revealed in a very fine resolution based on the second invariant of the velocity gradients.
Achieving high performance in numerical computations on RISC workstations and parallel systems
Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)
1997-08-20
The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give the scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10-100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, and the effort involved is therefore also acceptable.
NUMERICAL SIMULATIONS OF FLOW BEHAVIOR IN DRIVEN CAVITY AT HIGH REYNOLDS NUMBERS
Fudhail Bin Abdul Munir
2012-02-01
In recent years, due to rapidly increasing computational power, computational methods have become essential tools for research in various engineering fields. In parallel with the development of ultra-high-speed digital computers, computational fluid dynamics (CFD) has become the third approach, apart from theory and experiment, in the study and development of fluid dynamics. The lattice Boltzmann method (LBM) is an alternative to conventional CFD. LBM is a relatively new approach that uses simple microscopic models to simulate the complicated macroscopic behavior of transport phenomena. In this paper, the flow behavior of steady incompressible flow inside a lid-driven square cavity is studied. Numerical calculations are conducted for different Reynolds numbers using a lattice Boltzmann scheme. The objective of the paper is to demonstrate the capability of this lattice Boltzmann scheme for engineering applications, particularly in fluid transport phenomena.
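The core of such a lattice Boltzmann scheme, BGK collision followed by streaming on a D2Q9 lattice, can be sketched as below. For brevity the sketch uses periodic boundaries and a decaying shear wave rather than the cavity of the paper (which would additionally need bounce-back walls and a moving-lid condition); grid size, relaxation time and initial amplitude are illustrative only.

```python
import numpy as np

# D2Q9 lattice constants: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order BGK equilibrium distribution for the nine directions."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux * ux + uy * uy)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu * cu - usq)

def collide_and_stream(f, tau):
    """One BGK collision plus streaming step on a periodic grid."""
    rho = f.sum(axis=0)                               # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho  # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau      # relax toward equilibrium
    for i in range(9):                                # stream along lattice links
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f

# Initialize a small periodic grid with a sinusoidal shear wave and march it;
# the wave decays at the viscosity nu = (tau - 0.5)/3 implied by the scheme.
n, tau = 32, 0.8
y = np.arange(n)
ux0 = 0.05 * np.sin(2 * np.pi * y / n)[:, None] * np.ones((n, n))
f = equilibrium(np.ones((n, n)), ux0, np.zeros((n, n)))
mass0 = f.sum()
for _ in range(100):
    f = collide_and_stream(f, tau)
assert abs(f.sum() - mass0) < 1e-9 * mass0   # collision and streaming conserve mass
```

The same collide-and-stream structure, being purely local plus nearest-neighbor shifts, is what makes the method attractive for the GPU implementations discussed in the preceding entry.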
Numerical Material Model for Composite Laminates in High-Velocity Impact Simulation
Tao Liu
A numerical material model for composite laminates was developed and integrated into nonlinear dynamic explicit finite element programs as a material user subroutine. This model, coupled with a nonlinear equation of state (EOS), is a macro-mechanical model used to simulate the major mechanical behaviors of composite laminates under high-velocity impact conditions. The basic theoretical framework of the developed material model is introduced. An inverse flyer plate simulation was conducted, which demonstrated the advantage of the developed model in characterizing the nonlinear shock response. The developed model and its implementation were validated on a classic ballistic impact problem, i.e., a projectile impacting a Kevlar29/Phenolic laminate. The failure modes and ballistic limit velocity were analyzed, and good agreement was achieved when comparing with analytical and experimental results. The computational capacity of the model for Kevlar/Epoxy laminates with different architectures, i.e., plain-woven and cross-plied laminates, was further evaluated, and the residual velocity curves and damage cone were accurately predicted.
Direct Numerical Simulations of High-Speed Turbulent Boundary Layers over Riblets
Duan, Lian; Choudhari, Meelan M.
2014-01-01
Direct numerical simulations (DNS) of spatially developing turbulent boundary layers over riblets with a broad range of riblet spacings are conducted to investigate the effects of riblets on skin friction at high speeds. Zero-pressure-gradient boundary layers under two flow conditions (Mach 2.5 with T_w/T_r = 1 and Mach 7.2 with T_w/T_r = 0.5) are considered. The DNS results show that the drag-reduction curve (ΔC_f/C_f vs l_g^+) at both supersonic speeds follows the trend of low-speed data and consists of a 'viscous' regime for small riblet sizes, a 'breakdown' regime with optimal drag reduction, and a 'drag-increasing' regime for larger riblet sizes. At l_g^+ ≈ 10 (corresponding to s^+ ≈ 20 for the current triangular riblets), drag reduction of approximately 7% is achieved at both Mach numbers, which confirms the observations of the few existing experiments under supersonic conditions. The Mach-number dependence of the drag-reduction curve occurs for riblet sizes that are larger than the optimal size, with smaller slopes of ΔC_f/C_f for larger freestream Mach numbers. The Reynolds analogy holds, with 2C_h/C_f approximately equal to that of flat plates for both drag-reducing and drag-increasing configurations.
Luo, Yamei; Lü, Baida
2010-01-01
The dynamic behavior of spectral Stokes singularities of partially coherent radially polarized beams focused by a high numerical aperture (NA) objective is studied by using the vectorial Debye diffraction theory and complex spectral Stokes fields. It is shown that there exist s_12, s_23, and s_31 singularities, as well as P (completely polarized) and U (unpolarized) singularities. The motion, pair creation and annihilation, and changes in the degree of polarization of s_12, s_23, and s_31 singularities, and the handedness reversal of s_12 singularities (C-points), may appear by varying a controlling parameter, such as the truncation parameter, NA, or spatial correlation length. The creation and annihilation occur for a pair of s_12 singularities with opposite topological charge but the same handedness, and for a pair of oppositely charged s_23 or s_31 singularities. The critical value of the truncation parameter, at which the pair annihilation takes place, increases as the semi-angle of the aperture lens (or, equivalently, NA) or the spatial correlation length increases. The collision of an s_12 singularity with an L-line (s_3 = 0 contour) leads to a V-point, which is located at the intersection of the contours s_12 = 0 and s_23 = 0 (or s_31 = 0) and is unstable
Franz, A., LLNL
1998-02-17
The numerical simulation of chemically reacting flows is a topic that has attracted a great deal of current research. At the heart of numerical reactive flow simulations are large sets of coupled, nonlinear partial differential equations (PDEs). Due to the stiffness that is usually present, explicit time differencing schemes are not used despite their inherent simplicity and efficiency on parallel and vector machines, since these schemes require prohibitively small numerical stepsizes. Implicit time differencing schemes, although possessing good stability characteristics, introduce a great deal of computational overhead necessary to solve the simultaneous algebraic system at each timestep. This thesis examines an algorithm based on a preconditioned time differencing scheme. The algorithm is explicit and permits a large stable time step. An investigation of the algorithm's accuracy, stability and performance on a parallel architecture is presented.
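The stepsize barrier described above can be seen on the simplest stiff model problem y' = -λy: explicit Euler blows up once the step exceeds 2/λ, while an implicit step stays stable at the cost of an algebraic solve per step. This sketch illustrates only the stiffness problem that motivates the work, not the preconditioned scheme examined in the thesis.

```python
def forward_euler(lam, dt, steps, y0=1.0):
    """Explicit Euler on y' = -lam*y: stable only when dt < 2/lam."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def backward_euler(lam, dt, steps, y0=1.0):
    """Implicit Euler on the same problem: stable for any dt > 0, but in
    general each step requires solving an algebraic system (trivial here)."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)   # closed form of the implicit solve
    return y

lam, dt = 1000.0, 0.01   # dt is 5x the explicit stability limit 2/lam = 0.002
blow_up = forward_euler(lam, dt, steps=50)   # amplification factor -9 per step
decay = backward_euler(lam, dt, steps=50)    # damping factor 1/11 per step
```

In a reacting-flow PDE, λ plays the role of the fastest chemical timescale, which can be orders of magnitude shorter than the flow timescale one actually wants to resolve.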
Numerical studies of QCD renormalons in high-order perturbative expansions
Bauer, Clemens
2013-01-01
Perturbative expansions in four-dimensional non-Abelian gauge theories such as Quantum Chromodynamics (QCD) are expected to be divergent, at best asymptotic. One reason is that it is impossible to strictly exclude from the relevant Feynman diagrams those energy regions in which a perturbative treatment is inapplicable. The divergent nature of the series is then signaled by a rapid (factorial) growth of the perturbative expansion coefficients, commonly referred to as a renormalon. In QCD, the most severe divergences occur in the infrared (IR) limit and therefore they are classified as IR renormalons. Their appearance can be understood within the well-accepted Operator Product Expansion (OPE) framework. According to the OPE, the perturbative calculation of a physical observable must be amended by non-perturbative power corrections that come in the form of condensates, universal characteristics of the rich QCD vacuum structure. Adding up perturbative and non-perturbative contributions, the ambiguity due to the renormalon cancels and the physical observable is well-defined. Although the field has made considerable progress in the last twenty years, a proof of renormalon existence is still pending. It has only been tested assuming strong simplifications or in toy models. The aim of this thesis is to provide the first numerical evidence for renormalon existence in the gauge sector of QCD. We use Numerical Stochastic Perturbation Theory (NSPT) to directly obtain perturbative coefficients within lattice regularization, a means to replace continuum spacetime by a four-dimensional hypercubic lattice. A peculiar feature of NSPT is its comparatively low simulation cost when reaching high expansion orders. We examine two distinct observables: the static self-energy of an isolated quark and the elementary plaquette. Following the OPE classification, the static quark self-energy is ideally suited for a renormalon study. Taking into account peculiarities of the lattice approach such
Shan, Xuchen; Zhang, Bei; Lan, Guoqiang; Wang, Yiqiao; Liu, Shugang
2015-11-01
Measurement of biological and medical samples plays an important role in microscopic optical technology. Optical tweezers have the advantages of accurate capture and non-contamination of the sample. The SPR (surface plasmon resonance) sensor has many advantages, including high sensitivity, fast measurement, low sample consumption and label-free detection of biological samples, so the SPR sensing technique has been used for surface topography, biochemical and immune analysis, drug screening and environmental monitoring. Combined, the two techniques can play an important role in biology, chemistry and other fields. The system we propose uses a multi-axis cage system, employing both reflection and transmission to improve space utilization. The SPR system and the optical tweezers were built and combined in one system. The multi-axis cage system gives full play to its accuracy, simplicity and flexibility. The size of the system is 20 × 15 × 40 cm³, so the sample can be repositioned to switch between the optical tweezers system and the SPR system within a small space. This means that we obtain the refractive index of the sample and control the particle in the same system. To control the rotation stage, acquire images and store data automatically, we wrote a LabVIEW program. The refractive index of the sample is then calculated from the data at the back focal plane. By changing the slide we can trap particles with the optical tweezers, which allows us to measure and trap the sample at the same time.
Mark Lyons
2006-06-01
Despite the acknowledged importance of fatigue on performance in sport, ecologically sound studies investigating fatigue and its effects on sport-specific skills are surprisingly rare. The aim of this study was to investigate the effect of moderate and high intensity total body fatigue on passing accuracy in expert and novice basketball players. Ten novice basketball players (age: 23.30 ± 1.05 yrs) and ten expert basketball players (age: 22.50 ± 0.41 yrs) volunteered to participate in the study. Both groups performed the modified AAHPERD Basketball Passing Test under three different testing conditions: rest, moderate intensity and high intensity total body fatigue. Fatigue intensity was established using a percentage of the maximal number of squat thrusts performed by the participant in one minute. ANOVA with repeated measures revealed a significant (F2,36 = 5.252, p = 0.01) level-of-fatigue by level-of-skill interaction. Examination of the mean scores makes clear that following high intensity total body fatigue there is a significant detriment in the passing performance of both novice and expert basketball players when compared to their resting scores. Fundamentally, however, the detrimental impact of fatigue on passing performance is not as steep in the expert players as in the novice players. The results suggest that expert or skilled players are better able to cope with both moderate and high intensity fatigue conditions and maintain a higher level of performance than novice players. The findings of this research therefore suggest the need for trainers and conditioning coaches in basketball to include moderate, but particularly high intensity, exercise in their skills sessions. This specific training may enable players at all levels of the game to better cope with the demands of the game on court and maintain a higher standard of play.
Pakzad, R.; Wang, S. Y.; Sloan, S. W.
2018-04-01
In this study, an elastic-brittle-damage constitutive model was incorporated into the coupled fluid/solid analysis of ABAQUS to iteratively calculate the equilibrium effective stress of Biot's theory of consolidation. The Young's modulus, strength and permeability parameter of the material were randomly assigned to the representative volume elements of finite element models following the Weibull distribution function. The hydraulic conductivity of elements was associated with their hydrostatic effective stress and damage level. The steady-state permeability test results for sandstone specimens under different triaxial loading conditions were reproduced by employing the same set of material parameters in coupled transient flow/stress analyses of plane-strain models, thereby indicating the reliability of the numerical model. The influence of heterogeneity on the failure response and the absolute permeability was investigated, and the post-peak permeability was found to decrease with the heterogeneity level in the coupled analysis with transient flow. The proposed model was applied to the plane-strain simulation of the fluid pressurization of a cavity within a large-scale block under different conditions. Regardless of the heterogeneity level, the hydraulically driven fractures propagated perpendicular to the minimum principal far-field stress direction for high-permeability models under anisotropic far-field stress conditions. Scattered damage elements appeared in the models with higher degrees of heterogeneity. The partially saturated areas around propagating fractures were simulated by relating the saturation degree to the negative pore pressure in low-permeability blocks under high pressure. By replicating previously reported trends in the fracture initiation and breakdown pressure for different pressurization rates and hydraulic conductivities, the results showed that the proposed model for hydraulic fracture problems is reliable for a wide range of
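The random assignment of Weibull-distributed material properties to representative volume elements described above can be sketched as follows. This is a minimal illustration, not the authors' ABAQUS implementation; the mean scale values and the homogeneity index are assumed for the example:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def weibull_property(scale, homogeneity_index, n_elements):
    """Draw one material property per element from a Weibull distribution.

    scale sets the magnitude of the property; homogeneity_index (the Weibull
    shape parameter m) controls heterogeneity: larger m gives a more
    homogeneous material, with values clustering near the scale.
    """
    # Generator.weibull samples a unit-scale Weibull; multiply by the scale.
    return scale * rng.weibull(homogeneity_index, size=n_elements)

n_elements = 10_000
young_modulus = weibull_property(20e9, 3.0, n_elements)  # Pa (assumed scale)
strength = weibull_property(60e6, 3.0, n_elements)       # Pa (assumed scale)

print(f"mean E = {young_modulus.mean():.3e} Pa")
```

In a coupled fluid/solid analysis these per-element draws would then seed the finite element model, with element conductivity subsequently updated from effective stress and damage level as the abstract describes.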
Quality and sensitivity of high-resolution numerical simulation of urban heat islands
Li, Dan; Bou-Zeid, Elie
2014-01-01
High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal setup of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)
HIGH-ENERGY COSMIC-RAY DIFFUSION IN MOLECULAR CLOUDS: A NUMERICAL APPROACH
Fatuzzo, M.; Melia, F.; Todd, E.; Adams, F. C.
2010-01-01
The propagation of high-energy cosmic rays (CRs) through giant molecular clouds constitutes a fundamental process in astronomy and astrophysics. The diffusion of CRs through these magnetically turbulent environments is often studied through the use of energy-dependent diffusion coefficients, although these are not always well motivated theoretically. Now, however, it is feasible to perform detailed numerical simulations of the diffusion process computationally. While the general problem depends upon both the field structure and particle energy, the analysis may be greatly simplified by dimensional analysis. That is, for a specified purely turbulent field, the analysis depends almost exclusively on a single parameter: the ratio of the maximum wavelength of the turbulent field cells to the particle gyration radius. For turbulent magnetic fluctuations superimposed over an underlying uniform magnetic field, particle diffusion depends on a second dimensionless parameter that characterizes the ratio of the turbulent to uniform magnetic field energy densities. We consider both of these possibilities and parametrize our results to provide simple quantitative expressions that suitably characterize the diffusion process within molecular cloud environments. Doing so, we find that the simple scaling laws often invoked by the high-energy astrophysics community to model CR diffusion through such regions appear to be fairly robust for the case of a uniform magnetic field with a strong turbulent component, but are only valid up to ∼50 TeV particle energies for a purely turbulent field. These results have important consequences for the analysis of CR processes based on TeV emission spectra associated with dense molecular clouds.
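The single controlling parameter for the purely turbulent case, the ratio of the maximum turbulence wavelength to the particle gyroradius, is straightforward to evaluate. A back-of-the-envelope sketch for an ultra-relativistic particle follows; the 50 TeV energy, 10 μG field and 1 pc maximum cell size are assumed, illustrative values only:

```python
# Dimensionless parameter controlling CR diffusion in a purely turbulent
# field: lambda_max / r_gyro (maximum turbulent wavelength over gyroradius).
E_CR = 50e12 * 1.602e-19   # 50 TeV particle energy, in joules
B = 10e-6 * 1e-4           # 10 microgauss field, converted to tesla
e = 1.602e-19              # elementary charge, C
c = 2.998e8                # speed of light, m/s
PC = 3.086e16              # one parsec in metres

# Ultra-relativistic gyroradius: r_g = E / (e B c)
r_gyro = E_CR / (e * B * c)
lambda_max = 1.0 * PC      # assumed maximum turbulent cell size

ratio = lambda_max / r_gyro
print(f"gyroradius = {r_gyro / PC:.2e} pc, lambda_max/r_gyro = {ratio:.0f}")
```

For fixed field strength the ratio scales inversely with particle energy, which is why a purely turbulent description can break down above ∼50 TeV as the abstract notes.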
Numerical modeling of permafrost dynamics in Alaska using a high spatial resolution dataset
E. E. Jafarov
2012-06-01
Climate projections for the 21st century indicate that there could be a pronounced warming and permafrost degradation in the Arctic and sub-Arctic regions. Climate warming is likely to cause permafrost thawing with subsequent effects on surface albedo, hydrology, soil organic matter storage and greenhouse gas emissions.
To assess possible changes in the permafrost thermal state and active layer thickness, we implemented the GIPL2-MPI transient numerical model for the entire Alaska permafrost domain. The model input parameters are spatial datasets of mean monthly air temperature and precipitation, prescribed thermal properties of the multilayered soil column, and water content that are specific for each soil class and geographical location. As a climate forcing, we used the composite of five IPCC Global Circulation Models that has been downscaled to 2 by 2 km spatial resolution by the Scenarios Network for Alaska Planning (SNAP) group.
In this paper, we present the modeling results based on input of a five-model composite with the A1B carbon emission scenario. The model has been calibrated against annual borehole temperature measurements for the State of Alaska. We also performed more detailed calibration for fifteen shallow borehole stations where high quality data are available on a daily basis. To validate the model performance, we compared simulated active layer thicknesses with observed data from Circumpolar Active Layer Monitoring (CALM) stations. The calibrated model was used to address possible ground temperature changes for the 21st century. The model simulation results show that widespread permafrost degradation in Alaska could begin between 2040 and 2099 within the vast area southward from the Brooks Range, except for the high altitude regions of the Alaska Range and Wrangell Mountains.
Direct numerical simulation of reactor two-phase flows enabled by high-performance computing
Fang, Jun; Cambareri, Joseph J.; Brown, Cameron S.; Feng, Jinyong; Gouws, Andre; Li, Mengnan; Bolotnov, Igor A.
2018-04-01
Nuclear reactor two-phase flows remain a great engineering challenge, where the high-resolution two-phase flow databases that could inform practical model development are still sparse due to the extreme reactor operating conditions and measurement difficulties. Owing to the rapid growth of computing power, direct numerical simulation (DNS) is enjoying renewed interest for investigating the related flow problems. Combining DNS with an interface tracking method provides a unique opportunity to study two-phase flows based on first-principles calculations. More importantly, state-of-the-art high-performance computing (HPC) facilities are helping unlock this great potential. This paper reviews the recent research progress of two-phase flow DNS related to reactor applications. The progress in large-scale bubbly flow DNS has focused not only on the sheer size of those simulations in terms of resolved Reynolds number, but also on the associated advanced modeling and analysis techniques. Specifically, the current areas of active research include modeling of sub-cooled boiling and bubble coalescence, as well as advanced post-processing toolkits for bubbly flow simulations in reactor geometries. A novel bubble tracking method has been developed to track the evolution of bubbles in two-phase bubbly flow. Also, spectral analysis of the DNS database in different geometries has been performed to investigate the modulation of the energy spectrum slope due to bubble-induced turbulence. In addition, single- and two-phase analysis results are presented for turbulent flows within pressurized water reactor (PWR) core geometries. Such simulations can be carried out only on the world's leading HPC platforms. These simulations are enabling the development and validation of more complex turbulence models for use in 3D multiphase computational fluid dynamics (M-CFD) codes.
High-order accurate numerical algorithm for three-dimensional transport prediction
Pepper, D W [Savannah River Lab., Aiken, SC]; Baker, A J
1980-01-01
The numerical solution of the three-dimensional pollutant transport equation is obtained with the method of fractional steps; advection is solved by the method of moments and diffusion by cubic splines. Topography and variable mesh spacing are accounted for with coordinate transformations. First-estimate wind fields are obtained by interpolation to grid points surrounding specific data locations. Numerical results agree with results obtained from analytical Gaussian plume relations for ideal conditions. The numerical model is used to simulate the transport of tritium released from the Savannah River Plant on 2 May 1974. The predicted ground-level air concentration 56 km from the release point is within 38% of the experimentally measured value.
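The analytical Gaussian plume relation used for such ideal-condition comparisons has the standard reflected-source form. A minimal sketch follows; the numerical inputs are assumed illustrative values, not Savannah River data:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration (kg/m^3).

    Q: emission rate (kg/s); u: mean wind speed (m/s); H: effective release
    height (m); sigma_y, sigma_z: lateral and vertical dispersion parameters
    (m), evaluated at the downwind distance of interest. The second vertical
    term is the image source modelling total reflection at the ground.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centreline ground-level concentration for an elevated release
conc = gaussian_plume(Q=1.0, u=5.0, y=0.0, z=0.0, H=60.0,
                      sigma_y=80.0, sigma_z=40.0)
print(f"{conc:.3e} kg/m^3")
```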
Hawong, Jai Sug; Lee, Dong Hun; Lee, Dong Ha; Tche, Konstantin
2004-01-01
In this research, a photoelastic experimental hybrid method using the Hooke-Jeeves numerical method has been developed; this method is more precise and stable than the photoelastic experimental hybrid method using the Newton-Raphson numerical method with Gaussian elimination. Using the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method, we can separate stress components from isochromatics alone, and stress intensity factors and stress concentration factors can be determined. The photoelastic experimental hybrid method with Hooke-Jeeves is better suited to full-field experiments than the photoelastic experimental hybrid method with Newton-Raphson and Gaussian elimination.
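The Hooke-Jeeves method referred to above is a derivative-free pattern search, which is what makes it robust for fitting experimental data. A minimal sketch of the algorithm on a generic objective follows (not the photoelastic stress-separation code itself; step sizes and tolerances are assumed):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10_000):
    """Derivative-free Hooke-Jeeves pattern search (minimisation)."""
    def explore(base, s):
        # Exploratory move: probe +/- s along each coordinate, keep improvements.
        x = base.copy()
        for i in range(len(x)):
            for delta in (s, -s):
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x_base = np.asarray(x0, dtype=float)
    while step > tol and max_iter > 0:
        max_iter -= 1
        x_new = explore(x_base, step)
        if f(x_new) < f(x_base):
            # Pattern move: extrapolate along the successful direction.
            x_pattern = x_new + (x_new - x_base)
            x_base = x_new
            x_trial = explore(x_pattern, step)
            if f(x_trial) < f(x_base):
                x_base = x_trial
        else:
            step *= shrink  # no improvement: refine the mesh
    return x_base

# Example: minimise a smooth bowl with minimum at (1, 2)
sol = hooke_jeeves(lambda v: (v[0] - 1)**2 + (v[1] - 2)**2, [0.0, 0.0])
print(sol)
```

Because it needs only function values, the same driver can minimise the misfit between measured isochromatic fringe orders and those predicted from trial stress parameters.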
Hughes, A L; Buitenhuis, A J
2010-01-01
among populations with respect to mean expression scores, but numerous transcripts showed reduced variance in expression scores in the high FP population in comparison to control and low FP populations. The reduction in variance in the high FP population generally involved transcripts whose expression...
Numerical tsunami hazard assessment of the submarine volcano Kick 'em Jenny in high resolution
Dondin, Frédéric; Dorville, Jean-Francois Marc; Robertson, Richard E. A.
2016-04-01
Landslide-generated tsunami are infrequent phenomena that can be highly hazardous for populations located in the near-field domain of the source. The Lesser Antilles volcanic arc is a curved, 800 km chain of volcanic islands. At least 53 flank collapse episodes have been recognized along the arc. Several of these collapses have been associated with voluminous underwater deposits (volume > 1 km3). Given their momentum, these events were likely capable of generating regional tsunami. No clear field evidence of tsunami associated with these voluminous events has been reported, but the occurrence of such an episode today would certainly have catastrophic consequences. Kick 'em Jenny (KeJ) is the only active submarine volcano of the Lesser Antilles Arc (LAA), with a current edifice volume estimated at 1.5 km3. It is the southernmost edifice of the LAA with recognized associated volcanic landslide deposits. The volcano appears to have undergone three episodes of flank failure. Numerical simulations of one of these episodes, associated with a collapse volume of ca. 4.4 km3 and assuming a single-pulse collapse, revealed that it would have produced a regional tsunami with an amplitude of 30 m. In the present study we applied a detailed hazard assessment to the KeJ submarine volcano, from its collapse to the impact of its waves on high-resolution coastal areas of selected islands of the LAA, in order to highlight what is needed to improve alert systems and risk mitigation. We present the assessment of tsunami hazard in terms of shoreline surface elevation (i.e. run-up) and flood dynamics (i.e. duration, height, speed...) at the coasts of LAA islands for a potential flank collapse scenario at KeJ. After quantification of potential initial volumes of collapse material using relative slope instability analysis (RSIA, VolcanoFit 2.0 & SSAP 4.5) based on seven geomechanical models, the tsunami source has been simulated by a Saint-Venant-equations-based code
High-accuracy determination of the neutron flux in the new experimental area nTOF-EAR2 at CERN
Sabate-Gilarte, M. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Barbagallo, M.; Colonna, N.; Damone, L.; Belloni, F.; Mastromarco, M.; Tagliente, G.; Variale, V. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Bari (Italy); Gunsing, F.; Berthoumieux, E.; Diakaki, M.; Papaevangelou, T.; Dupont, E. [Universite Paris-Saclay, CEA Irfu, Gif-sur-Yvette (France); Zugec, P.; Bosnar, D. [University of Zagreb, Department of Physics, Faculty of Science, Zagreb (Croatia); Vlachoudis, V.; Aberle, O.; Brugger, M.; Calviani, M.; Cardella, R.; Cerutti, F.; Chiaveri, E.; Ferrari, A.; Kadi, Y.; Losito, R.; Macina, D.; Montesano, S.; Rubbia, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Chen, Y.H.; Audouin, L.; Tassan-Got, L. [Centre National de la Recherche Scientifique/IN2P3 - IPN, Orsay (France); Stamatopoulos, A.; Kokkoris, M.; Tsinganis, A.; Vlastou, R. [National Technical University of Athens (NTUA), Athens (Greece); Lerendegui-Marco, J.; Cortes-Giraldo, M.A.; Guerrero, C.; Quesada, J.M. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (Spain); Villacorta, A. [University of Salamanca, Salamanca (Spain); Cosentino, L.; Finocchiaro, P.; Piscopo, M. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Musumarra, A. [INFN, Laboratori Nazionali del Sud, Catania (Italy); Universita di Catania, Dipartimento di Fisica, Catania (Italy); Andrzejewski, J.; Gawlik, A.; Marganiec, J.; Perkowski, J. [University of Lodz, Lodz (Poland); Becares, V.; Balibrea, J.; Cano-Ott, D.; Garcia, A.R.; Gonzalez, E.; Martinez, T.; Mendoza, E. [Centro de Investigaciones Energeticas Medioambientales y Tecnologicas (CIEMAT), Madrid (Spain); Bacak, M.; Weiss, C. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Technische Universitaet Wien, Wien (Austria); Baccomi, R.; Milazzo, P.M. 
[Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (Italy); Barros, S.; Ferreira, P.; Goncalves, I.F.; Vaz, P. [Instituto Superior Tecnico, Lisbon (Portugal); Becvar, F.; Krticka, M.; Valenta, S. [Charles University, Prague (Czech Republic); Beinrucker, C.; Goebel, K.; Heftrich, T.; Reifarth, R.; Schmidt, S.; Weigand, M.; Wolf, C. [Goethe University Frankfurt, Frankfurt (Germany); Billowes, J.; Frost, R.J.W.; Ryan, J.A.; Smith, A.G.; Warren, S.; Wright, T. [University of Manchester, Manchester (United Kingdom); Caamano, M.; Deo, K.; Duran, I.; Fernandez-Dominguez, B.; Leal-Cidoncha, E.; Paradela, C.; Robles, M.S. [University of Santiago de Compostela, Santiago de Compostela (Spain); Calvino, F.; Casanovas, A.; Riego-Perez, A. [Universitat Politecnica de Catalunya, Barcelona (Spain); Castelluccio, D.M.; Lo Meo, S. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (Italy); Cortes, G.; Mengoni, A. [Agenzia Nazionale per le Nuove Tecnologie (ENEA), Bologna (Italy); Domingo-Pardo, C.; Tain, J.L. [Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain); Dressler, R.; Heinitz, S.; Kivel, N.; Maugeri, E.A.; Schumann, D. [Paul Scherrer Institut (PSI), Villingen (Switzerland); Furman, V.; Sedyshev, P. [Joint Institute for Nuclear Research (JINR), Dubna (Russian Federation); Gheorghe, I.; Glodariu, T.; Mirea, M.; Oprea, A. [Horia Hulubei National Institute of Physics and Nuclear Engineering, Magurele (Romania); Goverdovski, A.; Ketlerov, V.; Khryachkov, V. [Institute of Physics and Power Engineering (IPPE), Obninsk (Russian Federation); Griesmayer, E.; Jericha, E.; Kavrigin, P.; Leeb, H. [Technische Universitaet Wien, Wien (Austria); Harada, H.; Kimura, A. [Japan Atomic Energy Agency (JAEA), Tokai-mura (Japan); Hernandez-Prieto, A. 
[European Organization for Nuclear Research (CERN), Geneva (CH); Universitat Politecnica de Catalunya, Barcelona (ES); Heyse, J.; Schillebeeckx, P. [European Commission, Joint Research Centre, Geel (BE); Jenkins, D.G. [University of York, York (GB); Kaeppeler, F. [Karlsruhe Institute of Technology, Karlsruhe (DE); Katabuchi, T. [Tokyo Institute of Technology, Tokyo (JP); Lederer, C.; Lonsdale, S.J.; Woods, P.J. [University of Edinburgh, School of Physics and Astronomy, Edinburgh (GB); Licata, M.; Massimi, C.; Vannini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Universita di Bologna, Dipartimento di Fisica e Astronomia, Bologna (IT); Mastinu, P. [Istituto Nazionale di Fisica Nucleare, Sezione di Legnaro, Legnaro (IT); Matteucci, F. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Trieste (IT); Universita di Trieste, Dipartimento di Astronomia, Trieste (IT); Mingrone, F. [European Organization for Nuclear Research (CERN), Geneva (CH); Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Nolte, R. [Physikalisch-Technische Bundesanstalt (PTB), Braunschweig (DE); Palomo-Pinto, F.R. [Universidad de Sevilla, Dept. Ingenieria Electronica, Escuela Tecnica Superior de Ingenieros, Sevilla (ES); Patronis, N. [University of Ioannina, Ioannina (GR); Pavlik, A. [University of Vienna, Faculty of Physics, Vienna (AT); Porras, J.I. [University of Granada, Granada (ES); Praena, J. [Universidad de Sevilla, Departamento de Fisica Atomica, Molecular y Nuclear, Sevilla (ES); University of Granada, Granada (ES); Rajeev, K.; Rout, P.C.; Saxena, A.; Suryanarayana, S.V. [Bhabha Atomic Research Centre (BARC), Mumbai (IN); Rauscher, T. [University of Hertfordshire, Centre for Astrophysics Research, Hatfield (GB); University of Basel, Department of Physics, Basel (CH); Tarifeno-Saldivia, A. [Universitat Politecnica de Catalunya, Barcelona (ES); Universidad de Valencia, Instituto de Fisica Corpuscular, Valencia (ES); Ventura, A. 
[Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Bologna (IT); Wallner, A. [Australian National University, Canberra (AU)
2017-10-15
A new high flux experimental area has recently become operational at the nTOF facility at CERN. This new measuring station, nTOF-EAR2, is placed at the end of a vertical beam line at a distance of approximately 20 m from the spallation target. The characterization of the neutron beam, in terms of flux, spatial profile and resolution function, is of crucial importance for the feasibility study and data analysis of all measurements to be performed in the new area. In this paper we report the measurement of the neutron flux, performed with different solid-state and gaseous detection systems, using three neutron-converting reactions considered standard in different energy regions. The results of the various measurements have been combined, yielding an evaluated neutron energy distribution in a wide energy range, from 2 meV to 100 MeV, with an accuracy ranging from 2% at low energy to 6% in the high-energy region. In addition, an absolute normalization of the nTOF-EAR2 neutron flux has been obtained by means of an activation measurement performed with {sup 197}Au foils in the beam. (orig.)
Advanced Numerical Integration Techniques for High-Fidelity SDE Spacecraft Simulation
National Aeronautics and Space Administration — Classic numerical integration techniques, such as the ones at the heart of several NASA GSFC analysis tools, are known to work well for deterministic differential...
Experimental and numerical results of a high frequency rotating active magnetic refrigerator
Lozano, Jaime; Engelbrecht, Kurt; Bahl, Christian
2012-01-01
Experimental results for a recently developed prototype magnetic refrigeration device at The Technical University of Denmark (DTU) were obtained and compared with numerical simulation results. A continuously rotating active magnetic regenerator (AMR) using 2.8 kg packed sphere regenerators...
Experimental and numerical results of a high frequency rotating active magnetic refrigerator
Lozano, Jaime; Engelbrecht, Kurt; Bahl, Christian R.H.
2014-01-01
Experimental results for a recently developed prototype magnetic refrigeration device at the Technical University of Denmark (DTU) were obtained and compared with numerical simulation results. A continuously rotating active magnetic regenerator (AMR) using 2.8 kg packed sphere regenerators...
Langrene, Nicolas
2014-01-01
This thesis deals with the numerical solution of general stochastic control problems, with notable applications to electricity markets. We first propose a structural model for the price of electricity, allowing for price spikes well above the marginal fuel price under strained market conditions. This model allows one to price and partially hedge electricity derivatives, using fuel forwards as hedging instruments. We then propose an algorithm, combining Monte Carlo simulations with local basis regressions, to solve general optimal switching problems. A comprehensive rate of convergence of the method is provided. Moreover, we make the algorithm parsimonious in memory (and hence suitable for high-dimensional problems) by generalizing to this framework a memory reduction method that avoids the storage of the sample paths. We illustrate this on the problem of investments in new power plants (our structural power price model allowing the new plants to impact the price of electricity). Finally, we study more general stochastic control problems (the control can be continuous and impact the drift and volatility of the state process), whose solutions belong to the class of fully nonlinear Hamilton-Jacobi-Bellman equations and can be handled via constrained backward stochastic differential equations (BSDEs), for which we develop a backward algorithm based on control randomization and parametric optimization. A rate of convergence between the constrained BSDE and its discrete version is provided, as well as an estimate of the optimal control. This algorithm is then applied to the problem of super-replication of options under uncertain volatilities (and correlations). (author)
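The combination of Monte Carlo simulation with regression used for optimal switching is in the spirit of least-squares Monte Carlo. A minimal sketch on the classic American put example follows (Longstaff-Schwartz style, using a plain global polynomial basis rather than the local basis functions of the thesis; all market parameters are assumed, illustrative values):

```python
import numpy as np

def american_put_lsm(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, seed=0):
    """Least-squares Monte Carlo price of an American put.

    The continuation value at each exercise date is estimated by regressing
    discounted future cash flows on polynomial basis functions of the
    current asset price, using in-the-money paths only.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths at times dt, 2*dt, ..., T.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)

    cashflow = np.maximum(K - S[:, -1], 0.0)  # exercise value at maturity
    for t in range(n_steps - 2, -1, -1):      # backward induction
        cashflow *= disc
        itm = (K - S[:, t]) > 0               # regress in-the-money paths only
        if itm.sum() > 10:
            x = S[itm, t]
            coeffs = np.polyfit(x, cashflow[itm], 3)  # cubic polynomial basis
            continuation = np.polyval(coeffs, x)
            exercise = K - x
            cashflow[itm] = np.where(exercise > continuation,
                                     exercise, cashflow[itm])
    return disc * cashflow.mean()

price = american_put_lsm()
print(round(price, 3))
```

Replacing the global cubic with compactly supported local basis functions, as in the thesis, improves the regression's behaviour in high dimension and enables the memory reduction trick of regenerating paths from stored random seeds.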
Numerical Analysis of Novel Back Surface Field for High Efficiency Ultrathin CdTe Solar Cells
M. A. Matin
2013-01-01
This paper numerically explores the possibility of high efficiency, ultrathin and stable CdTe cells with different back surface fields (BSF) using the well-accepted simulator AMPS-1D (Analysis of Microelectronic and Photonic Structures). A modified structure of CdTe-based PV cell, SnO2/Zn2SnO4/CdS/CdTe/BSF/BC, has been proposed over the reference structure SnO2/Zn2SnO4/CdS/CdTe/Cu. Both higher-bandgap materials such as ZnTe and Cu2Te and lower-bandgap materials such as As2Te3 and Sb2Te3 have been used as BSF to reduce minority carrier recombination loss at the back contact in ultra-thin CdTe cells. In this analysis the highest conversion efficiency of a CdTe-based PV cell without BSF has been found to be around 17% using a CdTe absorber thickness of 5 μm. However, the proposed structures with different BSF have shown acceptable efficiencies with an ultra-thin CdTe absorber of only 0.6 μm. The proposed structure with As2Te3 BSF showed the highest conversion efficiency, 20.8%. Moreover, the proposed structures have shown improved stability in most respects, as the cells were found to have a relatively low negative temperature coefficient. The cell with ZnTe BSF, however, has shown better overall stability than the other proposed cells, with a temperature coefficient (TC) of −0.3%/°C.
Shiqi Zhou
2011-12-01
Thermodynamic and structural properties of liquids are of fundamental interest in physics, chemistry and biology, and the perturbation approach has been fundamental to liquid-state theory since the dawn of modern statistical mechanics and remains so to this day. Although thermodynamic perturbation theory (TPT) is widely used in the chemical physics community, one of the most popular versions of the TPT, i.e. the Zwanzig (Zwanzig, R. W. J. Chem. Phys. 1954, 22, 1420-1426) 1st-order high temperature series expansion (HTSE) TPT and its 2nd-order counterpart under a macroscopic compressibility approximation of Barker-Henderson (Barker, J. A.; Henderson, D. J. Chem. Phys. 1967, 47, 2856-2861), has some serious shortcomings: (i) the nth-order term of the HTSE involves reference fluid distribution functions of order up to 2n, so the higher-order terms progressively become more complicated and numerically inaccessible; (ii) the performance of the HTSE rapidly deteriorates, and the calculated results become even qualitatively incorrect, as the temperature of interest decreases. This account deals with the developments that we have made over the last five years or so to advance a coupling parameter series expansion (CPSE) and a non-hard-sphere (HS) perturbation strategy that has scored some of its greatest successes in overcoming the above-mentioned difficulties. In this account (i) we expatiate on implementation details of our schemes: how input information indispensable to high-order truncation of the CPSE in both the HS and non-HS perturbation schemes is calculated by an Ornstein-Zernike integral equation theory; how high-order thermodynamic quantities, such as critical parameters and excess constant-volume heat capacity, are extracted from the resulting excess Helmholtz free energy with irregular and inevitable numerical errors; and how to select the reference potential in the non-HS perturbation scheme. (ii) We give a quantitative analysis on why
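The first two orders of the Zwanzig high-temperature series expansion discussed above can be summarized as follows (notation schematic: W is the total perturbation energy and the angle brackets denote a reference-fluid average):

```latex
% Zwanzig high-temperature series expansion (HTSE) of the Helmholtz free
% energy about a reference fluid (subscript 0), with beta = 1/(k_B T):
\frac{\beta A}{N} \;=\; \frac{\beta A_0}{N}
  \;+\; \frac{\beta}{N}\,\langle W\rangle_0
  \;-\; \frac{\beta^{2}}{2N}\,\bigl(\langle W^{2}\rangle_0
        - \langle W\rangle_0^{2}\bigr)
  \;+\; \mathcal{O}(\beta^{3}).
% The first-order truncation is Zwanzig's 1954 result. Barker and Henderson
% (1967) evaluate the second-order fluctuation term approximately, replacing
% the local density fluctuations by the macroscopic compressibility of the
% reference fluid; it is this fluctuation term that requires distribution
% functions of increasing order (up to 2n at order n) in exact form.
```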
Coles, Phillip; Yurchenko, Sergei N.; Polyansky, Oleg; Kyuberis, Aleksandra; Ovsyannikov, Roman I.; Zobov, Nikolay Fedorovich; Tennyson, Jonathan
2017-06-01
We present a new spectroscopic potential energy surface (PES) for ^{14}NH_3, produced by refining a high-accuracy ab initio PES to experimental energy levels taken predominantly from MARVEL. The PES reproduces 1722 matched J=0-8 experimental energies with a root-mean-square error of 0.035 cm^{-1} below 6000 cm^{-1} and 0.059 cm^{-1} below 7200 cm^{-1}. In conjunction with a new DMS calculated using multi-reference configuration interaction (MRCI) and H=aug-cc-pVQZ, N=aug-cc-pWCVQZ basis sets, an infrared (IR) line list has been computed which is suitable for use up to 2000 K. The line list is used to assign experimental lines in the 7500-10,500 cm^{-1} region and previously unassigned lines in HITRAN in the 6000-7000 cm^{-1} region. Oleg L. Polyansky, Roman I. Ovsyannikov, Aleksandra A. Kyuberis, Lorenzo Lodi, Jonathan Tennyson, Andrey Yachmenev, Sergei N. Yurchenko, Nikolai F. Zobov, J. Mol. Spec., 327 (2016) 21-30; Afaf R. Al Derzia, Tibor Furtenbacher, Jonathan Tennyson, Sergei N. Yurchenko, Attila G. Császár, J. Quant. Spectrosc. Rad. Trans., 161 (2015) 117-130
M. Dumont
2010-03-01
High-accuracy measurements of the snow Bidirectional Reflectance Distribution Function (BRDF) were performed for four natural snow samples with a spectrogonio-radiometer in the 500–2600 nm wavelength range. These measurements are one of the first sets of direct snow BRDF values over a wide range of lighting and viewing geometries. They were compared to BRDF calculated with two optical models. Variations of the snow anisotropy factor with lighting geometry, wavelength and snow physical properties were investigated. Results show that at wavelengths with small penetration depth, scattering mainly occurs in the very top layers and the anisotropy factor is controlled by the phase function. In this condition, a forward scattering peak or a double scattering peak is observed. In contrast, at shorter wavelengths the penetration of the radiation is much deeper and the number of scattering events increases. The anisotropy factor is thus nearly constant, decreasing only at grazing observation angles. The whole dataset is available on demand from the corresponding author.
Kodama, K. P.
2017-12-01
The talk will consider two broad topics in rock magnetism and paleomagnetism: the accuracy of paleomagnetic remanence and the use of rock magnetics to measure geologic time in sedimentary sequences. The accuracy of the inclination recorded by sedimentary rocks is crucial to paleogeographic reconstructions. Laboratory compaction experiments show that inclination shallows on the order of 10˚-15˚. Corrections to the inclination can be made using the effects of compaction on the directional distribution of secular variation recorded by sediments, or the anisotropy of the magnetic grains carrying the ancient remanence. A summary of all the compaction-correction studies as of 2012 shows that 85% of the sedimentary rocks studied have experienced some amount of inclination shallowing. Future work should also consider the effect of grain-scale strain on paleomagnetic remanence. High-resolution chronostratigraphy can be assigned to a sedimentary sequence by using rock magnetics to detect astronomically forced climate cycles. The strengths of the technique are its relatively quick, non-destructive measurements, the objective identification of cycles compared with facies interpretations, and the sensitivity of rock magnetics to subtle changes in sedimentary source. An example of this technique comes from using rock magnetics to identify astronomically forced climate cycles in three globally distributed occurrences of the Shuram carbon isotope excursion. The Shuram excursion may record the oxidation of the world ocean in the Ediacaran, just before the Cambrian explosion of metazoans. Using rock magnetic cyclostratigraphy, the excursion is shown to have the same duration (8-9 Myr) in southern California, south China and south Australia. Magnetostratigraphy of the rocks carrying the excursion in California and Australia shows a reversed-to-normal geomagnetic field polarity transition at the excursion's nadir, thus supporting the synchroneity of the excursion globally. Both results point to a
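Rock magnetic cyclostratigraphy of the kind described here rests on spectral analysis of a magnetic parameter measured down-section: a regular climate cycle encoded in the sediment shows up as a peak in the depth-domain power spectrum. A minimal sketch with a hypothetical magnetic-susceptibility series (the spacing, cycle wavelength and noise level are all invented for illustration):

```python
import numpy as np

# Hypothetical magnetic-susceptibility depth series: a 1 m encoded climate
# cycle superposed on noise, sampled every 2 cm over a 50 m section.
dz = 0.02                                  # sample spacing, m
depth = np.arange(0.0, 50.0, dz)
cycle_m = 1.0                              # encoded cycle wavelength, m
rng = np.random.default_rng(1)
chi = np.sin(2 * np.pi * depth / cycle_m) + 0.3 * rng.normal(size=depth.size)

# Power spectrum in the spatial-frequency (cycles per metre) domain
freq = np.fft.rfftfreq(depth.size, d=dz)
power = np.abs(np.fft.rfft(chi - chi.mean())) ** 2
peak = freq[np.argmax(power[1:]) + 1]      # skip the zero-frequency bin
print(f"dominant wavelength: {1.0 / peak:.2f} m")
```

In practice the recovered wavelength would be converted to duration via an independently estimated sedimentation rate, and the candidate cycles tested against the Milankovitch periods.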
Hendrik eMandelkow
2016-03-01
Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus-response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms, namely Nearest Neighbour (NN), Gaussian Naïve Bayes (GNB), and (regularised) Linear Discriminant Analysis (LDA), in terms of their accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularised by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal, compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
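The PCA-regularised LDA idea, project high-dimensional voxel patterns onto a few principal components and classify in that reduced space, can be sketched in a few lines of NumPy. This is not the study's pipeline; the "fMRI patterns", dimensions and class structure below are entirely synthetic, and a two-class case stands in for the many movie volumes:

```python
import numpy as np

# Synthetic two-class "voxel pattern" data: 100 samples per class, 500 voxels.
rng = np.random.default_rng(2)
n_per_class, n_vox, n_pc = 100, 500, 10
means = rng.normal(0, 1, size=(2, n_vox))           # two stimulus "classes"
X = np.vstack([m + rng.normal(0, 3, size=(n_per_class, n_vox)) for m in means])
y = np.repeat([0, 1], n_per_class)

# PCA regularisation: centre the data, keep only the top n_pc components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:n_pc].T                                # scores in PC space

# LDA in the reduced space with a pooled within-class covariance estimate.
mu = np.array([Z[y == k].mean(axis=0) for k in (0, 1)])
Sw = sum(np.cov(Z[y == k].T) for k in (0, 1)) / 2
w = np.linalg.solve(Sw, mu[1] - mu[0])              # discriminant direction
thresh = w @ (mu[0] + mu[1]) / 2
pred = (Z @ w > thresh).astype(int)
print(f"training accuracy: {(pred == y).mean():.2f}")
```

Restricting LDA to the leading principal components is what keeps the covariance estimate well-conditioned when voxels vastly outnumber samples, the regularisation role described in the abstract.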
Shokri, Abbas; Eskandarloo, Amir; Norouzi, Marouf; Poorolajal, Jalal; Majidi, Gelareh; Aliyaly, Alireza
2018-03-01
This study compared the diagnostic accuracy of cone-beam computed tomography (CBCT) scans obtained with 2 CBCT systems in high- and low-resolution modes for the detection of root perforations in endodontically treated mandibular molars. The root canals of 72 mandibular molars were cleaned and shaped. Perforations measuring 0.2, 0.3, and 0.4 mm in diameter were created at the furcation area of 48 roots, simulating strip perforations, or on the external surfaces of 48 roots, simulating root perforations. Forty-eight roots remained intact (control group). The roots were filled using gutta-percha (Gapadent, Tianjin, China) and AH26 sealer (Dentsply Maillefer, Ballaigues, Switzerland). The CBCT scans were obtained using the NewTom 3G (QR srl, Verona, Italy) and Cranex 3D (Soredex, Helsinki, Finland) CBCT systems in high- and low-resolution modes, and were evaluated by 2 observers. The chi-square test was used to assess the nominal variables. For strip perforations, the accuracies of the low- and high-resolution modes were 75% and 83% for the NewTom 3G and 67% and 69% for the Cranex 3D. For root perforations, the accuracies of the low- and high-resolution modes were 79% and 83% for the NewTom 3G and 56% and 73% for the Cranex 3D. The accuracy of the 2 CBCT systems differed for the detection of strip and root perforations; the NewTom 3G had non-significantly higher accuracy than the Cranex 3D. In both scanners, the high-resolution mode yielded significantly higher accuracy than the low-resolution mode. The diagnostic accuracy of CBCT scans was not affected by the perforation diameter.
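The accuracy percentages and chi-square comparison reported above reduce to simple counts. A sketch of both computations, using hypothetical correct/incorrect detection counts chosen to reproduce the 75%/83% strip-perforation figures (the actual contingency tables are not given in the abstract):

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical correct/incorrect detections in low- vs high-resolution mode,
# out of 48 roots each (36/48 = 75%, 40/48 ~ 83%).
counts = np.array([[36, 12],   # low resolution
                   [40, 8]])   # high resolution
accuracy = counts[:, 0] / counts.sum(axis=1)
print("accuracies:", accuracy)
print("chi-square:", chi_square_2x2(counts))
```

The statistic would then be compared against the chi-square distribution with 1 degree of freedom (e.g. the 3.84 critical value at p = 0.05) to judge significance.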
Numerical investigation of power requirements for ultra-high-speed serial-to-parallel conversion
Lillieholm, Mads; Mulvad, Hans Christian Hansen; Palushani, Evarist
2012-01-01
We present a numerical bit-error rate investigation of 160-640 Gbit/s serial-to-parallel conversion by four-wave mixing based time-domain optical Fourier transformation, showing an inverse scaling of the required pump energy per bit with the bit rate.