Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
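The multi-grid error-control idea behind this abstract can be sketched with two grids (the paper uses three rectilinear grids; the grid sizes and test problem below are illustrative, not taken from the paper):

```python
import numpy as np

def solve_poisson(n):
    # Five-point finite-difference solve of -Laplace(u) = f on the unit
    # square with homogeneous Dirichlet data and n interior points per side.
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    # 1-D second-difference matrix; the 2-D Laplacian is its Kronecker sum.
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    A = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)
    u = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return X, Y, u

# The scheme is O(h^2), so a grid with half the spacing serves as a
# reference: (u_fine - u_coarse) estimates the coarse-grid error.
_, _, u_coarse = solve_poisson(15)     # h = 1/16
Xf, Yf, u_fine = solve_poisson(31)     # h = 1/32; coarse nodes = every 2nd node
exact = np.sin(np.pi * Xf) * np.sin(np.pi * Yf)

est_err = np.abs(u_fine[1::2, 1::2] - u_coarse).max()
true_err = np.abs(u_coarse - exact[1::2, 1::2]).max()
```

Since halving h cuts the error by about a factor of four, the two-grid difference recovers roughly three quarters of the coarse-grid error, which is the basis for Richardson-style absolute and relative error control.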
Energy Technology Data Exchange (ETDEWEB)
Nygaard, K
1968-09-15
From the point of view that no mathematical method can ever minimise or alter errors already made in a physical measurement, the classical least squares method has severe limitations which make it unsuitable for the statistical analysis of many physical measurements. Based on the assumptions that the experimental errors are characteristic for each single experiment and that the errors must be properly estimated rather than minimised, a new method for solving large systems of linear equations is developed. The new method exposes the entire range of possible solutions before the decision is taken as to which of the possible solutions should be chosen as a representative one. The choice is based on physical considerations which (in two examples, curve fitting and unfolding of a spectrum) are presented in such a form that a computer is able to make the decision. A description of the computation is given. The method described is a tool for removing uncertainties that are due to conventional mathematical formulations (zero determinant, linear dependence) and are not inherent in the physical problem as such. The method is therefore especially well fitted for the unfolding of spectra.
International Nuclear Information System (INIS)
Ceolin, C.; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T.
2015-01-01
Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by expanding the scalar fluxes in polynomials in the spatial variables (x, y), considering the two-group energy model. The focus of the present discussion is an error analysis of the aforementioned solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and the Lipschitz constant. This relation allows one to solve the 2-D neutron diffusion problem with second-degree polynomials in each subdomain. This solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)
Computational Error Estimate for the Power Series Solution of ODEs ...
African Journals Online (AJOL)
This paper compares the error estimation of the power series solution with the recursive Tau method for solving ordinary differential equations. From the computational viewpoint, the power series using zeros of the Chebyshev polynomial is effective, accurate and easy to use. Keywords: Lanczos Tau method, Chebyshev polynomial, ...
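A minimal illustration of the underlying idea, estimating the truncation error of a power-series ODE solution by its first neglected term (a crude stand-in for the Lanczos Tau/Chebyshev machinery of the paper):

```python
import math

# Truncated power-series solution of y' = y, y(0) = 1: y(x) ~ sum x^k / k!.
# The first dropped term gives an estimate of the truncation error,
# checkable here against the exact solution exp(x).
def series(x, n):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 1.0, 8
approx = series(x, n)
estimate = x**(n + 1) / math.factorial(n + 1)   # first neglected term
actual = abs(math.exp(x) - approx)
```

For this alternating-free positive series the first neglected term slightly underestimates the remainder but captures its order of magnitude.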
Exact error estimation for solutions of nuclide chain equations
International Nuclear Information System (INIS)
Tachihara, Hidekazu; Sekimoto, Hiroshi
1999-01-01
The exact solution of nuclide chain equations to an arbitrary number of significant figures is obtained for a linear chain by employing the Bateman method in multiple-precision arithmetic. Exact error estimation of the major calculation methods for a nuclide chain equation is done by using this exact solution as a standard. The Bateman, finite difference, Runge-Kutta and matrix exponential methods are investigated. The present study confirms the following. The original Bateman method has very low accuracy in some cases because of large-scale cancellations. The revised Bateman method by Siewers reduces the occurrence of cancellations and thereby shows high accuracy. In the time-difference methods, i.e. the finite difference and Runge-Kutta methods, the solutions are mainly affected by truncation errors in the early decay time, and afterwards by round-off errors. Even though a variable time mesh is employed to suppress the accumulation of round-off errors, it appears to be impractical. Judging from these estimations, the matrix exponential method is the best among all the methods except the Bateman method, whose calculation process for a linear chain is not identical with that for a general one. (author)
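For a two-member chain, the comparison between the analytic Bateman formula and the matrix exponential method can be sketched as follows (the decay constants are illustrative, and the matrix exponential is formed by eigendecomposition rather than a library routine):

```python
import numpy as np

# Two-member decay chain N1 -> N2 -> (stable).
lam1, lam2 = 1.0, 0.1              # decay constants, illustrative values
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])       # dN/dt = A N
N0 = np.array([1.0, 0.0])
t = 5.0

# Matrix exponential via eigendecomposition (A has distinct real eigenvalues).
w, V = np.linalg.eig(A)
N_mat = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)) @ N0

# Analytic Bateman solution for the same chain.
N1 = N0[0] * np.exp(-lam1 * t)
N2 = N0[0] * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

For well-separated decay constants both routes agree to machine precision; the paper's point is that for near-equal constants the Bateman formula suffers cancellation while the matrix exponential does not.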
Reducing errors in the GRACE gravity solutions using regularization
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
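The effect of Tikhonov regularization on an ill-posed least-squares problem can be sketched in a few lines (a Hilbert matrix stands in for the ill-conditioned GRACE normal equations; the noise level and regularization parameter are illustrative, and no L-curve search is performed):

```python
import numpy as np

# Small ill-posed problem: a Hilbert matrix with noisy right-hand side.
rng = np.random.default_rng(0)
n = 10
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # data with small noise

x_naive = np.linalg.solve(A, b)                  # noise is amplified hugely
alpha = 1e-6                                     # regularization parameter
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The regularized solution trades a small bias for the removal of the enormous noise amplification along the near-null directions of A, the same trade the GRACE constraint matrix makes per degree and order.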
Discretization vs. Rounding Error in Euler's Method
Borges, Carlos F.
2011-01-01
Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
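The stepsize trade-off is easy to observe numerically; a sketch for y' = y (halving h roughly halves the O(h) discretization error, while very small h eventually lets accumulated rounding error dominate, an effect easier to see in single precision):

```python
import math

def euler_error(h):
    # Forward Euler for y' = y, y(0) = 1, integrated to t = 1;
    # returns the global error versus the exact value e.
    n = round(1.0 / h)
    y = 1.0
    for _ in range(n):
        y += h * y                  # y_{k+1} = y_k + h * f(y_k)
    return abs(y - math.exp(1.0))

errs = {h: euler_error(h) for h in (0.1, 0.05, 0.025)}
```

At these moderate stepsizes the first-order discretization error dominates, so each halving of h cuts the error roughly in half.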
Error estimation in the neural network solution of ordinary differential equations.
Filici, Cristian
2010-06-01
In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented. Copyright 2010. Published by Elsevier Ltd.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
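The subset-of-rays idea can be sketched for a tomography-style least-squares objective (the problem sizes, random rays, and zero starting iterate are all invented for illustration):

```python
import numpy as np

# Data-mismatch error of a least-squares objective, one row per ray.
rng = np.random.default_rng(1)
n_rays, n_vox = 5000, 50
A = rng.standard_normal((n_rays, n_vox))   # ray matrix (illustrative)
x_true = rng.standard_normal(n_vox)
b = A @ x_true                             # measured ray sums
x = np.zeros(n_vox)                        # current iterate, say

full_error = np.mean((A @ x - b) ** 2)     # error using every ray
subset = rng.choice(n_rays, size=1000, replace=False)
approx_error = np.mean((A[subset] @ x - b[subset]) ** 2)
```

The sampled mean tracks the full mean closely at a fraction of the cost, which is what makes it usable inside the line search of a constrained conjugate gradient iteration.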
Error Estimation and Accuracy Improvements in Nodal Transport Methods
International Nuclear Information System (INIS)
Zamonsky, O.M.
2000-01-01
The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used to date. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, for certain types of problems, allows one to quantify the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid
Error-finding and error-correcting methods for the start-up of the SLC
International Nuclear Information System (INIS)
Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.
1987-02-01
During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicist's time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper
Approximate damped oscillatory solutions and error estimates for the perturbed Klein–Gordon equation
International Nuclear Information System (INIS)
Ye, Caier; Zhang, Weiguo
2015-01-01
Highlights: • Analyze the dynamical behavior of the planar dynamical system corresponding to the perturbed Klein–Gordon equation. • Present the relations between the properties of traveling wave solutions and the perturbation coefficient. • Obtain all explicit expressions of approximate damped oscillatory solutions. • Investigate error estimates between exact damped oscillatory solutions and the approximate solutions and give some numerical simulations. - Abstract: The influence of perturbation on traveling wave solutions of the perturbed Klein–Gordon equation is studied by applying the bifurcation method and qualitative theory of dynamical systems. All possible approximate damped oscillatory solutions for this equation are obtained by using the undetermined coefficient method. Error estimates indicate that the approximate solutions are meaningful. The results of numerical simulations also confirm our analysis
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Liesen, J.; Strakoš, Z.
2014-01-01
Roč. 449, 15 May (2014), s. 89-114 ISSN 0024-3795 R&D Projects: GA AV ČR IAA100300802; GA ČR GA201/09/0917 Grant - others:GA MŠk(CZ) LL1202; GA UK(CZ) 695612 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * adaptivity * a posteriori error analysis * discretization error * algebraic error * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups, as well as symptom-specific clinical decision support, are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted, and novel interventions devised and tested to reduce diagnostic errors.
Error Analysis of Galerkin's Method for Semilinear Equations
Directory of Open Access Journals (Sweden)
Tadashi Kawanago
2012-01-01
We establish a general existence result for Galerkin's approximate solutions of abstract semilinear equations and conduct an error analysis. Our results may be regarded as an extension of a precedent work (Schultz 1969). The derivation of our results is, however, different from the discussion in his paper and is essentially based on the convergence theorem of Newton's method and some techniques for deriving it. Some of our results may be applicable for investigating the quality of numerical verification methods for solutions of ordinary and partial differential equations.
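A linear sketch of the Galerkin idea (the paper treats semilinear equations; here -u'' = 1 on (0,1) with a sine basis, whose exact solution u(x) = x(1-x)/2 makes the error directly computable):

```python
import numpy as np

# Galerkin approximation of -u'' = 1, u(0) = u(1) = 0, with basis
# phi_n(x) = sin(n*pi*x). Stiffness a(phi_n, phi_n) = (n*pi)^2 / 2 and
# load (f, phi_n) = (1 - cos(n*pi)) / (n*pi) give the coefficients.
def galerkin_u(x, N):
    u = 0.0
    for n in range(1, N + 1):
        stiff = (n * np.pi) ** 2 / 2.0
        load = (1.0 - np.cos(n * np.pi)) / (n * np.pi)  # zero for even n
        u += (load / stiff) * np.sin(n * np.pi * x)
    return u

exact = 0.5 * 0.5 * (1.0 - 0.5)      # u(1/2) = 1/8
err4 = abs(galerkin_u(0.5, 4) - exact)
err16 = abs(galerkin_u(0.5, 16) - exact)
```

Enlarging the trial space shrinks the error, the quantitative behaviour a Galerkin error analysis makes precise.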
Error Parsing: An alternative method of implementing social judgment theory
Crystal C. Hall; Daniel M. Oppenheimer
2015-01-01
We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...
Quantifying geocode location error using GIS methods
Directory of Open Access Journals (Sweden)
Gardner Bennett R
2007-04-01
Background The Metropolitan Atlanta Congenital Defects Program (MACDP) collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1) the Georgia Division of Public Health Office of Health Information and Policy (OHIP) and (2) a commercial vendor. Geographic information system (GIS) methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified, the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county) relative to Fulton (a county with urban and suburban areas). Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage
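The displacement experiment described above can be sketched with synthetic data (square tracts, exponentially distributed error distances, and all sizes here are made-up stand-ins for the observed MACDP error distributions):

```python
import numpy as np

# Scatter points in a grid of square "census tracts", displace each at a
# random angle by a random error distance, and count tract misassignments.
rng = np.random.default_rng(42)
tract = 1000.0                                   # tract side length, metres
n = 5000
xy = rng.uniform(0.0, 10.0 * tract, size=(n, 2)) # true locations

def misassigned(error_scale):
    dist = rng.exponential(error_scale, n)       # location-error distances
    ang = rng.uniform(0.0, 2.0 * np.pi, n)
    shifted = xy + np.column_stack([dist * np.cos(ang), dist * np.sin(ang)])
    # A point is misassigned if either grid index changes.
    return np.mean(np.any(np.floor(xy / tract) != np.floor(shifted / tract),
                          axis=1))

small, large = misassigned(50.0), misassigned(400.0)
```

As in the paper, the misassignment rate grows with the location-error scale, since larger displacements are more likely to cross a tract boundary.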
New decoding methods of interleaved burst error-correcting codes
Nakano, Y.; Kasahara, M.; Namekawa, T.
1983-04-01
A probabilistic method of single burst error correction, using the syndrome correlation of the subcodes which constitute the interleaved code, is presented. This method makes it possible to achieve a high burst-error-correction capability with less decoding delay. By generalizing this method it is possible to obtain a probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using the syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
Error compensation for hybrid-computer solution of linear differential equations
Kemp, N. H.
1970-01-01
Z-transform technique compensates for digital transport delay and digital-to-analog hold. Method determines best values for compensation constants in multi-step and Taylor series projections. Technique also provides hybrid-calculation error compared to continuous exact solution, plus system stability properties.
Error estimates in projective solutions of the radon equation
International Nuclear Information System (INIS)
Lubuma, M.S.
1991-04-01
The model Radon equation is the integral equation of the second kind defined by the interior limits of the electrostatic double-layer potential relative to a curve with one angular point, characterized by the non-compactness of the operator with respect to the maximum norm. It is shown that the solution to this equation is decomposable into a regular part and a finite linear combination of intrinsic singular functions. The maximal regularity of the solution and explicit formulae for the coefficients of the singular functions are given. The regularity makes it possible to specify how slow the convergence of the classical projection method is, while the above-mentioned formulae lead to modified projection methods of the Dual Singular Function Method type, with better approximations for the solution and for the coefficients of the singularities. (author). 23 refs
Irving, J.; Koepke, C.; Elsheikh, A. H.
2017-12-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
A posteriori error analysis of multiscale operator decomposition methods for multiphysics models
International Nuclear Information System (INIS)
Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T
2008-01-01
Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples
Equation-Method for correcting clipping errors in OFDM signals.
Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry
2016-01-01
Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak to average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
Seven common errors in finding exact solutions of nonlinear differential equations
Kudryashov, Nikolai A.
2009-01-01
We analyze the common errors of recent papers in which solitary wave solutions of nonlinear differential equations are presented. Seven common errors are formulated and classified. These errors are illustrated by multiple examples from the recent publications.
Internal Error Propagation in Explicit Runge--Kutta Methods
Ketcheson, David I.
2014-09-11
In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
Error response test system and method using test mask variable
Gender, Thomas K. (Inventor)
2006-01-01
An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
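A minimal sketch of the test-mask idea in Python (the error names, the sensor function, and the fallback behaviour are invented for illustration; the patent describes the general mechanism, not this API):

```python
# Bits of the test mask variable select which errors to inject.
ERR_SENSOR_TIMEOUT = 0x01
ERR_CHECKSUM_BAD = 0x02

test_mask = 0x00          # normal operation: no errors injected

def read_sensor():
    # The injection point checks the mask before doing real work.
    if test_mask & ERR_SENSOR_TIMEOUT:
        raise TimeoutError("injected sensor timeout")
    return 21.5           # nominal reading

def app_step():
    # Application under test: must degrade gracefully on sensor errors.
    try:
        return ("ok", read_sensor())
    except TimeoutError:
        return ("fallback", None)

normal = app_step()
test_mask = ERR_SENSOR_TIMEOUT       # the harness flips a bit to inject
injected = app_step()
```

The test harness only touches the mask variable, so the same application binary runs unmodified in normal operation and under error-response testing.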
A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers
Energy Technology Data Exchange (ETDEWEB)
Melboe, Hallgeir
2001-10-01
This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal oriented error estimators have attracted considerable interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure, which due to a finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
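The gap between a small residual and the actual error, which motivates the thesis's second focus, can be made concrete (the matrix and the perturbed iterate below are constructed purely for illustration):

```python
import numpy as np

# For an ill-conditioned SPD system, ||e|| <= ||A^{-1}|| ||r||, so a tiny
# residual does not guarantee a tiny error.
rng = np.random.default_rng(3)
n = 8
U, _ = np.linalg.qr(rng.standard_normal((n, n)))       # orthonormal basis
A = U @ np.diag(np.logspace(0, -6, n)) @ U.T           # SPD, cond ~ 1e6
x_true = rng.standard_normal(n)
b = A @ x_true

v_min = U[:, -1]                  # eigenvector of the smallest eigenvalue
x_approx = x_true + 1e-3 * v_min  # iterate with error along the "soft" mode

err = np.linalg.norm(x_approx - x_true)    # ~1e-3
res = np.linalg.norm(b - A @ x_approx)     # ~1e-9: looks converged!
```

A residual-based stopping criterion would accept this iterate long before the error itself is acceptable, which is exactly why estimating the actual iteration error matters.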
Variation Iteration Method for The Approximate Solution of Nonlinear ...
African Journals Online (AJOL)
In this study, we considered the numerical solution of the nonlinear Burgers equation using the Variational Iteration Method (VIM). The method seeks to examine the convergence of solutions of the Burgers equation with respect to the parameters x and t, on which the size of the errors depends. Numerical experimentation ...
A posteriori error estimator and AMR for discrete ordinates nodal transport methods
International Nuclear Information System (INIS)
Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.
2009-01-01
In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested on two problems featuring strong heterogeneity and a strongly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns
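The indicator-driven mark-and-refine loop described above can be illustrated with a deliberately simplified 1-D sketch. This is ordinary Poisson finite elements, not the AHOT-N nodal transport method of the paper; the interior-residual indicator eta_K = h_K^2 |f| and the 30% marking threshold are illustrative assumptions:

```python
import numpy as np

def solve_poisson(x, f):
    """Piecewise-linear FEM solution of -u'' = f, u(0)=u(1)=0, on grid x."""
    n = len(x) - 2
    h = np.diff(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 1.0/h[i] + 1.0/h[i+1]
        if i > 0:
            A[i, i-1] = -1.0/h[i]
        if i < n - 1:
            A[i, i+1] = -1.0/h[i+1]
        b[i] = f(x[i+1]) * (h[i] + h[i+1]) / 2.0   # lumped load
    return np.linalg.solve(A, b)

def indicators(x, f):
    """Element interior-residual indicators eta_K = h_K^2 * |f(midpoint)|."""
    h = np.diff(x)
    mid = (x[:-1] + x[1:]) / 2.0
    return h**2 * np.abs(f(mid))

f = lambda t: np.exp(-200.0 * (t - 0.5)**2)   # sharply peaked source
x = np.linspace(0.0, 1.0, 11)
for _ in range(5):                            # mark-and-refine loop
    eta = indicators(x, f)
    marked = eta > 0.3 * eta.max()            # marking threshold is an assumption
    newpts = (x[:-1][marked] + x[1:][marked]) / 2.0
    x = np.sort(np.concatenate([x, newpts]))
u = solve_poisson(x, f)                       # solve on the adapted grid
```

The loop concentrates new grid points where the indicator is largest, i.e. near the peak of the source, mirroring how the paper's local indicators drive spatial refinement.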
Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.
1993-01-01
A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.
Internal quality control of RIA with Tonks error calculation method
International Nuclear Information System (INIS)
Chen Xiaodong
1996-01-01
According to the methodological features of RIA, an internal quality control chart based on the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error from the normal reference range. The method is simple to perform and its results are easy to interpret visually. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced
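As a rough illustration of the Tonks-style allowable-error calculation such a chart is built on (the 10% cap and the example reference intervals below are conventional/illustrative assumptions, not values from the paper):

```python
def tonks_allowable_error(ref_low, ref_high, cap=10.0):
    """Tonks-style allowable limit of error, in percent: one quarter of the
    reference range relative to its midpoint, conventionally capped at 10%."""
    midpoint = (ref_low + ref_high) / 2.0
    limit = 100.0 * (ref_high - ref_low) / (4.0 * midpoint)
    return min(limit, cap)

# illustrative (not authoritative) reference intervals:
wide = tonks_allowable_error(60, 160)     # wide interval -> hits the 10% cap
narrow = tonks_allowable_error(135, 145)  # narrow interval -> about 1.8%
```

Results falling outside the allowable band on the control chart would flag an assay run for review.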
Directory of Open Access Journals (Sweden)
Salih Yalcinbas
2016-01-01
Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under the given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
An evaluation of solutions to moment method of biochemical oxygen ...
African Journals Online (AJOL)
This paper evaluated selected solutions of the moment method in respect of Biochemical Oxygen Demand (BOD) kinetics with the aim of ascertaining an error-free solution. Domestic-institutional wastewaters were collected two-weekly for three months from waste-stabilization ponds in Obafemi Awolowo University, Ile-Ife.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
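The idea of solving the global error problem only approximately can be sketched as follows; a 1-D Laplacian stands in for the hierarchical error system here, and the sweep count is an illustrative choice:

```python
import numpy as np

def sym_gauss_seidel(A, b, x0, sweeps=5):
    """Symmetric Gauss-Seidel: one forward sweep followed by one backward
    sweep per iteration, applied to A x = b."""
    x = x0.copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                    # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):          # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# a 1-D Laplacian stands in for the global hierarchical error problem A e = r
n = 50
A = 2.0*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
e_exact = np.linalg.solve(A, b)
e_approx = sym_gauss_seidel(A, b, np.zeros(n), sweeps=5)
```

A handful of sweeps already gives a usable approximation of the error's directional behaviour, which is all the metric-tensor construction needs; an exact solve of the global problem would be far more expensive.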
A straightness error measurement method matched new generation GPS
International Nuclear Information System (INIS)
Zhang, X B; Lu, H; Jiang, X Q; Li, Z
2005-01-01
The axis of the non-diffracting beam produced by an axicon is very stable and can be adopted as the datum line to measure the spatial straightness error over a continuous working distance, which may be short, medium or long. By combining the non-diffracting beam datum line with an LVDT displacement detector, a new straightness error measurement method is developed. Because the non-diffracting beam datum line corrects the straightness error gauged by the LVDT, the straightness error result is reliable and the method matches the new-generation GPS
Internal Error Propagation in Explicit Runge--Kutta Methods
Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo
2014-01-01
…of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong; Sun, Shuyu; Xie, Xiaoping
2015-01-01
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong
2015-10-26
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
Towards automatic global error control: Computable weak error expansion for the tau-leap method
Karlsson, Peer Jesper; Tempone, Raul
2011-01-01
This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
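The two simulation approaches being compared can be sketched for the simplest pure jump process, a linear decay reaction (the rate, step size and sample counts below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
c, x0, T = 1.0, 1000, 1.0     # decay rate, initial copy number, final time

def ssa(x0, c, T):
    """Exact simulation (Gillespie/SSA): exponential waiting times."""
    t, x = 0.0, x0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))   # propensity a(x) = c * x
        if t > T:
            break
        x -= 1
    return x

def tau_leap(x0, c, T, tau=0.01):
    """Tau-leap: Poisson-distributed number of firings per fixed step."""
    x = x0
    for _ in range(int(T / tau)):
        x = max(x - rng.poisson(c * x * tau), 0)
    return x

# both should approximate E[X(T)] = x0 * exp(-c T), about 367.9 here
exact = np.mean([ssa(x0, c, T) for _ in range(200)])
leap = np.mean([tau_leap(x0, c, T) for _ in range(200)])
```

SSA simulates every individual jump (expensive at high propensity), while tau-leap fires a Poisson batch per step and incurs a weak error that shrinks with tau; quantifying that weak error with a computable leading-order term is what the paper's expansions provide.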
THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS
Directory of Open Access Journals (Sweden)
Natalia Bakhova
2011-03-01
Full Text Available Abstract. The most important practical questions of reliable estimation of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and ways of calculation are offered that allow obtaining the best final results at economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.
Error of image saturation in the structured-light method.
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-01-01
In the phase-measuring structured-light method, image saturation will induce large phase errors. Usually, by selecting proper system parameters (such as the phase-shift number, exposure time, projection intensity, etc.), the phase error can be reduced. However, due to lack of a complete theory of phase error, there is no rational principle or basis for the selection of the optimal system parameters. For this reason, the phase error due to image saturation is analyzed completely, and the effects of the two main factors, including the phase-shift number and saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusion are verified by simulation and experiment results, and the conclusion can be used for optimal parameter selection in practice.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
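A minimal version of such a simulation, assuming the usual scoring rules for the three methods (score an interval if the event is occurring at the interval's end for MTS, during any part of it for PIR, and throughout it for WIR):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_methods(events, obs_len=600.0, interval=10.0):
    """Score intervals under momentary time sampling (MTS), partial-interval
    recording (PIR) and whole-interval recording (WIR).
    events: list of (start, end) times within the observation period."""
    edges = np.arange(0.0, obs_len, interval)   # interval start times
    n = len(edges)
    mts = sum(any(s <= t + interval <= e for s, e in events) for t in edges)
    pir = sum(any(s < t + interval and e > t for s, e in events) for t in edges)
    wir = sum(any(s <= t and e >= t + interval for s, e in events) for t in edges)
    return mts / n, pir / n, wir / n            # estimated fraction of session "on"

# 30 random 3-s events in a 600-s session: true duty cycle = 90/600 = 0.15
starts = rng.uniform(0, 597.0, 30)
events = [(s, s + 3.0) for s in starts]
true_frac = 30 * 3.0 / 600.0
mts, pir, wir = sample_methods(events)
```

With events shorter than the interval, WIR can never score and so underestimates, PIR overestimates (any overlap counts), and MTS sits near the true duty cycle, which matches the methods' well-known biases.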
Size-dependent error of the density functional theory ionization potential in vacuum and solution.
Sosa Vazquez, Xochitl A; Isborn, Christine M
2015-12-28
Density functional theory is often the method of choice for modeling the energetics of large molecules and including explicit solvation effects. It is preferable to use a method that treats systems of different sizes and with different amounts of explicit solvent on equal footing. However, recent work suggests that approximate density functional theory has a size-dependent error in the computation of the ionization potential. We here investigate the lack of size-intensivity of the ionization potential computed with approximate density functionals in vacuum and solution. We show that local and semi-local approximations to exchange do not yield a constant ionization potential for an increasing number of identical isolated molecules in vacuum. Instead, as the number of molecules increases, the total energy required to ionize the system decreases. Rather surprisingly, we find that this is still the case in solution, whether using a polarizable continuum model or with explicit solvent that breaks the degeneracy of each solute, and we find that explicit solvent in the calculation can exacerbate the size-dependent delocalization error. We demonstrate that increasing the amount of exact exchange changes the character of the polarization of the solvent molecules; for small amounts of exact exchange the solvent molecules contribute a fraction of their electron density to the ionized electron, but for larger amounts of exact exchange they properly polarize in response to the cationic solute. In vacuum and explicit solvent, the ionization potential can be made size-intensive by optimally tuning a long-range corrected hybrid functional.
Error Analysis for Fourier Methods for Option Pricing
Häppölä, Juho
2016-01-01
We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE
The error analysis of the determination of the activity coefficients via the isopiestic method
International Nuclear Information System (INIS)
Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong
2005-01-01
Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when the regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that the isopiestic experiment be done on test solutions of less than 0.1 mol·kg⁻¹. For most electrolyte solutions, it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should be arranged, first, by keeping the interval between the logarithms of the molalities nearly constant and, second, by including a larger number of high molalities; we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetical progression of the intervals between molalities. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values using our equations
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
Grinding Method and Error Analysis of Eccentric Shaft Parts
Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua
2017-12-01
Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed, and the correctness of the model is proved. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.
An error bound estimate and convergence of the Nodal-LTS N solution in a rectangle
International Nuclear Information System (INIS)
Hauser, Eliete Biasotto; Pazos, Ruben Panta; Tullio de Vilhena, Marco
2005-01-01
In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTS N solution in a rectangle. For this purpose we present an efficient algorithm, called the LTS N 2D-Diag solution, for Cartesian geometry
Energy Technology Data Exchange (ETDEWEB)
Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)
2000-07-01
The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The new numerical methodologies obtained increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors, and with spatial reconstructions of the angular fluxes that are more accurate than those used until now. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows one to quantify the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.
Nonlinear error dynamics for cycled data assimilation methods
International Nuclear Information System (INIS)
Moodey, Alexander J F; Lawless, Amos S; Potthast, Roland W E; Van Leeuwen, Peter Jan
2013-01-01
We investigate the error dynamics for cycled data assimilation systems, such that the inverse problem of state determination is solved at t_k, k = 1, 2, 3, …, with a first guess given by the state propagated via a dynamical system model M_k from time t_{k−1} to time t_k. In particular, for nonlinear dynamical systems M_k that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error ‖e_k‖ := ‖x_k^(a) − x_k^(t)‖ between the estimated state x^(a) and the true state x^(t) over time. Clearly, observation error of size δ > 0 leads to an estimation error in every assimilation step. These errors can accumulate, if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system M_k under consideration. A data assimilation method is called stable, if the error in the estimate is bounded in time by some constant C. The key task of this work is to provide estimates for the error ‖e_k‖, depending on the size δ of the observation error, the reconstruction operator R_α, the observation operator H and the Lipschitz constants K^(1) and K^(2) on the lower and higher modes of M_k controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing α sufficiently small, but the bound C will then depend on the data error δ in the form c‖R_α‖δ with some constant c. Since ‖R_α‖ → ∞ for α → 0, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz ‘63 system. (paper)
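The stabilization mechanism, damping by the dynamics plus a regularized reconstruction, can be sketched with a scalar linear stand-in for M_k (all constants here are illustrative assumptions, and the paper's Lorenz ‘63 experiments are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

M = 0.8        # contracting model dynamics (Lipschitz constant < 1)
H = 1.0        # observation operator
alpha = 0.1    # Tikhonov regularization in the reconstruction operator
delta = 0.05   # observation error level

R = H / (H * H + alpha)    # regularized reconstruction R_alpha
x_true, x_est = 1.0, 0.0
errs = []
for k in range(100):
    x_true = M * x_true                      # truth evolves
    x_fg = M * x_est                         # first guess via the model
    y = H * x_true + rng.uniform(-delta, delta)
    x_est = x_fg + R * (y - H * x_fg)        # cycled analysis update
    errs.append(abs(x_est - x_true))
# error recursion: e_{k+1} = (1 - R*H)*M*e_k + R*eta; here |(1 - R*H)*M| < 1,
# so the error stays bounded by roughly R * delta / (1 - |(1 - R*H)*M|)
```

Shrinking alpha makes (1 − RH) smaller but ‖R_α‖ larger, so the asymptotic error bound trades model damping against amplified data error, which is exactly the c‖R_α‖δ behaviour described above.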
Output Error Method for Tiltrotor Unstable in Hover
Directory of Open Access Journals (Sweden)
Lichota Piotr
2017-03-01
Full Text Available This article investigates system identification of an unstable tiltrotor in hover from flight test data. The aircraft dynamics were described by a linear model defined in a Body-Fixed Coordinate System. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating model parameters, both time- and frequency-domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. Finally, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
Energy Technology Data Exchange (ETDEWEB)
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
International Nuclear Information System (INIS)
Barros, R.C. de; Larsen, E.W.
1991-01-01
A generalization of the one-group Spectral Green's Function (SGF) method is developed for multigroup, slab-geometry discrete ordinates (S N ) problems. The multigroup SGF method is free from spatial truncation errors; it generates numerical values for the cell-edge and cell-average angular fluxes that agree with the analytic solution of the multigroup S N equations. Numerical results are given to illustrate the method's accuracy
CFD code verification and the method of manufactured solutions
International Nuclear Information System (INIS)
Pelletier, D.; Roache, P.J.
2002-01-01
This paper presents the Method of Manufactured Solutions (MMS) for CFD code verification. The MMS provides benchmark solutions for direct evaluation of the solution error. The best benchmarks are exact analytical solutions with sufficiently complex solution structure to ensure that all terms of the differential equations are exercised in the simulation. The MMS provides a straightforward and general procedure for generating such solutions. When used with systematic grid refinement studies, which are remarkably sensitive, the MMS provides strong code verification with a theorem-like quality. The MMS is first presented on simple 1-D examples. Manufactured solutions for more complex problems are then presented with sample results from grid convergence studies. (author)
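A minimal 1-D instance of the MMS workflow: choose an exact solution u = sin(πx) for −u″ = f, manufacture f accordingly, and check the observed order of accuracy under grid refinement:

```python
import numpy as np

def mms_error(n):
    """FD solve of -u'' = f on (0,1), u(0)=u(1)=0, with f manufactured
    from the chosen exact solution u(x) = sin(pi x); returns max-norm error."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x[1:-1])            # manufactured source
    A = (2*np.eye(n-1) - np.eye(n-1, k=1) - np.eye(n-1, k=-1)) / h**2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

e1, e2 = mms_error(20), mms_error(40)
order = np.log2(e1 / e2)    # observed order; should approach 2 for this scheme
```

Because the exact solution is known by construction, the discretization error is computable directly, and halving h should reduce it by a factor of four for a second-order scheme; a code bug typically shows up as a degraded observed order.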
Method for decoupling error correction from privacy amplification
Energy Technology Data Exchange (ETDEWEB)
Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)
2003-04-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.
Method for decoupling error correction from privacy amplification
International Nuclear Information System (INIS)
Lo, Hoi-Kwong
2003-01-01
In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
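The decoupling device itself, one-time-pad encryption of the error-correction syndrome with a pre-shared secret string, can be sketched in toy form (simple block parities stand in for a full interactive Cascade run):

```python
import secrets

def parity_syndrome(bits, block=8):
    """Block parities of the raw key - the error-correction syndrome."""
    return [sum(bits[i:i+block]) % 2 for i in range(0, len(bits), block)]

def otp(bits, pad):
    """One-time-pad encryption/decryption: bitwise XOR with a shared pad."""
    return [b ^ p for b, p in zip(bits, pad)]

alice = [secrets.randbelow(2) for _ in range(32)]
bob = alice.copy()
bob[5] ^= 1                                     # one transmission error

s_alice = parity_syndrome(alice)
pad = [secrets.randbelow(2) for _ in range(len(s_alice))]  # pre-shared secret
cipher = otp(s_alice, pad)        # what Eve sees is uniformly random
s_recovered = otp(cipher, pad)    # Bob decrypts with the same pad

# mismatching block parities reveal which block holds the error
diff = [i for i, (p, q) in enumerate(zip(s_recovered, parity_syndrome(bob)))
        if p != q]
```

Because the syndrome is encrypted with a one-time pad, the error-correction exchange leaks nothing to an eavesdropper, which is what allows the security analysis of error correction to be decoupled from privacy amplification.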
Analysis of possible systematic errors in the Oslo method
International Nuclear Information System (INIS)
Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.
2011-01-01
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
An in-situ measuring method for planar straightness error
Chen, Xi; Fu, Luhua; Yang, Tongyu; Sun, Changku; Wang, Zhong; Zhao, Yan; Liu, Changjie
2018-01-01
In view of some current problems in measuring the plane shape error of a workpiece, an in-situ measuring method based on laser triangulation is presented in this paper. The method avoids the inefficiency of traditional methods like the knife straightedge as well as the time and cost requirements of a coordinate measuring machine (CMM). A laser-based measuring head is designed and installed on the spindle of a numerical control (NC) machine. The measuring head moves along the planned path over the measuring points. The spatial coordinates of the measuring points are obtained by combining the laser triangulation displacement sensor with the coordinate system of the NC machine, which makes the intended measurement achievable. The planar straightness error is evaluated using particle swarm optimization (PSO). To verify the feasibility and accuracy of the measuring method, simulation experiments were implemented with a CMM. Comparing the measurement results of the measuring head with the corresponding values obtained by a composite measuring machine, it is verified that the method can realize high-precision and automatic measurement of the planar straightness error of the workpiece.
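For the evaluation step, a least-squares reference line is a common, simpler stand-in for the minimum-zone criterion that the paper attacks with PSO; a sketch with hypothetical profile data:

```python
import numpy as np

def straightness_lsq(x, y):
    """Straightness error about a least-squares reference line:
    peak-to-valley range of the residuals."""
    a, b = np.polyfit(x, y, 1)
    r = y - (a * x + b)
    return r.max() - r.min()

# hypothetical profile: a 0.002 mm/mm tilt plus micrometre-level form deviations
x = np.linspace(0.0, 100.0, 11)          # mm along the workpiece
form = np.array([0, 1, -1, 2, 0, -2, 1, 0, -1, 2, 0]) * 1e-3
y = 0.002 * x + form
err = straightness_lsq(x, y)             # tilt is removed by the fit
```

The least-squares line removes the setup tilt exactly, so the reported error reflects only the form deviation; a minimum-zone fit (which PSO can search for) yields a value no larger than this least-squares result.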
Directory of Open Access Journals (Sweden)
Hongchun Sun
2012-01-01
Full Text Available For the extended mixed linear complementarity problem (EMLCP), we first present the characterization of the solution set of the EMLCP. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of results for the classical linear complementarity problems.
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation
Directory of Open Access Journals (Sweden)
Tianshuang Qiu
2007-12-01
Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints derived from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The quality of the estimator in terms of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method. The AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with other hybrid location methods on different NLOS error models and two cell-layout scenarios. It is found that the proposed method deals with NLOS errors effectively and is attractive for location estimation in cellular networks.
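As a baseline against which such hybrid methods are compared, the plain range-only part of the localization problem can be sketched by linearised least squares; the anchor layout and the `toa_locate` helper below are hypothetical, and the sketch deliberately ignores the NLOS mitigation that is the paper's actual contribution:

```python
def toa_locate(anchors, ranges):
    """Linearised least-squares TOA location: subtracting the first
    range equation from the others turns the circle equations
        (x - xi)^2 + (y - yi)^2 = ri^2
    into a linear system A [x, y]^T = c, solved via normal equations."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, c = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        c.append(r0 * r0 - ri * ri + xi * xi - x0 * x0 + yi * yi - y0 * y0)
    # 2x2 normal equations (A^T A) p = A^T c, solved by Cramer's rule
    ata = [[sum(a[i] * a[j] for a in A) for j in range(2)] for i in range(2)]
    atc = [sum(a[i] * ci for a, ci in zip(A, c)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    x = (atc[0] * ata[1][1] - atc[1] * ata[0][1]) / det
    y = (ata[0][0] * atc[1] - ata[1][0] * atc[0]) / det
    return x, y
```

With exact line-of-sight ranges this recovers the true position; NLOS-biased ranges are precisely what break this estimator and motivate the constrained methods of the paper.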
International Nuclear Information System (INIS)
Nidaira, Kazuo
2008-01-01
The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be evaluated periodically and checked against the ITV for consistency, because the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method is developed with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method is demonstrated on real data. (author)
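A minimal sketch of separating random from systematic error components, assuming duplicate measurements of items with known reference values; this paired-difference estimator is a textbook simplification for illustration, not the paper's evaluation model:

```python
import random

def error_variances(m1, m2, true_vals):
    """Estimate random and systematic error variances from duplicate
    measurements m1, m2 of items with known reference values.
    Half the variance of duplicate differences isolates the random
    component; the variance of the item-wise mean biases, minus the
    random part it still contains, estimates the systematic component."""
    n = len(true_vals)
    d = [a - b for a, b in zip(m1, m2)]            # duplicate differences
    mean_d = sum(d) / n
    var_random = sum((x - mean_d) ** 2 for x in d) / (2 * (n - 1))
    bias = [(a + b) / 2 - t for a, b, t in zip(m1, m2, true_vals)]
    mean_bias = sum(bias) / n
    var_bias = sum((x - mean_bias) ** 2 for x in bias) / (n - 1)
    # clamp at zero, echoing the paper's concern for positive variances
    var_systematic = max(var_bias - var_random / 2, 0.0)
    return var_random, var_systematic
```

Simulating data with known variances and checking the estimates is exactly the kind of confirmation-by-simulation the abstract describes.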
Error Analysis for Fourier Methods for Option Pricing
Häppölä, Juho
2016-01-06
We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows exponential Lévy dynamics. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transform to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Lévy process. A numerical inverse Fourier transform then yields the option price. We present a novel bound for the error and use this bound to set the parameters of the numerical method. We analyze the properties of the bound for a dissipative and a pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable or superior to those in the existing literature.
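The pricing pipeline (characteristic function, numerical inverse Fourier transform, discounted expected payoff) can be sketched for the simplest exponential Lévy model, geometric Brownian motion with no jumps; the grid sizes and truncation limits below are illustrative choices, not the bound-driven parameters derived in the paper:

```python
import math

def call_price_fourier(S0, K, r, sigma, T,
                       u_max=40.0, du=0.05, x_lim=1.5, dx=0.005):
    """European call priced by Fourier inversion of the characteristic
    function of the log-return X_T ~ N(m, v), the no-jump special case
    of an exponential Levy model."""
    m = (r - 0.5 * sigma * sigma) * T       # mean of the log-return
    v = sigma * sigma * T                   # its variance
    nu = int(u_max / du)
    nx = int(2 * x_lim / dx)
    price = 0.0
    for j in range(nx + 1):
        x = -x_lim + j * dx
        payoff = max(S0 * math.exp(x) - K, 0.0)
        if payoff == 0.0:
            continue
        # density by inverse Fourier transform (trapezoid rule):
        # f(x) = (1/pi) * Int_0^inf exp(-v u^2 / 2) cos(u (m - x)) du
        s = 0.0
        for k in range(nu + 1):
            u = k * du
            term = math.exp(-0.5 * v * u * u) * math.cos(u * (m - x))
            s += 0.5 * term if k in (0, nu) else term
        fx = s * du / math.pi
        wj = 0.5 if j in (0, nx) else 1.0
        price += wj * payoff * fx * dx
    return math.exp(-r * T) * price
```

For this diffusive special case the result can be checked against the Black-Scholes closed form; truncation of the u- and x-integrals is exactly the error source the paper's bound controls.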
CREME96 and Related Error Rate Prediction Methods
Adams, James H., Jr.
2012-01-01
Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic-ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and
Statistical method for quality control in presence of measurement errors
International Nuclear Information System (INIS)
Lauer-Peccoud, M.R.
1998-01-01
In a quality inspection of a set of items, where the measured values of a quality characteristic of an item are contaminated by random errors, one can take wrong decisions that are damaging to quality. It is therefore important to control the risks in such a way that a final quality level is ensured. We consider an item to be defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g0. We assume that, due to the lack of precision of the measurement instrument, the measurement M of this characteristic is expressed by f(G) + ξ, where f is an increasing function such that the value f(g0) is known and ξ is a random error with mean zero and given variance. First we study the problem of determining a critical measure m such that a specified quality target is reached after the classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements for a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions emphasizing the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results allow us to appreciate the efficiency of the different control procedures considered and their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
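A small simulation in the spirit of the paper, for the special case where f is the identity and both G and ξ are Gaussian; the parameter values are illustrative assumptions:

```python
import random

def classification_risks(m, g0=1.0, mu=0.0, s_g=1.0, s_e=0.3,
                         n=200_000, seed=7):
    """Monte Carlo estimate of the two misclassification risks when an
    item with true characteristic G is accepted iff its noisy
    measurement M = G + xi falls below the critical measure m."""
    rng = random.Random(seed)
    false_accept = false_reject = defective = good = 0
    for _ in range(n):
        g = rng.gauss(mu, s_g)          # true characteristic
        xi = rng.gauss(0.0, s_e)        # measurement error
        accepted = (g + xi) <= m
        if g > g0:                      # truly defective
            defective += 1
            if accepted:
                false_accept += 1
        else:
            good += 1
            if not accepted:
                false_reject += 1
    return false_accept / defective, false_reject / good
```

Tightening the critical measure below g0 cuts the rate of accepted defectives at the cost of rejecting more good items, which is the trade-off behind choosing m for a given quality target.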
Error assessment in recombinant baculovirus titration: evaluation of different methods.
Roldão, António; Oliveira, Rui; Carrondo, Manuel J T; Alves, Paula M
2009-07-01
The success of the baculovirus/insect cells system in heterologous protein expression depends on the robustness and efficiency of the production workflow. It is essential that process parameters are controlled and include as little variability as possible. The multiplicity of infection (MOI) is the most critical factor, since irreproducible MOIs caused by inaccurate estimation of viral titers hinder batch consistency and process optimization. This lack of accuracy is related to intrinsic characteristics of the method, such as the inability to distinguish between infectious and non-infectious baculovirus. In this study, several methods for baculovirus titration were compared. The most critical issues identified were the incubation time and the cell concentration at the time of infection. These variables strongly influence the accuracy of titers and must be defined for optimal performance of the titration method. Although the standard errors of the methods varied significantly (7-36%), titers were within the same order of magnitude; thus, viral titers can be considered independent of the method of titration. A cost analysis of the baculovirus titration methods used in this study showed that the alamarBlue, real-time Q-PCR and plaque assays were the most expensive techniques; the remaining methods cost on average 75% less. Based on the cost, time and error analysis undertaken in this study, the end-point dilution assay, the microculture tetrazolium assay and the flow cytometric assay were found to be the techniques that best combine these three factors. Nevertheless, it is always recommended to confirm the accuracy of the titration either by comparison with a well-characterized baculovirus reference stock or by titration with two different methods and verification of the variability of the results.
Reliable methods for computer simulation error control and a posteriori estimates
Neittaanmäki, P
2004-01-01
Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computation. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-09-01
The laser-induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Building on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. In quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm, obtained from the LIBS spectra of five different concentrations of CuSO4·5H2O solution, were decomposed and corrected using the curve fitting and error compensation methods. Compared with curve fitting alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. The calibration curve between the intensity and the concentration of Cu was then established. The error compensation method exhibits a higher linear correlation between the intensity and the concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
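The residual-feedback idea can be sketched in a simplified setting: two overlapping Gaussian peaks with known centres and widths, where each pass refits one peak to the residual left by the other. The paper's method also refines the peak shapes themselves; this alternating least-squares version is an assumption made for brevity:

```python
import math

def gauss(x, a, c, w):
    """Gaussian peak of amplitude a, centre c, width w."""
    return a * math.exp(-((x - c) ** 2) / (2 * w * w))

def decompose(xs, ys, c1, w1, c2, w2, n_iter=50):
    """Iteratively estimate amplitudes a1, a2 of two overlapping
    Gaussian peaks: each pass projects the residual left by the other
    peak onto a unit-amplitude peak (a linear least-squares step)."""
    a1 = a2 = 0.0
    for _ in range(n_iter):
        a1 = sum((y - gauss(x, a2, c2, w2)) * gauss(x, 1, c1, w1)
                 for x, y in zip(xs, ys)) / sum(gauss(x, 1, c1, w1) ** 2
                                                for x in xs)
        a2 = sum((y - gauss(x, a1, c1, w1)) * gauss(x, 1, c2, w2)
                 for x, y in zip(xs, ys)) / sum(gauss(x, 1, c2, w2) ** 2
                                                for x in xs)
    return a1, a2
```

Because each pass is a least-squares projection onto one peak, the iteration converges to the joint least-squares amplitudes, separating contributions that a single-peak fit would conflate.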
Residual-based Methods for Controlling Discretization Error in CFD
2015-08-24
[Extraction residue of Eq. (25), a cell-averaged integral rewritten with the determinant of the Jacobian J of the coordinate transformation and quadrature weights, followed by fragments of the reference list, including: Layton, W., Lee, H.K., and Peterson, J. (2002), "A Defect-Correction Method for the Incompressible Navier-Stokes Equations," Applied Mathematics and Computation, Vol. 129, pp. 1-19; and Lee, D. and Tsuei, Y.M. (1992), "A Formula for Estimation of Truncation Errors of Convective Terms in a...]
Directory of Open Access Journals (Sweden)
Şuayip Yüzbaşı
2017-03-01
Full Text Available In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.
Total error components - isolation of laboratory variation from method performance
International Nuclear Information System (INIS)
Bottrell, D.; Bleyler, R.; Fisk, J.; Hiatt, M.
1992-01-01
The consideration of total error across the sampling and analytical components of environmental measurements is relatively recent. The U.S. Environmental Protection Agency (EPA), through the Contract Laboratory Program (CLP), provides complete analyses and documented reports on approximately 70,000 samples per year. The quality assurance (QA) functions of the CLP procedures provide an ideal database, the CLP Automated Results Data Base (CARD), with which to evaluate program performance relative to quality control (QC) criteria and to evaluate the analysis of blind samples. Repetitive analyses of blind samples within each participating laboratory provide a mechanism to separate laboratory performance from method performance. Isolation of error sources is necessary to identify effective options, to establish performance expectations, and to improve procedures. In addition, optimized method performance is necessary to identify significant effects that result from the selection among alternative procedures in the data collection process (e.g., sampling device, storage container, mode of sample transit, etc.). This information is necessary to evaluate data quality, to understand overall quality, and to provide appropriate, cost-effective information required to support a specific decision.
The commission errors search and assessment (CESA) method
Energy Technology Data Exchange (ETDEWEB)
Reer, B.; Dang, V. N
2007-05-15
Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)
The commission errors search and assessment (CESA) method
International Nuclear Information System (INIS)
Reer, B.; Dang, V. N.
2007-05-01
Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)
Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes
Crocce, Fabian
2016-01-06
Prices of European options whose underlying asset is driven by a Lévy process are solutions to partial integro-differential equations (PIDEs) that generalise the Black-Scholes equation by incorporating a non-local integral term to account for the discontinuities in the asset price. The Lévy-Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE. The rapid convergence of the trapezoid quadrature and the associated speedup provide efficient methods for evaluating option prices, possibly for a range of parameter configurations simultaneously. A couple of works have been devoted to the error analysis and parameter selection for these transform-based methods. In [5] several payoff functions are considered for a rather general set of models whose characteristic function is assumed to be known. [4] presents the framework and theoretical approach for the error analysis, and establishes polynomial convergence rates for approximations of the option prices. [1] presents FT-related methods with a curved integration contour. The classical flat FT-methods have, on the other hand, been extended to option pricing problems beyond the European framework [3]. We present a methodology for studying and bounding the error committed when using FT methods to compute option prices. We also provide a systematic way of choosing the parameters of the numerical method, minimising the error bound and guaranteeing adherence to a pre-described error tolerance. We focus on exponential Lévy processes that may be either diffusive or pure-jump in type. Our contribution is to derive a tight error bound for a Fourier transform method when pricing options under risk-neutral Lévy dynamics. We present a simplified bound that separates the contributions of the payoff and of the process in an easily processed and extensible product form that
Using the CAHR-method to derive cognitive error mechanisms
International Nuclear Information System (INIS)
Straeter, Oliver
2000-01-01
This paper describes an application of the second-generation method CAHR (Connectionism Assessment of Human Reliability; Straeter, 1997), which was developed at the Technical University of Munich and the GRS between 1992 and 1998. The method makes it possible to combine event analysis and assessment, and therefore to base human reliability assessment on past experience. The term 'connectionism' was coined by modeling human cognition on the basis of artificial intelligence models; it describes methods that represent complex interrelations of various parameters (known from pattern recognition, expert systems, and the modeling of cognition). The paper demonstrates the application of the method to communication aspects in nuclear power plants (NPPs) and gives an outlook on further developments. The application to communication failures is explained with examples: initial work on communication within the low-power and shutdown study for boiling water reactors (BWRs), investigation of communication failures, the importance of procedural and verbal communication for different error types, and causes of failures in procedural and verbal communication. (S.Y.)
Incremental Volumetric Remapping Method: Analysis and Error Evaluation
International Nuclear Information System (INIS)
Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.
2007-01-01
In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, is evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping method (IVR), which was implemented in the in-house code DD3TRIM. The IVR method rests on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state-variable values at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of each donor element's Gauss volume that lies inside the target Gauss volume. The intersecting volumes between the donor and target Gauss volumes are computed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least-squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first remapping test was taken from the literature. It consists of remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and then performing the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution along the number of remapping procedures, when compared with the
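The overlap-weighted averaging underlying IVR can be illustrated in one dimension, where "Gauss volumes" reduce to intervals; the grids below are hypothetical:

```python
def remap_1d(donor_edges, donor_vals, target_edges):
    """Overlap-weighted remap of cell averages from a donor grid to a
    target grid: each target cell averages the donor values weighted
    by the length of the donor/target cell intersection."""
    out = []
    for i in range(len(target_edges) - 1):
        lo, hi = target_edges[i], target_edges[i + 1]
        acc = 0.0
        for j in range(len(donor_edges) - 1):
            a, b = donor_edges[j], donor_edges[j + 1]
            overlap = max(0.0, min(hi, b) - max(lo, a))  # intersection length
            acc += overlap * donor_vals[j]
        out.append(acc / (hi - lo))
    return out
```

When the target grid covers the donor grid exactly, the weighted average conserves the integral of the remapped field, the property that motivates volume-based remapping over pointwise interpolation.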
Acceleration of Monte Carlo solution by conjugate gradient method
International Nuclear Information System (INIS)
Yamamoto, Toshihisa
2005-01-01
The conjugate gradient method (CG) was applied to accelerate Monte Carlo solutions of fixed-source problems. The equilibrium-model-based formulation makes it possible to use the CG scheme as well as an initial guess to maximize computational performance. The method applies to arbitrary geometry provided that the neutron source distribution in each subregion can be regarded as flat. Even if that is not the case, the method can still be used as a powerful tool to provide an initial guess very close to the converged solution. The major difference between Monte Carlo CG and deterministic CG is that the residual is estimated by Monte Carlo sampling, so statistical error exists in the residual. This leads to a flow diagram specific to Monte Carlo CG. Three pre-conditioners were proposed for the CG scheme and their performance was compared on a simple 1-D heterogeneous slab test problem. One of them, the Sparse-M option, showed excellent convergence performance. The performance per unit cost was improved by a factor of four in the test problem. Although a direct estimate of the efficiency of the method is impossible, mainly because the optimized pre-conditioner in CG is strongly problem-dependent, the method appears to have potential as a fast solution algorithm for Monte Carlo calculations. (author)
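For reference, the deterministic CG scheme that the Monte Carlo variant builds on can be sketched as follows; in the Monte Carlo setting the residual would carry statistical noise, which this sketch omits:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive definite system
    A x = b, with A and b as plain Python lists."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:    # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

The convergence test on the residual norm is precisely the quantity that becomes a noisy estimate in the Monte Carlo version, which is why the paper needs a modified flow diagram.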
A Fast Soft Bit Error Rate Estimation Method
Directory of Open Access Journals (Sweden)
Ait-Idir Tarik
2010-01-01
Full Text Available We suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system as an alternative to the classical Monte Carlo (MC) simulation. That method was based on estimating the probability density function (pdf) of the soft observed samples, using the kernel method for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then given simply in terms of the estimated parameters of the Gaussian mixture. Simulation results are presented to compare the three methods: Monte Carlo, kernel and Gaussian mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel-aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
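The core idea, replacing error counting by an analytic expression computed from a fitted pdf, can be sketched for a one-component "mixture" (a single Gaussian); the full method fits several components by EM, which is omitted here as an assumption-free simplification:

```python
import math
import random

def ber_analytic(mu, sigma):
    """BER from a Gaussian pdf fitted to the soft samples: the
    probability mass of the decision variable crossing zero."""
    return 0.5 * math.erfc(mu / (sigma * math.sqrt(2)))

def ber_monte_carlo(mu, sigma, n=100_000, seed=3):
    """Brute-force Monte Carlo error counting on simulated soft samples."""
    rng = random.Random(seed)
    errors = sum(1 for _ in range(n) if rng.gauss(mu, sigma) < 0)
    return errors / n
```

At very low BER the Monte Carlo count needs enormous sample sizes to see a single error, while the analytic expression costs nothing extra, which is the run-time advantage the abstract claims.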
Type I Error Inflation in DIF Identification with Mantel-Haenszel: An Explanation and a Solution
Magis, David; De Boeck, Paul
2014-01-01
It is known that sum score-based methods for the identification of differential item functioning (DIF), such as the Mantel-Haenszel (MH) approach, can be affected by Type I error inflation in the absence of any DIF effect. This may happen when the items differ in discrimination and when there is item impact. On the other hand, outlier DIF methods…
Solution of the radiative enclosure with a hybrid inverse method
Energy Technology Data Exchange (ETDEWEB)
Silva, Rogerio Brittes da; Franca, Francis Henrique Ramos [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Dept. de Engenharia Mecanica], E-mail: frfranca@mecanica.ufrgs.br
2010-07-01
This work applies inverse analysis to solve a three-dimensional radiative enclosure, whose surfaces are diffuse-gray, filled with a transparent medium. The aim is to determine the powers and locations of the heaters that attain both uniform heat flux and uniform temperature on the design surface. A hybrid solution that couples two methods, generalized extremal optimization (GEO) and truncated singular value decomposition (TSVD), is proposed. The determination of the heat source distribution is treated as an optimization problem, solved by the GEO algorithm, whereas the system of equations, which embodies a Fredholm equation of the first kind and is therefore expected to be ill-conditioned, is solved through the TSVD regularization method. The results show that the hybrid method can lead to a heat flux on the design surface that satisfies the imposed conditions with a maximum error of less than 1.10%. The results illustrate the relevance of a hybrid method as a prediction tool. (author)
Solutions of First-Order Volterra Type Linear Integrodifferential Equations by Collocation Method
Directory of Open Access Journals (Sweden)
Olumuyiwa A. Agbolade
2017-01-01
Full Text Available The numerical solution of linear integrodifferential equations of Volterra type is considered. A power series is used as the basis polynomial to approximate the solution of the problem. Furthermore, standard and Chebyshev-Gauss-Lobatto collocation points were, respectively, chosen to collocate the approximate solution. Numerical experiments are performed on some sample problems already solved by the homotopy analysis method and finite difference methods. The absolute errors of the present method are compared with those of the aforementioned methods. It is observed that the absolute errors obtained are very low, establishing convergence and computational efficiency.
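A minimal power-series collocation example, using the test problem y'(x) = 1 + Int_0^x y(t) dt with y(0) = 0, whose exact solution is sinh(x); the problem choice and the equally spaced collocation points are assumptions for illustration:

```python
import math

def solve_volterra_ide(n=10):
    """Power-series collocation for y'(x) = 1 + Int_0^x y(t) dt, y(0)=0.
    With y = sum_{k=1}^{n} a_k x^k (a_0 = 0 from the initial condition),
    the integral of each basis term is x^(k+1)/(k+1), so collocating at
    n points in (0, 1] gives a linear system for the coefficients."""
    pts = [(j + 1) / n for j in range(n)]
    # row j: sum_k a_k * (k x^(k-1) - x^(k+1)/(k+1)) = 1
    A = [[k * x ** (k - 1) - x ** (k + 1) / (k + 1)
          for k in range(1, n + 1)] for x in pts]
    b = [1.0] * n
    # plain Gaussian elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for cc in range(c, n):
                A[r][cc] -= f * A[c][cc]
            b[r] -= f * b[c]
    a = [0.0] * n
    for r in range(n - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][cc] * a[cc]
                           for cc in range(r + 1, n))) / A[r][r]
    return lambda x: sum(a[k - 1] * x ** k for k in range(1, n + 1))
```

Because the integral of a power-series term is again a power, both the derivative and the Volterra integral reduce to polynomial evaluations, which is what makes the power-series basis convenient here.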
On Round-off Error for Adaptive Finite Element Methods
Alvarez-Aramberri, J.
2012-06-02
Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive finite element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called 'radical meshes'. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.
Method of continuously regenerating decontaminating electrolytic solution
International Nuclear Information System (INIS)
Sasaki, Takashi; Kobayashi, Toshio; Wada, Koichi.
1985-01-01
Purpose: To continuously recover radioactive metal ions from the electrolytic solution that is used for the electrolytic decontamination of radioactive equipment and whose radioactivity has increased, and to regenerate the electrolytic solution into a high-concentration acid. Method: Liquid in an auxiliary tank is recycled to the cathode chamber of an electrodepositing regeneration tank and adjusted to pH 2 by means of a pH controller and a pH electrode. The electrolytic solution in the electrolytic decontamination tank is introduced by way of an injection pump to the auxiliary tank and, interlocked therewith, regenerated solution is returned from a regenerating-solution extraction pump by way of an extraction pipeway to the electrolytic decontamination tank. Meanwhile, electric current is supplied to the electrodes so that radioactive metal ions dissolved in the cathode chamber are deposited on the capturing electrode, while anions are transferred through a partition wall to the anode chamber, regenerating the electrolytic solution into a high-concentration acid solution. Water is supplied by way of an electromagnetic valve interlocked with a level meter so as to keep the liquid level constant. This decreases the generation of liquid wastes and also reduces the amount of radioactive secondary wastes. (Horiuchi, T.)
Findings from analysing and quantifying human error using current methods
International Nuclear Information System (INIS)
Dang, V.N.; Reer, B.
1999-01-01
In human reliability analysis (HRA), the scarcity of data means that, at best, judgement must be applied to transfer to the domain of the analysis what data are available for similar tasks. In particular, for the quantification of tasks involving decisions, the analyst has to choose among quantification approaches that all depend to a significant degree on expert judgement. The use of expert judgement can be made more reliable by eliciting relative rather than absolute judgements. These approaches, which are based on multiple-criterion decision theory, focus on ranking the tasks to be analysed by difficulty. While they at least partially remedy the poor performance of experts in the estimation of probabilities, they nevertheless require calibration of the relative scale on which the actions are ranked in order to obtain the probabilities of interest. This paper presents some results from a comparison of current HRA methods performed in the framework of a study of SLIM calibration options. The HRA quantification methods THERP, HEART, and INTENT were applied to derive calibration human error probabilities for two groups of operator actions. (author)
International Nuclear Information System (INIS)
Stepanov, A.V.; Stepanov, D.A.; Nikitina, S.A.; Gogoleva, T.D.; Grigor'eva, M.G.; Bulyanitsa, L.S.; Panteleev, Yu.A.; Pevtsova, E.V.; Domkin, V.D.; Pen'kin, M.V.
2006-01-01
A precision spectrophotometric method with internal standardization is used for the analysis of pure Pu solutions. The spectrophotometer and the spectrophotometric method of analysis were improved to decrease the random component of the relative error of the method. The influence of U and Np impurities and corrosion products on the systematic component of the method's error, and the effect of fluoride ion on the completeness of Pu oxidation during sample preparation, are studied.
International Nuclear Information System (INIS)
Takagawa, Kenichi; Miyazaki, Takamasa; Gofuku, Akio; Iida, Hiroyasu
2007-01-01
Since many of the adverse events that have occurred in nuclear power plants in Japan and abroad have been related to maintenance or operation, it is necessary to plan preventive measures based on detailed analyses of human errors made by maintenance workers or operators. Therefore, before planning preventive measures, we developed a new method of analyzing human errors. Since each human error is an unsafe action caused by some misjudgement made by a person, we classified errors into six categories according to the stage in the judgment process at which the error was made. By further classifying each error as either omission-type or commission-type, we produced 12 categories of errors. We then divided the causes into basic error tendencies and individual error tendencies, and categorized background factors into four categories: imperfect planning; imperfect facilities or tools; imperfect environment; and imperfect instructions or communication. We defined the factors in each category so as to make it easy to identify the factors that caused an error. Using this method, we studied the characteristics of human errors involving maintenance workers and planners, since many maintenance errors have occurred. Among the human errors made by workers (worker errors) during the implementation stage, the following three types were prevalent, together accounting for approximately 80%: commission-type 'projection errors', omission-type 'comprehension errors' and commission-type 'action errors'. The most common individual factor behind worker errors was 'repetition or habit' (schema), based on the assumption of a typical situation, and about half of the 'repetition or habit' (schema) cases were not influenced by any background factors. The most common background factor contributing to this individual factor was 'imperfect work environment', followed by 'insufficient knowledge'. Approximately 80% of the individual factors were 'repetition or habit' or
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
Method of lines solution of Richards' equation
Energy Technology Data Exchange (ETDEWEB)
Kelley, C.T.; Miller, C.T.; Tocci, M.D.
1996-12-31
We consider the method of lines solution of Richards' equation, which models flow through porous media, as an example of a situation in which the method can give incorrect results because of premature termination of the nonlinear corrector iteration. This premature termination arises when the solution has a sharp moving front and the Jacobian is ill-conditioned. While this problem can be solved by tightening the tolerances provided to the ODE or DAE solver used for the temporal integration, it is more efficient to modify the termination criteria of the nonlinear solver and/or recompute the Jacobian more frequently. In this paper we continue previous work on this topic by analyzing the modifications in more detail and giving a strategy for how the modifications can be turned on and off in response to changes in the character of the solution.
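As background, the method-of-lines discretization itself can be sketched on the linear heat equation, a much simpler stand-in for Richards' equation; the grid, time step, and explicit Euler integrator below are our own choices:

```python
import numpy as np

# Method-of-lines sketch on u_t = u_xx with u = 0 at both ends:
# discretize space first, leaving an ODE system dU/dt = L U that a
# time integrator (here, explicit Euler) advances.
n = 50
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
U = np.sin(np.pi * x)               # initial condition

dt = 0.4 * h * h                    # below the explicit stability limit h^2/2
t = 0.0
while t < 0.1:
    lap = np.zeros_like(U)
    lap[1:-1] = (U[2:] - 2.0 * U[1:-1] + U[:-2]) / h**2   # discrete u_xx
    U = U + dt * lap                # one Euler step of the ODE system
    t += dt

# Exact solution of the PDE: sin(pi x) * exp(-pi^2 t)
err = np.abs(U - np.sin(np.pi * x) * np.exp(-np.pi**2 * t)).max()
print(err)
```

In the paper's setting the semidiscrete system is stiff and nonlinear, which is why an implicit DAE solver with a Newton-type corrector (and its termination criteria) enters the picture.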
The multi-element probabilistic collocation method (ME-PCM): Error analysis and applications
International Nuclear Information System (INIS)
Foo, Jasmine; Wan Xiaoliang; Karniadakis, George Em
2008-01-01
Stochastic spectral methods are numerical techniques for approximating solutions to partial differential equations with random parameters. In this work, we present and examine the multi-element probabilistic collocation method (ME-PCM), which is a generalized form of the probabilistic collocation method. In the ME-PCM, the parametric space is discretized and a collocation/cubature grid is prescribed on each element. Both full and sparse tensor product grids based on Gauss and Clenshaw-Curtis quadrature rules are considered. We prove analytically and observe in numerical tests that as the parameter space mesh is refined, the convergence rate of the solution depends on the quadrature rule of each element only through its degree of exactness. In addition, the L2 error of the tensor product interpolant is examined and an adaptivity algorithm is provided. Numerical examples demonstrating adaptive ME-PCM are shown, including low-regularity problems and long-time integration. We test the ME-PCM on two-dimensional Navier-Stokes examples and a stochastic diffusion problem with various random input distributions and up to 50 dimensions. While the convergence rate of ME-PCM deteriorates in 50 dimensions, the error in the mean and variance is two orders of magnitude lower than the error obtained with the Monte Carlo method using only a small number of samples (e.g., 100). The computational cost of ME-PCM is found to be favorable when compared to the cost of other methods including stochastic Galerkin, Monte Carlo and quasi-random sequence methods.
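The element-by-element quadrature idea behind ME-PCM can be sketched for a scalar quantity of interest; the toy uniform random input and exponential integrand below are our stand-ins for the paper's PDE setting:

```python
import numpy as np

# Toy multi-element probabilistic collocation: estimate E[g(xi)] for
# xi ~ U(-1, 1) by splitting the parameter space into elements and
# applying Gauss-Legendre quadrature on each element.
g = lambda xi: np.exp(xi)                    # quantity of interest

def me_pcm_mean(n_elem, n_pts):
    edges = np.linspace(-1.0, 1.0, n_elem + 1)
    nodes, weights = np.polynomial.legendre.leggauss(n_pts)
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        xi = 0.5 * (b - a) * nodes + 0.5 * (a + b)   # map nodes to [a, b]
        total += 0.5 * (b - a) * np.dot(weights, g(xi))
    return total / 2.0                               # uniform pdf = 1/2

exact = (np.e - np.exp(-1.0)) / 2.0
print(abs(me_pcm_mean(4, 3) - exact))   # refine elements or points to converge
```

Consistent with the abstract, accuracy is governed by the degree of exactness of the per-element rule, while the element splitting localizes any low-regularity regions of the parameter space.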
Radon measurements-discussion of error estimates for selected methods
International Nuclear Information System (INIS)
Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav
2010-01-01
The main sources of uncertainty for grab sampling, short-term (charcoal canister) and long-term (track detector) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ between the kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by the surface trap technique can be divided into two groups: errors of surface ²¹⁰Pb (²¹⁰Po) activity measurements, and uncertainties in the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Hošek, Radim; Maltese, D.; Novotný, A.
2017-01-01
Roč. 33, č. 4 (2017), s. 1208-1223 ISSN 0749-159X EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords: convergence * error estimates * mixed numerical method * Navier–Stokes system Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.079, year: 2016 http://onlinelibrary.wiley.com/doi/10.1002/num.22140/abstract
Local and accumulated truncation errors in a class of perturbative numerical methods
International Nuclear Information System (INIS)
Adam, G.; Adam, S.; Corciovei, A.
1980-01-01
The approach to the solution of the radial Schroedinger equation using piecewise perturbation theory with a step-function reference potential leads to a class of powerful numerical methods, conveniently abridged as SF-PNM(K), where K denotes the order at which the perturbation series is truncated. In the present paper rigorous results are given for the local truncation errors, and bounds are derived for the accumulated truncation errors associated with SF-PNM(K), K = 0, 1, 2. They allow us to establish the smoothness conditions which have to be fulfilled by the potential in order to ensure safe use of SF-PNM(K), and to understand the experimentally observed behaviour of the numerical results with the step size h. (author)
Statistical error estimation of the Feynman-α method using the bootstrap method
International Nuclear Information System (INIS)
Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho
2016-01-01
The applicability of the bootstrap method to estimating the statistical error of the Feynman-α method, one of the subcritical measurement techniques based on reactor noise analysis, is investigated. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through this measurement, it is confirmed that the bootstrap method is applicable for approximately estimating the statistical error of measurement results obtained by the Feynman-α method. (author)
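The bootstrap resampling step can be sketched as follows; the synthetic gate counts and the variance-to-mean statistic are our illustrative stand-ins for actual reactor-noise data:

```python
import numpy as np

# Bootstrap sketch on a single sequence of gate counts: resample with
# replacement and recompute the statistic to get its standard error
# from one measurement. Synthetic Poisson data, not real reactor noise.
rng = np.random.default_rng(1)
counts = rng.poisson(100.0, size=500)        # one "measurement" of 500 gates

def y_statistic(c):                          # variance-to-mean ratio minus 1,
    return c.var(ddof=1) / c.mean() - 1.0    # the Feynman-Y form

boot = np.array([
    y_statistic(rng.choice(counts, size=counts.size, replace=True))
    for _ in range(2000)
])
print(y_statistic(counts), boot.std(ddof=1))  # estimate and bootstrap error
```

The spread of the resampled statistics plays the role that repeated independent measurements would otherwise play.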
Algebraic methods for solution of polyhedra
Energy Technology Data Exchange (ETDEWEB)
Sabitov, Idzhad Kh [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)
2011-06-30
By analogy with the solution of triangles, the solution of polyhedra means a theory and methods for calculating some geometric parameters of polyhedra in terms of other parameters of them. The main content of this paper is a survey of results on calculating the volumes of polyhedra in terms of their metrics and combinatorial structures. It turns out that a far-reaching generalization of Heron's formula for the area of a triangle to the volumes of polyhedra is possible, and it underlies the proof of the conjecture that the volume of a deformed flexible polyhedron remains constant. Bibliography: 110 titles.
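A concrete instance of computing a volume from the metric alone is the Cayley-Menger determinant, the three-dimensional analogue of the Heron formula alluded to above (a standard formula, shown here as a sketch):

```python
import numpy as np

# Volume of a tetrahedron purely from its six edge lengths via the
# Cayley-Menger determinant: 288 V^2 = det of the bordered matrix.
def tetra_volume(d2):
    """d2[i][j]: squared distance between vertices i and j (4 vertices)."""
    cm = np.ones((5, 5))
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    return np.sqrt(np.linalg.det(cm) / 288.0)

# Regular tetrahedron with unit edges: volume = 1 / (6 sqrt(2)) ≈ 0.11785.
d2 = np.ones((4, 4)) - np.eye(4)
print(tetra_volume(d2))
```

No vertex coordinates are needed, which is exactly the spirit of expressing volume in terms of the metric and combinatorial structure.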
Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems
Whiteley, J. P.; Gillow, K.; Tavener, S. J.; Walter, A. C.
2011-05-09
Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first by using a model problem from linear algebra, and then a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration. © 2011 John Wiley & Sons, Ltd.
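A minimal linear-algebra analogue of the block Gauss-Seidel coupling described above (our toy system, not the paper's fluid-temperature problem):

```python
import numpy as np

# Block Gauss-Seidel on a coupled 2x2 block system, mimicking two
# single-physics solvers that exchange interface data each sweep.
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])   # "physics 1" operator
A22 = np.array([[5.0, 1.0], [1.0, 4.0]])   # "physics 2" operator
A12 = 0.5 * np.ones((2, 2))                # weak coupling blocks
A21 = 0.5 * np.ones((2, 2))
b1, b2 = np.ones(2), np.ones(2)

x1, x2 = np.zeros(2), np.zeros(2)
for it in range(50):
    x1 = np.linalg.solve(A11, b1 - A12 @ x2)   # solver 1 with frozen x2
    x2 = np.linalg.solve(A22, b2 - A21 @ x1)   # solver 2 with updated x1

# Compare with a monolithic solve of the full coupled system.
A = np.block([[A11, A12], [A21, A22]])
x = np.linalg.solve(A, np.concatenate([b1, b2]))
print(np.abs(np.concatenate([x1, x2]) - x).max())
```

The a posteriori bounds discussed in the abstract are precisely what one would monitor to decide when to stop such a sweep loop instead of iterating a fixed 50 times.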
James W. Hardin; Henrik Schmiediche; Raymond J. Carroll
2003-01-01
This paper discusses and illustrates the method of regression calibration. This is a straightforward technique for fitting models with additive measurement error. We present this discussion in terms of generalized linear models (GLMs) following the notation defined in Hardin and Carroll (2003). Discussion will include specified measurement error, measurement error estimated by replicate error-prone proxies, and measurement error estimated by instrumental variables. The discussion focuses on s...
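A sketch of regression calibration with replicate error-prone proxies, on synthetic data with assumed variances (our illustration of the general technique, not the authors' GLM notation):

```python
import numpy as np

# Regression calibration sketch: the naive slope on a noisy proxy is
# attenuated by the reliability ratio lambda; replicates let us estimate
# lambda and undo the attenuation. All settings here are illustrative.
rng = np.random.default_rng(2)
n, k = 2000, 2                       # subjects, replicates per subject
x = rng.normal(0.0, 1.0, n)          # true covariate (unobserved)
w = x[:, None] + rng.normal(0.0, 0.7, (n, k))   # replicate proxies
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)     # outcome, true slope = 2

wbar = w.mean(axis=1)
# Measurement-error variance of the averaged proxy, from within-subject spread.
s2_u = w.var(axis=1, ddof=1).mean() / k
s2_w = wbar.var(ddof=1)
lam = (s2_w - s2_u) / s2_w                       # reliability ratio

naive = np.polyfit(wbar, y, 1)[0]                # attenuated slope
calibrated = naive / lam                         # regression-calibration fix
print(naive, calibrated)                         # calibrated is close to 2
```

In the GLM setting of the paper the same idea is applied by replacing the proxy with its estimated conditional expectation given the observed data.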
An error bound estimate and convergence of the Nodal-LTS {sub N} solution in a rectangle
Energy Technology Data Exchange (ETDEWEB)
Hauser, Eliete Biasotto [Faculty of Mathematics, PUCRS Av Ipiranga 6681, Building 15, Porto Alegre - RS 90619-900 (Brazil)]. E-mail: eliete@pucrs.br; Pazos, Ruben Panta [Department of Mathematics, UNISC Av Independencia, 2293, room 1301, Santa Cruz do Sul - RS 96815-900 (Brazil)]. E-mail: rpp@impa.br; Tullio de Vilhena, Marco [Graduate Program in Applied Mathematics, UFRGS Av Bento Goncalves 9500, Building 43-111, Porto Alegre - RS 91509-900 (Brazil)]. E-mail: vilhena@mat.ufrgs.br
2005-07-15
In this work, we report the mathematical analysis concerning the error bound estimate and convergence of the Nodal-LTS {sub N} solution in a rectangle. To this end, we present an efficient algorithm, called the LTS {sub N} 2D-Diag solution, for Cartesian geometry.
An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs
International Nuclear Information System (INIS)
Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol
2011-01-01
In this work, a method is proposed for quantifying human errors that may occur during operation executions using soft controls. Soft controls in advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify their probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests a modified K-HRA method for quantifying the error probability.
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, an attempt is made to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...
DEFF Research Database (Denmark)
Chen, Yangyang; Yang, Ming; Long, Jiang
2017-01-01
For motor control applications, the speed-loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used speed-measurement method for incremental encoders. However, the inherent encoder optical grating error...
Five-equation and robust three-equation methods for solution verification of large eddy simulation
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution-verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy of either the numerical or the modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust in that it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS) based error-estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes, but predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
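The grid-convergence machinery underlying such solution-verification methods can be illustrated by the classical three-grid Richardson estimate; the solution values below are made-up numbers, not results from the paper:

```python
import math

# Three-grid Richardson estimate of the observed order of accuracy and
# of the numerical error, assuming a constant refinement ratio r = 2.
f1, f2, f3 = 1.0125, 1.0500, 1.2000   # fine, medium, coarse solutions
r = 2.0                                # grid refinement ratio

p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)   # observed order
err1 = (f2 - f1) / (r**p - 1.0)                     # error on the fine grid
print(p, f1 - err1)    # observed order and extrapolated benchmark value
```

Multi-equation verification methods generalize this single-error estimate by splitting the discrepancy into separate numerical and modeling contributions.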
Errors in the estimation method for the rejection of vibrations in adaptive optics systems
Kania, Dariusz
2017-06-01
In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has been revisited. These vibrations are damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject or minimize the vibration. In the first step, the choice of estimation method is a very important problem. A very accurate and fast (below 10 ms) method for estimating these parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used within the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
Ketcheson, David I.
2014-04-11
In practical computation with Runge-Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…
International Nuclear Information System (INIS)
Duo, J. I.; Azmy, Y. Y.
2007-01-01
A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)
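The Method of Manufactured Solutions used for verification above can be sketched in one dimension; the simple -u'' = q problem and manufactured solution below are our stand-ins for the transport setting:

```python
import numpy as np

# Method of Manufactured Solutions sketch for -u'' = q on (0, 1):
# pick u_exact, derive the source q analytically, solve numerically,
# and measure the discrete error against the known exact solution.
def solve(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    u_exact = np.sin(np.pi * x)
    q = np.pi**2 * np.sin(np.pi * x)         # manufactured source: -u'' = q
    # Tridiagonal system for interior unknowns, with u(0) = u(1) = 0.
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.linalg.solve(A, q[1:-1])
    return np.abs(u - u_exact[1:-1]).max()

e1, e2 = solve(20), solve(40)
print(e1 / e2)    # close to 4: second-order convergence confirmed
```

Because the exact solution is known by construction, the local error of the scheme can be measured directly, which is how convergence rates are verified in the abstract's study.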
Error evaluation of inelastic response spectrum method for earthquake design
International Nuclear Information System (INIS)
Paz, M.; Wong, J.
1981-01-01
Two-story, four-story and ten-story shear-building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. The frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake that would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story was taken as the yield resistance for the subsequent analyses at 100% intensity. The frames were uniform along their height, with the stiffness adjusted so as to give a fundamental period of 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. The study led to the following conclusions: (1) the percentage error in floor displacement for linear behavior was less than 10%; (2) the percentage error in floor displacement for inelastic (elastoplastic) behavior could be as high as 100%; (3) in most of the cases analyzed, the error increased with damping in the system; (4) as a general rule, the error increased as the modal yield resistance decreased; (5) the error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake generated from the response spectra for design. (orig./HP)
Issues with data and analyses: Errors, underlying themes, and potential solutions.
Brown, Andrew W; Kaiser, Kathryn A; Allison, David B
2018-03-13
Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge.
International Nuclear Information System (INIS)
Menon, R.K.; Bloch, C.A.; Sperling, M.A.
1990-01-01
We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry, and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) with [U-14C]glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas [U-14C]glucose concentration unexpectedly rose from 7,208 +/- 367 (mean +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0), but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by adding [U-14C]glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimates of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by keeping the Glc SA relatively constant.
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Directory of Open Access Journals (Sweden)
Gyungho Khim
2015-01-01
Full Text Available We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok
2015-01-01
We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
International Nuclear Information System (INIS)
Ragusa, J. C.
2004-01-01
In this paper, a method for performing spatially adaptive computations in the framework of multigroup diffusion on 2-D and 3-D Cartesian grids is investigated. The numerical error, intrinsic to any computer simulation of physical phenomena, is monitored through an a posteriori error estimator. In a posteriori analysis, the computed solution itself is used to assess the accuracy. By efficiently estimating the spatial error, the entire computational process is controlled through successively adapted grids. Our analysis is based on a finite element solution of the diffusion equation. Bilinear test functions are used. The derived a posteriori error estimator is therefore based on the Hessian of the numerical solution. (authors)
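The Hessian-based indicator described above can be sketched in one dimension: the interpolation error of a linear element scales with the local second derivative of the solution times the square of the cell size, so a discrete Hessian computed from nodal values flags the cells to refine. The grid, the peaked test function, and the exact form of the indicator below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical 1D analogue of a Hessian-based a posteriori indicator:
# the interpolation error of a linear element scales like |u''| * h^2.

def hessian_indicator(x, u):
    """Per-cell error indicator |u''| * h^2 from nodal values."""
    h = np.diff(x)
    d2 = np.zeros_like(u)
    # central second difference approximates u'' at interior nodes;
    # boundary nodes are left at zero for simplicity
    d2[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / ((x[2:] - x[:-2]) / 2)**2
    # cell indicator: mean of the two nodal |u''| values times h^2
    return 0.5 * (np.abs(d2[:-1]) + np.abs(d2[1:])) * h**2

x = np.linspace(0.0, 1.0, 21)
u = np.exp(-50 * (x - 0.5)**2)      # sharply peaked "flux"
eta = hessian_indicator(x, u)
# the largest indicators cluster around the peak at x = 0.5
print(np.argmax(eta))
```

Cells flagged by the indicator would then be subdivided and the estimate recomputed on the successively adapted grid.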
Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'
International Nuclear Information System (INIS)
Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi
1996-01-01
To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)
Directory of Open Access Journals (Sweden)
Pang Fubin
2015-09-01
Full Text Available In this paper the origin problem of data synchronization is analyzed first, and then three common interpolation methods are introduced to solve the problem. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient interpolation error components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and calculation amount of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in the data synchronization application of electronic transformer.
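As a rough illustration of the accuracy ordering discussed above (the waveform, sampling rate, and harmonic content are assumed values, not the paper's), linear and 4-point cubic Lagrange interpolation can be compared at inter-sample instants of a synthesized transformer signal:

```python
import numpy as np

# Illustrative comparison, not the paper's derivation: interpolate a
# sampled 50 Hz waveform with a 5th harmonic at the midpoints between
# samples and measure the worst-case error of each method.

f0, fs = 50.0, 1600.0            # fundamental and sampling rate (assumed)
t = np.arange(0, 0.04, 1/fs)     # two fundamental cycles
x = np.sin(2*np.pi*f0*t) + 0.1*np.sin(2*np.pi*5*f0*t)

tm = t[1:-2] + 0.5/fs            # midpoints with 4 surrounding samples
exact = np.sin(2*np.pi*f0*tm) + 0.1*np.sin(2*np.pi*5*f0*tm)

lin = 0.5*(x[1:-2] + x[2:-1])    # 2-point linear interpolation
# 4-point cubic Lagrange at the midpoint: weights (-1, 9, 9, -1)/16
cub = (-x[:-3] + 9*x[1:-2] + 9*x[2:-1] - x[3:]) / 16

err_lin = np.max(np.abs(lin - exact))
err_cub = np.max(np.abs(cub - exact))
print(err_lin > err_cub)         # the cubic error is markedly smaller
```

Raising the sampling rate or lowering the harmonic order shrinks both errors, consistent with the error expressions analyzed in the paper.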
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope), has an effect on learning gains when using a tutor with limited enforcement. Analyzed data mined from 44 pathology residents using SlideTutor-a Medical Intelligent Tutoring System in Dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. Frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Chemical deposition methods using supercritical fluid solutions
Sievers, Robert E.; Hansen, Brian N.
1990-01-01
A method for depositing a film of a desired material on a substrate comprises dissolving at least one reagent in a supercritical fluid comprising at least one solvent. Either the reagent is capable of reacting with or is a precursor of a compound capable of reacting with the solvent to form the desired product, or at least one additional reagent is included in the supercritical solution and is capable of reacting with or is a precursor of a compound capable of reacting with the first reagent or with a compound derived from the first reagent to form the desired material. The supercritical solution is expanded to produce a vapor or aerosol and a chemical reaction is induced in the vapor or aerosol so that a film of the desired material resulting from the chemical reaction is deposited on the substrate surface. In an alternate embodiment, the supercritical solution containing at least one reagent is expanded to produce a vapor or aerosol which is then mixed with a gas containing at least one additional reagent. A chemical reaction is induced in the resulting mixture so that a film of the desired material is deposited.
International Nuclear Information System (INIS)
Nalesso, G.F.; Jacobson, A.R.
1991-01-01
A solution to the problem of a plane electromagnetic wave traveling parallel to a constant magnetic field in a horizontally stratified ionosphere was developed assuming that the permittivity of the medium can be represented as the sum of an unperturbed component and a perturbed component. The method is successfully applied to the case of a linearly varying permittivity of a lossless ionosphere with a superimposed Gaussian perturbing term. The feasibility of applying the method in the presence of an odd number of turning points is discussed. 13 refs
International Nuclear Information System (INIS)
Suzuki, Yoshio; Kawakami, Yoshiaki; Nakajima, Norihiro
2017-01-01
Methods to estimate errors included in observational data and to compare numerical results with observational results are investigated toward the verification and validation (V and V) of seismic simulation. For error estimation, 144 publications from the past 5 years (2010 to 2014) in the structural engineering and earthquake engineering fields, where acceleration data are frequently discussed, are surveyed. It is found that processes to remove components regarded as errors from observational data are used in about 30% of these publications. Errors are caused by the resolution, the linearity, the temperature coefficient for sensitivity, the temperature coefficient for zero shift, the transverse sensitivity, the seismometer property, aliasing, and so on. These processes can be exploited to estimate errors individually. For the comparison of numerical results with observational results, public materials of the ASME V and V Symposium 2012-2015, their references, and the above 144 publications are surveyed. It is found that six methods have mainly been proposed in existing research. Evaluating these methods against nine items, their advantages and disadvantages are tabulated. No method is yet well established, so it is necessary to apply the existing methods while compensating for their disadvantages and/or to search for a novel method. (author)
Pernot, Pascal; Savin, Andreas
2018-06-01
Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
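The two statistics advocated here are straightforward to compute from the empirical cumulative distribution of unsigned errors; the synthetic error sample below is purely illustrative:

```python
import numpy as np

# Sketch of the two advocated statistics, computed from a synthetic
# sample of unsigned model errors (not a real benchmark dataset).

rng = np.random.default_rng(0)
errors = np.abs(rng.normal(loc=0.5, scale=1.0, size=1000))  # |model - reference|

def p_below(errors, eta):
    """P(|error| < eta): fraction of benchmark errors below threshold eta."""
    return np.mean(errors < eta)

def q_at_confidence(errors, p=0.95):
    """Maximal error amplitude expected with confidence p (empirical quantile)."""
    return np.quantile(errors, p)

print(round(p_below(errors, 1.0), 2))       # chance a new error is below 1.0
print(round(q_at_confidence(errors), 2))    # 95th percentile of |error|
```

Unlike the mean unsigned error, both quantities remain meaningful when the error distribution is skewed or not zero-centered.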
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high level software specification. This has the purpose of illustrating that the design can be used in practice....
Frolov, Maxim; Chistiakova, Olga
2017-06-01
The paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.
2017-04-01
In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.
Exact solutions to some nonlinear PDEs, travelling profiles method
Directory of Open Access Journals (Sweden)
Noureddine Benhamidouche
2008-04-01
\\end{equation*} by a new method that we call the travelling profiles method. This method allows us to find several forms of exact solutions including the classical forms such as travelling-wave and self-similar solutions.
Directory of Open Access Journals (Sweden)
Zbigniew Staroszczyk
2014-12-01
Full Text Available In the paper, a calibrating method for error correction in transfer function determination with the use of DSP is proposed. The correction limits or eliminates the influence of the transfer function input/output signal conditioners on the transfer functions estimated for the investigated object. The method exploits frequency-domain conditioning-path descriptors found during a training observation made on a known reference object. Keywords: transfer function, band extension, error correction, phase errors
Energy Technology Data Exchange (ETDEWEB)
Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are optimal asymptotics. The methodology is verified with numerical experiments.
Substep methods for burnup calculations with Bateman solutions
International Nuclear Information System (INIS)
Isotalo, A.E.; Aarnio, P.A.
2011-01-01
Highlights: → Bateman solution based depletion requires constant microscopic reaction rates. → Traditionally constant approximation is used for each depletion step. → Here depletion steps are divided to substeps which are solved sequentially. → This allows piecewise constant, rather than constant, approximation for each step. → Discretization errors are almost completely removed with only minor slowdown. - Abstract: When material changes in burnup calculations are solved by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates, one has to first predict the development of the reaction rates during the step and then further approximate these predictions with their averages in the depletion calculation. Representing the continuously changing reaction rates with their averages results in some error regardless of how accurately their development was predicted. Since neutronics solutions tend to be computationally expensive, steps in typical calculations are long and the resulting discretization errors significant. In this paper we present a simple solution to reducing these errors: the depletion steps are divided to substeps that are solved sequentially, allowing finer discretization of the reaction rates without additional neutronics solutions. This greatly reduces the discretization errors and, at least when combined with Monte Carlo neutronics, causes only minor slowdown as neutronics dominates the total running time.
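The substep idea can be sketched on a toy two-nuclide chain with a time-varying reaction rate (the rates and the chain below are invented for illustration; real burnup calculations use full Bateman matrices): freezing the rate at its step average misrepresents the coupling between nuclides, while substeps with piecewise-constant rates recover it without any extra neutronics solutions.

```python
import math

# Illustrative sketch, not the authors' code: a two-nuclide chain
# N1 -> N2 -> (loss), with a time-varying capture rate r(t) on N1.
# The closed-form step below is the exact solution for constant rates
# of this 2x2 lower-triangular system.

def step(n1, n2, r, lam, dt):
    """Advance (N1, N2) over dt with constant rates r and lam."""
    e1, e2 = math.exp(-r*dt), math.exp(-lam*dt)
    return n1 * e1, n2 * e2 + n1 * r * (e1 - e2) / (lam - r)

r_of_t = lambda t: 1.0 + 0.8*t       # predicted reaction-rate development
lam, T = 0.3, 1.0

# (a) single depletion step with the step-averaged rate
r_avg = 1.0 + 0.8*T/2
coarse = step(1.0, 0.0, r_avg, lam, T)

# (b) ten substeps, each with its own piecewise-constant average rate
fine = (1.0, 0.0)
m = 10
for k in range(m):
    fine = step(*fine, r_of_t(k*T/m + T/(2*m)), lam, T/m)

print(coarse, fine)   # N1 agrees; N2 differs because substeps resolve r(t)
```

For this linear rate the average over the step is exact for N1 itself, so the two N1 values coincide; the discretization error shows up in the daughter nuclide, and shrinking the substeps drives it down at negligible cost.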
Cognitive strategies: a method to reduce diagnostic errors in ER
Directory of Open Access Journals (Sweden)
Carolina Prevaldi
2009-02-01
Full Text Available I wonder why sometimes we are able to rapidly recognize patterns of disease presentation, formulate a speedy diagnostic closure, and go on with a treatment plan, while at other times we proceed by studying our patient in depth in an analytic, slow and rational way of decision making. Why can decisions sometimes be intuitive, while sometimes we have to proceed in a rigorous way? What is the "background noise" and the "signal to noise ratio" of presenting symptoms? What is the risk in premature labeling or "closure" of a patient? When is the "cook-book" approach useful in clinical decision making? "The Emergency Department is a natural laboratory for the study of error", stated one author. Many studies have focused on the occurrence of errors in medicine, and in hospital practice, but the ED with its unique operating characteristics seems to be a uniquely error-prone environment. That is why it is useful to understand the underlying patterns of thinking that can lead us to misdiagnosis. General knowledge of thought processes gives the physician awareness and the ability to apply different techniques in clinical decision making and to recognize and avoid pitfalls.
Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.
Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn
2017-07-01
The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Hua-Zhan Yin
Full Text Available In everyday life, error monitoring and processing are important for improving ongoing performance in response to a changing environment. However, detecting an error is not always a conscious process. The temporal activation patterns of brain areas related to cognitive control in the absence of conscious awareness of an error remain unknown. In the present study, event-related potentials (ERPs) in the brain were used to explore the neural bases of unconscious error detection when subjects solved a Chinese anagram task. Our ERP data showed that the unconscious error detection (UED) response elicited a more negative ERP component (N2) than did no error (NE) and detect error (DE) responses in the 300-400-ms time window, and the DE elicited a greater late positive component (LPC) than did the UED and NE in the 900-1200-ms time window after the onset of the anagram stimuli. Taken together with the results of dipole source analysis, the N2 (anterior cingulate cortex) might reflect unconscious/automatic conflict monitoring, and the LPC (superior/medial frontal gyrus) might reflect conscious error recognition.
Solution of the porous media equation by Adomian's decomposition method
International Nuclear Information System (INIS)
Pamuk, Serdal
2005-01-01
The particular exact solutions of the porous media equation that usually occurs in nonlinear problems of heat and mass transfer and in biological systems are obtained using Adomian's decomposition method. Also, numerical comparison of particular solutions in the decomposition method indicates that there is very good agreement between the numerical solutions and the particular exact solutions in terms of efficiency and accuracy.
The Connection between Teaching Methods and Attribution Errors
Wieman, Carl; Welsh, Ashley
2016-01-01
We collected data at a large, very selective public university on what math and science instructors felt was the biggest barrier to their students' learning. We also determined the extent of each instructor's use of research-based effective teaching methods. Instructors using fewer effective methods were more likely to say the greatest barrier to…
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas; Richtarik, Peter
2017-01-01
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss
Directory of Open Access Journals (Sweden)
S. Das
2013-12-01
Full Text Available In this article, the optimal homotopy-analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence control parameters, which control the rate of convergence of the solution. Effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.
Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative
National Research Council Canada - National Science Library
Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S
2005-01-01
.... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error.
International Nuclear Information System (INIS)
Abreu, M.P.; Filho, H.A.; Barros, R.C.
1993-01-01
The authors describe a new nodal method for multigroup slab-geometry discrete ordinates S_N eigenvalue problems that is completely free from all spatial truncation errors. The unknowns in the method are the node-edge angular fluxes, the node-average angular fluxes, and the effective multiplication factor k_eff. The numerical values obtained for these quantities are exactly those of the dominant analytic solution of the S_N eigenvalue problem apart from finite arithmetic considerations. This method is based on the use of the standard balance equation and two nonstandard auxiliary equations. In the nonmultiplying regions, e.g., the reflector, we use the multigroup spectral Green's function (SGF) auxiliary equations. In the fuel regions, we use the multigroup spectral diamond (SD) auxiliary equations. The SD auxiliary equation is an extension of the conventional auxiliary equation used in the diamond difference (DD) method. This hybrid characteristic of the SD-SGF method improves both the numerical stability and the convergence rate.
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Full Text Available Shape reconstruction of aerospace plate structure is an important issue for safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structure. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
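The LMS identification step can be illustrated with a minimal example (the two-tap system, excitation, and step size are hypothetical, not the paper's dynamic error model):

```python
import numpy as np

# Minimal LMS (least mean squares) parameter identification sketch:
# identify the 2-tap weights of an unknown FIR response from
# input/output data, adapting one sample at a time.

rng = np.random.default_rng(1)
w_true = np.array([0.7, -0.2])          # unknown "model" parameters
x = rng.normal(size=2000)               # excitation signal
d = np.convolve(x, w_true)[:len(x)]     # measured (desired) response

w = np.zeros(2)
mu = 0.05                               # LMS step size
for n in range(1, len(x)):
    u = np.array([x[n], x[n-1]])        # current regressor
    e = d[n] - w @ u                    # a priori error
    w = w + mu * e * u                  # LMS update

print(np.round(w, 2))                   # should approach w_true
```

The same recursion, driven by measured dynamic reconstruction errors instead of a synthetic FIR system, is the kind of identification the paper applies to its dynamic error model.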
Linearly convergent stochastic heavy ball method for minimizing generalization error
Loizou, Nicolas
2017-10-30
In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
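A minimal sketch of the method on a synthetic quadratic expected loss (the stepsize, momentum, and data below are illustrative choices, not values from the paper's analysis):

```python
import numpy as np

# Sketch of the stochastic heavy ball method on a synthetic quadratic
# expected loss f(x) = E_i[(a_i^T x - b_i)^2]; all constants are
# illustrative, not taken from the paper.

rng = np.random.default_rng(2)
A = rng.normal(size=(500, 5))
x_star = rng.normal(size=5)
b = A @ x_star                       # consistent system: zero loss at x_star

x = np.zeros(5)
x_prev = np.zeros(5)
alpha, beta = 0.02, 0.5              # fixed stepsize and momentum

for _ in range(5000):
    i = rng.integers(len(b))         # sample one data point
    g = 2.0 * (A[i] @ x - b[i]) * A[i]   # stochastic gradient of the loss
    x, x_prev = x - alpha*g + beta*(x - x_prev), x

print(round(float(np.linalg.norm(x - x_star)), 6))
```

Because the system is consistent, the stochastic gradient noise vanishes at the solution, which is the regime where linear convergence of this kind can be observed.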
Study of on-machine error identification and compensation methods for micro machine tools
International Nuclear Information System (INIS)
Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng
2016-01-01
Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerance. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by the re-installment of the workpiece, the measurement and compensation method should be on-machine conducted. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image re-constructive method, camera pixel correction, coordinate transformation, the error identification algorithm, and trajectory auto-correction method, a vision-based error measurement and compensation method that can on-machine inspect the micro machining errors and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the work piece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contour, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
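The measurement chain described above can be caricatured in a few lines: an edge detector, per-row edge localization, and a pixel-to-millimeter calibration factor. A plain gradient-magnitude detector stands in for Canny here, and the image, deviation, and calibration value are all invented for illustration.

```python
import numpy as np

# Simplified stand-in for the vision-based measurement step: locate the
# edge of a machined contour in a synthetic image by gradient magnitude
# (the paper uses Canny edge detection), then convert pixel deviations
# to machining error via an assumed camera calibration factor.

img = np.zeros((20, 20))
img[:, 11:] = 1.0                  # ideal vertical edge at column 11
img[8, 10:] = 1.0                  # one row deviates by a pixel (a "cut error")

gx = img[:, 1:] - img[:, :-1]      # horizontal gradient
edge_cols = np.argmax(np.abs(gx), axis=1)   # edge location per row (pixels)

mm_per_pixel = 0.005               # assumed pixel calibration
theoretical_col = edge_cols[0]     # nominal contour from the NC program
error_mm = (edge_cols - theoretical_col) * mm_per_pixel
print(error_mm[8], error_mm[7])    # deviating row vs. a nominal row
```

In the actual method these per-point errors between the re-constructed and theoretical contours are what gets fed back into the corrected NC program.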
Hu, Juju; Hu, Haijiang; Ji, Yinghua
2010-03-15
Periodic nonlinearity ranging from tens of nanometers down to a few nanometers in heterodyne interferometers limits their use in high accuracy measurement. A novel method is studied to detect the nonlinearity errors based on electrical subdivision and statistical signal analysis in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method can detect the nonlinearity errors by using regression analysis and jackknife estimation. Based on the analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in a heterodyne Michelson interferometer.
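A sketch of the jackknife step on simulated data (the signal amplitude, noise level, and quadrature-projection estimator are assumptions for illustration, not the paper's signal model):

```python
import numpy as np

# Sketch of the statistical-signal idea: estimate a parameter of the
# periodic nonlinearity (here, the amplitude of a residual sinusoid in
# simulated displacement residuals) and its uncertainty by leave-one-out
# jackknife resampling.

rng = np.random.default_rng(3)
n = 400
phase = np.linspace(0, 8*np.pi, n)
resid = 2.0*np.sin(phase) + rng.normal(scale=0.3, size=n)   # nm, synthetic

def amplitude(r, ph):
    """Amplitude estimate via quadrature projection onto sin/cos."""
    return 2*np.hypot(np.mean(r*np.sin(ph)), np.mean(r*np.cos(ph)))

theta = amplitude(resid, phase)
# leave-one-out jackknife estimates and the jackknife standard error
loo = np.array([amplitude(np.delete(resid, i), np.delete(phase, i))
                for i in range(n)])
se = np.sqrt((n - 1)/n * np.sum((loo - loo.mean())**2))
print(round(theta, 2), round(se, 3))
```

The jackknife standard error quantifies how much the nonlinearity estimate itself is perturbed by the measurement noise, which is the point of estimating rather than minimizing the error.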
A platform-independent method for detecting errors in metagenomic sequencing data: DRISEE.
Directory of Open Access Journals (Sweden)
Kevin P Keegan
Full Text Available We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation), to assess sequencing quality (alternatively referred to as "noise" or "error") within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole-sample) error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of quality scores (e.g., Phred). Here, DRISEE is applied to (non-amplicon) data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs), a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.
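The core of the ADR idea can be sketched in a few lines of pure Python (the reads and the consensus rule below are a toy, not the DRISEE implementation):

```python
# Toy illustration of the duplicate-read idea: artifactual duplicates
# should be identical, so positionwise disagreements among them give
# a positional error profile and an overall error-rate estimate.

reads = [                      # one bin of artifactual duplicate reads
    "ACGTACGTAC",
    "ACGTACGTAC",
    "ACGTACCTAC",              # one substitution at position 6
    "ACGTACGTAT",              # one substitution at position 9
]

length = len(reads[0])
errors_by_pos = [0]*length
for pos in range(length):
    column = [r[pos] for r in reads]
    consensus = max(set(column), key=column.count)   # majority base
    errors_by_pos[pos] = sum(c != consensus for c in column)

total = sum(errors_by_pos)
rate = total / (len(reads) * length)
print(errors_by_pos, rate)     # positional profile and overall error rate
```

The positional profile is what would inform read trimming; aggregating it over many duplicate bins gives the whole-sample estimate.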
Ketcheson, David I.; Loczi, Lajos; Parsani, Matteo
2014-01-01
of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods
Directory of Open Access Journals (Sweden)
SURE KÖME
2014-12-01
In this paper, we investigate the effect of the Magnus series expansion method on homogeneous stiff ordinary differential equations with different stiffness ratios. A Magnus-type integrator is used to obtain numerical solutions of two examples of stiff problems, and exact and approximate results are tabulated. Furthermore, absolute error graphs are presented in detail.
International Nuclear Information System (INIS)
Fournier, D.; Le Tellier, R.; Suteau, C.; Herbin, R.
2011-01-01
The solution of the time-independent neutron transport equation in a deterministic way invariably consists in the successive discretization of the three variables: energy, angle and space. In the SNATCH solver used in this study, the energy and the angle are respectively discretized with a multigroup approach and the discrete ordinates method. A set of spatially coupled transport equations is obtained and solved using the Discontinuous Galerkin Finite Element Method (DGFEM). Within this method, the spatial domain is decomposed into elements and the solution is approximated by a hierarchical polynomial basis in each one. This approach is time- and memory-consuming when the mesh becomes fine or the basis order high. To improve the computational time and the memory footprint, adaptive algorithms are proposed. These algorithms are based on an error estimation in each cell. If the error is large in a given region, the mesh has to be refined (h-refinement) or the polynomial basis order increased (p-refinement). This paper addresses the choice between the two types of refinement. Two ways of estimating the error are compared on different benchmarks, and from an analysis of the differences, an hp-refinement method is proposed and tested. (author)
L∞-error estimates of a finite element method for the Hamilton-Jacobi-Bellman equations
International Nuclear Information System (INIS)
Bouldbrachene, M.
1994-11-01
We study the finite element approximation of the solution of the Hamilton-Jacobi-Bellman equations involving a system of quasi-variational inequalities (QVI). We also give the optimal L∞-error estimates, using the concepts of subsolutions and discrete regularity. (author). 7 refs
International Nuclear Information System (INIS)
Schunert, Sebastian; Azmy, Yousry Y.
2011-01-01
The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinates (DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally, fine-mesh solutions are employed as references, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries, which provides analytical solutions averaged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine-mesh reference solutions, since it secures knowledge of the underlying true solution, and second that it allows for an arbitrary order of smoothness of the underlying exact solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solutions with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme. (author)
Numerical multistep methods for the efficient solution of quantum mechanics and related problems
International Nuclear Information System (INIS)
Anastassi, Z.A.; Simos, T.E.
2009-01-01
In this paper we present the recent development in the numerical integration of the Schroedinger equation and related systems of ordinary differential equations with oscillatory solutions, such as the N-body problem. We examine several types of multistep methods (explicit, implicit, predictor-corrector, hybrid) and several properties (P-stability, trigonometric fitting of various orders, phase fitting, high phase-lag order, algebraic order). We analyze the local truncation error and the stability of the methods. The error for the Schroedinger equation is also presented, which reveals the relation of the error to the energy. The efficiency of the methods is evaluated through the integration of five problems. Figures are presented and analyzed and some general conclusions are made. Code written in Maple is given for the development of all methods analyzed in this paper. Also the subroutines written in Matlab, that concern the integration of the methods, are presented.
Method of processing plutonium and uranium solution
International Nuclear Information System (INIS)
Otsuka, Katsuyuki; Kondo, Isao; Suzuki, Toru.
1989-01-01
Plutonium nitrate and uranyl nitrate solutions recovered in the solvent extraction step in reprocessing plants and nuclear fuel production plants are given a low-temperature treatment, freeze-drying under vacuum, yielding residues containing nitrates, which are denitrated under heating and calcined under reduction into powders. That is, since the complicated steps of heating, concentration and denitration conducted so far for the plutonium and uranyl solutions are replaced with a single step of freeze-drying under vacuum, the process can be simplified significantly. In addition, since the treatment is applied at low temperature, corrosion of the evaporator materials, etc. can be prevented. Further, the number of operators can be reduced by dividing the operations into recovery of solidification products, supply and sintering of the solutions, and vacuum sublimation. Further, since the nitrates processed at low temperature are powderized by heating denitration, the powderization step can be simplified. The specific surface area and the grain size distribution of the powder are made appropriate, and it is possible to obtain oxide powders with physical properties easily prepared into pellets. (N.H.)
Error analysis in Fourier methods for option pricing for exponential Lévy processes
Crocce, Fabian; Häppölä, Juho; Kiessling, Jonas; Tempone, Raul
2015-01-01
We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions
International Nuclear Information System (INIS)
Kowsary, F.; Pooladvand, K.; Pourshaghaghy, A.
2007-01-01
In this paper, an appropriate distribution of the heating elements' strengths in a radiation furnace is estimated using inverse methods so that a pre-specified temperature and heat flux distribution is attained on the design surface. Minimization of the sum of the squares of the error function is performed using the variable metric method (VMM), and the results are compared with those obtained by the conjugate gradient method (CGM) established previously in the literature. It is shown via test cases and a well-founded validation procedure that the VMM, when using a 'regularized' estimator, is more accurate and is able to reach a higher-quality final solution than the CGM. The test cases used in this study were two-dimensional furnaces filled with an absorbing, emitting, and scattering gas.
Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography
DEFF Research Database (Denmark)
Müller, P.; Hiller, Jochen; Dai, Y.
2015-01-01
X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have negative impact on the accuracy...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...
Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method
DEFF Research Database (Denmark)
Børsting, H.; Knudsen, Morten; Rasmussen, Henrik
1993-01-01
Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed…
Directory of Open Access Journals (Sweden)
Shanshan He
2015-10-01
Piecewise linear (G01) tool paths generated by CAM systems lack G1 and G2 continuity. The discontinuity causes vibration and unnecessary hesitation during machining. To ensure efficient high-speed machining, a method to improve the continuity of the tool paths is required, such as B-spline fitting that approximates G01 paths with B-spline curves. Conventional B-spline fitting approaches cannot be directly used for tool path B-spline fitting, because they have shortcomings such as numerical instability, lack of a chord error constraint, and lack of assurance of a usable result. Progressive and Iterative Approximation for Least Squares (LSPIA) is an efficient method for data fitting that solves the numerical instability problem. However, it does not consider chord errors and needs more work to ensure ironclad results for commercial applications. In this paper, we use the LSPIA method incorporating an energy term (ELSPIA) to avoid numerical instability, and lower chord errors by using a stretching energy term. We implement several algorithmic improvements, including (1) an improved technique for initial control point determination over the Dominant Point Method, (2) an algorithm that updates foot point parameters as needed, (3) analysis of the degrees of freedom of control points to insert new control points only when needed, and (4) chord error refinement using a similar ELSPIA method with the above enhancements. The proposed approach can generate a shape-preserving B-spline curve. Experiments with data analysis and machining tests are presented for verification of quality and efficiency. Comparisons with other known solutions are included to evaluate the worthiness of the proposed solution.
von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón
2007-09-01
Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
Error baseline rates of five sample preparation methods used to characterize RNA virus populations.
Directory of Open Access Journals (Sweden)
Jeffrey R Kugelman
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4-5) of all compared methods.
Error of the slanted edge method for measuring the modulation transfer function of imaging systems.
Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu
2018-03-01
The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory, and an error model combining noise and edge angles is proposed. We verify the analyses and model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
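The pipeline the analysis refers to can be sketched as follows: differentiate the (oversampled) edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform. This is a minimal illustration that omits the edge-angle projection, binning, and noise handling the paper analyzes.

```python
import numpy as np

def mtf_from_edge(esf):
    """Estimate the MTF from a noise-free, already oversampled edge
    spread function: differentiate to get the line spread function,
    then take the magnitude of its DFT, normalized to 1 at DC."""
    lsf = np.diff(esf)                 # edge spread -> line spread
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]      # normalize so MTF(0) = 1
```

An ideal step edge yields a delta-function LSF and hence a flat MTF, while any blur suppresses the high-frequency response, which is what the slanted edge method measures.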
Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme
Directory of Open Access Journals (Sweden)
Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin
2012-08-01
The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct motor torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation methods for acquiring the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, for which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Keywords: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.
A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine
Directory of Open Access Journals (Sweden)
Bian Xiangjuan
2014-05-01
In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built; then the law of error distribution in the whole workspace was discussed, and the maximum-error position of the system was found; finally, the sensitivities of the error parameters were analyzed at the maximum-error position, and accuracy synthesis was conducted using the Monte Carlo method. Considering the error sensitivity analysis, the accuracy of the main parts was distributed. Results show that the probability that the maximal volume error is less than 0.05 mm was improved from 0.6592 for the old scheme to 0.7021 for the new scheme, so the precision of the system was improved markedly. The model can be used for the error analysis and accuracy synthesis of complex multi-branch kinematic chain systems and to improve the manufacturing precision of such systems.
The use of error and uncertainty methods in the medical laboratory.
Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar
2018-01-26
Error methods - compared with uncertainty methods - offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science as reflected by the guide to the expression of uncertainty in measurement. When laboratory results are used for supporting medical diagnoses, the total uncertainty consists only partially of analytical variation. Biological variation, pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
Human Reliability Analysis with the Cognitive Reliability and Error Analysis Method (CREAM) Approach
Directory of Open Access Journals (Sweden)
Zahirah Alifia Maulida
2015-01-01
Workplace accidents in grinding and welding have ranked highest over the last five years at PT. X. These accidents are caused by human error, which arises from the influence of the physical and non-physical working environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method used to obtain the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, while the extended method yields a CFP for each task. The results show that the factors influencing errors in grinding and welding work are the adequacy of the organization, the adequacy of the Man-Machine Interface (MMI) and operational support, the availability of procedures and plans, and the adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3; in welding work it is execution, with a CFP of 0.18. To reduce cognitive error in grinding and welding work, the recommendations are to provide regular training, more detailed work instructions, and familiarization with tools. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error.
PARALLEL SOLUTION METHODS OF PARTIAL DIFFERENTIAL EQUATIONS
Directory of Open Access Journals (Sweden)
Korhan KARABULUT
1998-03-01
Partial differential equations arise in almost all fields of science and engineering, and more computer time is spent solving them than any other class of problem. For this reason, partial differential equations are well suited to parallel computers, which offer great computational power. In this study, parallel solution of partial differential equations with the Jacobi, Gauss-Seidel, SOR (Successive Over-Relaxation) and SSOR (Symmetric SOR) algorithms is studied.
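A serial sketch of the SOR iteration studied here (setting omega = 1 reduces it to Gauss-Seidel); the grid size, boundary values, and parameters are invented for illustration, and the paper's parallel decomposition is omitted:

```python
import numpy as np

def sor_laplace(u, omega=1.5, iters=500):
    """Solve Laplace's equation on the interior of grid `u` by SOR;
    the boundary values of `u` are held fixed as Dirichlet data."""
    u = u.copy()
    for _ in range(iters):
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                # Gauss-Seidel value: average of the four neighbours
                gs = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
                # over-relax the update by the factor omega
                u[i, j] += omega * (gs - u[i, j])
    return u
```

Because each update reads values already written in the current sweep, Gauss-Seidel/SOR have data dependencies that make parallelization nontrivial (e.g., red-black ordering), whereas Jacobi updates all points from the previous sweep and parallelizes directly.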
Vasil'ev, V. I.; Kardashevsky, A. M.; Popov, V. V.; Prokopev, G. A.
2017-10-01
This article presents the results of a computational experiment carried out using a finite-difference method for solving the inverse Cauchy problem for a two-dimensional elliptic equation. The computational algorithm involves iterative determination of the missing boundary condition from the overdetermination condition using the conjugate gradient method. Results of calculations for examples with exact solutions, as well as for cases where the additional condition is specified with random errors, are presented. The results showed the high efficiency of the conjugate gradient iterative method for the numerical solution
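The conjugate gradient iteration at the heart of such an algorithm can be sketched for a generic symmetric positive-definite system; this is a textbook version, not the article's specific discretization of the inverse Cauchy problem.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient iteration for a symmetric
    positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps for an n-by-n system, which is one reason it is attractive as the inner solver of an iterative boundary-recovery loop.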
A method for analysing incidents due to human errors on nuclear installations
International Nuclear Information System (INIS)
Griffon, M.
1980-01-01
This paper deals with the development of a methodology adapted to the detailed analysis of incidents considered to be due to human errors. An identification of human errors and a search for their possible multiple causes is then needed. They are categorized into eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant, and performance of the operator. The method is illustrated by the analysis of a handling incident generated by multiple human errors. (author)
Calculating method on human error probabilities considering influence of management and organization
International Nuclear Information System (INIS)
Gao Jia; Huang Xiangrui; Shen Zupei
1996-01-01
This paper is concerned with how management and organizational influences can be factored into the quantification of human error probabilities in risk assessments, using a three-level Influence Diagram (ID), originally a tool for constructing and representing models of decision-making trees or event trees. An analytical model of human error causation has been set up with three influence levels, introducing a method for quantitative assessment of the ID that can be applied to quantifying the probabilities of human errors in risk assessments, especially to the quantification of complex event trees (systems) in engineering decision-making analysis. A numerical case study is provided to illustrate the approach
Statistical analysis with measurement error or misclassification strategy, method and application
Yi, Grace Y
2017-01-01
This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...
Re-Normalization Method of Doppler Lidar Signal for Error Reduction
Energy Technology Data Exchange (ETDEWEB)
Park, Nakgyu; Baik, Sunghoon; Park, Seungkyu; Kim, Donglyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Dukhyeon [Hanbat National Univ., Daejeon (Korea, Republic of)
2014-05-15
In this paper, we present a re-normalization method for the fluctuations of Doppler signals arising from various noise sources, mainly the frequency-locking error, in a Doppler lidar system. For the Doppler lidar system, we used an injection-seeded pulsed Nd:YAG laser as the transmitter and an iodine filter as the Doppler frequency discriminator. For the Doppler frequency shift measurement, the transmission ratio using the injection-seeded laser is locked to stabilize the frequency. If the frequency-locking system is not perfect, the Doppler signal has some error due to the frequency-locking error. The re-normalization of the Doppler signals was performed to reduce this error, using an additional laser beam passed to an iodine cell. We confirmed that the re-normalized Doppler signal is much more stable than the averaged Doppler signal obtained with our calibration method; the standard deviation was reduced to 4.838 × 10⁻³.
Perturbation method for periodic solutions of nonlinear jerk equations
International Nuclear Information System (INIS)
Hu, H.
2008-01-01
A Lindstedt-Poincare type perturbation method with bookkeeping parameters is presented for determining accurate analytical approximate periodic solutions of some third-order (jerk) differential equations with cubic nonlinearities. In the process of the solution, higher-order approximate angular frequencies are obtained by Newton's method. A typical example is given to illustrate the effectiveness and simplicity of the proposed method
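As a hedged illustration of the Lindstedt-Poincare procedure described above (the specific jerk equation and coefficients below are an assumed example, not taken from the paper):

```latex
% Example jerk equation with a cubic nonlinearity, in scaled time \tau = \omega t:
\[
  \dddot{x} + \dot{x} = \varepsilon\,\dot{x}^{3}.
\]
% Expand the solution and the frequency in the bookkeeping parameter \varepsilon:
\[
  x = x_0(\tau) + \varepsilon\,x_1(\tau) + \cdots,
  \qquad
  \omega = 1 + \varepsilon\,\omega_1 + \cdots .
\]
% The O(1) problem gives x_0(\tau) = A\cos\tau. At O(\varepsilon), with
% primes denoting d/d\tau, the forcing proportional to \sin\tau is secular
% and its coefficient must vanish:
\[
  x_1''' + x_1' = -A^{3}\sin^{3}\tau - 2\omega_1 A\sin\tau
  \;\Longrightarrow\;
  -\tfrac{3}{4}A^{3} - 2\omega_1 A = 0
  \;\Longrightarrow\;
  \omega_1 = -\tfrac{3}{8}A^{2},
\]
% so the approximate angular frequency is \omega \approx 1 - \tfrac{3}{8}\varepsilon A^{2}.
```

Higher-order corrections to the frequency follow the same pattern of eliminating secular terms order by order, which is where the Newton iteration mentioned in the abstract enters for the angular frequency.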
A Multipoint Method for Detecting Genotyping Errors and Mutations in Sibling-Pair Linkage Data
Douglas, Julie A.; Boehnke, Michael; Lange, Kenneth
2000-01-01
The identification of genes contributing to complex diseases and quantitative traits requires genetic data of high fidelity, because undetected errors and mutations can profoundly affect linkage information. The recent emphasis on the use of the sibling-pair design eliminates or decreases the likelihood of detection of genotyping errors and marker mutations through apparent Mendelian incompatibilities or close double recombinants. In this article, we describe a hidden Markov method for detect...
Round-off error in long-term orbital integrations using multistep methods
Quinlan, Gerald D.
1994-01-01
Techniques for reducing roundoff error are compared by testing them on high-order Störmer and symmetric multistep methods. The best technique for most applications is to write the equation in summed, function-evaluation form and to store the coefficients as rational numbers. A larger error reduction can be achieved by writing the equation in backward-difference form and performing some of the additions in extended precision, but this entails a larger central processing unit (CPU) cost.
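Performing "some of the additions in extended precision" can be mimicked in software with compensated summation. The sketch below uses Neumaier's variant of Kahan summation as an illustrative stand-in; it is not the paper's backward-difference scheme.

```python
def neumaier_sum(values):
    """Compensated summation (Neumaier's variant of Kahan's algorithm):
    the low-order bits lost in each addition are accumulated in a
    separate compensation term and restored at the end."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for v in values:
        t = total + v
        if abs(total) >= abs(v):
            comp += (total - t) + v   # low-order bits of v were lost
        else:
            comp += (v - t) + total   # low-order bits of total were lost
        total = t
    return total + comp
```

With an input like `[1.0, 1e100, 1.0, -1e100]`, naive left-to-right summation loses both small terms to rounding, while the compensated sum recovers them.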
Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems
International Nuclear Information System (INIS)
Stephansen, A.F.
2007-12-01
In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)
Estimating misclassification error: a closer look at cross-validation based methods
Directory of Open Access Journals (Sweden)
Ounpraseuth Songthip
2012-11-01
Abstract Background: To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings: For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions: We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
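The k-fold CV estimate discussed above can be sketched generically. The nearest-centroid classifier and the synthetic two-class data here are illustrative stand-ins, not the classifiers or datasets used in the study.

```python
import numpy as np

def kfold_error(X, y, k=5, seed=0):
    """k-fold cross-validation estimate of misclassification error,
    using a nearest-centroid classifier as a stand-in for any learner."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        # Fit: one centroid per class on the training folds.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        # Predict: nearest centroid for each held-out point.
        pred = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                for x in X[test]]
        errs.append(np.mean(np.array(pred) != y[test]))
    return float(np.mean(errs))

# Two well-separated Gaussian classes: the estimated error should be near 0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(kfold_error(X, y))
```

Averaging the per-fold error rates, as here, is the conventional k-fold estimator whose modest positive bias the study contrasts with BCV's negative bias.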
SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA
International Nuclear Information System (INIS)
Jang, Inseok; Jung, Wondea; Seong, Poong Hyun
2015-01-01
Many HRA methods have been developed in relation to NPP maintenance and operation, including the Technique for Human Error Rate Prediction (THERP), the Korean Human Reliability Analysis (K-HRA) method, the Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), the Cognitive Reliability and Error Analysis Method (CREAM), and the Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) method. Most of these methods were developed with the conventional type of Main Control Room (MCR) in mind. They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been changed considerably by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in NPP advanced MCRs are performed through them. Consequently, the conventional methods may not sufficiently cover soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested, based on a soft control task analysis and a literature review of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors that an HRA method for advanced MCRs should encompass are derived from the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, focusing mainly on the features of soft controls. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation.
Correction method for the error of diamond tool's radius in ultra-precision cutting
Wang, Yi; Yu, Jing-chi
2010-10-01
Compensation for the error of the diamond tool's cutting edge is a bottleneck technology hindering the direct formation of high-accuracy aspheric surfaces after single-point diamond turning. Traditionally, compensation was done according to measurement results from a profilometer, which required long measurement times and led to low processing efficiency. A new compensation method is put forward in this article, in which the error of the diamond tool's cutting edge is corrected according to measurement results from a digital interferometer. First, the detailed theoretical calculation underlying the compensation method is deduced. Then, the effect of the compensation is simulated by computer. Finally, a φ50 mm workpiece underwent diamond turning and the new corrective turning on a Nanotech 250. The tested surface achieved a high shape accuracy of PV 0.137λ and RMS 0.011λ, which confirms that the new compensation method agrees with the predictive analysis and offers high accuracy and fast error convergence.
The nuclear physical method for high pressure steam manifold water level gauging and its error
International Nuclear Information System (INIS)
Li Nianzu; Li Beicheng; Jia Shengming
1993-10-01
A new non-contact method for measuring the water level of a high-pressure steam manifold with nuclear detection techniques is introduced. This method overcomes the inherent drawbacks of previous water level gauges based on other principles. It can realize full-range real-time monitoring of the continuous water level of a high-pressure steam manifold from boiler start-up to full load, and the actual value of the water level can be obtained. The measurement errors were analysed on site. Error data from practical operation in the Tianjin Junliangcheng Power Plant and from the laboratory are also presented.
Development of an analysis rule of diagnosis error for standard method of human reliability analysis
International Nuclear Information System (INIS)
Jeong, W. D.; Kang, D. I.; Jeong, K. S.
2003-01-01
This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated against the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far to analyze diagnosis error probability are suggested as a part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules.
A new method for weakening the combined effect of residual errors on multibeam bathymetric data
Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue
2014-12-01
Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, several factors degrade the precision of the bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps are involved in the method: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend and the extracted microtopography, and accuracy evaluation. Experimental results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
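The first step of the method above — splitting a depth profile into a low-frequency trend and a high-frequency residual — can be sketched with a simple moving-average low-pass filter. This is a generic illustration, not the paper's spectral procedure; the window length and the synthetic profile are arbitrary.

```python
import numpy as np

def split_trend(depths, window=25):
    """Separate a bathymetric profile into a low-frequency trend
    (large-scale topography) and a high-frequency residual
    (microtopography plus residual systematic error)."""
    kernel = np.ones(window) / window
    # 'same'-mode moving average as a crude low-pass filter;
    # edge effects (zero padding) are ignored in this sketch.
    trend = np.convolve(depths, kernel, mode="same")
    residual = depths - trend
    return trend, residual

# Synthetic profile: smooth large-scale topography + fine microtopography.
x = np.linspace(0, 10, 500)
profile = 100 + 5 * np.sin(0.3 * x) + 0.2 * np.sin(40 * x)
trend, resid = split_trend(profile)
```

In the paper's workflow the reconstructed trend would then replace the error-contaminated trend before the microtopography is merged back in.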
Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain
Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young
2010-01-01
Background: Statistical analysis is essential for obtaining objective reliability in medical research. However, medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention of improving the statistical quality of the journal. Methods: All the articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of applied statistical methods and the errors in the articles were evaluated. Results: One hundred and thirty-nine original articles were reviewed. Inferential statistics and descriptive statistics were used in 119 papers and 20 papers, respectively. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%), followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). Errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions: We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only in applying statistical procedures but also in the reviewing process, to improve the value of the articles. PMID:20552071
Rani, Monika; Bhatti, Harbax S.; Singh, Vikramjeet
2017-11-01
In optical communication, the behavior of ultrashort optical soliton pulses can be described through the nonlinear Schrödinger equation. This partial differential equation is widely used to study a number of physically important phenomena, including optical shock waves, laser and plasma physics, quantum mechanics, and elastic media. The exact analytical solution of the (1+n)-dimensional higher-order nonlinear Schrödinger equation by He's variational iteration method is presented. The proposed solutions are very helpful in studying solitary wave phenomena, yield rapidly convergent series, and avoid round-off errors. Different examples with graphical representations are given to justify the capability of the method.
The systematic error of temperature noise correlation measurement method and self-calibration
International Nuclear Information System (INIS)
Tian Hong; Tong Yunxian
1993-04-01
The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.
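The underlying time-correlation velocity measurement can be sketched generically: two sensors a known distance apart see the same temperature fluctuations, and the lag of the cross-correlation peak gives the transit time. The signal model and numbers below are illustrative, not from the report.

```python
import numpy as np

def transit_time_velocity(up, down, dt, distance):
    """Estimate flow velocity from two noise signals recorded a known
    distance apart: the lag of the cross-correlation peak is the
    transit time of the fluctuations between the sensors."""
    up = up - up.mean()
    down = down - down.mean()
    xcorr = np.correlate(down, up, mode="full")
    lag = np.argmax(xcorr) - (len(up) - 1)  # delay in samples
    return distance / (lag * dt)

# Synthetic example: the downstream sensor sees the same noise 40 samples later.
rng = np.random.default_rng(0)
noise = rng.normal(size=2000)
delay = 40
up = noise[delay:]     # upstream sensor sees the fluctuation first
down = noise[:-delay]  # downstream sensor sees it `delay` samples later
print(transit_time_velocity(up, down, dt=1e-3, distance=0.1))  # ≈ 2.5 m/s
```

The report's point is that such a correlation measurement, once the systematic error is calibrated out, needs no empirical velocity reference, which is what makes it an absolute method.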
Error analysis of some Galerkin - least squares methods for the elasticity equations
International Nuclear Information System (INIS)
Franca, L.P.; Stenberg, R.
1989-05-01
We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least squares terms calculated separately on each element. The error analysis is performed in a unified manner, yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed.
Error analysis and system improvements in phase-stepping methods for photoelasticity
International Nuclear Information System (INIS)
Wenyan Ji
1997-11-01
In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and the experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in arbitrary orientations was deduced using the method of Mueller matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and gives insight into the errors. In addition, this formulation shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and develop the proposed improvements. This system can be divided into four parts according to function, namely the optical system, the light source, the image acquisition equipment and the image analysis software. All the possible error sources were investigated separately, and methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. The contribution to the results from different error sources can therefore be estimated quantitatively, and finally the accuracy of the system can be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order, with an error of less than 5 degrees for the isoclinic angle. The PSIOS system is limited to low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe
Solution of problems in calculus of variations via He's variational iteration method
International Nuclear Information System (INIS)
Tatari, Mehdi; Dehghan, Mehdi
2007-01-01
In the modeling of a large class of problems in science and engineering, the minimization of a functional appears. Finding the solution of these problems requires solving the corresponding ordinary differential equations, which are generally nonlinear. In recent years, He's variational iteration method has attracted a lot of attention from researchers for solving nonlinear problems. This method finds the solution of the problem without any discretization of the equation. Since this method gives a closed-form solution of the problem and avoids round-off errors, it can be considered an efficient method for solving various kinds of problems. In this research, He's variational iteration method is employed for solving some problems in calculus of variations. Some examples are presented to show the efficiency of the proposed technique.
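The core iteration of He's variational iteration method can be sketched in its standard textbook form (this is the generic correction functional, not the specific functionals treated in the paper). For an equation $Lu + Nu = g(t)$ with linear part $L$ and nonlinear part $N$, successive approximations are generated by

```latex
u_{n+1}(t) = u_n(t)
  + \int_0^{t} \lambda(s)\,\bigl\{\, L u_n(s) + N\tilde{u}_n(s) - g(s) \,\bigr\}\,\mathrm{d}s ,
```

where $\lambda$ is a general Lagrange multiplier identified optimally via variational theory, and $\tilde{u}_n$ denotes a restricted variation ($\delta\tilde{u}_n = 0$). Because each iterate $u_{n+1}$ is a closed-form expression built from $u_n$, no discretization is needed, which is why the method avoids round-off error.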
A Method and Support Tool for the Analysis of Human Error Hazards in Digital Devices
International Nuclear Information System (INIS)
Lee, Yong Hee; Kim, Seon Soo; Lee, Yong Hee
2012-01-01
In recent years, many nuclear power plants have adopted modern digital I&C technologies, since these are expected to significantly improve both the economic efficiency and the safety of the plants. However, the introduction of an advanced main control room (MCR) is accompanied by many changes in the forms and features of the interface by virtue of the new digital devices. Many user-friendly displays and new features in digital devices are not enough to prevent human errors in nuclear power plants (NPPs). It may be an urgent matter to find the human error potentials due to digital devices, together with their detailed mechanisms, so that they can be considered during the design of digital devices and their interfaces. The characteristics of digital technologies and devices offer many opportunities for interface management, and they can be integrated into a compact single workstation in an advanced MCR so that workers can operate the plant with minimum burden under any operating condition. However, these devices may introduce new types of human errors, and thus a means is needed to evaluate and prevent such errors, especially within digital devices for NPPs. This research suggests a new method named HEA-BIS (Human Error Analysis based on Interaction Segment) to confirm and detect human errors associated with digital devices. This method can be facilitated by support tools when used to ensure safety in applying digital devices in NPPs.
SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER
International Nuclear Information System (INIS)
QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.
2007-01-01
Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam results in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference-beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.
A method for local transport analysis in tokamaks with error calculation
International Nuclear Information System (INIS)
Hogeweij, G.M.D.; Hordosy, G.; Lopes Cardozo, N.J.
1989-01-01
Global transport studies have revealed that heat transport in a tokamak is anomalous, but they cannot provide information about the nature of the anomaly. Therefore, local transport analysis is essential for the study of anomalous transport. However, the determination of local transport coefficients is not a trivial affair. Generally speaking, one can either directly measure the heat diffusivity, χ, by means of heat pulse propagation analysis, or deduce the profile of χ from measurements of the profiles of the temperature, T, and the power deposition. Here we are concerned only with the latter method, the local power balance analysis. For the sake of clarity, heat diffusion only is considered: ρ = -∇T/q (1), where ρ = κ⁻¹ = (nχ)⁻¹ is the heat resistivity and q is the heat flux per unit area. It is assumed that the profiles T(r) and q(r) are given with some experimental error. In practice, T(r) is measured directly, e.g. from ECE spectroscopy, while q(r) is deduced from the power deposition and loss profiles; the latter cannot be measured directly and is partly determined on the basis of models. This complication will not be considered here. Since the gradient of T appears in eq. (1), noise on T can severely affect the solution ρ. This means that in general some form of smoothing must be applied, and a criterion is needed to select the optimal smoothing: too much smoothing will wipe out the details, whereas with too little smoothing the noise will distort the reconstructed profile of ρ. Here a new method to solve eq. (1) is presented which expresses ρ(r) as a cosine series. The coefficients of this series are given as linear combinations of the Fourier coefficients of the measured T- and q-profiles. This formulation allows (1) the stable and accurate calculation of the ρ-profile, and (2) the analytical calculation of the error in this profile. (author)
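The idea of representing ρ(r) as a cosine series can be sketched with a least-squares fit; this is a simplified stand-in for the paper's construction, which derives the series coefficients analytically from the Fourier coefficients of the T- and q-profiles. The synthetic profiles below are illustrative.

```python
import numpy as np

def rho_cosine_series(r, T, q, n_terms=6):
    """Fit a cosine series to the local heat resistivity
    rho = -(dT/dr)/q by linear least squares."""
    rho = -np.gradient(T, r) / q
    a = r[-1]  # outer radius of the profile
    basis = np.column_stack([np.cos(k * np.pi * r / a) for k in range(n_terms)])
    coef, *_ = np.linalg.lstsq(basis, rho, rcond=None)
    return coef, basis @ coef

# Synthetic profiles with a known resistivity rho_true and smooth flux q:
# T is obtained by integrating -dT/dr = rho_true * q (trapezoid rule).
r = np.linspace(0.01, 1.0, 200)
rho_true = 1.0 + 0.5 * np.cos(np.pi * r)
q = np.exp(-r)
integrand = rho_true * q
steps = (integrand[1:] + integrand[:-1]) / 2 * np.diff(r)
T = 5.0 - np.concatenate([[0.0], np.cumsum(steps)])

coef, rho_fit = rho_cosine_series(r, T, q)
print(np.max(np.abs(rho_fit - rho_true)))  # small reconstruction error
```

Truncating the series acts as the smoothing step the abstract describes: fewer terms suppress noise, more terms preserve detail.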
An error compensation method for a linear array sun sensor with a V-shaped slit
International Nuclear Information System (INIS)
Fan, Qiao-yun; Tan, Xiao-feng
2015-01-01
Existing methods of improving measurement accuracy, such as polynomial fitting and increasing the number of pixels, cannot simultaneously guarantee high precision and good miniaturization of a micro sun sensor. Therefore, a novel, integrated and accurate error compensation method is proposed. A mathematical error model is established according to the analysis of all the contributing factors, and the model parameters are calculated through multi-set simultaneous calibration. Numerical simulation results prove that the calibration method is unaffected by installation errors introduced by the calibration process, is capable of precisely separating the sensor's intrinsic and extrinsic parameters, and obtains accurate and robust intrinsic parameters. In the laboratory calibration, the calibration data are generated by using a two-axis rotation table and a sun simulator. The experimental results show that, owing to the proposed error compensation method, the sun sensor's measurement accuracy is improved by a factor of 30 throughout its field of view (±60° × ±60°), with an RMS error of 0.1°. (paper)
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
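The regression calibration idea underlying the paper can be sketched in its classical form: with replicate error-prone measurements W = X + e, the unobserved covariate is replaced by its best linear predictor E[X | W̄]. This is a generic linear-regression illustration, not the Cox-model partial-likelihood approximation the paper develops; all data are synthetic.

```python
import numpy as np

def regression_calibration(W1, W2):
    """Classical regression calibration from two replicates W = X + e:
    return E[X | Wbar] = mu + k * (Wbar - mu), where k is the
    reliability ratio var(X) / var(Wbar)."""
    Wbar = (W1 + W2) / 2
    var_e = np.var(W1 - W2, ddof=1) / 2       # error variance from replicates
    var_wbar = np.var(Wbar, ddof=1)
    var_x = var_wbar - var_e / 2              # var(Wbar) = var(X) + var_e/2
    k = var_x / var_wbar
    return Wbar.mean() + k * (Wbar - Wbar.mean())

rng = np.random.default_rng(0)
X = rng.normal(2.0, 1.0, 5000)                # true (unobserved) mediator
W1 = X + rng.normal(0, 0.8, 5000)             # two error-prone replicates
W2 = X + rng.normal(0, 0.8, 5000)
Y = 3.0 * X + rng.normal(0, 1.0, 5000)        # outcome depends on true X
Xhat = regression_calibration(W1, W2)

naive = np.polyfit((W1 + W2) / 2, Y, 1)[0]    # attenuated toward zero
corrected = np.polyfit(Xhat, Y, 1)[0]         # approximately unbiased
print(naive, corrected)
```

With failure time outcomes the conditioning event complicates this picture, which is why the paper proposes mean-variance and follow-up-time variants of the calibration.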
Determination of solute descriptors by chromatographic methods
International Nuclear Information System (INIS)
Poole, Colin F.; Atapattu, Sanka N.; Poole, Salwa K.; Bell, Andrea K.
2009-01-01
The solvation parameter model is now well established as a useful tool for obtaining quantitative structure-property relationships for chemical, biomedical and environmental processes. The model correlates a free-energy related property of a system to six free-energy derived descriptors describing molecular properties. These molecular descriptors are defined as L (gas-liquid partition coefficient on hexadecane at 298 K), V (McGowan's characteristic volume), E (excess molar refraction), S (dipolarity/polarizability), A (hydrogen-bond acidity), and B (hydrogen-bond basicity). McGowan's characteristic volume is trivially calculated from structure and the excess molar refraction can be calculated for liquids from their refractive index and easily estimated for solids. The remaining four descriptors are derived by experiment using (largely) two-phase partitioning, chromatography, and solubility measurements. In this article, the use of gas chromatography, reversed-phase liquid chromatography, micellar electrokinetic chromatography, and two-phase partitioning for determining solute descriptors is described. A large database of experimental retention factors and partition coefficients is constructed after first applying selection tools to remove unreliable experimental values and an optimized collection of varied compounds with descriptor values suitable for calibrating chromatographic systems is presented. These optimized descriptors are demonstrated to be robust and more suitable than other groups of descriptors characterizing the separation properties of chromatographic systems.
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.
2012-01-01
We study the spatially semidiscrete lumped mass method for the model homogeneous heat equation with homogeneous Dirichlet boundary conditions. Improving earlier results we show that known optimal order smooth initial data error estimates for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods. © 2011 American Mathematical Society.
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Energy Technology Data Exchange (ETDEWEB)
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
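The quantification step described above can be sketched as follows. The nominal-rate-times-multipliers structure and the bounded adjustment formula follow the published SPAR-H method, but the trigger condition is simplified here and the multiplier values are purely illustrative.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """SPAR-H style quantification: scale a nominal human error
    probability (HEP) by the product of performance shaping factor
    (PSF) multipliers. When the composite multiplier is large, SPAR-H
    applies an adjustment that keeps the resulting HEP bounded by 1."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if composite > 1.0:
        # Bounded adjustment formula; the published method applies it
        # when several negative PSFs are present (simplified trigger here).
        return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)
    return nominal_hep * composite

# Illustrative case: nominal HEP 0.01 with two degrading PSFs
# (e.g. inadequate time x10, extreme stress x5 -- hypothetical values).
hep = spar_h_hep(0.01, [10.0, 5.0])
print(round(hep, 4))
```

Note how the adjustment matters: the raw product 0.01 × 50 = 0.5 would already be implausibly close to certainty for many contexts, and with larger multipliers it would exceed 1 without the bounding formula.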
expansion method and travelling wave solutions for the perturbed ...
Indian Academy of Sciences (India)
Abstract. In this paper, we construct the travelling wave solutions to the perturbed nonlinear Schrödinger's equation (NLSE) with Kerr law nonlinearity by the extended (G′/G)-expansion method. Based on this method, we obtain abundant exact travelling wave solutions of the NLSE with Kerr law nonlinearity with arbitrary ...
The functional variable method for finding exact solutions of some ...
Indian Academy of Sciences (India)
Abstract. In this paper, we implemented the functional variable method and the modified. Riemann–Liouville derivative for the exact solitary wave solutions and periodic wave solutions of the time-fractional Klein–Gordon equation, and the time-fractional Hirota–Satsuma coupled. KdV system. This method is extremely simple ...
Comparison of different methods for the solution of sets of linear equations
International Nuclear Information System (INIS)
Bilfinger, T.; Schmidt, F.
1978-06-01
The application of conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices led to this paper, in which a comparison of these methods with the conventional, differently accelerated Gauss-Seidel iteration was carried out. In addition, the direct Cholesky method was also included in the comparison. The studies referred mainly to memory requirements, computing time, speed of convergence, and accuracy under different conditions of the system matrices, from which the sensitivity of the methods to the influence of truncation errors may also be recognized. (orig.)
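The conjugate-gradient iteration being compared can be sketched in its plain (unpreconditioned) form; the test matrix below is an arbitrary symmetric positive definite example, not one of the report's systems.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate-gradient iteration for A x = b,
    valid for symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of direction
        rs = rs_new
    return x

# SPD test system: M M^T plus a diagonal shift guarantees positive definiteness.
rng = np.random.default_rng(0)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)
b = rng.normal(size=30)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm near machine precision
```

Unlike Gauss-Seidel, this iteration needs only matrix-vector products and, in exact arithmetic, terminates in at most n steps, which is what made CG attractive as a general method in the comparison.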
A New Method to Solve Numeric Solution of Nonlinear Dynamic System
Directory of Open Access Journals (Sweden)
Min Hu
2016-01-01
It is well known that the cubic spline function has the advantages of simple form, good convergence, good approximation, and second-order smoothness. A particular class of cubic spline function is constructed, and an effective method to solve the numerical solution of a nonlinear dynamic system is proposed based on the cubic spline function. Compared with existing methods, this method not only has high approximation precision, but also avoids the Runge phenomenon. An error analysis of several methods is given via two numerical examples, which shows that the proposed method is a much more feasible tool for engineering practice.
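The Runge phenomenon the abstract refers to is easy to demonstrate: high-degree polynomial interpolation on equispaced nodes oscillates wildly near the interval ends, while a cubic spline through the same nodes stays close. This sketch uses SciPy's generic `CubicSpline`, not the particular spline class constructed in the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Runge's classic example function on [-1, 1].
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
nodes = np.linspace(-1, 1, 15)
xs = np.linspace(-1, 1, 400)

# Degree-14 interpolating polynomial vs. cubic spline on the same nodes.
poly = np.polyval(np.polyfit(nodes, f(nodes), len(nodes) - 1), xs)
spline = CubicSpline(nodes, f(nodes))(xs)

poly_err = np.max(np.abs(poly - f(xs)))
spline_err = np.max(np.abs(spline - f(xs)))
print(poly_err, spline_err)  # spline error is far smaller
```

Piecewise low-degree interpolation keeps the error local to each subinterval, which is the property the proposed method exploits.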
Approximate solution methods in engineering mechanics
International Nuclear Information System (INIS)
Boresi, A.P.; Cong, K.P.
1991-01-01
This is a short book of 147 pages, including references and, in some chapters, bibliographies at the end of each chapter, with subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods.
Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.
Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun
2016-01-07
This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. The experiment proves that the left and right masses' quadrature errors are different, and that the quadrature correction systems should be arranged independently. The process leading to quadrature error is set out, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.
Reduction of very large reaction mechanisms using methods based on simulation error minimization
Energy Technology Data Exchange (ETDEWEB)
Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)
2009-02-15
A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps other strongly connected sets of species are added, the size of the mechanism is gradually increased and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, which is called the Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism having the least CPU time requirement among the ones having almost the smallest error is selected. Application of SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim is to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species and its simulation is 116 times faster than using the full mechanism. The SEM-CM was found to be more effective than the classic Connectivity Method, and also than the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)
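The greedy loop at the heart of SEM-CM can be sketched as follows. This is an illustrative stand-in, not the authors' code: `simulate_error` is a hypothetical callback that runs the combustion model with a candidate mechanism and returns its deviation from the full mechanism for the important species.

```python
def sem_cm_select(important, candidate_sets, simulate_error, tol):
    """Greedy mechanism building in the spirit of SEM-CM (sketch).

    important      -- species that must be reproduced accurately
    candidate_sets -- strongly connected species sets (from the Jacobian)
    simulate_error -- hypothetical function mapping a species set to its
                      simulation error for the important species
    tol            -- required error threshold
    """
    mechanism = set(important)
    err = simulate_error(mechanism)
    remaining = [set(s) for s in candidate_sets]
    while err > tol and remaining:
        # try each candidate set; keep the one giving the smallest error
        trials = [(simulate_error(mechanism | s), s) for s in remaining]
        err, best = min(trials, key=lambda t: t[0])
        mechanism |= best
        remaining.remove(best)
    return mechanism, err
```

The loop terminates either when the error drops below the threshold or when all connected sets have been added, mirroring the stepwise mechanism growth described above.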
Research on the Method of Noise Error Estimation of Atomic Clocks
Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.
2017-05-01
The simulation methods of different noises of atomic clocks are given. The frequency flicker noise of atomic clock is studied by using the Markov process theory. The method for estimating the maximum interval error of the frequency white noise is studied by using the Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and the simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noises generated by the 9 cesium atomic clocks have been acquired.
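For white frequency noise, the clock phase error x(t) is a Wiener process, so its maximum interval error grows roughly like √τ. The following Monte Carlo sketch (illustrative parameters, not the paper's estimator) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_interval_error(sigma_y, dt, steps, trials):
    """Monte Carlo estimate of the mean maximum time-interval error of a
    clock whose phase x(t) is a Wiener process driven by white frequency
    noise of level sigma_y (sketch, not the paper's analytic estimator)."""
    # each row is one realization: x(t) = cumulative sum of y * dt
    y = rng.normal(0.0, sigma_y, size=(trials, steps))
    x = np.cumsum(y * dt, axis=1)
    return np.abs(x).max(axis=1).mean()

# for white FM noise the bound grows roughly like sqrt(tau):
short = max_interval_error(1e-13, 1.0, 1000, 200)
long_ = max_interval_error(1e-13, 1.0, 4000, 200)
```

Quadrupling the interval should roughly double the maximum interval error, which is the scaling a Wiener-process model predicts.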
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10⁻⁵) of all compared methods. PMID:28182717
A method to deal with installation errors of wearable accelerometers for human activity recognition
International Nuclear Information System (INIS)
Jiang, Ming; Wang, Zhelong; Shang, Hong; Li, Hongyi; Wang, Yuechao
2011-01-01
Human activity recognition (HAR) by using wearable accelerometers has gained significant interest in recent years in a range of healthcare areas, including inferring metabolic energy expenditure, predicting falls, measuring gait parameters and monitoring daily activities. The implementation of HAR relies heavily on the correctness of sensor fixation. The installation errors of wearable accelerometers may dramatically decrease the accuracy of HAR. In this paper, a method is proposed to improve the robustness of HAR to the installation errors of accelerometers. The method first calculates a transformation matrix by using Gram–Schmidt orthonormalization in order to eliminate the sensor's orientation error and then employs a low-pass filter with a cut-off frequency of 10 Hz to eliminate the main effect of the sensor's misplacement. The experimental results showed that the proposed method obtained a satisfactory performance for HAR. The average accuracy rate from ten subjects was 95.1% when there were no installation errors, and was 91.9% when installation errors were involved in wearable accelerometers
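The orientation-correction step can be sketched with numpy: two calibration vectors (hypothetical here, e.g. mean acceleration during standing and during walking) are orthonormalized by Gram–Schmidt to build a rotation back into the body frame. The paper's additional 10 Hz low-pass filter for misplacement is omitted for brevity.

```python
import numpy as np

def orientation_matrix(v1, v2):
    """Build an orthonormal transformation from two calibration vectors
    using Gram-Schmidt orthonormalization -- a sketch of the idea in the
    paper, with hypothetical calibration vectors."""
    e1 = v1 / np.linalg.norm(v1)
    u2 = v2 - np.dot(v2, e1) * e1      # remove the component along e1
    e2 = u2 / np.linalg.norm(u2)
    e3 = np.cross(e1, e2)              # complete a right-handed basis
    return np.vstack([e1, e2, e3])     # rows are the corrected axes

# rotation that maps mis-oriented sensor data back into the body frame
R = orientation_matrix(np.array([0.1, 0.2, 9.7]), np.array([1.0, 0.1, 0.3]))
```

Applying `R` to each acceleration sample removes the sensor's orientation error regardless of how the accelerometer was fixed.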
International Nuclear Information System (INIS)
Krini, Ossmane; Börcsök, Josef
2012-01-01
In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify the risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, no closed models or mathematical procedures are known that allow for a dependable prediction of software reliability. This work presents a method that predicts the residual number of critical errors in software. Conventional models lack this ability, and at present there are no methods that forecast critical errors. The new method shows that the residual number of critical errors in software systems can be estimated by combining prediction models, the ratio of critical errors, and the total error number. Subsequently, the critical expected-value function at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes the two essential processes: detection and correction.
Energy Technology Data Exchange (ETDEWEB)
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees in part of Probabilistic Safety Assessment (PSA). As methods for analyzing human error, several techniques, such as Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of the containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool to estimate the human error probability, and it can be applied to any kind of operator action, including the severe accident management strategy.
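The core of the time-based estimate above is the probability that the required time exceeds the available time. A minimal Monte Carlo sketch follows; the lognormal parameters are illustrative placeholders, not values from the paper, and plain random sampling stands in for LHS.

```python
import numpy as np

rng = np.random.default_rng(0)

def human_error_probability(required, available):
    """HEP estimated as P(time required > time available), given samples
    of both distributions (sketch of the dynamic HRA idea)."""
    return np.mean(required > available)

n = 100_000
required = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)   # minutes
available = rng.lognormal(mean=np.log(45.0), sigma=0.3, size=n)  # minutes
hep = human_error_probability(required, available)
```

With these placeholder distributions the non-completion probability comes out at a few percent; in practice both distributions would come from thermal-hydraulic simulation and operator timing data.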
Directory of Open Access Journals (Sweden)
H. O. Bakodah
2013-01-01
A method-of-lines approach to the numerical solution of nonlinear wave equations typified by the regularized long wave (RLW) equation is presented. The method uses a finite-difference discretization in space. The resulting system is solved by applying a fourth-order Runge–Kutta time discretization. Using von Neumann stability analysis, it is shown that the proposed method is marginally stable. To test the accuracy of the method, some numerical experiments on test problems are presented. Test problems including solitary wave motion, two-solitary-wave interaction, and the temporal evolution of a Maxwellian initial pulse are studied. The accuracy of the present method is assessed with the L2 and L∞ error norms and the conservation properties of mass, energy, and momentum under the RLW equation.
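The approach (finite differences in space, fourth-order Runge–Kutta in time) can be sketched for the RLW equation u_t + u_x + u·u_x − μ·u_xxt = 0 on a periodic domain; the grid, time step, and Gaussian initial profile below are illustrative choices, not the authors':

```python
import numpy as np

def solve_rlw(u0, dx, dt, steps, mu=1.0):
    """Method-of-lines solver for the RLW equation (sketch, not the
    authors' code): rewrite it as (I - mu*D2) u_t = -(u + u^2/2)_x,
    use periodic centered differences in space and classical RK4 in time."""
    n = len(u0)
    # periodic second-difference matrix and factored "mass" matrix
    D2 = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n)
          + np.roll(np.eye(n), -1, 0)) / dx**2
    Minv = np.linalg.inv(np.eye(n) - mu * D2)

    def f(u):
        flux = u + 0.5 * u**2
        dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
        return -Minv @ dflux

    u = u0.copy()
    for _ in range(steps):
        k1 = f(u); k2 = f(u + dt / 2 * k1)
        k3 = f(u + dt / 2 * k2); k4 = f(u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

x = np.linspace(0.0, 20.0, 128, endpoint=False)
u = solve_rlw(np.exp(-(x - 10.0)**2), x[1] - x[0], dt=0.05, steps=100)
```

Because the ones vector is an eigenvector of the mass matrix and the centered difference sums to zero, the discrete mass is conserved to roundoff, mirroring the conservation checks in the abstract.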
Kinetic equation solution by inverse kinetic method
International Nuclear Information System (INIS)
Salas, G.
1983-01-01
We propose a computer program (CAMU) which makes it possible to solve the inverse kinetic equation. The CAMU code is written in HPL for an HP 982 A microcomputer with an HP 9876 A ''thermal graphic printer'' peripheral interface. The CAMU code solves the inverse kinetic equation taking as input the output of the ionization chambers and integrating the equation using Simpson's rule. With this program we calculate the evolution of the reactivity in time for a given disturbance.
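The idea can be sketched with one-delayed-group inverse point kinetics, where the precursor convolution integral is evaluated by Simpson's rule. This is an illustrative reconstruction under assumed kinetics parameters; the abstract does not give CAMU's exact formulation.

```python
import numpy as np

def simpson(y, dx):
    """Composite Simpson rule; len(y) must be odd (even # of intervals)."""
    assert len(y) % 2 == 1
    return dx / 3.0 * (y[0] + y[-1]
                       + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())

def reactivity(t, n, beta=0.0065, lam=0.08, Lam=1e-4):
    """One-delayed-group inverse point kinetics (sketch): from
    dn/dt = ((rho - beta)/Lam) n + lam*C, rho = beta + Lam*(n' - lam*C)/n,
    with the precursor C obtained from its convolution solution.
    Returns rho at the final time of the power trace n(t)."""
    dt = t[1] - t[0]
    C0 = beta * n[0] / (Lam * lam)          # equilibrium precursors at t=0
    kernel = np.exp(-lam * (t[-1] - t))
    C = C0 * np.exp(-lam * t[-1]) + (beta / Lam) * simpson(n * kernel, dt)
    dndt = (n[-1] - n[-2]) / dt             # backward difference at t_end
    return beta + Lam * (dndt - lam * C) / n[-1]

# for a steady, critical reactor the recovered reactivity is ~0
rho0 = reactivity(np.linspace(0.0, 10.0, 1001), np.ones(1001))
```

A constant power trace is a convenient self-check: the precursor term exactly cancels β and the recovered reactivity vanishes.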
International Nuclear Information System (INIS)
Park, H.; De Oliveira, C. R. E.
2007-01-01
This paper describes the verification of the recently developed space-angle self-adaptive algorithm for the finite element-spherical harmonics method via the Method of Manufactured Solutions. This method provides a simple, yet robust way for verifying the theoretical properties of the adaptive algorithm and interfaces very well with the underlying second-order, even-parity transport formulation. Simple analytic solutions in both spatial and angular variables are manufactured to assess the theoretical performance of the a posteriori error estimates. The numerical results confirm reliability of the developed space-angle error indicators. (authors)
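The Method of Manufactured Solutions is easy to demonstrate on a simple 1D analogue of the transport setting: pick an analytic solution, derive the source term that forces it, and confirm that the discretization converges at its theoretical rate. The solver and problem below are an illustrative sketch, not the paper's FEM-PN code.

```python
import numpy as np

def solve_poisson(f, n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with second-order finite
    differences (dense tridiagonal system; fine for a verification test)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, 2.0))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

u_exact = lambda x: np.sin(np.pi * x)          # manufactured solution
f = lambda x: np.pi**2 * np.sin(np.pi * x)     # source forced to match it

errs = []
for n in (20, 40, 80):
    x, u = solve_poisson(f, n)
    errs.append(np.abs(u - u_exact(x)).max())
```

Halving the mesh width should cut the maximum error by roughly a factor of four, which is exactly the kind of theoretical property the manufactured solution lets one verify.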
Errors in accident data, its types, causes and methods of rectification-analysis of the literature.
Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri
2017-07-29
Most of the decisions taken to improve road safety are based on accident data, which makes it the back bone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. Its extent varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; assessment with regard to the magnitude for each type is provided; followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases the probability of having less error in accident data also increases. Average error in recording information related to the
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
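The ER phase-retrieval core can be sketched as alternating projections: enforce the (assumed known) Fourier magnitude in the frequency domain, then re-impose the known pixels in the image domain. In the paper the magnitude is estimated from similar known patches; here it is simply supplied as an input.

```python
import numpy as np

def er_inpaint(img, mask, magnitude, iters=200):
    """Error-reduction phase retrieval for missing-area estimation
    (sketch).  img has the missing pixels zeroed, mask is True at known
    pixels, magnitude is the estimated Fourier magnitude of the patch."""
    est = img * mask                    # start with the missing area zeroed
    for _ in range(iters):
        F = np.fft.fft2(est)
        F = magnitude * np.exp(1j * np.angle(F))  # keep phase, fix magnitude
        est = np.real(np.fft.ifft2(F))
        est = np.where(mask, img, est)  # restore the known intensities
    return est
```

With an accurate magnitude estimate, the iteration fills the masked pixels with values consistent with both the known pixels and the Fourier constraint.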
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data is one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive amounts of memory and time. This paper employs a new method that constructs a Kd-tree, searches it with the k-nearest-neighbour algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm helps to delete gross errors in point cloud data while decreasing memory consumption and improving efficiency.
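The thresholded k-nearest-neighbour test can be sketched as follows. Brute-force distances are used here for clarity and self-containment; the paper's point is precisely that a Kd-tree makes the same k-NN search scale to large clouds. The threshold rule (mean + factor·std of the k-NN distances) is one common choice, assumed here rather than taken from the paper.

```python
import numpy as np

def gross_error_filter(points, k=4, factor=2.0):
    """Flag gross errors: a point is an outlier if its mean distance to
    its k nearest neighbours exceeds mean + factor*std over the cloud.
    Returns a boolean keep-mask (True = inlier)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    knn_mean = d[:, 1:k + 1].mean(axis=1)   # column 0 is the self-distance
    thresh = knn_mean.mean() + factor * knn_mean.std()
    return knn_mean <= thresh
```

On a regular grid with one far-away point, only the far-away point is rejected.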
The behaviour of the local error in splitting methods applied to stiff problems
International Nuclear Information System (INIS)
Kozlov, Roman; Kvaernoe, Anne; Owren, Brynjulf
2004-01-01
Splitting methods are frequently used in solving stiff differential equations and it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes which are smaller than what one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local order theory may lead to unstable error behaviour and inefficient stepsize sequences. Here, the behaviour of the local error in the Strang and Godunov splitting methods is explained by using two different tools, Lie series and singular perturbation theory. The two approaches provide an understanding of the phenomena from different points of view, but both are consistent with what is observed in numerical experiments
Approximate solution fuzzy pantograph equation by using homotopy perturbation method
Jameel, A. F.; Saaban, A.; Ahadkulov, H.; Alipiah, F. M.
2017-09-01
In this paper, the Homotopy Perturbation Method (HPM) is modified and formulated to find the approximate solution of fuzzy delay differential equations (FDDEs) involving a fuzzy pantograph equation. The solution obtained by HPM is in the form of an infinite series that converges to the actual solution of the FDDE, which is one of the benefits of this method. In addition, it can be used for solving high-order fuzzy delay differential equations directly, without reduction to a first-order system. Moreover, the accuracy of HPM can be assessed without needing the exact solution. The HPM is studied for fuzzy initial value problems involving the pantograph equation. Using the properties of fuzzy set theory, we reformulate the standard approximate method of HPM and obtain the approximate solutions. The effectiveness of the proposed method is demonstrated for a third-order fuzzy pantograph equation.
Accurate and fast methods to estimate the population mutation rate from error prone sequences
Directory of Open Access Journals (Sweden)
Miyamoto Michael M
2009-08-01
Background: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results: This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent full maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion: In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
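The singleton-free Watterson-type estimator has a compact closed form. Under the neutral infinite-sites model E[S] = θ·aₙ and E[ξ₁] = θ (the expected number of singletons), so dropping singletons gives θ̂ = (S − ξ₁)/(aₙ − 1). The sketch below implements this form, which matches the estimator attributed to Achaz in the abstract.

```python
def watterson_no_singletons(num_seqs, num_segregating, num_singletons):
    """Watterson-type estimator of theta ignoring singletons: since
    E[S] = theta * a_n and E[xi_1] = theta under the neutral model,
    theta_hat = (S - xi_1) / (a_n - 1)."""
    a_n = sum(1.0 / i for i in range(1, num_seqs))   # harmonic number
    return (num_segregating - num_singletons) / (a_n - 1.0)
```

Sites where a base appears only once are simply excluded from the count, which is exactly how random sequencing errors are avoided in the strategy described above.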
Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown…
The boundary element method : errors and gridding for problems with hot spots
Kakuba, G.
2011-01-01
Adaptive gridding methods are of fundamental importance both for industry and academia. As one of the computing methods, the Boundary Element Method (BEM) is used to simulate problems whose fundamental solutions are available. The method is usually characterised as constant elements BEM or linear
Nonclassical pseudospectral method for the solution of brachistochrone problem
International Nuclear Information System (INIS)
Alipanah, A.; Razzaghi, M.; Dehghan, M.
2007-01-01
In this paper, a nonclassical pseudospectral method is proposed for solving the classic brachistochrone problem. The brachistochrone problem is first formulated as a nonlinear optimal control problem. Properties of the nonclassical pseudospectral method are presented; these properties are then utilized to reduce the computation of the brachistochrone problem to the solution of algebraic equations. Using this method, the solution to the brachistochrone problem is compared with those in the literature.
Wu, Zedong
2018-04-05
Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Actually, it represents the core computation cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing their numerical dispersion with the exact form. We find the optimal finite-difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
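The dispersion-matching idea can be illustrated in 1D: fit symmetric second-derivative stencil coefficients so that the stencil's Fourier symbol matches the exact symbol −k² over a band of wavenumbers, rather than only at k → 0 as Taylor expansion does. This least-squares sketch is a simple stand-in for the matching procedure named in the abstract, not the authors' scheme for TI media.

```python
import numpy as np

def optimal_fd_coeffs(half_width, kh_max, samples=200):
    """Least-squares fit of symmetric second-derivative FD coefficients
    so that the stencil symbol c0 + 2*sum_m c_m*cos(m*kh) (h = 1)
    matches the exact symbol -k^2 over kh in (0, kh_max]."""
    kh = np.linspace(1e-3, kh_max, samples)
    A = np.column_stack(
        [np.ones_like(kh)]
        + [2.0 * np.cos(m * kh) for m in range(1, half_width + 1)])
    c, *_ = np.linalg.lstsq(A, -kh**2, rcond=None)
    return c, np.abs(A @ c + kh**2).max()   # coefficients, max symbol error

c, err = optimal_fd_coeffs(3, 2.0)
```

Over the whole band the fitted 7-point stencil has a far smaller maximum dispersion error than the classical sixth-order Taylor coefficients (−49/18, 3/2, −3/20, 1/90), which are accurate only near kh = 0.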
International Nuclear Information System (INIS)
Zhang Wan-Zhen; Chen Zhe-Bo; Xia Bin-Feng; Lin Bin; Cao Xiang-Qun
2014-01-01
Digital structured light (SL) profilometry is increasingly used in three-dimensional (3D) measurement technology. However, the nonlinearity of the off-the-shelf projectors and cameras seriously reduces the measurement accuracy. In this paper, first, we review the nonlinear effects of the projector–camera system in the phase-shifting structured light depth measurement method. We show that high order harmonic wave components lead to phase error in the phase-shifting method. Then a practical method based on frequency domain filtering is proposed for nonlinear error reduction. By using this method, the nonlinear calibration of the SL system is not required. Moreover, both the nonlinear effects of the projector and the camera can be effectively reduced. The simulations and experiments have verified our nonlinear correction method.
Lugtig, Peter
2017-01-01
This paper proposes a method to simultaneously estimate both measurement and nonresponse errors for attitudinal and behavioural questions in a longitudinal survey. The method uses a Multi-Trait Multi-Method (MTMM) approach, which is commonly used to estimate the reliability and validity of survey
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi
2013-01-01
initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally…
The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.
Kaskowitz, Gary S.; De Ayala, R. J.
2001-01-01
Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…
Digital halftoning methods for selectively partitioning error into achromatic and chromatic channels
Mulligan, Jeffrey B.
1990-01-01
A method is described for reducing the visibility of artifacts arising in the display of quantized color images on CRT displays. The method is based on the differential spatial sensitivity of the human visual system to chromatic and achromatic modulations. Because the visual system has the highest spatial and temporal acuity for the luminance component of an image, a technique which will reduce luminance artifacts at the expense of introducing high-frequency chromatic errors is sought. A method based on controlling the correlations between the quantization errors in the individual phosphor images is explored. The luminance component is greatest when the phosphor errors are positively correlated, and is minimized when the phosphor errors are negatively correlated. The greatest effect of the correlation is obtained when the intensity quantization step sizes of the individual phosphors have equal luminances. For the ordered dither algorithm, a version of the method can be implemented by simply inverting the matrix of thresholds for one of the color components.
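For ordered dither, the negative-correlation variant described above amounts to running one color channel against an inverted threshold matrix. The sketch below uses a standard 4×4 Bayer matrix and inverts the thresholds for the third channel; treating RGB channels directly as the "phosphor images" is a simplifying assumption.

```python
import numpy as np

# 4x4 Bayer threshold matrix, normalized into (0, 1)
BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0

def dither(channel, thresholds):
    """Binary ordered dither of a float image in [0, 1]."""
    h, w = channel.shape
    t = np.tile(thresholds, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (channel > t).astype(float)

def dither_rgb_negcorr(img):
    """Ordered dither where one channel uses the inverted threshold
    matrix, so its quantization error is negatively correlated with the
    others and luminance error is pushed into chroma (sketch)."""
    out = np.empty_like(img)
    out[..., 0] = dither(img[..., 0], BAYER4)
    out[..., 1] = dither(img[..., 1], BAYER4)
    out[..., 2] = dither(img[..., 2], 1.0 - BAYER4)   # inverted thresholds
    return out
```

On a uniform mid-gray field, each channel still averages to the input level, but the quantization errors of the normal and inverted channels are perfectly anti-correlated pixel by pixel.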
Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method
Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.
2012-01-01
Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
The assessment of cognitive errors using an observer-rated method.
Drapeau, Martin
2014-01-01
Cognitive Errors (CEs) are a key construct in cognitive behavioral therapy (CBT). Integral to CBT is that individuals with depression process information in an overly negative or biased way, and that this bias is reflected in specific depressotypic CEs which are distinct from normal information processing. Despite the importance of this construct in CBT theory, practice, and research, few methods are available to researchers and clinicians to reliably identify CEs as they occur. In this paper, the author presents a rating system, the Cognitive Error Rating Scale, which can be used by trained observers to identify and assess the cognitive errors of patients or research participants in vivo, i.e., as they are used or reported by the patients or participants. The method is described, including some of the more important rating conventions to be considered when using the method. This paper also describes the 15 cognitive errors assessed, and the different summary scores, including valence of the CEs, that can be derived from the method.
Greedy solution of ill-posed problems: error bounds and exact inversion
International Nuclear Information System (INIS)
Denis, L; Lorenz, D A; Trede, D
2009-01-01
The orthogonal matching pursuit (OMP) is a greedy algorithm for solving sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of the OMP for the solution of ill-posed inverse problems in general, and in particular for two deconvolution examples from mass spectrometry and digital holography, respectively. In sparse approximation problems one often has to deal with redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems, since here the atoms are typically far from orthogonal: the ill-posedness of the operator may cause the correlation of two distinct atoms to become very large, i.e. two atoms can look much alike. Therefore, one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for the exact recovery of the support of noisy signals. In the two examples, mass spectrometry and digital holography, we show that our results lead to practically relevant estimates, so that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step towards calculating the resolution power of droplet holography.
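A bare-bones OMP, as described above, is short to state. The sketch below uses a generic random, well-conditioned dictionary — deliberately unlike the strongly correlated atoms of the ill-posed case the paper analyzes — just to show the greedy selection and re-fitting steps:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal matching pursuit: repeatedly pick the atom (column of A)
    most correlated with the residual, then refit y on the selected atoms."""
    residual = y.astype(float)
    support, x = [], np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Demo: recover a 3-sparse coefficient vector from a random unit-norm dictionary.
rng = np.random.default_rng(1)
A = rng.normal(size=(100, 40))
A /= np.linalg.norm(A, axis=0)       # unit-norm atoms
x_true = np.zeros(40)
x_true[[5, 17, 33]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
print(np.nonzero(x_hat)[0])
```

In the incoherent regime shown here the greedy selection finds the true support; the paper's point is precisely that this argument needs replacing when the operator makes distinct atoms nearly collinear.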
On numerical solution of Burgers' equation by homotopy analysis method
International Nuclear Information System (INIS)
Inc, Mustafa
2008-01-01
In this Letter, we present the Homotopy Analysis Method (HAM) for obtaining the numerical solution of the one-dimensional nonlinear Burgers' equation. The initial approximation can be freely chosen with possible unknown constants, which can be determined by imposing the boundary and initial conditions. Convergence of the solution and the effectiveness of the method are discussed. The HAM results are compared with those of the Homotopy Perturbation Method (HPM) and with the results of [E.N. Aksan, Appl. Math. Comput. 174 (2006) 884; S. Kutluay, A. Esen, Int. J. Comput. Math. 81 (2004) 1433; S. Abbasbandy, M.T. Darvishi, Appl. Math. Comput. 163 (2005) 1265]. The results reveal that HAM is very simple and effective. The HAM contains the auxiliary parameter h, which provides a simple way to adjust and control the convergence region of the solution series. The numerical solutions are compared with known analytical and some numerical solutions.
International Nuclear Information System (INIS)
Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.
1997-01-01
Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as
A new solution method for wheel/rail rolling contact.
Yang, Jian; Song, Hua; Fu, Lihua; Wang, Meng; Li, Wei
2016-01-01
To solve the problem of wheel/rail rolling contact in nonlinear steady-state curving, a three-dimensional transient finite element (FE) model is developed in the explicit software ANSYS/LS-DYNA. To improve solving speed and efficiency, an explicit-explicit order solution method is put forward based on an analysis of the features of the implicit and explicit algorithms. In this method, the explicit algorithm is first applied to calculate the pre-loading of wheel/rail rolling contact, and the results then serve as the initial conditions for solving the dynamic process of wheel/rail rolling contact, also with the explicit algorithm. Simultaneously, the common implicit-explicit order solution method is used to solve the FE model. Results show that the explicit-explicit order solution method has faster operation speed and higher efficiency than the implicit-explicit order solution method, while the solution accuracy is almost the same. Hence, the explicit-explicit order solution method is more suitable for wheel/rail rolling contact models of large scale and high nonlinearity.
A human error taxonomy and its application to an automatic method of accident analysis
International Nuclear Information System (INIS)
Matthews, R.H.; Winter, P.W.
1983-01-01
Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the data base and a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, each level being treated differently drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)
Czech Academy of Sciences Publication Activity Database
Strakoš, Zdeněk; Tichý, Petr
2002-01-01
Roč. 13, - (2002), s. 56-80 ISSN 1068-9613 R&D Projects: GA ČR GA201/02/0595 Institutional research plan: AV0Z1030915 Keywords : conjugate gradient method * Gauss quadrature * evaluation of convergence * error bounds * finite precision arithmetic * rounding errors * loss of orthogonality Subject RIV: BA - General Mathematics Impact factor: 0.565, year: 2002 http://etna.mcs.kent.edu/volumes/2001-2010/vol13/abstract.php?vol=13&pages=56-80
A review of some a posteriori error estimates for adaptive finite element methods
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2010-01-01
Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230
Defining collaborative business rules management solutions : framework and method
dr. Martijn Zoet; Johan Versendaal
2014-01-01
From the publishers' website: The goal of this research is to define a method for configuring a collaborative business rules management solution from a value proposition perspective. In an earlier published study (Business rules management solutions: added value by means of business
Reduction in the ionospheric error for a single-frequency GPS timing solution using tomography
Directory of Open Access Journals (Sweden)
Cathryn N. Mitchell
2009-06-01
Single-frequency Global Positioning System (GPS) receivers do not accurately compensate for the ionospheric delay imposed upon a GPS signal. They rely upon models to compensate for the ionosphere. This delay compensation can be improved by measuring it directly with a dual-frequency receiver, or by monitoring the ionosphere using real-time maps. This investigation uses a 4D tomographic algorithm, the Multi Instrument Data Analysis System (MIDAS), to correct for the ionospheric delay and compares the results to existing single- and dual-frequency techniques. Maps of the ionospheric electron density across Europe are produced using data collected from a fixed network of dual-frequency GPS receivers. Single-frequency pseudorange observations are corrected by using the maps to find the excess propagation delay on the GPS L1 signals. Days during the solar maximum year 2002 and the October 2003 storm have been chosen to display results when the ionospheric delays are large and variable. Results that improve upon the use of existing ionospheric models are achieved by applying MIDAS to fixed and mobile single-frequency GPS timing solutions. The approach offers the potential for corrections to be broadcast over a local region, or provided via the internet, and allows timing accuracies to within 10 ns to be achieved.
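The scale of the correction involved follows from the standard first-order ionospheric group delay formula, d = 40.3·TEC/f² (d in metres, TEC in electrons/m², f in Hz); a small sketch:

```python
# First-order ionospheric group delay: d = 40.3 * TEC / f**2, with d in metres,
# TEC in electrons/m^2 and f in Hz. 1 TECU = 1e16 electrons/m^2.
C = 299_792_458.0          # speed of light, m/s
F_L1 = 1_575.42e6          # GPS L1 carrier frequency, Hz

def iono_delay_m(tec_tecu: float, f_hz: float = F_L1) -> float:
    """Excess group delay in metres for a given total electron content."""
    return 40.3 * (tec_tecu * 1e16) / f_hz**2

def iono_delay_ns(tec_tecu: float, f_hz: float = F_L1) -> float:
    """The same delay expressed as a timing error in nanoseconds."""
    return iono_delay_m(tec_tecu, f_hz) / C * 1e9

# A disturbed-ionosphere slant TEC of 100 TECU (an assumed storm-time value)
# corresponds to tens of metres of range error on L1:
print(iono_delay_m(100.0))   # ~16.2 m
print(iono_delay_ns(100.0))  # ~54 ns
```

Uncorrected delays of tens of nanoseconds are why a tomographic TEC map can bring single-frequency timing down toward the 10 ns level quoted in the abstract.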
Periodic boundary conditions and the error-controlled fast multipole method
Energy Technology Data Exchange (ETDEWEB)
Kabadshow, Ivo
2012-08-22
The simulation of pairwise interactions in huge particle ensembles is a vital issue in scientific research. The calculation of long-range interactions in particular poses limitations on the system size, since the cost of evaluating these interactions scales quadratically with the number of particles. Fast summation techniques like the Fast Multipole Method (FMM) can help to reduce the complexity to O(N). This work extends the possible range of applications of the FMM to periodic systems in one, two and three dimensions with one unique approach. Together with a tight error control, this contribution enables the simulation of periodic particle systems for different applications without the need to know and tune the FMM-specific parameters. The implemented error control scheme automatically optimizes the parameters to obtain an approximation of minimal runtime for a given energy error bound.
Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers
Directory of Open Access Journals (Sweden)
Zheng You
2013-04-01
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Until now, research in this field has lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Optical system error analysis and calibration method of high-accuracy star trackers.
Sun, Ting; Xing, Fei; You, Zheng
2013-04-08
The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Until now, research in this field has lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
Directory of Open Access Journals (Sweden)
Shoubin Wang
2017-01-01
The compound-variable inverse problem, which comprises the boundary temperature distribution and the surface convective heat transfer coefficient of a two-dimensional steady heat transfer system with an inner heat source, is studied in this paper using the conjugate gradient method. Introducing a complex variable to evaluate the gradient matrix of the objective function yields more precise inversion results. The boundary element method is applied to calculate the temperatures at discrete points in the forward problem. The influence of measurement error and of the number of measurement points on the inversion result is discussed and compared with the L-MM method. Example calculations and analysis show that the method retains good effectiveness and accuracy even when measurement error is present and the number of boundary measurement points is reduced. The comparison indicates that the influence of error on the inversion solution can be minimized effectively using this method.
International Nuclear Information System (INIS)
Yamamoto, Akio; Tatsumi, Masahiro
2006-01-01
In this paper, the scattered source subtraction (SSS) method is newly proposed to reduce the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. Validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly reduce the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
A heteroscedastic measurement error model for method comparison data with replicate measurements.
Nawarathna, Lakshika S; Choudhary, Pankaj K
2015-03-30
Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.
A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors
Directory of Open Access Journals (Sweden)
Shuang Wang
2015-12-01
In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To overcome this difficulty, a modified two-step calibration method for the new error model, based on the Extended Kalman Filter (EKF) and Least Square Methods (LSM), is presented. The former is used to calibrate the principal point drift, focal length error and distortions of optical systems, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point position influenced by the above errors is greatly improved, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved noticeably.
New numerical method for solving the solute transport equation
International Nuclear Information System (INIS)
Ross, B.; Koplik, C.M.
1978-01-01
The solute transport equation can be solved numerically by approximating the water flow field by a network of stream tubes and using a Green's function solution within each stream tube. Compared to previous methods, this approach permits greater computational efficiency and easier representation of small discontinuities, and the results are easier to interpret physically. The method has been used to study hypothetical sites for disposal of high-level radioactive waste
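The Green's function used within a single stream tube is, in the simplest 1-D case, the standard advection-dispersion kernel. A sketch of that kernel (an illustration, not the authors' stream-tube network code):

```python
import math

def greens_1d(x, t, v, D, mass=1.0):
    """Concentration at (x, t) from an instantaneous point release of the
    given mass at x = 0, t = 0 in a 1-D stream tube with pore velocity v
    and dispersion coefficient D:
        C(x, t) = M / sqrt(4*pi*D*t) * exp(-(x - v*t)**2 / (4*D*t))
    """
    if t <= 0:
        return 0.0
    return mass / math.sqrt(4 * math.pi * D * t) * math.exp(-(x - v * t) ** 2 / (4 * D * t))

# The plume centre moves with the water velocity and spreads as sqrt(2*D*t):
v, D = 1.0, 0.1
peak = greens_1d(v * 10.0, 10.0, v, D)       # concentration at the moving centre
off = greens_1d(v * 10.0 + 2.0, 10.0, v, D)  # two units downstream of the centre
print(peak, off)
```

Superposing this kernel over release times and stream tubes is what lets the approach represent small discontinuities cheaply compared with a full grid discretization.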
Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method
International Nuclear Information System (INIS)
Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A
2009-01-01
A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the results obtained using this method with the exact ones reveals that this modified method is very effective and convenient.
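The exact pendulum period that such approximations are benchmarked against has a closed form via the complete elliptic integral, T = (4/ω₀)K(k) with k = sin(A/2). A sketch using SciPy's parameter convention K(m) with m = k²:

```python
import math
from scipy.special import ellipk   # complete elliptic integral K(m), m = k**2

def exact_period(amplitude_rad, omega0=1.0):
    """Exact period of the simple pendulum: T = (4/omega0) * K(sin(A/2)**2)
    in scipy's parameter convention m = k**2."""
    m = math.sin(amplitude_rad / 2.0) ** 2
    return 4.0 / omega0 * ellipk(m)

T0 = 2.0 * math.pi                 # small-amplitude period for omega0 = 1
for deg in (10, 90, 150):
    A = math.radians(deg)
    print(deg, exact_period(A) / T0)
# The ratio grows from ~1.002 at 10 deg to ~1.18 at 90 deg and beyond,
# which is why small-angle formulas fail at large amplitude.
```

Any approximate analytical period, like the truncated homotopy series above, can be checked against this reference to quote a relative error as a function of amplitude.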
Born approximation to a perturbative numerical method for the solution of the Schrödinger equation
International Nuclear Information System (INIS)
Adam, Gh.
1978-05-01
A perturbative numerical (PN) method is given for the solution of a regular one-dimensional Cauchy problem arising from the Schrödinger equation. The method uses a step-function approximation for the potential. Global forward and backward PN algorithms, free of scaling difficulties, are derived within first-order perturbation theory (Born approximation). A rigorous analysis of the local truncation errors is performed, showing that the order of accuracy of the method is four. In between the mesh points, the global formula for the wavefunction is accurate within O(h⁴), while that for the first-order derivative is accurate within O(h³). (author)
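The step-function approximation of the potential can be illustrated with the exact constant-potential propagator on each mesh interval: within a step the Schrödinger equation has a closed-form solution, so (ψ, ψ′) can be marched across the mesh. This is a simplified flavor of the approach, not the author's first-order perturbative algorithm:

```python
import numpy as np

def step_propagator(E, V, h):
    """Exact 2x2 transfer matrix for (psi, psi') across one mesh step of
    width h on which the potential is the constant V (units hbar = m = 1)."""
    k2 = 2.0 * (E - V)
    if k2 > 0:                       # classically allowed: oscillatory solution
        k = np.sqrt(k2)
        return np.array([[np.cos(k * h),     np.sin(k * h) / k],
                         [-k * np.sin(k * h), np.cos(k * h)]])
    if k2 < 0:                       # classically forbidden: exponential solution
        q = np.sqrt(-k2)
        return np.array([[np.cosh(q * h),    np.sinh(q * h) / q],
                         [q * np.sinh(q * h), np.cosh(q * h)]])
    return np.array([[1.0, h], [0.0, 1.0]])   # E == V: free linear solution

def propagate(E, V_of_x, x0, x1, n, psi0, dpsi0):
    """March (psi, psi') from x0 to x1 over n steps, holding V constant
    (sampled at the midpoint) inside each step."""
    h = (x1 - x0) / n
    state = np.array([psi0, dpsi0])
    for i in range(n):
        xm = x0 + (i + 0.5) * h
        state = step_propagator(E, V_of_x(xm), h) @ state
    return state

# Check against a free particle: with V = 0 and E = 0.5 (so k = 1), the
# solution starting from psi(0) = 0, psi'(0) = 1 is exactly sin(x).
psi, dpsi = propagate(0.5, lambda x: 0.0, 0.0, np.pi / 2, 200, 0.0, 1.0)
print(psi, dpsi)   # ~1.0 and ~0.0
```

Because the propagator is exact on each constant-potential step, the only error is the step-function approximation of V itself, which is the quantity the cited truncation-error analysis bounds.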
Method for Cs-137 separation from the decontamination solutions
International Nuclear Information System (INIS)
Toropov, I.G.; Efremenkov, V.M.; Toropova, V.V.; Satsukevich, V.M.; Davidov, Yu.P.
1995-01-01
In this work, results of investigations are presented on the separation of radiocaesium from decontamination solutions containing reducing agents (thiocarbamide). The scientific basis for radiocaesium removal from the solution focuses on the state of the radionuclide and its sorption behavior in a solution of complicated composition. Using a combination of sorption and ultrafiltration methods, it is then possible to concentrate the radionuclide in a small volume and to purify the main part of the solution. As a sorbent for radiocaesium removal, a ferrocyanide-based sorbent is proposed; its use is justified by its well-known high selectivity and effectiveness for radiocaesium sorption from solutions of different compositions. When synthesis of the sorbent is performed directly in the solution being treated, at least two components must be added to it, namely K4Fe(CN)6 and ions of metals such as Ni(II), Co(II) or Cu(II). Results are presented which show the possibility of radiocaesium separation from decontamination solutions (containing 60--100 g/l of salts) using sorption and membrane separation methods without the use of metal salts. By using Fe(II) in solution in the presence of cyanide ions and thiocarbamide, it is possible to avoid the addition of salts of other metals (Ni, Cu, etc.). Utilization of the proposed method for spent decontamination solution treatment makes it relatively easy to reduce the concentration of radiocaesium in solution by 2--4 orders of magnitude, and to exclude the use of relatively expensive metal salts.
Energy Technology Data Exchange (ETDEWEB)
Reer, B.; Dang, V.N.; Hirschberg, S. [Paul Scherrer Inst., Nuclear Energy and Safety Research Dept., CH-5232 Villigen PSI (Switzerland); Straeter, O. [Gesellschaft fur Anlagen- und Reaktorsicherheit (Germany)
1999-12-01
identify EOCs. The following elements for error search may be distinguished: task, action, system failure, and scenario. All of the methods use at least three of these elements. The review of the methods suggests that there is space for and a need for integrating them. In the area of identifying potential EOCs, for instance, it may be desirable to combine the deductive search for EOCs as additional contributors to hardware failure events (as is done in ATHEANA and the Borssele method) with a search centred on the range of safety actions considered in procedures and training (as proposed in CODA). In combining these search strategies, a key constraint is to maintain the required effort at an acceptable level. Development is also needed to address the quantification problem. In contexts that 'force the error', for instance, contexts in which the plant cues potentially motivate inappropriate actions, the decision error has a high probability. In these contexts, the problem reduces to the quantification of the probability of the context, which can be based on engineering evaluation of the associated scenario. On the other hand, quantifying the probability of decision errors remains a problem in other cases. The CAHR methodology suggests a solution by basing the probability on a relative error frequency (how often similar errors appear in a database of events); efforts are being made to validate this procedure. In the longer term, dynamic, simulation-based PSA tools may provide a means to manage the range of new scenarios introduced when EOCs are comprehensively treated in the PSA. Finally, the report discusses the state of dynamic methods and how, in the meantime, dynamic simulations that treat the interdependent plant and operator responses can support the analysis of EOCs. (author)
International Nuclear Information System (INIS)
Reer, B.; Dang, V.N.; Hirschberg, S.; Straeter, O.
1999-12-01
identify EOCs. The following elements for error search may be distinguished: task, action, system failure, and scenario. All of the methods use at least three of these elements. The review of the methods suggests that there is space for and a need for integrating them. In the area of identifying potential EOCs, for instance, it may be desirable to combine the deductive search for EOCs as additional contributors to hardware failure events (as is done in ATHEANA and the Borssele method) with a search centred on the range of safety actions considered in procedures and training (as proposed in CODA). In combining these search strategies, a key constraint is to maintain the required effort at an acceptable level. Development is also needed to address the quantification problem. In contexts that 'force the error', for instance, contexts in which the plant cues potentially motivate inappropriate actions, the decision error has a high probability. In these contexts, the problem reduces to the quantification of the probability of the context, which can be based on engineering evaluation of the associated scenario. On the other hand, quantifying the probability of decision errors remains a problem in other cases. The CAHR methodology suggests a solution by basing the probability on a relative error frequency (how often similar errors appear in a database of events); efforts are being made to validate this procedure. In the longer term, dynamic, simulation-based PSA tools may provide a means to manage the range of new scenarios introduced when EOCs are comprehensively treated in the PSA. Finally, the report discusses the state of dynamic methods and how, in the meantime, dynamic simulations that treat the interdependent plant and operator responses can support the analysis of EOCs. (author)
Chen, Zhe; Qiu, Zurong; Huo, Xinming; Fan, Yuming; Li, Xinghua
2017-03-01
A fiber-capacitive drop analyzer is an instrument which monitors a growing droplet to produce a capacitive opto-tensiotrace (COT). Each COT is an integration of fiber light intensity signals and capacitance signals and can reflect the unique physicochemical properties of a liquid. In this study, we propose a method for solution identification and concentration quantitation based on multivariate statistical methods. Eight characteristic values are extracted from each COT. A series of COT characteristic values of training solutions at different concentrations composes a data library for that kind of solution. A two-stage linear discriminant analysis is applied to analyze the different solution libraries and establish discriminant functions, by which test solutions can be discriminated. After the variety of a test solution is determined, the Spearman correlation test and principal components analysis are used to filter and reduce the dimensions of the eight characteristic values, producing a new representative parameter. A cubic spline interpolation function is built between the parameters and concentrations, from which the concentration of the test solution can be calculated. Methanol, ethanol, n-propanol, and saline solutions are taken as experimental subjects in this paper. For each solution, nine or ten different concentrations are chosen as the standard library, and the other two concentrations compose the test group. Using the methods described above, all eight test solutions are correctly identified, and the average relative error of the quantitative analysis is 1.11%. The proposed method is feasible; it enlarges the applicable scope of liquid recognition based on the COT and improves the precision of concentration quantitation as well.
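The final calibration step — a cubic spline between the representative parameter and concentration — is easy to sketch. The calibration numbers below are invented for illustration, and only the spline step of the pipeline is shown, not the LDA/PCA stages:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical calibration data: one representative COT parameter per known
# concentration of a training solution (values are illustrative, not measured).
conc = np.array([0., 5., 10., 20., 30., 40., 50., 60., 80., 100.])  # % v/v
param = np.array([1.000, 0.965, 0.930, 0.868, 0.815, 0.770,
                  0.731, 0.698, 0.645, 0.607])                      # monotone

# Build the spline from parameter to concentration, as in the calibration step
# (CubicSpline requires ascending x, so the monotone data are reversed):
to_conc = CubicSpline(param[::-1], conc[::-1])

measured = 0.79                    # representative parameter of a test solution
print(float(to_conc(measured)))    # interpolated concentration, between 30 and 40
```

With a monotone parameter-concentration relationship, the spline gives a smooth inverse mapping, so a single measured parameter yields a concentration estimate directly.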
Exact solution of some linear matrix equations using algebraic methods
Djaferis, T. E.; Mitter, S. K.
1977-01-01
Solution methods for linear matrix equations, including Lyapunov's equation, are studied using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution to the problem. The action f_BA is introduced and a basic lemma is proven. The equation PA + BP = -C as well as the Lyapunov equation are analyzed. Algorithms are given for the solution of the Lyapunov equation, with comments on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
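For the Lyapunov equation itself, one finite algebraic procedure of the kind referred to above is the Kronecker-product (vec) reduction to a single linear system; this is a generic textbook route, not necessarily the authors' algorithm:

```python
import numpy as np

def lyap(A, Q):
    """Solve A'P + PA = -Q by the Kronecker/vec reduction. With the row-major
    vec used by numpy's flatten, vec(A'P) = kron(A', I) vec(P) and
    vec(PA) = kron(I, A') vec(P), so the equation becomes one linear solve."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A.T, I) + np.kron(I, A.T)
    p = np.linalg.solve(M, -Q.flatten())
    return p.reshape(n, n)

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])   # stable matrix, so the solution is unique
Q = np.eye(2)
P = lyap(A, Q)
print(P)
```

The n²-by-n² solve makes the arithmetic complexity concern in the abstract concrete: the direct Kronecker route costs O(n⁶), which is what specialized algorithms improve upon.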
Rapid spectrographic method for determining microcomponents in solutions
International Nuclear Information System (INIS)
Karpenko, L.I.; Fadeeva, L.A.; Gordeeva, A.N.; Ermakova, N.V.
1984-01-01
A rapid spectrographic method for determining microcomponents (Cd, V, Mo, Ni, rare earths and other elements) in industrial and natural solutions has been developed. The analyses were conducted in an argon atmosphere and in air. Calibration charts for determining individual rare earths in solutions are presented. The accuracy of the analysis is characterized by the relative standard deviation (Sr); the detection limit was 10⁻³-10⁻⁴ mg/ml, and that for rare earths 1·10⁻² mg/ml. The developed method enables rapid analysis of solutions (sewage and industrial waters, wine products) for 20 elements, including 6 rare earths, using standard equipment.
Suppressing carrier removal error in the Fourier transform method for interferogram analysis
International Nuclear Information System (INIS)
Fan, Qi; Yang, Hongru; Li, Gaoping; Zhao, Jianlin
2010-01-01
A new carrier removal method for interferogram analysis using the Fourier transform is presented. The proposed method can be used to suppress the carrier removal error as well as the spectral leakage error. First, the carrier frequencies are estimated from the spectral centroid of the up sidelobe of the apodized interferogram; then the up sidelobe can be shifted to the origin in the frequency domain by multiplying the original interferogram by a constructed plane reference wave. The influence of carrier frequencies that are not an integer multiple of the frequency interval, and of the window function used for apodization of the interferogram, can thus be avoided. The simulation and experimental results show that this method is effective for phase measurement with high accuracy from a single interferogram.
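A minimal 1-D analogue of the described procedure: estimate the carrier from the spectral centroid of the apodized sidelobe, then remove it with a constructed plane reference wave. The signal parameters are illustrative assumptions, not the paper's data:

```python
import numpy as np

# 1-D interferogram whose carrier is NOT an integer multiple of the
# frequency interval 1/N (parameters are illustrative).
N = 512
x = np.arange(N)
f0 = 0.1003
phase = 0.5 * np.sin(2 * np.pi * x / N)        # slowly varying test phase
g = 1.0 + 0.8 * np.cos(2 * np.pi * f0 * x + phase)

window = np.hanning(N)                         # apodization against leakage
G = np.fft.rfft((g - g.mean()) * window)
f = np.fft.rfftfreq(N)

# Carrier estimate: spectral centroid of the positive-frequency sidelobe.
side = f > 0.02                                # exclude the zero-order lobe
mag = np.abs(G[side])
f_est = np.sum(f[side] * mag) / np.sum(mag)

# The constructed plane reference wave shifts the sidelobe to the origin;
# a low-pass filter on `demod` would then isolate the phase term.
demod = g * np.exp(-2j * np.pi * f_est * x)
```

Because the windowed sidelobe is nearly symmetric about the carrier, the centroid lands very close to f0 even though f0 falls between frequency bins.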
Errors of the backextrapolation method in determination of the blood volume
Schröder, T.; Rösler, U.; Frerichs, I.; Hahn, G.; Ennker, J.; Hellige, G.
1999-01-01
Backextrapolation is an empirical method to calculate the central volume of distribution (for example, the blood volume). It is based on the compartment model, which assumes that after an injection the substance is distributed instantaneously in the central volume with no time delay; the occurrence of recirculation is not taken into account. The change of concentration with time of indocyanine green (ICG) was observed in an in vitro model in which the volume recirculated in 60 s and the clearance of the ICG could be varied. It was found that the higher the elimination of ICG, the higher the error of the backextrapolation method. The theoretical consideration of Schröder et al (Biomed. Tech. 42 (1997) 7-11) was confirmed. If the injected substance is eliminated somewhere in the body (i.e. not by radioactive decay), the backextrapolation method produces large errors.
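The backextrapolation idea itself can be sketched as follows: fit the mono-exponential decay observed after mixing, extrapolate back to the injection time, and divide the dose by the extrapolated concentration. The numbers are synthetic; in this noise-free, purely mono-exponential case the true volume is recovered exactly, whereas the error discussed above arises because elimination already proceeds while mixing is still incomplete:

```python
import numpy as np

# Synthetic indicator-dilution data: mono-exponential decay after mixing.
dose = 25.0                       # mg of ICG injected (illustrative)
volume_true = 5.0                 # L, central volume to be recovered
k = 0.05                          # 1/s, elimination rate constant
t = np.linspace(60.0, 180.0, 25)  # samples taken after mixing is complete
c = (dose / volume_true) * np.exp(-k * t)

# Log-linear least-squares fit, then extrapolation back to t = 0.
slope, intercept = np.polyfit(t, np.log(c), 1)
c0 = np.exp(intercept)            # backextrapolated concentration at t = 0
volume_est = dose / c0
```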
A novel method to correct for pitch and yaw patient setup errors in helical tomotherapy
International Nuclear Information System (INIS)
Boswell, Sarah A.; Jeraj, Robert; Ruchala, Kenneth J.; Olivera, Gustavo H.; Jaradat, Hazim A.; James, Joshua A.; Gutierrez, Alonso; Pearson, Dave; Frank, Gary; Mackie, T. Rock
2005-01-01
An accurate means of determining and correcting for daily patient setup errors is important to the treatment outcome in radiotherapy. While many tools have been developed to detect setup errors, difficulty may arise in accurately adjusting the patient to account for the rotational error components. A novel, automated method to correct for rotational patient setup errors in helical tomotherapy is proposed for a treatment couch that is restricted to motion along translational axes. In tomotherapy, only a narrow superior/inferior section of the target receives a dose at any instant; thus rotations in the sagittal and coronal planes may be approximately corrected for by very slow continuous couch motion in a direction perpendicular to the scanning direction. Results from proof-of-principle tests indicate that the method improves the accuracy of treatment delivery, especially for long and narrow targets. Rotational corrections about an axis perpendicular to the transverse plane continue to be implemented easily in tomotherapy by adjustment of the initial gantry angle.
Error analysis of motion correction method for laser scanning of moving objects
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Only a few methods capable of handling object motion during scanning have been reported in the literature, each utilizing its own models or sensors, and studies on error modelling or analysis of these motion correction methods are lacking. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked in the sea, and in scanning objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behaviour and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
An investigation of calibration methods for solution calorimetry.
Yff, Barbara T S; Royall, Paul G; Brown, Marc B; Martin, Gary P
2004-01-28
Solution calorimetry has been used in a number of varying applications within pharmaceutical research as a technique for the physical characterisation of pharmaceutical materials, such as quantifying small degrees of amorphous content, identifying polymorphs and investigating interactions between drugs and carbohydrates or proteins and carbohydrates. A calibration test procedure is necessary to validate the instrumentation; a few of the suggested calibration reactions are the enthalpies of solution associated with dissolving Tris in 0.1 M HCl or NaCl, KCl or propan-1-ol in water. In addition, there are a number of different methods available to determine enthalpies of solution from the experimental data provided by the calorimeter, for example, the Regnault-Pfaundler's method, a graphical extrapolation based on the Dickinson method, or a manual integration-based method. Thus, the aim of the study was to investigate how each of these methods influences the values for the enthalpy of solution. Experiments were performed according to the method outlined by Hogan and Buckton [Int. J. Pharm. 207 (2000) 57] using KCl (samples of 50, 100 and 200 mg), Tris and sucrose as calibrants. For all three materials the manual integration method was found to be the most consistent with the KCl in water (sample mass of 200 mg) being the most precise. Thus, this method is recommended for the validation of solution calorimeters.
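A manual integration-based determination of the kind compared above can be sketched with a synthetic heat-flow trace: fit a baseline to the quiet pre- and post-dissolution regions, subtract it, and integrate the remaining peak. All names and values are illustrative, not calorimeter data:

```python
import numpy as np

# Synthetic calorimeter trace: drifting baseline plus a dissolution peak.
t = np.linspace(0.0, 600.0, 601)                       # s
baseline_true = 0.02 + 1e-5 * t                        # mW, slow drift
peak = 1.5 * np.exp(-0.5 * ((t - 300.0) / 30.0) ** 2)  # mW, dissolution event
signal = baseline_true + peak

# Fit the baseline to the quiet pre- and post-dissolution regions only.
quiet = (t < 150.0) | (t > 450.0)
coeff = np.polyfit(t[quiet], signal[quiet], 1)
corrected = signal - np.polyval(coeff, t)

# Integrate the baseline-corrected peak (mW x s = mJ).
heat_mJ = np.sum(corrected) * (t[1] - t[0])
```

The known area of the synthetic Gaussian peak (1.5 mW x 30 s x sqrt(2*pi)) lets the integration be checked directly.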
Newton-like methods for Navier-Stokes solution
Qin, N.; Xu, X.; Richards, B. E.
1992-12-01
The paper reports on Newton-like methods called SFDN-alpha-GMRES and SQN-alpha-GMRES methods that have been devised and proven as powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system respectively to solve a hypersonic vortical flow.
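The Jacobian-free Newton-Krylov idea underlying such schemes can be illustrated with SciPy's `newton_krylov`, here applied to a simple 1-D nonlinear diffusion stand-in rather than the compressible Navier-Stokes system of the paper; GMRES solves each Newton linear system matrix-free, using only residual evaluations:

```python
import numpy as np
from scipy.optimize import newton_krylov

# 1-D nonlinear diffusion stand-in: -u'' + u**3 = 1, u(0) = u(1) = 0.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([0.0], u, [0.0]))      # Dirichlet boundaries
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return -lap + u**3 - 1.0

# The zero iterate plays the role of the "partially converged" starting
# solution mentioned in the abstract.
u = newton_krylov(residual, np.zeros(n), method="gmres", f_tol=1e-9)
```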
Properties and solution methods for large location-allocation problems
DEFF Research Database (Denmark)
Juel, Henrik; Love, Robert F.
1982-01-01
Location-allocation with l$ _p$ distances is studied. It is shown that this structure can be expressed as a concave minimization programming problem. Since concave minimization algorithms are not yet well developed, five solution methods are developed which utilize the special properties of the location-allocation problem. Using the rectilinear distance measure, two of these algorithms achieved optimal solutions in all 102 test problems for which solutions were known. The algorithms can be applied to much larger problems than any existing exact methods.
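For the rectilinear case mentioned above, a classic alternate heuristic exploits the special property that the single-facility l1 optimum is the coordinate-wise median. This sketch (with synthetic demand points; it is a generic heuristic, not one of the paper's five algorithms) alternates assignment and median relocation:

```python
import numpy as np

# Alternate heuristic: assign each demand point to its nearest facility,
# then relocate every facility to the coordinate-wise median of its
# assigned points (the exact single-facility l1 optimum).
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal((0.0, 0.0), 0.5, (20, 2)),
                 rng.normal((5.0, 5.0), 0.5, (20, 2))])
fac = np.vstack([pts[0], pts[-1]])          # crude initial facility sites

for _ in range(50):
    dist = np.abs(pts[:, None, :] - fac[None, :, :]).sum(axis=2)  # l1
    assign = dist.argmin(axis=1)
    new_fac = np.array([np.median(pts[assign == j], axis=0)
                        for j in range(len(fac))])
    if np.allclose(new_fac, fac):           # sites have stabilized
        break
    fac = new_fac

total_cost = np.abs(pts - fac[assign]).sum()
```

Like all alternate heuristics for this concave problem, it converges only to a local optimum, which is why the exact properties studied in the paper matter.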
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes brought about by orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating 3-dimensional changes after treatment became possible by superimposition. 4-point plane orientation is one of the simplest ways to achieve superimposition of 3-dimensional images. To find the factors influencing the superimposition error of cephalometric landmarks in the 4-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoidal bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. The 4-point plane orientation system may produce a reorientation error that varies with the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of the reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.
Directory of Open Access Journals (Sweden)
Emma Wells
To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to the reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values of 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration.
Solidification method for organic solution and processing method of aqueous solution
International Nuclear Information System (INIS)
Kamoshida, Mamoru; Fukazawa, Tetsuo; Yazawa, Noriko; Hasegawa, Toshihiko
1998-01-01
The relative dielectric constant of an organic solution containing polar ingredients is controlled to 13 or less to enable its solidification. The polarity of the organic solution can be evaluated quantitatively by using the relative dielectric constant. If the relative dielectric constant is high, it can be lowered by dilution with a non-polar organic solvent of low relative dielectric constant. With such procedures, solidification can be conducted using economical 12-hydroxystearic acid, processing of liquid wastes can be facilitated, and safety can be ensured. (T.M.)
Solution of the Schroedinger equation by a spectral method
International Nuclear Information System (INIS)
Feit, M.D.; Fleck, J.A. Jr.; Steiger, A.
1982-01-01
A new computational method for determining the eigenvalues and eigenfunctions of the Schroedinger equation is described. Conventional methods for solving this problem rely on diagonalization of a Hamiltonian matrix or iterative numerical solutions of a time-independent wave equation. The new method, in contrast, is based on the spectral properties of solutions to the time-dependent Schroedinger equation. The method requires the computation of a correlation function from a numerical solution psi(r, t). Fourier analysis of this correlation function reveals a set of resonant peaks that correspond to the stationary states of the system. Analysis of the location of these peaks reveals the eigenvalues with high accuracy. Additional Fourier transforms of psi(r, t) with respect to time generate the eigenfunctions. The effectiveness of the method is demonstrated for a one-dimensional asymmetric double well potential and for the two-dimensional Henon-Heiles potential.
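A compact sketch of this spectral approach for a 1-D harmonic oscillator (split-operator propagation, correlation function, then Fourier analysis; units with hbar = m = omega = 1, so the exact levels are n + 1/2):

```python
import numpy as np

# Split-operator propagation of the time-dependent Schroedinger equation
# for a 1-D harmonic oscillator (hbar = m = omega = 1).
N, L = 256, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, dx)
V = 0.5 * x**2

psi0 = np.exp(-0.5 * (x - 1.0) ** 2)             # displaced Gaussian
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

dt, steps = 0.05, 4096
expV = np.exp(-0.5j * dt * V)                    # half-step in the potential
expT = np.exp(-0.5j * dt * k**2)                 # full kinetic step
psi = psi0.astype(complex)
corr = np.empty(steps, complex)
for n in range(steps):
    corr[n] = np.sum(np.conj(psi0) * psi) * dx   # correlation function
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

# Fourier analysis: resonant peaks sit at the stationary-state energies.
spec = np.abs(np.fft.fft(corr * np.hanning(steps)))
freq = np.fft.fftfreq(steps, dt)
E0 = -2 * np.pi * freq[np.argmax(spec)]          # dominant peak: ground state
```

With this FFT sign convention each stationary state appears at angular frequency -E_n; the dominant peak recovers E_0 = 1/2 to within the frequency resolution 2*pi/(steps*dt).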
The boundary element method for the solution of the multidimensional inverse heat conduction problem
International Nuclear Information System (INIS)
Lagier, Guy-Laurent
1999-01-01
This work focuses on the solution of the inverse heat conduction problem (IHCP), which consists in the determination of boundary conditions from a given set of internal temperature measurements. This problem is difficult to solve due to its ill-posedness and high sensitivity to measurement error. As a consequence, numerical regularization procedures are required to solve this problem. However, most of these methods depend on the dimension and the nature, stationary or transient, of the problem. Furthermore, these methods introduce parameters, called hyper-parameters, which have to be chosen optimally but cannot be determined a priori. A new general method is therefore proposed for solving the IHCP. This method is based on a Boundary Element Method formulation and the use of the Singular Value Decomposition as a regularization procedure. Thanks to this method, it is possible to identify and eliminate the directions of the solution in which the measurement error plays the major role. The algorithm is first validated on two-dimensional stationary and one-dimensional transient problems. Some criteria are presented for choosing the hyper-parameters. The methodology is then applied to two-dimensional and three-dimensional, theoretical or experimental, problems. The results are compared with those obtained by a standard method and show the accuracy of the method, its generality, and the validity of the proposed criteria. (author)
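The truncated-SVD regularization step described above can be illustrated on a generic ill-conditioned system standing in for the BEM sensitivity matrix; the truncation level plays the role of the hyper-parameter:

```python
import numpy as np

# Generic ill-conditioned system (not the paper's BEM matrix).
rng = np.random.default_rng(3)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                  # rapidly decaying singular values
A = U @ np.diag(s) @ V.T

x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy "measurements"

def tsvd_solve(A, b, rtol):
    """Discard the directions in which measurement error dominates."""
    u, sv, vt = np.linalg.svd(A)
    keep = sv > rtol * sv[0]               # hyper-parameter: truncation level
    return vt[keep].T @ ((u[:, keep].T @ b) / sv[keep])

x_naive = np.linalg.solve(A, b)            # noise amplified by 1/sigma_min
x_reg = tsvd_solve(A, b, rtol=1e-6)
```

The naive solve amplifies the measurement noise along the smallest singular directions, while the truncated solution stays close to the true one.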
International Nuclear Information System (INIS)
Belendez, A; Pascual, C; Fernandez, E; Neipp, C; Belendez, T
2008-01-01
A modified He's homotopy perturbation method is used to calculate higher-order analytical approximate solutions to the relativistic and Duffing-harmonic oscillators. The He's homotopy perturbation method is modified by truncating the infinite series corresponding to the first-order approximate solution before introducing this solution in the second-order linear differential equation, and so on. We find this modified homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. The approximate formulae obtained show excellent agreement with the exact solutions, and are valid for small as well as large amplitudes of oscillation, including the limiting cases of amplitude approaching zero and infinity. For the relativistic oscillator, only one iteration leads to high accuracy of the solutions with a maximal relative error for the approximate frequency of less than 1.6% for small and large values of oscillation amplitude, while this relative error is 0.65% for two iterations with two harmonics and as low as 0.18% when three harmonics are considered in the second approximation. For the Duffing-harmonic oscillator the relative error is as low as 0.078% when the second approximation is considered. Comparison of the result obtained using this method with those obtained by the harmonic balance methods reveals that the former is very effective and convenient
International Nuclear Information System (INIS)
Demirbas, E.; Kobya, M.; Konukman, A.E.S.
2008-01-01
In this study, the preparation of activated carbon from almond shell with H2SO4 activation and its ability to remove toxic hexavalent chromium from aqueous solutions are reported. The influences of several operating parameters such as pH, particle size and temperature on the adsorption capacity were investigated. Adsorption of Cr(VI) is found to be highly pH-, particle size- and temperature-dependent. Four adsorption isotherm models, namely Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich, were used to analyze the equilibrium data. The Langmuir isotherm provided the best correlation for Cr(VI) onto the almond shell activated carbon (ASC). The adsorption capacity was calculated from the Langmuir isotherm as 190.3 mg/g at 323 K. Thermodynamic parameters were evaluated, and the adsorption was endothermic, showing monolayer adsorption of Cr(VI). Five error functions were used to treat the equilibrium data using non-linear optimization techniques for evaluating the fit of the isotherm equations. The highest correlation for the isotherm equations in this system was obtained for the Freundlich isotherm. ASC is found to be an inexpensive and effective adsorbent for removal of Cr(VI) from aqueous solutions.
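The isotherm-fitting step described above can be sketched as a non-linear least-squares fit of the Langmuir model to synthetic equilibrium data (the data are illustrative, not the paper's measurements), together with one of the error functions used to compare isotherms:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: monolayer capacity qm, affinity constant KL."""
    return qm * KL * Ce / (1.0 + KL * Ce)

# Synthetic equilibrium data (qm chosen near the paper's 190.3 mg/g; the
# KL value and noise level are assumptions).
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # mg/L
rng = np.random.default_rng(4)
q_obs = langmuir(Ce, 190.3, 0.05) + rng.normal(0.0, 2.0, Ce.size)

# Non-linear optimization of the isotherm parameters.
popt, _ = curve_fit(langmuir, Ce, q_obs, p0=(100.0, 0.01))
qm_fit, KL_fit = popt
sse = np.sum((q_obs - langmuir(Ce, *popt)) ** 2)  # one common error function
```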
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
Error analysis of semidiscrete finite element methods for inhomogeneous time-fractional diffusion
Jin, B.; Lazarov, R.; Pasciak, J.; Zhou, Z.
2014-01-01
© 2014 Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved. We consider the initial-boundary value problem for an inhomogeneous time-fractional diffusion equation with a homogeneous Dirichlet boundary condition, a vanishing initial data and a nonsmooth right-hand side in a bounded convex polyhedral domain. We analyse two semidiscrete schemes based on the standard Galerkin and lumped mass finite element methods. Almost optimal error estimates are obtained for right-hand side data f(x, t) ∈ L∞(0, T; H^q(Ω)), −1 < q ≤ 1, for both semidiscrete schemes. For the lumped mass method, the optimal L^2(Ω)-norm error estimate requires symmetric meshes. Finally, two-dimensional numerical experiments are presented to verify our theoretical results.
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti
2013-01-01
We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and lumped mass Galerkin FEM, using piecewise linear functions. We establish almost optimal with respect to the data regularity error estimates, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H^2(Ω) ∩ H^1_0(Ω) and ν ∈ L^2(Ω). For the lumped mass method, the optimal L^2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.
Evaluating Method Engineer Performance: an error classification and preliminary empirical study
Directory of Open Access Journals (Sweden)
Steven Kelly
1998-11-01
We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as means for presenting methods, and these different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification and show some interesting differences between the paradigms.
Hydrogen/deuterium substitution methods: understanding water structure in solution
International Nuclear Information System (INIS)
Soper, A.K.
1993-01-01
The hydrogen/deuterium substitution method has been used for different applications, such as the short range order between water molecules in a number of different environments (aqueous solutions of organic molecules), or to study the partial structure factors of water at high pressure and temperature. The absolute accuracy that can be obtained remains uncertain, but important qualitative information can be obtained on the local organization of water in aqueous solution. Some recent results with pure water, methanol and dimethyl sulphoxide (DMSO) solutions are presented. It is shown that the short range water structure is not greatly affected by most solutes except at high concentrations and when the solute species has its own distinctive interaction with water (such as a dissolved small ion). 3 figs., 14 refs
A method for optical ground station reduce alignment error in satellite-ground quantum experiments
He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei
2018-03-01
A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in completing satellite-ground quantum experiments. The OGS corrects its pointing direction by feeding the satellite trajectory error to the coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis must therefore ensure that the beacon covers the quantum satellite at all times when it passes over the OGSs. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical system was a commercial telescope, the change of position of the target in the coarse CCD was up to 600 μrad along with the change of elevation angle. In this paper, a method for reducing the alignment error between the beacon beam and the fine tracking CCD is proposed. Firstly, the OGS fits the curve of target positions in the coarse CCD as a function of elevation angle. Secondly, the OGS fits the curve of hexapod secondary mirror positions as a function of elevation angle. Thirdly, when tracking the satellite, the fine tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the positions of the hexapod secondary mirror are adjusted according to the second calibration curve. Finally, experimental results are presented. They show that the alignment error is less than 50 μrad.
Umari, Amjad M.J.; Gorelick, Steven M.
1986-01-01
In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
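The conversion of a complex eigensystem to a real one can be sketched as follows: each conjugate eigenpair a ± ib with eigenvector p ± iq contributes the real columns p, q and a 2x2 block [[a, b], [-b, a]], so that A W = W B holds entirely in real arithmetic. This is a generic linear-algebra sketch, not the paper's specific algorithm:

```python
import numpy as np

def real_eigensystem(A):
    """Represent the (possibly complex) eigensystem of a real matrix A by a
    real pair (W, B) with A @ W = W @ B.  Relies on LAPACK returning
    complex conjugate eigenvalue pairs consecutively."""
    vals, vecs = np.linalg.eig(A)
    cols, blocks, i, n = [], [], 0, A.shape[0]
    while i < n:
        lam, v = vals[i], vecs[:, i]
        if abs(lam.imag) < 1e-12:            # real eigenvalue: keep as is
            cols.append(v.real)
            blocks.append([[lam.real]])
            i += 1
        else:                                # pair a +/- ib, v = p +/- iq
            cols.append(v.real)
            cols.append(v.imag)
            a, b = lam.real, lam.imag
            blocks.append([[a, b], [-b, a]])
            i += 2                           # skip the conjugate partner
    W = np.column_stack(cols)
    B = np.zeros((n, n))
    j = 0
    for blk in blocks:
        m = len(blk)
        B[j:j + m, j:j + m] = blk
        j += m
    return W, B

# Asymmetric example (advection-like coupling) with a complex pair.
A = np.array([[0.0, -2.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.5]])
W, B = real_eigensystem(A)
```

The identity A(p + iq) = (a + ib)(p + iq) gives Ap = ap - bq and Aq = bp + aq, which is exactly what the 2x2 block encodes.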
Cost–benefit analysis method for building solutions
International Nuclear Information System (INIS)
Araújo, Catarina; Almeida, Manuela; Bragança, Luís; Barbosa, José Amarilio
2016-01-01
Highlights: • A new cost–benefit method was developed to compare building solutions. • The method considers energy performance, life cycle costs and investment willingness. • The graphical analysis helps stakeholders to easily compare building solutions. • The method was applied to a case study showing consistency and feasibility. - Abstract: The building sector is responsible for consuming approximately 40% of the final energy in Europe. However, more than 50% of this consumption can be reduced through energy-efficient measures. Our society is facing not only a severe and unprecedented environmental crisis but also an economic crisis of similar magnitude. In light of this, the EU has developed legislation promoting the use of the Cost-Optimal (CO) method in order to improve building energy efficiency, in which the selection criterion is based on life cycle costs. Nevertheless, studies show that the implementation of energy-efficient solutions is far from ideal. Therefore, it is very important to analyse the reasons for this gap between theory and implementation, as well as to improve selection methods. This study aims to develop a methodology based on a cost-effectiveness analysis, which can be seen as an improvement to the CO method as it considers the investment willingness of stakeholders in the selection process of energy-efficient solutions. The method uses a simple graphical display in which the stakeholders' investment willingness is identified as the slope of a reference line, allowing easy selection between building solutions. This method will lead to the selection of solutions that are more desired, from the stakeholders' point of view, and more energy-efficient than those selected through the CO method.
Impact of Channel Estimation Errors on Multiuser Detection via the Replica Method
Directory of Open Access Journals (Sweden)
Li Husheng
2005-01-01
For practical wireless DS-CDMA systems, channel estimation is imperfect due to noise and interference. In this paper, the impact of channel estimation errors on multiuser detection (MUD) is analyzed under the framework of the replica method. System performance is obtained in the large-system limit for optimal MUD, linear MUD, and turbo MUD, and is validated by numerical results for finite systems.
Circuit and method for comparator offset error detection and correction in ADC
2017-01-01
PROBLEM TO BE SOLVED: To provide a method for calibrating an analog-to-digital converter (ADC).SOLUTION: The method comprises: sampling an input voltage signal; comparing the sampled input voltage signal with an output signal of a feedback digital-to-analog converter (DAC) 40; determining in a
Method for improving solution flow in solution mining of a mineral
International Nuclear Information System (INIS)
Moore, T.
1980-01-01
An improved method for the solution mining of a mineral from a subterranean formation containing same, in which injection and production wells are drilled and completed within said formation, leach solution and an oxidant are injected through said injection well into said formation to dissolve said mineral, and said dissolved mineral is recovered via said production well, wherein the improvement comprises pretreating said formation with an acid gas to improve the permeability thereof.
The characterization methods for colloids in aqueous solutions
International Nuclear Information System (INIS)
Vuorinen, U.; Kumpulainen, H.
1993-11-01
This literature review deals with characterization methods for colloids in aqueous solutions and in groundwater. The basis for the review has been the needs of nuclear waste disposal studies and the methods applicable in such studies. The methods considered include non-destructive laser-spectroscopic methods (e.g. TRLFS, LPAS, PALS), several separation methods (e.g. ultrafiltration, dialysis, electrophoresis, field-flow fractionation) and also some surface analytical methods, as well as some other methods giving additional information on the formation and migration properties of colloids. (au.) (71 refs., 13 figs., 3 tabs.)
Differential and difference equations a comparison of methods of solution
Maximon, Leonard C
2016-01-01
This book, intended for researchers and graduate students in physics, applied mathematics and engineering, presents a detailed comparison of the important methods of solution for linear differential and difference equations - variation of constants, reduction of order, Laplace transforms and generating functions - bringing out the similarities as well as the significant differences in the respective analyses. Equations of arbitrary order are studied, followed by a detailed analysis for equations of first and second order. Equations with polynomial coefficients are considered and explicit solutions for equations with linear coefficients are given, showing significant differences in the functional form of solutions of differential equations from those of difference equations. An alternative method of solution involving transformation of both the dependent and independent variables is given for both differential and difference equations. A comprehensive, detailed treatment of Green’s functions and the associat...
Particular solution of the discrete-ordinate method.
Qin, Yi; Box, Michael A; Jupp, David L
2004-06-20
We present two methods that can be used to derive the particular solution of the discrete-ordinate method (DOM) for an arbitrary source in a plane-parallel atmosphere, which allows us to solve the transfer equation 12-18% faster in the case of a single beam source and even faster for an atmospheric thermal emission source. We also remove the divide-by-zero problem that occurs when a beam source coincides with a Gaussian quadrature point. In our implementation, solutions for multiple sources can be obtained simultaneously. Each extra source costs only 1.3-3.6% of the CPU time required for a full solution. The GDOM code that we developed previously has been revised to integrate with the DOM. Therefore we are now able to compute the Green's function and DOM solutions simultaneously.
Milestones in the Development of Iterative Solution Methods
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe
2010-01-01
Roč. 2010, - (2010), s. 1-33 ISSN 2090-0147 Institutional research plan: CEZ:AV0Z30860518 Keywords : iterative solution methods * convergence acceleration methods * linear systems Subject RIV: JC - Computer Hardware ; Software http://www.hindawi.com/journals/jece/2010/972794.html
Comparing numerical methods for the solutions of the Chen system
International Nuclear Information System (INIS)
Noorani, M.S.M.; Hashim, I.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.
2007-01-01
In this paper, the Adomian decomposition method (ADM) is applied to the Chen system which is a three-dimensional system of ODEs with quadratic nonlinearities. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and the classical fourth-order Runge-Kutta (RK4) numerical solutions are made. In particular we look at the accuracy of the ADM as the Chen system changes from a non-chaotic system to a chaotic one. To highlight some computational difficulties due to a high Lyapunov exponent, a comparison with the Lorenz system is given
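The RK4 baseline used for comparison above is straightforward to sketch. The snippet below integrates the Chen system with a classical fourth-order Runge-Kutta scheme; the parameter values a = 35, b = 3, c = 28 are a commonly quoted chaotic regime and are assumed here, not taken from the paper:

```python
import numpy as np

def chen_rhs(state, a=35.0, b=3.0, c=28.0):
    # Chen system: three ODEs with quadratic nonlinearities.
    x, y, z = state
    return np.array([a * (y - x),
                     (c - a) * x - x * z + c * y,
                     x * y - b * z])

def rk4_step(f, state, dt):
    # One classical fourth-order Runge-Kutta (RK4) step.
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate 5 s of trajectory; the chaotic attractor is bounded,
# so the state should remain finite and of moderate size.
state = np.array([-10.0, 0.0, 37.0])
for _ in range(5000):
    state = rk4_step(chen_rhs, state, 1e-3)
max_abs = float(np.max(np.abs(state)))
all_finite = bool(np.all(np.isfinite(state)))
```

A series solution such as the ADM would be compared against the endpoint of exactly this kind of integration.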
Solution Methods for the Periodic Petrol Station Replenishment Problem
Directory of Open Access Journals (Sweden)
C Triki
2013-12-01
Full Text Available In this paper we introduce the Periodic Petrol Station Replenishment Problem (PPSRP) over a T-day planning horizon and describe four heuristic methods for its solution. Even though all the proposed heuristics belong to the common partitioning-then-routing paradigm, they differ in how they assign the stations to each day of the horizon. The resulting daily routing problems are then solved exactly to optimality. Moreover, an improvement procedure is also developed with the aim of ensuring a better quality solution. Our heuristics are tested and compared in two real-life cases, and our computational results show encouraging improvements with respect to a human planning solution
Directory of Open Access Journals (Sweden)
Kim Hyang-Mi
2012-09-01
Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure participants' exposures accurately, even when their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated at an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement error, together with complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated into the estimation procedure by constrained estimation methods together with expectation-maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation-maximization (CEM). We illustrate the methods in an analysis of decline in lung function due to exposure to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when, within each exposure group, at least a 'moderate' number of individuals have their
Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun
2016-01-01
Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous and virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
Spectral radiative property control method based on filling solution
International Nuclear Information System (INIS)
Jiao, Y.; Liu, L.H.; Hsu, P.-F.
2014-01-01
Controlling thermal radiation by tailoring the spectral properties of microstructures is a promising method that can be applied in many industrial systems and has been widely researched recently. Among the various property-tailoring schemes, geometry design of microstructures is a commonly used method. However, existing radiative property tailoring is limited by the adjustability of processed microstructures; in other words, the spectral radiative properties of microscale structures cannot be changed after the gratings are fabricated. In this paper, we propose a method that adjusts the grating spectral properties by injecting a filling solution, which can modify the thermal radiation of a fabricated microstructure. This method therefore overcomes the limitation mentioned above. Both mercury and water are adopted as the filling solution in this study. Aluminum and silver are selected as the grating materials to investigate the generality and limitations of this control method. Rigorous coupled-wave analysis is used to investigate the spectral radiative properties of these filling-solution grating structures. A magnetic polaritons mechanism identification method is proposed based on the LC circuit model principle. It is found that this control method can be used with different grating materials, and that different filling solutions enable the high absorption peak to move to longer or shorter wavelength bands. The results show that filling-solution grating structures are promising for active control of spectral radiative properties. -- Highlights: • A filling solution grating structure is designed to adjust spectral radiative properties. • The mechanism of radiative property control is studied for engineering utilization. • Different grating materials are studied to find multi-functions for grating
Mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods
International Nuclear Information System (INIS)
Baker, A.R.
1982-07-01
A study has been performed of mesh-size errors in diffusion-theory calculations using finite-difference and finite-element methods. As the objective was to illuminate the issues, the study was performed for a 1D slab model of a reactor with one neutron-energy group for which analytical solutions were possible. A computer code SLAB was specially written to perform the finite-difference and finite-element calculations and also to obtain the analytical solutions. The standard finite-difference equations were obtained by starting with an expansion of the neutron current in powers of the mesh size, h, and keeping terms as far as h². It was confirmed that these equations led to the well-known result that the criticality parameter varied with the square of the mesh size. An improved form of the finite-difference equations was obtained by continuing the expansion for the neutron current as far as the term in h⁴. In this case, the critical parameter varied as the fourth power of the mesh size. The finite-element solutions for 2 and 3 nodes per element revealed that the criticality parameter varied as the square and fourth power of the mesh size, respectively. Numerical results are presented for a bare reactive core of uniform composition with 2 zones of different uniform mesh and for a reactive core with an absorptive reflector. (author)
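The h² behaviour of a standard second-order finite-difference scheme is easy to reproduce numerically. The sketch below solves a 1D Poisson model problem (not the reactor slab model of the record above) with central differences and checks that halving the mesh size reduces the maximum error by roughly a factor of four:

```python
import numpy as np

def fd_poisson_error(n):
    # Solve -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0 using
    # the standard second-order central difference on n interior points;
    # the exact solution is u(x) = sin(pi x).
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    u = np.linalg.solve(A, np.pi**2 * np.sin(np.pi * x))
    return float(np.max(np.abs(u - np.sin(np.pi * x))))

# Halving h (h = 1/20 -> h = 1/40) should divide the error by about 4,
# confirming the O(h^2) convergence of the scheme.
ratio = fd_poisson_error(19) / fd_poisson_error(39)
```

The same experiment with a fourth-order-corrected scheme would show the ratio approaching 16 instead of 4.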
Generalized Truncated Methods for an Efficient Solution of Retrial Systems
Directory of Open Access Journals (Sweden)
Ma Jose Domenech-Benlloch
2008-01-01
Full Text Available We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions to these systems, approximate methods are required. We propose two different generalized truncated methods to effectively solve this type of system. The proposed methods are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the most well-known methods that have appeared in the literature over a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with a moderate growth in the computational cost.
A method for the quantification of model form error associated with physical systems.
Energy Technology Data Exchange (ETDEWEB)
Wallen, Samuel P.; Brake, Matthew Robert
2014-03-01
In the process of model validation, models are often declared valid when the differences between model predictions and experimental data sets are satisfactorily small. However, little consideration is given to the effectiveness of a model using parameters that deviate slightly from those that were fitted to data, such as a higher load level. Furthermore, few means exist to compare and choose between two or more models that reproduce data equally well. These issues can be addressed by analyzing model form error, which is the error associated with the differences between the physical phenomena captured by models and those of the real system. This report presents a new quantitative method for model form error analysis and applies it to data taken from experiments on tape joint bending vibrations. Two models for the tape joint system are compared, and suggestions for future improvements to the method are given. As the available data set is too small to draw any statistical conclusions, the focus of this paper is the development of a methodology that can be applied to general problems.
Recursive prediction error methods for online estimation in nonlinear state-space models
Directory of Open Access Journals (Sweden)
Dag Ljungquist
1994-04-01
Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
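The linear special case of the recursive prediction error method is ordinary recursive least squares (RLS); the nonlinear state-space variants discussed in the record above add a model linearization around the current estimate. A minimal RLS sketch on a synthetic static linear model:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive least-squares step: theta is the parameter estimate,
    # P the (scaled) covariance, phi the regressor, y the new observation,
    # lam a forgetting factor (1.0 = no forgetting).
    err = y - phi @ theta                    # prediction error
    K = P @ phi / (lam + phi @ P @ phi)      # gain vector
    theta = theta + K * err
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

# Track a static two-parameter linear model from noisy observations.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```

With a forgetting factor lam < 1, the same update tracks slowly drifting parameters, which is the online-tracking setting the comparison in the paper is concerned with.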
Human reliability analysis of errors of commission: a review of methods and applications
Energy Technology Data Exchange (ETDEWEB)
Reer, B
2007-06-15
Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for
Error analysis in Fourier methods for option pricing for exponential Lévy processes
Crocce, Fabian
2015-01-07
We derive an error bound for utilising the discrete Fourier transform method for solving Partial Integro-Differential Equations (PIDE) that describe European option prices for exponential Lévy driven asset prices. We give sufficient conditions for the existence of an L∞ bound that separates the dynamical contribution from that arising from the type of the option in question. The bound achieved does not rely on information about the asymptotic behaviour of option prices at extreme asset values. In addition, we demonstrate improved numerical performance for select examples of practical relevance when compared to established bounding methods.
Human reliability analysis of errors of commission: a review of methods and applications
International Nuclear Information System (INIS)
Reer, B.
2007-06-01
Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for
Milestones in the Development of Iterative Solution Methods
Directory of Open Access Journals (Sweden)
Owe Axelsson
2010-01-01
Full Text Available Iterative solution methods to solve linear systems of equations were originally formulated as basic iteration methods of defect-correction type, commonly referred to as Richardson's iteration method. These methods developed further into various versions of splitting methods, including the successive overrelaxation (SOR method. Later, immensely important developments included convergence acceleration methods, such as the Chebyshev and conjugate gradient iteration methods and preconditioning methods of various forms. A major strive has been to find methods with a total computational complexity of optimal order, that is, proportional to the degrees of freedom involved in the equation. Methods that have turned out to have been particularly important for the further developments of linear equation solvers are surveyed. Some of them are presented in greater detail.
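As a concrete instance of the convergence-acceleration methods surveyed above, a bare-bones conjugate gradient iteration for a symmetric positive definite system can be sketched as follows:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    # Basic conjugate gradient iteration for a symmetric positive
    # definite matrix A, starting from the zero vector.
    n = len(b)
    max_iter = n if max_iter is None else max_iter
    x = np.zeros(n)
    r = b - A @ x            # residual (the "defect")
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Demonstrate on a random symmetric positive definite system.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M @ M.T + 20.0 * np.eye(20)
b = rng.normal(size=20)
residual = float(np.linalg.norm(A @ conjugate_gradient(A, b) - b))
```

Preconditioning, the other major development surveyed, replaces the residual update with a solve against an approximate inverse of A to cut the iteration count.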
Rahman, Md Sayedur; Sathasivam, Kathiresan V
2015-01-01
Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb(2+), Cu(2+), Fe(2+), and Zn(2+) onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment.
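The error-function validation step described above can be illustrated with a small sketch. It evaluates three commonly used error functions (the paper applies nine; the three below are illustrative choices) for a Langmuir isotherm on synthetic data:

```python
import numpy as np

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce).
    return qmax * KL * Ce / (1.0 + KL * Ce)

def error_functions(q_obs, q_calc):
    # Three error functions commonly used to validate isotherm fits.
    resid = q_obs - q_calc
    return {
        "SSE": float(np.sum(resid**2)),                 # sum of squared errors
        "ARE": float(100.0 / len(q_obs)
                     * np.sum(np.abs(resid / q_obs))),  # avg relative error, %
        "chi2": float(np.sum(resid**2 / q_calc)),       # nonlinear chi-square
    }

# Synthetic check: data generated by the model give zero error for the
# true parameters (qmax and KL values here are arbitrary examples).
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = langmuir(Ce, qmax=50.0, KL=0.3)
errs = error_functions(q_obs, langmuir(Ce, 50.0, 0.3))
```

On real data, the candidate isotherm whose fitted parameters minimise these error functions would be the one retained.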
Directory of Open Access Journals (Sweden)
Madeiro Francisco
2010-01-01
Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity of detecting signals in the presence of fading.
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...
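The central claim above, that algorithms still reach a good approximate solution when computational errors are bounded by a small constant, can be illustrated with gradient descent on a simple quadratic (the error model and constants below are illustrative, not taken from the book):

```python
import numpy as np

def noisy_gradient_descent(grad, x0, step, n_iter, err_bound, rng):
    # Gradient descent in which every gradient evaluation is corrupted by
    # a perturbation whose norm is bounded by err_bound.
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        noise = rng.normal(size=x.shape)
        noise *= err_bound / max(np.linalg.norm(noise), 1e-15)
        x -= step * (grad(x) + noise)
    return x

# Minimise f(x) = 0.5 * ||x||^2 (optimum at the origin): the iterates
# settle into a neighbourhood of the optimum whose radius scales with
# the error bound, rather than diverging.
rng = np.random.default_rng(2)
x_final = noisy_gradient_descent(lambda x: x, np.ones(5), step=0.1,
                                 n_iter=500, err_bound=1e-3, rng=rng)
final_dist = float(np.linalg.norm(x_final))
```

Shrinking err_bound shrinks the final neighbourhood proportionally, which is the qualitative behaviour the book's convergence results formalise.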
A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization
Foster, John V.; Cunningham, Kevin
2010-01-01
Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration, including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements, to compute the pressure errors over a range of airspeeds. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the
Passive Methods as a Solution for Improving Indoor Environments
Orosa, José A
2012-01-01
There are many aspects to consider when evaluating or improving an indoor environment; thermal comfort, energy saving, preservation of materials, hygiene and health are all key aspects which can be improved by passive methods of environmental control. Passive Methods as a Solution for Improving Indoor Environments endeavours to fill the lack of analysis in this area by using over ten years of research to illustrate the effects of methods such as thermal inertia and permeable coverings; for example, the use of permeable coverings is a well known passive method, but its effects and ways to improve indoor environments have been rarely analyzed. Passive Methods as a Solution for Improving Indoor Environments includes both software simulations and laboratory and field studies. Through these, the main parameters that characterize the behavior of internal coverings are defined. Furthermore, a new procedure is explained in depth which can be used to identify the real expected effects of permeable coverings such ...
Comparative analysis of solution methods of the punctual kinetic equations
International Nuclear Information System (INIS)
Hernandez S, A.
2003-01-01
This work presents a comparative analysis of different analytical solutions of the point kinetics equations, which involve two variables of interest: (a) the time behavior of the neutron population, and (b) the time behavior of the different groups of delayed-neutron precursors. The first solution is based on solving the transfer function of the differential equation for the neutron population, obtaining the poles that determine the stability of this transfer function. It is shown that the time variation of the system reactivity can be handled as required, since the integration time for this method does not affect the result. The second solution is based on an iterative method such as Runge-Kutta or the Euler method, where the algorithm solves first-order differential equations, thereby providing a solution for each differential equation of the point kinetics system. It is shown that a correct time behavior of the neutron population is obtained only when the equations are integrated over very short time intervals, forcing the time variation of the reactivity to change very quickly without any control over the time step. In both methods the same changes are applied to the system reactivity and the integration times, validating the results by plotting the time behavior of the neutron population versus time. (Author)
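The step-size sensitivity of the iterative approach can be illustrated with a minimal point kinetics model. The sketch below uses a single delayed-neutron group with illustrative constants (the full equations use several precursor groups) and integrates a small step reactivity insertion with RK4:

```python
import numpy as np

# Point kinetics with a single delayed-neutron group; the constants below
# (beta, generation time Lambda, precursor decay constant lambda) are
# illustrative, not taken from the text.
BETA, GEN_TIME, DECAY = 0.0065, 1.0e-4, 0.08

def rhs(state, rho):
    n, c = state  # neutron population, precursor concentration
    dn = (rho - BETA) / GEN_TIME * n + DECAY * c
    dc = BETA / GEN_TIME * n - DECAY * c
    return np.array([dn, dc])

def rk4_step(state, rho, dt):
    k1 = rhs(state, rho)
    k2 = rhs(state + 0.5 * dt * k1, rho)
    k3 = rhs(state + 0.5 * dt * k2, rho)
    k4 = rhs(state + dt * k3, rho)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Step reactivity insertion of 0.001 from equilibrium; the prompt time
# scale (beta - rho)/Lambda is about 55 1/s, so dt must be much smaller
# than ~18 ms for the explicit iteration to behave, which is exactly the
# short-interval restriction noted in the abstract.
state = np.array([1.0, BETA / (GEN_TIME * DECAY)])
dt = 1.0e-5
for _ in range(10000):           # integrate 0.1 s
    state = rk4_step(state, 0.001, dt)
n_final = float(state[0])        # prompt jump to about beta/(beta - rho)
```

A transfer-function (pole-based) solution of the same linear problem is free of this step-size restriction, which is the contrast the comparison draws.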
Beam-pointing error compensation method of phased array radar seeker with phantom-bit technology
Directory of Open Access Journals (Sweden)
Qiuqiu WEN
2017-06-01
Full Text Available A phased array radar seeker (PARS) must be able to effectively decouple body motion and accurately extract the line-of-sight (LOS) rate for target missile tracking. In this study, a real-time two-channel beam-pointing error (BPE) compensation method of PARS for LOS rate extraction is designed. The PARS discrete beam motion principle is analyzed, and the mathematical model of beam scanning control is established. According to the principle of the antenna element shift phase, both the antenna element shift phase law and the causes of beam-pointing error under phantom-bit conditions are analyzed, and the effect of the BPE caused by phantom-bit technology (PBT) on the extraction accuracy of the LOS rate is examined. A compensation method is given, which includes coordinate transforms, beam angle margin compensation, and detector dislocation angle calculation. When the method is used, the beam angle margin in the pitch and yaw directions is calculated to reduce the effect of missile body disturbance and to improve LOS rate extraction precision by compensating for the detector dislocation angle. The simulation results validate the proposed method.
Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard
2018-06-01
The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.
On nonstationarity-related errors in modal combination rules of the response spectrum method
Pathak, Shashank; Gupta, Vinay K.
2017-10-01
Characterization of seismic hazard via (elastic) design spectra and the estimation of the linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is the preferred option for practicing engineers, modal combination rules play a central role in peak response estimation. Most of the available modal combination rules are, however, based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for estimating the extents to which nonstationarity affects the modal and total system responses when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant when the strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably over several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
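The CQC rule discussed above combines modal peak responses through cross-modal correlation coefficients. A minimal sketch, assuming equal modal damping in every mode and Der Kiureghian's form of the correlation coefficient (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def cqc_peak(modal_peaks, omegas, zeta=0.05):
    """Peak response by the CQC rule with Der Kiureghian's correlation
    coefficient, assuming the same damping ratio zeta in every mode."""
    R = np.asarray(modal_peaks, dtype=float)
    w = np.asarray(omegas, dtype=float)
    b = w[:, None] / w[None, :]  # frequency ratios beta_ij = omega_i / omega_j
    rho = (8 * zeta**2 * (1 + b) * b**1.5) / ((1 - b**2)**2 + 4 * zeta**2 * b * (1 + b)**2)
    return float(np.sqrt(R @ rho @ R))

# Two well-separated modes: the cross term is small, so CQC is close to SRSS
print(cqc_peak([1.0, 0.5], [2 * np.pi * 1.0, 2 * np.pi * 5.0]))  # close to sqrt(1.25)
```

For closely spaced modes the correlation coefficient approaches one and CQC approaches the absolute sum, which is where it departs most from the SRSS rule.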
International Nuclear Information System (INIS)
Gillet, M.
1986-07-01
This thesis presents a study of ''primary coolant circuit inventory monitoring'' for a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of the detected failures, which is difficult owing to the non-linearity of the problem, is treated by the least-squares method of the predictor or corrector type, and by filtering. It is within this framework that a new optimized method with superlinear convergence is developed, and that a segmented linearization of the model is introduced, with a view to multiple filtering. [fr]
Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.
2018-01-01
A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic deviations of opposite sign in the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ˜10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
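The bias mechanism described above can be reproduced with a short fit. This is a sketch only: it assumes the in-plane Kittel relation in CGS units and made-up film parameters (not the paper's data), and scipy's `curve_fit` stands in for whatever fitting routine the authors actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA0 = 1.3996e6  # Hz/Oe: mu_B/h, so that gamma/(2*pi) = g * GAMMA0

def kittel(H, g, M_eff, dH=0.0):
    """In-plane Kittel relation f(H), CGS units, with a field offset dH (Oe)."""
    Hc = H + dH
    return g * GAMMA0 * np.sqrt(Hc * (Hc + 4 * np.pi * M_eff))

# Synthetic resonance data with a +5 Oe systematic field offset (illustrative values)
g_true, M_true, dH_true = 2.10, 800.0, 5.0
H = np.linspace(200.0, 2500.0, 40)
f = kittel(H, g_true, M_true, dH_true)

# Naive fit that ignores the offset: the bias leaks into g and M_eff
(g_naive, M_naive), _ = curve_fit(lambda H, g, M: kittel(H, g, M), H, f, p0=[2.0, 700.0])

# Treating dH as a free fit parameter removes the bias
(g_fit, M_fit, dH_fit), _ = curve_fit(kittel, H, f, p0=[2.0, 700.0, 0.0])
print(g_naive, M_naive)
print(g_fit, M_fit, dH_fit)
```

Fitting the offset as a free parameter is one way to decouple it from g and Ms; the paper's own methodology is experimental rather than purely numerical.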
International Nuclear Information System (INIS)
Wang Zhongming; Lu Min; Yao Zhibin; Guo Hongxia
2011-01-01
SRAM-based FPGAs are very susceptible to radiation-induced Single-Event Upsets (SEUs) in space applications. The failure mechanism in an FPGA's configuration memory differs from that in traditional memory devices. As a result, there is a growing demand for methodologies that can quantitatively evaluate the impact of this effect, and fault injection appears to meet such a requirement. In this paper, we propose a new methodology to analyze soft errors in SRAM-based FPGAs. The method is based on an in-depth understanding of the device architecture and of the failure mechanisms induced by configuration upsets. The developed programs read in the placed and routed netlist, search for critical logic nodes and paths that may destroy the circuit's topological structure, and then query a database storing the decoded relationship between the configurable resources and the corresponding control bits to obtain the sensitive bits. Accelerator irradiation tests and fault injection experiments were carried out to validate this approach. (semiconductor integrated circuits)
Residual and Backward Error Bounds in Minimum Residual Krylov Subspace Methods
Czech Academy of Sciences Publication Activity Database
Paige, C. C.; Strakoš, Zdeněk
2002-01-01
Roč. 23, č. 6 (2002), s. 1899-1924 ISSN 1064-8275 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: AV0Z1030915 Keywords : linear equations * eigenproblem * large sparse matrices * iterative solutions * Krylov subspace methods * Arnoldi method * GMRES * modified Gram-Schmidt * least squares * total least squares * singular values Subject RIV: BA - General Mathematics Impact factor: 1.291, year: 2002
Method for improved decomposition of metal nitrate solutions
Haas, Paul A.; Stines, William B.
1983-10-11
A method for co-conversion of aqueous solutions of one or more heavy metal nitrates wherein thermal decomposition within a temperature range of about 300.degree. to 800.degree. C. is carried out in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal.
Analytic method for solitary solutions of some partial differential equations
Energy Technology Data Exchange (ETDEWEB)
Ugurlu, Yavuz [Firat University, Department of Mathematics, 23119 Elazig (Turkey); Kaya, Dogan [Firat University, Department of Mathematics, 23119 Elazig (Turkey)], E-mail: dkaya@firat.edu.tr
2007-10-22
In this Letter by considering an improved tanh function method, we found some exact solutions of the clannish random walker's parabolic equation, the modified Korteweg-de Vries (KdV) equation, and the Sharma-Tasso-Olver (STO) equation with its fission and fusion, the Jaulent-Miodek equation.
Analytic method for solitary solutions of some partial differential equations
International Nuclear Information System (INIS)
Ugurlu, Yavuz; Kaya, Dogan
2007-01-01
In this Letter by considering an improved tanh function method, we found some exact solutions of the clannish random walker's parabolic equation, the modified Korteweg-de Vries (KdV) equation, and the Sharma-Tasso-Olver (STO) equation with its fission and fusion, the Jaulent-Miodek equation
Solutions of hyperbolic equations with the CIP-BS method
International Nuclear Information System (INIS)
Utsumi, Takayuki; Koga, James; Yamagiwa, Mitsuru; Yabe, Takashi; Aoki, Takayuki
2004-01-01
In this paper, we show that a new numerical method, the Constrained Interpolation Profile - Basis Set (CIP-BS) method, can solve general hyperbolic equations efficiently. This method uses a simple polynomial basis set that is easily extendable to any desired higher-order accuracy. The interpolating profile is chosen so that the subgrid scale solution approaches the local real solution owing to the constraints from the spatial derivatives of the master equations. Then, introducing scalar products, the linear and nonlinear partial differential equations are uniquely reduced to the ordinary differential equations for values and spatial derivatives at the grid points. The method gives stable, less diffusive, and accurate results. It is successfully applied to the continuity equation, the Burgers equation, the Korteweg-de Vries equation, and one-dimensional shock tube problems. (author)
International Nuclear Information System (INIS)
Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.
1996-01-01
This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.
A fast method for optimal reactive power flow solution
Energy Technology Data Exchange (ETDEWEB)
Sadasivam, G; Khan, M A [Anna Univ., Madras (IN). Coll. of Engineering
1990-01-01
A fast successive linear programming (SLP) method for minimizing transmission losses and improving the voltage profile is proposed. The method uses the same compactly stored, factorized constant matrices in all the LP steps, both for power flow solution and for constructing the LP model. The inherent oscillatory convergence of SLP methods is overcome by proper selection of initial step sizes and their gradual reduction. Detailed studies on three systems, including a 109-bus system, reveal the fast and reliable convergence property of the method. (author).
Multigroup adjoint transport solution using the method of cyclic characteristics
International Nuclear Information System (INIS)
Assawaroongruengchot, M.; Marleau, G.
2005-01-01
The adjoint transport solution algorithm based on the method of cyclic characteristics (MOCC) is developed for heterogeneous 2-dimensional geometries. The adjoint characteristics equation associated with a cyclic tracking line is formulated, then a closed form for the adjoint angular flux can be determined. The acceleration techniques are implemented using the group-reduction and group-splitting techniques. To demonstrate the efficacy of the algorithm, the calculations are performed on the 17×17 PWR and Watanabe-Maynard benchmark problems. Comparisons of adjoint flux and k_eff results obtained by the MOCC and collision probability (CP) methods are performed. The mathematical relationship between the pseudo-adjoint flux obtained by the CP method and the adjoint flux by the MOCC method is presented. It appears that the pseudo-adjoint flux by the CP method is equivalent to the adjoint flux by the MOCC method and that the MOCC method requires lower computing time than the CP method for a single adjoint flux calculation.
Multigroup adjoint transport solution using the method of cyclic characteristics
Energy Technology Data Exchange (ETDEWEB)
Assawaroongruengchot, M.; Marleau, G. [Ecole Polytechnique de Montreal, Institut de Genie Nucleaire, Montreal, Quebec (Canada)
2005-07-01
The adjoint transport solution algorithm based on the method of cyclic characteristics (MOCC) is developed for the heterogeneous 2-dimensional geometries. The adjoint characteristics equation associated with a cyclic tracking line is formulated, then a closed form for adjoint angular flux can be determined. The acceleration techniques are implemented using the group-reduction and group-splitting techniques. To demonstrate the efficacy of the algorithm, the calculations are performed on the 17*17 PWR and Watanabe-Maynard benchmark problems. Comparisons of adjoint flux and k{sub eff} results obtained by MOCC and collision probability (CP) methods are performed. The mathematical relationship between pseudo-adjoint flux obtained by CP method and adjoint flux by MOCC method is presented. It appears that the pseudo-adjoint flux by CP method is equivalent to the adjoint flux by MOCC method and that the MOCC method requires lower computing time than the CP method for a single adjoint flux calculation.
Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems
Almeida, Regina C.
2010-08-01
A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification as well as a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.
Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems
Almeida, Regina C.; Oden, J. Tinsley
2010-01-01
A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification as well as a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single precision and double precision arithmetic. These solutions are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable order Adams predictor-corrector methods. There is an option for the direct integration of second order equations which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step size change points; saving the estimated local error; and reverse communication where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K of 8 bit bytes. This program was developed in 1983 and last updated in 1987.
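The predictor-corrector idea behind an Adams solver like SIVA/DIVA can be illustrated at a fixed second order. This sketch is not the package's algorithm (SIVA/DIVA varies both order and step size adaptively and is written in FORTRAN 77); it only shows the AB2 predictor with a trapezoidal corrector:

```python
import math

def abm2(f, t0, y0, h, n):
    """Fixed-order Adams-Bashforth-Moulton: AB2 predictor, trapezoidal corrector.

    A sketch of the predictor-corrector idea only; a production solver would
    vary order and step size and estimate the local error.
    """
    t, y = t0, y0
    f_prev = f(t, y)
    # Bootstrap the first step with Heun's method (AB2 needs two back values)
    y_pred = y + h * f_prev
    y = y + 0.5 * h * (f_prev + f(t + h, y_pred))
    t += h
    out = [(t0, y0), (t, y)]
    for _ in range(n - 1):
        f_curr = f(t, y)
        y_pred = y + h * (1.5 * f_curr - 0.5 * f_prev)   # AB2 predictor
        y = y + 0.5 * h * (f_curr + f(t + h, y_pred))    # corrector (one correction)
        f_prev = f_curr
        t += h
        out.append((t, y))
    return out

# y' = -y, y(0) = 1 integrated to t = 1; compare with exp(-1)
pts = abm2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
t_end, y_end = pts[-1]
print(t_end, y_end)  # y_end close to exp(-1)
```

Note the economy the abstract mentions: each step costs two derivative evaluations regardless of order, which is why Adams methods need fewer evaluations than one-step methods of comparable accuracy.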
Synthetic methods in phase equilibria: A new apparatus and error analysis of the method
DEFF Research Database (Denmark)
Fonseca, José; von Solms, Nicolas
2014-01-01
of the equipment was confirmed through several tests, including measurements along the three phase co-existence line for the system ethane + methanol, the study of the solubility of methane in water, and of carbon dioxide in water. An analysis regarding the application of the synthetic isothermal method...
Error characterization methods for surface soil moisture products from remote sensing
International Nuclear Information System (INIS)
Doubková, M.
2012-01-01
To support the operational use of Synthetic Aperture Radar (SAR) earth observation systems, the European Space Agency (ESA) is developing the Sentinel-1 radar satellites operating in C-band. Much like its SAR predecessors (Earth Resource Satellite, ENVISAT, and RADARSAT), Sentinel-1 will operate at a medium spatial resolution (ranging from 5 to 40 m), but with a greatly improved revisit period, especially over Europe (∼2 days). Given the planned high temporal sampling and the operational configuration, Sentinel-1 is expected to be beneficial for operational monitoring of dynamic processes in hydrology and phenology. The benefit of a C-band SAR monitoring service in hydrology has already been demonstrated within the scope of the Soil Moisture for Hydrometeorologic Applications (SHARE) project using data from the Global Mode (GM) of the Advanced Synthetic Aperture Radar (ASAR). To fully exploit the potential of SAR soil moisture products, a well-characterized error needs to be provided with the products. Understanding the errors of remotely sensed surface soil moisture (SSM) datasets is indispensable for their application in models, for the extraction of blended SSM products, and for their use in the evaluation of other soil moisture datasets. This thesis has several objectives. First, it presents the basics and state-of-the-art methods for evaluating SSM datasets, including both standard (e.g. root mean square error, correlation coefficient) and advanced (e.g. error propagation, triple collocation) evaluation measures. A summary of applications of soil moisture datasets is presented, and evaluation measures are suggested for each application according to its requirements on dataset quality. The evaluation of the Advanced Synthetic Aperture Radar (ASAR) Global Mode (GM) SSM using the standard and advanced evaluation measures comprises the second objective of the work. To achieve the second objective, the data from the Australian Water Assessment System
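The standard and advanced evaluation measures named above can be sketched briefly. Assuming three soil-moisture series with mutually independent, zero-mean errors observing the same truth, the covariance form of triple collocation estimates each dataset's error standard deviation without a ground-truth reference (synthetic data; the dataset labels are illustrative):

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error standard deviations of three datasets observing the same truth,
    assuming mutually independent, zero-mean errors (covariance notation)."""
    C = np.cov(np.vstack([x, y, z]))
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return np.sqrt([max(v, 0.0) for v in (ex2, ey2, ez2)])

rng = np.random.default_rng(0)
truth = rng.normal(0.30, 0.08, 200_000)        # synthetic soil-moisture "truth"
x = truth + rng.normal(0, 0.02, truth.size)    # e.g. a SAR retrieval
y = truth + rng.normal(0, 0.04, truth.size)    # e.g. a model estimate
z = truth + rng.normal(0, 0.03, truth.size)    # e.g. upscaled in-situ data

rmse_xy = np.sqrt(np.mean((x - y) ** 2))       # standard pairwise measure
corr_xy = np.corrcoef(x, y)[0, 1]
sig = triple_collocation(x, y, z)
print(rmse_xy, corr_xy, sig)                   # sig roughly [0.02, 0.04, 0.03]
```

Note how the pairwise RMSE mixes the errors of both datasets, while triple collocation attributes an error level to each dataset individually, which is the property that makes it attractive for validating remote-sensing SSM products.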
Lyons, I.; Furniss, D.; Blandford, A.; Chumbley, G.; Iacovides, I.; Wei, L.; Cox, A.; Mayer, A.; Vos, J.; Galal-Edeen, G. H.; Schnock, K. O.; Dykes, P. C.; Bates, D. W.; Franklin, B. D.
2018-01-01
INTRODUCTION: Intravenous medication administration has traditionally been regarded as error prone, with high potential for harm. A recent US multisite study revealed few potentially harmful errors despite a high overall error rate. However, there is limited evidence about infusion practices in England and how they relate to prevalence and types of error. OBJECTIVES: To determine the prevalence, types and severity of errors and discrepancies in infusion administration in English hospitals, an...
Beam-Based Error Identification and Correction Methods for Particle Accelerators
AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas
2014-06-10
Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...
Error Analysis of a Finite Element Method for the Space-Fractional Parabolic Equation
Jin, Bangti; Lazarov, Raytcho; Pasciak, Joseph; Zhou, Zhi
2014-01-01
© 2014 Society for Industrial and Applied Mathematics We consider an initial boundary value problem for a one-dimensional fractional-order parabolic equation with a space fractional derivative of Riemann-Liouville type and order α ∈ (1, 2). We study a spatial semidiscrete scheme using the standard Galerkin finite element method with piecewise linear finite elements, as well as fully discrete schemes based on the backward Euler method and the Crank-Nicolson method. Error estimates in the L2(D)- and Hα/2 (D)-norm are derived for the semidiscrete scheme and in the L2(D)-norm for the fully discrete schemes. These estimates cover both smooth and nonsmooth initial data and are expressed directly in terms of the smoothness of the initial data. Extensive numerical results are presented to illustrate the theoretical results.
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
International Nuclear Information System (INIS)
Zhang, Zhenjiu; Hu, Hong
2013-01-01
The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.
Determination of plutonium in pure plutonium nitrate solutions - Gravimetric method
International Nuclear Information System (INIS)
1987-01-01
This International Standard specifies a precise and accurate gravimetric method for determining the concentration of plutonium in pure plutonium nitrate solutions and reference solutions, containing between 100 and 300 g of plutonium per litre, in a nitric acid medium. The weighed portion of the plutonium nitrate is treated with sulfuric acid and evaporated to dryness. The plutonium sulfate is decomposed and converted to the oxide by heating in air. The oxide is ignited in air at 1200 to 1250 deg. C and weighed as stoichiometric plutonium dioxide, which is stable and non-hygroscopic.
A low error reconstruction method for confocal holography to determine 3-dimensional properties
Energy Technology Data Exchange (ETDEWEB)
Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)
2012-06-15
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or possible only with significant error. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the
A low error reconstruction method for confocal holography to determine 3-dimensional properties
International Nuclear Information System (INIS)
Jacquemin, P.B.; Herring, R.A.
2012-01-01
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary
Solution methods for large systems of linear equations in BACCHUS
International Nuclear Information System (INIS)
Homann, C.; Dorr, B.
1993-05-01
The computer programme BACCHUS is used to describe the steady state and transient thermal-hydraulic behaviour of the coolant in a fuel element with intact geometry in a fast breeder reactor. In such computer programmes, large systems of linear equations with sparse coefficient matrices, resulting from the discretization of the coolant conservation equations, must generally be solved thousands of times, giving rise to large demands on main storage and CPU time. The direct and iterative methods available in BACCHUS for solving these systems of linear equations are described, giving theoretical details and experience with their use in the programme. In addition, the use of a method of lines with a Runge-Kutta method for the solution of the partial differential equations is outlined. (orig.) [de]
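The iterative solution of such sparse, diagonally dominant systems can be illustrated with a generic Jacobi iteration and checked against a direct (LU) solve. This is a minimal stand-in, not BACCHUS's actual routines; the matrix below is an arbitrary 1D diffusion-like example:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration x_{k+1} = D^{-1} (b - (A - D) x_k.

    Converges for strictly diagonally dominant matrices, such as those
    arising from discretized conservation equations.
    """
    D = np.diag(A)
    R = A - np.diagflat(D)  # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Tridiagonal, diagonally dominant system (1D diffusion-like stencil)
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
x_it = jacobi(A, b)
x_direct = np.linalg.solve(A, b)  # direct solution for comparison
print(np.max(np.abs(x_it - x_direct)))
```

In production codes the matrix would be stored in a sparse format and a faster iteration (Gauss-Seidel, SOR, or a Krylov method) would normally be preferred; Jacobi is shown only because it is the simplest member of the family.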
Finite element method solution of simplified P3 equation for flexible geometry handling
International Nuclear Information System (INIS)
Ryu, Eun Hyun; Joo, Han Gyu
2011-01-01
In order to efficiently obtain core flux solutions which would be much closer to the transport solution than the diffusion solution is, without being limited by the geometry of the core, the simplified P3 (SP3) equation is solved with the finite element method (FEM). A generic mesh generator, GMSH, is used to generate linear and quadratic mesh data. The linear system resulting from the SP3 FEM discretization is solved by Krylov subspace methods (KSM). A symmetric form of the SP3 equation is derived to apply the conjugate gradient method rather than the KSMs for nonsymmetric linear systems. An optional iso-parametric quadratic mapping scheme, which selectively models nonlinear shapes with a quadratic mapping to prevent significant mismatch in local domain volume, is also implemented for efficient handling of arbitrary geometry. The gain in accuracy attainable by the SP3 solution over the diffusion solution is assessed by solving numerous benchmark problems having various core geometries, including the IAEA PWR problems involving rectangular fuels and the Takeda fast reactor problems involving hexagonal fuels. The reference transport solution is produced by the McCARD Monte Carlo code, and the multiplication factor and power distribution errors are assessed. In addition, the effect of quadratic mapping is examined for circular cell problems. It is shown that significant accuracy gain is possible with the SP3 solution for the fast reactor problems, whereas only marginal improvement is noted for thermal reactor problems. The quadratic mapping is also quite effective in handling geometries with curvature. (author)
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost in detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.
ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION
Directory of Open Access Journals (Sweden)
Daniel Arana
Full Text Available Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids, from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of geoid undulations and consequently the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of performing a comparison between the interpolation of the MAPGEO2015 program and three interpolation methods: bilinear, cubic spline and Radial Basis Function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
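Bilinear interpolation, the simplest of the three methods compared above, can be sketched for a generic regular grid. The grid values, origin, and spacing below are hypothetical placeholders, not the MAPGEO2015 grid:

```python
def bilinear(x, y, grid, x0, y0, dx, dy):
    """Bilinearly interpolate a regular grid at point (x, y).

    grid[j][i] holds the value at (x0 + i*dx, y0 + j*dy); the query point is
    assumed to lie inside the grid (only the upper edge is clamped)."""
    i = min(int((x - x0) // dx), len(grid[0]) - 2)
    j = min(int((y - y0) // dy), len(grid) - 2)
    tx = (x - (x0 + i * dx)) / dx             # fractional position in the cell
    ty = (y - (y0 + j * dy)) / dy
    return (grid[j][i] * (1 - tx) * (1 - ty)
            + grid[j][i + 1] * tx * (1 - ty)
            + grid[j + 1][i] * (1 - tx) * ty
            + grid[j + 1][i + 1] * tx * ty)
```

Because the interpolant is linear along each cell edge, it cannot follow the curvature of the geoid inside a cell, which is one source of the centimetre-level interpolation error quantified above.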
Evaluation of roundness error using a new method based on a small displacement screw
International Nuclear Information System (INIS)
Nouira, Hichem; Bourdet, Pierre
2014-01-01
In relation to industrial need and the progress of technology, LNE would like to improve the measurement of its primary pressure, spherical and flick standards. The spherical and flick standards are respectively used to calibrate the spindle motion error and the probe which equips commercial conventional cylindricity measuring machines. The primary pressure standards are obtained using pressure balances equipped with rotary pistons, with an uncertainty of 5 nm for a piston diameter of 10 mm. Conventional machines are not able to reach such an uncertainty level, which is why the development of a new machine is necessary. To ensure such a level of uncertainty, the stability and performance of the machine alone are not sufficient; the data processing must also be done with an accuracy of better than a nanometre. In this paper, a new method based on the small displacement screw (SDS) model is proposed. A first validation of this method is carried out on a theoretical dataset published by the European Community Bureau of Reference (BCR) in report no 3327. Then, an experiment is prepared in order to validate the new method on real datasets. Specific environmental conditions are taken into account and many precautions are considered. The new method is applied to analyse the least-squares circle, minimum zone circle, maximum inscribed circle and minimum circumscribed circle. The results are compared to those obtained by the reference Chebyshev best-fit method and reveal perfect agreement. The sensitivities of the SDS and Chebyshev methodologies are investigated, and it is revealed that the results remain unchanged when the value of the diameter exceeds 700 times the form error. (paper)
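The least-squares circle mentioned above can be illustrated with the classic Kåsa linearization, which reduces the fit to a 3x3 linear system by writing x² + y² = 2ax + 2by + c. This is a generic sketch, not the SDS method of the paper:

```python
def lsq_circle(pts):
    """Kasa least-squares circle fit: minimize sum (x^2+y^2 - 2ax - 2by - c)^2.

    Returns the fitted centre (a, b), radius R, and the radial peak-to-valley
    (a simple roundness-error estimate, max radius minus min radius)."""
    # Build the 3x3 normal equations M [a, b, c]^T = v
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in pts:
        row = [2 * x, 2 * y, 1.0]
        z = x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 3):
                M[r][c] -= f * M[k][c]
            v[r] -= f * v[k]
    sol = [0.0] * 3
    for k in (2, 1, 0):                        # back substitution
        sol[k] = (v[k] - sum(M[k][j] * sol[j] for j in range(k + 1, 3))) / M[k][k]
    a, b, c = sol
    R = (c + a * a + b * b) ** 0.5
    radii = [((x - a) ** 2 + (y - b) ** 2) ** 0.5 for x, y in pts]
    return (a, b), R, max(radii) - min(radii)
```

The minimum zone, maximum inscribed, and minimum circumscribed references discussed in the paper are min-max rather than least-squares problems, and need the Chebyshev-type machinery the abstract compares against.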
International Nuclear Information System (INIS)
Alomari, A. K.; Noorani, M. S. M.; Nazar, R.
2008-01-01
We employ the homotopy analysis method (HAM) to obtain approximate analytical solutions to the heat-like and wave-like equations. The HAM contains the auxiliary parameter ħ, which provides a convenient way of controlling the convergence region of series solutions. The analysis is accompanied by several linear and nonlinear heat-like and wave-like equations with initial boundary value problems. The results obtained show that the HAM is very effective and simple, with less error than the Adomian decomposition method and the variational iteration method.
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported. Effective ways are lacking to encourage nurses to actively report errors. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical-surgical wards of teaching hospitals, during day shifts, committed by nurses with fewer than two years of experience. Leading errors were wrong drugs and doses, each accounting for about one-third of total errors. Among 259 actual errors, 83.8% resulted in no adverse effects; among the remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and by the nurses responsible for the errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69), and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; using intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using empirical data, we identified high-alert situations. Strategies for reducing drug administration errors by nurses are suggested. Survey results suggest that nurses should double-check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non
A new method for the solution of the Schroedinger equation
International Nuclear Information System (INIS)
Amore, Paolo; Aranda, Alfredo; De Pace, Arturo
2004-01-01
We present a new method for the solution of the Schroedinger equation applicable to problems of a non-perturbative nature. The method works by identifying three different scales in the problem, which are then treated independently: an asymptotic scale, which depends uniquely on the form of the potential at large distances; an intermediate scale, still characterized by an exponential decay of the wavefunction; and, finally, a short-distance scale, in which the wavefunction is sizable. The notion of optimized perturbation is then used in the last two regimes. We apply the method to the quantum anharmonic oscillator and find it suitable to treat both energy eigenvalues and wavefunctions, even for strong couplings.
A channel-by-channel method of reducing the errors associated with peak area integration
International Nuclear Information System (INIS)
Luedeke, T.P.; Tripard, G.E.
1996-01-01
A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
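The channel-by-channel idea described above amounts to inverse-variance weighting of per-channel area estimates. A minimal sketch, assuming Poisson counting statistics and hypothetical per-channel peak-shape fractions (the paper's exact weighting scheme may differ):

```python
def weighted_area(counts, fractions):
    """Combine per-channel peak-area estimates by inverse-variance weighting.

    counts[i]    : background-subtracted counts in channel i
    fractions[i] : assumed fraction of the total peak area in channel i
    Each channel alone estimates the area as counts[i]/fractions[i]; assuming
    Poisson statistics, that estimate's variance is about counts[i]/fractions[i]**2."""
    estimates, weights = [], []
    for n, f in zip(counts, fractions):
        est = n / f
        var = max(n, 1.0) / f ** 2            # Poisson variance, floored at 1 count
        estimates.append(est)
        weights.append(1.0 / var)
    wsum = sum(weights)
    area = sum(w * e for w, e in zip(weights, estimates)) / wsum
    sigma = (1.0 / wsum) ** 0.5               # uncertainty of the combined estimate
    return area, sigma
```

The weighting naturally down-weights low-count tail channels, which is where the spread reduction reported above for small peaks on large backgrounds comes from.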
Improved parallel solution techniques for the integral transport matrix method
Energy Technology Data Exchange (ETDEWEB)
Zerr, R. Joseph, E-mail: rjz116@psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, University Park, PA (United States); Azmy, Yousry Y., E-mail: yyazmy@ncsu.edu [Department of Nuclear Engineering, North Carolina State University, Burlington Engineering Laboratories, Raleigh, NC (United States)
2011-07-01
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver, but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
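The red-black ordering used by the PGS algorithm can be illustrated on a serial 2-D Poisson problem: grid points are colored like a checkerboard, and all points of one color are updated before the other, which is what makes each half-sweep parallelizable. A minimal serial sketch (a generic Poisson smoother, not the ITMM operators themselves):

```python
def red_black_gs(u, f, h, sweeps):
    """Red-black Gauss-Seidel sweeps for the 2-D Poisson equation -lap(u) = f
    on a uniform grid with spacing h and fixed (Dirichlet) boundary values in u."""
    n = len(u)
    for _ in range(sweeps):
        for color in (0, 1):                  # red points first, then black
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    if (i + j) % 2 == color:
                        u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                          + u[i][j - 1] + u[i][j + 1]
                                          + h * h * f[i][j])
    return u
```

Within one color, every update reads only values of the other color, so the points of each color set could be updated simultaneously, exactly the property the parallel red-black scheme above exploits at the subdomain level.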
Improved parallel solution techniques for the integral transport matrix method
International Nuclear Information System (INIS)
Zerr, R. Joseph; Azmy, Yousry Y.
2011-01-01
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to 10× when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver, but could be improved by an as yet undeveloped, more efficient preconditioner. (author)
International Nuclear Information System (INIS)
Moguilnaya, T.; Suminov, Y.; Botikov, A.; Ignatov, S.; Kononenko, A.; Agibalov, A.
2017-01-01
We developed a new automatic method that combines forced luminescence and stimulated Brillouin scattering. This method is used for monitoring pathogens, genetically modified products and nanostructured materials in colloidal solution. We carried out a statistical spectral analysis of pathogens, genetically modified soy and silver nanoparticles in water from different regions in order to determine the statistical errors of the method. We studied the spectral characteristics of these objects in water to perform the initial identification with 95% probability. These results were used to create a model of a device for monitoring pathogenic organisms and a working model of a device for detecting genetically modified soy in meat.
Removal of round off errors in the matrix exponential method for solving the heavy nuclide chain
International Nuclear Information System (INIS)
Lee, Hyun Chul; Noh, Jae Man; Joo, Hyung Kook
2005-01-01
Many nodal codes for core simulation adopt the micro-depletion procedure for the depletion analysis. Unlike the macro-depletion procedure, the micro-depletion procedure uses micro cross sections and number densities of important nuclides to generate the macro cross section of a spatial calculational node. Therefore, it needs to solve the chain equations of the nuclides of interest to obtain their number densities. There are several methods, such as the matrix exponential method (MEM) and the chain linearization method (CLM), for solving the nuclide chain equations. The former solves the chain equations exactly even when cycles that come from alpha decay exist in the chain, while the latter solves the chain approximately when cycles exist. The former has another advantage over the latter. Many nodal codes for depletion analysis, such as MASTER, solve only hard-coded nuclide chains with the CLM. Therefore, if we want to extend the chain by adding some more nuclides to it, we have to modify the source code. In contrast, we can extend the chain just by modifying the input in the MEM, because it is easy to implement an MEM solver for an arbitrary nuclide chain. In spite of these advantages of the MEM, many nodal codes adopt the chain linearization because the MEM has a large round-off error when the flux level is very high or when short-lived or strong-absorber nuclides exist in the chain. In this paper, we propose a new technique to remove the round-off errors in the MEM, and we compare the performance of the two methods.
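The MEM solves the chain equations dN/dt = A N as N(t) = exp(At) N(0). A minimal sketch for a two-nuclide decay chain, using a plain Taylor series for the matrix exponential; the naive summation shown here is exactly where the round-off problems discussed above arise once At acquires large entries of mixed sign:

```python
def expm(A, t, terms=60):
    """exp(A t) via a plain Taylor series (adequate only for small, well-scaled
    matrices; production codes use scaling-and-squaring or similar)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * (A t / k), accumulating A^k t^k / k!
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Two-nuclide chain N1 -> N2 with illustrative decay constants:
# d/dt [N1, N2] = A [N1, N2]
lam1, lam2 = 0.3, 0.1
A = [[-lam1, 0.0],
     [lam1, -lam2]]
E = expm(A, 2.0)
N0 = [1.0, 0.0]
N = [sum(E[i][j] * N0[j] for j in range(2)) for i in range(2)]
```

For this small chain the result matches the analytical Bateman solution; with a strong absorber in the chain, A t contains huge positive and negative terms whose partial sums cancel catastrophically, which is the round-off behavior the paper's technique removes.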
Hoel, Hakon
2016-06-13
A formal mean square error expansion (MSE) is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate б(TOL log(TOL)) that is achieved when the cost of sample generation is б(1).
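The underlying Euler-Maruyama scheme advances X by mu(X) dt + sigma(X) dW per step, with dW drawn from N(0, dt). A minimal uniform-step sketch (the paper's contribution is the adaptive time-stepping and its MLMC embedding, neither of which is shown here):

```python
import math
import random


def euler_maruyama(mu, sigma, x0, T, n, rng):
    """One Euler-Maruyama path for dX = mu(X) dt + sigma(X) dW on [0, T],
    using n uniform steps and the given random generator. Returns X_T."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))    # Brownian increment ~ N(0, dt)
        x = x + mu(x) * dt + sigma(x) * dw
    return x
```

Averaging many such paths gives a weak (mean) approximation; for geometric Brownian motion with drift r, the sample mean of X_T should approach x0 * exp(r * T), which is a convenient sanity check.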
Analysis of S-box in Image Encryption Using Root Mean Square Error Method
Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan
2012-07-01
The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analysis already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
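The root mean square error between an original and a processed image is a one-line computation over pixel values; a minimal sketch on flat pixel lists:

```python
def rmse(original, processed):
    """Root mean square error between two equal-sized pixel sequences."""
    if len(original) != len(processed):
        raise ValueError("images must have the same number of pixels")
    return (sum((a - b) ** 2 for a, b in zip(original, processed))
            / len(original)) ** 0.5
```

In the S-box comparison above, a larger RMSE between the plain image and its encrypted counterpart indicates a stronger scrambling of pixel values by that S-box.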
Pediatric Nurses' Perceptions of Medication Safety and Medication Error: A Mixed Methods Study.
Alomari, Albara; Wilson, Val; Solman, Annette; Bajorek, Beata; Tinsley, Patricia
2017-05-30
This study aims to outline the current workplace culture of medication practice in a pediatric medical ward. The objective is to explore the perceptions of nurses in a pediatric clinical setting as to why medication administration errors occur. As nurses have a central role in the medication process, it is essential to explore nurses' perceptions of the factors influencing the medication process. Without this understanding, it is difficult to develop effective prevention strategies aimed at reducing medication administration errors. Previous studies were limited to exploring a single, specific aspect of medication safety, and the methods used were limited to survey designs, which may lead to incomplete or inadequate information being provided. This study is phase 1 of an action research project. Data collection included direct observation of nurses during medication preparation and administration, an audit based on the medication policy and guidelines, and focus groups with nursing staff. A thematic analysis was undertaken by each author independently to analyze the observation notes and focus group transcripts. Simple descriptive statistics were used to analyze the audit data. The study was conducted in a specialized pediatric medical ward. Four key themes were identified from the combined quantitative and qualitative data: (1) understanding medication errors, (2) the busy-ness of nurses, (3) the physical environment, and (4) compliance with medication policy and practice guidelines. Workload, frequent interruptions to process, poor physical environment design, lack of preparation space, and impractical medication policies are identified as barriers to safe medication practice. Overcoming these barriers requires organizations to review medication process policies and engage nurses more in medication safety research and in designing clinical guidelines for their own practice.
Hybrid Fundamental Solution Based Finite Element Method: Theory and Applications
Directory of Open Access Journals (Sweden)
Changyong Cao
2015-01-01
Full Text Available An overview of the development of the hybrid fundamental solution based finite element method (HFS-FEM) and its application in engineering problems is presented in this paper. The framework and formulations of HFS-FEM for the potential problem, plane elasticity, three-dimensional elasticity, thermoelasticity, anisotropic elasticity, and plane piezoelectricity are presented. In this method, two independent assumed fields (intraelement field and auxiliary frame field) are employed. The formulations for all cases are derived from the modified variational functionals and the fundamental solutions to a given problem. Generation of elemental stiffness equations from the modified variational principle is also described. Typical numerical examples are given to demonstrate the validity and performance of the HFS-FEM. Finally, a brief summary of the approach is provided and future trends in this field are identified.
Solution of the isotopic depletion equation using decomposition method and analytical solution
Energy Technology Data Exchange (ETDEWEB)
Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: fprata@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear
2011-07-01
In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing the feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, describing the spatial distribution of neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within simulation systems of a nuclear reactor core, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
Solution of the isotopic depletion equation using decomposition method and analytical solution
International Nuclear Information System (INIS)
Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S.
2011-01-01
In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing the feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, describing the spatial distribution of neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within simulation systems of a nuclear reactor core, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
Linear facility location in three dimensions - Models and solution methods
DEFF Research Database (Denmark)
Brimberg, Jack; Juel, Henrik; Schöbel, Anita
2002-01-01
We consider the problem of locating a line or a line segment in three-dimensional space, such that the sum of distances from the facility represented by the line (segment) to a given set of points is minimized. An example is planning the drilling of a mine shaft, with access to ore deposits through horizontal tunnels connecting the deposits and the shaft. Various models of the problem are developed and analyzed, and efficient solution methods are given.
Solution Methods for the Periodic Petrol Station Replenishment Problem
C Triki
2013-01-01
In this paper we introduce the Periodic Petrol Station Replenishment Problem (PPSRP) over a T-day planning horizon and describe four heuristic methods for its solution. Even though all the proposed heuristics belong to the common partitioning-then-routing paradigm, they differ in how they assign the stations to each day of the horizon. The resulting daily routing problems are then solved exactly to optimality. Moreover, an improvement procedure is also developed with the aim of ens...
Method of solution mining subsurface orebodies to reduce restoration activities
Energy Technology Data Exchange (ETDEWEB)
Hartman, G.J.
1984-01-24
A method of solution mining is claimed wherein a lixiviant containing both leaching and oxidizing agents is injected into the subsurface orebody. The composition of the lixiviant is changed by reducing the level of oxidizing agent to zero so that soluble species continue to be removed from the subsurface environment. This reduces the uranium level of the ground water aquifer after termination of the lixiviant injection.
Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J
2006-01-01
The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.
International Nuclear Information System (INIS)
Esmaeilzadeh, Hamid; Arzi, Ezatollah; Légaré, François; Hassani, Alireza
2013-01-01
In this paper, using the boundary integral method (BIM), we simulate the effect of temperature fluctuation on the sensitivity of microstructured optical fibre (MOF) surface plasmon resonance (SPR) sensors. The final results indicate that, as the temperature increases, the refractometric sensitivity of our sensor decreases from 1300 nm/RIU at 0 °C to 1200 nm/RIU at 50 °C, corresponding to a ∼7.7% sensitivity reduction and a sensitivity temperature error of 0.15% °C⁻¹ for this case. These results can be used for biosensing temperature-error adjustment in MOF SPR sensors, since biomaterial detection usually happens in this temperature range. Moreover, the signal-to-noise ratio (SNR) of our sensor decreases from 0.265 at 0 °C to 0.154 at 100 °C, with an average reduction rate of ∼0.42% °C⁻¹. The results suggest that at lower temperatures the sensor has a higher SNR. (paper)
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within-subject standard deviation.…
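The repeatability described above (2.77 times the within-subject standard deviation, i.e. 1.96 × √2 × s_w) can be computed directly from duplicate measurements; a minimal sketch assuming exactly two measurements per subject:

```python
def repeatability(pairs):
    """Repeatability coefficient from duplicate measurements on each subject.

    With two measurements per subject, the within-subject variance is the mean
    of d^2 / 2 over subjects (d = within-pair difference), and the repeatability
    is 2.77 * within-subject SD: the expected bound on the absolute difference
    between two measurements on the same subject, 95% of the time."""
    n = len(pairs)
    s2_within = sum((a - b) ** 2 / 2.0 for a, b in pairs) / n
    return 2.77 * s2_within ** 0.5
```

For example, if every subject's two readings differ by 2 units, the within-subject SD is √2 and the repeatability is 2.77 × √2 ≈ 3.92 units.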
International Nuclear Information System (INIS)
Gonzalez Cuesta, M.; Okrent, D.
1985-01-01
This paper proposes a methodology for quantification of risk due to seismic-related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and industry common practice. Also, the actual reduction in the safety margins caused by the error will be called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated as a specific instance, called an originating error. As originating errors may occur in actions to be applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies, requesting their disposition. However, the quality assurance program is not perfect and some operating plant deficiencies may persist, causing different levels of impact to the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered.
Computer methods in physics 250 problems with guided solutions
Landau, Rubin H
2018-01-01
Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It is also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and an indication of the computational and physics difficulty level for each problem.
International Nuclear Information System (INIS)
Erdtmann, G.
1993-08-01
A sufficiently accurate characterization of the neutron flux and spectrum, i.e. the determination of the thermal flux, the flux ratio and the epithermal flux spectrum shape factor, α, is a prerequisite for all types of absolute and monostandard methods of reactor neutron activation analysis. A convenient method for these measurements is the bare triple-monitor method. However, the results of this method are very imprecise, because there are high error propagation factors from the counting errors of the monitor activities. Procedures are described to calculate the errors of the flux parameters, the α-dependent cross-section ratios, and of the analytical results from the errors of the activities of the monitor isotopes. They are included in FORTRAN programs which also allow a graphical representation of the results. A great number of examples were calculated for ten different irradiation facilities in four reactors and for 28 elements. Plots of the results are presented and discussed. (orig./HP) [de]
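The error propagation factors mentioned above arise from the standard propagation of counting errors through ratios of monitor activities; a minimal generic sketch for a single ratio (not the full triple-monitor formulas of the paper):

```python
def ratio_with_error(a1, s1, a2, s2):
    """Propagate independent counting errors through a ratio r = a1 / a2.

    For independent uncertainties, the relative variances add:
    (s_r / r)^2 = (s1 / a1)^2 + (s2 / a2)^2."""
    r = a1 / a2
    s_r = abs(r) * ((s1 / a1) ** 2 + (s2 / a2) ** 2) ** 0.5
    return r, s_r
```

When such ratios are combined further downstream (as in deriving the flux ratio and the shape factor α from several monitor activities), the relative errors multiply up, which is why modest counting errors translate into large uncertainties in the flux parameters.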
A general solution strategy of modified power method for higher mode solutions
International Nuclear Information System (INIS)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung
2016-01-01
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and higher eigenmodes up to 4th order are reported for the first time in this paper. -- Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All-mode-based population control is applied to get the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
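For a small deterministic matrix, the idea of extracting a higher mode on top of a power iteration can be illustrated with Hotelling deflation. This is a simplified analogue, not the Monte Carlo transfer-matrix scheme of the paper, and it assumes a symmetric matrix with a nonzero, positive dominant eigenvalue:

```python
def power_iteration(A, iters=500):
    """Dominant eigenpair of a square matrix by plain power iteration
    (assumes a nonzero, positive dominant eigenvalue)."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)          # normalization factor -> eigenvalue
        v = [x / lam for x in w]
    return lam, v


def second_mode(A, lam1, v1, iters=500):
    """Second eigenmode via Hotelling deflation (symmetric A assumed):
    power-iterate on B = A - lam1 * v1 v1^T / (v1^T v1), which removes the
    dominant mode so the next mode becomes dominant."""
    n = len(A)
    norm2 = sum(x * x for x in v1)
    B = [[A[i][j] - lam1 * v1[i] * v1[j] / norm2 for j in range(n)]
         for i in range(n)]
    return power_iteration(B, iters)
```

The modified power method in the abstract achieves something analogous inside a stochastic simulation, where fission-source weights play the role of the vector entries and weight cancellation stands in for the exact subtraction used here.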
Do natural methods for fertility regulation increase the risks of genetic errors?
Serra, A
1981-09-01
Genetic errors of many kinds are connected with the reproductive processes and are favored by a number of largely uncontrollable, endogenous and/or exogenous factors. For a long time human beings have taken into their own hands the control of this process. The regulation of fertility is clearly a forceful request to any family and to any community, were it only to lower the level of the consequences of genetic errors. In connection with this request, and in the context of the Congress for the Family of Africa and Europe (Catholic University, January 1981), one question must still be raised and possibly answered: do or can the so-called "natural methods" for the regulation of fertility increase the risks of genetic errors, with their generally dramatic effects on families and on communities? It is important to try to give as far as possible a scientifically based answer to this question. Fr. Haring, a moral theologian, citing scientific evidence, finds it shocking that the rhythm method, so strongly and recently endorsed again by Church authorities, should be classified among the means of "birth control" by way of spontaneous abortion, or at least by spontaneous loss of a large number of zygotes which, due to the concrete application of the rhythm method, lack the necessary vitality for survival. He goes on to state that scientific research provides overwhelming evidence that the rhythm method in its traditional form is responsible for a disproportionate waste of zygotes, a disproportionate frequency of spontaneous abortions, and defective children. Professor Hilgers, a reproductive physiologist, takes the opposite view, maintaining that the hypotheses are arbitrary and the alarm false. The strongest evidence upon which Fr. Haring bases his moral principles about the use of the natural methods of fertility regulation is a paper by Guerrero and Rojas (1975). These authors examined, retrospectively, the success of 965 pregnancies which occurred in
Valuing urban open space using the travel-cost method and the implications of measurement error.
Hanauer, Merlin M; Reid, John
2017-08-01
Urbanization has placed pressure on open space within and adjacent to cities. In recent decades, a greater awareness has developed of the fact that individuals derive multiple benefits from urban open space. Given the location, there is often a high opportunity cost to preserving urban open space; it is therefore important for both public and private stakeholders to justify such investments. The goals of this study are twofold. First, we use detailed surveys and precise, accessible mapping methods to demonstrate how travel-cost methods can be applied to the valuation of urban open space. Second, we assess the degree to which typical methods of estimating travel times, and thus travel costs, introduce bias into the estimates of welfare. The site we study is Taylor Mountain Regional Park, an 1100-acre space located immediately adjacent to Santa Rosa, California, which is the largest city (∼170,000 population) in Sonoma County and lies 50 miles north of San Francisco. We estimate that the average per-trip access value (consumer surplus) is $13.70. We also demonstrate that typical methods of measuring travel costs significantly understate these welfare measures. Our study provides policy-relevant results and highlights the sensitivity of urban open space travel-cost studies to bias stemming from travel-cost measurement error. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lower and Upper Solutions Method for Positive Solutions of Fractional Boundary Value Problems
Directory of Open Access Journals (Sweden)
R. Darzi
2013-01-01
Full Text Available We apply the lower and upper solutions method and fixed-point theorems to prove the existence of a positive solution to the fractional boundary value problem D_{0+}^α u(t) + f(t, u(t)) = 0, 0 < t
Directory of Open Access Journals (Sweden)
Kaspar Küng
2013-01-01
Full Text Available The purpose of this study was (1) to determine the frequency and type of medication errors (MEs), (2) to assess the number of MEs prevented by registered nurses, (3) to assess the consequences of MEs for patients, and (4) to compare the number of MEs reported by a newly developed medication error self-reporting tool to the number reported by the traditional incident reporting system. We conducted a cross-sectional study on MEs in the Cardiovascular Surgery Department of Bern University Hospital in Switzerland. Eligible registered nurses involved in the medication process were included. Data on MEs were collected using an investigator-developed medication error self-reporting tool (MESRT) that asked about the occurrence and characteristics of MEs. Registered nurses were instructed to complete a MESRT at the end of each shift even if there was no ME. All MESRTs were completed anonymously. During the one-month study period, a total of 987 MESRTs were returned. Of the 987 completed MESRTs, 288 (29%) indicated that there had been an ME. Registered nurses reported preventing 49 (5%) MEs. Overall, eight (2.8%) MEs had patient consequences. The high response rate suggests that this new method may be a very effective approach to detect, report, and describe MEs in hospitals.
Directory of Open Access Journals (Sweden)
Cai Ligang
2017-01-01
Full Text Available Instead of blindly improving machine tool accuracy by raising the precision of key components in the production process, a method combining an SNR quality loss function with correlation analysis of machine tool geometric errors is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it possible to relax tolerance ranges, thereby reducing the manufacturing cost of machine tools.
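The first modeling step, composing homogeneous transformation matrices (HTMs) so that small geometric errors of an axis propagate to a tool-tip deviation, can be sketched as follows. The axis move, error values and tool-tip offset below are invented for illustration and do not come from the paper:

```python
import numpy as np

def translation(d):
    T = np.eye(4)
    T[:3, 3] = d
    return T

def small_rotation(ex, ey, ez):
    """Small-angle (first-order) rotation HTM from angular errors in rad."""
    T = np.eye(4)
    T[:3, :3] = [[1.0, -ez,  ey],
                 [ ez, 1.0, -ex],
                 [-ey,  ex, 1.0]]
    return T

# ideal X-axis move of 100 mm, plus hypothetical geometric errors of that axis
ideal  = translation([100.0, 0.0, 0.0])
errors = small_rotation(1e-5, 2e-5, 1.5e-5) @ translation([3e-3, 1e-3, 2e-3])
actual = ideal @ errors

tip = np.array([0.0, 0.0, 50.0, 1.0])            # tool tip in the axis frame [mm]
deviation = (actual @ tip - ideal @ tip)[:3]     # volumetric error at the tip
```

A full five-axis model chains one such error HTM per axis along the kinematic loop; the deviation vector is what the cost and optimization functions then act on.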
The treatment of commission errors in first generation human reliability analysis methods
Energy Technology Data Exchange (ETDEWEB)
Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear
2011-07-01
Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission are related to the omission of a human action that should have been performed but does not occur. Errors of commission are related to human actions that should not be performed but in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that enter an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will induce or facilitate certain unsafe actions of the operator depending on the behavior of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP error quantification tables. (author)
Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.
Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae
2016-01-01
Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported; however, none has investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduces the cutting error. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using a navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice significantly reduced the cutting error in the coronal plane. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Ma Songhua; Fang Jianping; Zheng Chunlong
2009-01-01
By means of an extended mapping method and a variable separation method, a series of solitary wave solutions, periodic wave solutions and variable separation solutions to the (2 + 1)-dimensional breaking soliton system is derived.
An Analytical Method of Auxiliary Sources Solution for Plane Wave Scattering by Impedance Cylinders
DEFF Research Database (Denmark)
Larsen, Niels Vesterdal; Breinbjerg, Olav
2004-01-01
Analytical Method of Auxiliary Sources solutions for plane wave scattering by circular impedance cylinders are derived by transformation of the exact eigenfunction series solutions employing the Hankel function wave transformation. The analytical Method of Auxiliary Sources solution thus obtained...
Identification of Error of Commissions in the LOCA Using the CESA Method
Energy Technology Data Exchange (ETDEWEB)
Tukhbyet-olla, Myeruyert; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)
2015-10-15
Errors of commission (EOCs) can be defined as the performance of any inappropriate action that aggravates the situation. The primary focus in current PSA is placed on those sequences of hardware failures and/or EOOs that lead to unsafe system states. Although EOCs can be treated when identified, a systematic and comprehensive treatment of EOC opportunities remains outside the scope of PSAs. However, past experience in the nuclear industry shows that EOCs have contributed to severe accidents. Some recent and emerging human reliability analysis (HRA) methods suggest approaches to identify and quantify EOCs, such as ATHEANA, MERMOS, GRS, MDTA, and CESA. The CESA method, developed by the Risk and Human Reliability Group at the Paul Scherrer Institute, is intended to identify potentially risk-significant EOCs, given an existing PSA. The main idea underlying the method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. This paper aims at identifying EOCs in the LOCA by using the CESA method. The study is focused on the identification of EOCs, while their quantification is out of scope. The CESA method is applied to the emergency operating procedure (EOP) of the LOCA for APR1400, and potential EOCs that may aggravate the mitigation of the LOCA are presented. This study identified the EOC events for APR1400 in the LOCA using the CESA method. Three candidate EOC events were identified using the operator action catalog and the RAW cutsets of the LOCA: inappropriate termination of the safety injection system, of the safety injection tanks, and of the containment spray system. After reviewing the top 100 accident sequences of the PSA, the study finally identified one EOC scenario and EOC path, namely, inappropriate termination of the safety injection system.
On the economical solution method for a system of linear algebraic equations
Directory of Open Access Journals (Sweden)
Jan Awrejcewicz
2004-01-01
Full Text Available The present work proposes a novel optimal and exact method of solving large systems of linear algebraic equations. In the approach under consideration, the solution of a system of linear algebraic equations is found as the point of intersection of hyperplanes, which needs a minimal amount of computer operating storage. Two examples are given. In the first example, the boundary value problem for a three-dimensional stationary heat transfer equation in a parallelepiped in ℝ3 is considered, where boundary value problems of first, second, or third order, or their combinations, are taken into account. The governing differential equations are reduced to algebraic ones with the help of the finite element and boundary element methods for different meshes applied. The obtained results are compared with known analytical solutions. The second example concerns computation of a nonhomogeneous shallow physically and geometrically nonlinear shell subject to a transversal uniformly distributed load. The partial differential equations are reduced to a system of nonlinear algebraic equations with an error of O(h_{x1}^2 + h_{x2}^2). The linearization process is realized through either the Newton method or differentiation with respect to a parameter. In consequence, the relations of the boundary condition variations along the shell side and the conditions for the solution matching are reported.
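A minimal illustration of finding the solution as the intersection point of hyperplanes, with only one row in working storage at a time, is the classical Kaczmarz row-projection scheme, shown here as a related sketch rather than the authors' exact algorithm:

```python
import numpy as np

def kaczmarz(A, b, sweeps=500):
    """Solve A x = b as the intersection point of the hyperplanes
    a_i . x = b_i by cyclic orthogonal projection onto each hyperplane;
    only one row is touched at a time (minimal operating storage)."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i   # project onto hyperplane i
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)          # exact intersection point is (1, 1)
```

Each update is an orthogonal projection onto one hyperplane, so the iterate walks toward the common intersection point without ever forming or factoring the full matrix.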
Shirley, Natalie R; Ramirez Montes, Paula Andrea
2015-01-01
The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess observer agreement between two observers and to evaluate the efficacy of each of these statistics for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility for more objective scoring than a phase system as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. © 2014 American Academy of Forensic Sciences.
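The linear weighted kappa preferred above can be computed directly from the two raters' ordinal scores. A small self-contained sketch (the example scores are hypothetical, not the study's data):

```python
import numpy as np

def linear_weighted_kappa(r1, r2, n_cat):
    """Linear weighted Cohen's kappa for two raters scoring ordinal
    categories 0 .. n_cat-1 (weights 1 - |i - j| / (n_cat - 1))."""
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        O[a, b] += 1.0
    O /= O.sum()                                   # observed proportions
    idx = np.arange(n_cat)
    w = 1.0 - np.abs(np.subtract.outer(idx, idx)) / (n_cat - 1)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))     # chance-expected proportions
    return (np.sum(w * O) - np.sum(w * E)) / (1.0 - np.sum(w * E))

# hypothetical phase scores from two observers
kappa = linear_weighted_kappa([0, 1, 2, 1, 0, 2, 1, 1],
                              [0, 1, 2, 0, 0, 2, 1, 2], 3)
```

Unlike the unweighted kappa, near-miss disagreements on adjacent phases are only partially penalized, which is why the linear weighting suits ordinal aging stages.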
Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors
International Nuclear Information System (INIS)
Gordon, J J; Siebers, J V
2007-01-01
The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σ_P, where σ_P = 0.32 cm is the standard deviation of the normal dose penumbra. (The qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σ_P takes values other than 0.32 cm.) When σ ≪ σ_P, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σ_P, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin
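The margin-determination idea, simulating treatment courses with normally distributed systematic (Σ) and random (σ) setup errors and checking a coverage criterion, can be sketched with a 1D geometric toy. The paper's actual criterion is on CTV minimum dose of a spherical target; all numbers below are hypothetical:

```python
import numpy as np

def coverage_probability(margin, Sigma, sigma, n_fractions,
                         n_courses=20000, seed=1):
    """Fraction of simulated courses whose mean CTV displacement stays
    within the margin. 1D geometric toy; the paper's criterion is on the
    CTV minimum dose of a spherical target, not pure geometry."""
    rng = np.random.default_rng(seed)
    systematic = rng.normal(0.0, Sigma, n_courses)            # one per course
    random_err = rng.normal(0.0, sigma, (n_courses, n_fractions))
    course_shift = np.abs(systematic + random_err.mean(axis=1))
    return float(np.mean(course_shift <= margin))

# hypothetical setup errors in cm: Sigma = 0.3 systematic, sigma = 0.3 random
p30 = coverage_probability(margin=0.75, Sigma=0.3, sigma=0.3, n_fractions=30)
p05 = coverage_probability(margin=0.75, Sigma=0.3, sigma=0.3, n_fractions=5)
# residual random error shrinks as 1/sqrt(N), so coverage improves with N
```

The N-dependence of the residual random component is exactly why small-N courses are the regime where margin formulas derived in the large-N limit can under-cover.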
Li, Xingxing
2014-05-01
displacements is accompanied by a drift due to potential uncompensated errors. Li et al. (2013) presented a temporal point positioning (TPP) method to quickly capture coseismic displacements with a single GPS receiver in real time. The TPP approach can overcome the convergence problem of precise point positioning (PPP), and also avoids the integration and de-trending process of the variometric approach. The performance of TPP is demonstrated to reach displacement accuracy of a few centimeters even over a twenty-minute interval with real-time precise orbit and clock products. In this study, we first present and compare the observation models and processing strategies of the existing single-receiver methods for real-time GPS seismology. Furthermore, we propose several refinements to the variometric approach in order to eliminate the drift trend in the integrated coseismic displacements. The mathematical relationship between these methods is discussed in detail and their equivalence is also proved. The impact of error components such as satellite ephemeris, ionospheric delay, tropospheric delay, and geometry change on the retrieved displacements is carefully analyzed and investigated. Finally, the performance of these single-receiver approaches for real-time GPS seismology is validated using 1 Hz GPS data collected during the Tohoku-Oki earthquake (Mw 9.0, March 11, 2011) in Japan. It is shown that coseismic displacement accuracy of a few centimeters is achievable. Keywords: high-rate GPS; real-time GPS seismology; single receiver; PPP; variometric approach; temporal point positioning; error analysis; coseismic displacement; fault slip inversion
Intelligent error correction method applied on an active pixel sensor based star tracker
Schmidt, Uwe
2005-10-01
Star trackers are opto-electronic sensors used on board satellites for autonomous inertial attitude determination. In recent years star trackers have become more and more important among the attitude and orbit control system (AOCS) sensors. High-performance star trackers have to date been based on charge-coupled device (CCD) optical camera heads. The active pixel sensor (APS) technology, introduced in the early 1990s, now allows the beneficial replacement of CCD detectors by APS detectors with respect to performance, reliability, power, mass and cost. The company's heritage in star tracker design started in the early 1980s with the launch of the world's first fully autonomous star tracker system, ASTRO1, to the Russian MIR space station. Jena-Optronik recently developed an active-pixel-sensor-based autonomous star tracker, "ASTRO APS", as successor of the CCD-based star tracker product series ASTRO1, ASTRO5, ASTRO10 and ASTRO15. Key features of the APS detector technology are true xy-address random access, multiple-window readout, and on-chip signal processing including analogue-to-digital conversion. These features can be used for robust star tracking at high slew rates and under adverse conditions such as stray light and solar-flare-induced single event upsets. A special algorithm has been developed to manage the typical APS detector error contributors, such as fixed pattern noise (FPN), dark signal non-uniformity (DSNU) and white spots. The algorithm works fully autonomously and adapts to, e.g., increasing DSNU and newly appearing white spots automatically, without ground maintenance or re-calibration. In contrast to conventional correction methods, the described algorithm does not need calibration data memory such as full-image-sized calibration data sets. The application of the presented algorithm managing the typical APS detector error contributors is a key element in the design of star trackers for long-term satellite applications like
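One simple way such an adaptive, calibration-memory-free correction can work, exponentially tracking the per-pixel dark signal and flagging emerging white spots, is sketched below. This is an assumption-laden toy, not Jena-Optronik's actual algorithm; all levels and thresholds are invented:

```python
import numpy as np

def update_dark_map(dark_map, dark_frame, alpha=0.05, hot_sigma=5.0):
    """Exponentially track the per-pixel dark signal (DSNU/FPN estimate)
    and flag 'white spots' as pixels far above the map statistics."""
    dark_map = (1.0 - alpha) * dark_map + alpha * dark_frame
    white_spots = dark_map > dark_map.mean() + hot_sigma * dark_map.std()
    return dark_map, white_spots

rng = np.random.default_rng(0)
dark_map = np.zeros((8, 8))
for _ in range(100):                         # simulated dark frames over time
    frame = rng.normal(10.0, 1.0, (8, 8))    # nominal dark level ~10 counts
    frame[3, 3] = 200.0                      # an emerging white spot
    dark_map, white_spots = update_dark_map(dark_map, frame)

corrected = frame - dark_map                 # DSNU/FPN-corrected frame
```

Because the map is updated recursively from the incoming frames themselves, no stored full-image calibration set is needed, which mirrors the property claimed for the flight algorithm.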
Higher order methods for burnup calculations with Bateman solutions
International Nuclear Information System (INIS)
Isotalo, A.E.; Aarnio, P.A.
2011-01-01
Highlights: → Average microscopic reaction rates need to be estimated at each step. → Traditional predictor-corrector methods use zeroth and first order predictions. → Increasing predictor order greatly improves results. → Increasing corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
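The proposed predictor refinement, linear extrapolation of the reaction rate instead of a constant beginning-of-step value, can be sketched for a single nuclide with a toy composition-dependent rate; real burnup codes carry full one-group cross-section sets, so this is illustrative only:

```python
import numpy as np

def depletion_step(N, rate_prev, rate_now, dt, rate_fn):
    """One burnup step solved with the analytic constant-rate Bateman
    solution. Predictor: rate linearly extrapolated from the previous
    step (equal step lengths assumed), averaged over [0, dt];
    corrector: linear interpolation between the beginning-of-step and
    predicted end-of-step rates."""
    rate_pred = rate_now + 0.5 * (rate_now - rate_prev)
    N_pred = N * np.exp(-rate_pred * dt)
    rate_eos = rate_fn(N_pred)
    return N * np.exp(-0.5 * (rate_now + rate_eos) * dt)

def rate_fn(N):                       # toy composition-dependent reaction rate
    return 0.1 + 0.05 * N

N = 1.0
rate_prev = rate_now = rate_fn(N)     # first step falls back to constant rate
for _ in range(2):                    # two steps of dt = 1 (arbitrary units)
    N = depletion_step(N, rate_prev, rate_now, 1.0, rate_fn)
    rate_prev, rate_now = rate_now, rate_fn(N)
```

For this toy problem the analytic answer after two steps is about 0.7507, and the extrapolated predictor reuses only data from the previous step, so the per-step cost is unchanged, matching the point made in the abstract.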
Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H
2012-06-01
Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images of patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF; the magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model based on imaging voxel data was used, and the analysis considered clustered voxel data within images. Results: The regression analysis showed that UE was significantly correlated with registration error, DVF, and the product of registration error and DVF, with R² = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of the UE values and R-DVF*R-ERR was established. The mean registration error (N = 8) was 0.9 mm; 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in
Properties of gases, liquids, and solutions principles and methods
Mason, Warren P
2013-01-01
Physical Acoustics: Principles and Methods, Volume II, Part A: Properties of Gases, Liquids, and Solutions covers high-frequency sound waves in gases, liquids, and solids, which have proven to be effective tools in examining molecular, domain wall, and other types of motion. The selection first offers information on the transmission of sound waves in gases at very low pressures and the phenomenological theory of relaxation phenomena in gases. Topics include free molecule propagation, phenomenological thermodynamics of irreversible processes, and simultaneous multiple relaxation pro
Methods for removing transuranic elements from waste solutions
International Nuclear Information System (INIS)
Slater, S.A.; Chamberlain, D.B.; Connor, C.; Sedlet, J.; Srinivasan, B.; Vandegrift, G.F.
1994-11-01
This report outlines a treatment scheme for separating and concentrating the transuranic (TRU) elements present in aqueous waste solutions stored at Argonne National Laboratory (ANL). The treatment method selected is carrier precipitation. Potential carriers will be evaluated in future laboratory work, beginning with ferric hydroxide and magnetite. The process will result in a supernatant with alpha activity low enough that it can be treated in the existing evaporator/concentrator at ANL. The separated TRU waste will be packaged for shipment to the Waste Isolation Pilot Plant
Water flux in animals: analysis of potential errors in the tritiated water method
Energy Technology Data Exchange (ETDEWEB)
Nagy, K.A.; Costa, D.
1979-03-01
Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.
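For reference, one common constant-volume flux equation of the kind whose misuse the paper warns about looks like this; the numbers are hypothetical measurements, and real field work must pick the equation matching the animal's actual water balance:

```python
import math

def water_flux(body_water_ml, activity_initial, activity_final, days):
    """Water flux (ml/day) from tritium washout, assuming constant body
    water volume and steady influx = efflux, so the specific activity
    declines exponentially. Other physiological regimes need other
    equations, which is exactly the pitfall discussed above."""
    return body_water_ml * math.log(activity_initial / activity_final) / days

# hypothetical measurements: 200 ml body water, specific activity falling
# from 1000 to 600 dpm/ml over 4 days
flux = water_flux(200.0, 1000.0, 600.0, 4.0)   # ml of water turned over per day
```

Applying this steady-state form to an animal that is gaining or losing body water violates its assumptions, which is one route to the >100% errors quoted in the abstract.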
Water flux in animals: analysis of potential errors in the tritiated water method
International Nuclear Information System (INIS)
Nagy, K.A.; Costa, D.
1979-03-01
Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations
International Nuclear Information System (INIS)
Fernández-Ahumada, E; Gómez, A; Vallesquino, P; Guerrero, J E; Pérez-Marín, D; Garrido-Varo, A; Fearn, T
2008-01-01
According to the current demands of the authorities, the manufacturers and the consumers, controls and assessments of the feed compound manufacturing process have become a key concern. Among other things, it must be assured that a given compound feed is well manufactured and labelled in terms of ingredient composition. When near-infrared spectroscopy (NIRS) together with linear models was used for the prediction of ingredient composition, the results were not always acceptable. Therefore, the performance of nonlinear methods has been investigated. Artificial neural networks (ANN) and least squares support vector machines (LS-SVM) have been applied to a large (N = 20,320) and heterogeneous population of non-milled feed compounds for the NIR prediction of the inclusion percentage of wheat and sunflower meal, as representatives of two different classes of ingredients. Compared to partial least squares regression, the results showed considerable reductions in standard error of prediction for both methods and ingredients: reductions of 45% with ANN and 49% with LS-SVM for wheat, and of 44% with ANN and 46% with LS-SVM for sunflower meal. These improvements, together with the ease with which NIRS technology can be implemented in the process, make it ideal for meeting the requirements of the animal feed industry
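LS-SVM regression reduces in the dual to solving a regularized kernel system. A bias-free sketch (i.e. plain kernel ridge, a close cousin of LS-SVM) on synthetic data shows the kind of gain a nonlinear model can give over a linear calibration; all data and parameters below are invented, not the feed spectra of the study:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, reg=1e-3):
    """Dual coefficients from (K + reg I) alpha = y; LS-SVM regression
    solves the same kind of regularized linear system (plus a bias row)."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, (200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)  # nonlinear response

alpha = kernel_ridge_fit(X, y)
pred_nl = rbf_kernel(X, X, 1.0) @ alpha
# linear baseline, standing in for a linear calibration such as PLS
Xl = np.c_[X, np.ones(len(X))]
pred_lin = Xl @ np.linalg.lstsq(Xl, y, rcond=None)[0]

rmse_nl = float(np.sqrt(np.mean((y - pred_nl) ** 2)))
rmse_lin = float(np.sqrt(np.mean((y - pred_lin) ** 2)))
```

On genuinely nonlinear responses the kernel model's error approaches the noise floor while the linear fit cannot, which is the qualitative pattern the abstract reports for feed ingredients.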
Application of homotopy analysis method and inverse solution of a rectangular wet fin
International Nuclear Information System (INIS)
Panda, Srikumar; Bhowmik, Arka; Das, Ranjan; Repaka, Ramjee; Martha, Subash C.
2014-01-01
Highlights: • Solution of a rectangular wet fin is obtained by the homotopy analysis method (HAM). • Present HAM results have been well validated against literature results. • Inverse analysis is done using a genetic algorithm. • A measurement error of approximately ±10–12% is found to yield satisfactory reconstructions. - Abstract: This paper presents the analytical solution of a rectangular fin under simultaneous heat and mass transfer across the fin surface and the fin tip, and estimates the unknown thermal and geometrical configurations of the fin using inverse heat transfer analysis. The local temperature field is obtained by using the homotopy analysis method for insulated and convective fin tip boundary conditions. Using a genetic algorithm, the thermal and geometrical parameters, viz., thermal conductivity of the material, surface heat transfer coefficient and dimensions of the fin, have been simultaneously estimated for the prescribed temperature field. Earlier inverse studies on wet fins have been restricted to the analysis of the nonlinear governing equation with either an insulated tip condition or a finite tip temperature only. The present study develops a closed-form solution with consideration of nonlinearity effects in both the governing equation and the boundary condition. The study on inverse optimization leads to many feasible combinations of fin materials, thermal conditions and fin dimensions. This allows flexibility in designing a fin under wet conditions, based on multiple combinations of fin materials, fin dimensions and thermal configurations, to achieve the required heat transfer duty. It is further determined that the allowable measurement error should be limited to ±10–12% in order to achieve a satisfactory reconstruction
The method of fundamental solutions for computing acoustic interior transmission eigenvalues
Kleefeld, Andreas; Pieronek, Lukas
2018-03-01
We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration-free, but suffers in general from the ill-conditioning of the discretized eigenoperator, which we successfully counteract using an established stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.
Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System.
Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou
2017-03-22
The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a novel method based on the geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras' coordinates. This method utilizes a two-axis turntable, and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angle reaches 1.82 mm and 0.17° (1σ) respectively. The basic mechanism of the MFNS may be utilized as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value for its convenience of use and potential for integration into a toolkit.
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.
Stand-alone error characterisation of microwave satellite soil moisture using a Fourier method
Error characterisation of satellite-retrieved soil moisture (SM) is crucial for maximizing their utility in research and applications in hydro-meteorology and climatology. Error characteristics can provide insights for retrieval development and validation, and inform suitable strategies for data fus...
A generalized trial solution method for solving the aerosol equation
International Nuclear Information System (INIS)
Simons, S.; Simpson, D.R.
1988-01-01
It is shown how the introduction of orthogonal functions together with a time-dependent scaling factor may be used to develop a generalized trial solution method for tackling the aerosol equation. The approach is worked out in detail for the case where the initial particle size spectrum follows a γ-distribution, and it is shown to be a viable technique as long as the initial volume fraction of particulate material is not too large. The method is applied to several situations of interest, and is shown to give more accurate results (with marginally shorter computing times) than are given by the three-parameter log-normal or γ distribution trial functions. (author)
A finite element solution method for quadrics parallel computer
International Nuclear Information System (INIS)
Zucchini, A.
1996-08-01
A distributed preconditioned conjugate gradient method for finite element analysis has been developed and implemented on a parallel SIMD Quadrics computer. The main characteristic of the method is that it does not require any actual assembling of all element equations in a global system. The physical domain of the problem is partitioned into cells of n_p finite elements and each cell element is assigned to a different node of an n_p-processor machine. Element stiffness matrices are stored in the data memory of the assigned processing node and the solution process is executed entirely in parallel at the element level. Inter-element, and therefore inter-processor, communications are required once per iteration to perform local sums of vector quantities between neighbouring elements. A prototype implementation has been tested on an 8-node Quadrics machine on a simple 2D benchmark problem.
The method of lines solution of discrete ordinates method for non-grey media
International Nuclear Information System (INIS)
Cayan, Fatma Nihan; Selcuk, Nevin
2007-01-01
A radiation code based on method of lines (MOL) solution of discrete ordinates method (DOM) for radiative heat transfer in non-grey absorbing-emitting media was developed by incorporation of a gas spectral radiative property model, namely wide band correlated-k (WBCK) model, which is compatible with MOL solution of DOM. Predictive accuracy of the code was evaluated by applying it to 1-D parallel plate and 2-D axisymmetric cylindrical enclosure problems containing absorbing-emitting medium and benchmarking its predictions against line-by-line solutions available in the literature. Comparisons reveal that MOL solution of DOM with WBCK model produces accurate results for radiative heat fluxes and source terms and can be used with confidence in conjunction with computational fluid dynamics codes based on the same approach
Directory of Open Access Journals (Sweden)
Antonio Gledson Goulart
2013-12-01
Full Text Available In this paper, the equation for the gravity wave spectra in the mean atmosphere is solved analytically, without linearization, by the Adomian decomposition method. As a consequence, the nonlinear nature of the problem is preserved and the errors found in the results are due only to the parameterization. The results, with the parameterization applied in the simulations, indicate that the linear solution of the equation is a good approximation only for heights below ten kilometers, because linearizing the equation leads to a solution that does not correctly describe the kinetic energy spectra.
Analysis of the Block-Grid Method for the Solution of Laplace's Equation on Polygons with a Slit
Directory of Open Access Journals (Sweden)
S. Cival Buranay
2013-01-01
Full Text Available The error estimates obtained for solving Laplace's boundary value problem on polygons by the block-grid method contain constants that are difficult to calculate accurately. Therefore, the experimental analysis of the method could be essential. The real characteristics of the block-grid method for solving Laplace's equation on polygons with a slit are analysed by experimental investigations. The numerical results obtained show that the order of convergence of the approximate solution is the same as in the case of a smooth solution. To illustrate the singular behaviour around the singular point, the shape of the highly accurate approximate solution and the figures of its partial derivatives up to second order are given in the “singular” part of the domain. Finally a highly accurate formula is given to calculate the stress intensity factor, which is an important quantity in fracture mechanics.
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
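The trade-off described above — thinning a correlated chain to cut storage at negligible cost in estimation quality — can be illustrated with a toy sketch (not the paper's derivation; the AR(1) chain, correlation 0.95, and thinning interval of 20 are illustrative assumptions):

```python
import random

def ar1_chain(n, rho, seed=0):
    """Generate a stationary AR(1) chain: x_t = rho*x_{t-1} + noise."""
    rng = random.Random(seed)
    x, out = 0.0, []
    innov_sd = (1.0 - rho * rho) ** 0.5  # keeps the marginal variance at 1
    for _ in range(n):
        x = rho * x + innov_sd * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

chain = ar1_chain(200_000, rho=0.95)   # strongly correlated samples
full_mean = sum(chain) / len(chain)    # ensemble average over all samples
thinned = chain[::20]                  # keep only every 20th sample
thin_mean = sum(thinned) / len(thinned)
# The thinned estimate uses 5% of the storage but is nearly as good,
# because adjacent samples carried little independent information.
```

Because the lag-20 correlation (0.95^20 ≈ 0.36) is already modest, the two ensemble averages agree closely, which is the behavior the variance rules above formalize.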
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
A new method to assess the statistical convergence of monte carlo solutions
International Nuclear Information System (INIS)
Forster, R.A.
1991-01-01
Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
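The idea of inspecting the empirical history-score distribution can be sketched as follows (an illustrative toy, not the paper's estimator; the uniform and Pareto score distributions and the seed are assumptions). When a handful of histories carries a large fraction of the total score, a CLT-based confidence interval is not yet trustworthy:

```python
import random

def top_fraction(scores, k):
    """Fraction of the total result carried by the k largest history scores."""
    s = sorted(scores, reverse=True)
    return sum(s[:k]) / sum(s)

rng = random.Random(42)
n = 10_000
# Well-behaved (bounded) scores: no single history dominates the tally.
bounded = [rng.uniform(0.0, 1.0) for _ in range(n)]
# Heavy-tailed scores: a few rare histories dominate, so the empirical
# score PDF warns that the central limit theorem has not yet "kicked in".
heavy = [rng.paretovariate(1.1) for _ in range(n)]

frac_bounded = top_fraction(bounded, 10)
frac_heavy = top_fraction(heavy, 10)
```

Comparing the two fractions makes the diagnostic concrete: the bounded scores spread the result evenly over histories, while the heavy-tailed scores concentrate it in a few.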
Wavelets and triple difference as a mathematical method for filtering and mitigation of DGPS errors
Directory of Open Access Journals (Sweden)
Aly M. El-naggar
2015-12-01
Wavelet spectral techniques can separate GPS signals into sub-bands where different errors can be separated and mitigated. The main goal of this paper was the development and implementation of DGPS error mitigation techniques using triple difference and wavelet. This paper studies, analyzes and provides new techniques that will help mitigate these errors in the frequency domain. The proposed technique applied to smooth noise for GPS receiver positioning data is based upon the analysis of the wavelet transform (WT). The technique is applied using wavelet as a de-noising tool to tackle the high-frequency errors in the triple difference domain and to obtain a de-noised triple difference signal that can be used in a positioning calculation.
A multigrid solution method for mixed hybrid finite elements
Energy Technology Data Exchange (ETDEWEB)
Schmid, W. [Universitaet Augsburg (Germany)]
1996-12-31
We consider the multigrid solution of the linear equations arising within the discretization of elliptic second-order boundary value problems by mixed hybrid finite elements. Using the equivalence of mixed hybrid finite elements and non-conforming nodal finite elements, we construct a multigrid scheme for the corresponding non-conforming finite elements and, by this equivalence, for the mixed hybrid finite elements, following guidelines from Arbogast/Chen. For a rectangular triangulation of the computational domain, these non-conforming schemes are the so-called nodal finite elements. We explicitly construct prolongation and restriction operators for this type of non-conforming finite elements. We discuss the use of plain multigrid and the multilevel-preconditioned cg-method and compare their efficiency in numerical tests.
Development of production methods of volume source by the resinous solution which has hardening
Motoki, R
2002-01-01
Volume sources are used as standard sources for radioactivity measurements, with a Ge semiconductor detector, of environmental samples, e.g. water, soil, etc., that require a large volume. The commercial volume source used in the measurement of water samples is made of agar-agar, and that used in the measurement of soil samples is made of alumina powder. When the plastic receptacles of these two kinds of volume sources are damaged, the leaking contents cause contamination. Moreover, if the hermetic sealing performance of a volume source made of agar-agar deteriorates, the volume decrease due to evaporation of moisture introduces an error into the radioactivity measurement. Therefore, we developed two methods using unsaturated polyester resin, vinyl ester resin, their hardening agents and acrylic resin. The first method disperses the hydrochloric acid solution containing the radioisotopes uniformly in each resin and hardens the resin. The second disperses the alumina powder that has absorbed the radioisotopes in each resin an...
Papadopoulos , D. F.; Anastassi , Z. A.; Simos , T. E.
2010-01-01
Abstract A new Runge-Kutta-Nystrom method, with phase-lag and amplification error of order infinity, for the numerical solution of the Schrodinger equation is developed in this paper. The new method is based on the Runge-Kutta-Nystrom method of fourth algebraic order developed by Dormand, El-Mikkawy and Prince. Numerical illustrations indicate that the new method is much more efficient than other methods derived for the same purpose.
Directory of Open Access Journals (Sweden)
Mehmet Tarik Atay
2013-01-01
Full Text Available The Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are used to find solutions of systems of stiff ordinary differential equations for both linear and nonlinear problems. Some examples are given to illustrate the accuracy and effectiveness of these methods. We compare our results with exact results. In some studies related to stiff ordinary differential equations, problems were solved by the Adomian Decomposition Method, VIM and the Homotopy Perturbation Method. Comparisons with exact solutions reveal that the Variational Iteration Method (VIM) and the Modified Variational Iteration Method (MVIM) are easier to implement. In fact, these methods are promising for various systems of linear and nonlinear stiff ordinary differential equations. Furthermore, VIM, or in some cases MVIM, gives exact solutions in linear cases and very satisfactory solutions, compared to exact solutions, for nonlinear cases, depending on the stiffness ratio of the stiff system to be solved.
On matrix diffusion: formulations, solution methods and qualitative effects
Carrera, Jesús; Sánchez-Vila, Xavier; Benet, Inmaculada; Medina, Agustín; Galarza, Germán; Guimerà, Jordi
Matrix diffusion has become widely recognized as an important transport mechanism. Unfortunately, accounting for matrix diffusion complicates solute-transport simulations. This problem has led to simplified formulations, partly motivated by the solution method. As a result, some confusion has been generated about how to properly pose the problem. One of the objectives of this work is to find some unity among existing formulations and solution methods. In doing so, some asymptotic properties of matrix diffusion are derived. Specifically, early-time behavior (short tests) depends only on φ_m^2 R_m D_m / L_m^2, whereas late-time behavior (long tracer tests) depends only on φ_m R_m, and not on the matrix diffusion coefficient or on block size and shape. The latter is always true for the mean arrival time. These properties help in: (a) analyzing the qualitative behavior of matrix diffusion; (b) explaining one paradox of solute transport through fractured rocks (the apparent dependence of porosity on travel time); (c) discriminating between matrix diffusion and other problems (such as kinetic sorption or heterogeneity); and (d) describing identifiability problems and ways to overcome them.
Electroerosion method for preparation of saturated solutions of ruthenium hydroxochloride
International Nuclear Information System (INIS)
Mikhalev, V.A.; Andrianov, G.A.; Zhadanov, B.V.; Ryazanov, A.I.
1987-01-01
A pilot plant for carrying out electroerosion processes using pulse current of high unit power has been developed. The dissolution process of metallic Ru in concentrated HCl is investigated. The possibility of preparing ruthenium hydroxochloride solutions with a concentration of 300 g/l is established; this permits dissolving Ru under conditions similar to the salting-out process.
Composition and method for solution mining of uranium ores
International Nuclear Information System (INIS)
Lawes, B.C.; Watts, J.C.
1981-01-01
It has been found that, in the solution mining of uranium ores using ammonium carbonate solutions containing hydrogen peroxide or ozone as an oxidant, the tendency of the formation being treated to become less permeable during the leaching process can be overcome by including in the leaching solution a very small concentration of sodium silicate.
CO2 production in animals: analysis of potential errors in the doubly labeled water method
International Nuclear Information System (INIS)
Nagy, K.A.
1979-03-01
Laboratory validation studies indicate that doubly labeled water (³HH¹⁸O and ²HH¹⁸O) measurements of CO2 production are accurate to within ±9% in nine species of mammals and reptiles, a bird, and an insect. However, in field studies, errors can be much larger under certain circumstances. Isotopic fractionation of labeled water can cause large errors in animals whose evaporative water loss comprises a major proportion of total water efflux. Input of CO2 across lungs and skin caused errors exceeding +80% in kangaroo rats exposed to air containing 3.4% unlabeled CO2. Analytical errors of ±1% in isotope concentrations can cause calculated rates of CO2 production to contain errors exceeding ±70% in some circumstances. These occur: 1) when little decline in isotope concentrations has occurred during the measurement period; 2) when final isotope concentrations closely approach background levels; and 3) when the rate of water flux in an animal is high relative to its rate of CO2 production. The following sources of error are probably negligible in most situations: 1) use of an inappropriate equation for calculating CO2 production, 2) variations in rates of water or CO2 flux through time, 3) use of the O-18 dilution space as a measure of body water volume, 4) exchange of O-18 between water and nonaqueous compounds in animals (including excrement), 5) incomplete mixing of isotopes in the animal, and 6) input of unlabeled water via lungs and skin. Errors in field measurements of CO2 production can be reduced to acceptable levels (<10%) by appropriate selection of study subjects and recapture intervals.
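The sensitivity to analytical error when the isotope decline is small can be sketched numerically (an illustrative calculation, not the full doubly-labeled-water equation; treating CO2 production as simply proportional to the turnover-rate difference, and all enrichment values, are assumptions):

```python
import math

def turnover(c0, ct, t):
    """Fractional isotope turnover rate from initial and final enrichments."""
    return math.log(c0 / ct) / t

def co2_proxy(c0_o, ct_o, c0_h, ct_h, t):
    # CO2 production is taken here as proportional to (k_O - k_H);
    # the multiplicative constants of the full equation are omitted.
    return turnover(c0_o, ct_o, t) - turnover(c0_h, ct_h, t)

# Small isotope declines over the measurement period (illustrative values):
base = co2_proxy(100.0, 98.0, 100.0, 99.0, t=1.0)
# The same animal, but with a +1% analytical error in the final O-18 value:
pert = co2_proxy(100.0, 98.0 * 1.01, 100.0, 99.0, t=1.0)
rel_err = abs(pert - base) / base
# The two turnover rates nearly cancel, so a 1% concentration error
# produces a very large relative error in the inferred CO2 production.
```

This is exactly the regime flagged in the abstract: with little isotope decline, a ±1% analytical error can swamp the signal.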
Choi, Sae Il
2009-01-01
This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…
International Nuclear Information System (INIS)
Su Qiong; Cheng Jianping; Diao Lijun; Li Guiqun
2006-01-01
A significant systematic error, which had long gone unrecognized, is pointed out. The error appears in calibration methods used to determine the activity of ²³⁸U with a high-resolution γ-spectrometer. When the 92.6 keV γ-ray, a characteristic radiation of ²³⁸U, is used to determine the activity of ²³⁸U in natural environment samples, the disturbing radiation produced by external excitation (also called external-source X-ray radiation) is the main problem. Because the X-ray intensity changes with many indefinite factors, it is advised that these calibration methods be abandoned. As the influence of this systematic error remains in some past research papers, the authors suggest that the data from those papers be cited carefully and, if possible, re-determined. (authors)
Method for regeneration of electroless nickel plating solution
Eisenmann, E.T.
1997-03-11
An electroless nickel (EN)/hypophosphite plating bath is provided employing acetic acid/acetate as a buffer and which is, as a result, capable of perpetual regeneration while avoiding the production of hazardous waste. A regeneration process is provided to process the spent EN plating bath solution. A concentrated starter and replenishment solution is provided for ease of operation of the plating bath. The regeneration process employs a chelating ion exchange system to remove nickel cations from spent EN plating solution. Phosphites are then removed from the solution by precipitation. The nickel cations are removed from the ion exchange system by elution with hypophosphorous acid and the nickel concentration of the eluate adjusted by addition of nickel salt. The treated solution and adjusted eluate are combined, stabilizer added, and the volume of resulting solution reduced by evaporation to form the bath starter and replenishing solution. 1 fig.
Method for regeneration of electroless nickel plating solution
Eisenmann, Erhard T.
1997-01-01
An electroless nickel(EN)/hypophosphite plating bath is provided employing acetic acid/acetate as a buffer and which is, as a result, capable of perpetual regeneration while avoiding the production of hazardous waste. A regeneration process is provided to process the spent EN plating bath solution. A concentrated starter and replenishment solution is provided for ease of operation of the plating bath. The regeneration process employs a chelating ion exchange system to remove nickel cations from spent EN plating solution. Phosphites are then removed from the solution by precipitation. The nickel cations are removed from the ion exchange system by elution with hypophosphorous acid and the nickel concentration of the eluate adjusted by addition of nickel salt. The treated solution and adjusted eluate are combined, stabilizer added, and the volume of resulting solution reduced by evaporation to form the bath starter and replenishing solution.
Matrix method for two-dimensional waveguide mode solution
Sun, Baoguang; Cai, Congzhong; Venkatesh, Balajee Seshasayee
2018-05-01
In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even composed of numerous layers, is essentially a multilayer stack and the transmission through the stack can be analysed using the transfer matrix theory. The result is a transfer matrix with four complex value elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer, there is only propagation and absorption; at each boundary, only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve modes supported by the multilayer step-index dielectric waveguide, slot waveguide, gradient-index waveguide and various plasmonic waveguides. The results indicate the transfer matrix method is effective for 2D waveguide mode solution in general.
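For the simplest case, a symmetric three-layer slab, the transfer-matrix mode condition reduces to the familiar even-TE dispersion relation tan(κd/2) = γ/κ, which a short bisection sketch can solve (the indices, thickness and wavelength are illustrative assumptions; this 1D special case stands in for, and is not, the authors' general multilayer solver):

```python
import math

def slab_te0_neff(n_core, n_clad, d, wavelength):
    """Effective index of the fundamental (even) TE mode of a symmetric slab.

    Solves tan(kappa*d/2) = gamma/kappa by bisection, where
      kappa = k0*sqrt(n_core^2 - neff^2)  (transverse wavenumber in the core)
      gamma = k0*sqrt(neff^2 - n_clad^2)  (evanescent decay in the claddings).
    """
    k0 = 2.0 * math.pi / wavelength

    def f(neff):
        kappa = k0 * math.sqrt(n_core**2 - neff**2)
        gamma = k0 * math.sqrt(neff**2 - n_clad**2)
        return math.tan(kappa * d / 2.0) - gamma / kappa

    # Bracket the fundamental branch, where kappa*d/2 stays below pi/2:
    lo = math.sqrt(max(n_core**2 - (math.pi / (k0 * d))**2, n_clad**2)) + 1e-9
    hi = n_core - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

neff_thick = slab_te0_neff(1.5, 1.0, d=1.0, wavelength=1.0)
neff_thin = slab_te0_neff(1.5, 1.0, d=0.5, wavelength=1.0)
```

As expected physically, the effective index lies between the cladding and core indices, and a thinner slab confines the mode less, lowering its effective index.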
Methods of using the quadratic assignment problem solution
Directory of Open Access Journals (Sweden)
Izabela Kudelska
2012-09-01
Full Text Available Background: The quadratic assignment problem (QAP) is one of the most interesting problems of combinatorial optimization. It was presented by Koopmans and Beckmann in 1957 as a mathematical model of the location of indivisible tasks. This problem belongs to the class of NP-hard problems, which forces the application of approximate solution methods even for tasks of modest size (over 30). Even though it is much harder than other combinatorial optimization problems, it enjoys wide interest because it models an important class of decision problems. Material and methods: The discussion covered artificial intelligence tools that allow solving the QAP, among others: genetic algorithms, Tabu Search, and Branch and Bound. Results and conclusions: The QAP did not arise directly as a model for certain actions, but it has found application in many areas. Examples of applications of the problem are: arrangement of buildings on a university campus, layout design of electronic components in very-large-scale integration (VLSI) systems, design of a hospital, and arrangement of keys on a keyboard.
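For tiny instances the QAP can still be solved exactly by enumerating all permutations, which makes the cost structure concrete (a minimal sketch; the 3x3 flow and distance matrices are invented for illustration, and enumeration is exactly what becomes infeasible at the problem sizes the abstract mentions):

```python
import itertools

def qap_bruteforce(F, D):
    """Exact QAP by enumeration: assign facility i to location p[i],
    minimizing sum_{i,j} F[i][j] * D[p[i]][p[j]]. Feasible only for tiny n."""
    n = len(F)
    best_cost, best_p = float("inf"), None
    for p in itertools.permutations(range(n)):
        cost = sum(F[i][j] * D[p[i]][p[j]]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_cost, best_p = cost, p
    return best_cost, best_p

# Invented instance: flows between 3 facilities, distances between 3 locations.
F = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
cost, perm = qap_bruteforce(F, D)
```

With n! permutations to scan, this grows hopeless quickly, which is why the metaheuristics listed above (genetic algorithms, Tabu Search) or Branch and Bound are used instead.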
International Nuclear Information System (INIS)
Smirnov, G.I.; Kachur, N.Ya.; Kostromina, O.N.; Ogorodnikova, A.A.; Khajnakov, S.A.
1990-01-01
A method of deep ion exchange purification of sodium iodide solution from heavy metal (iron, nickel, copper, lead) and potassium microimpurities is developed. The method includes multiple sorption of the microimpurities on titanium phosphate with their subsequent desorption by treating the sorbent first with a solution of 3-6 N nitric acid and then with a neutral solution of 2% sodium thiosulfate. The given method permits increasing the purification degree of the sodium iodide solution by 25-30%. 2 tabs
Weng, Hanli; Li, Youping
2017-04-01
The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.
An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.
Deng, Zhongliang; Fu, Xiao; Wang, Hanhua
2018-01-20
Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
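The effect of body shadowing on RSS ranging, and the idea of compensating for it when an IMU flags a shadowed geometry, can be sketched with a log-distance path-loss model (a minimal illustration; the path-loss exponent, reference power and 6 dB body loss are assumptions, not values from the paper):

```python
import math

def rss_to_distance(rss_dbm, p0=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model rss = p0 - 10*n*log10(d/d0)."""
    return d0 * 10.0 ** ((p0 - rss_dbm) / (10.0 * n))

true_d = 5.0
rss_clear = -40.0 - 10.0 * 2.0 * math.log10(true_d / 1.0)  # unobstructed RSS
body_loss_db = 6.0            # assumed extra attenuation through the body
rss_shadow = rss_clear - body_loss_db

d_raw = rss_to_distance(rss_shadow)                  # inflated range estimate
d_comp = rss_to_distance(rss_shadow + body_loss_db)  # add back the loss when
                                                     # the IMU flags shadowing
# With n = 2, a 6 dB body loss inflates the range estimate by 10**(6/20),
# i.e. roughly a factor of two.
```

This is why an uncompensated shadowed link degrades positioning so strongly, and why detecting the shadowing event (here via the IMU) lets the ranging error be largely removed.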
An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning
Directory of Open Access Journals (Sweden)
Zhongliang Deng
2018-01-01
Full Text Available Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.
New Exact Solutions of Time Fractional Gardner Equation by Using New Version of F-Expansion Method
International Nuclear Information System (INIS)
Pandir, Yusuf; Duzgun, Hasan Huseyin
2017-01-01
In this article, we consider analytical solutions of the time fractional derivative Gardner equation by using the new version of F-expansion method. With this proposed method multiple Jacobi elliptic functions are situated in the solution function. As a result, various exact analytical solutions consisting of single and combined Jacobi elliptic functions solutions are obtained. (paper)
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
Directory of Open Access Journals (Sweden)
Mehmet DEMİREZEN
2006-10-01
Full Text Available Fossilized pronunciation errors constitute a great problem in the mastery of an L2 in second or foreign language learning and teaching (Odlin, 1989; Demirezen, 2003; Demirezen, 2004; Johnson, 2001). One such error, committed by a great majority of Turkish teachers of English and student teachers, is the acquisition of two problematic vowel sounds of the English language. No specific material or lesson plan has been encountered so far in the literature to rehabilitate the pronunciation difficulty created by these two vowel sounds. Therefore, this article aims to provide pronunciation teaching material and a sample lesson on these two sounds, which are difficult for Turks, to Turkish teachers-on-the-job and student teachers of English.
The Scientific Method, Diagnostic Bayes, and How to Detect Epistemic Errors
Vrugt, J. A.
2015-12-01
In the past decades, Bayesian methods have found widespread application and use in environmental systems modeling. Bayes' theorem states that the posterior probability, P(H|\hat{D}), of a hypothesis, H, is proportional to the product of the prior probability, P(H), of this hypothesis and the likelihood, L(H|\hat{D}), of the same hypothesis given the new/incoming observations, \hat{D}. In science and engineering, H often constitutes some numerical simulation model, D = F(x,·), which summarizes, using algebraic, empirical, and differential equations, state variables and fluxes, all our theoretical and/or practical knowledge of the system of interest, and x are the d unknown parameters which are subject to inference using some data, \hat{D}, of the observed system response. The Bayesian approach is intimately related to the scientific method and uses an iterative cycle of hypothesis formulation (model), experimentation and data collection, and theory/hypothesis refinement to elucidate the rules that govern the natural world. Unfortunately, model refinement has proven to be very difficult, in large part because of the poor diagnostic power of residual-based likelihood functions (Gupta et al., 2008). This has inspired Vrugt and Sadegh (2013) to advocate the use of 'likelihood-free' inference using approximate Bayesian computation (ABC). This approach uses one or more summary statistics, S(\hat{D}), of the original data, \hat{D}, designed ideally to be sensitive only to one particular process in the model. Any mismatch between the observed and simulated summary metrics is then easily linked to a specific model component. A recurrent issue with the application of ABC is self-sufficiency of the summary statistics. In theory, S(·) should contain as much information as the original data itself, yet complex systems rarely admit sufficient statistics. In this article, we propose to combine the ideas of ABC and regular Bayesian inference to guarantee that no information is lost in diagnostic model
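The ABC idea invoked above — accept prior draws whose simulated summary statistic falls within a tolerance of the observed one — can be sketched on a toy Gaussian model. The model, prior, summary statistic, and tolerance here are all assumptions for illustration, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: observations are Gaussian with unknown mean theta.
theta_true = 2.0
data_obs = rng.normal(theta_true, 1.0, size=200)

def summary(d):
    # Summary statistic S(D); the sample mean happens to be sufficient here.
    return d.mean()

# ABC rejection sampling: keep prior draws whose simulated summary lies
# within eps of the observed summary.
eps = 0.05
prior_draws = rng.uniform(-5, 5, size=20000)
accepted = []
for theta in prior_draws:
    sim = rng.normal(theta, 1.0, size=200)
    if abs(summary(sim) - summary(data_obs)) < eps:
        accepted.append(theta)
accepted = np.array(accepted)
posterior_mean = accepted.mean()
```

Because the mismatch is measured on S(·) rather than on raw residuals, a rejection here points directly at the process S(·) was designed to probe — the diagnostic property the abstract emphasizes.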
Error analysis of isotope dilution mass spectrometry method with internal standard
International Nuclear Information System (INIS)
Rizhinskii, M.W.; Vitinskii, M.Y.
1989-02-01
The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
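A Monte Carlo prediction of the expected error magnitude, in the spirit described above, can be sketched as follows. The simplified isotope-dilution relation and all numerical values are illustrative assumptions, not the paper's certified reference values.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Simplified isotope-dilution relation (illustrative):
#   c_x = c_s * (R_s - R_m) / (R_m - R_x) * (m_s / m_x)
c_s = rng.normal(10.0, 0.05, N)    # spike concentration, with uncertainty
R_s = rng.normal(100.0, 0.5, N)    # isotope ratio of the spike
R_x = rng.normal(0.01, 0.0005, N)  # isotope ratio of the sample
R_m = rng.normal(5.0, 0.02, N)     # measured ratio of the blend
m_s = rng.normal(1.0, 0.001, N)    # spike mass
m_x = rng.normal(1.0, 0.001, N)    # sample mass

# Propagate all input uncertainties through the relation at once.
c_x = c_s * (R_s - R_m) / (R_m - R_x) * (m_s / m_x)

mean_c = c_x.mean()
rel_err = c_x.std() / mean_c   # predicted relative standard uncertainty
```

The spread of `c_x` plays the role of the predicted error magnitude; systematic components could be studied the same way by biasing one input at a time.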
Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.
Zaitsev, M; Steinhoff, S; Shah, N J
2003-06-01
A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
Goswami, Deepjyoti
2013-05-01
In the first part of this article, a new mixed method is proposed and analyzed for parabolic integro-differential equations (PIDE) with nonsmooth initial data. Compared to the standard mixed method for PIDE, the present method does not bank on a reformulation using a resolvent operator. Based on energy arguments combined with a repeated use of an integral operator, and without using a parabolic-type duality technique, optimal L2-error estimates are derived for semidiscrete approximations when the initial condition is in L2. Due to the presence of the integral term, it is further observed that a negative norm estimate plays a crucial role in our error analysis. Moreover, the proposed analysis follows the spirit of the proof techniques used in deriving optimal error estimates for finite element approximations to PIDE with smooth data and therefore unifies both theories, i.e., one for smooth data and the other for nonsmooth data. Finally, we extend the proposed analysis to the standard mixed method for PIDE with rough initial data and provide an optimal error estimate in L2, which improves upon the results available in the literature. © 2013 Springer Science+Business Media New York.
Quantile Regression With Measurement Error
Wei, Ying
2009-08-27
Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
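A minimal sketch of quantile estimation itself — without the paper's measurement-error correction or joint estimating equations across quantile levels — might look like this. The linear model, noise level, and subgradient-descent fitter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true intercept 1, slope 2

def fit_quantile_line(x, y, tau, lr=0.05, iters=4000):
    """Fit y ~ a + b*x at quantile tau by subgradient descent on the pinball loss."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        r = y - a - b * x
        g = tau - (r < 0)          # subgradient of the pinball loss in r
        a += lr * g.mean()
        b += lr * (g * x).mean()
    return a, b

a50, b50 = fit_quantile_line(x, y, 0.5)   # median regression
a90, b90 = fit_quantile_line(x, y, 0.9)   # 90th-percentile regression
```

With homoscedastic Gaussian noise, all quantile lines share the slope 2 and differ only in intercept; replacing `x` by an error-prone surrogate would bias the fitted slopes toward zero, which is the bias the paper's method corrects.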
Directory of Open Access Journals (Sweden)
Saman Dastaran
2016-03-01
Full Text Available Introduction: Human errors are the cause of many accidents, including industrial and medical ones; therefore, finding an approach for identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors among postgraduate endodontic students of Kerman University of Medical Sciences by using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data was collected by task observation and by interviewing postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified by means of the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified, including action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Thus, action errors were the most frequent and communication errors the least frequent. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodical training on work procedures, provision of work checklists, development of guidelines and establishment of a systematic and standardized reporting system should be put in place. Regarding the results of this study, the control of recovery errors with the highest percentage of undesirable risk and action errors with the highest frequency of errors should be in the priority of control
Methods for measuring risk-aversion: problems and solutions
International Nuclear Information System (INIS)
Thomas, P J
2013-01-01
Risk-aversion is a fundamental parameter determining how humans act when required to operate in situations of risk. Its general applicability has been discussed in a companion presentation, and this paper examines methods that have been used in the past to measure it and their attendant problems. It needs to be borne in mind that risk-aversion varies with the size of the possible loss, growing strongly as the possible loss becomes comparable with the decision maker's assets. Hence measuring risk-aversion when the potential loss or gain is small will produce values close to the risk-neutral value of zero, irrespective of who the decision maker is. It will also be shown how the generally accepted practice of basing a measurement on the results of a three-term Taylor series will estimate a limiting value, minimum or maximum, rather than the value utilised in the decision. A solution is to match the correct utility function to the results instead
Methods for measuring risk-aversion: problems and solutions
Thomas, P. J.
2013-09-01
Risk-aversion is a fundamental parameter determining how humans act when required to operate in situations of risk. Its general applicability has been discussed in a companion presentation, and this paper examines methods that have been used in the past to measure it and their attendant problems. It needs to be borne in mind that risk-aversion varies with the size of the possible loss, growing strongly as the possible loss becomes comparable with the decision maker's assets. Hence measuring risk-aversion when the potential loss or gain is small will produce values close to the risk-neutral value of zero, irrespective of who the decision maker is. It will also be shown how the generally accepted practice of basing a measurement on the results of a three-term Taylor series will estimate a limiting value, minimum or maximum, rather than the value utilised in the decision. A solution is to match the correct utility function to the results instead.
Batistatou, Evridiki; McNamee, Roseanne
2012-12-10
It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration and the simulation extrapolation methods. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputations. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method, although the 'problematic' implementation was substantially improved with the use of multiple imputations. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
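The attenuation bias and a regression-calibration style correction using replicates can be sketched for a simple linear model. The mean fit stands in for the exposure-effect estimate, and all parameters are assumed; this is not the EVROS IV method or the paper's 2S machinery.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# True exposure x and outcome y; two independent error-prone replicates w1, w2.
x = rng.normal(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)
w1 = x + rng.normal(0.0, 1.0, n)
w2 = x + rng.normal(0.0, 1.0, n)

naive_slope = np.polyfit(w1, y, 1)[0]        # attenuated toward zero

# Regression-calibration style correction: the replicates identify the
# measurement-error variance, hence the reliability ratio.
var_u = np.var(w1 - w2) / 2.0                # error variance from replicates
reliability = (np.var(w1) - var_u) / np.var(w1)
corrected_slope = naive_slope / reliability
```

Here the naive slope is roughly half the true value of 2 (reliability ≈ 0.5), and dividing by the estimated reliability recovers it; a 2S design would estimate `var_u` from the replicated subsample only.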
An explicit approximate solution to the Duffing-harmonic oscillator by a cubication method
International Nuclear Information System (INIS)
Belendez, A.; Mendez, D.I.; Fernandez, E.; Marini, S.; Pascual, I.
2009-01-01
The nonlinear oscillations of a Duffing-harmonic oscillator are investigated by an approximate method based on the 'cubication' of the initial nonlinear differential equation. In this cubication method the restoring force is expanded in Chebyshev polynomials and the original nonlinear differential equation is approximated by a Duffing equation in which the coefficients for the linear and cubic terms depend on the initial amplitude, A. The replacement of the original nonlinear equation by an approximate Duffing equation allows us to obtain explicit approximate formulas for the frequency and the solution as a function of the complete elliptic integral of the first kind and the Jacobi elliptic function, respectively. These explicit formulas are valid for all values of the initial amplitude and we conclude that this cubication method works very well for the whole range of initial amplitudes. Excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed, and the relative error for the approximate frequency is as low as 0.071%. Unlike other approximate methods applied to this oscillator, which are not capable of exactly reproducing the behaviour of the approximate frequency when A tends to zero, the cubication method used in this Letter predicts exactly the behaviour of the approximate frequency not only when A tends to infinity, but also when A tends to zero. Finally, a closed-form expression for the approximate frequency is obtained in terms of elementary functions. To do this, the relationship between the complete elliptic integral of the first kind and the arithmetic-geometric mean as well as Legendre's formula to approximately obtain this mean are used.
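The cubication procedure can be sketched numerically: expand the restoring force f(x) = x³/(1 + x²) in Chebyshev polynomials, read off the equivalent Duffing coefficients, and evaluate the frequency through the complete elliptic integral computed via the arithmetic-geometric mean, as the abstract indicates. The quadrature sizes are implementation choices, and the "exact" frequency below is obtained by numerical quadrature of the energy integral rather than from the paper.

```python
import numpy as np

def f(x):                      # Duffing-harmonic restoring force
    return x**3 / (1.0 + x**2)

def cubicate(A, n=400):
    """Chebyshev cubication on [-A, A]: f(A*y)/A ~ alpha*y + beta*y**3."""
    theta = (np.arange(n) + 0.5) * np.pi / n   # Chebyshev-Gauss quadrature
    y = np.cos(theta)
    g = f(A * y) / A
    b1 = 2.0 / n * np.sum(g * np.cos(theta))
    b3 = 2.0 / n * np.sum(g * np.cos(3 * theta))
    return b1 - 3.0 * b3, 4.0 * b3             # alpha, beta

def agm(a, b, tol=1e-14):
    while abs(a - b) > tol:
        a, b = 0.5 * (a + b), np.sqrt(a * b)
    return a

def cubication_freq(A):
    alpha, beta = cubicate(A)
    m = beta / (2.0 * (alpha + beta))          # elliptic parameter
    K = np.pi / (2.0 * agm(1.0, np.sqrt(1.0 - m)))   # K(m) via the AGM
    return np.sqrt(alpha + beta) * np.pi / (2.0 * K)

def exact_freq(A, n=20000):
    # Period from the energy integral; V(x) = x**2/2 - log(1 + x**2)/2
    V = lambda x: 0.5 * x**2 - 0.5 * np.log1p(x**2)
    phi = (np.arange(n) + 0.5) * (np.pi / 2) / n     # midpoint rule
    x = A * np.sin(phi)
    integrand = A * np.cos(phi) / np.sqrt(2.0 * (V(A) - V(x)))
    T = 4.0 * np.sum(integrand) * (np.pi / 2) / n
    return 2.0 * np.pi / T

A = 1.0
w_approx = cubication_freq(A)
w_exact = exact_freq(A)
rel_error = abs(w_approx - w_exact) / w_exact
```

Consistent with the quoted figure of at most 0.071%, the relative frequency error at A = 1 comes out well below one percent.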
International Nuclear Information System (INIS)
Lydia, Emilio J.; Barros, Ricardo C.
2011-01-01
In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)
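The notion of a sweep that is free from spatial truncation error can be illustrated on a single angle in a purely absorbing slab, where per-cell analytic (exponential) propagation reproduces the exact transmitted flux while a diamond-difference sweep does not. This is a much-reduced sketch, not the paper's response-matrix SN method with scattering; all parameter values are assumed.

```python
import numpy as np

# One angle, purely absorbing slab: mu * dpsi/dx + sigma * psi = q
mu, sigma, q, L, N = 0.7, 1.0, 0.5, 5.0, 20
h = L / N
psi_in = 1.0

def sweep_exact(psi0):
    # Analytic per-cell propagation: composing exact cell solutions leaves
    # no spatial truncation error at the cell edges.
    psi = psi0
    for _ in range(N):
        att = np.exp(-sigma * h / mu)
        psi = psi * att + (q / sigma) * (1.0 - att)
    return psi

def sweep_diamond(psi0):
    # Diamond-difference sweep: second-order accurate, so it does carry
    # a spatial truncation error.
    psi = psi0
    a = sigma * h / (2.0 * mu)
    for _ in range(N):
        psi = ((1.0 - a) * psi + q * h / mu) / (1.0 + a)
    return psi

analytic = psi_in * np.exp(-sigma * L / mu) + (q / sigma) * (1.0 - np.exp(-sigma * L / mu))
err_exact = abs(sweep_exact(psi_in) - analytic)
err_dd = abs(sweep_diamond(psi_in) - analytic)
```

The analytic sweep matches the closed-form exit flux to round-off, whereas the diamond-difference sweep differs at the level of its truncation error; the paper achieves the analogous truncation-free property for the full SN problem with scattering via its spectral analysis and response matrix.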
Filatovas, Ernestas; Podkopaev, Dmitry; Kurasova, Olga
2015-01-01
Interactive methods of multiobjective optimization repetitively derive Pareto optimal solutions based on decision maker’s preference information and present the obtained solutions for his/her consideration. Some interactive methods save the obtained solutions into a solution pool and, at each iteration, allow the decision maker considering any of solutions obtained earlier. This feature contributes to the flexibility of exploring the Pareto optimal set and learning about the op...
International Nuclear Information System (INIS)
Kamiya, Yukihide.
1980-05-01
A computational method has been developed for the astral survey procedure of the primary monuments, which consists in the measurement of short chords and perpendicular distances. This method can be applied to any astral polygon with the lengths of chords and vertical angles different from each other. We will study the propagation of measurement errors for the KEK-PF storage ring, and also examine its effect on the closed orbit distortion. (author)
Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L
2011-01-01
To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines), did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% medication lines) were recorded. The main cause also was inappropriate management and forgetfulness (24%). The simultaneous recording of incidences due to complaints and new medication coincided in 33.3%. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy to identify Unit
International Nuclear Information System (INIS)
Jung, Won Dea; Kim, Jae Whan; Ha, Jae Joo; Yoon, Wan C.
1999-01-01
This study was performed to comparatively evaluate selected Human Reliability Analysis (HRA) methods which mainly focus on cognitive error analysis, and to derive the requirements of a new human error analysis (HEA) framework for Accident Management (AM) in nuclear power plants (NPPs). In order to achieve this goal, we carried out a case study of human error analysis on an AM task in NPPs. In the study we evaluated three cognitive HEA methods, HRMS, CREAM and PHECA, which were selected through a review of the seven currently available cognitive HEA methods. The task of reactor cavity flooding was chosen for the application study as one of the typical tasks of AM in NPPs. From the study, we derived seven requirement items for a new HEA method for AM in NPPs. We could also evaluate the applicability of the three cognitive HEA methods to AM tasks. CREAM is considered to be more appropriate than the others for the analysis of AM tasks, whereas PHECA is regarded as less appropriate both as a predictive HEA technique and for the analysis of AM tasks. In addition, the advantages and disadvantages of each method are described. (author)
Directory of Open Access Journals (Sweden)
Suheel Abdullah Malik
Full Text Available In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem, and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), show that the suggested scheme is fairly accurate and viable for solving such problems.
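The reduction to a global error minimization problem solved by a GA can be illustrated on a toy ODE with a one-parameter-family trial solution: u' + u = 0 with u(0) = 1 and trial u(x) = exp(a·x + b), so the exact parameters are a = -1, b = 0. The GA operators and tuning constants below are arbitrary choices, not those of the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

xs = np.linspace(0.0, 2.0, 21)   # collocation points

def fitness(p):
    # Squared ODE residual at the collocation points plus an
    # initial-condition penalty, as in a global error minimization setup.
    a, b = p
    u = np.exp(a * xs + b)
    residual = a * u + u             # u' + u, with u' = a*u
    ic_error = np.exp(b) - 1.0       # u(0) - 1
    return np.sum(residual**2) + ic_error**2

# Minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation, and elitism.
pop = rng.uniform(-3, 3, size=(60, 2))
for gen in range(200):
    fit = np.array([fitness(p) for p in pop])
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    w = rng.uniform(0, 1, (len(pop), 1))
    children = w * parents + (1 - w) * parents[::-1]
    children += rng.normal(0, 0.05, children.shape)
    children[0] = pop[np.argmin(fit)]          # keep the elite unmutated
    pop = children

best = pop[np.argmin([fitness(p) for p in pop])]
```

The fitness function plays the same role as in the paper — it turns "does the trial function satisfy the NODE?" into a number the GA can minimize — but here for a deliberately trivial equation.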
Directory of Open Access Journals (Sweden)
A. Zakerian
2011-12-01
Full Text Available Background and aims: Today, in many jobs such as those in the nuclear, military and chemical industries, human errors may result in a disaster. Accidents in different parts of the world emphasize this subject; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Human error identification, especially in important and intricate systems, is therefore necessary and unavoidable for devising control methods. Methods: This research is a case study performed in the Zagross Methanol Company in Asalouye (South Pars). The walking-talking-through method with process experts and control room operators, together with inspection of technical documents, was used for collecting the required information and completing the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated 71.25% unacceptable errors, 26.75% undesirable errors, 2% acceptable (with revision) errors and 0% acceptable errors; after corrective actions, the forecast risk levels were 0% unacceptable errors, 4.35% undesirable errors, 58.55% acceptable (with revision) errors and 37.1% acceptable errors. Conclusion: It can be concluded that this method is applicable and useful in different industries, especially chemical industries, for identifying human errors that may lead to accidents.
Ibrahim, Musadiq; Lapthorn, Adrian Jonathan; Ibrahim, Mohammad
2017-08-01
The Protein Data Bank (PDB) is the single most important repository of structural data for proteins and other biologically relevant molecules. It is therefore critically important to keep the PDB data as error-free as possible. In this study, we have critically examined PDB structures of 292 protein molecules which were deposited in the repository along with potentially incorrect ligands labelled as unknown ligands (UNK). Pharmacophores were generated for all the protein structures by using Discovery Studio Visualizer (DSV) and Accelrys Catalyst®. The generated pharmacophores were subjected to a database search containing the reported ligands. Ligands obtained through pharmacophore searching were then checked for fit to the observed electron density map by using Coot®. The predicted ligands obtained via pharmacophore searching fitted the observed electron density map well, in comparison to the ligands reported in the PDB entries. Based on our study we have learned that, until May 2016, among the 292 submitted structures in the PDB, at least 20 structures have ligands with a clear electron density but have been incorrectly labelled as unknown ligands (UNK). We have demonstrated that pharmacophore searching and Coot® can provide potential help in finding suitable known ligands for these protein structures, the former for ligand search and the latter for electron density analysis. The use of these two techniques can facilitate the quick and reliable labelling of ligands where the electron density map serves as a reference. Copyright © 2017 Elsevier Inc. All rights reserved.
Methods for determining and processing 3D errors and uncertainties for AFM data analysis
Klapetek, P.; Nečas, D.; Campbellová, A.; Yacoot, A.; Koenders, L.
2011-02-01
This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion.
Methods for determining and processing 3D errors and uncertainties for AFM data analysis
International Nuclear Information System (INIS)
Klapetek, P; Campbellová, A; Nečas, D; Yacoot, A; Koenders, L
2011-01-01
This paper describes the processing of three-dimensional (3D) scanning probe microscopy (SPM) data. It is shown that 3D volumetric calibration error and uncertainty data can be acquired for both metrological atomic force microscope systems and commercial SPMs. These data can be used within nearly all the standard SPM data processing algorithms to determine local values of uncertainty of the scanning system. If the error function of the scanning system is determined for the whole measurement volume of an SPM, it can be converted to yield local dimensional uncertainty values that can in turn be used for evaluation of uncertainties related to the acquired data and for further data processing applications (e.g. area, ACF, roughness) within direct or statistical measurements. These have been implemented in the software package Gwyddion
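Converting a volumetric error function into local uncertainty values, as described above, amounts to interpolating a calibrated 3D error map at each probe position. The following is a sketch with a synthetic (randomly generated) map and plain trilinear interpolation — an assumption for illustration, not Gwyddion's actual implementation.

```python
import numpy as np

# Hypothetical volumetric error map on a coarse calibration grid:
# err_map[i, j, k] is the measured length error at grid point (x_i, y_j, z_k).
nx, ny, nz = 5, 5, 3
xg = np.linspace(0, 100, nx)   # scanner x range (arbitrary units)
yg = np.linspace(0, 100, ny)
zg = np.linspace(0, 10, nz)
rng = np.random.default_rng(5)
err_map = rng.normal(0.0, 2.0, (nx, ny, nz))   # synthetic calibration data

def local_error(x, y, z):
    """Trilinear interpolation of the error map at a probe position."""
    def locate(v, grid):
        i = np.clip(np.searchsorted(grid, v) - 1, 0, len(grid) - 2)
        t = (v - grid[i]) / (grid[i + 1] - grid[i])
        return i, t
    i, tx = locate(x, xg); j, ty = locate(y, yg); k, tz = locate(z, zg)
    c = err_map[i:i + 2, j:j + 2, k:k + 2]      # surrounding 2x2x2 cell
    cx = c[0] * (1 - tx) + c[1] * tx
    cxy = cx[0] * (1 - ty) + cx[1] * ty
    return cxy[0] * (1 - tz) + cxy[1] * tz

# At a calibration node the interpolant reproduces the calibrated value.
val = local_error(xg[1], yg[2], zg[1])
mid = local_error(37.5, 12.0, 4.0)   # a point in the interior of a cell
```

Attaching such a local value to every pixel of a scan is what allows the downstream quantities (area, ACF, roughness) to carry position-dependent uncertainties.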
Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System
Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou
2017-01-01
The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a n...
Error analysis of the finite element and finite volume methods for some viscoelastic fluids
Czech Academy of Sciences Publication Activity Database
Lukáčová-Medviďová, M.; Mizerová, H.; She, B.; Stebel, Jan
2016-01-01
Roč. 24, č. 2 (2016), s. 105-123 ISSN 1570-2820 R&D Projects: GA ČR(CZ) GAP201/11/1304 Institutional support: RVO:67985840 Keywords: error analysis * Oldroyd-B type models * viscoelastic fluids Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2016 http://www.degruyter.com/view/j/jnma.2016.24.issue-2/jnma-2014-0057/jnma-2014-0057.xml
1983-08-01
[Table residue: standard errors of the item parameter b under bell-shaped and rectangular item distributions, for n = 45 and n = 90 items and N = 1500 and N = 6000 examinees.]
CSIR Research Space (South Africa)
Kruger, OA
2000-01-01
Full Text Available on face-to-face angle measurements. The results show that flatness and eccentricity deviations have less effect on angle measurements than do pyramidal errors. 1. Introduction: Polygons and angle blocks are the most important transfer standards in the field of angle metrology. Polygons are used by national metrology institutes (NMIs) as transfer standards to industry, where they are used in conjunction with autocollimators to calibrate index tables, rotary tables and other forms of angle-measuring equipment.
2014-04-01
• Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244
• A posteriori analysis of multirate..., 2013, submitted
• A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of...
developed over nearly six decades of activity and the major developments form a highly interconnected web. We do not attempt to review the history of
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple-revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary-value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path-approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable-fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low-fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine-precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm-start our perturbed algorithm.
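The path-approximation idea can be illustrated on a scalar initial-value problem: hold a set of Chebyshev nodes fixed and Picard-iterate the integral form of the ODE, fitting the right-hand side with a Chebyshev series at each sweep. This is a minimal sketch of the concept only; the function name, node choice and parameters below are ours, and the paper's flight-dynamics algorithm (vector states, perturbed two-body forces, variable-fidelity models, warm starting) is far more elaborate.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_picard(f, t0, t1, x0, degree=16, iters=40):
    """Picard iteration with a Chebyshev series as the path approximation."""
    # Chebyshev-Gauss-Lobatto nodes in [-1, 1], mapped onto [t0, t1]
    tau = np.cos(np.pi * np.arange(degree + 1) / degree)
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0
    scale = 0.5 * (t1 - t0)                 # dt/dtau for the change of variable
    x = np.full_like(t, float(x0))          # initial guess: the constant path x0
    for _ in range(iters):
        # fit f(t, x_k(t)) with a Chebyshev series, then integrate it term-wise
        series = C.chebfit(tau, scale * f(t, x), degree)
        x = x0 + C.chebval(tau, C.chebint(series, lbnd=-1.0))
    return t, x

# test problem: x' = x, x(0) = 1 on [0, 1]; the converged path should match e**t
t, x = chebyshev_picard(lambda t, x: x, 0.0, 1.0, 1.0)
err = np.max(np.abs(x - np.exp(t)))
print(err)
```

The whole path converges at once rather than step by step, which is the property the abstract exploits for low-fidelity warm starts.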
International Nuclear Information System (INIS)
Shang Yadong
2008-01-01
The extended hyperbolic functions method for nonlinear wave equations is presented. Based on this method, we obtain multiple exact explicit solutions of the nonlinear evolution equations which describe the resonance interaction between the long wave and the short wave. The solutions obtained in this paper include (a) solitary wave solutions of bell-type for S and L, (b) solitary wave solutions of kink-type for S and bell-type for L, (c) solitary wave solutions of a compound of the bell-type and the kink-type for S and L, (d) singular travelling wave solutions, (e) periodic travelling wave solutions of triangle function types, and solitary wave solutions of rational function types. The variety of structures of the exact solutions to the long-short wave equation is illustrated. The methods presented here can also be used to obtain exact solutions of nonlinear wave equations in n dimensions.
Soliton-like solutions to the GKdV equation by extended mapping method
International Nuclear Information System (INIS)
Wu Ranchao; Sun Jianhua
2007-01-01
In this note, many new exact solutions of the generalized KdV equation, such as rational solutions, periodic solutions in terms of Jacobi elliptic and triangular functions, and soliton-like solutions, are constructed by symbolic computation and the extended mapping method, with the auxiliary ordinary differential equation replaced by a more general one.
Multiple travelling wave solutions of nonlinear evolution equations using a unified algebraic method
International Nuclear Information System (INIS)
Fan Engui
2002-01-01
A new direct and unified algebraic method for constructing multiple travelling wave solutions of general nonlinear evolution equations is presented and implemented in a computer algebraic system. Compared with most of the existing tanh methods, the Jacobi elliptic function method or other sophisticated methods, the proposed method not only gives new and more general solutions, but also provides a guideline to classify the various types of the travelling wave solutions according to the values of some parameters. The solutions obtained in this paper include (a) kink-shaped and bell-shaped soliton solutions, (b) rational solutions, (c) triangular periodic solutions and (d) Jacobi and Weierstrass doubly periodic wave solutions. Among them, the Jacobi elliptic periodic wave solutions exactly degenerate to the soliton solutions at a certain limit condition. The efficiency of the method can be demonstrated on a large variety of nonlinear evolution equations such as those considered in this paper, KdV-MKdV, Ito's fifth MKdV, Hirota, Nizhnik-Novikov-Veselov, Broer-Kaup, generalized coupled Hirota-Satsuma, coupled Schroedinger-KdV, (2+1)-dimensional dispersive long wave, (2+1)-dimensional Davey-Stewartson equations. In addition, as an illustrative sample, the properties of the soliton solutions and Jacobi doubly periodic solutions for the Hirota equation are shown by some figures. The links among our proposed method, the tanh method, extended tanh method and the Jacobi elliptic function method are clarified generally. (author)
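The tanh-family ansatz at the core of such algebraic methods can be made concrete. The SymPy sketch below (our own toy illustration, not code from the paper) applies the classical tanh method to the once-integrated travelling-wave reduction of KdV and recovers the familiar sech² soliton; the symbols a0, a1, a2, k, c and the helper d_dxi are ours.

```python
import sympy as sp

# Classical tanh method on KdV, u_t + 6*u*u_x + u_xxx = 0.  With u = u(xi),
# xi = x - c*t, one integration (constant zero) gives  -c*u + 3*u**2 + u'' = 0.
y, k, c = sp.symbols('y k c')            # y stands for tanh(k*xi)
a0, a1, a2 = sp.symbols('a0 a1 a2')
u = a0 + a1*y + a2*y**2                  # finite power series in tanh

def d_dxi(expr):
    # chain rule: d/dxi tanh(k*xi) = k*(1 - tanh(k*xi)**2)
    return sp.diff(expr, y) * k * (1 - y**2)

residual = sp.expand(-c*u + 3*u**2 + d_dxi(d_dxi(u)))
# the residual is a polynomial in y; every coefficient must vanish
eqs = sp.Poly(residual, y).all_coeffs()
sols = sp.solve(eqs, [a0, a1, a2, c], dict=True)

# the non-trivial branch a0 = 2*k**2, a1 = 0, a2 = -2*k**2, c = 4*k**2 gives
# u = 2*k**2*(1 - tanh(k*xi)**2) = 2*k**2*sech(k*xi)**2, the KdV soliton
soliton = [s for s in sols if sp.simplify(s.get(a2, 0) + 2*k**2) == 0]
print(soliton)
```

Balancing the highest power of y (here y⁴) fixes the series degree in advance, which is the classification step the abstract generalizes.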
International Nuclear Information System (INIS)
Chen Yong; Wang Qi; Li Biao
2005-01-01
Based on a new general ansatz and a general subequation, a new general algebraic method named the elliptic equation rational expansion method is devised for constructing multiple travelling wave solutions, in terms of rational special functions, for nonlinear evolution equations (NEEs). We apply the proposed method to the Whitham-Broer-Kaup equation and explicitly construct a series of exact solutions which include rational-form solitary wave solutions, rational-form triangular periodic wave solutions and rational wave solutions as special cases. In addition, the links between our proposed method and the method of Fan [Chaos, Solitons and Fractals 2004;20:609] are also clarified.
Methods of Uranium Determination in solutions of Tributyl Phosphate and Kerosene
International Nuclear Information System (INIS)
Petrement Eguiluz, J.; Palomares Delgado, F.
1962-01-01
A new analytical method for the determination of uranium in organic solutions of tributyl phosphate and kerosene is proposed. In this method the uranium is re-extracted into an aqueous phase by reduction with cadmium in acid solution, and can then be determined in this solution by the usual methods. In the case of very dilute solutions, a direct spectrophotometric determination of uranium in the organic phase with dibenzoylmethane is proposed. (Author) 21 refs
An Algebraic Method for Constructing Exact Solutions to Difference-Differential Equations
International Nuclear Information System (INIS)
Wang Zhen; Zhang Hongqing
2006-01-01
In this paper, we present a method to solve difference-differential equations. As examples, we apply the method to the discrete KdV equation and the Ablowitz-Ladik lattice equation. As a result, many exact solutions are obtained with the help of Maple, including soliton solutions expressed by the hyperbolic functions sinh and cosh, periodic solutions expressed by sin and cos, and rational solutions. The method can also be applied to other nonlinear difference-differential equations.
Method of precipitating uranium from an aqueous solution and/or sediment
Tokunaga, Tetsu K; Kim, Yongman; Wan, Jiamin
2013-08-20
A method for precipitating uranium from an aqueous solution and/or sediment comprising uranium and/or vanadium is presented. The method includes precipitating uranium as a uranyl vanadate through mixing an aqueous solution and/or sediment comprising uranium and/or vanadium and a solution comprising a monovalent or divalent cation to form the corresponding cation uranyl vanadate precipitate. The method also provides a pathway for extraction of uranium and vanadium from an aqueous solution and/or sediment.
Method for Non-Invasive Determination of Chemical Properties of Aqueous Solutions
Todd, Paul W. (Inventor); Jones, Alan (Inventor); Thomas, Nathan A. (Inventor)
2016-01-01
A method for non-invasively determining a chemical property of an aqueous solution is provided. The method comprises providing a colored solute having a known light absorbance spectrum, transmitting light through the solute at two different wavelengths, measuring the light absorbance of the colored solute at each of the two wavelengths, and comparing the two absorbance values to determine a chemical property of the aqueous solution.
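The two-wavelength comparison amounts to solving two Beer-Lambert equations for two unknown concentrations. The sketch below illustrates that arithmetic for a hypothetical acid-base indicator; every number (the molar absorptivities, the measured absorbances, the pKa) is invented for illustration and does not come from the patent.

```python
import numpy as np

# Two Beer-Lambert equations at two wavelengths, solved for the concentrations
# of the acid (HA) and base (A-) forms of a hypothetical colored indicator.
eps = np.array([[4500.0,  300.0],    # eps at lambda1 for [HA, A-]  (L/mol/cm)
                [ 800.0, 5200.0]])   # eps at lambda2 for [HA, A-]
path_cm = 1.0                        # optical path length
absorbance = np.array([0.50, 0.75])  # measured A(lambda1), A(lambda2)

conc = np.linalg.solve(eps * path_cm, absorbance)   # [HA], [A-] in mol/L
pKa = 4.8                                           # hypothetical indicator pKa
pH = pKa + np.log10(conc[1] / conc[0])              # Henderson-Hasselbalch
print(conc, pH)
```

Because only absorbance ratios of the dissolved indicator are needed, nothing has to be inserted into the solution, which is the non-invasive aspect the title refers to.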
Construct solitary solutions of discrete hybrid equation by Adomian Decomposition Method
International Nuclear Information System (INIS)
Wang Zhen; Zhang Hongqing
2009-01-01
In this paper, we apply the Adomian Decomposition Method to solve differential-difference equations. A typical example illustrates the validity and the great potential of the Adomian Decomposition Method for differential-difference equations. Kink-shaped and bell-shaped solitary solutions are presented. Comparisons are made between the results of the proposed method and exact solutions; they show that the Adomian Decomposition Method is an attractive approach for solving differential-difference equations.
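The ADM recursion is easy to state on a toy continuous problem. The sketch below (our own stand-in example, not the paper's differential-difference application) applies ADM to the Riccati equation u' = 1 - u², u(0) = 0, whose exact solution is tanh(t): the zeroth component integrates the source term, and each later component integrates an Adomian polynomial of the nonlinearity.

```python
import sympy as sp

# Adomian decomposition for u' = 1 - u**2, u(0) = 0 (exact solution tanh(t)).
t = sp.symbols('t')
N = 8
u = [t]                                    # u0 = u(0) + integral of the source 1
for n in range(N):
    # Adomian polynomial A_n for the nonlinearity u**2
    A_n = sum(u[j] * u[n - j] for j in range(n + 1))
    u.append(-sp.integrate(A_n, (t, 0, t)))

approx = sp.expand(sum(u))                 # partial sum of the decomposition
err = sp.series(approx - sp.tanh(t), t, 0, 10).removeO()
print(sp.expand(err))   # 0: the partial sum matches tanh(t) through t**9
```

Each component is obtained by quadrature alone, which is why ADM needs no linearization or discretization.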
Modified harmonic balance method for the solution of nonlinear jerk equations
Rahman, M. Saifur; Hasan, A. S. M. Z.
2018-03-01
In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to apply than the classical harmonic balance method because fewer nonlinear algebraic equations need to be solved. The results obtained from this method are compared with those obtained from other existing analytical methods available in the literature and with a numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods from the literature.
Further improved F-expansion method and new exact solutions of Konopelchenko-Dubrovsky equation
International Nuclear Information System (INIS)
Wang Dengshan; Zhang Hongqing
2005-01-01
In this paper, with the aid of symbolic computation, we improve the extended F-expansion method of [Chaos, Solitons and Fractals 2004;22:111] and propose a further improved F-expansion method. Using this method, we obtain many new exact solutions of the (2+1)-dimensional Konopelchenko-Dubrovsky equation which, to our knowledge, have not been reported before. The solutions we obtain are more general than those given by the extended F-expansion method and include Jacobi elliptic function solutions, soliton-like solutions, trigonometric function solutions and so on. The method can also be applied to other partial differential equations to obtain many new exact solutions.
Du, Chen-Zhao; Wu, Zhi-Sheng; Zhao, Na; Zhou, Zheng; Shi, Xin-Yuan; Qiao, Yan-Jiang
2016-10-01
To establish a rapid quantitative method for online monitoring of chlorogenic acid in the aqueous extraction solution of Lonicera Japonica Flos by micro-electromechanical near-infrared spectroscopy (MEMS-NIR). High performance liquid chromatography (HPLC) was used as the reference method. The Kennard-Stone (K-S) algorithm was used to divide the sample sets, and partial least squares (PLS) regression was adopted to establish the multivariate model between the HPLC contents and the NIR spectra. Synergy interval partial least squares (SiPLS) was used to select the modeling waveband for the PLS models. RPD was used to evaluate the prediction performance of the models, and the MDL was calculated based on the two types of error detection theory, so that on-line analytical modeling of the Lonicera Japonica Flos extraction process could be expressed scientifically through the MDL. The results show that the model established with multiplicative scatter correction (MSC) was the best, with a root mean square error of cross validation (RMSECV), root mean square error of correction (RMSEC) and root mean square error of prediction (RMSEP) for chlorogenic acid of 1.707, 1.489 and 2.362, respectively; the determination coefficient of the calibration model was 0.9985 and that of the prediction was 0.9881. The RPD value is 9.468. The MDL (0.04215 g·L⁻¹) selected by SiPLS is less than the original, which demonstrates that SiPLS improves the prediction performance of the model. In this study, a more accurate expression of the prediction performance of the model is obtained from the two types of error detection theory, further illustrating that MEMS-NIR spectroscopy can be used for on-line monitoring of the Lonicera Japonica Flos extraction process. Copyright© by the Chinese Pharmaceutical Association.
International Nuclear Information System (INIS)
Kupka, F.
1997-11-01
This thesis deals with the extension of sparse grid techniques to spectral methods for the solution of partial differential equations with periodic boundary conditions. A review on boundary and initial-boundary value problems and a discussion on numerical resolution is used to motivate this research. Spectral methods are introduced by projection techniques, and by three model problems: the stationary and the transient Helmholtz equations, and the linear advection equation. The approximation theory on the hyperbolic cross is reviewed and its close relation to sparse grids is demonstrated. This approach extends to non-periodic problems. Various Sobolev spaces with dominant mixed derivative are introduced to provide error estimates for Fourier approximation and interpolation on the hyperbolic cross and on sparse grids by means of Sobolev norms. The theorems are immediately applicable to the stability and convergence analysis of sparse grid spectral methods. This is explicitly demonstrated for the three model problems. A variant of the von Neumann condition is introduced to simplify the stability analysis of the time-dependent model problems. The discrete Fourier transformation on sparse grids is discussed together with its software implementation. Results on numerical experiments are used to illustrate the performance of the new method with respect to the smoothness properties of each example. The potential of the method in mathematical modelling is estimated and generalizations to other sparse grid methods are suggested. The appendix includes a complete Fortran90 program to solve the linear advection equation by the sparse grid Fourier collocation method and a third-order Runge-Kutta routine for integration in time. (author)
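The hyperbolic cross mentioned above can be made concrete in a few lines: keep only the Fourier modes whose index product is small, which is what links the approximation class to sparse grids. The threshold (1 + |k1|)(1 + |k2|) ≤ N below is one common form of the index set; the exact set used in the thesis may differ.

```python
# Keep only frequency pairs (k1, k2) with (1 + |k1|)(1 + |k2|) <= N: one common
# form of the hyperbolic-cross index set behind sparse grid Fourier methods.
def hyperbolic_cross(N):
    return [(k1, k2)
            for k1 in range(-N, N + 1)
            for k2 in range(-N, N + 1)
            if (1 + abs(k1)) * (1 + abs(k2)) <= N]

N = 32
cross = hyperbolic_cross(N)
full = (2 * N + 1) ** 2          # mode count of the full tensor-product grid
print(len(cross), full)          # the cross keeps far fewer modes
```

For functions with dominant mixed derivatives, discarding the large-product modes loses little accuracy while the mode count drops from O(N²) to O(N log N).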
Longitudinal Cut Method Revisited: A Survey on the Main Error Sources
Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo
2000-01-01
Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...
2013-06-24
MFE_i and GFV_i error functionals, i = 1..5 (equation definitions garbled in extraction) ... common to both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric progression and of the use of quadrature, we ... their true value, the error in the quantity of interest MFE E(e, ψ) or GFV E(e, ψ). Tables 1 and 2 show this using coarse and fine forward
Cockrell, C. R.
1989-01-01
Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly-polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential permittivity profiles. These two approximate solutions are also compared with the exact solutions.
International Nuclear Information System (INIS)
Zhong, Z.
1985-01-01
A new approach to the solution of certain differential equations, the double complex function method, is developed, combining ordinary complex numbers and hyperbolic complex numbers. This method is applied to the theory of stationary axisymmetric Einstein equations in general relativity. A family of exact double solutions, double transformation groups, and n-soliton double solutions are obtained
Fundamental solution of the problem of linear programming and method of its determination
Petrunin, S. V.
1978-01-01
The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using some exact solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct the exact complex solutions for nonlinear partial differential equations. The method is implemented for the NLS equation, a new Hamiltonian amplitude equation, the coupled Schrodinger-KdV equations and the Hirota-Maccari equations. New exact complex solutions are obtained.
Efimova, Olga Yu.
2010-01-01
A modification of the simplest equation method for finding exact solutions of nonlinear partial differential equations is presented. Using this method, we obtain exact solutions of the generalized Korteweg-de Vries equation with a cubic source and of the third-order Kudryashov-Sinelshchikov equation describing nonlinear waves in liquids with gas bubbles.
International Nuclear Information System (INIS)
Marseguerra, Marzio; Zio, Enrico; Librizzi, Massimo
2006-01-01
The current 'second generation' approaches in human reliability analysis focus their attention on the contextual conditions under which a given action is performed rather than on the notion of inherent human error probabilities, as was done in the earlier 'first generation' techniques. Among the 'second generation' methods, this paper considers the Cognitive Reliability and Error Analysis Method (CREAM) and proposes some developments with respect to a systematic procedure for computing probabilities of action failure. The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm which is here further extended to include uncertainty on the qualification of the conditions under which the action is performed and to account for the fact that the effects of the common performance conditions (CPCs) on performance reliability may not all be equal. By the proposed approach, the probability of action failure is estimated by rating the performance conditions in terms of their effect on the action
Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration
2017-07-01
We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Medviďová-Lukáčová, M.; Nečasová, Šárka; Novotný, A.; She, Bangwei
2018-01-01
Roč. 16, č. 1 (2018), s. 150-183 ISSN 1540-3459 R&D Projects: GA ČR GA16-03230S EU Projects: European Commission(XE) 320078 - MATHEF Institutional support: RVO:67985840 Keywords : Navier-Stokes system * finite element numerical method * finite volume numerical method * asymptotic preserving schemes Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.865, year: 2016 http://epubs.siam.org/doi/10.1137/16M1094233
International Nuclear Information System (INIS)
Hubert, J.
1979-01-01
The variational finite element method (of the Rayleigh-Ritz type) has been applied to solve the standard diffusion-convection equation of radial flow in a dispersive medium. It was shown that imposing the boundary condition ΔC/Δx = 0 (null concentration gradient) introduced large errors in the computed results. To remedy this, the condition was imposed at the free end of an artificial domain whose other end was joined to the downstream boundary of the investigated domain. The results of the calculations, compared with the known analytical solutions for parallel flow, show good accuracy. The method was used to discuss the applicability of the approximate analytical solutions of the radial flow. (author)
Radioactivity measurements of 32P solutions by calorimetric methods
International Nuclear Information System (INIS)
Genka, T.; Nataredja, I.K.
1992-01-01
Radioactivity of a 32P solution is measured with a twin-cup heat-flow microcalorimeter. In order to convert the whole decay energy evolved from the 32P solution in a glass vial into thermal power, a 5 mm-thick lead container was used as a radiation absorber. Corrections for heat loss due to thermal radiation and bremsstrahlung escape, as well as for the effect of a 33P impurity, are applied. The overall uncertainty of the nondestructive measurement, as the sample remains in its container, is estimated to be ±1.5%. Estimates of the uncertainties are also discussed in detail. (author)
International Nuclear Information System (INIS)
Kaya, Dogan; El-Sayed, Salah M.
2003-01-01
In this Letter we present the Adomian decomposition method (ADM) for obtaining numerical soliton-like solutions of the potential Kadomtsev-Petviashvili (PKP) equation. We prove the convergence of the ADM and obtain the exact and numerical solitary-wave solutions of the PKP equation for certain initial conditions. The ADM yields an analytic approximate solution with a fast convergence rate and high accuracy, consistent with previous works. The numerical solutions are compared with the known analytical solutions.
Error-free pathology: applying lean production methods to anatomic pathology.
Condel, Jennifer L; Sharbaugh, David T; Raab, Stephen S
2004-12-01
The current state of our health care system calls for dramatic changes. In their pathology department, the authors believe these changes may be accomplished by accepting the long-term commitment of applying a lean production system. The ideal state of zero pathology errors is one that should be pursued by consistently asking, "Why can't we?" The philosophy of lean production systems began in the manufacturing industry: "All we are doing is looking at the time from the moment the customer gives us an order to the point when we collect the cash. And we are reducing that time line by removing non-value added wastes". The ultimate goals in pathology and overall health care are not so different. The authors' intention is to provide the patient (customer) with the most accurate diagnostic information in a timely and efficient manner. Their lead histotechnologist recently summarized this philosophy: she indicated that she felt she could sleep better at night knowing she truly did the best job she could. Her chances of making an error (in cutting or labeling) were dramatically decreased in the one-by-one continuous flow work process compared with previous practices. By designing a system that enables employees to be successful in meeting customer demand, and by empowering the frontline staff in the development and problem solving processes, one can meet the challenges of eliminating waste and build an improved, efficient system.
Methods to reduce medication errors in a clinical trial of an investigational parenteral medication
Directory of Open Access Journals (Sweden)
Gillian L. Fell
2016-12-01
Full Text Available There are few evidence-based guidelines to inform optimal design of complex clinical trials, such as those assessing the safety and efficacy of intravenous drugs administered daily with infusion times over many hours per day and treatment durations that may span years. This study is a retrospective review of inpatient administration deviation reports for an investigational drug that is administered daily with infusion times of 8–24 h, and variable treatment durations for each patient. We report study design modifications made in 2007–2008 aimed at minimizing deviations from an investigational drug infusion protocol approved by an institutional review board and the United States Food and Drug Administration. Modifications were specifically aimed at minimizing errors of infusion rate, incorrect dose, incorrect patient, or wrong drug administered. We found that the rate of these types of administration errors of the study drug was significantly decreased following adoption of the specific study design changes. This report provides guidance in the design of clinical trials testing the safety and efficacy of study drugs administered via intravenous infusion in an inpatient setting so as to minimize drug administration protocol deviations and optimize patient safety.
BLESS 2: accurate, memory-efficient and fast error correction method.
Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming
2016-08-01
The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Directory of Open Access Journals (Sweden)
Chi-Chang Wang
2013-09-01
Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed nonlinear boundary value problems. First, the monotonicity of a nonlinear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into the mathematical programming problem of an inequality; and finally, based on the residual correction concept, complex constrained-solution problems are transformed into simpler problems of equation iteration. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain the upper and lower solutions of problems of this kind and to easily identify the error range between mean approximate solutions and exact solutions.
Tight Error Bounds for Fourier Methods for Option Pricing for Exponential Levy Processes
Crocce, Fabian; Häppölä, Juho; Kiessling, Jonas; Tempone, Raul
2016-01-01
for the discontinuities in the asset price. The Lévy-Khintchine formula provides an explicit representation of the characteristic function of a Lévy process (cf. [6]): one can derive an exact expression for the Fourier transform of the solution of the relevant PIDE
Energy Technology Data Exchange (ETDEWEB)
Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com [Physics Department, Faculty of Mathematics and Science, Sebelas Maret University, Jl. Ir. Sutami 36A Kentingan Surakarta 57126 (Indonesia); Deta, U. A. [Physics Department, Faculty of Science and Mathematics Education and Teacher Training, Surabaya State University, Surabaya (Indonesia)
2016-02-08
The analytical solution of the D-dimensional Dirac equation for the hyperbolic tangent potential is investigated using the Nikiforov-Uvarov method. In the case of spin symmetry the D-dimensional Dirac equation reduces to the D-dimensional Schrodinger equation. The D-dimensional relativistic energy spectra are obtained from the D-dimensional relativistic energy eigenvalue equation using MATLAB software. The corresponding D-dimensional radial wave functions are formulated in the form of generalized Jacobi polynomials. The thermodynamic properties of materials are generated from the non-relativistic energy eigenvalues in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy. The thermal quantities of the system, the partition function and the specific heat, are expressed in terms of the error function and the imaginary error function, which are numerically calculated using MATLAB software.
Some error estimates for the lumped mass finite element method for a parabolic problem
Chatzipantelidis, P.; Lazarov, R. D.; Thomée, V.
2012-01-01
for the standard Galerkin method carry over to the lumped mass method whereas nonsmooth initial data estimates require special assumptions on the triangulation. We also discuss the application to time discretization by the backward Euler and Crank-Nicolson methods
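Mass lumping itself is a one-line operation on the assembled matrix. The sketch below builds the consistent P1 mass matrix on a 1D mesh and lumps it by row summation (the standard construction; the paper's analysis concerns triangulations in higher dimensions).

```python
import numpy as np

# Row-sum mass lumping for 1D linear (P1) finite elements: each element
# contributes the consistent block (h/6)*[[2, 1], [1, 2]]; lumping replaces
# the assembled matrix by the diagonal of its row sums.
def mass_matrices(nodes):
    n = len(nodes)
    M = np.zeros((n, n))
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        M[e:e + 2, e:e + 2] += (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    M_lumped = np.diag(M.sum(axis=1))   # row-sum lumping
    return M, M_lumped

nodes = np.linspace(0.0, 1.0, 11)
M, ML = mass_matrices(nodes)
# both matrices integrate constants exactly: total "mass" equals the length 1
print(M.sum(), ML.sum())
```

The diagonal lumped matrix is trivially invertible, which is why lumping is popular for explicit time stepping; the abstract's point is that the Galerkin error estimates survive this substitution under conditions on the mesh.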
A numerical dressing method for the nonlinear superposition of solutions of the KdV equation
International Nuclear Information System (INIS)
Trogdon, Thomas; Deconinck, Bernard
2014-01-01
In this paper we present the unification of two existing numerical methods for the construction of solutions of the Korteweg–de Vries (KdV) equation. The first method is used to solve the Cauchy initial-value problem on the line for rapidly decaying initial data. The second method is used to compute finite-genus solutions of the KdV equation. The combination of these numerical methods allows for the computation of exact solutions that are asymptotically (quasi-)periodic finite-gap solutions and are a nonlinear superposition of dispersive, soliton and (quasi-)periodic solutions in the finite (x, t)-plane. Such solutions are referred to as superposition solutions. We compute these solutions accurately for all values of x and t. (paper)
The generalized tanh method to obtain exact solutions of nonlinear partial differential equation
Gómez, César
2007-01-01
In this paper, we present the generalized tanh method for obtaining exact solutions of nonlinear partial differential equations, and we obtain solitons and exact solutions of some important equations of mathematical physics.
Traveling Wave Solutions of the ZK-BBM Equation by the Sine-Cosine Method
Directory of Open Access Journals (Sweden)
Sadaf Bibi
2014-03-01
Full Text Available Travelling wave solutions are obtained for the ZK-BBM equations by using a relatively new technique called the sine-cosine method. The solution procedure and the obtained results confirm the efficiency of the proposed scheme.
Directory of Open Access Journals (Sweden)
Xiao-zhe Bai
2017-01-01
Globally, cyanobacteria blooms frequently occur, and effective prediction of cyanobacteria blooms in lakes and reservoirs could constitute an essential proactive strategy for water-resource protection. However, cyanobacteria blooms are very complicated because of the internal stochastic nature of the system evolution and the external uncertainty of the observation data. In this study, an adaptive-clustering algorithm is introduced to obtain some typical operating intervals. In addition, the number of nearest neighbors used for modeling was optimized by particle swarm optimization. Finally, a fuzzy linear regression method based on error correction was used to revise the model dynamically near the operating point. We found that the combined method can characterize the evolutionary track of cyanobacteria blooms in lakes and reservoirs. The model constructed in this paper is compared to other cyanobacteria-bloom forecasting methods (e.g., phase space reconstruction and traditional-clustering linear regression), and the average relative error and average absolute error are used to compare the accuracies of these models. The results suggest that the proposed model is superior. As such, the newly developed approach achieves more precise predictions, which can be used to prevent the further deterioration of the water environment.
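The nearest-neighbour modelling step described above can be illustrated with a much simplified stand-in: fit a local least-squares linear model to the k nearest neighbours of the query point. The function name and interface below are hypothetical, and this omits the clustering, swarm optimization, and fuzzy-regression components of the actual study:

```python
import numpy as np

def knn_local_linear(X, y, x_query, k=5):
    """Predict y at x_query from a least-squares affine fit to the k
    nearest neighbours of x_query (Euclidean distance)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    idx = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    A = np.hstack([X[idx], np.ones((k, 1))])            # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef
```

On locally linear data the prediction is exact; the appeal of such local models is that they only need to be accurate near the current operating point.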
Enhanced exact solution methods for the Team Orienteering Problem
Keshtkaran, M.; Ziarati, K.; Bettinelli, A.; Vigo, D.
2016-01-01
The Team Orienteering Problem (TOP) is one of the most investigated problems in the family of vehicle routing problems with profits. In this paper, we propose a Branch-and-Price approach to find proven optimal solutions to TOP. The pricing sub-problem is solved by a bounded bidirectional dynamic
WYD method for an eigen solution of coupled problems
Directory of Open Access Journals (Sweden)
A Harapin
2016-04-01
Designing an efficient and stable algorithm for finding the eigenvalues and eigenvectors is very important from the static as well as the dynamic aspect in coupled problems. Modal analysis requires the first few significant eigenvectors and eigenvalues, while direct integration requires the highest value to ascertain the length of the time step that satisfies the stability condition.

The paper first presents the modification of the well-known WYD method for a solution of single-field problems: an efficient and numerically stable algorithm for computing eigenvalues and the corresponding eigenvectors. The modification is based on the special choice of the starting vector. The starting vector is the static solution of displacements for the applied load, defined as the product of the mass matrix and the unit displacement vector. The starting vector is very close to the theoretical solution, which is important in cases of small subspaces.

Additionally, the paper briefly presents the adopted formulation for solving fluid-structure coupled system problems, which is based on a separate solution for each field. Individual fields (fluid and structure) are solved independently, taking into consideration the interaction information transfer between them at every stage of the iterative solution process. The assessment of eigenvalues and eigenvectors for multiple fields is also presented. This eigenproblem is more complicated than the one for ordinary structural analysis, as the formulation produces non-symmetric matrices.

Finally, a numerical example for the eigen solution of a coupled fluid-structure problem is presented to show the efficiency and the accuracy of the developed algorithm.
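The special starting vector described above, the static displacements for the load M·1, can be paired with plain inverse iteration to approximate the lowest eigenpair of K x = λ M x. A compact numpy sketch (the function name and convergence control are my own, and this is only the single-vector idea, not the full WYD subspace algorithm):

```python
import numpy as np

def lowest_eigenpair(K, M, tol=1e-10, maxit=200):
    """Inverse iteration for K x = lam M x, started from the static
    solution for the load M @ ones (the WYD-style starting vector)."""
    n = K.shape[0]
    x = np.linalg.solve(K, M @ np.ones(n))   # static displacements for M*1
    lam_old = 0.0
    for _ in range(maxit):
        x = x / np.sqrt(x @ (M @ x))         # M-normalize
        lam = x @ (K @ x)                    # Rayleigh quotient
        if abs(lam - lam_old) < tol * abs(lam):
            return lam, x
        lam_old = lam
        x = np.linalg.solve(K, M @ x)        # inverse iteration step
    return lam, x
```

Because the starting vector already resembles the lowest mode, convergence typically takes only a few iterations.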
Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution
Subramanian, Venkat R.
2006-01-01
High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…
A Novel Method for Analytical Solutions of Fractional Partial Differential Equations
Mehmet Ali Akinlar; Muhammet Kurulay
2013-01-01
A new solution technique for analytical solutions of fractional partial differential equations (FPDEs) is presented. The solutions are expressed as a finite sum of a vector type functional. By employing MAPLE software, it is shown that the solutions might be extended to an arbitrary degree which makes the present method not only different from the others in the literature but also quite efficient. The method is applied to special Bagley-Torvik and Diethelm fractional differential equations as...
DEFF Research Database (Denmark)
Jensen, Jesper; Tan, Zheng-Hua
2014-01-01
We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise-robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCCs), cepstral mean-subtracted MFCCs (CMS-MFCCs), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic one usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non…
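The MMSE principle behind this can be illustrated in the simplest Gaussian setting: for y = x + n with independent Gaussians, E[x|y] shrinks the observation toward the prior mean, and it halves the mean-square error when signal and noise variances are equal. A sketch with arbitrary parameters (this is the textbook estimator, not the paper's cepstral-domain method):

```python
import numpy as np

rng = np.random.default_rng(1)

def mmse_gaussian(mu_x, var_x, var_n, y):
    """MMSE estimate of x from y = x + n (independent Gaussians):
    E[x|y] = mu_x + var_x / (var_x + var_n) * (y - mu_x)."""
    gain = var_x / (var_x + var_n)
    return mu_x + gain * (y - mu_x)

# Monte-Carlo check: the estimator halves the MSE when var_x == var_n
x = rng.normal(2.0, 1.0, 100_000)
y = x + rng.normal(0.0, 1.0, 100_000)
mse_raw = np.mean((y - x)**2)                                   # about 1.0
mse_mmse = np.mean((mmse_gaussian(2.0, 1.0, 1.0, y) - x)**2)    # about 0.5
```

The paper's contribution lies in making this conditional-mean computation tractable for MFCC-domain features under realistic speech and noise models.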
DEFF Research Database (Denmark)
Jung, Jaesoon; Kook, Junghwan; Goo, Seongyeol
2017-01-01
combines the FEM and Elementary Radiator Approach (ERA) is proposed. The FE-ERA method analyzes the vibrational response of the plate structure excited by incident sound using FEM and then computes the transmitted acoustic pressure from the vibrating plate using ERA. In order to improve the accuracy and efficiency of the FE-ERA method, a novel criterion for the optimal number of elementary radiators is proposed. The criterion is based on the radiator error index that is derived to estimate the accuracy of the computation with the used number of radiators. Using the proposed criterion, a radiator selection method is presented for determining the optimum number of radiators. The presented radiator selection method and the FE-ERA method are combined to improve the computational accuracy and efficiency. Several numerical examples that have been rarely addressed in previous studies are presented with the proposed method…
Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department
2017-01-01
Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β∗ colliders. Previously such errors in the LHC have been studied via their feed-down to tune and coupling under the influence of the nominal crossing angle bumps. This method has proved useful in validating various components of the magnetic model. To understand and correct those errors where significant discrepancies exist with the magnetic model, however, will require further development of this technique, in addition to the application of novel methods. In 2016 studies were performed to test new methods for the study of the IR-nonlinear errors.
Directory of Open Access Journals (Sweden)
Murat Hişmanoğlu
2007-04-01
The acquisition of the [ ƒı: ] and [ ƒßƒÅ ] vowel sounds of the English language constitutes a serious problem for Turkish learners of English. There are no pedagogically developed specific materials or sample lesson plans in the literature to remedy the pronunciation difficulty brought about by the [ ƒı: ] and [ ƒßƒÅ ] vowel sounds of the English language. Therefore, this article aims at providing Turkish learners of English with pronunciation teaching material and a sample lesson on two problem-causing sounds, [ ƒı: ] and [ ƒßƒÅ ], using the audio-articulation method developed by Demirezen (2003, 2004).
Evolutionary enhancement of the SLIM-MAUD method of estimating human error rates
International Nuclear Information System (INIS)
Zamanali, J.H.; Hubbard, F.R.; Mosleh, A.; Waller, M.A.
1992-01-01
The methodology described in this paper assigns plant-specific dynamic human error rates (HERs) for individual plant examinations based on procedural difficulty, on configuration features, and on the time available to perform the action. This methodology is an evolutionary improvement of the success likelihood index methodology (SLIM-MAUD) for use in systemic scenarios. It is based on the assumption that the HER in a particular situation depends on the combined effects of a comprehensive set of performance-shaping factors (PSFs) that influence the operator's ability to perform the action successfully. The PSFs relate the details of the systemic scenario in which the action must be performed to the operator's psychological and cognitive condition.
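The core SLIM calculation, a success likelihood index (SLI) formed as a weighted sum of PSF ratings and mapped to an HER through a log-linear calibration against anchor tasks, can be sketched as follows. The function names, weights, and anchor values are illustrative, not taken from the paper:

```python
import math

def success_likelihood_index(weights, ratings):
    """SLI as the weighted sum of PSF ratings; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * r for w, r in zip(weights, ratings))

def calibrate(sli_1, her_1, sli_2, her_2):
    """Fit log10(HER) = a * SLI + b from two anchor tasks with known HERs."""
    a = (math.log10(her_1) - math.log10(her_2)) / (sli_1 - sli_2)
    b = math.log10(her_1) - a * sli_1
    return a, b

def error_rate(sli, a, b):
    """HER implied by an SLI under the fitted log-linear calibration."""
    return 10.0 ** (a * sli + b)
```

For example, with anchors (SLI 0.9, HER 1e-4) and (SLI 0.2, HER 1e-2), a task with an intermediate SLI receives an HER between those bounds on a logarithmic scale.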
Error Concealment Method Based on Motion Vector Prediction Using Particle Filters
Directory of Open Access Journals (Sweden)
B. Hrusovsky
2011-09-01
Video transmitted over an unreliable environment, such as a wireless channel or, in general, any network with an unreliable transport protocol, is subject to losses of video packets due to network congestion and various kinds of noise. The problem becomes more important with highly efficient video codecs: because redundancy is eliminated to obtain a high compression ratio, visual quality degradation can propagate into subsequent frames. Since real-time video stream transmission is limited by transmission-channel delay, it is not possible to retransmit all faulty or lost packets, so these defects must be concealed. To reduce the undesirable effects of information losses, the lost data is usually estimated from the received data, which is generally known as the error concealment problem. This paper discusses packet-loss modeling in order to simulate losses during video transmission, analyzes packet losses, and examines their impact on motion-vector losses.
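A motion vector lost in transmission can be concealed by the mean of a particle cloud that tracks it across frames. A toy random-walk particle filter in that spirit (all names and noise parameters are illustrative and this is not the paper's algorithm, which predicts from neighbouring blocks):

```python
import numpy as np

rng = np.random.default_rng(0)

def conceal_motion_vectors(observed, n_particles=500, noise=0.5):
    """Track a 2D motion vector with a random-walk particle filter;
    a lost vector (None) is concealed by the predicted particle mean."""
    particles = rng.normal(0.0, 2.0, size=(n_particles, 2))
    estimates = []
    for z in observed:
        particles += rng.normal(0.0, noise, size=particles.shape)  # predict
        if z is not None:                                          # update
            d2 = ((particles - z)**2).sum(axis=1)
            w = np.exp(-0.5 * (d2 - d2.min()) / noise**2)  # shifted for stability
            w /= w.sum()
            idx = rng.choice(n_particles, n_particles, p=w)        # resample
            particles = particles[idx]
        estimates.append(particles.mean(axis=0))
    return estimates
```

On a frame where the observation is missing, the estimate is simply the propagated prior mean, which stays close to the last confirmed motion as long as motion is smooth.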
Directory of Open Access Journals (Sweden)
Z. Pashazadeh Atabakan
2013-01-01
The spectral homotopy analysis method (SHAM), a modification of the homotopy analysis method (HAM), is applied to obtain solutions of high-order nonlinear Fredholm integro-differential problems. The existence and uniqueness of the solution and the convergence of the proposed method are proved. Some examples are given to confirm the efficiency and accuracy of the proposed method. The SHAM results show that the proposed approach is quite reasonable when compared to the homotopy analysis method, Lagrange interpolation solutions, and exact solutions.
A general method for enclosing solutions of interval linear equations
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2012-01-01
Roč. 6, č. 4 (2012), s. 709-717 ISSN 1862-4472 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval linear equations * solution set * enclosure * absolute value inequality Subject RIV: BA - General Mathematics Impact factor: 1.654, year: 2012
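For tiny systems, the hull of the solution set of an interval linear system can be obtained by brute force, solving every vertex system; a classical result of Rohn's is that the componentwise extrema of the solution set are attained at vertex matrices and right-hand sides. An exponential-cost didactic sketch (assuming all vertex matrices are nonsingular), not the enclosure method of the paper:

```python
import itertools
import numpy as np

def interval_hull_vertices(A_lo, A_hi, b_lo, b_hi):
    """Interval hull of {x : Ax = b, A in [A_lo, A_hi], b in [b_lo, b_hi]}
    by enumerating all vertex systems. O(2^(n^2 + n)) solves: tiny n only."""
    n = A_lo.shape[0]
    lo = np.full(n, np.inf)
    hi = np.full(n, -np.inf)
    for sa in itertools.product((0, 1), repeat=n * n):
        A = np.where(np.reshape(sa, (n, n)) == 0, A_lo, A_hi)
        for sb in itertools.product((0, 1), repeat=n):
            b = np.where(np.array(sb) == 0, b_lo, b_hi)
            x = np.linalg.solve(A, b)
            lo = np.minimum(lo, x)
            hi = np.maximum(hi, x)
    return lo, hi
```

Practical enclosure methods such as the one in the paper avoid this exponential enumeration while still guaranteeing that the returned box contains the solution set.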
Directory of Open Access Journals (Sweden)
Ji Juan-Juan
2017-01-01
A table lookup method for solving nonlinear fractional partial differential equations (fPDEs) is proposed in this paper. By looking up the corresponding tables, we can quickly obtain the exact analytical solutions of fPDEs using this method. To illustrate the validity of the method, we apply it to construct the exact analytical solutions of four nonlinear fPDEs, namely, the time-fractional simplified MCH equation, the space-time fractional combined KdV-mKdV equation, the (2+1)-dimensional time-fractional Zoomeron equation, and the space-time fractional ZKBBM equation. As a result, many new types of exact analytical solutions are obtained, including triangular periodic solutions, hyperbolic function solutions, singular solutions, multiple solitary wave solutions, and Jacobi elliptic function solutions.
Directory of Open Access Journals (Sweden)
Masson Lindsey F
2011-10-01
Background: The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment, to allow pooling of data across studies in the evaluation of gene-environment interactions, has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement-error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients.

Methods/Design: Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake.

Discussion: The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
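The correction factors in objective (4) can be illustrated under the classical measurement-error model: a naive regression slope is attenuated by the reliability ratio λ, which a calibration study with replicate measurements can estimate. A hedged sketch (the function names and the simple ANOVA-style estimator are illustrative; real calibration studies use richer models):

```python
import numpy as np

def reliability_ratio(reps):
    """Estimate lambda = var(true) / var(observed) for a single
    measurement, from replicate data (subjects x replicates), assuming
    the classical error model with independent additive errors."""
    reps = np.asarray(reps, float)
    n, k = reps.shape
    s2_err = reps.var(axis=1, ddof=1).mean()        # within-subject variance
    s2_means = reps.mean(axis=1).var(ddof=1)        # variance of subject means
    s2_true = max(s2_means - s2_err / k, 0.0)       # between-subject variance
    return s2_true / (s2_true + s2_err)

def corrected_slope(beta_naive, lam):
    """Deattenuate a regression slope: beta ~= beta_naive / lambda."""
    return beta_naive / lam
```

With error variance a quarter of the true exposure variance, λ is 0.8 and the naive slope understates the true association by 20 percent.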
A Four-Step Block Hybrid Adams-Moulton Methods For The Solution ...
African Journals Online (AJOL)
This paper examines application of the Adams-Moulton method and proposes a modified self-starting continuous formula, called the hybrid Adams-Moulton method, for the case k=4. It allows evaluation at both grid and off-grid points to obtain the discrete schemes used in the block methods. The order, error constant and ...
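A k=4 Adams-Moulton corrector is implicit, so in classical (non-block) practice it is paired with an Adams-Bashforth predictor and self-started, here with RK4. A standard PECE sketch for context, not the block hybrid scheme of the paper:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, y0, h, n_steps):
    """PECE scheme: 4-step Adams-Bashforth predictor, 4-step
    Adams-Moulton corrector, self-started with three RK4 steps."""
    ts, ys = [t0], [y0]
    for _ in range(3):                          # start-up values
        ys.append(rk4_step(f, ts[-1], ys[-1], h))
        ts.append(ts[-1] + h)
    fs = [f(t, y) for t, y in zip(ts, ys)]
    for _ in range(3, n_steps):
        t, y = ts[-1], ys[-1]
        # predict (Adams-Bashforth, order 4)
        yp = y + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])
        fp = f(t + h, yp)
        # correct (4-step Adams-Moulton)
        yc = y + h / 720 * (251 * fp + 646 * fs[-1] - 264 * fs[-2]
                            + 106 * fs[-3] - 19 * fs[-4])
        ts.append(t + h)
        ys.append(yc)
        fs.append(f(t + h, yc))
    return ts, ys
```

The block formulation of the paper instead generates several new grid (and off-grid) values simultaneously, which removes the need for a separate starting procedure.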