WorldWideScience

Sample records for accurate approximation method

  1. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Yu-Chen, E-mail: ycshu@mail.ncku.edu.tw [Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (South), Tainan 701, Taiwan (China); Chern, I-Liang, E-mail: chern@math.ntu.edu.tw [Department of Applied Mathematics, National Chiao Tung University, Hsin Chu 300, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (Taipei Office), Taipei 106, Taiwan (China); Chang, Chien C., E-mail: mechang@iam.ntu.edu.tw [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China)

    2014-10-15

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications arise especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain the order of accuracy there, aiming at improving the previous coupling interface method [26]. The idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for the second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipe is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computational domain. Numerical examples are provided to illustrate the second order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and for a real molecule (1D63) which has a double-helix shape and is composed of hundreds of atoms.
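
    As a rough illustration of the kind of stencil behind "Recipe 1" (falling back to a standard finite difference at a nearby interior grid point), the hedged Python sketch below shows a second-order central difference for a second derivative and a shifted evaluation at a neighboring point; it is not the paper's coupling interface method, and the grid, test function and index choices are assumptions for illustration only.

        import numpy as np

        def u_xx_central(u, i, h):
            """Second-order central difference for u'' at grid index i."""
            return (u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2

        def u_xx_shifted(u, i, h):
            """First-order fallback when u[i+1] lies across the interface: reuse the same
            3-point stencil centered at the nearby interior point i-1. This only mimics the
            spirit of 'Recipe 1'; the actual method also uses jump conditions."""
            return (u[i - 2] - 2.0 * u[i - 1] + u[i]) / h**2

        # quick check on u(x) = sin(x), whose exact second derivative is -sin(x)
        h = 1e-3
        x = np.arange(0.0, 1.0 + h, h)
        u = np.sin(x)
        i = len(x) // 2
        print(u_xx_central(u, i, h), u_xx_shifted(u, i, h), -np.sin(x[i]))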

  2. An Accurate Approximate-Analytical Technique for Solving Time-Fractional Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    M. Bishehniasar

    2017-01-01

    Full Text Available The demand of many scientific areas for the usage of fractional partial differential equations (FPDEs) to explain their real-world systems has been broadly identified. The solutions may portray the dynamical behavior of various particles, such as chemicals and cells. Approximate solutions to these equations are sought in order to overcome the mathematical complexity of modeling the relevant phenomena in nature. This research proposes a promising approximate-analytical scheme that is an accurate technique for solving a variety of noninteger partial differential equations (PDEs). The proposed strategy is based on approximating the derivative of fractional order and reducing the problem to the corresponding partial differential equation (PDE). Afterwards, the approximating PDE is solved by using a separation-of-variables technique. The method can be simply applied to nonhomogeneous problems and is able to reduce the computational cost while achieving an approximate-analytical solution in excellent agreement with the exact solution of the original problem. In addition, to demonstrate the efficiency of the method, it is compared with two finite difference methods, namely a nonstandard finite difference (NSFD) method and a standard finite difference (SFD) technique, which are popular in the literature for solving engineering problems.

  3. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.

  4. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    Science.gov (United States)

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
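
    To make the statistic itself concrete, the hedged sketch below computes rank products and naive permutation p-values for synthetic data; this is the computationally heavy baseline that the record improves upon, not the authors' bounds or approximation (their R code is at the URL above), and the data sizes, effect size and number of permutations are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def rank_product(data):
            """Geometric-mean rank product per molecule; 'data' is (n_molecules, k_replicates)
            and rank 1 marks the smallest (most down-regulated) value in each replicate."""
            ranks = data.argsort(axis=0).argsort(axis=0) + 1
            return np.prod(ranks.astype(float), axis=1) ** (1.0 / data.shape[1])

        def permutation_pvalues(data, n_perm=1000):
            """Naive permutation p-values for the rank product statistic."""
            obs = rank_product(data)
            n, k = data.shape
            exceed = np.zeros(n)
            for _ in range(n_perm):
                perm = np.column_stack([rng.permutation(data[:, j]) for j in range(k)])
                null = rank_product(perm)
                # fraction of null statistics at least as extreme (small) as each observed one
                exceed += (null[:, None] <= obs[None, :]).mean(axis=0)
            return exceed / n_perm

        data = rng.normal(size=(200, 3))
        data[:5] -= 2.0                    # five artificially down-regulated molecules
        print(permutation_pvalues(data)[:5])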

  5. Efficient solution of parabolic equations by Krylov approximation methods

    Science.gov (United States)

    Gallopoulos, E.; Saad, Y.

    1990-01-01

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
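
    A minimal sketch of the central idea, projecting the action of the matrix exponential onto a small Krylov subspace via the Arnoldi process, is given below; the heat-equation test matrix, subspace dimension and time step are assumptions, and the dense expm call on the small projected matrix stands in for the rational approximations discussed in the record.

        import numpy as np
        from scipy.linalg import expm

        def krylov_expm_action(A, v, t, m=30):
            """Approximate exp(t*A) @ v using an m-dimensional Krylov subspace (Arnoldi);
            only matrix-vector products with the large matrix A are required."""
            n = len(v)
            beta = np.linalg.norm(v)
            V = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            V[:, 0] = v / beta
            for j in range(m):
                w = A @ V[:, j]
                for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
                    H[i, j] = V[:, i] @ w
                    w -= H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-12:             # happy breakdown: result exact in subspace
                    m = j + 1
                    break
                V[:, j + 1] = w / H[j + 1, j]
            e1 = np.zeros(m)
            e1[0] = 1.0
            return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

        # semi-discrete 1D heat equation u_t = A u (standard 3-point Laplacian, Dirichlet BCs)
        n = 200
        h = 1.0 / (n + 1)
        A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2
        xg = np.linspace(h, 1.0 - h, n)
        u0 = np.exp(-100.0 * (xg - 0.3) ** 2)
        u = krylov_expm_action(A, u0, t=1e-3)
        print(np.max(np.abs(u - expm(1e-3 * A) @ u0)))   # error of the Krylov approximation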

  6. The generalized approximation method and nonlinear heat transfer equations

    Directory of Open Access Journals (Sweden)

    Rahmat Khan

    2009-01-01

    Full Text Available A generalized approximation technique for the solution of a one-dimensional steady-state heat transfer problem in a slab made of a material with temperature-dependent thermal conductivity is developed. The results obtained by the generalized approximation method (GAM) are compared with those obtained via the homotopy perturbation method (HPM). For this problem, the results obtained by the GAM are more accurate than those of the HPM. Moreover, the GAM generates a sequence of solutions of linear problems that converges monotonically and rapidly to a solution of the original nonlinear problem. Each approximate solution is obtained as the solution of a linear problem. We present numerical simulations to illustrate and confirm the theoretical results.

  7. Precise and accurate train run data: Approximation of actual arrival and departure times

    DEFF Research Database (Denmark)

    Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær

    with the approximated actual arrival and departure times. As a result, all future statistics can now either be based on track circuit data with high precision or approximated actual arrival times with a high accuracy. Consequently, performance analysis will be more accurate, punctuality statistics more correct, KPI...

  8. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation to find an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
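
    For context only, the hedged sketch below computes the classical infinite-horizon optimal age replacement policy by minimizing the long-run cost rate; it is not the record's finite-horizon sequential approximation, and the Weibull failure distribution and cost figures are invented for illustration.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar

        beta_, eta = 2.5, 1000.0        # assumed Weibull shape and scale of the failure time
        c_p, c_f = 1.0, 10.0            # assumed preventive vs. failure replacement costs

        def survival(t):
            return np.exp(-(t / eta) ** beta_)

        def cost_rate(T):
            """Long-run expected cost per unit time when replacing at age T or at failure."""
            expected_cycle_length, _ = quad(survival, 0.0, T)
            return (c_p * survival(T) + c_f * (1.0 - survival(T))) / expected_cycle_length

        res = minimize_scalar(cost_rate, bounds=(1.0, 5000.0), method="bounded")
        print("optimal replacement age:", res.x, "  minimal cost rate:", res.fun)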

  9. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H₂ + CH₃ → CH₄ + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates are generally larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, TST however fails below 400 K if the mass of the transferred atom changes between the isotopic reactions.

  10. Born approximation to a perturbative numerical method for the solution of the Schrodinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-05-01

    A perturbative numerical (PN) method is given for the solution of a regular one-dimensional Cauchy problem arising from the Schroedinger equation. The present method uses a step function approximation for the potential. Global, free of scaling difficulty, forward and backward PN algorithms are derived within first order perturbation theory (Born approximation). A rigorous analysis of the local truncation errors is performed. This shows that the order of accuracy of the method is equal to four. In between the mesh points, the global formula for the wavefunction is accurate within O(h⁴), while that for the first order derivative is accurate within O(h³). (author)

  11. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, and it directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal and noise regions are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we run experiments on a series of synthetic data with SNR from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods achieve good picking performance when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application results on real three-component microseismic data also show that the new method is superior to the other two methods in accuracy and stability.
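
    For reference, the STA/LTA baseline mentioned above can be sketched in a few lines; the window lengths, trigger threshold and synthetic trace below are illustrative assumptions, not the record's settings, and the approximate-negentropy picker itself is not reproduced here.

        import numpy as np

        def sta_lta_pick(trace, dt, sta_win=0.05, lta_win=0.5, threshold=3.0):
            """Classic short-/long-time-average ratio picker on the signal energy."""
            e = trace ** 2
            ns, nl = int(sta_win / dt), int(lta_win / dt)
            sta = np.convolve(e, np.ones(ns) / ns, mode="same")
            lta = np.convolve(e, np.ones(nl) / nl, mode="same")
            ratio = sta / (lta + 1e-12)
            above = np.flatnonzero(ratio > threshold)
            return above[0] * dt if above.size else None   # first trigger time, or None

        # synthetic test: white noise plus a delayed, decaying 40 Hz wavelet starting at 2.0 s
        rng = np.random.default_rng(1)
        dt, n = 1e-3, 4000
        t = np.arange(n) * dt
        trace = 0.2 * rng.normal(size=n)
        trace[2000:] += np.exp(-5.0 * t[:2000]) * np.sin(2 * np.pi * 40.0 * t[:2000])
        print("picked arrival (s):", sta_lta_pick(trace, dt), "  true onset: 2.0")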

  12. Analytical Evaluation of Beam Deformation Problem Using Approximate Methods

    DEFF Research Database (Denmark)

    Barari, Amin; Kimiaeifar, A.; Domairry, G.

    2010-01-01

    The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second order differential equation governing beam deformation using four analytical approximate methods, namely the Perturbation, Homotopy Perturbation Method (HPM), Homotopy Analysis Method (HAM) and Variational Iteration Method (VIM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.

  13. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    Science.gov (United States)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
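
    To make the term "mean-field approximation" concrete, the sketch below integrates a deliberately tiny single-site coverage model (adsorption, desorption and reaction on one site type); the rate constants are arbitrary assumptions, and capturing the spatial correlations discussed in the record would require KMC or the cluster mean-field treatment rather than this ODE.

        import numpy as np
        from scipy.integrate import solve_ivp

        k_ads, k_des, k_rxn = 1.0, 0.2, 0.5       # assumed rate constants

        def coverage_ode(t, theta):
            """Mean-field balance for the coverage: gain by adsorption on empty sites,
            loss by desorption and by reaction."""
            th = theta[0]
            return [k_ads * (1.0 - th) - k_des * th - k_rxn * th]

        sol = solve_ivp(coverage_ode, (0.0, 20.0), [0.0])
        theta_ss = k_ads / (k_ads + k_des + k_rxn)        # analytic steady-state coverage
        print(sol.y[0, -1], theta_ss)                     # numerical vs. analytic steady state
        print("mean-field turnover frequency:", k_rxn * theta_ss)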

  14. Optoelectronic properties of XIn2S4 (X = Cd, Mg) thiospinels through highly accurate all-electron FP-LAPW method coupled with modified approximations

    International Nuclear Information System (INIS)

    Yousaf, Masood; Dalhatu, S.A.; Murtaza, G.; Khenata, R.; Sajjad, M.; Musa, A.; Rahnamaye Aliabad, H.A.; Saeed, M.A.

    2015-01-01

    Highlights: • Highly accurate all-electron FP-LAPW+lo method is used. • New physical parameters are reported, important for the fabrication of optoelectronic devices. • A comparative study that involves the FP-LAPW+lo method and modified approximations. • Computed band gap values are in good agreement with the experimental values. • Optoelectronic results of fundamental importance can be utilized for the fabrication of devices. - Abstract: We report the structural, electronic and optical properties of the thiospinels XIn₂S₄ (X = Cd, Mg), using the highly accurate all-electron full potential linearized augmented plane wave plus local orbital method. In order to calculate the exchange and correlation energies, the method is coupled with modified techniques such as GGA+U and mBJ-GGA, which yield improved results as compared to previous studies. The GGA+SOC approximation is also used for the first time on these compounds to examine the spin orbit coupling effect on the band structure. From the analysis of the structural parameters, robust character is predicted for both materials. Energy band structure profiles are essentially the same for GGA, GGA+SOC, GGA+U and mBJ-GGA, confirming the indirect and direct band gap nature of the CdIn₂S₄ and MgIn₂S₄ materials, respectively. We report the trend of the band gap results as: (mBJ-GGA) > (GGA+U) > (GGA) > (GGA+SOC). Localized regions appearing in the valence bands of CdIn₂S₄ tend to split by ≈1 eV in the case of GGA+SOC. Many new physical parameters are reported that can be important for the fabrication of optoelectronic devices. Optical spectra, namely the dielectric function (DF), refractive index n(ω), extinction coefficient k(ω), reflectivity R(ω), optical conductivity σ(ω), absorption coefficient α(ω) and electron loss function, are discussed. The optical absorption edge is noted to be 1.401 and 1.782 for CdIn₂S₄ and MgIn₂S₄, respectively. The prominent peaks in the electron energy spectrum

  15. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify the top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainty exist in MCS-based fault tree analysis. The paper focuses on quantification of the following two sources of uncertainty: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on two example fault trees
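
    A minimal, hedged illustration of MCS-based quantification is given below: the rare-event approximation (sum of minimal-cut-set probabilities) is compared against a direct Monte Carlo estimate of the top event probability. The three-cut-set fault tree and basic-event probabilities are invented for illustration, and the record's SDP/correction-factor machinery is not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        p_basic = np.array([1e-2, 2e-2, 5e-3, 1e-2, 3e-2])     # basic event probabilities (assumed)
        mcs = [(0, 1), (2, 3), (1, 4)]                         # minimal cut sets (basic event indices)

        p_mcs = np.array([np.prod(p_basic[list(c)]) for c in mcs])
        print("rare-event approximation:", p_mcs.sum())

        n = 2_000_000
        fails = rng.random((n, p_basic.size)) < p_basic        # sample basic event states
        top = np.zeros(n, dtype=bool)
        for c in mcs:
            top |= fails[:, list(c)].all(axis=1)               # top event: any cut set fully failed
        print("Monte Carlo estimate:   ", top.mean())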

  16. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method, as well as the convolution and triangle function methods. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  17. Optoelectronic properties of XIn{sub 2}S{sub 4} (X = Cd, Mg) thiospinels through highly accurate all-electron FP-LAPW method coupled with modified approximations

    Energy Technology Data Exchange (ETDEWEB)

    Yousaf, Masood [Department of Physics, Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Physics Department, Faculty of Science, Universiti Teknologi Malaysia, Skudai 81310, Johor (Malaysia); Dalhatu, S.A. [Physics Department, Faculty of Science, Universiti Teknologi Malaysia, Skudai 81310, Johor (Malaysia); Murtaza, G. [Department of Physics, Islamia College, Peshawar, KPK (Pakistan); Khenata, R. [Laboratoire de Physique Quantique et de Modélisation Mathématique (LPQ3M), Département de Technologie, Université de Mascara, 29000 Mascara (Algeria); Sajjad, M. [School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Musa, A. [Physics Department, Faculty of Science, Universiti Teknologi Malaysia, Skudai 81310, Johor (Malaysia); Rahnamaye Aliabad, H.A. [Department of Physics, Hakim Sabzevari University (Iran, Islamic Republic of); Saeed, M.A., E-mail: saeed@utm.my [Physics Department, Faculty of Science, Universiti Teknologi Malaysia, Skudai 81310, Johor (Malaysia)

    2015-03-15

    Highlights: • Highly accurate all-electron FP-LAPW+lo method is used. • New physical parameters are reported, important for the fabrication of optoelectronic devices. • A comparative study that involves the FP-LAPW+lo method and modified approximations. • Computed band gap values are in good agreement with the experimental values. • Optoelectronic results of fundamental importance can be utilized for the fabrication of devices. - Abstract: We report the structural, electronic and optical properties of the thiospinels XIn{sub 2}S{sub 4} (X = Cd, Mg), using the highly accurate all-electron full potential linearized augmented plane wave plus local orbital method. In order to calculate the exchange and correlation energies, the method is coupled with modified techniques such as GGA+U and mBJ-GGA, which yield improved results as compared to previous studies. The GGA+SOC approximation is also used for the first time on these compounds to examine the spin orbit coupling effect on the band structure. From the analysis of the structural parameters, robust character is predicted for both materials. Energy band structure profiles are essentially the same for GGA, GGA+SOC, GGA+U and mBJ-GGA, confirming the indirect and direct band gap nature of the CdIn{sub 2}S{sub 4} and MgIn{sub 2}S{sub 4} materials, respectively. We report the trend of the band gap results as: (mBJ-GGA) > (GGA+U) > (GGA) > (GGA+SOC). Localized regions appearing in the valence bands of CdIn{sub 2}S{sub 4} tend to split by ≈1 eV in the case of GGA+SOC. Many new physical parameters are reported that can be important for the fabrication of optoelectronic devices. Optical spectra, namely the dielectric function (DF), refractive index n(ω), extinction coefficient k(ω), reflectivity R(ω), optical conductivity σ(ω), absorption coefficient α(ω) and electron loss function, are discussed. The optical absorption edge is noted to be 1.401 and 1.782 for CdIn{sub 2}S{sub 4} and MgIn{sub 2}S{sub 4}, respectively. The

  18. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are some of the methodologies that are employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to solve industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required, and further, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function calculations are needed once the failure probability or statistical moments have been calculated

  19. A trigonometric approximation for the tension in the string of a simple pendulum accurate for all amplitudes

    International Nuclear Information System (INIS)

    Lima, F M S

    2009-01-01

    In a previous work, O'Connell (Phys. Teach. 40, 24 (2002)) investigated the time dependence of the tension in the string of a simple pendulum oscillating within the small-angle regime. In spite of the approximation sin θ ∼ θ being accurate only for amplitudes below 7 deg., his experimental results are for a pendulum oscillating with an amplitude of about 18 deg., and therefore beyond the small-angle regime. This lapse may also be found in some textbooks, laboratory manuals and on the internet. Noting that the exact analytical solution for this problem involves the so-called Jacobi elliptic functions, which are unknown to most students (and even instructors), I use the sinusoidal approximate solution for the pendulum equation that I introduced in a recent work (Eur. J. Phys. 29 1091 (2008)) to derive a simple trigonometric approximation for the tension valid for all possible amplitudes. This approximation is compared to both O'Connell's results and the exact results, revealing that it is accurate enough for analysing large-angle pendulum experiments. (letters and comments)
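
    For release from rest at amplitude θ0, energy conservation combined with the radial (centripetal) equation gives the exact angle-dependence of the tension, T(θ) = mg(3 cos θ − 2 cos θ0), independently of the elliptic-function time dependence mentioned above; its small-angle expansion is mg(1 + θ0² − 3θ²/2). The hedged sketch below simply evaluates both for 7° and 18° amplitudes (unit mass and g = 9.81 m/s² are assumptions), and does not reproduce the record's trigonometric approximation.

        import numpy as np

        g, m = 9.81, 1.0

        def tension_exact(theta, theta0):
            """Exact tension for release from rest at amplitude theta0 (radians)."""
            return m * g * (3.0 * np.cos(theta) - 2.0 * np.cos(theta0))

        def tension_small_angle(theta, theta0):
            """Second-order small-angle expansion of the same expression."""
            return m * g * (1.0 + theta0 ** 2 - 1.5 * theta ** 2)

        for deg in (7.0, 18.0):
            th0 = np.radians(deg)
            th = np.linspace(-th0, th0, 201)
            print(f"{deg:4.0f} deg: exact tension range "
                  f"[{tension_exact(th, th0).min():.4f}, {tension_exact(th, th0).max():.4f}] N, "
                  f"small-angle maximum {tension_small_angle(0.0, th0):.4f} N")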

  20. Approximate error conjugate gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  1. Using function approximation to determine neural network accuracy

    International Nuclear Information System (INIS)

    Wichman, R.F.; Alexander, J.

    2013-01-01

    Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model non-linear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, requires accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a non-linear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that provides improvements in the accuracy achieved during training and in the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
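
    A minimal, hedged example of the underlying task, a feed-forward back-propagation network approximating a non-linear function and reporting its approximation error, is sketched below; the target function, network size and training settings are assumptions, and the record's binary-input encoding and outlier-retraining scheme are not reproduced.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = rng.uniform(-3.0, 3.0, size=(2000, 1))
        y = np.sin(x).ravel() + 0.05 * x.ravel() ** 2       # non-linear target function

        net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                           max_iter=5000, random_state=0)
        net.fit(x, y)                                       # back-propagation training

        x_test = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
        y_true = np.sin(x_test).ravel() + 0.05 * x_test.ravel() ** 2
        print("max approximation error on the test grid:",
              np.max(np.abs(net.predict(x_test) - y_true)))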

  2. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
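
    As background, the CHO statistic itself can be computed from sample data in a few lines: channelize the images, estimate the channel-space mean difference and covariance, and form the Hotelling template. The hedged sketch below does exactly that on toy Gaussian data with random channels; the record's contribution, replacing the sample mean and covariance with theoretical approximations for MAP reconstructions, is not attempted here.

        import numpy as np

        def cho_snr(signal_present, signal_absent, channels):
            """Channelized Hotelling observer detectability (SNR) from sample images;
            images are rows, 'channels' is an (n_pixels, n_channels) matrix U."""
            vp = signal_present @ channels                  # channel outputs, signal present
            va = signal_absent @ channels                   # channel outputs, signal absent
            dmean = vp.mean(axis=0) - va.mean(axis=0)
            K = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
            w = np.linalg.solve(K, dmean)                   # Hotelling template in channel space
            return np.sqrt(dmean @ w)

        # toy example: 64-pixel "images", 4 random channels, weak Gaussian signal (all assumed)
        rng = np.random.default_rng(0)
        n_img, n_pix, n_ch = 500, 64, 4
        U = rng.normal(size=(n_pix, n_ch))
        signal = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 4.0) ** 2)
        absent = rng.normal(size=(n_img, n_pix))
        present = signal + rng.normal(size=(n_img, n_pix))
        print("CHO SNR:", cho_snr(present, absent, U))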

  3. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    Science.gov (United States)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in the preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  4. A class of fully second order accurate projection methods for solving the incompressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Liu Miaoer; Ren Yuxin; Zhang Hanxin

    2004-01-01

    In this paper, a continuous projection method is designed and analyzed. The continuous projection method consists of a set of partial differential equations which can be regarded as an approximation of the Navier-Stokes (N-S) equations in each time interval of a given time discretization. A local truncation error (LTE) analysis is applied to the continuous projection methods, which yields a sufficient condition for the continuous projection methods to be temporally second order accurate. Based on this sufficient condition, a fully second order accurate discrete projection method is proposed. A heuristic stability analysis is performed on this projection method, showing that the present projection method can be stable. The stability of the present scheme is further verified through numerical experiments. The second order accuracy of the present projection method is confirmed by several numerical test cases

  5. Direct application of Padé approximant for solving nonlinear differential equations.

    Science.gov (United States)

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant helps to avoid the prior application of an approximation method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution to post-treat with the Padé approximant. 34L30.
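
    As a quick, hedged illustration of the Padé step itself (not the paper's direct application to nonlinear differential equations), the sketch below builds the [3/3] Padé approximant of exp(x) from its Taylor coefficients and compares it with the plain truncated series on [-2, 2].

        import numpy as np
        from math import factorial
        from scipy.interpolate import pade

        taylor = [1.0 / factorial(k) for k in range(7)]    # Taylor coefficients of exp(x) up to x**6
        p, q = pade(taylor, 3)                             # numerator and denominator polynomials

        x = np.linspace(-2.0, 2.0, 9)
        print(np.max(np.abs(p(x) / q(x) - np.exp(x))))     # Padé [3/3] error
        truncated = sum(c * x ** k for k, c in enumerate(taylor))
        print(np.max(np.abs(truncated - np.exp(x))))       # larger error of the truncated series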

  6. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  7. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Science.gov (United States)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.

  8. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    International Nuclear Information System (INIS)

    Du, Qiang; Yang, Jiang

    2017-01-01

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
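
    A much-simplified, hedged analogue of the spectral idea is sketched below: for a periodic problem whose operator is diagonal in Fourier space, each mode is advanced exactly by its Fourier symbol. Here the operator is the ordinary local Laplacian with symbol -k**2, and the initial data, grid and diffusivity are assumptions; the record's actual contribution, the fast and accurate evaluation of far less convenient nonlocal symbols and the exponential time differencing Runge–Kutta stepping, is not reproduced.

        import numpy as np

        n, L, nu, t = 256, 2.0 * np.pi, 0.01, 1.0
        x = np.linspace(0.0, L, n, endpoint=False)
        u0 = np.exp(np.sin(x))                              # smooth periodic initial data

        k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)        # wavenumbers
        symbol = -nu * k ** 2                               # Fourier symbol of nu * d^2/dx^2
        u_hat = np.fft.fft(u0) * np.exp(symbol * t)         # exact per-mode time integration
        u = np.real(np.fft.ifft(u_hat))
        print(u.min(), u.max())                             # diffused profile
        print(np.isclose(u.mean(), u0.mean()))              # the mean (k = 0 mode) is preserved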

  9. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy split in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient when describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational cost when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.

  10. A simple method to approximate liver size on cross-sectional images using living liver models

    International Nuclear Information System (INIS)

    Muggli, D.; Mueller, M.A.; Karlo, C.; Fornaro, J.; Marincek, B.; Frauenfelder, T.

    2009-01-01

    Aim: To assess whether a simple, diameter-based formula applicable to cross-sectional images can be used to calculate the total liver volume. Materials and methods: On 119 cross-sectional examinations (62 computed tomography and 57 magnetic resonance imaging) a simple, formula-based method to approximate the liver volume was evaluated. The total liver volume was approximated by measuring the largest craniocaudal (cc), ventrodorsal (vd), and coronal (cor) diameters by two readers and applying the equation: Vol(estimated) = cc × vd × cor × 0.31. Inter-rater reliability, agreement, and correlation between the liver volume calculation and virtual liver volumetry were analysed. Results: No significant disagreement between the two readers was found. The formula correlated significantly with the volumetric data (r > 0.85, p < 0.0001). In 81% of cases the error of the approximated volume was <10% and in 92% of cases <15% compared to the volumetric data. Conclusion: Total liver volume can be accurately estimated on cross-sectional images using a simple, diameter-based equation.
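
    The quoted equation is simple enough to wrap in a one-line helper; the sketch below does so, with hypothetical diameters used only to show the call (diameters in cm, volume in cm³).

        def estimated_liver_volume(cc_cm, vd_cm, cor_cm):
            """Approximate total liver volume from the three largest diameters measured on
            cross-sectional images, using the equation quoted in the record:
            Vol(estimated) = cc x vd x cor x 0.31."""
            return cc_cm * vd_cm * cor_cm * 0.31

        # hypothetical diameters: 18 cm craniocaudal, 14 cm ventrodorsal, 20 cm coronal
        print(estimated_liver_volume(18.0, 14.0, 20.0), "cm^3")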

  11. A summary of methods for approximating salt creep and disposal room closure in numerical models of multiphase flow

    Energy Technology Data Exchange (ETDEWEB)

    Freeze, G.A.; Larson, K.W. [INTERA, Inc., Albuquerque, NM (United States); Davies, P.B. [Sandia National Labs., Albuquerque, NM (United States)

    1995-10-01

    Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to a theoretical basis for modeling salt deformation as a viscous process. It is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.

  12. A summary of methods for approximating salt creep and disposal room closure in numerical models of multiphase flow

    International Nuclear Information System (INIS)

    Freeze, G.A.; Larson, K.W.; Davies, P.B.

    1995-10-01

    Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to a theoretical basis for modeling salt deformation as a viscous process. It is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time

  13. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
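
    For orientation, plain rejection ABC for a toy problem (inferring a normal mean from its sample mean) is sketched below; the prior, tolerance and data are assumptions, and the record's multilevel and sequential Monte Carlo machinery, which is what reduces the cost, is not implemented here.

        import numpy as np

        rng = np.random.default_rng(0)
        y_obs = rng.normal(loc=1.5, scale=1.0, size=100)
        s_obs = y_obs.mean()                                 # observed summary statistic

        def abc_rejection(n_draws=200_000, eps=0.02):
            """Keep prior draws whose simulated summary lands within eps of the observed one."""
            theta = rng.normal(0.0, 5.0, size=n_draws)       # draws from the prior
            s_sim = rng.normal(theta, 1.0 / np.sqrt(100))    # sample mean of 100 N(theta, 1) draws
            return theta[np.abs(s_sim - s_obs) < eps]

        post = abc_rejection()
        print(len(post), post.mean(), post.std())            # ABC posterior concentrates near 1.5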

  14. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm⁻³ corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm⁻³ (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.

  15. Legendre-tau approximations for functional differential equations

    Science.gov (United States)

    Ito, K.; Teglas, R.

    1986-01-01

    The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.

  16. Accurate approximation method for prediction of class I MHC affinities for peptides of length 8, 10 and 11 using prediction tools trained on 9mers

    DEFF Research Database (Denmark)

    Lundegaard, Claus; Lund, Ole; Nielsen, Morten

    2008-01-01

    Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate...

  17. Enriched Meshfree Method for an Accurate Numerical Solution of the Motz Problem

    Directory of Open Access Journals (Sweden)

    Won-Tak Hong

    2016-01-01

    Full Text Available We present an enriched meshfree solution of the Motz problem. The Motz problem has been known as a benchmark problem for verifying the efficiency of numerical methods in the presence of a jump boundary data singularity at a point where an abrupt change occurs in the boundary condition. We propose a singular basis function enrichment technique in the context of the partition of unity based meshfree method. We take the leading terms of the local series expansion at the point singularity and use them as enrichment functions for the local approximation space. As a result, we obtain highly accurate leading coefficients of the Motz problem that are comparable to the most accurate numerical solutions. The proposed singular enrichment technique is highly effective when the local series expansion of the solution is known. The enrichment technique used in this study can be applied to monotone singularities (of type r^α with α < 1) as well as oscillating singularities (of type r^α sin(ϵ log r)). It is the first attempt to apply a singular meshfree enrichment technique to the Motz problem.

  18. Approximate analytical methods for solving ordinary differential equations

    CERN Document Server

    Radhika, TSL; Rani, T Raja

    2015-01-01

    Approximate Analytical Methods for Solving Ordinary Differential Equations (ODEs) is the first book to present all of the available approximate methods for solving ODEs, eliminating the need to wade through multiple books and articles. It covers both well-established techniques and recently developed procedures, including the classical series solution method, diverse perturbation methods, pioneering asymptotic methods, and the latest homotopy methods. The book is suitable not only for mathematicians and engineers but also for biologists, physicists, and economists. It gives a complete descripti

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  20. A simple approximation method for dilute Ising systems

    International Nuclear Information System (INIS)

    Saber, M.

    1996-10-01

    We describe a simple approximate method to analyze dilute Ising systems. The method takes into consideration the fluctuations of the effective field, and is based on a probability distribution of random variables which correctly accounts for all the single site kinematic relations. It is shown that the simplest approximation gives satisfactory results when compared with other methods. (author). 12 refs, 2 tabs

  1. Spherical anharmonic oscillator in self-similar approximation

    International Nuclear Information System (INIS)

    Yukalova, E.P.; Yukalov, V.I.

    1992-01-01

    The method of self-similar approximation is applied here for calculating the eigenvalues of the three-dimensional spherical anharmonic oscillator. The advantage of this method lies in its simplicity and high accuracy. A comparison with other known analytical methods shows that this method is simpler and more accurate. 25 refs

  2. A Comparison between Effective Cross Section Calculations using the Intermediate Resonance Approximation and More Exact Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haeggblom, H

    1969-02-15

    In order to investigate some aspects of the 'Intermediate Resonance Approximation' developed by Goldstein and Cohen, comparative calculations have been made using this method together with more accurate methods. The latter are as follows: a) For homogeneous materials the slowing down equation is solved in the fundamental mode approximation with the computer programme SPENG. All cross sections are given point by point. Because the spectrum can be calculated for at most 2000 energy points, the energy regions where the resonances are accurately described are limited. Isolated resonances in the region 100 to 240 eV are studied for ²³⁸U/Fe and ²³⁸U/Fe/Na mixtures. In the regions 161 to 251 eV and 701 to 1000 eV, mixtures of ²³⁸U and Na are investigated. ²³⁹Pu/Na and ²³⁹Pu/²³⁸U/Na mixtures are studied in the region 161 to 251 eV. b) For heterogeneous compositions in slab geometry the integral transport equation is solved using the FLIS programme in 22 energy groups. Thus, only one resonance can be considered in each calculation. Two resonances are considered, namely those belonging to ²³⁸U at 190 and 937 eV. The compositions are lattices of ²³⁸U and Fe plates. The computer programme DORIX is used for the calculations using the Intermediate Resonance Approximation. Calculations of reaction rates and effective cross sections are made at 0, 300 and 1100 deg K for homogeneous media and at 300 deg K for heterogeneous media. The results are compared to those obtained by using the programmes SPENG and FLIS and using the narrow resonance approximation.

  3. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated with three examples from the field of epidemics transmitted by vectors with temporally cyclical biting patterns, which show how it can be used to estimate whether an approximation over- or under-fits the original model, to invalidate an approximation, and to rank candidate approximations by quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is to define accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  5. Saddlepoint approximation methods in financial engineering

    CERN Document Server

    Kwok, Yue Kuen

    2018-01-01

    This book summarizes recent advances in applying saddlepoint approximation methods to financial engineering. It addresses pricing exotic financial derivatives and calculating risk contributions to Value-at-Risk and Expected Shortfall in credit portfolios under various default correlation models. These standard problems involve the computation of tail probabilities and tail expectations of the corresponding underlying state variables.  The text offers in a single source most of the saddlepoint approximation results in financial engineering, with different sets of ready-to-use approximation formulas. Much of this material may otherwise only be found in original research publications. The exposition and style are made rigorous by providing formal proofs of most of the results. Starting with a presentation of the derivation of a variety of saddlepoint approximation formulas in different contexts, this book will help new researchers to learn the fine technicalities of the topic. It will also be valuable to quanti...
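
    As a flavour of the results such a text collects, one widely used saddlepoint result is the Lugannani-Rice tail approximation, quoted here from memory (so conventions should be checked against the book): for a random variable with cumulant generating function K,

    $$
    P(X > x) \approx 1 - \Phi(\hat w) + \phi(\hat w)\left(\frac{1}{\hat u} - \frac{1}{\hat w}\right),
    \qquad
    \hat w = \operatorname{sgn}(\hat t)\sqrt{2\,\bigl(\hat t x - K(\hat t)\bigr)},\quad
    \hat u = \hat t\sqrt{K''(\hat t)},
    $$

    where \hat t solves the saddlepoint equation K'(\hat t) = x and \Phi, \phi denote the standard normal distribution and density functions.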

  6. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been widely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strongly nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, since in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-dependent ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...

  7. Multi-level methods and approximating distribution functions

    International Nuclear Information System (INIS)

    Wilson, D.; Baker, R. E.

    2016-01-01

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
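
    As a rough illustration of the telescoping structure described above (and not of the authors' distribution-reconstruction methods), the sketch below estimates a single mean for a hypothetical birth-death process with tau-leaping at several step sizes. For brevity it uses independent rather than coupled fine/coarse paths, so it conveys the bias-correction structure but not the variance reduction of a genuine multi-level implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap_mean(tau, n_paths, t_end=1.0, x0=10.0, k_birth=2.0, k_death=0.1):
    """Crude tau-leap estimate of E[X(t_end)] for a toy birth-death process
    (constant birth rate k_birth, per-capita death rate k_death)."""
    x = np.full(n_paths, x0)
    for _ in range(int(round(t_end / tau))):
        births = rng.poisson(k_birth * tau, n_paths)
        deaths = rng.poisson(k_death * x * tau)
        x = np.maximum(x + births - deaths, 0.0)
    return x.mean()

# Telescoping sum: a cheap estimate on the coarsest level plus corrections
# E[P_l - P_{l-1}] between successive step sizes (Anderson and Higham, 2012).
taus    = [0.2, 0.1, 0.05]         # level 0 (coarsest) ... level L (finest)
n_paths = [40_000, 10_000, 2_500]  # fewer paths where simulation is costlier
estimate = tau_leap_mean(taus[0], n_paths[0])
for l in range(1, len(taus)):
    estimate += tau_leap_mean(taus[l], n_paths[l]) - tau_leap_mean(taus[l - 1], n_paths[l])
print("multi-level estimate of E[X(1)]:", estimate)
```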

  8. Multi-level methods and approximating distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.

  9. The spectral element method for static neutron transport in AN approximation. Part I

    International Nuclear Information System (INIS)

    Barbarino, A.; Dulla, S.; Mund, E.H.; Ravetto, P.

    2013-01-01

    Highlights: ► Spectral element methods (SEMs) are extended to the neutronics of nuclear reactor cores. ► The second-order, A_N formulation of neutron transport is adopted. ► Results for classical benchmark cases in 2D are presented and compared to finite elements. ► The advantages of SEM in terms of precision and convergence rate are illustrated. ► SEM constitutes a promising approach for the solution of neutron transport problems. - Abstract: Spectral element methods provide very accurate solutions of elliptic problems. In this paper we apply the method to the A_N (i.e. SP_{2N−1}) approximation of neutron transport. Numerical results for classical benchmark cases highlight its performance in comparison with finite element computations, in terms of accuracy per degree of freedom and convergence rate. All calculations presented in this paper refer to two-dimensional problems. The method can easily be extended to three-dimensional cases. The results illustrate promising features of the method for more complex transport problems

  10. Multiuser detection and channel estimation: Exact and approximate methods

    DEFF Research Database (Denmark)

    Fabricius, Thomas

    2003-01-01

    subtractive interference cancellation with hyperbolic tangent tentative decision device, in statistical mechanics and machine learning called the naive mean field approach. The differences between the proposed algorithms lie in how the bias is estimated/approximated. We propose approaches based on a second...... propose here to use accurate approximations borrowed from statistical mechanics and machine learning. These give us various algorithms that all can be formulated in a subtractive interference cancellation formalism. The suggested algorithms can effectively be seen as bias corrections to standard...... of the Junction Tree Algorithm, which is a generalisation of Pearl's Belief Propagation, the BCJR, sum product, min/max sum, and Viterbi's algorithm. Although efficient algorithms, they have an inherent exponential complexity in the number of users when applied to CDMA multiuser detection. For this reason we...

  11. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    Full Text Available To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie’s method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and the fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.
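
    For orientation only, here is a minimal Euler-Maruyama sketch of the diffusion approximation for a two-state channel population: the open fraction n obeys dn = (alpha(1-n) - beta*n) dt + sqrt((alpha(1-n) + beta*n)/N) dW, naively clipped to [0,1]. The rate constants are made up and held fixed, and none of the specific boundary-handling schemes compared in the paper is reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def two_state_langevin(alpha=0.5, beta=0.2, n_channels=200, dt=0.01, t_end=50.0):
    """Euler-Maruyama integration of the diffusion approximation for the open
    fraction n of a population of two-state channels. The rates alpha and beta
    are illustrative constants (real HH rates are voltage dependent)."""
    steps = int(t_end / dt)
    n = np.empty(steps + 1)
    n[0] = alpha / (alpha + beta)                # deterministic steady state
    for k in range(steps):
        drift = alpha * (1.0 - n[k]) - beta * n[k]
        noise = np.sqrt(max(alpha * (1.0 - n[k]) + beta * n[k], 0.0) / n_channels)
        n[k + 1] = n[k] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        n[k + 1] = min(max(n[k + 1], 0.0), 1.0)  # naive bounding to [0, 1]
    return n

trace = two_state_langevin()
print("mean open fraction:", trace.mean())       # fluctuates around ~0.71
```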

  12. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914

  13. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  14. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time; however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  15. Beam shape coefficients calculation for an elliptical Gaussian beam with 1-dimensional quadrature and localized approximation methods

    Science.gov (United States)

    Wang, Wei; Shen, Jianqi

    2018-06-01

    The use of a shaped beam for applications relying on light scattering depends largely on the ability to evaluate the beam shape coefficients (BSCs) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, localized approximation (LA) and integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ 2-/3-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into a 1-dimensional integral so as to speed up the numerical computation. Numerical results of the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the 2-dimensional integrations and acquires more accurate results than the LA method. Limitations of the quadrature method and of the LA method in the numerical calculation are analyzed in detail.

  16. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey; Alkhalifah, Tariq Ali

    2013-01-01

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  17. Mapping moveout approximations in TI media

    KAUST Repository

    Stovas, Alexey

    2013-11-21

    Moveout approximations play a very important role in seismic modeling, inversion, and scanning for parameters in complex media. We developed a scheme to map one-way moveout approximations for transversely isotropic media with a vertical axis of symmetry (VTI), which is widely available, to the tilted case (TTI) by introducing the effective tilt angle. As a result, we obtained highly accurate TTI moveout equations analogous with their VTI counterparts. Our analysis showed that the most accurate approximation is obtained from the mapping of generalized approximation. The new moveout approximations allow for, as the examples demonstrate, accurate description of moveout in the TTI case even for vertical heterogeneity. The proposed moveout approximations can be easily used for inversion in a layered TTI medium because the parameters of these approximations explicitly depend on corresponding effective parameters in a layered VTI medium.

  18. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul

    2017-01-01

    A multilevel Monte Carlo approach to approximate Bayesian computation (ABC) is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  19. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Science.gov (United States)

    Saccà, Alessandro

    2016-01-01

    Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in place of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.
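
    The Archimedes relation mentioned in the abstract (a sphere fills two thirds of its circumscribing cylinder) suggests the back-of-the-envelope helper below; the paper's 'unellipticity' coefficient is not reproduced, so it is taken as a plain input defaulting to 1. This is an assumption-laden sketch, not the published formula.

```python
import math

def biovolume_estimate(cross_section_area, width, unellipticity=1.0):
    """Rough biovolume estimate V ~ (2/3) * A * w * c from a 2D image, where
    A is the projected area, w the width perpendicular to the major axis, and
    c a shape-correction coefficient (here simply 1; the paper defines its own
    'unellipticity' coefficient, which is not reproduced)."""
    return (2.0 / 3.0) * cross_section_area * width * unellipticity

# Sanity check on a sphere of radius r: A = pi*r^2 and w = 2r, so the estimate
# gives (2/3) * pi*r^2 * 2r = (4/3)*pi*r^3, the exact sphere volume.
r = 3.0
print(biovolume_estimate(math.pi * r**2, 2 * r), (4.0 / 3.0) * math.pi * r**3)
```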

  20. A Simple yet Accurate Method for the Estimation of the Biovolume of Planktonic Microorganisms.

    Directory of Open Access Journals (Sweden)

    Alessandro Saccà

    Full Text Available Determining the biomass of microbial plankton is central to the study of fluxes of energy and materials in aquatic ecosystems. This is typically accomplished by applying proper volume-to-carbon conversion factors to group-specific abundances and biovolumes. A critical step in this approach is the accurate estimation of biovolume from two-dimensional (2D) data such as those available through conventional microscopy techniques or flow-through imaging systems. This paper describes a simple yet accurate method for the assessment of the biovolume of planktonic microorganisms, which works with any image analysis system allowing for the measurement of linear distances and the estimation of the cross-sectional area of an object from a 2D digital image. The proposed method is based on Archimedes' principle about the relationship between the volume of a sphere and that of a cylinder in which the sphere is inscribed, plus a coefficient of 'unellipticity' introduced here. Validation and careful evaluation of the method are provided using a variety of approaches. The new method proved to be highly precise with all convex shapes characterised by approximate rotational symmetry, and combining it with an existing method specific for highly concave or branched shapes allows covering the great majority of cases with good reliability. Thanks to its accuracy, consistency, and low resource demand, the new method can conveniently be used in place of any extant method designed for convex shapes, and can readily be coupled with automated cell imaging technologies, including state-of-the-art flow-through imaging devices.

  1. Improvement of Tone's method with two-term rational approximation

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Endo, Tomohiro; Chiba, Go

    2011-01-01

    An improvement of Tone's method, a resonance calculation method based on the equivalence theory, is proposed. In order to increase calculation accuracy, the two-term rational approximation is incorporated for the representation of the neutron flux. Furthermore, some theoretical aspects of Tone's method, i.e., its inherent approximation and the choice of an adequate multigroup cross section for collision probability estimation, are also discussed. The validity of the improved Tone's method is confirmed through a verification calculation in an irregular lattice geometry, which represents part of an LWR fuel assembly. The calculation result confirms the validity of the present method. (author)

  2. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was of the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
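
    To make the sampling idea concrete, the sketch below estimates the well function by Latin hypercube sampling after the substitution t = u/x, which maps W(u) = \int_u^\infty e^{-t}/t \, dt onto the unit interval as \int_0^1 e^{-u/x}/x \, dx; the orthogonal-array variants of the paper are not reproduced.

```python
import numpy as np

def well_function_lhs(u, n=10_000, seed=1):
    """Latin hypercube estimate of the well function W(u) = E1(u), using the
    substitution t = u/x, which gives W(u) = integral_0^1 exp(-u/x)/x dx."""
    rng = np.random.default_rng(seed)
    x = (np.arange(n) + rng.random(n)) / n   # one random point per stratum of [0,1]
    x = np.maximum(x, 1e-12)                 # guard against an exactly zero sample
    return np.mean(np.exp(-u / x) / x)

u = 0.5
x_ref = (np.arange(1_000_000) + 0.5) / 1_000_000   # dense midpoint rule as reference
print(well_function_lhs(u), np.mean(np.exp(-u / x_ref) / x_ref))   # both ~ 0.5598
```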

  3. Self-similar factor approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.; Sornette, D.

    2003-01-01

    The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the earlier obtained self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
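
    Schematically (notation compressed from the self-similar approximation literature, so details should be checked against the paper), a factor approximant to a truncated series f_k(x) = a_0 + a_1 x + ... + a_k x^k takes the product form

    $$
    f_k^{*}(x) = a_0 \prod_{i=1}^{N_k} \left(1 + A_i x\right)^{\,n_i},
    $$

    with the parameters A_i and n_i fixed by the accuracy-through-order conditions, i.e. by requiring that the re-expansion of f_k^{*}(x) reproduce the known coefficients a_1, ..., a_k.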

  4. NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, Max La Cour [Technical Univ. of Denmark, Lyngby (Denmark); Villa, Umberto E. [Univ. of Texas, Austin, TX (United States); Engsig-Karup, Allan P. [Technical Univ. of Denmark, Lyngby (Denmark); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-22

    The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.

  5. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method.

    Science.gov (United States)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-21

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol⁻¹. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500

  6. Approximate solution methods in engineering mechanics

    International Nuclear Information System (INIS)

    Boresi, A.P.; Cong, K.P.

    1991-01-01

    This is a short book of 147 pages including references and sometimes bibliographies at the end of each chapter, and subject and author indices at the end of the book. The text includes an introduction of 3 pages, 29 pages explaining approximate analysis, 41 pages on finite differences, 36 pages on finite elements, and 17 pages on specialized methods

  7. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral I(a,b) = \int_0^b \frac{1}{\sqrt{1+x^{2}}}\,\arctan\!\left(\frac{a}{\sqrt{1+x^{2}}}\right) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows.
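
    For a quick numerical cross-check of the integral above, a plain Gauss-Legendre evaluation (not the Chebyshev/Tau expansion of the paper) can be written as:

```python
import numpy as np

def hubbell_integral(a, b, n=64):
    """Gauss-Legendre evaluation of I(a,b) = int_0^b arctan(a/sqrt(1+x^2)) / sqrt(1+x^2) dx.
    This is only a numerical reference, not the Chebyshev/Tau expansion of the paper."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    x = 0.5 * b * (nodes + 1.0)                 # map [-1, 1] onto [0, b]
    s = np.sqrt(1.0 + x**2)
    return 0.5 * b * np.sum(weights * np.arctan(a / s) / s)

print(hubbell_integral(1.0, 1.0))   # roughly 0.64 for a = b = 1
```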

  8. Approximate Method for Solving the Linear Fuzzy Delay Differential Equations

    Directory of Open Access Journals (Sweden)

    S. Narayanamoorthy

    2015-01-01

    Full Text Available We propose an approximate method for solving linear fuzzy delay differential equations using the Adomian decomposition method. The detailed algorithm of the approach is provided. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To show the features of the proposed method, a numerical example is illustrated.

  9. Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2012-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy-driven ...... ratchet caps show that the approximations perform very well. In addition, we also consider the log-Lévy approximation of annuities, which offers good approximations for high-volatility regimes....

  10. Evaluation of the successive approximations method for acoustic streaming numerical simulations.

    Science.gov (United States)

    Catarino, S O; Minas, G; Miranda, J M

    2016-05-01

    This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.

  11. Approximate Methods for the Generation of Dark Matter Halo Catalogs in the Age of Precision Cosmology

    Directory of Open Access Journals (Sweden)

    Pierluigi Monaco

    2016-10-01

    Full Text Available Precision cosmology has recently triggered new attention on the topic of approximate methods for the clustering of matter on large scales, whose foundations date back to the period from the late 1960s to early 1990s. Indeed, although the prospect of reaching sub-percent accuracy in the measurement of clustering poses a challenge even to full N-body simulations, an accurate estimation of the covariance matrix of clustering statistics, not to mention the sampling of parameter space, requires the usage of a large number (hundreds in the most favourable cases) of simulated (mock) galaxy catalogs. Combining a few N-body simulations with a large number of realizations performed with approximate methods gives the most promising approach to solving these problems with a reasonable amount of resources. In this paper I review this topic, starting from the foundations of the methods, then going through the pioneering efforts of the 1990s, and finally presenting the latest extensions and a few codes that are now being used in present-generation surveys and thoroughly tested to assess their performance in the context of future surveys.

  12. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    Science.gov (United States)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The single-scattering properties of ice clouds can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of lambda/2*pi, where lambda is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed with the parallelized IITM and compared to the counterparts obtained with the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over a complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculations now reach the geometric optics regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice clouds over a wide spectral range.

  13. Method to Calculate Accurate Top Event Probability in a Seismic PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong Univ., Seoul (Korea, Republic of)

    2014-05-15

    ACUBE (Advanced Cutset Upper Bound Estimator) calculates the top event probability and importance measures from cutsets by dividing the cutsets into major and minor groups according to their probability: cutsets with higher probability go into the major group and the rest into the minor group, and the major cutsets are converted into a Binary Decision Diagram (BDD). ACUBE then calculates the top event probability and importance measures in each group and combines the two results. The measures of the higher-probability group are computed exactly, while those of the lower-probability group are computed with an approximation such as the MCUB. The algorithm thus reduces the conservatism caused by approximating the top event probability and importance measure calculations over the full set of cutsets. By applying ACUBE to seismic PSA cutsets, the accuracy of the top event probability and importance measures can be significantly improved. This study shows that careful attention should be paid, and an appropriate method provided, to avoid significant overestimation of the top event probability. Owing to these strengths, ACUBE has become a vital tool for calculating a more accurate CDF from seismic PSA cutsets than the conventional probability calculation method.
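
    A minimal sketch of the split-and-combine idea, under stated assumptions: the 'exact' part below treats the high-probability cutsets by inclusion-exclusion rather than a BDD, the low-probability group uses the MCUB formula 1 - prod(1 - P(cutset)), and the two partial results are combined as if the groups were independent. The cutsets and probabilities are made up, and the real ACUBE combination rule and BDD engine are not reproduced.

```python
from itertools import combinations

def cutset_prob(events, p):
    """Probability of a cutset = product of its basic-event probabilities
    (basic events assumed independent)."""
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

def union_prob_exact(cutsets, p):
    """Exact probability of the union of cutsets by inclusion-exclusion
    (feasible only for a small 'major' group)."""
    total = 0.0
    for k in range(1, len(cutsets) + 1):
        for combo in combinations(cutsets, k):
            total += (-1) ** (k + 1) * cutset_prob(set().union(*combo), p)
    return total

def union_prob_mcub(cutsets, p):
    """Min cut upper bound: 1 - prod(1 - P(cutset))."""
    prod = 1.0
    for c in cutsets:
        prod *= 1.0 - cutset_prob(c, p)
    return 1.0 - prod

# Hypothetical basic events and cutsets (not from any real seismic PSA model).
p = {"A": 0.3, "B": 0.2, "C": 0.05, "D": 0.01, "E": 0.02}
cutsets = [{"A", "B"}, {"A", "C"}, {"C", "D"}, {"D", "E"}]
cutsets.sort(key=lambda c: cutset_prob(c, p), reverse=True)
major, minor = cutsets[:2], cutsets[2:]            # split by cutset probability

p_major = union_prob_exact(major, p)               # exact part (a BDD in the real tool)
p_minor = union_prob_mcub(minor, p)                # approximate part
top = 1.0 - (1.0 - p_major) * (1.0 - p_minor)      # combine, assuming group independence
print(top, union_prob_mcub(cutsets, p))            # refined estimate vs plain MCUB
```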

  14. A highly accurate method to solve Fisher's equation

    Indian Academy of Sciences (India)

    The solution of the Helmholtz equation was approximated by a sixth-order compact finite difference (CFD6) method in [29]. In [30], a CFD6 scheme has been presented to ... efficiency of the proposed method are reported in §3. Finally ... our discussion, one can apply the proposed method to solve the more general problem.

  15. Improved stochastic approximation methods for discretized parabolic partial differential equations

    Science.gov (United States)

    Guiaş, Flavius

    2016-12-01

    We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination. As a consequence, the order of convergence is increased. We illustrate the features of the improved method on a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).

  16. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
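
    As a concrete companion to the exact simulation methods mentioned above, the sketch below runs Gillespie's direct method for a toy birth-death model (production at a constant rate k, degradation at rate gamma per molecule); the approximation schemes reviewed in the article, such as the chemical Langevin equation or moment closures, are built on top of exactly this kind of model.

```python
import numpy as np

def gillespie_birth_death(k=10.0, gamma=1.0, x0=0, t_end=20.0, seed=3):
    """Gillespie direct method for the reactions 0 -> X (rate k) and X -> 0 (rate gamma*X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k, gamma * x            # propensities of production and degradation
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)   # waiting time to the next reaction
        x += 1 if rng.random() * a0 < a1 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print("final copy number:", states[-1], "(stationary mean is k/gamma = 10)")
```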

  17. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that aims to find a globally optimal set of input variables that minimizes the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...

  18. Approximate solution fuzzy pantograph equation by using homotopy perturbation method

    Science.gov (United States)

    Jameel, A. F.; Saaban, A.; Ahadkulov, H.; Alipiah, F. M.

    2017-09-01

    In this paper, the Homotopy Perturbation Method (HPM) is modified and formulated to find approximate solutions of fuzzy delay differential equations (FDDEs) involving a fuzzy pantograph equation. The solution obtained by HPM is in the form of an infinite series that converges to the actual solution of the FDDE, which is one of the benefits of this method. In addition, it can be used for solving high-order fuzzy delay differential equations directly, without reduction to a first-order system. Moreover, the accuracy of HPM can be assessed without needing the exact solution. The HPM is studied for fuzzy initial value problems involving the pantograph equation. Using the properties of fuzzy set theory, we reformulate the standard approximate method of HPM and obtain approximate solutions. The effectiveness of the proposed method is demonstrated for a third-order fuzzy pantograph equation.

  19. Space-angle approximations in the variational nodal method

    International Nuclear Information System (INIS)

    Lewis, E. E.; Palmiotti, G.; Taiwo, T.

    1999-01-01

    The variational nodal method is formulated such that the angular and spatial approximations may be examined separately. Spherical harmonic, simplified spherical harmonic, and discrete ordinate approximations are coupled to the primal hybrid finite element treatment of the spatial variables. Within this framework, two classes of spatial trial functions are presented: (1) orthogonal polynomials for the treatment of homogeneous nodes and (2) bilinear finite subelement trial functions for the treatment of fuel-assembly-sized nodes in which fuel-pin cell cross sections are represented explicitly. Polynomial and subelement trial functions are applied to benchmark water-reactor problems containing MOX fuel using spherical harmonic and simplified spherical harmonic approximations. The resulting accuracy and computing costs are compared

  20. A working-set framework for sequential convex approximation methods

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2008-01-01

    We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used...... to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations....

  1. A Highly Accurate Regular Domain Collocation Method for Solving Potential Problems in the Irregular Doubly Connected Domains

    Directory of Open Access Journals (Sweden)

    Zhao-Qing Wang

    2014-01-01

    Full Text Available By embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on the irregular doubly connected domain in a polar coordinate system. The formulations of the regular domain collocation method are constructed by using the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain, and an additional method is used to impose them. The least squares method can be used to solve the overconstrained equations. The function values of points in the irregular doubly connected domain can be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
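
    For readers unfamiliar with the interpolation kernel used above, here is a minimal one-dimensional barycentric Lagrange interpolation sketch (second barycentric formula with Chebyshev points); the embedding into an annulus and the collocation of the governing equations are not reproduced.

```python
import numpy as np

def barycentric_weights(x_nodes):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    diff = x_nodes[:, None] - x_nodes[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / np.prod(diff, axis=1)

def barycentric_interp(x_nodes, f_nodes, x_eval):
    """Second barycentric formula; exact at the nodes themselves."""
    w = barycentric_weights(x_nodes)
    d = x_eval[:, None] - x_nodes[None, :]
    hit = np.isclose(d, 0.0)
    d[hit] = 1.0                           # avoid division by zero at the nodes
    c = w / d
    p = (c @ f_nodes) / c.sum(axis=1)
    rows, cols = np.where(hit)
    p[rows] = f_nodes[cols]                # restore exact nodal values
    return p

n = 16
nodes = np.cos(np.pi * np.arange(n + 1) / n)        # Chebyshev points on [-1, 1]
f = np.exp(nodes) * np.sin(3 * nodes)
x = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(barycentric_interp(nodes, f, x) - np.exp(x) * np.sin(3 * x)))
print("max interpolation error:", err)              # spectrally small for smooth f
```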

  2. The C_n method for approximation of the Boltzmann equation; La méthode C_n d'approximation de l'équation de Boltzmann

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P; Kavenoky, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1968-01-15

    In a new method of approximation of the Boltzmann equation, one starts from a particular form of the equation which involves only the angular flux at the boundary of the considered medium and in which the space variable does not appear explicitly. Expanding the angular flux of neutrons leaking from the medium in orthogonal polynomials, and making no assumption about the angular flux within the medium, very good approximations are obtained for several classical plane-geometry problems: the albedo of and transmission by slabs, the extrapolation length of the Milne problem, and the spectrum of neutrons reflected by a semi-infinite slowing-down medium. The method can be extended to other geometries. (authors)

  3. An approximation to the interference term using Frobenius Method

    Energy Technology Data Exchange (ETDEWEB)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mail: aquilino@lmp.ufrj.br

    2007-07-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ), using the Frobenius method and the variation of parameters. The analytical expression for χ(x,ξ), obtained in terms of elementary functions, is very simple and precise. In this work the approximations are applied to the Doppler broadening functions and to the interference term in determining neutron cross sections. Results were validated for the resonances of the ²³⁸U isotope for different energies and temperature ranges. (author)

  4. An approximation to the interference term using Frobenius Method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C. da

    2007-01-01

    An analytical approximation of the interference term χ(x,ξ) is proposed. The approximation is based on the differential equation for χ(x,ξ), using the Frobenius method and the variation of parameters. The analytical expression for χ(x,ξ), obtained in terms of elementary functions, is very simple and precise. In this work the approximations are applied to the Doppler broadening functions and to the interference term in determining neutron cross sections. Results were validated for the resonances of the ²³⁸U isotope for different energies and temperature ranges. (author)

  5. A cluster approximation for the transfer-matrix method

    International Nuclear Information System (INIS)

    Surda, A.

    1990-08-01

    A cluster approximation for the transfer-matrix method is formulated. The calculation of the partition function of lattice models is transformed to a nonlinear mapping problem. The method yields the free energy, correlation functions and the phase diagrams for a large class of lattice models. The high accuracy of the method is exemplified by the calculation of the critical temperature of the Ising model. (author). 14 refs, 2 figs, 1 tab

  6. Introduction to Methods of Approximation in Physics and Astronomy

    Science.gov (United States)

    van Putten, Maurice H. P. M.

    2017-04-01

    Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. Realizing the full discovery potential of these multimessenger approaches increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform, and examples of numerical integration of ordinary differential equations together with some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture the main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify

  7. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Reviews methods commonly used for investigating enzyme activity using catalase, and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  8. A new analytical approximation to the Duffing-harmonic oscillator

    International Nuclear Information System (INIS)

    Fesanghary, M.; Pirbodaghi, T.; Asghari, M.; Sojoudi, H.

    2009-01-01

    In this paper, a novel analytical approximation to the nonlinear Duffing-harmonic oscillator is presented. The variational iteration method (VIM) is used to obtain some accurate analytical results for frequency. The accuracy of the results is excellent in the whole range of oscillation amplitude variations.

  9. Approximation methods for efficient learning of Bayesian networks

    CERN Document Server

    Riggelsen, C

    2008-01-01

    This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.

  10. Approximation methods for the partition functions of anharmonic systems

    International Nuclear Information System (INIS)

    Lew, P.; Ishida, T.

    1979-07-01

    The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergences have been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with the numerically exact calculations

  11. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    Velocity of fluid flow in underground porous media is 6-12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in this kind of simulation, high distortion of the final results may occur [1-4]. To fit the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accurate methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accurate finite difference method is confirmed to have numerical error as low as 10⁻⁵% while the high-accurate numerical integration method has numerical error around 0%. Thus, the approach combining the high-accurate finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of general full-tensor permeability such as maximum and minimum permeability components, principal direction and anisotropic ratio. Copyright © Global-Science Press 2016.
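
    To illustrate the kind of accuracy gap the record above refers to, the sketch below compares a standard second-order central difference with a fourth-order stencil on a smooth test function. This is a generic illustration, not the permeability scheme of the paper:

        import numpy as np

        def d1_second_order(f, x, h):
            """Standard 2nd-order central difference for f'(x)."""
            return (f(x + h) - f(x - h)) / (2.0 * h)

        def d1_fourth_order(f, x, h):
            """4th-order central difference for f'(x); error decays as O(h^4)."""
            return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

        x0, h = 1.0, 1e-2
        exact = np.cos(x0)
        print(abs(d1_second_order(np.sin, x0, h) - exact))  # error ~ h^2
        print(abs(d1_fourth_order(np.sin, x0, h) - exact))  # error ~ h^4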

  12. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis function for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the

  13. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing transient electromagnetic wave interactions on two-dimensional waveguides. Numerical results demonstrate the proposed method's superiority over the TD-DG-FEM that employs approximate boundary conditions and perfectly matched layers. Additionally, it is shown that the proposed method can produce the solution with ten- to eleven-digit accuracy when high-order spatial basis functions are used to discretize the Maxwell equations as well as the EACs. © 1963-2012 IEEE.

  14. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target are of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  15. The complex variable boundary element method: Applications in determining approximative boundaries

    Science.gov (United States)

    Hromadka, T.V.

    1984-01-01

    The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.

  16. Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation

    DEFF Research Database (Denmark)

    Tataru, Paula; Bataillon, Thomas; Hobolth, Asger

    2015-01-01

    … frequencies and the influence of evolutionary pressures, such as mutation and selection. Despite its simple mathematical formulation, exact results for the distribution of allele frequency (DAF) as a function of time are not available in closed analytic form. Existing approximations build … the probability of being on the boundary can be positive, corresponding to the allele being either lost or fixed. Here, we introduce the beta with spikes, an extension of the beta approximation, which explicitly models the loss and fixation probabilities as two spikes at the boundaries. We show that the addition …

  17. A Gaussian Approximation Potential for Silicon

    Science.gov (United States)

    Bernstein, Noam; Bartók, Albert; Kermode, James; Csányi, Gábor

    We present an interatomic potential for silicon using the Gaussian Approximation Potential (GAP) approach, which uses the Gaussian process regression method to approximate the reference potential energy surface as a sum of atomic energies. Each atomic energy is approximated as a function of the local environment around the atom, which is described with the smooth overlap of atomic environments (SOAP) descriptor. The potential is fit to a database of energies, forces, and stresses calculated using density functional theory (DFT) on a wide range of configurations from zero and finite temperature simulations. These include crystalline phases, liquid, amorphous, and low coordination structures, and diamond-structure point defects, dislocations, surfaces, and cracks. We compare the results of the potential to DFT calculations, as well as to previously published models including Stillinger-Weber, Tersoff, modified embedded atom method (MEAM), and ReaxFF. We show that it is very accurate as compared to the DFT reference results for a wide range of properties, including low energy bulk phases, liquid structure, as well as point, line, and plane defects in the diamond structure.

  18. An Approximate Method for the Acoustic Attenuating VTI Eikonal Equation

    KAUST Repository

    Hao, Q.

    2017-05-26

    We present an approximate method to solve the acoustic eikonal equation for attenuating transversely isotropic media with a vertical symmetry axis (VTI). A perturbation method is used to derive the perturbation formula for complex-valued traveltimes. The application of Shanks transform further enhances the accuracy of approximation. We derive both analytical and numerical solutions to the acoustic eikonal equation. The analytic solution is valid for homogeneous VTI media with moderate anellipticity and strong attenuation and attenuation-anisotropy. The numerical solution is applicable for inhomogeneous attenuating VTI media.

  19. An Approximate Method for the Acoustic Attenuating VTI Eikonal Equation

    KAUST Repository

    Hao, Q.; Alkhalifah, Tariq Ali

    2017-01-01

    We present an approximate method to solve the acoustic eikonal equation for attenuating transversely isotropic media with a vertical symmetry axis (VTI). A perturbation method is used to derive the perturbation formula for complex-valued traveltimes. The application of Shanks transform further enhances the accuracy of approximation. We derive both analytical and numerical solutions to the acoustic eikonal equation. The analytic solution is valid for homogeneous VTI media with moderate anellipticity and strong attenuation and attenuation-anisotropy. The numerical solution is applicable for inhomogeneous attenuating VTI media.
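
    Records 18 and 19 above both note that a Shanks transform is applied to enhance the accuracy of the perturbation series for the complex-valued traveltimes. The sketch below shows the generic Shanks transform on a slowly converging alternating series (not the traveltime expansion itself); the test series is illustrative only:

        import math

        def shanks(seq):
            """One pass of the Shanks transform over a sequence of partial sums."""
            return [(seq[n + 1] * seq[n - 1] - seq[n] ** 2)
                    / (seq[n + 1] + seq[n - 1] - 2 * seq[n])
                    for n in range(1, len(seq) - 1)]

        # Partial sums of ln(2) = 1 - 1/2 + 1/3 - ... converge slowly;
        # one Shanks pass accelerates them dramatically.
        partial, s = [], 0.0
        for k in range(1, 12):
            s += (-1) ** (k + 1) / k
            partial.append(s)

        print(partial[-1] - math.log(2))          # slow: error ~ 0.04
        print(shanks(partial)[-1] - math.log(2))  # much smaller error after one pass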

  20. An explicit approximate solution to the Duffing-harmonic oscillator by a cubication method

    International Nuclear Information System (INIS)

    Belendez, A.; Mendez, D.I.; Fernandez, E.; Marini, S.; Pascual, I.

    2009-01-01

    The nonlinear oscillations of a Duffing-harmonic oscillator are investigated by an approximate method based on the 'cubication' of the initial nonlinear differential equation. In this cubication method the restoring force is expanded in Chebyshev polynomials and the original nonlinear differential equation is approximated by a Duffing equation in which the coefficients for the linear and cubic terms depend on the initial amplitude, A. The replacement of the original nonlinear equation by an approximate Duffing equation allows us to obtain explicit approximate formulas for the frequency and the solution as a function of the complete elliptic integral of the first kind and the Jacobi elliptic function, respectively. These explicit formulas are valid for all values of the initial amplitude and we conclude that this cubication method works very well for the whole range of initial amplitudes. Excellent agreement of the approximate frequencies and periodic solutions with the exact ones is demonstrated and discussed and the relative error for the approximate frequency is as low as 0.071%. Unlike other approximate methods applied to this oscillator, which are not capable of reproducing exactly the behaviour of the approximate frequency when A tends to zero, the cubication method used in this Letter predicts exactly the behaviour of the approximate frequency not only when A tends to infinity, but also when A tends to zero. Finally, a closed-form expression for the approximate frequency is obtained in terms of elementary functions. To do this, the relationship between the complete elliptic integral of the first kind and the arithmetic-geometric mean, as well as Legendre's formula to approximately obtain this mean, are used.
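
    The closing sentences above rely on the classical identity linking the complete elliptic integral of the first kind to the arithmetic-geometric mean, K(k) = π / (2·AGM(1, √(1−k²))). A minimal numerical sketch of that identity (not the cubication procedure itself) is:

        import math

        def agm(a, b, tol=1e-15):
            """Arithmetic-geometric mean of a and b."""
            while abs(a - b) > tol:
                a, b = (a + b) / 2.0, math.sqrt(a * b)
            return a

        def complete_elliptic_K(k):
            """Complete elliptic integral of the first kind via K(k) = pi / (2*AGM(1, sqrt(1-k^2)))."""
            return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - k * k)))

        # K(0.5) ~ 1.6858; note scipy.special.ellipk uses the parameter m = k^2 convention.
        print(complete_elliptic_K(0.5))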

  1. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki [Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); CREST, JST, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012 (Japan); Shiro, Masanori [Department of Mathematical Informatics, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656 (Japan); Mathematical Neuroinformatics Group, Advanced Industrial Science and Technology, Tsukuba, Ibaraki 305-8568 (Japan); Takahashi, Nozomu; Mas, Paloma [Center for Research in Agricultural Genomics (CRAG), Consorci CSIC-IRTA-UAB-UB, Barcelona 08193 (Spain)

    2015-01-15

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  2. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    Science.gov (United States)

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  3. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Aihara, Kazuyuki; Suzuki, Hideyuki; Shiro, Masanori; Takahashi, Nozomu; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit from the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and allowing for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
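
    The three records above describe expressing a point in high-dimensional phase space as a convex combination of reference points, with the weights found by linear programming while the approximation errors are allowed explicitly. The sketch below is one plausible LP formulation of that idea (not the authors' code); the variable layout and the use of scipy.optimize.linprog are assumptions for illustration:

        import numpy as np
        from scipy.optimize import linprog

        def barycentric_weights(points, target):
            """Nonnegative weights summing to 1 that best reproduce `target` as a
            convex combination of the rows of `points`, minimizing the L1 error.

            Generic sketch of the idea described above, not the authors' implementation;
            `points` has shape (m, d), `target` shape (d,).
            """
            m, d = points.shape
            # Variables z = [w_1..w_m, e_1..e_d]; minimize the sum of error slacks e.
            c = np.concatenate([np.zeros(m), np.ones(d)])
            A_ub = np.block([[points.T, -np.eye(d)],
                             [-points.T, -np.eye(d)]])
            b_ub = np.concatenate([target, -target])
            A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]
            b_eq = np.array([1.0])
            bounds = [(0, None)] * (m + d)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[:m], res.fun   # weights and total L1 approximation error

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        w, err = barycentric_weights(pts, np.array([0.25, 0.25]))
        print(w, err)   # ~[0.5, 0.25, 0.25], error ~0 for a point inside the simplex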

  4. A Bayesian method and its variational approximation for prediction of genomic breeding values in multiple traits

    Directory of Open Access Journals (Sweden)

    Hayashi Takeshi

    2013-01-01

    Full Text Available Abstract Background Genomic selection is an effective tool for animal and plant breeding, allowing effective individual selection without phenotypic records through the prediction of genomic breeding value (GBV). To date, genomic selection has focused on a single trait. However, actual breeding often targets multiple correlated traits, and, therefore, joint analysis taking into consideration the correlation between traits, which might result in more accurate GBV prediction than analyzing each trait separately, is suitable for multi-trait genomic selection. This would require an extension of the prediction model for single-trait GBV to the multi-trait case. As the computational burden of multi-trait analysis is even higher than that of single-trait analysis, an effective computational method for constructing a multi-trait prediction model is also needed. Results We described a Bayesian regression model incorporating variable selection for jointly predicting GBVs of multiple traits and devised both an MCMC iteration and variational approximation for Bayesian estimation of parameters in this multi-trait model. The proposed Bayesian procedures with MCMC iteration and variational approximation were referred to as MCBayes and varBayes, respectively. Using simulated datasets of SNP genotypes and phenotypes for three traits with high and low heritabilities, we compared the accuracy in predicting GBVs between multi-trait and single-trait analyses as well as between MCBayes and varBayes. The results showed that, compared to single-trait analysis, multi-trait analysis enabled much more accurate GBV prediction for low-heritability traits correlated with high-heritability traits, by utilizing the correlation structure between traits, while the prediction accuracy for uncorrelated low-heritability traits was comparable to or lower than that of single-trait analysis, depending on the setting for prior probability that a SNP has zero

  5. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Science.gov (United States)

    Kelly, Steven; Maini, Philip K

    2013-01-01

    The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  6. DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    Full Text Available The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.

  7. Accuracy of the "decoupled l-dominant" approximation for atom-molecule scattering

    International Nuclear Information System (INIS)

    Green, S.

    1976-01-01

    Cross sections for rotational excitation and spectral pressure broadening of HD, HCl, CO, and HCN due to collisions with low energy He atoms have been computed within the "decoupled l-dominant" (DLD) approximation recently suggested by DePristo and Alexander. These are compared with accurate close coupling results and also with two similar approximations, the effective potential of Rabitz and the coupled states of McGuire and Kouri. These collision systems are all dominated by short-range repulsive interactions although they have varying degrees of anisotropy and inelasticity. The coupled states method is expected to be valid for such systems, but they should be a severe test of the DLD approximation, which is expected to be better for long-range interactions. Nonetheless, DLD predictions of state-to-state cross sections are rather good, being only slightly less accurate than coupled states results. DLD is far superior to either the coupled states or effective potential methods for pressure broadening calculations, although it may not be uniformly of the quantitative accuracy desirable for obtaining intermolecular potentials from experimental data

  8. On quasiclassical approximation in the inverse scattering method

    International Nuclear Information System (INIS)

    Geogdzhaev, V.V.

    1985-01-01

    Using as an example quasiclassical limits of the Korteweg-de Vries equation and nonlinear Schroedinger equation, the quasiclassical limiting variant of the inverse scattering problem method is presented. In quasiclassical approximation the inverse scattering problem for the Schroedinger equation is reduced to the classical inverse scattering problem

  9. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-07

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.

  10. Hierarchical low-rank approximation for high dimensional approximation

    KAUST Repository

    Nouy, Anthony

    2016-01-01

    Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using cross-validation methods.
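
    Hierarchical tensor formats generalize ordinary low-rank matrix approximation. As the simplest building block of the rank-structured approximations mentioned in the two records above (and not the hierarchical algorithm itself), a truncated SVD gives the best rank-r approximation of a matrix:

        import numpy as np

        def truncated_svd(A, r):
            """Best rank-r approximation of A in the Frobenius/spectral norm (Eckart-Young)."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return (U[:, :r] * s[:r]) @ Vt[:r, :]

        # A smooth bivariate function sampled on a grid is numerically low-rank.
        x = np.linspace(0.0, 1.0, 200)
        A = np.exp(-np.subtract.outer(x, x) ** 2)
        for r in (2, 5, 10):
            err = np.linalg.norm(A - truncated_svd(A, r)) / np.linalg.norm(A)
            print(r, err)   # relative error drops rapidly with rank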

  11. Extended Finite Element Method with Simplified Spherical Harmonics Approximation for the Forward Model of Optical Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Wei Li

    2012-01-01

    Full Text Available An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with simplified spherical harmonics approximation (SPN). In the XFEM scheme for the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, which leads to excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging.
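
    The abstract above builds the enriched basis from a signed distance (level set) description of the internal boundary. The sketch below computes a signed distance to a circular inclusion and a commonly used absolute-value enrichment built from it; the specific enrichment form is an illustrative assumption and may differ from the paper's choice:

        import numpy as np

        def signed_distance_circle(x, y, cx=0.0, cy=0.0, radius=1.0):
            """Signed distance to a circular interface: negative inside, positive outside."""
            return np.hypot(x - cx, y - cy) - radius

        def abs_enrichment(phi):
            """A common weak-discontinuity enrichment built from the level set: psi = |phi|.
            (Illustrative choice only; the XFEM/SPN paper above may use a different form.)"""
            return np.abs(phi)

        xx, yy = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
        phi = signed_distance_circle(xx, yy, radius=1.5)
        print(abs_enrichment(phi))   # kinked (C0) across the interface, smooth elsewhere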

  12. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  13. Accurate conjugate gradient methods for families of shifted systems

    NARCIS (Netherlands)

    Eshof, J. van den; Sleijpen, G.L.G.

    We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The

  14. Nonlinear Multigrid solver exploiting AMGe Coarse Spaces with Approximation Properties

    DEFF Research Database (Denmark)

    Christensen, Max la Cour; Villa, Umberto; Engsig-Karup, Allan Peter

    The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse … properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes has the ability to be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate …

  15. Smart density: a more accurate method of measuring rural residential density for health-related research

    Directory of Open Access Journals (Sweden)

    Gibson Lucinda

    2010-02-01

    Full Text Available Abstract Background Studies involving the built environment have typically relied on US Census data to measure residential density. However, census geographic units are often unsuited to health-related research, especially in rural areas where development is clustered and discontinuous. Objective We evaluated the accuracy of both standard census methods and alternative GIS-based methods to measure rural density. Methods We compared residential density (units/acre) in 335 Vermont school neighborhoods using conventional census geographic units (tract, block group and block) with two GIS buffer measures: a 1-kilometer (km) circle around the school and a 1-km circle intersected with a 100-meter (m) road-network buffer. The accuracy of each method was validated against the actual residential density for each neighborhood based on the Vermont e911 database, which provides an exact geo-location for all residential structures in the state. Results Standard census measures underestimate residential density in rural areas. In addition, the degree of error is inconsistent, so even the relative rank of neighborhood densities varies across census measures. Census measures explain only 61% to 66% of the variation in actual residential density. In contrast, GIS buffer measures explain approximately 90% of the variation. Combining a 1-km circle with a road-network buffer provides the closest approximation of actual residential density. Conclusion Residential density based on census units can mask clusters of development in rural areas and distort associations between residential density and health-related behaviors and outcomes. GIS-defined buffers, including a 1-km circle and a road-network buffer, can be used in conjunction with census data to obtain a more accurate measure of residential density.
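
    A minimal sketch of the 1-km circular buffer measure described above, assuming planar coordinates in metres and synthetic residence locations (the road-network intersection step is omitted):

        import numpy as np

        def buffer_density(school_xy, residence_xy, radius_m=1000.0):
            """Residential density (units per acre) within a circular buffer around a school.

            Coordinates are assumed to be in a planar projection in metres; this is a
            simplified stand-in for the GIS buffer measure described above.
            """
            d = np.hypot(residence_xy[:, 0] - school_xy[0],
                         residence_xy[:, 1] - school_xy[1])
            units = int(np.sum(d <= radius_m))
            area_acres = np.pi * radius_m**2 / 4046.86   # 1 acre = 4046.86 m^2
            return units / area_acres

        rng = np.random.default_rng(1)
        homes = rng.uniform(-2000, 2000, size=(500, 2))   # synthetic residence locations
        print(buffer_density((0.0, 0.0), homes))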

  16. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities which are highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.

  17. Prestack wavefield approximations

    KAUST Repository

    Alkhalifah, Tariq

    2013-09-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised approximations free of such singularities which are highly accurate. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus, introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
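
    Both records above use Padé expansions whose denominator is one order lower than the numerator. As a generic illustration of such an unbalanced Padé approximant (not the DSR expression itself), the sketch below compares the [2/1] Padé approximant of exp(x) with the third-order Taylor polynomial:

        import math

        def pade_2_1_exp(x):
            """[2/1] Pade approximant of exp(x): numerator one order higher than the
            denominator, mirroring the unbalanced expansions mentioned above."""
            return (1.0 + 2.0 * x / 3.0 + x * x / 6.0) / (1.0 - x / 3.0)

        def taylor3_exp(x):
            """Third-order Taylor polynomial of exp(x), for comparison."""
            return 1.0 + x + x * x / 2.0 + x ** 3 / 6.0

        for x in (0.5, 1.0, 2.0):
            exact = math.exp(x)
            # Both match exp(x) to O(x^3); near the expansion point the Pade form is tighter.
            print(x, abs(pade_2_1_exp(x) - exact), abs(taylor3_exp(x) - exact))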

  18. Circumstances under which various approximate relativistic and nonrelativistic theories yield accurate Compton scattering doubly differential cross sections at high photon energy

    International Nuclear Information System (INIS)

    LaJohn, L A; Pratt, R H

    2009-01-01

    We discuss the increase in error with increasing nuclear charge Z in the use of the relativistic impulse approximation (RIA) for the calculation of Compton K-shell scattering doubly differential cross sections (DDCS). We also show that nonrelativistic (nr) expressions can be used to obtain accurate peak region DDCS at scattering angles less than about 35° even at incident photon energies ω_i exceeding 1 MeV, if Z < 30. This is possible because in the Compton peak region, as θ→0, a low momentum transfer limit is being approached.

  19. Identification of approximately duplicate material records in ERP systems

    Science.gov (United States)

    Zong, Wei; Wu, Feng; Chu, Lap-Keung; Sculli, Domenic

    2017-03-01

    The quality of master data is crucial for the accurate functioning of the various modules of an enterprise resource planning (ERP) system. This study addresses specific data problems arising from the generation of approximately duplicate material records in ERP databases. Such problems are mainly due to the firm's lack of unique and global identifiers for the material records, and to the arbitrary assignment of alternative names for the same material by various users. Traditional duplicate detection methods are ineffective in identifying such approximately duplicate material records because these methods typically rely on string comparisons of each field. To address this problem, a machine learning-based framework is developed to recognise semantic similarity between strings and to further identify and reunify approximately duplicate material records - a process referred to as de-duplication in this article. First, the keywords of the material records are extracted to form vectors of discriminating words. Second, a machine learning method using a probabilistic neural network is applied to determine the semantic similarity between these material records. The approach was evaluated using data from a real case study. The test results indicate that the proposed method outperforms traditional algorithms in identifying approximately duplicate material records.
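
    The framework above first extracts keywords into vectors of discriminating words before a probabilistic neural network scores semantic similarity. The sketch below covers only a naive version of that front end, using token counts and cosine similarity; the tokenization rules and example strings are illustrative assumptions, and the modest score for two descriptions of the same part is precisely why a learned similarity model is needed:

        import re
        from collections import Counter
        from math import sqrt

        def keyword_vector(description):
            """Crude keyword extraction: lowercase alphanumeric tokens of length >= 3, with counts."""
            tokens = re.findall(r"[a-z0-9]+", description.lower())
            return Counter(t for t in tokens if len(t) >= 3)

        def cosine_similarity(v1, v2):
            """Cosine similarity between two sparse keyword count vectors."""
            dot = sum(v1[t] * v2[t] for t in set(v1) & set(v2))
            norm = sqrt(sum(c * c for c in v1.values())) * sqrt(sum(c * c for c in v2.values()))
            return dot / norm if norm else 0.0

        a = keyword_vector("Hex bolt M8 x 40 stainless steel DIN 933")
        b = keyword_vector("Stainless steel hexagon bolt DIN933 M8x40")
        # Only ~0.5 despite describing the same material, because string-level
        # comparison misses 'DIN 933' vs 'DIN933' and 'hex' vs 'hexagon'.
        print(cosine_similarity(a, b))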

  20. Enhanced Multistage Homotopy Perturbation Method: Approximate Solutions of Nonlinear Dynamic Systems

    Directory of Open Access Journals (Sweden)

    Daniel Olvera

    2014-01-01

    Full Text Available We introduce a new approach called the enhanced multistage homotopy perturbation method (EMHPM) that is based on the homotopy perturbation method (HPM) and the usage of time subintervals to find the approximate solution of differential equations with strong nonlinearities. We also study the convergence of our proposed EMHPM approach based on the value of the control parameter h by following the homotopy analysis method (HAM). At the end of the paper, we compare the derived EMHPM approximate solutions of some nonlinear physical systems with their corresponding numerical integration solutions obtained by using the classical fourth order Runge-Kutta method via the amplitude-time response curves.

  1. Quantal density functional theory II. Approximation methods and applications

    International Nuclear Information System (INIS)

    Sahni, Viraht

    2010-01-01

    This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximation methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)

  2. Parallel iterative solvers and preconditioners using approximate hierarchical methods

    Energy Technology Data Exchange (ETDEWEB)

    Grama, A.; Kumar, V.; Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we report results of the performance, convergence, and accuracy of a parallel GMRES solver for Boundary Element Methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut / Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on truncated Green's function. Experimental results on a 256 processor Cray T3D are presented.

  3. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational algorithms …

  4. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.

  5. Beyond the random phase approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2013-01-01

    We assess the performance of a recently proposed renormalized adiabatic local density approximation (rALDA) for ab initio calculations of electronic correlation energies in solids and molecules. The method is an extension of the random phase approximation (RPA) derived from time-dependent density functional theory and the adiabatic connection fluctuation-dissipation theorem and contains no fitted parameters. The new kernel is shown to preserve the accurate description of dispersive interactions from RPA while significantly improving the description of short-range correlation in molecules, insulators, and metals. For molecular atomization energies, the rALDA is a factor of 7 better than RPA and a factor of 4 better than the Perdew-Burke-Ernzerhof (PBE) functional when compared to experiments, and a factor of 3 (1.5) better than RPA (PBE) for cohesive energies of solids. For transition metals …

  6. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  7. Variational, projection methods and Pade approximants in scattering theory

    International Nuclear Information System (INIS)

    Turchetti, G.

    1980-12-01

    Several aspects of scattering theory are discussed in a perturbative scheme. The Padé approximant method plays an important role in such a scheme. Soliton solutions are also discussed in the same scheme. (L.C.) [pt

  8. Efficient and accurate log-Lévy approximations to Lévy driven LIBOR models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David

    2011-01-01

    The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). In this work, we con … ratchet caps show that the approximations perform very well. In addition, we also consider the log-Lévy approximation of annuities, which offers good approximations for high volatility regimes.

  9. An approximate analysis of expected cycle time in business process execution

    NARCIS (Netherlands)

    Ha, B.H.; Reijers, H.A.; Bae, J.; Bae, H.; Eder, J.; Dustdar, S

    2006-01-01

    The accurate prediction of business process performance during its design phase can facilitate the assessment of existing processes and the generation of alternatives. In this paper, an approximation method to estimate the cycle time of a business process is introduced. First, we propose a process

  10. A test of the adhesion approximation for gravitational clustering

    Science.gov (United States)

    Melott, Adrian L.; Shandarin, Sergei; Weinberg, David H.

    1993-01-01

    We quantitatively compare a particle implementation of the adhesion approximation to fully non-linear, numerical 'N-body' simulations. Our primary tool, cross-correlation of N-body simulations with the adhesion approximation, indicates good agreement, better than that found by the same test performed with the Zel'dovich approximation (hereafter ZA). However, the cross-correlation is not as good as that of the truncated Zel'dovich approximation (TZA), obtained by applying the Zel'dovich approximation after smoothing the initial density field with a Gaussian filter. We confirm that the adhesion approximation produces an excessively filamentary distribution. Relative to the N-body results, we also find that: (a) the power spectrum obtained from the adhesion approximation is more accurate than that from ZA or TZA, (b) the error in the phase angle of Fourier components is worse than that from TZA, and (c) the mass distribution function is more accurate than that from ZA or TZA. It appears that adhesion performs well statistically, but that TZA is more accurate dynamically, in the sense of moving mass to the right place.

  11. Adomian Decomposition Method for Transient Neutron Transport with Pomraning-Eddington Approximation

    International Nuclear Information System (INIS)

    Hendi, A.A.; Abulwafa, E.E.

    2008-01-01

    The time-dependent neutron transport problem is approximated using the Pomraning-Eddington approximation. This is a two-flux approximation that expands the angular intensity in terms of the energy density and the net flux. This approximation converts the integro-differential Boltzmann equation into two first order differential equations. The Adomian decomposition method, which is used to solve linear or nonlinear differential equations, is applied to solve the resulting two differential equations to find the neutron energy density and net flux, which can be used to calculate the neutron angular intensity through the Pomraning-Eddington approximation
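
    The Adomian decomposition method referred to above builds the solution as a series whose components are obtained recursively by integration. A minimal sketch on a much simpler problem than the transport equations, y' = y with y(0) = 1, where the components reproduce the Taylor series of exp(t):

        import sympy as sp

        t = sp.symbols("t")

        def adomian_linear_ode(n_terms=6):
            """Adomian decomposition for y' = y, y(0) = 1:
            y_0 = 1, y_{k+1}(t) = integral_0^t y_k(s) ds, and y = sum_k y_k.
            Each component reproduces one Taylor term of exp(t)."""
            y_k = sp.Integer(1)
            total = y_k
            for _ in range(n_terms - 1):
                y_k = sp.integrate(y_k, (t, 0, t))
                total += y_k
            return sp.expand(total)

        print(adomian_linear_ode())            # 1 + t + t**2/2 + ... + t**5/120
        print(sp.series(sp.exp(t), t, 0, 6))   # matches term by term up to O(t**6)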

  12. Introduction to methods of approximation in physics and astronomy

    CERN Document Server

    van Putten, Maurice H P M

    2017-01-01

    This textbook provides students with a solid introduction to the techniques of approximation commonly used in data analysis across physics and astronomy. The choice of methods included is based on their usefulness and educational value, their applicability to a broad range of problems and their utility in highlighting key mathematical concepts. Modern astronomy reveals an evolving universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data-analysis. The book is organized to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection …

  13. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic; Nouy, Anthony

    2017-01-01

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  14. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic

    2017-06-30

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  15. An approximate method to calculate ionization of LTE and non-LTE plasma

    International Nuclear Information System (INIS)

    Zhang Jun; Gu Peijun

    1987-01-01

    When matter, especially a high Z element, is heated to high temperature, it will be ionized many times. The degree of ionization has a strong effect on many plasma properties, so an approximate method to calculate the mean ionization degree is needed for solving many practical problems. An analytical expression which is convenient for approximate numerical calculation is given by fitting it to the scaling law and numerical results of the ionization potential of the Thomas-Fermi statistical model. In the LTE case, the ionization degree of Au calculated by using the approximate method is in agreement with that of the average ion model. By extending the approximate method to the non-LTE case, the ionization degree of Au is similarly calculated according to the Corona model and the Collision-Radiation model (C-R). The results of the Corona model agree with the published data quite well, while the results of C-R approach those of the Corona model as the density is reduced and approach those of LTE as the density is increased. Finally, all approximately calculated results of the ionization degree of Au and their comparison are given in figures and tables

  16. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for the magnetic field measurement of quadrupole magnets. The method of obtaining the information on the field gradient and the effective focussing length is given. A new scheme to obtain the information on the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)

  17. Semi-implicit iterative methods for low Mach number turbulent reacting flows: Operator splitting versus approximate factorization

    Science.gov (United States)

    MacArt, Jonathan F.; Mueller, Michael E.

    2016-12-01

    Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
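    The Strang-splitting idea underlying the second scheme can be shown on a scalar model problem. Below is a minimal, hedged sketch for y' = T(y) + R(y) with a linear "transport" term and a nonlinear "reaction" term, each advanced by its exact substep solution; the operators, rates, and test problem are illustrative assumptions and not the DNS solver of the record. Halving the time step should reduce the error by roughly a factor of four, consistent with the formal second-order accuracy discussed above.

    ```python
    import numpy as np

    a, k = 1.0, 2.0                  # "transport" and "reaction" rates (illustrative)

    def transport_exact(y, dt):      # exact substep solve of y' = -a*y
        return y * np.exp(-a * dt)

    def reaction_exact(y, dt):       # exact substep solve of y' = -k*y**2
        return y / (1.0 + k * y * dt)

    def strang_step(y, dt):
        """One Strang-split step: half reaction, full transport, half reaction."""
        y = reaction_exact(y, 0.5 * dt)
        y = transport_exact(y, dt)
        return reaction_exact(y, 0.5 * dt)

    def exact(y0, t):                # closed-form solution of y' = -a*y - k*y**2
        return a * y0 * np.exp(-a * t) / (a + k * y0 * (1.0 - np.exp(-a * t)))

    y0, T = 1.0, 1.0
    for n_steps in (10, 20, 40):
        dt, y = T / n_steps, y0
        for _ in range(n_steps):
            y = strang_step(y, dt)
        print(f"dt = {dt:.3f}  error = {abs(y - exact(y0, T)):.3e}")
    ```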

  18. Optimization in engineering sciences approximate and metaheuristic methods

    CERN Document Server

    Stefanoiu, Dan; Popescu, Dumitru; Filip, Florin Gheorghe; El Kamel, Abdelkader

    2014-01-01

    The purpose of this book is to present the main metaheuristics and approximate and stochastic methods for optimization of complex systems in Engineering Sciences. It has been written within the framework of the European Union project ERRIC (Empowering Romanian Research on Intelligent Information Technologies), which is funded by the EU's FP7 Research Potential program and has been developed in co-operation between French and Romanian teaching researchers. Through the principles of various proposed algorithms (with additional references) this book allows the reader to explore various methods o

  19. A method of accurate determination of voltage stability margin

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology, Wroclaw (Poland); Klimek, A. [AREVA Transmission and Distribution, Stafford (United Kingdom)

    2008-07-01

    In a developing power system disturbance, voltage instability at the receiving substations often contributes to deteriorating system stability, which may eventually lead to severe blackouts. The voltage stability margin at receiving substations may be used to determine measures to prevent voltage collapse, primarily by operating or blocking the transformer tap-changing device, or by load shedding. The best measure of the stability margin is the actual load-to-source impedance ratio and its critical value, which is unity. This paper presented an accurate method of calculating the load-to-source impedance ratio, derived from the Thevenin equivalent circuit of the system, which led to the calculation of the stability margin. The paper described the calculation of the load-to-source impedance ratio, including the supporting equations. The calculation was based on the very definition of voltage stability, which says that system stability is maintained as long as the change in power that follows an increase of admittance is positive. The stability margin assessment method was tested by simulation for a number of power network structures and simulation scenarios. Results of the simulations revealed that the method is accurate and stable for all possible events occurring downstream of the device location. 3 refs., 8 figs.
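    The central quantity, the load-to-source impedance ratio obtained from a Thevenin equivalent, can be sketched generically. The code below is a hedged illustration of the underlying idea only (identifying the Thevenin impedance from two phasor measurements and comparing it with the apparent load impedance, whose critical ratio is unity); the synthetic numbers and the two-measurement identification are assumptions, not the paper's specific derivation.

    ```python
    import numpy as np

    # Assumed "true" system used only to generate synthetic measurements
    E_true = 1.0 + 0.0j             # Thevenin source EMF (per unit)
    Zs_true = 0.05 + 0.25j          # Thevenin source impedance (per unit)

    def measure(Z_load):
        """Voltage and current phasors at the load bus for a given load impedance."""
        I = E_true / (Zs_true + Z_load)
        return Z_load * I, I

    # Two measurements taken as the load changes
    V1, I1 = measure(1.2 + 0.5j)
    V2, I2 = measure(1.0 + 0.4j)

    # Thevenin identification from V = E - Zs*I written at both operating points
    Zs_est = (V2 - V1) / (I1 - I2)
    E_est = V1 + Zs_est * I1

    # Load-to-source impedance ratio; the stability margin vanishes as it -> 1
    Z_load = V2 / I2
    ratio = abs(Z_load) / abs(Zs_est)
    print(f"estimated E = {E_est:.3f}, Zs = {Zs_est:.3f}")
    print(f"|Z_load|/|Zs| = {ratio:.2f}  (critical value: 1)")
    ```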

  20. Approximation of the Doppler broadening function by Frobenius method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.

    2005-01-01

    An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. The approximation is based on the solution of the differential equation for ψ(x,ξ) using the Frobenius method and the method of variation of parameters. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and resonance self-shielding factors, the latter being used to correct microscopic cross-section measurements obtained by the activation technique. (author)
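    For reference, the function being approximated can also be evaluated directly from its integral definition. The sketch below uses the conventional definition of the Doppler broadening function found in standard resonance-treatment texts, ψ(x,ξ) = ξ/(2√π) ∫ exp(-ξ²(x-y)²/4) / (1+y²) dy, evaluated by numerical quadrature; this is a generic check, not the analytical Frobenius-based form derived in the record.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def psi(x, xi):
        """Doppler broadening function psi(x, xi) by direct numerical quadrature."""
        integrand = lambda y: np.exp(-0.25 * xi**2 * (x - y) ** 2) / (1.0 + y**2)
        value, _ = quad(integrand, -np.inf, np.inf)
        return xi / (2.0 * np.sqrt(np.pi)) * value

    # Sanity check: as xi -> infinity (zero temperature) the Gaussian kernel becomes
    # a delta function and psi(x, xi) tends to the natural line shape 1/(1 + x**2).
    for xi in (0.5, 2.0, 20.0):
        print(f"xi = {xi:5.1f}   psi(0, xi) = {psi(0.0, xi):.4f}   (limit: 1.0000)")
    ```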

  1. Smart density: A more accurate method of measuring rural residential density for health-related research.

    Science.gov (United States)

    Owens, Peter M; Titus-Ernstoff, Linda; Gibson, Lucinda; Beach, Michael L; Beauregard, Sandy; Dalton, Madeline A

    2010-02-12

    Studies involving the built environment have typically relied on US Census data to measure residential density. However, census geographic units are often unsuited to health-related research, especially in rural areas where development is clustered and discontinuous. We evaluated the accuracy of both standard census methods and alternative GIS-based methods to measure rural density. We compared residential density (units/acre) in 335 Vermont school neighborhoods using conventional census geographic units (tract, block group and block) with two GIS buffer measures: a 1-kilometer (km) circle around the school and a 1-km circle intersected with a 100-meter (m) road-network buffer. The accuracy of each method was validated against the actual residential density for each neighborhood based on the Vermont e911 database, which provides an exact geo-location for all residential structures in the state. Standard census measures underestimate residential density in rural areas. In addition, the degree of error is inconsistent so even the relative rank of neighborhood densities varies across census measures. Census measures explain only 61% to 66% of the variation in actual residential density. In contrast, GIS buffer measures explain approximately 90% of the variation. Combining a 1-km circle with a road-network buffer provides the closest approximation of actual residential density. Residential density based on census units can mask clusters of development in rural areas and distort associations between residential density and health-related behaviors and outcomes. GIS-defined buffers, including a 1-km circle and a road-network buffer, can be used in conjunction with census data to obtain a more accurate measure of residential density.

  2. Evaluation of Fresnel's corrections to the eikonal approximation by the separabilization method

    International Nuclear Information System (INIS)

    Musakhanov, M.M.; Zubarev, A.L.

    1975-01-01

    A method of separabilization of the potential over approximate Schroedinger solutions, leading to Schwinger's variational principle for the scattering amplitude, is suggested. The results are applied to the calculation of the Fresnel corrections to the Glauber approximation.

  3. Approximating methods for intractable probabilistic models: Applications in neuroscience

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro

    2002-01-01

    This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...

  4. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)

    2016-04-15

    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.

  5. An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2014-01-01

    Full Text Available We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem in this paper is similar to that of the proximal bundle method, but the difference is that we choose an approximate subgradient and function value to construct an approximate cutting-plane model for the above-mentioned problem. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the exact cutting-plane model. In addition, an approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.

  6. Heat rate curve approximation for power plants without data measuring devices

    Energy Technology Data Exchange (ETDEWEB)

    Poullikkas, Andreas [Electricity Authority of Cyprus, P.O. Box 24506, 1399 Nicosia (CY)]

    2012-07-01

    In this work, a numerical method based on the one-dimensional finite difference technique is proposed for the approximation of the heat rate curve, which can be applied to power plants in which no data acquisition is available. Unlike other methods, in which three or more data points are required for the approximation of the heat rate curve, the proposed method can be applied when heat rate data are available only at the maximum and minimum operating capacities of the power plant. The method is applied to a given power system, for which we calculate the electricity cost using the CAPSE (computer aided power economics) algorithm. Comparisons are made with the least squares method. The results indicate that the proposed method gives accurate results.

  7. Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control

    KAUST Repository

    Elmetennani, Shahrazed

    2014-07-01

    This paper deals with the problem of controlling concentrated solar collectors, where the objective consists of making the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, which was introduced by the authors in [1], is evaluated in comparison with other methods in the literature. The proposed approximation is a low-order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and allows the simplification of the control design. Simulation results show interesting performance of the proposed controller.

  8. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt]

  9. An approximate methods approach to probabilistic structural analysis

    Science.gov (United States)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.

  10. On an efficient and accurate method to integrate restricted three-body orbits

    Science.gov (United States)

    Murison, Marc A.

    1989-01-01

    This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
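    The core of the Bulirsch-Stoer approach, Gragg's modified midpoint rule combined with polynomial extrapolation in the squared substep size, is compact enough to sketch. The following is a hedged, non-adaptive illustration applied to a simple harmonic oscillator rather than the restricted three-body problem; step-size control, regularization, and the manifold correction discussed above are deliberately omitted.

    ```python
    import numpy as np

    def modified_midpoint(f, t, y, H, n):
        """Gragg's modified midpoint rule: advance y over an interval H in n substeps."""
        h = H / n
        z_prev, z = y, y + h * f(t, y)
        for m in range(1, n):
            z_prev, z = z, z_prev + 2.0 * h * f(t + m * h, z)
        return 0.5 * (z + z_prev + h * f(t + H, z))

    def bulirsch_stoer_step(f, t, y, H, k_max=6):
        """One non-adaptive extrapolation step: Neville extrapolation in h**2 of
        modified-midpoint results obtained with n = 2, 4, 6, ... substeps."""
        ns = [2 * (k + 1) for k in range(k_max)]
        T = [modified_midpoint(f, t, y, H, n) for n in ns]
        h2 = [(H / n) ** 2 for n in ns]
        for j in range(1, k_max):
            for i in range(k_max - 1, j - 1, -1):
                T[i] = T[i] + (T[i] - T[i - 1]) * h2[i] / (h2[i - j] - h2[i])
        return T[-1]

    # Test problem: harmonic oscillator y'' = -y as a first-order system
    f = lambda t, y: np.array([y[1], -y[0]])
    y, t, H = np.array([1.0, 0.0]), 0.0, np.pi / 6.0   # 12 steps cover one period
    for _ in range(12):
        y = bulirsch_stoer_step(f, t, y, H)
        t += H
    print("error after one period:", np.abs(y - np.array([1.0, 0.0])))
    ```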

  11. Modeling C-band single scattering properties of hydrometeors using discrete-dipole approximation and T-matrix method

    International Nuclear Information System (INIS)

    Tyynelae, Jani; Nousiainen, Timo; Goeke, Sabine; Muinonen, Karri

    2009-01-01

    We study the applicability of the discrete-dipole approximation (DDA) by modeling centimeter-wavelength (C-band) radar echoes for hydrometeors, and compare the results to exact theories. We use ice and water particles of various shapes with varying water content to investigate how the backscattering, extinction, and absorption cross sections change as a function of particle radius. We also compute radar parameters, such as the differential reflectivity, the linear depolarization ratio, and the copolarized correlation coefficient. We find that DDA models pure ice and pure water particles at the C-band much more accurately than particles containing both ice and water. For coated particles, a large grid size is recommended so that the coating is modeled adequately. We also find that the absorption cross section is significantly less accurate than the scattering and backscattering cross sections. The accuracy of DDA can be increased by increasing the number of dipoles, but also by using the filtered coupled dipole option for the polarizability. This halved the relative errors in the cross sections.

  12. A hybrid method for accurate star tracking using star sensor and gyros.

    Science.gov (United States)

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.

  13. Adaptive ACMS: A robust localized Approximated Component Mode Synthesis Method

    OpenAIRE

    Madureira, Alexandre L.; Sarkis, Marcus

    2017-01-01

    We consider finite element methods of multiscale type to approximate solutions of two-dimensional symmetric elliptic partial differential equations with heterogeneous $L^\\infty$ coefficients. The methods are of Galerkin type and follow the Variational Multiscale and Localized Orthogonal Decomposition (LOD) approaches in the sense that they decouple spaces into multiscale and fine subspaces. In a first method, the multiscale basis functions are obtained by mapping coarse basis functions, based...

  14. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  15. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Nam Zin

    2016-01-01

    The code gives inaccurate results for nuclides needed in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations in terms of accuracy and computing time. In the two-block decomposition method, the system of Bateman equations is decomposed, according to the magnitude of the effective decay constants, into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.

  16. Comparison of Two-Block Decomposition Method and Chebyshev Rational Approximation Method for Depletion Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hee; Cho, Nam Zin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The code gives inaccurate results for nuclides needed in source term analysis, e.g., Sr-90, Ba-137m, Cs-137, etc. A Krylov subspace method was suggested by Yamamoto et al. The method is based on the projection of the solution space of the Bateman equation onto a lower-dimensional Krylov subspace. It showed good accuracy in detailed burnup chain calculations if the dimension of the Krylov subspace is high enough. In this paper, the two-block decomposition (TBD) method and the Chebyshev rational approximation method (CRAM) are compared in depletion calculations in terms of accuracy and computing time. In the two-block decomposition method, the system of Bateman equations is decomposed, according to the magnitude of the effective decay constants, into short- and long-lived blocks. The short-lived block is calculated by the general Bateman solution and the importance concept. A matrix exponential with smaller norm is used in the long-lived block. In the Chebyshev rational approximation, there is no decomposition of the Bateman equation system, and the accuracy of the calculation is determined by the order of expansion in the partial fraction decomposition of the rational form. The coefficients in the partial fraction decomposition are determined by a Remez-type algorithm.
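    Both records ultimately evaluate the solution of the Bateman depletion system dn/dt = A n through a matrix exponential; CRAM does so via a rational approximation whose partial-fraction coefficients come from published tables that are not reproduced here. As a minimal, hedged illustration of the structure only, the sketch below builds a three-nuclide decay chain, applies SciPy's general-purpose expm as a stand-in for the rational evaluation, and checks the result against the classical Bateman solution; the decay constants are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Three-nuclide chain 1 -> 2 -> 3 (nuclide 3 stable); illustrative decay constants
    lam1, lam2 = 2.0, 0.5
    A = np.array([[-lam1,   0.0, 0.0],
                  [ lam1, -lam2, 0.0],
                  [  0.0,  lam2, 0.0]])
    n0 = np.array([1.0, 0.0, 0.0])
    t = 3.0

    # Depletion solve n(t) = exp(A t) n0.  A CRAM evaluation would replace expm by a
    # short sum of resolvent solves (A t - theta_k I)^{-1} n0 with tabulated weights.
    n_t = expm(A * t) @ n0

    # Classical Bateman solution for the first two members of the chain
    n1 = np.exp(-lam1 * t)
    n2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    print("matrix exponential:", n_t)
    print("Bateman (nuclides 1 and 2):", n1, n2)
    ```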

  17. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  18. An approximate method for lateral stability analysis of wall-frame ...

    Indian Academy of Sciences (India)

    Initially the stability differential equation of this equivalent sandwich beam is ... buckling loads of coupled shear-wall structures using continuous medium ... In this study, an approximate method based on continuum system model and transfer.

  19. Low rank approximation method for efficient Green's function calculation of dissipative quantum transport

    Science.gov (United States)

    Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann

    2013-06-01

    In this work, the low-rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) very good agreement between approximate and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (speed-up factors as high as 150 are observed). A non-recursive solution of the inelastic NEGF transport equations for a 1000 nm long resistor on standard hardware nicely illustrates the capability of this new method.

  20. An accurate and nondestructive GC method for determination of cocaine on US paper currency.

    Science.gov (United States)

    Zuo, Yuegang; Zhang, Kai; Wu, Jingping; Rego, Christopher; Fritz, John

    2008-07-01

    The presence of cocaine on US paper currency has been known for a long time. Banknotes become contaminated during the exchange, storage, and abuse of cocaine. The analysis of cocaine on various denominations of US banknotes in general circulation can provide law enforcement and forensic epidemiologists with objective and timely information on the epidemiology of illicit drug use and on how to differentiate money contaminated in general circulation from banknotes used in drug transactions. A simple, nondestructive, and accurate capillary gas chromatographic method has been developed in this study for the determination of cocaine on various denominations of US banknotes. The method comprises a fast ultrasonic extraction using water as a solvent, followed by an SPE cleanup process with a C(18) cartridge and capillary GC separation, identification, and quantification. This nondestructive analytical method has been successfully applied to determine cocaine contamination in US paper currency of all denominations. The standard calibration curve was linear over the concentration range from the LOQ (2.00 ng/mL) to 100 microg/mL, with an RSD of less than 2.0%. Cocaine was detected in 67% of the circulated banknotes collected in Southeastern Massachusetts, in amounts ranging from approximately 2 ng to 49.4 microg per note. On average, $5, 10, 20, and 50 denominations contained higher amounts of cocaine than $1 and 100 denominations of US banknotes.

  1. Picard Approximation of Stochastic Differential Equations and Application to LIBOR Models

    DEFF Research Database (Denmark)

    Papapantoleon, Antonis; Skovmand, David

    The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in LIBOR market models. Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates but the methods are generally slow. Our...... exponential to quadratic using truncated expansions of the product terms. We include numerical illustrations of the accuracy and speed of our method pricing caplets, swaptions and forward rate agreements....

  2. Effect of flux discontinuity on spatial approximations for discrete ordinates methods

    International Nuclear Information System (INIS)

    Duo, J.I.; Azmy, Y.Y.

    2005-01-01

    This work presents advances on error analysis of the spatial approximation of the discrete ordinates method for solving the neutron transport equation. Error norms for different non-collided flux problems over a two-dimensional pure absorber medium are evaluated using three numerical methods. The problems are characterized by the incoming flux boundary conditions to obtain solutions with different levels of differentiability. The three methods considered are the Diamond Difference (DD) method, the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic type (AHOT-C). The last two methods are employed in constant, linear and quadratic orders of spatial approximation. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that the level of differentiability of the exact solution profoundly affects the rate of convergence of the numerical methods' solutions. Furthermore, in the case of discontinuous exact flux the methods fail to converge in the maximum error norm, or in the pointwise sense, in accordance with previous local error analysis. (authors)
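    The error bookkeeping described above is simple to reproduce. Below is a small, hedged sketch computing the discrete L1, L2, and L∞ norms of a cell-wise error on an arbitrary mesh; the mesh and the fluxes are placeholder data, not the benchmark problems of the record.

    ```python
    import numpy as np

    def error_norms(flux_numerical, flux_exact, cell_volumes):
        """Discrete L1, L2, and L-infinity norms of the cell-wise error."""
        e = np.abs(flux_numerical - flux_exact)
        l1 = np.sum(e * cell_volumes)
        l2 = np.sqrt(np.sum(e**2 * cell_volumes))
        linf = np.max(e)
        return l1, l2, linf

    # Placeholder data on a uniform 32 x 32 mesh of the unit square
    nx = ny = 32
    vol = np.full(nx * ny, (1.0 / nx) * (1.0 / ny))
    exact = np.linspace(0.0, 1.0, nx * ny)
    numerical = exact + 1e-3 * np.sin(np.arange(nx * ny))
    print(error_norms(numerical, exact, vol))
    ```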

  3. Laplace transform homotopy perturbation method for the approximation of variational problems.

    Science.gov (United States)

    Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R

    2016-01-01

    This article proposes the application of the Laplace Transform Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for the linear and nonlinear differential equations which arise from some variational problems. As a case study we solve four ordinary differential equations, and we show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual error for the approximate solutions belongs to the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.

  4. Rational function approximation method for discrete ordinates problems in slab geometry

    International Nuclear Information System (INIS)

    Leal, Andre Luiz do C.; Barros, Ricardo C.

    2009-01-01

    In this work we use rational function approaches to obtain the transfer functions that appear in the spectral Green's function (SGF) auxiliary equations for one-speed isotropic scattering SN equations in one-dimensional Cartesian geometry. For this task we compute Padé approximants and compare the results with those of the standard SGF method applied to deep penetration problems in homogeneous domains. This work is a preliminary investigation of a new proposal for handling the leakage terms that appear in the two transverse-integrated one-dimensional SN equations in the exponential SGF method (SGF-ExpN). Numerical results are presented to illustrate the accuracy of the rational function approximation. (author)
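    A Padé approximant itself is easy to construct from Taylor coefficients. The sketch below is a generic, hedged illustration using scipy.interpolate.pade on the series of exp(x); it shows the kind of rational approximation referred to above, not the SGF transfer functions of the record.

    ```python
    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) up to order 5
    coeffs = [1.0 / factorial(k) for k in range(6)]

    # [3/2] Pade approximant: numerator of degree 3, denominator of degree 2
    p, q = pade(coeffs, 2)          # returns poly1d numerator and denominator

    x = 1.5
    print("Pade [3/2] value:", p(x) / q(x), "   exp(x):", np.exp(x))
    ```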

  5. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    Science.gov (United States)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  6. Two reactions method for accurate analysis by irradiation with charged particles

    International Nuclear Information System (INIS)

    Ishii, K.; Sastri, C.S.; Valladon, M.; Borderie, B.; Debrun, J.L.

    1978-01-01

    In the average stopping power method the formula error itself was negligible but systematic errors could be introduced by the stopping power data used in this formula. A method directly derived from the average stopping power method, but based on the use of two nuclear reactions, is described here. This method has a negligible formula error and does not require the use of any stopping power or range data: accurate and 'self-consistent' analysis by irradiation with charged particles is then possible. (Auth.)

  7. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    Science.gov (United States)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  8. Approximate Analytic Solutions for the Two-Phase Stefan Problem Using the Adomian Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Qin

    2014-01-01

    Full Text Available An Adomian decomposition method (ADM) is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation to eliminate the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into the other equations and boundary conditions of the model generates function identities for the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
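    The ADM recursion itself (decompose the solution into a series, expand the nonlinearity into Adomian polynomials, and integrate term by term) can be shown on a much simpler problem. The following SymPy sketch applies the method to the first-order IVP y' = y^2, y(0) = 1, whose exact solution is 1/(1 - x); it is a hedged illustration of the general recursion only, not of the two-phase Stefan problem treated in the record.

    ```python
    import sympy as sp

    x, lam = sp.symbols('x lambda')
    N = 6                                  # number of terms in the decomposition

    # Solve y' = y**2, y(0) = 1 by ADM:
    #   y = sum_k y_k,   y_0 = y(0),   y_{k+1} = Integral(A_k, (x, 0, x)),
    # where A_k are the Adomian polynomials of the nonlinearity N(y) = y**2.
    y_terms = [sp.Integer(1)]              # y_0 from the initial condition
    for k in range(N - 1):
        # A_k = (1/k!) d^k/dlam^k [ (sum_j lam**j y_j)**2 ] evaluated at lam = 0
        y_lam = sum(lam**j * y_terms[j] for j in range(len(y_terms)))
        A_k = sp.diff(y_lam**2, lam, k).subs(lam, 0) / sp.factorial(k)
        y_terms.append(sp.integrate(A_k, (x, 0, x)))

    print(sp.expand(sum(y_terms)))         # 1 + x + x**2 + ...: partial sum of 1/(1-x)
    ```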

  9. The Incorporation of Truncated Fourier Series into Finite Difference Approximations of Structural Stability Equations

    Science.gov (United States)

    Hannah, S. R.; Palazotto, A. N.

    1978-01-01

    A new trigonometric approach to the finite difference calculus was applied to the problem of beam buckling as represented by virtual work and equilibrium equations. The trigonometric functions were varied by adjusting a wavelength parameter in the approximating Fourier series. Values of the critical force obtained from the modified approach for beams with a variety of boundary conditions were compared to results using the conventional finite difference method. The trigonometric approach produced significantly more accurate approximations for the critical force than the conventional approach for a relatively wide range in values of the wavelength parameter; and the optimizing value of the wavelength parameter corresponded to the half-wavelength of the buckled mode shape. It was found from a modal analysis that the most accurate solutions are obtained when the approximating function closely represents the actual displacement function and matches the actual boundary conditions.

  10. Q_N approximation for slowing-down in fast reactors

    International Nuclear Information System (INIS)

    Rocca-Volmerange, Brigitte.

    1976-05-01

    An accurate and simple determination of the neutron energy spectra in fast reactors poses several problems. The slowing-down models (Fermi, Wigner, Goertzel-Greuling...), which are different forms of the approximation of order N=0, may prove inaccurate in spite of recent improvements. A new method of approximation is presented which turns out to be a method of higher order: the Q_N method. It is characterized by a rapid convergence with respect to the order N, by the use of a few global parameters to represent the slowing-down, and by the expression of the Boltzmann integral equation in a differential formalism. Numerous tests verify that, for order N=2 or 3, the method gives precision equivalent to that of multigroup numerical integration of the spectra with greatly reduced computational effort. Furthermore, since the Q_N expressions are a kind of synthesis method, they allow calculation of the spatial Green's function, or the use of collision probabilities to find the flux. Both possibilities have been introduced into existing reactor codes: EXCALIBUR, TRALOR, RE MINEUR... Some applications to multi-zone media (core, blanket, reflector of the Masurca pile and exponential slabs) are presented in the isotropic collision approximation. The case of linearly anisotropic collisions is theoretically resolved. [fr]

  11. Approximate quantum chemical methods for modelling carbohydrate conformation and aromatic interactions: β-cyclodextrin and its adsorption on a single-layer graphene sheet.

    Science.gov (United States)

    Jaiyong, Panichakorn; Bryce, Richard A

    2017-06-14

    Noncovalent functionalization of graphene by carbohydrates such as β-cyclodextrin (βCD) has the potential to improve graphene dispersibility and its use in biomedical applications. Here we explore the ability of approximate quantum chemical methods to accurately model βCD conformation and its interaction with graphene. We find that DFTB3, SCC-DFTB and PM3CARB-1 methods provide the best agreement with density functional theory (DFT) in calculation of relative energetics of gas-phase βCD conformers; however, the remaining NDDO-based approaches we considered underestimate the stability of the trans,gauche vicinal diol conformation. This diol orientation, corresponding to a clockwise hydrogen bonding arrangement in the glucosyl residue of βCD, is present in the lowest energy βCD conformer. Consequently, for adsorption on graphene of clockwise or counterclockwise hydrogen bonded forms of βCD, calculated with respect to this unbound conformer, the DFTB3 method provides closer agreement with DFT values than PM7 and PM6-DH2 approaches. These findings suggest approximate quantum chemical methods as potentially useful tools to guide the design of carbohydrate-graphene interactions, but also highlights the specific challenge to NDDO-based methods in capturing the relative energetics of carbohydrate hydrogen bond networks.

  12. Molecular Excitation Energies from Time-Dependent Density Functional Theory Employing Random-Phase Approximation Hessians with Exact Exchange.

    Science.gov (United States)

    Heßelmann, Andreas

    2015-04-14

    Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.

  13. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements, along with a modernization based on expanding the main structures of the core algorithm with respect to a parameter, is suggested. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem. The essence of this special problem is as follows: starting from the terminal phase point, a trajectory must be found that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is equal to zero; otherwise it is greater than zero. For this special problem the Bellman equation is considered, and a supporting approximation of the Bellman equation is selected. The Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation yields nothing, because the Bellman function and its expansion coefficients are zero. A special device is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, which yields an expanded version of the original chain. A nonzero initial condition is chosen for the new variable, so the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which admits a non-trivial approximation. As a result of these procedures, successive-improvement algorithms are designed. Relaxation conditions for the algorithms, as well as necessary optimality conditions, are also obtained.

  14. Approximate Bayesian evaluations of measurement uncertainty

    Science.gov (United States)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
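    The flavor of such approximations (numerical optimization plus simple algebra in place of integration or sampling) can be sketched with a generic Laplace approximation: locate the posterior mode by optimization and take the reciprocal of the curvature there as the variance, i.e. the squared standard uncertainty. The model, data, and prior below are illustrative assumptions and do not come from the GUM or from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Repeated indications of a measurand (illustrative) and a Gaussian prior
    y = np.array([10.12, 10.08, 10.15, 10.11, 10.09])
    sigma = 0.05                    # known standard deviation of the indications
    mu0, tau0 = 10.0, 0.2           # prior mean and prior standard deviation

    def neg_log_posterior(mu):
        likelihood = 0.5 * np.sum((y - mu) ** 2) / sigma**2
        prior = 0.5 * (mu - mu0) ** 2 / tau0**2
        return likelihood + prior

    # Laplace approximation: mode by optimization, uncertainty from the curvature
    # (second derivative) of the negative log posterior at the mode.
    mu_map = minimize_scalar(neg_log_posterior).x
    h = 1e-4
    curv = (neg_log_posterior(mu_map + h) - 2.0 * neg_log_posterior(mu_map)
            + neg_log_posterior(mu_map - h)) / h**2
    u = 1.0 / np.sqrt(curv)
    print(f"estimate = {mu_map:.4f}, standard uncertainty = {u:.4f}")

    # For this conjugate Gaussian model the approximation matches the exact posterior:
    print("exact posterior standard deviation:",
          1.0 / np.sqrt(len(y) / sigma**2 + 1.0 / tau0**2))
    ```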

  15. The generalized gradient approximation in solids and molecules

    International Nuclear Information System (INIS)

    Haas, P.

    2010-01-01

    Today, most theoretical calculations of the electronic structure of molecules, surfaces and solids are based on density functional theory (DFT) and the resulting Kohn-Sham equations. Unfortunately, the exact analytical expression for the exchange-correlation functional is not known and has to be approximated. The reliability of such a Kohn-Sham calculation depends on i) the numerical accuracy and ii) the approximation used for the exchange-correlation energy. To solve the Kohn-Sham equations, the WIEN2k code, which is one of the most accurate methods for solid-state calculations, is used. The search for better approximations for the exchange-correlation energy is an intense field of research in chemistry and physics. The main objectives of the dissertation are the development, implementation and testing of advanced exchange-correlation functionals and the analysis of existing functionals. The focus of this work is on GGA functionals. Such GGA functionals are still the most widely used functionals, in particular because they are easy to implement and require little computational effort. Several recent studies have shown that an improvement of the GGA should be possible. A detailed analysis of the results will allow us to understand why a particular GGA approximation works better for one class of elements (compounds) than for another. (Kancsar) [de]

  16. The Validity of a Paraxial Approximation in the Simulation of Laser Plasma Interactions

    International Nuclear Information System (INIS)

    Hyole, E. M.

    2000-01-01

    The design of high-power lasers such as those used for inertial confinement fusion demands accurate modeling of the interaction between lasers and plasmas. In inertial confinement fusion, initial laser pulses ablate material from the hohlraum, which contains the target, creating a plasma. Plasma density variations due to plasma motion, ablating material, and the ponderomotive force exerted by the laser on the plasma disrupt smooth laser propagation, undesirably focusing and scattering the light. Accurate and efficient computational simulations aid immensely in developing an understanding of these effects. In this paper, we compare the accuracy of two methods for calculating the propagation of laser light through plasmas. A full laser-plasma simulation typically consists of a fluid model for the plasma motion and a laser propagation model. These two pieces interact with each other as follows. First, given the plasma density, one propagates the laser with a refractive index determined by this density. Then, given the laser intensities, the calculation of one time step of the plasma motion provides a new density for the laser propagation. Because this procedure repeats over many time steps, each piece must be performed accurately and efficiently. In general, calculation of the light intensities necessitates the solution of the Helmholtz equation with a variable index of refraction. The Helmholtz equation becomes extremely difficult and time-consuming to solve as the problem size increases. The size of laser-plasma problems of present interest far exceeds current capabilities. To avoid solving the full Helmholtz equation one may use a paraxial approximation. Generally speaking, the paraxial approximation applies when one expects negligible backscattering of the light and only mild scattering transverse to the direction of light propagation. This approximation results in a differential equation that is first-order in the propagation direction and that can be integrated
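    The paraxial (one-way) propagation referred to above can be illustrated in its simplest free-space form, where the first-order-in-z equation is integrated exactly in Fourier space step by step. The sketch below propagates a Gaussian beam and compares its spreading with the analytic Gaussian-beam result; the coupling to the plasma refractive index and to the hydrodynamics is omitted, and the grid and beam parameters are illustrative assumptions.

    ```python
    import numpy as np

    # Transverse grid and beam parameters (illustrative)
    nx, Lx = 1024, 800e-6                 # grid points, transverse box size [m]
    wavelength = 0.351e-6                 # laser wavelength [m]
    k0 = 2.0 * np.pi / wavelength
    w0 = 30e-6                            # Gaussian beam waist [m]

    x = np.linspace(-Lx / 2, Lx / 2, nx, endpoint=False)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    E = np.exp(-(x / w0) ** 2)            # field envelope at z = 0

    # Paraxial, one-way propagation dE/dz = (i / 2 k0) d^2 E / dx^2,
    # advanced exactly in Fourier space over each step dz.
    z_final, nz = 2e-3, 200
    dz = z_final / nz
    propagator = np.exp(-1j * kx**2 * dz / (2.0 * k0))
    for _ in range(nz):
        E = np.fft.ifft(np.fft.fft(E) * propagator)

    # Compare the numerical beam radius with the analytic Gaussian-beam formula
    w_num = 2.0 * np.sqrt(np.sum(x**2 * np.abs(E) ** 2) / np.sum(np.abs(E) ** 2))
    zR = k0 * w0**2 / 2.0                 # Rayleigh range
    w_exact = w0 * np.sqrt(1.0 + (z_final / zR) ** 2)
    print(f"numerical w(z) = {w_num * 1e6:.2f} um, analytic w(z) = {w_exact * 1e6:.2f} um")
    ```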

  17. A fast GNU method to draw accurate scientific illustrations for taxonomy

    Directory of Open Access Journals (Sweden)

    Giuseppe Montesanto

    2015-07-01

    Full Text Available Nowadays only digital figures are accepted by the most important journals of taxonomy. These may be produced by scanning conventional drawings, made with high-precision technical ink pens, which normally use capillary cartridges and various line widths. Digital drawing techniques that use vector graphics have already been described in the literature to support scientists in drawing figures and plates for scientific illustrations; these techniques use many different software and hardware devices. The present work gives step-by-step instructions on how to make accurate line drawings with a new procedure that uses bitmap graphics with the GNU Image Manipulation Program (GIMP). This method is noteworthy: it is very accurate, producing detailed lines at the highest resolution; the raster lines appear as realistic ink-made drawings; it is faster than the traditional way of making illustrations; everyone can use this simple technique; and the method is completely free, as it does not use expensive and licensed software and can be used with different operating systems. The method has been developed by drawing figures of terrestrial isopods, and some examples are given here.

  18. 'LTE-diffusion approximation' for arc calculations

    International Nuclear Information System (INIS)

    Lowke, J J; Tanaka, M

    2006-01-01

    This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on D_e/W, where D_e is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode

  19. Communication: Random phase approximation renormalized many-body perturbation theory

    International Nuclear Information System (INIS)

    Bates, Jefferson E.; Furche, Filipp

    2013-01-01

    We derive a renormalized many-body perturbation theory (MBPT) starting from the random phase approximation (RPA). This RPA-renormalized perturbation theory extends the scope of single-reference MBPT methods to small-gap systems without significantly increasing the computational cost. The leading correction to RPA, termed the approximate exchange kernel (AXK), substantially improves upon RPA atomization energies and ionization potentials without affecting other properties such as barrier heights where RPA is already accurate. Thus, AXK is more balanced than second-order screened exchange [A. Grüneis et al., J. Chem. Phys. 131, 154115 (2009)], which tends to overcorrect RPA for systems with stronger static correlation. Similarly, AXK avoids the divergence of second-order Møller-Plesset (MP2) theory for small gap systems and delivers a much more consistent performance than MP2 across the periodic table at comparable cost. RPA+AXK thus is an accurate, non-empirical, and robust tool to assess and improve semi-local density functional theory for a wide range of systems previously inaccessible to first-principles electronic structure calculations

  20. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J; Shin, H S; Song, T Y; Park, W S [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    Our previous numerical results on computing the point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first-order approximation works well for estimating variations in the time to reach peak power, because of their linear dependence on the sensitivity parameter, and that there are errors in estimating the peak power in the first-order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  1. Approximate method in estimation sensitivity responses to variations in delayed neutron energy spectra

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Shin, H. S.; Song, T. Y.; Park, W. S. [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    Our previous numerical results on computing the point kinetics equations show the possibility of developing approximations to estimate sensitivity responses of a nuclear reactor. We recalculate sensitivity responses by retaining corrections to first order in the sensitivity parameter. We present a method for computing sensitivity responses of a nuclear reactor based on an approximation derived from the point kinetics equations. Exploiting this approximation, we found that the first-order approximation works well for estimating variations in the time to reach peak power, because of their linear dependence on the sensitivity parameter, and that there are errors in estimating the peak power in the first-order approximation for larger sensitivity parameters. To confirm the legitimacy of our approximation, these approximate results are compared with exact results obtained from our previous numerical study. 4 refs., 2 figs., 3 tabs. (Author)

  2. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation based on unstructured meshes has already become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat-source approximation, called the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme and its corresponding modification for negative source distributions were proposed. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)

  3. Higher accuracy analytical approximations to a nonlinear oscillator with discontinuity by He's homotopy perturbation method

    International Nuclear Information System (INIS)

    Belendez, A.; Hernandez, A.; Belendez, T.; Neipp, C.; Marquez, A.

    2008-01-01

    He's homotopy perturbation method is used to calculate higher-order approximate periodic solutions of a nonlinear oscillator with discontinuity for which the elastic force term is proportional to sgn(x). We find that He's homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. Only one iteration leads to high accuracy of the solutions, with a maximal relative error for the approximate period of less than 1.56% for all values of oscillation amplitude, while this relative error is 0.30% for the second iteration and as low as 0.057% when the third-order approximation is considered. Comparison of the results obtained using this method with those obtained by different harmonic balance methods reveals that He's homotopy perturbation method is very effective and convenient

  4. Comparative analysis of approximations used in the methods of Faddeev equations and hyperspherical harmonics

    International Nuclear Information System (INIS)

    Mukhtarova, M.I.

    1988-01-01

    A comparative analysis of the approximations used in the methods of Faddeev equations and hyperspherical harmonics (MHH) was conducted. The differences in the solutions of these methods, related to the introduction of a finite set of partial states into the three-nucleon problem, are shown. The MHH method is preferred. It is shown that the MHH advantage can be manifested clearly when studying new classes of interactions: three-particle, Δ-isobar, nonlocal and other interactions

  5. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    Full Text Available In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) and remains uniformly accurate for long times, still maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate but much faster than leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  6. Monoenergetic approximation of a polyenergetic beam: a theoretical approach

    International Nuclear Information System (INIS)

    Robinson, D.M.; Scrimger, J.W.

    1991-01-01

    There exist numerous occasions in which it is desirable to approximate the polyenergetic beams employed in radiation therapy by a beam of photons of a single energy. In some instances, commonly used rules of thumb for the selection of an appropriate energy may be valid. A more accurate approximate energy, however, may be determined by an analysis which takes into account both the spectral qualities of the beam and the material through which it passes. The theoretical basis of this method of analysis is presented in this paper. Experimental agreement with theory for a range of materials and beam qualities is also presented and demonstrates the validity of the theoretical approach taken. (author)

  7. Approximate Schur complement preconditioning of the lowest order nodal discretizations

    Energy Technology Data Exchange (ETDEWEB)

    Moulton, J.D.; Ascher, U.M. [Univ. of British Columbia, Vancouver, British Columbia (Canada); Morel, J.E. [Los Alamos National Lab., NM (United States)

    1996-12-31

    Particular classes of nodal methods and mixed hybrid finite element methods lead to equivalent, robust and accurate discretizations of 2nd order elliptic PDEs. However, widespread popularity of these discretizations has been hindered by the awkward linear systems which result. The present work exploits this awkwardness, which provides a natural partitioning of the linear system, by defining two optimal preconditioners based on approximate Schur complements. Central to the optimal performance of these preconditioners is their sparsity structure which is compatible with Dendy's black box multigrid code.

  8. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    International Nuclear Information System (INIS)

    Sin, M. W.; Kim, M. H.

    2002-01-01

    To calculate the total dose effect on semiconductor devices in a satellite effectively over the period of a space mission, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When the approximate methods were applied in this study, the complex structure of the satellite was described as multiple 1-dimensional slabs, the structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit location and structural geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The results from the approximate methods were conservative, with acceptable error. However, the total dose rate for the satellite mission simulation was underestimated compared with the experimental values

  9. Development of approximate shielding calculation method for high energy cosmic radiation on LEO satellites

    Energy Technology Data Exchange (ETDEWEB)

    Sin, M. W.; Kim, M. H. [Kyunghee Univ., Yongin (Korea, Republic of)

    2002-10-01

    To calculate the total dose effect on semiconductor devices in a satellite effectively over the period of a space mission, two approximate calculation models for cosmic radiation shielding were proposed: a sectoring method and a chord-length distribution method. When the approximate methods were applied in this study, the complex structure of the satellite was described as multiple 1-dimensional slabs, the structural materials were converted to a reference material (aluminum), and a pre-calculated dose-depth conversion function was introduced to simplify the calculation process. Verification calculations were performed for the orbit location and structural geometry of KITSAT-1 and compared with detailed 3-dimensional calculation results and experimental values. The results from the approximate methods were conservative, with acceptable error. However, the total dose rate for the satellite mission simulation was underestimated compared with the experimental values.
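
    A minimal sketch of the sectoring idea described in the two records above, with made-up numbers: the geometry around the device is divided into solid-angle sectors, each sector is reduced to an aluminum-equivalent slab thickness, and a pre-calculated dose-depth curve is weighted by the fractional solid angle of each sector. The dose-depth function is a hypothetical fit, not the one used for KITSAT-1.

```python
import numpy as np

def dose_depth(t_mm):
    """Hypothetical dose-depth curve D(t), rad/year behind t mm of aluminum."""
    return 1.0e4 * np.exp(-1.5 * t_mm) + 50.0

# sectors as (fractional solid angle dOmega / 4*pi, Al-equivalent thickness in mm)
sectors = [(0.25, 2.0), (0.25, 4.0), (0.30, 6.0), (0.20, 10.0)]

total_dose = sum(w * dose_depth(t) for w, t in sectors)
print(f"sectoring estimate of the total dose: {total_dose:.1f} rad/year")
```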

  10. A local adaptive method for the numerical approximation in seismic wave modelling

    Directory of Open Access Journals (Sweden)

    Galuzzi Bruno G.

    2017-12-01

    Full Text Available We propose a new numerical approach for the solution of the 2D acoustic wave equation to model the predicted data in the field of active-source seismic inverse problems. This method consists in using an explicit finite difference technique with an adaptive order of approximation of the spatial derivatives that takes into account the local velocity at the grid nodes. Testing our method to simulate the recorded seismograms in a marine seismic acquisition, we found that the low computational time and the low approximation error of the proposed approach make it suitable in the context of seismic inversion problems.

  11. Comparison of approximate methods for multiple scattering in high-energy collisions. II

    International Nuclear Information System (INIS)

    Nolan, A.M.; Tobocman, W.; Werby, M.F.

    1976-01-01

    The scattering in one dimension of a particle by a target of N like particles in a bound state has been studied. The exact result for the transmission probability has been compared with the predictions of the Glauber theory, the Watson optical potential model, and the adiabatic (or fixed scatterer) approximation. Of the approximate methods, the optical potential model is second best. The Watson method is found to work better when the kinematics suggested by Foldy and Walecka are used rather than those suggested by Watson, that is to say, when the two-body kinematics use the nucleon-nucleon reduced mass

  12. Methods of Fourier analysis and approximation theory

    CERN Document Server

    Tikhonov, Sergey

    2016-01-01

    Different facets of the interplay between harmonic analysis and approximation theory are covered in this volume. The topics included are Fourier analysis, function spaces, optimization theory, partial differential equations, and their links to modern developments in approximation theory. The articles of this collection originated from two events. The first event took place during the 9th ISAAC Congress in Krakow, Poland, 5th-9th August 2013, at the section “Approximation Theory and Fourier Analysis”. The second event was the conference on Fourier Analysis and Approximation Theory at the Centre de Recerca Matemàtica (CRM), Barcelona, during 4th-8th November 2013, organized by the editors of this volume. All articles selected to be part of this collection were carefully reviewed.

  13. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  14. Accurate Quasiparticle Spectra from the T-Matrix Self-Energy and the Particle-Particle Random Phase Approximation.

    Science.gov (United States)

    Zhang, Du; Su, Neil Qiang; Yang, Weitao

    2017-07-20

    The GW self-energy, especially G0W0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.

  15. APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL

    Directory of Open Access Journals (Sweden)

    Kasa, Richard

    2015-01-01

    Full Text Available In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes pushed the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences – as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the capability for innovation of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting the innovation performance not only on a macro but also a micro level. In this recent article a critical analysis of the literature on innovation potential approximation and prediction is given, showing their weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.

  16. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    Science.gov (United States)

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
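
    A minimal sketch of the compression step described above, using scikit-learn's randomized SVD on a synthetic low-rank "dictionary"; the real MRF dictionaries, the polynomial fitting step and the in vivo validation are of course not reproduced here.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)

# synthetic dictionary: 20000 fingerprints of length 500 with low-rank structure
n_entries, n_timepoints, true_rank = 20000, 500, 8
D = rng.standard_normal((n_entries, true_rank)) @ \
    rng.standard_normal((true_rank, n_timepoints))

k = 10                                      # retained rank
U, S, Vt = randomized_svd(D, n_components=k, n_iter=5, random_state=0)
D_compressed = U * S                        # (n_entries, k) coefficients

rel_err = np.linalg.norm(D - D_compressed @ Vt) / np.linalg.norm(D)
print(f"relative error of the rank-{k} approximation: {rel_err:.2e}")

# a noisy measured signal is matched in the k-dimensional compressed space
signal = D[1234] + 0.01 * rng.standard_normal(n_timepoints)
coeffs = signal @ Vt.T
scores = (D_compressed @ coeffs) / np.linalg.norm(D_compressed, axis=1)
print("best matching dictionary entry:", int(np.argmax(scores)))
```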

  17. An approximation method for nonlinear integral equations of Hammerstein type

    International Nuclear Information System (INIS)

    Chidume, C.E.; Moore, C.

    1989-05-01

    The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
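
    The record does not spell out the operator or the spaces involved, so this is only a toy illustration of the fixed-point idea on a discretized Hammerstein equation u(x) = f(x) + ∫₀¹ k(x,y) u(y)³ dy, with a kernel chosen small enough that the iteration is a contraction; the error estimates of the paper are not reproduced.

```python
import numpy as np

n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights

f = x.copy()                      # inhomogeneous term f(x) = x
K = 0.2 * np.outer(x, x)          # kernel k(x, y) = 0.2*x*y (small => contraction)

u = f.copy()                      # initial guess
for it in range(1, 101):
    u_new = f + K @ (w * u ** 3)  # Picard (fixed-point) iteration
    if np.max(np.abs(u_new - u)) < 1e-12:
        u = u_new
        break
    u = u_new

residual = np.max(np.abs(u - (f + K @ (w * u ** 3))))
print(f"converged in {it} iterations, residual {residual:.2e}")
```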

  18. Local facet approximation for image stitching

    Science.gov (United States)

    Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun

    2018-01-01

    Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.

  19. An Approximate Redistributed Proximal Bundle Method with Inexact Data for Minimizing Nonsmooth Nonconvex Functions

    Directory of Open Access Journals (Sweden)

    Jie Shen

    2015-01-01

    Full Text Available We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-planes model we construct is not an approximation to the whole nonconvex function, but to the local convexification of the approximate objective function, and this kind of local convexification is modified dynamically in order to always yield nonnegative linearization errors. Since we only employ the approximate function values and approximate subgradients, theoretical convergence analysis shows that an approximate stationary point or some double approximate stationary point can be obtained under some mild conditions.

  20. Higher order analytical approximate solutions to the nonlinear pendulum by He's homotopy method

    International Nuclear Information System (INIS)

    Belendez, A; Pascual, C; Alvarez, M L; Mendez, D I; Yebra, M S; Hernandez, A

    2009-01-01

    A modified He's homotopy perturbation method is used to calculate the periodic solutions of a nonlinear pendulum. The method has been modified by truncating the infinite series corresponding to the first-order approximate solution and substituting a finite number of terms in the second-order linear differential equation. As can be seen, the modified homotopy perturbation method works very well for high values of the initial amplitude. Excellent agreement of the analytical approximate period with the exact period has been demonstrated not only for small but also for large amplitudes A (the relative error is less than 1% for A < 152 deg.). Comparison of the result obtained using this method with the exact ones reveals that this modified method is very effective and convenient.

  1. Self-consistent Random Phase Approximation applied to a schematic model of the field theory; Approximation des phases aleatoires self-consistante appliquee a un modele schematique de la theorie des champs

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, Thierry [Inst. de Physique Nucleaire, Lyon-1 Univ., 69 - Villeurbanne (France)

    1998-12-11

    The self-consistent Random Phase Approximation (SCRPA) is a method allowing, within the mean-field theory, the inclusion of correlations in the ground and excited states. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics show a possible variational character. However, the latter should be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground state energy is reproduced by SCRPA more accurately than by RPA, with no violation of the Ritz variational principle, which is not the case for the latter approximation. The success of SCRPA is the same for the ground state energy of a model mixing bosons and fermions. At the transition point the SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region, in the case of RPA, a spurious mode occurred due to the microscopic character of the model. The SCRPA reproduces this mode very accurately and it actually coincides with an excitation in the exact spectrum. 40 refs., 33 figs., 14 tabs.

  2. Approximation by rational functions as processing method, analysis and transformation of neutron data

    International Nuclear Information System (INIS)

    Gaj, E.V.; Badikov, S.A.; Gusejnov, M.A.; Rabotnov, N.S.

    1988-01-01

    Possible applications of rational functions in the analysis of neutron cross sections, angular distributions and neutron constants generation are described. Results of investigations made in this direction, which have been obtained after the preceding conference in Kiev, are presented: the method of simultaneous treatment of several cross sections for one compound nucleus in the resonance range; the use of the Pade approximation for elastically scattered neutron angular distribution approximation; obtaining of subgroup constants on the basis of rational approximation of cross section functional dependence on dilution cross section; the first experience in function approximation by two variables

  3. Generation, combination and extension of random set approximations to coherent lower and upper probabilities

    International Nuclear Information System (INIS)

    Hall, Jim W.; Lawry, Jonathan

    2004-01-01

    Random set theory provides a convenient mechanism for representing uncertain knowledge including probabilistic and set-based information, and extending it through a function. This paper focuses upon the situation when the available information is in terms of coherent lower and upper probabilities, which are encountered, for example, when a probability distribution is specified by interval parameters. We propose an Iterative Rescaling Method (IRM) for constructing a random set with corresponding belief and plausibility measures that are a close outer approximation to the lower and upper probabilities. The approach is compared with the discrete approximation method of Williamson and Downs (sometimes referred to as the p-box), which generates a closer approximation to lower and upper cumulative probability distributions but in most cases a less accurate approximation to the lower and upper probabilities on the remainder of the power set. Four combination methods are compared by application to example random sets generated using the IRM

  4. Modulated Pade approximant

    International Nuclear Information System (INIS)

    Ginsburg, C.A.

    1980-01-01

    In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x), approximately equal to g(x,A), as x→x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
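
    As a small illustration of resumming a power series with a rational (Padé) approximant — the "modulated" construction of the letter itself is not reproduced — the sketch below builds the [3/3] Padé approximant of ln(1+x) from its Taylor coefficients and compares it with the truncated series at x = 1.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of ln(1+x):  0 + x - x^2/2 + x^3/3 - ...
taylor = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 7)]

p, q = pade(taylor, 3)                  # [3/3] Padé approximant (numerator, denominator)

x = 1.0
exact = np.log(1.0 + x)
series = np.polyval(taylor[::-1], x)    # truncated Taylor sum
rational = p(x) / q(x)                  # Padé value

print(f"exact {exact:.6f}   Taylor(6) {series:.6f}   Pade[3/3] {rational:.6f}")
```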

  5. Accurate Estimation of Low Fundamental Frequencies from Real-Valued Measurements

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2013-01-01

    In this paper, the difficult problem of estimating low fundamental frequencies from real-valued measurements is addressed. The methods commonly employed do not take the phenomena encountered in this scenario into account and thus fail to deliver accurate estimates. The reason for this is that they employ asymptotic approximations that are violated when the harmonics are not well-separated in frequency, something that happens when the observed signal is real-valued and the fundamental frequency is low. To mitigate this, we analyze the problem and present some exact fundamental frequency estimators...

  6. Analytical method comparisons for the accurate determination of PCBs in sediments

    Energy Technology Data Exchange (ETDEWEB)

    Numata, M.; Yarita, T.; Aoyagi, Y.; Yamazaki, M.; Takatsu, A. [National Metrology Institute of Japan, Tsukuba (Japan)

    2004-09-15

    National Metrology Institute of Japan in National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has been developing several matrix reference materials, for example, sediments, water and biological tissues, for the determination of heavy metals and organometallic compounds. The matrix compositions of those certified reference materials (CRMs) are similar to the compositions of actual samples, and they are useful for validating analytical procedures. "Primary methods of measurement" are essential to obtain accurate and SI-traceable certified values for the reference materials, because these methods have the highest quality of measurement. However, inappropriate analytical operations, such as incomplete extraction of analytes or cross-contamination during analytical procedures, will cause errors in the analytical results, even if one of the primary methods, isotope dilution, is utilized. To avoid possible procedural bias in the certification of reference materials, we employ more than two analytical methods which have been optimized beforehand. Because the accurate determination of trace POPs in the environment is important to evaluate their risk, reliable CRMs are required by environmental chemists. Therefore, we have also been preparing matrix CRMs for the determination of POPs. To establish accurate analytical procedures for the certification of POPs, extraction is one of the critical steps, as described above. In general, conventional extraction techniques for the determination of POPs, such as Soxhlet extraction (SOX) and saponification (SAP), have been characterized well and introduced as official methods for environmental analysis. On the other hand, emerging techniques, such as microwave-assisted extraction (MAE), pressurized fluid extraction (PFE) and supercritical fluid extraction (SFE), give higher recovery yields of analytes with relatively short extraction times and small amounts of solvent, by reason of the high

  7. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
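
    The symmetric rank-one (SR1) update mentioned above has a standard closed form; the sketch below applies one SR1 step to a made-up quadratic problem and checks the secant condition B s = y. None of the RBDO machinery of the article is reproduced.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Standard symmetric rank-one update of a Hessian approximation B,
    with s = x_{k+1} - x_k and y = grad_{k+1} - grad_k."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                      # common safeguard: skip a tiny denominator
    return B + np.outer(r, r) / denom

# made-up test: gradients of the quadratic 0.5 * x^T H x
H = np.array([[4.0, 1.0], [1.0, 3.0]])
x0, x1 = np.array([1.0, 1.0]), np.array([0.6, 1.2])
s, y = x1 - x0, H @ (x1 - x0)

B = sr1_update(np.eye(2), s, y)
print("secant condition residual:", np.linalg.norm(B @ s - y))   # ~0
```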

  8. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    Science.gov (United States)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers aiming at stabilizing the transmitted signal power to a specified value, thereby reducing power consumption and interference to other frequency bands. In order to overcome the shortcomings of traditional modes of power control, this paper proposes an accurate signal power detection method by multiplexing the receiver and realizes transmitting power control in the digital domain. The simulation results show that this novel digital power control approach has advantages of small delay, high precision and simplified design procedure. The proposed method is applicable to transceivers working at large frequency dynamic range, and has good engineering practicability.

  9. Approximate solution of the transport equation by methods of Galerkin type

    International Nuclear Information System (INIS)

    Pitkaranta, J.

    1977-01-01

    Questions of the existence, uniqueness, and convergence of approximate solutions of transport equations by methods of the Galerkin type (where trial and weighting functions are the same) are discussed. The results presented do not exclude the infinite-dimensional case. Two strategies can be followed in the variational approximation of the transport operator: one proceeds from the original form of the transport equation, while the other is based on the partially symmetrized equation. Both principles are discussed in this paper. The transport equation is assumed in a discretized multigroup form

  10. Calculating Resonance Positions and Widths Using the Siegert Approximation Method

    Science.gov (United States)

    Rapedius, Kevin

    2011-01-01

    Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…

  11. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
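
    The QM-NBB and nonequilibrium-work machinery of the chapter cannot be reproduced here, but the underlying free-energy-perturbation idea can be shown on a toy system with a known answer: the free energy difference between two one-dimensional harmonic wells estimated with the Zwanzig exponential average, ΔA = −kT ln⟨exp(−ΔU/kT)⟩₀.

```python
import numpy as np

kT = 1.0
k0, k1 = 1.0, 2.0                       # force constants of states 0 and 1
rng = np.random.default_rng(1)

# sample configurations from state 0 (Gaussian with variance kT/k0)
x = rng.normal(0.0, np.sqrt(kT / k0), size=200_000)

dU = 0.5 * (k1 - k0) * x ** 2           # U1(x) - U0(x)
dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))   # Zwanzig estimator

dA_exact = 0.5 * kT * np.log(k1 / k0)   # analytic result for harmonic wells
print(f"FEP estimate {dA_fep:.4f}   exact {dA_exact:.4f}")
```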

  12. Modified method of perturbed stationary states. II. Semiclassical and low-velocity quantal approximations

    International Nuclear Information System (INIS)

    Green, T.A.

    1978-10-01

    For one-electron heteropolar systems, the wave-theoretic Lagrangian of Paper I is simplified in two distinct approximations. The first is semiclassical; the second is quantal, for velocities below those for which the semiclassical treatment is reliable. For each approximation, unitarity and detailed balancing are discussed. Then, the variational method as described by Demkov is used to determine the coupled equations for the radial functions and the Euler-Lagrange equations for the translational factors which are part of the theory. Specific semiclassical formulae for the translational factors are given in a many-state approximation. Low-velocity quantal formulae are obtained in a one-state approximation. The one-state results of both approximations agree with an earlier determination by Riley. 14 references

  13. An Accurate and Impartial Expert Assignment Method for Scientific Project Review

    Directory of Open Access Journals (Sweden)

    Mingliang Yue

    2017-12-01

    Full Text Available Purpose: This paper proposes an expert assignment method for scientific project review that considers both accuracy and impartiality. As impartial and accurate peer review is extremely important to ensure the quality and feasibility of scientific projects, enhanced methods for managing the process are needed. Design/methodology/approach: To ensure both accuracy and impartiality, we design four criteria, the reviewers’ fitness degree, research intensity, academic association, and potential conflict of interest, to express the characteristics of an appropriate peer review expert. We first formalize the expert assignment problem as an optimization problem based on the designed criteria, and then propose a randomized algorithm to solve the expert assignment problem of identifying reviewer adequacy. Findings: Simulation results show that the proposed method is quite accurate and impartial during expert assignment. Research limitations: Although the criteria used in this paper can properly show the characteristics of a good and appropriate peer review expert, more criteria/conditions can be included in the proposed scheme to further enhance accuracy and impartiality of the expert assignment. Practical implications: The proposed method can help project funding agencies (e.g. the National Natural Science Foundation of China) find better experts for project peer review. Originality/value: To the authors’ knowledge, this is the first publication that proposes an algorithm that applies an impartial approach to the project review expert assignment process. The simulation results show the effectiveness of the proposed method.

  14. A stepwise regression tree for nonlinear approximation: applications to estimating subpixel land cover

    Science.gov (United States)

    Huang, C.; Townshend, J.R.G.

    2003-01-01

    A stepwise regression tree (SRT) algorithm was developed for approximating complex nonlinear relationships. Based on the regression tree of Breiman et al. (BRT) and a stepwise linear regression (SLR) method, this algorithm represents an improvement over SLR in that it can approximate nonlinear relationships, and over BRT in that it gives more realistic predictions. The applicability of this method to estimating subpixel forest was demonstrated using three test data sets, on all of which it gave more accurate predictions than SLR and BRT. SRT also generated more compact trees and performed better than or at least as well as BRT at all 10 equal forest proportion intervals ranging from 0 to 100%. This method is appealing for estimating subpixel land cover over large areas.

  15. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chá con-Rebollo, Tomas

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  16. Using gas blow methods to realize accurate volume measurement of radioactivity liquid

    International Nuclear Information System (INIS)

    Zhang Caiyun

    2010-01-01

    For liquids which are radioactive, accurate volume measurement with an uncertainty of less than 0.2% (k=2) was realized by means of the gas blow methods presented in the 'American National Standard - Nuclear Material Control - Volume Calibration Methods (ANSI N15.19-1989)' and the 'ISO Committee Drafts (ISO/TC/85/SC 5N 282)', and a set of data processing methods was explored. In this article, the major problems to be solved are data acquisition, function establishment, and measurement uncertainty estimation. (authors)

  17. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    Science.gov (United States)

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  18. Accurate approximation of the dispersion differential equation of ideal magnetohydrodynamics: The diffuse linear pinch

    International Nuclear Information System (INIS)

    Barnes, D.C.; Cayton, T.E.

    1980-01-01

    The ideal magnetohydrodynamic stability of the diffuse linear pinch is studied in the special case when the poloidal magnetic field component is small compared with the axial field component. A two-term approximation for growth rates is derived by straightforward asymptotic expansion in terms of a small parameter that is proportional to (B_θ/rB_z). Evaluation of the second term in the expansion requires only a trivial amount of additional computation after the leading-order eigenvalue and eigenfunction are determined. For small, but finite, values of the expansion parameter the second term is found to be non-negligible compared with the leading term. The approximate solution is compared with exact solutions and the range of validity of the approximation is investigated. Implications of these results to a wide class of problems involving weakly unstable near theta-pinch configurations are discussed

  19. Higher-order approximate solutions to the relativistic and Duffing-harmonic oscillators by modified He's homotopy methods

    International Nuclear Information System (INIS)

    Belendez, A; Pascual, C; Fernandez, E; Neipp, C; Belendez, T

    2008-01-01

    A modified He's homotopy perturbation method is used to calculate higher-order analytical approximate solutions to the relativistic and Duffing-harmonic oscillators. The He's homotopy perturbation method is modified by truncating the infinite series corresponding to the first-order approximate solution before introducing this solution in the second-order linear differential equation, and so on. We find this modified homotopy perturbation method works very well for the whole range of initial amplitudes, and the excellent agreement of the approximate frequencies and periodic solutions with the exact ones has been demonstrated and discussed. The approximate formulae obtained show excellent agreement with the exact solutions, and are valid for small as well as large amplitudes of oscillation, including the limiting cases of amplitude approaching zero and infinity. For the relativistic oscillator, only one iteration leads to high accuracy of the solutions with a maximal relative error for the approximate frequency of less than 1.6% for small and large values of oscillation amplitude, while this relative error is 0.65% for two iterations with two harmonics and as low as 0.18% when three harmonics are considered in the second approximation. For the Duffing-harmonic oscillator the relative error is as low as 0.078% when the second approximation is considered. Comparison of the result obtained using this method with those obtained by the harmonic balance methods reveals that the former is very effective and convenient

  20. APPROX, 1-D and 2-D Function Approximation by Polynomials, Splines, Finite Elements Method

    International Nuclear Information System (INIS)

    Tollander, Bengt

    1975-01-01

    1 - Nature of physical problem solved: Approximates one- and two-dimensional functions using different forms of the approximating function, such as polynomials, rational functions, Splines and (or) the finite element method. Different kinds of transformations of the dependent and (or) the independent variables can easily be made by data cards using a FORTRAN-like language. 2 - Method of solution: Approximations by polynomials, Splines and (or) the finite element method are made in the L2 norm using the least squares method, by which the answer is directly given. For rational functions in one dimension the result, given in the L∞ norm, is achieved by iterations moving the zero points of the error curve. For rational functions in two dimensions, the norm is L2 and the result is achieved by iteratively changing the coefficients of the denominator and then solving for the coefficients of the numerator by the least squares method. The transformation of the dependent and (or) independent variables is made by compiling the given transform data card(s) to an array of integers from which the transformation can be made

  1. A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

    We consider a simple molecular-type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon) = -\frac{\epsilon^4}{2}\,\frac{\partial^2}{\partial y^2} + h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh-Schrödinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$ such that $\|\Psi_\epsilon\| = O(1)$ and $\|(H(\epsilon) - E_\epsilon)\,\Psi_\epsilon\| \le C\,e^{-\Gamma/\epsilon^2}$ for some $\Gamma > 0$.

  2. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper.

  3. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application on experiments that demand very low measurement uncertainties mainly for certifying reference materials. Simulations of experimental coulometric titration data and consequential error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second derivative technique used currently as end-point detection for potentiometric titrations. Performance of the methods will be compared and presented in this paper
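
    A minimal sketch of the Levenberg-Marquardt approach to end-point location, under simple assumptions: a sigmoidal model is fitted to synthetic potentiometric data with scipy.optimize.curve_fit (which uses the Levenberg-Marquardt algorithm for unconstrained problems), and the fitted inflection point is taken as the end point. The model and data are made up; the coulometric simulations of the two records above are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def titration_curve(v, e0, de, veq, width):
    """Sigmoidal model of the measured potential vs titrant volume."""
    return e0 + de / (1.0 + np.exp(-(v - veq) / width))

# synthetic titration data with noise; the true end point is at 12.50 mL
rng = np.random.default_rng(3)
v = np.linspace(10.0, 15.0, 60)
emf = titration_curve(v, 200.0, 350.0, 12.50, 0.15) + rng.normal(0.0, 2.0, v.size)

popt, pcov = curve_fit(titration_curve, v, emf, p0=[150.0, 300.0, 12.0, 0.2])
veq, veq_err = popt[2], np.sqrt(pcov[2, 2])
print(f"end point: {veq:.3f} +/- {veq_err:.3f} mL")
```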

  4. Approximate solution of generalized Ginzburg-Landau-Higgs system via homotopy perturbation method

    Energy Technology Data Exchange (ETDEWEB)

    Lu Juhong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Dept. of Information Engineering, Coll. of Lishui Professional Tech., Zhejiang (China); Zheng Chunlong [School of Physics and Electromechanical Engineering, Shaoguan Univ., Guangdong (China); Shanghai Inst. of Applied Mathematics and Mechanics, Shanghai Univ., SH (China)

    2010-04-15

    Using the homotopy perturbation method, a class of nonlinear generalized Ginzburg-Landau-Higgs systems (GGLH) is considered. Firstly, by introducing a homotopic transformation, the nonlinear problem is changed into a system of linear equations. Secondly, by selecting a suitable initial approximation, the approximate solution with arbitrary degree accuracy to the generalized Ginzburg-Landau-Higgs system is derived. Finally, another type of homotopic transformation to the generalized Ginzburg-Landau-Higgs system reported in previous literature is briefly discussed. (orig.)

  5. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    Science.gov (United States)

    Mellas, John

    2016-05-01

    Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, estimated creatinine clearance (Ke) = E/P * sGFR. The method was tested for validity using simulated patients, where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients the accuracy of the method was determined by looking at the following metrics: E/P > 1, E/P < 1, and 0.907 (0.841, 0.973) for < 0.95 ml/min accurately predicted the ability to terminate renal replacement therapy in AKI. Limitations include the need to measure urine volume accurately. Furthermore, the precision of the method requires accurate estimates of sGFR, while a reasonable measure of P is crucial to estimating Ke. The present study provides the
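
    A sketch of the arithmetic described above, under stated assumptions: sGFR is estimated with the 2009 CKD-EPI creatinine equation (the abstract only says "the CKD-EPI formula", so the exact variant is an assumption), E is the measured urine creatinine excretion over the interval, and P is the estimated creatinine production over the same interval; the numbers and the production estimate below are placeholders, not the author's.

```python
def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation, mL/min/1.73 m^2 (assumed variant)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    gfr = (141.0
           * min(scr_mg_dl / kappa, 1.0) ** alpha
           * max(scr_mg_dl / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159
    return gfr

# hypothetical patient and 24 h collection interval
sGFR = ckd_epi_2009(scr_mg_dl=1.0, age=60, female=False)  # "static" GFR at time zero
E = 900.0    # measured urine creatinine excretion over the interval (mg)
P = 1200.0   # estimated creatinine production over the same interval (mg), placeholder

Ke = (E / P) * sGFR          # estimated creatinine clearance in AKI
print(f"sGFR = {sGFR:.1f} mL/min/1.73 m^2,  E/P = {E / P:.2f},  Ke = {Ke:.1f}")
```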

  6. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  7. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  8. Hydrogen atom with a Yukawa potential: Perturbation theory and continued-fractions--Pade approximants at large order

    International Nuclear Information System (INIS)

    Vrscay, E.R.

    1986-01-01

    A simple power-series method is developed to calculate to large order the Rayleigh-Schroedinger perturbation expansions for energy levels of a hydrogen atom with a Yukawa-type screened Coulomb potential. Perturbation series for the 1s, 2s, and 2p levels, shown not to be of the Stieltjes type, are calculated to 100th order. Nevertheless, the poles of the Pade approximants to these series generally avoid the region of the positive real axis 0 < λ < λ*, where λ* represents the coupling constant threshold. As a result, the Pade sums afford accurate approximations to E(λ) in this domain. The continued-fraction representations to these perturbation series have been accurately calculated to large (100th) order and demonstrate a curious "quasioscillatory," but non-Stieltjes, behavior. Accurate values of E(λ) as well as λ* for the 1s, 2s, and 2p levels are reported

  9. Application of the finite-difference approximation to electrostatic problems in gaseous proportional counters

    International Nuclear Information System (INIS)

    Waligorski, M.P.R.; Urbanczyk, K.M.

    1975-01-01

    The basic principles of the finite-difference approximation applied to the solution of electrostatic field distributions in gaseous proportional counters are given. Using this method, complicated two-dimensional electrostatic problems may be solved, taking into account any number of anodes, each with its own radius, and any cathode shape. A general formula for introducing the anode radii into the calculations is derived and a method of obtaining extremely accurate (up to 0.1%) solutions is developed. Several examples of potential and absolute field distributions for single rectangular and multiwire proportional counters are calculated and compared with exact results according to Tomitani, in order to discuss in detail errors of the finite-difference approximation. (author)
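
    A toy version of the calculation described above, assuming a much simpler geometry than the paper's: the potential inside a grounded square cathode with a single anode node held at high voltage is relaxed with the Jacobi form of the finite-difference Laplace equation. The anode-radius correction formula of the paper is not reproduced; the anode is simply pinned to one grid node.

```python
import numpy as np

n = 61                           # grid points per side
V = np.zeros((n, n))             # grounded square cathode: boundary stays at 0 V
anode = (n // 2, n // 2)         # central anode wire pinned to a single node
V[anode] = 2000.0                # anode voltage (volts)

for _ in range(30000):           # Jacobi relaxation of Laplace's equation
    V_new = V.copy()
    V_new[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] +
                                V[1:-1, 2:] + V[1:-1, :-2])
    V_new[anode] = 2000.0        # re-impose the interior Dirichlet condition
    if np.max(np.abs(V_new - V)) < 1e-4:
        V = V_new
        break
    V = V_new

print("potential five cells from the wire:", V[anode[0], anode[1] + 5])
```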

  10. A novel method for the accurate evaluation of Poisson's ratio of soft polymer materials.

    Science.gov (United States)

    Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S; Kang, Dong-Joong; Park, Sungchan; Park, Seonghun

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has an advantage in the accurate evaluation of Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6-47.0% elongations with a tendency to decrease with increasing elongation. The current evaluation method of Poisson's ratio with a simple measurement system can be employed to a real-time automated vision-tracking system which is used to accurately evaluate the material properties of various soft materials.
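
    A sketch of the strain computation described above, with made-up node coordinates: an affine deformation is fitted by least squares to the four vertex displacements, the small-strain tensor is formed from the deformation gradient, and Poisson's ratio is taken from the two principal strains (their negative ratio for uniaxial tension). The optical tracking and the finite-element details of the paper are not reproduced.

```python
import numpy as np

# initial and deformed vertex positions of the tracked quadrilateral (mm), made up
X = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
x = np.array([[0.0, 0.0], [2.2, 0.0], [2.2, 0.956], [0.0, 0.956]])

# least-squares fit of an affine map  x = F X + c
A = np.zeros((8, 6))
b = x.reshape(-1)
A[0::2, 0:2] = X; A[0::2, 2] = 1.0     # rows for the x components
A[1::2, 3:5] = X; A[1::2, 5] = 1.0     # rows for the y components
params, *_ = np.linalg.lstsq(A, b, rcond=None)
F = np.array([[params[0], params[1]], [params[3], params[4]]])

eps = 0.5 * (F + F.T) - np.eye(2)       # small-strain tensor
e_min, e_max = np.linalg.eigvalsh(eps)  # principal strains (ascending order)
nu = -e_min / e_max                     # Poisson's ratio for uniaxial tension
print(f"principal strains {e_min:.4f}, {e_max:.4f}  ->  nu = {nu:.3f}")   # ~0.44
```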

  11. The Pade approximate method for solving problems in plasma kinetic theory

    International Nuclear Information System (INIS)

    Jasperse, J.R.; Basu, B.

    1992-01-01

    The method of Pade approximants has been a powerful tool in solving for the time dependent propagator (Green function) in model quantum field theories. We have developed a modified Pade method which we feel has promise for solving linearized collisional and weakly nonlinear problems in plasma kinetic theory. In order to illustrate the general applicability of the method, in this paper we discuss Pade solutions for the linearized collisional propagator and the collisional dielectric function for a model collisional problem. (author) 3 refs., 2 tabs.

  12. On a novel iterative method to compute polynomial approximations to Bessel functions of the first kind and its connection to the solution of fractional diffusion/diffusion-wave problems

    International Nuclear Information System (INIS)

    Yuste, Santos Bravo; Abad, Enrique

    2011-01-01

    We present an iterative method to obtain approximations to Bessel functions of the first kind J_p(x) (p > -1) via the repeated application of an integral operator to an initial seed function f_0(x). The class of seed functions f_0(x) leading to sets of increasingly accurate approximations f_n(x) is considerably large and includes any polynomial. When the operator is applied once to a polynomial of degree s, it yields a polynomial of degree s + 2, and so the iteration of this operator generates sets of increasingly better polynomial approximations of increasing degree. We focus on the set of polynomial approximations generated from the seed function f_0(x) = 1. This set of polynomials is useful not only for the computation of J_p(x) but also from a physical point of view, as it describes the long-time decay modes of certain fractional diffusion and diffusion-wave problems.

  13. Comparison of matrix exponential methods for fuel burnup calculations

    International Nuclear Information System (INIS)

    Oh, Hyung Suk; Yang, Won Sik

    1999-01-01

    Series expansion methods to compute the exponential of a matrix have been compared by applying them to fuel depletion calculations. Specifically, Taylor, Pade, Chebyshev, and rational Chebyshev approximations have been investigated by approximating the exponentials of burnup matrices by truncated series of each method with the scaling and squaring algorithm. The accuracy and efficiency of these methods have been tested by performing various numerical tests using one thermal reactor and two fast reactor depletion problems. The results indicate that all four series methods are accurate enough to be used for fuel depletion calculations, although the rational Chebyshev approximation is relatively less accurate. They also show that the rational approximations are more efficient than the polynomial approximations. Considering the computational accuracy and efficiency, the Pade approximation appears to be better than the other methods. Its accuracy is better than the rational Chebyshev approximation, while being comparable to the polynomial approximations. On the other hand, its efficiency is better than the polynomial approximations and is similar to the rational Chebyshev approximation. In particular, for fast reactor depletion calculations, it is faster than the polynomial approximations by a factor of ∼ 1.7. (author). 11 refs., 4 figs., 2 tabs
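
    A small sketch of the comparison described above on a made-up three-nuclide chain: the reference solution uses scipy.linalg.expm (a Padé approximation with scaling and squaring), and a plain scaled-and-squared truncated Taylor series is evaluated against it. The actual burnup matrices and the Chebyshev variants of the paper are not reproduced.

```python
import numpy as np
from scipy.linalg import expm

# hypothetical chain N1 -> N2 -> N3 (rates in 1/s), over an irradiation time t (s)
lam1, lam2, t = 1.0e-4, 5.0e-5, 3.0e4
A = np.array([[-lam1,  0.0,  0.0],
              [ lam1, -lam2, 0.0],
              [ 0.0,   lam2, 0.0]]) * t
N0 = np.array([1.0, 0.0, 0.0])

def expm_taylor(M, order=20):
    """Truncated Taylor series with scaling and squaring."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))))
    Ms, E, term = M / 2 ** s, np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, order + 1):
        term = term @ Ms / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

N_pade = expm(A) @ N0                 # SciPy: Padé with scaling and squaring
N_taylor = expm_taylor(A) @ N0
print(N_pade, np.max(np.abs(N_pade - N_taylor)))
```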

  14. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  15. Self-consistent Random Phase Approximation applied to a schematic model of the field theory

    International Nuclear Information System (INIS)

    Bertrand, Thierry

    1998-01-01

    The self-consistent Random Phase Approximation (SCRPA) is a method allowing, within the mean-field theory, the inclusion of correlations in the ground and excited states. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics show a possible variational character. However, the latter should be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground state energy is reproduced by SCRPA more accurately than by RPA, with no violation of the Ritz variational principle, which is not the case for the latter approximation. The success of SCRPA is the same for the ground state energy of a model mixing bosons and fermions. At the transition point the SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region, in the case of RPA, a spurious mode occurred due to the microscopic character of the model. The SCRPA reproduces this mode very accurately and it actually coincides with an excitation in the exact spectrum

  16. Perturbation methods and closure approximations in nonlinear systems

    International Nuclear Information System (INIS)

    Dubin, D.H.E.

    1984-01-01

    In the first section of this thesis, Hamiltonian theories of guiding center and gyro-center motion are developed using modern symplectic methods and Lie transformations. Littlejohn's techniques, combined with the theory of resonant interaction and island overlap, are used to explore the problem of adiabatic invariance and onset of stochasticity. As an example, the breakdown of invariance due to resonance between drift motion and gyromotion in a tokamak is considered. A Hamiltonian is developed for motion in a straight magnetic field with electrostatic perturbations in the gyrokinetic ordering, from which nonlinear gyrokinetic equations are constructed which have the property of phase-space preservation, useful for computer simulation. Energy invariants are found and various limits of the equations are considered. In the second section, statistical closure theories are applied to simple dynamical systems. The logistic map is used as an example because of its universal properties and simple quadratic nonlinearity. The first closure considered is the direct interaction approximation of Kraichnan, which is found to fail when applied to the logistic map because it cannot approximate the bounded support of the map's equilibrium distribution. By imposing a periodicity constraint on a Langevin form of the DIA, a new stable closure is developed

  17. Balancing Exchange Mixing in Density-Functional Approximations for Iron Porphyrin.

    Science.gov (United States)

    Berryman, Victoria E J; Boyd, Russell J; Johnson, Erin R

    2015-07-14

    Predicting the correct ground-state multiplicity for iron(II) porphyrin, a high-spin quintet, remains a significant challenge for electronic-structure methods, including commonly employed density functionals. An even greater challenge for these methods is correctly predicting favorable binding of O2 to iron(II) porphyrin, due to the open-shell singlet character of the adduct. In this work, the performance of a modest set of contemporary density-functional approximations is assessed and the results interpreted using Bader delocalization indices. It is found that inclusion of greater proportions of Hartree-Fock exchange, in hybrid or range-separated hybrid functionals, has opposing effects; it improves the ability of the functional to identify the ground state but is detrimental to predicting favorable dioxygen binding. Because these two properties place conflicting demands on the exchange mixing, accurate prediction of both the relative spin-state energies and the O2 binding enthalpy eludes conventional density-functional approximations.

  18. Design of A Cyclone Separator Using Approximation Method

    Science.gov (United States)

    Sin, Bong-Su; Choi, Ji-Won; Lee, Kwon-Hee

    2017-12-01

    A separator is a device installed in industrial applications to separate mixed substances. The separator of interest in this research is a cyclone type, which is used to separate a steam-brine mixture in a geothermal plant. The most important performance measure of the cyclone separator is the collection efficiency. In this study, the collection efficiency is predicted by CFD (Computational Fluid Dynamics) analysis. This research defines six shape design variables to maximize the collection efficiency; thus, the collection efficiency is set up as the objective function in the optimization process. Since the CFD analysis requires a large amount of computation time, it is impractical to obtain the optimal solution by coupling it directly to a gradient-based optimization algorithm. Thus, two approximation methods are introduced to obtain an optimum design. In this process, an L18 orthogonal array is adopted as the DOE method, and kriging interpolation is used to generate the metamodel for the collection efficiency. Based on the 18 analysis results, the relative importance of each variable to the collection efficiency is obtained through ANOVA (analysis of variance). The final design is suggested considering the results obtained from the two optimization methods. The fluid flow analysis of the cyclone separator is conducted using the commercial CFD software ANSYS-CFX.
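
    The kriging (metamodel) step can be sketched as below. The collection_efficiency function is a hypothetical stand-in for the ANSYS-CFX runs, and the 18 random samples merely play the role of an L18 orthogonal array; the paper's actual design variables, DOE table, and CFD responses are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical stand-in for the CFD-predicted collection efficiency;
# in the paper each sample would come from an ANSYS-CFX run.
def collection_efficiency(x):
    d_vortex, h_inlet = x[..., 0], x[..., 1]
    return 0.9 - 0.3 * (d_vortex - 0.4) ** 2 - 0.2 * (h_inlet - 0.6) ** 2

rng = np.random.default_rng(0)
X_doe = rng.uniform(0.0, 1.0, size=(18, 2))   # 18 DOE samples (stand-in for an L18 array)
y_doe = collection_efficiency(X_doe)

# Kriging metamodel (Gaussian process regression with an RBF correlation model).
kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.3),
                                   normalize_y=True)
kriging.fit(X_doe, y_doe)

# Optimize the cheap metamodel instead of the expensive CFD model.
X_grid = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                              np.linspace(0, 1, 101)), axis=-1).reshape(-1, 2)
y_pred = kriging.predict(X_grid)
print("metamodel optimum at:", X_grid[np.argmax(y_pred)], "value:", y_pred.max())
```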

  19. Approximations to the Probability of Failure in Random Vibration by Integral Equation Methods

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    Close approximations to the first passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first passage probability density function and the distribution function for the time interval spent below a barrier before outcrossing. An integral equation for the probability density function of the time interval is formulated, and adequate approximations for the kernel are suggested. The kernel approximation results in approximate solutions for the probability density function of the time interval, and hence for the first passage probability density. The results of the theory agree well with simulation results for narrow banded processes dominated by a single frequency, as well as for bimodal processes with 2 dominating frequencies in the structural response.

  20. A numeric-analytic method for approximating the chaotic Chen system

    International Nuclear Information System (INIS)

    Mossa Al-sawalha, M.; Noorani, M.S.M.

    2009-01-01

    This paper centers on the application of the differential transformation method (DTM) to the renowned Chen system, which is described as a three-dimensional system of ODEs with quadratic nonlinearities. Numerical comparisons are made between the DTM and the classical fourth-order Runge-Kutta method (RK4). Our work showcases the precision of the DTM as the Chen system transforms from a non-chaotic system to a chaotic one. Since the Lyapunov exponent for this system is much higher than that of other chaotic systems, we highlight the difficulties of the simulations with respect to accuracy. Our investigations reveal that this direct symbolic-numeric scheme is effective and accurate.
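
    A minimal multi-step DTM sketch for the Chen system is given below, assuming the commonly quoted chaotic parameter set a = 35, b = 3, c = 28 and the initial condition (-10, 0, 37); these values, the series order, and the step size are illustrative choices rather than the paper's settings.

```python
import numpy as np

def chen_dtm_step(x0, y0, z0, h, K=15, a=35.0, b=3.0, c=28.0):
    """One multi-step DTM step: build Taylor coefficients up to order K from the
    Chen-system recurrences and evaluate the truncated series at t = h."""
    X = np.zeros(K + 1); Y = np.zeros(K + 1); Z = np.zeros(K + 1)
    X[0], Y[0], Z[0] = x0, y0, z0
    for k in range(K):
        conv_xz = np.dot(X[:k + 1], Z[k::-1])   # sum_l X[l] * Z[k-l]
        conv_xy = np.dot(X[:k + 1], Y[k::-1])   # sum_l X[l] * Y[k-l]
        X[k + 1] = a * (Y[k] - X[k]) / (k + 1)
        Y[k + 1] = ((c - a) * X[k] - conv_xz + c * Y[k]) / (k + 1)
        Z[k + 1] = (conv_xy - b * Z[k]) / (k + 1)
    powers = h ** np.arange(K + 1)
    return X @ powers, Y @ powers, Z @ powers

# Integrate over [0, 5] with small sub-intervals, restarting the series each step.
state, h = (-10.0, 0.0, 37.0), 1.0e-3
for _ in range(5000):
    state = chen_dtm_step(*state, h)
print("state at t = 5:", state)
```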

  1. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    KAUST Repository

    Rodriguez, Jesse M.; Batzoglou, Serafim; Bercovici, Sivan

    2013-01-01

    , accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting

  2. Using the DDA (Discrete Dipole Approximation) Method in Determining the Extinction Cross Section of Black Carbon

    Directory of Open Access Journals (Sweden)

    Skorupski Krzysztof

    2015-03-01

    BC (Black Carbon), which can be found in the atmosphere, is characterized by a large value of the imaginary part of the complex refractive index and, therefore, might have an impact on the global warming effect. Computer simulations are often used to study the interaction of BC with light. One of the methods capable of performing light-scattering simulations for arbitrary shapes is DDA (Discrete Dipole Approximation). In this work its accuracy was estimated with respect to BC structures using the latest stable version of the ADDA (v. 1.2) algorithm. The GMM (Generalized Multiparticle Mie-Solution) code was used as the reference algorithm. The study shows that the number of volume elements (dipoles) is the main parameter that defines the quality of the results; however, the results can be improved by a proper polarizability expression. The most accurate, and least time-consuming, simulations were observed for IGT_SO. When an aggregate consists of particles composed of ca. 750 volume elements (dipoles), the averaged relative extinction error should not exceed ca. 4.5%.

  3. Approximation of the unsteady Brinkman-Forchheimer equations by the pressure stabilization method

    KAUST Repository

    Louaked, Mohammed; Seloula, Nour; Trabelsi, Saber

    2017-01-01

    In this work, we propose and analyze the pressure stabilization method for the unsteady incompressible Brinkman-Forchheimer equations. We present a time discretization scheme which can be used with any consistent finite element space approximation. Second-order error estimate is proven. Some numerical results are also given. © 2017 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2017

  4. Approximation of the unsteady Brinkman-Forchheimer equations by the pressure stabilization method

    KAUST Repository

    Louaked, Mohammed

    2017-07-20

    In this work, we propose and analyze the pressure stabilization method for the unsteady incompressible Brinkman-Forchheimer equations. We present a time discretization scheme which can be used with any consistent finite element space approximation. Second-order error estimate is proven. Some numerical results are also given. © 2017 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2017

  5. Parabolic approximation method for fast magnetosonic wave propagation in tokamaks

    International Nuclear Information System (INIS)

    Phillips, C.K.; Perkins, F.W.; Hwang, D.Q.

    1985-07-01

    Fast magnetosonic wave propagation in a cylindrical tokamak model is studied using a parabolic approximation method in which poloidal variations of the wave field are considered weak in comparison to the radial variations. Diffraction effects, which are ignored by ray tracing methods, are included self-consistently using the parabolic method since continuous representations for the wave electromagnetic fields are computed directly. Numerical results are presented which illustrate the cylindrical convergence of the launched waves into a diffraction-limited focal spot on the cyclotron absorption layer near the magnetic axis for a wide range of plasma confinement parameters

  6. Real-time dynamics of matrix quantum mechanics beyond the classical approximation

    Science.gov (United States)

    Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas

    2018-03-01

    We describe a numerical method which allows one to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. On a simple example of a classically chaotic system with two degrees of freedom we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition which was found for this system in Monte-Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λ_L < 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.

  7. Formic acid hydrolysis/liquid chromatography isotope dilution mass spectrometry: An accurate method for large DNA quantification.

    Science.gov (United States)

    Shibayama, Sachie; Fujii, Shin-Ichiro; Inagaki, Kazumi; Yamazaki, Taichi; Takatsu, Akiko

    2016-10-14

    Liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) with formic acid hydrolysis was established for the accurate quantification of λDNA. The over-decomposition of nucleobases in formic acid hydrolysis was restricted by optimizing the reaction temperature and the reaction time, and accurately corrected by using deoxynucleotides (dNMPs) and isotope-labeled dNMPs as the calibrator and the internal standard, respectively. The present method could quantify λDNA with an expanded uncertainty of 4.6% using 10 fmol of λDNA. The analytical results obtained with the present method were validated by comparison with the results of phosphate-based quantification by inductively coupled plasma-mass spectrometry (ICP-MS). The results showed good agreement with each other. We conclude that the formic acid hydrolysis/LC-IDMS method can quantify λDNA accurately and is promising as the primary method for the certification of DNA as reference material. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Correlation effects beyond coupled cluster singles and doubles approximation through Fock matrix dressing.

    Science.gov (United States)

    Maitra, Rahul; Nakajima, Takahito

    2017-11-28

    We present an accurate single reference coupled cluster theory in which the conventional Fock operator matrix is suitably dressed to simulate the effect of triple and higher excitations within a singles and doubles framework. The dressing thus invoked originates from a second-order perturbative approximation of a similarity transformed Hamiltonian and induces higher rank excitations through local renormalization of individual occupied and unoccupied orbital lines. Such a dressing is able to recover a significant amount of correlation effects beyond the singles and doubles approximation, but with only an economical additional cost that scales as n^5. Due to the inclusion of higher rank excitations via the Fock matrix dressing, this method is a natural improvement over conventional coupled cluster theory with singles and doubles approximation, and this is demonstrated via applications to some challenging systems. This highly promising scheme has a conceptually simple structure which is also easily generalizable to a multi-reference coupled cluster scheme for treating strong degeneracy. We shall demonstrate that this method is a natural lowest order perturbative approximation to the recently developed iterative n-body excitation inclusive coupled cluster singles and doubles scheme [R. Maitra et al., J. Chem. Phys. 147, 074103 (2017)].

  9. Accurate, safe, and rapid method of intraoperative tumor identification for totally laparoscopic distal gastrectomy: injection of mixed fluid of sodium hyaluronate and patent blue.

    Science.gov (United States)

    Nakagawa, Masatoshi; Ehara, Kazuhisa; Ueno, Masaki; Tanaka, Tsuyoshi; Kaida, Sachiko; Udagawa, Harushi

    2014-04-01

    In totally laparoscopic distal gastrectomy, determining the resection line with safe proximal margins is often difficult, particularly for tumors located in a relatively upper area. This is because, in contrast to open surgery, identifying lesions by palpating or opening the stomach is essentially impossible. This study introduces a useful method of tumor identification that is accurate, safe, and rapid. On the operation day, after inducing general anesthesia, a mixture of sodium hyaluronate and patent blue is injected into the submucosal layer of the proximal margin. When resecting the stomach, all marker spots should be on the resected side. In all cases, the proximal margin is examined histologically by using frozen sections during the operation. From October 2009 to September 2011, a prospective study that evaluated this method was performed. A total of 34 patients who underwent totally laparoscopic distal gastrectomy were enrolled in this study. Approximately 5 min was required to complete the procedure. Proximal margins were negative in all cases, and the mean ± standard deviation length of the proximal margin was 23.5 ± 12.8 mm. No side effects, such as allergy, were encountered. As a method of tumor identification for totally laparoscopic distal gastrectomy, this procedure appears accurate, safe, and rapid.

  10. Accurate beacon positioning method for satellite-to-ground optical communication.

    Science.gov (United States)

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

    In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main influencing factors on the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which uses the correlation coefficients obtained by curve fitting of the image data as weights. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for the distortion of the light spot through atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.

  11. A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices

    Science.gov (United States)

    AlQurashi, Ahmed; Selvakumar, C. R.

    2018-06-01

    There has been tremendous growth in the field of integrated circuits (ICs) in the past fifty years. Scaling laws have mandated reductions in both lateral and vertical dimensions and a steady increase in doping densities. Most modern semiconductor devices invariably have heavily doped regions where Fermi-Dirac integrals are required. Several attempts have been devoted to developing analytical approximations for Fermi-Dirac integrals, since numerical computations of Fermi-Dirac integrals are difficult to use in semiconductor device work, although several highly accurate tabulated functions are available. Most of these analytical expressions are not sufficiently suitable for semiconductor device applications due to their poor accuracy, the complicated calculations they require, and difficulties in differentiating and integrating them. A new approximation has been developed for the Fermi-Dirac integral of order 1/2 by using Prony's method and is discussed in this paper. The approximation is accurate enough (Mean Absolute Error (MAE) = 0.38%) and easy enough to be used in semiconductor device equations. The new approximation of the Fermi-Dirac integral is applied to a more generalized Einstein relation, which is an important relation in semiconductor devices.
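
    A numerical reference for the Fermi-Dirac integral of order 1/2 can be obtained by direct quadrature, against which a compact approximation such as the Prony-based one described above could be checked. The sketch below is not the paper's approximation; it assumes the normalization in which F_{1/2}(eta) tends to exp(eta) in the non-degenerate limit (conventions differ).

```python
import numpy as np
from scipy.integrate import quad

def fermi_dirac_half(eta):
    """Fermi-Dirac integral of order 1/2, normalized so that
    F_{1/2}(eta) -> exp(eta) as eta -> -infinity (one common convention)."""
    # Integrand sqrt(x)/(1 + exp(x - eta)), rewritten to avoid overflow at large x.
    integrand = lambda x: np.sqrt(x) * np.exp(eta - x) / (1.0 + np.exp(eta - x))
    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return 2.0 / np.sqrt(np.pi) * value

# Non-degenerate limit check and a degenerate value.
for eta in (-10.0, 0.0, 10.0):
    print(eta, fermi_dirac_half(eta))
# For eta = -10 the result should be close to exp(-10) ~ 4.54e-5;
# for large eta it approaches 4*eta**1.5 / (3*sqrt(pi)).
```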

  12. Subsystem density functional theory with meta-generalized gradient approximation exchange-correlation functionals.

    Science.gov (United States)

    Śmiga, Szymon; Fabiano, Eduardo; Laricchia, Savio; Constantin, Lucian A; Della Sala, Fabio

    2015-04-21

    We analyze the methodology and the performance of subsystem density functional theory (DFT) with meta-generalized gradient approximation (meta-GGA) exchange-correlation functionals for non-bonded molecular systems. Meta-GGA functionals depend on the Kohn-Sham kinetic energy density (KED), which is not known as an explicit functional of the density. Therefore, they cannot be directly applied in subsystem DFT calculations. We propose a Laplacian-level approximation to the KED which overcomes this limitation and provides a simple and accurate way to apply meta-GGA exchange-correlation functionals in subsystem DFT calculations. The so obtained density and energy errors, with respect to the corresponding supermolecular calculations, are comparable with conventional approaches, depending almost exclusively on the approximations in the non-additive kinetic embedding term. An embedding energy error decomposition explains the accuracy of our method.

  13. Construction and accuracy of partial differential equation approximations to the chemical master equation.

    Science.gov (United States)

    Grima, Ramon

    2011-11-01

    The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reactive networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω^(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω^(-(2Q-5)/2), and of skewness accurate to order Ω^(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.

  14. Approximating second-order vector differential operators on distorted meshes in two space dimensions

    International Nuclear Information System (INIS)

    Hermeline, F.

    2008-01-01

    A new finite volume method is presented for approximating second-order vector differential operators in two space dimensions. This method allows distorted triangle or quadrilateral meshes to be used without the numerical results being too much altered. The matrices that need to be inverted are symmetric positive definite; therefore, the most powerful linear solvers can be applied. The method has been tested on a few second-order vector partial differential equations coming from the elasticity and fluid mechanics areas. These numerical experiments show that it is second-order accurate and locking-free. (authors)

  15. Risk approximation in decision making: approximative numeric abilities predict advantageous decisions under objective risk.

    Science.gov (United States)

    Mueller, Silke M; Schiebener, Johannes; Delazer, Margarete; Brand, Matthias

    2018-01-22

    Many decision situations in everyday life involve mathematical considerations. In decisions under objective risk, i.e., when explicit numeric information is available, executive functions and abilities to handle exact numbers and ratios are predictors of objectively advantageous choices. Although still debated, exact numeric abilities, e.g., normative calculation skills, are assumed to be related to approximate number processing skills. The current study investigates the effects of approximative numeric abilities on decision making under objective risk. Participants (N = 153) performed a paradigm measuring number-comparison, quantity-estimation, risk-estimation, and decision-making skills on the basis of rapid dot comparisons. Additionally, a risky decision-making task with exact numeric information was administered, as well as tasks measuring executive functions and exact numeric abilities, e.g., mental calculation and ratio processing skills, were conducted. Approximative numeric abilities significantly predicted advantageous decision making, even beyond the effects of executive functions and exact numeric skills. Especially being able to make accurate risk estimations seemed to contribute to superior choices. We recommend approximation skills and approximate number processing to be subject of future investigations on decision making under risk.

  16. A method for the accurate determination of the polarization of a neutron beam using a polarized 3He spin filter

    International Nuclear Information System (INIS)

    Greene, G.L.; Thompson, A.K.; Dewey, M.S.

    1995-01-01

    A new method for the accurate determination of the degree of polarization of a neutron beam which has been polarized by transmission through a spin-polarized 3He cell is given. The method does not require the use of an analyzer or spin flipper, nor does it require an accurate independent determination of the 3He polarization. The method provides a continuous on-line determination of the neutron polarization. The method may be of use in the accurate determination of correlation coefficients in neutron beta decay which provide a test of the standard model for the electroweak interaction. The method may also provide an accurate procedure for the calibration of polarized 3He targets used in medium and high energy scattering experiments. ((orig.))

  17. New realisation of Preisach model using adaptive polynomial approximation

    Science.gov (United States)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model to describe hysteresis, which can be represented by infinite but countable first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to the samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using the least-squares approximation or an adaptive identification algorithm, opening the possibility of accurately tracking the hysteresis model parameters.
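
    The look-up-table-versus-polynomial tradeoff can be illustrated with a least-squares fit to a single sampled first-order reversal curve. The tanh-shaped data below are synthetic stand-ins for measured FORC samples, and the fit uses plain np.polyfit rather than the adaptive identification algorithm mentioned in the abstract.

```python
import numpy as np

# Synthetic first-order reversal curve samples (stand-in for measured data).
H = np.linspace(-1.0, 1.0, 41)                               # applied field
M = np.tanh(3.0 * H + 0.4) + 0.02 * np.random.default_rng(1).normal(size=H.size)

# Least-squares polynomial emulation of the curve instead of a look-up table.
degree = 7
coeffs = np.polyfit(H, M, degree)
M_fit = np.polyval(coeffs, H)

print("max fit error:", np.max(np.abs(M_fit - M)))
print("stored numbers: table =", H.size * 2, ", polynomial =", coeffs.size)
```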

  18. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  19. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Accurate fault location in an Electric Power Distribution System (EPDS) is important for maintaining system reliability. Several methods have been proposed in the past; however, these methods either prove to be inefficient or depend on the fault type (fault classification), because they require the use of an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line known from the literature.

  20. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  1. Simple Methods to Approximate CPC Shape to Preserve Collection Efficiency

    Directory of Open Access Journals (Sweden)

    David Jafrancesco

    2012-01-01

    The compound parabolic concentrator (CPC) is the most efficient reflective geometry for collecting light to an exit port. However, to allow its actual use in solar plants or photovoltaic concentration systems, a tradeoff between system efficiency and cost reduction, the two key issues for sunlight exploitation, must be found. In this work, we analyze various methods of modelling an approximated CPC that is simpler and more cost-effective than the ideal one while preserving the system efficiency. The ease of manufacturing arises from the use of truncated conic surfaces only, which can be realized by cheap machining techniques. We compare different configurations on the basis of their collection efficiency, evaluated by means of nonsequential ray-tracing software. Moreover, because some configurations are beam dependent and for a closer approximation of a real case, the input beam is simulated as nonsymmetric, with a nonconstant irradiance on the CPC internal surface.

  2. Approximate analytical solution of diffusion equation with fractional time derivative using optimal homotopy analysis method

    Directory of Open Access Journals (Sweden)

    S. Das

    2013-12-01

    In this article, the optimal homotopy analysis method is used to obtain an approximate analytic solution of the time-fractional diffusion equation with a given initial condition. The fractional derivatives are considered in the Caputo sense. Unlike the usual homotopy analysis method, this method contains at most three convergence-control parameters, which provide faster convergence of the solution. The effects of the parameters on the convergence of the approximate series solution, obtained by minimizing the averaged residual error with proper choices of the parameters, are calculated numerically and presented through graphs and tables for different particular cases.

  3. Approximate Dispersion Relations for Waves on Arbitrary Shear Flows

    Science.gov (United States)

    Ellingsen, S. À.; Li, Y.

    2017-12-01

    An approximate dispersion relation is derived and presented for linear surface waves atop a shear current whose magnitude and direction can vary arbitrarily with depth. The approximation, derived to first order of deviation from potential flow, is shown to produce good approximations at all wavelengths for a wide range of naturally occurring shear flows as well as widely used model flows. The relation reduces in many cases to a 3-D generalization of the much used approximation by Skop (1987), developed further by Kirby and Chen (1989), but is shown to be more robust, succeeding in situations where the Kirby and Chen model fails. The two approximations incur the same numerical cost and difficulty. While the Kirby and Chen approximation is excellent for a wide range of currents, the exact criteria for its applicability have not been known. We explain the apparently serendipitous success of the latter and derive proper conditions of applicability for both approximate dispersion relations. Our new model has a greater range of applicability. A second-order approximation is also derived. It greatly improves accuracy, which is shown to be important in difficult cases. It has an advantage over the corresponding second-order expression proposed by Kirby and Chen in that its criterion of accuracy is explicitly known, which is not currently the case for the latter to our knowledge. Our second-order term is also arguably significantly simpler to implement, and more physically transparent, than its sibling due to Kirby and Chen. Plain Language Summary: In order to answer key questions such as how the ocean surface affects the climate, erodes the coastline and transports nutrients, we must understand how waves move. This is not so easy when depth-varying currents are present, as they often are in coastal waters. We have developed a modeling tool for accurately predicting wave properties in such situations, ready for use, for example, in complex oceanographic computer models.

  4. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piece-wise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  5. Big Data Meets Quantum Chemistry Approximations: The Δ-Machine Learning Approach.

    Science.gov (United States)

    Ramakrishnan, Raghunathan; Dral, Pavlo O; Rupp, Matthias; von Lilienfeld, O Anatole

    2015-05-12

    Chemically accurate and comprehensive studies of the virtual space of all possible molecules are severely limited by the computational cost of quantum chemistry. We introduce a composite strategy that adds machine learning corrections to computationally inexpensive approximate legacy quantum methods. After training, highly accurate predictions of enthalpies, free energies, entropies, and electron correlation energies are possible, for significantly larger molecular sets than used for training. For thermochemical properties of up to 16k isomers of C7H10O2 we present numerical evidence that chemical accuracy can be reached. We also predict electron correlation energy in post Hartree-Fock methods, at the computational cost of Hartree-Fock, and we establish a qualitative relationship between molecular entropy and electron correlation. The transferability of our approach is demonstrated, using semiempirical quantum chemistry and machine learning models trained on 1 and 10% of 134k organic molecules, to reproduce enthalpies of all remaining molecules at density functional theory level of accuracy.
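
    The Δ-learning idea itself is easy to sketch: train a regressor on the difference between a cheap baseline and an expensive reference, then add the predicted correction to the cheap result. The one-dimensional toy functions below are illustrative stand-ins for semiempirical and DFT-level energies, not the quantum-chemistry data used in the paper.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Toy stand-ins: a cheap baseline "method" and an expensive reference "method".
def cheap_energy(x):      return np.sin(x)
def expensive_energy(x):  return np.sin(x) + 0.3 * np.sin(3 * x) + 0.05 * x

X_train = rng.uniform(0, 6, size=(200, 1))
delta_train = expensive_energy(X_train[:, 0]) - cheap_energy(X_train[:, 0])

# Learn only the correction (the "delta"), not the full property.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=2.0)
model.fit(X_train, delta_train)

X_test = np.linspace(0, 6, 300).reshape(-1, 1)
pred = cheap_energy(X_test[:, 0]) + model.predict(X_test)
print("max error vs reference:",
      np.max(np.abs(pred - expensive_energy(X_test[:, 0]))))
```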

  6. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  7. Accurate density-functional calculations on large systems: Fullerenes and magnetic clusters

    International Nuclear Information System (INIS)

    Dunlap, B.I.

    1996-01-01

    Efforts to accurately compute all-electron density-functional energies for large molecules and clusters using Gaussian basis sets will be reviewed. The foundation of this effort, variational fitting, will be described, followed by three applications of the method. The first application concerns fullerenes. When first discovered, C60 was computed to be quite unstable relative to the higher fullerenes. In addition to raising questions about the relative abundance of the various fullerenes, this work conflicted with the then state-of-the-art density-functional calculations on crystalline graphite. Now high-accuracy molecular and band-structure calculations are in fairly good agreement. Second, we have used these methods to design transition-metal clusters having the highest magnetic moment by maximizing the symmetry-required degeneracy of the one-electron orbitals. Most recently, we have developed accurate, variational generalized-gradient approximation (GGA) forces for use in geometry optimization of clusters and in molecular-dynamics simulations of friction. The GGA-optimized geometries of a number of large clusters will be given

  8. Minimax rational approximation of the Fermi-Dirac distribution

    Science.gov (United States)

    Moussa, Jonathan E.

    2016-10-01

    Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ε^(-1))) poles to achieve an error tolerance ε at temperature β^(-1) over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
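
    For contrast with the minimax construction, the sketch below evaluates a naive truncation of the Matsubara pole expansion of the Fermi function, f(x) = 1/2 - 2x Σ_{n≥0} 1/(x² + ((2n+1)π)²). It is not the paper's approximation, but it shows how slowly a non-optimized pole set converges, which is exactly what optimized (e.g. minimax) pole placement is designed to avoid.

```python
import numpy as np

def fermi_exact(x):
    # f(x) = 1 / (exp(x) + 1), with x = beta * (E - mu); tanh form avoids overflow.
    return 0.5 * (1.0 - np.tanh(0.5 * np.asarray(x, dtype=float)))

def fermi_pole_sum(x, n_poles=50):
    """Naive truncation of the Matsubara pole expansion of the Fermi function."""
    x = np.asarray(x, dtype=float)
    poles = ((2 * np.arange(n_poles) + 1) * np.pi) ** 2
    s = np.sum(1.0 / (x[..., None] ** 2 + poles), axis=-1)
    return 0.5 - 2.0 * x * s

x = np.linspace(-30.0, 30.0, 7)
for n_poles in (10, 100, 1000):
    err = np.max(np.abs(fermi_pole_sum(x, n_poles) - fermi_exact(x)))
    print(n_poles, "poles -> max error", err)   # error decays only like 1/n_poles
```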

  9. Approximate Dual Averaging Method for Multiagent Saddle-Point Problems with Stochastic Subgradients

    Directory of Open Access Journals (Sweden)

    Deming Yuan

    2014-01-01

    This paper considers the problem of solving a saddle-point problem over a network that consists of multiple interacting agents. The global objective function of the problem is a combination of local convex-concave functions, each of which is only available to one agent. Our main focus is on the case where the projection steps are calculated approximately and the subgradients are corrupted by stochastic noise. We propose an approximate version of the standard dual averaging method and show that the standard convergence rate is preserved, provided that the projection errors decrease at an appropriate rate and the noise is zero-mean with bounded variance.

  10. A fault diagnosis system for PV power station based on global partitioned gradually approximation method

    Science.gov (United States)

    Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.

    2016-08-01

    As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to detect and locate faults in PV panels. The PV array is divided into 16×16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to identify faults, and shading is taken into account to reduce the probability of misjudgment. A fault diagnosis system is designed and implemented with LabVIEW, providing real-time data display, online checking, statistics, real-time prediction, and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis is accurate and the system works well, confirming the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
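
    A block-wise screening step in the spirit of the abstract can be sketched as follows. The simulated currents, the weight-factor formula, and the two thresholds are illustrative assumptions; the paper's actual fault weight factor and shading logic are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated block mean currents (A) for a 16x16 block-partitioned PV array.
currents = rng.normal(loc=8.0, scale=0.1, size=(16, 16))
currents[4, 7] = 3.0          # an injected fault
currents[10:12, 2] = 6.5      # partial shading (should not be flagged as a fault)

array_mean = currents.mean()                        # mean current over the array
weight = (array_mean - currents) / array_mean       # illustrative fault weight factor

FAULT_THRESHOLD = 0.4                               # assumed thresholds, not the paper's
SHADE_THRESHOLD = 0.15

fault_blocks = np.argwhere(weight > FAULT_THRESHOLD)
shaded_blocks = np.argwhere((weight > SHADE_THRESHOLD) & (weight <= FAULT_THRESHOLD))
print("fault blocks:", fault_blocks.tolist())
print("possibly shaded blocks:", shaded_blocks.tolist())
```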

  11. A Novel Method for the Accurate Evaluation of Poisson’s Ratio of Soft Polymer Materials

    Directory of Open Access Journals (Sweden)

    Jae-Hoon Lee

    2013-01-01

    A new method with a simple algorithm was developed to accurately measure Poisson’s ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral from the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from the differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson’s ratio of PVA-H was determined as the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of evaluating Poisson’s ratio accurately despite misalignment between the specimen and the experimental devices. In this study, Poisson’s ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6–47.0% elongations, with a tendency to decrease with increasing elongation. The present evaluation method, combined with a simple measurement system, can be incorporated into a real-time automated vision-tracking system used to accurately evaluate the material properties of various soft materials.
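
    The core computation can be sketched as a least-squares affine fit to the four vertices followed by a principal-strain ratio. The synthetic deformation below (10% stretch, 4.4% lateral contraction, plus a small rotation to mimic misalignment) is illustrative, and the small-strain treatment is an assumption rather than the finite-element formulation used in the paper.

```python
import numpy as np

def poissons_ratio(ref_pts, def_pts):
    """Estimate Poisson's ratio from the 4 corner points of a quadrilateral
    before (ref_pts) and after (def_pts) uniaxial stretching. An affine map
    x' = F x + t is fitted by least squares, the small-strain tensor is formed,
    and nu = -eps_min / eps_max is returned."""
    A = np.hstack([ref_pts, np.ones((4, 1))])            # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(A, def_pts, rcond=None)    # 3x2: [F^T; t]
    F = sol[:2, :].T                                     # deformation gradient
    eps = 0.5 * (F + F.T) - np.eye(2)                    # small-strain tensor
    e_min, e_max = np.linalg.eigvalsh(eps)               # principal strains (ascending)
    return -e_min / e_max

# Synthetic test: 10% stretch in x, 4.4% contraction in y, 1 degree of rotation.
ref = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
F_true = np.array([[1.10, 0.00], [0.00, 0.956]])
theta = np.deg2rad(1.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
deformed = ref @ (R @ F_true).T + np.array([0.01, -0.02])
print("estimated Poisson's ratio:", poissons_ratio(ref, deformed))   # ~0.44
```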

  12. Approximate Analytic and Numerical Solutions to Lane-Emden Equation via Fuzzy Modeling Method

    Directory of Open Access Journals (Sweden)

    De-Gang Wang

    2012-01-01

    A novel algorithm, called the variable weight fuzzy marginal linearization (VWFML) method, is proposed. This method can supply approximate analytic and numerical solutions to Lane-Emden equations, and it is easy to implement and to extend for solving other nonlinear differential equations. Numerical examples are included to demonstrate the validity and applicability of the developed technique.

  13. Calculation of Resonance Interaction Effects Using a Rational Approximation to the Symmetric Resonance Line Shape Function

    International Nuclear Information System (INIS)

    Haeggblom, H.

    1968-08-01

    The method of calculating the resonance interaction effect by series expansions has been studied. Starting from the assumption that the neutron flux in a homogeneous mixture is inversely proportional to the total cross section, the expression for the flux can be simplified by series expansions. Two types of expansions are investigated and it is shown that only one of them is generally applicable. It is also shown that this expansion gives sufficient accuracy if the approximate resonance line shape function is reasonably representative. An investigation is made of the approximation of the resonance shape function with a Gaussian function which in some cases has been used to calculate the interaction effect. It is shown that this approximation is not sufficiently accurate in all cases which can occur in practice. Then, a rational approximation is introduced which in the first order approximation gives the same order of accuracy as a practically exact shape function. The integrations can be made analytically in the complex plane and the method is therefore very fast compared to purely numerical integrations. The method can be applied both to statistically correlated and uncorrelated resonances

  14. Calculation of Resonance Interaction Effects Using a Rational Approximation to the Symmetric Resonance Line Shape Function

    Energy Technology Data Exchange (ETDEWEB)

    Haeggblom, H

    1968-08-15

    The method of calculating the resonance interaction effect by series expansions has been studied. Starting from the assumption that the neutron flux in a homogeneous mixture is inversely proportional to the total cross section, the expression for the flux can be simplified by series expansions. Two types of expansions are investigated and it is shown that only one of them is generally applicable. It is also shown that this expansion gives sufficient accuracy if the approximate resonance line shape function is reasonably representative. An investigation is made of the approximation of the resonance shape function with a Gaussian function which in some cases has been used to calculate the interaction effect. It is shown that this approximation is not sufficiently accurate in all cases which can occur in practice. Then, a rational approximation is introduced which in the first order approximation gives the same order of accuracy as a practically exact shape function. The integrations can be made analytically in the complex plane and the method is therefore very fast compared to purely numerical integrations. The method can be applied both to statistically correlated and uncorrelated resonances.

  15. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded by a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers
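
    A sketch of the underlying computation is given below, using Spencer's Fourier-series expressions for the declination and the eccentricity correction factor (coefficients as commonly quoted) together with the standard formulas for daily extraterrestrial irradiation and day length; the January averaging and the 40 deg. N latitude are illustrative choices, not the tabulation of the record above.

```python
import numpy as np

GSC = 1367.0  # solar constant, W/m^2

def spencer_declination_ecc(day_of_year):
    """Solar declination (rad) and eccentricity correction factor from
    Spencer's Fourier series (coefficients as commonly quoted)."""
    g = 2.0 * np.pi * (day_of_year - 1) / 365.0
    decl = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
            - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
            - 0.002697 * np.cos(3 * g) + 0.001480 * np.sin(3 * g))
    e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
          + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
    return decl, e0

def daily_h0_and_daylength(latitude_deg, day_of_year):
    """Daily extraterrestrial irradiation on a horizontal plane (MJ/m^2)
    and the maximum possible sunshine duration (hours)."""
    phi = np.deg2rad(latitude_deg)
    decl, e0 = spencer_declination_ecc(day_of_year)
    cos_ws = np.clip(-np.tan(phi) * np.tan(decl), -1.0, 1.0)   # handles polar day/night
    ws = np.arccos(cos_ws)                                     # sunset hour angle (rad)
    h0 = (24 * 3600 / np.pi) * GSC * e0 * (np.cos(phi) * np.cos(decl) * np.sin(ws)
                                           + ws * np.sin(phi) * np.sin(decl))
    return h0 / 1.0e6, 24.0 * ws / np.pi

# Monthly average daily values as the mean over the days of the month (January shown).
days_jan = np.arange(1, 32)
h0, n_max = daily_h0_and_daylength(40.0, days_jan)
print("January at 40 N:", h0.mean(), "MJ/m^2 per day,", n_max.mean(), "h max sunshine")
```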

  16. Parente2: a fast and accurate method for detecting identity by descent

    KAUST Repository

    Rodriguez, Jesse M.; Bercovici, Sivan; Huang, Lin; Frostig, Roy; Batzoglou, Serafim

    2014-01-01

    Identity-by-descent (IBD) inference is the problem of establishing a genetic connection between two individuals through a genomic segment that is inherited by both individuals from a recent common ancestor. IBD inference is an important preceding step in a variety of population genomic studies, ranging from demographic studies to linking genomic variation with phenotype and disease. The problem of accurate IBD detection has become increasingly challenging with the availability of large collections of human genotypes and genomes: Given a cohort's size, a quadratic number of pairwise genome comparisons must be performed. Therefore, computation time and the false discovery rate can also scale quadratically. To enable accurate and efficient large-scale IBD detection, we present Parente2, a novel method for detecting IBD segments. Parente2 is based on an embedded log-likelihood ratio and uses a model that accounts for linkage disequilibrium by explicitly modeling haplotype frequencies. Parente2 operates directly on genotype data without the need to phase data prior to IBD inference. We evaluate Parente2's performance through extensive simulations using real data, and we show that it provides substantially higher accuracy compared to previous state-of-the-art methods while maintaining high computational efficiency.

  17. Approximating distributions from moments

    Science.gov (United States)

    Pawula, R. F.

    1987-11-01

    A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.

  18. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
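
    The telescoping MLMC estimator for the exact-sampling case mentioned above can be sketched with a coupled Euler-Maruyama discretization of geometric Brownian motion. The quantity E[X_T], the parameter values, and the per-level sample counts are illustrative assumptions, not an optimized MLMC schedule, and the MCMC/SMC extensions discussed in the article are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm_pair(n_samples, level, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine/coarse Euler-Maruyama paths of dX = mu X dt + sigma X dW.
    Returns samples of P_fine - P_coarse (with P = X_T) for the telescoping sum."""
    n_fine = 2 ** level
    dt = T / n_fine
    dw = rng.normal(scale=np.sqrt(dt), size=(n_samples, n_fine))
    x_f = np.full(n_samples, x0)
    x_c = np.full(n_samples, x0)
    for k in range(n_fine):
        x_f = x_f + mu * x_f * dt + sigma * x_f * dw[:, k]
    if level == 0:
        return x_f                          # coarsest level: plain estimator
    for k in range(0, n_fine, 2):           # coarse path reuses summed Brownian increments
        dw_c = dw[:, k] + dw[:, k + 1]
        x_c = x_c + mu * x_c * (2 * dt) + sigma * x_c * dw_c
    return x_f - x_c

levels = 5
samples = [200000, 50000, 12000, 3000, 800, 200]     # illustrative, decreasing per level
estimate = sum(euler_gbm_pair(samples[l], l).mean() for l in range(levels + 1))
print("MLMC estimate of E[X_T]:", estimate, " exact:", np.exp(0.05))
```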

  19. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  20. Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results

    Science.gov (United States)

    Liu, Li; Mishchenko, Michael I.

    2016-01-01

    We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.
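
    The Maxwell-Garnett mixing rule referred to above is simple to state in code. The refractive indices below (water ≈ 1.33, black carbon ≈ 1.75 + 0.44i) are commonly used illustrative values rather than the exact inputs of the paper, and the rule assumes small spherical inclusions dilute in the host.

```python
import numpy as np

def maxwell_garnett(m_host, m_incl, f):
    """Maxwell-Garnett effective refractive index for a volume fraction f of
    small spherical inclusions (index m_incl) in a host medium (index m_host)."""
    eps_h, eps_i = m_host ** 2, m_incl ** 2
    eps_eff = eps_h * (eps_i + 2 * eps_h + 2 * f * (eps_i - eps_h)) \
                    / (eps_i + 2 * eps_h - f * (eps_i - eps_h))
    return np.sqrt(eps_eff)

m_water = 1.33 + 0.0j          # illustrative visible-wavelength value
m_soot  = 1.75 + 0.44j         # a commonly used black-carbon refractive index
for f in (0.001, 0.01, 0.05):
    print("soot volume fraction", f, "-> effective index", maxwell_garnett(m_water, m_soot, f))
```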

  1. A Monotone, Higher-Order Accurate, Fixed-Grid Finite-Volume Method for Advection Problems with Moving Boundaries

    NARCIS (Netherlands)

    Y.J. Hassen (Yunus); B. Koren (Barry)

    2008-01-01

    In this paper, an accurate method, using a novel immersed-boundary approach, is presented for numerically solving linear, scalar convection problems. As is standard in immersed-boundary methods, moving bodies are embedded in a fixed Cartesian grid. The essence of the present method is

  2. Label inspection of approximate cylinder based on adverse cylinder panorama

    Science.gov (United States)

    Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo

    2013-12-01

    This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspected object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, in which label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all cameras to build a panorama for label inspection. Thirdly, considering the vibration of production lines and electronic signal errors, we design a real-time image registration step to calculate offsets between the template and the inspected images. Experimental results demonstrate that our system is accurate, real-time, and applicable to numerous real-time inspections of approximately cylindrical objects.

  3. Discovering approximate-associated sequence patterns for protein-DNA interactions

    KAUST Repository

    Chan, Tak Ming

    2010-12-30

    Motivation: The bindings between transcription factors (TFs) and transcription factor binding sites (TFBSs) are fundamental protein-DNA interactions in transcriptional regulation. Extensive efforts have been made to better understand these protein-DNA interactions. Recent mining of exact TF-TFBS-associated sequence patterns (rules) has shown great potential and achieved very promising results. However, exact rules cannot handle variations in real data, resulting in limited informative rules. In this article, we generalize the exact rules to approximate ones for both TFs and TFBSs, which is essential for accommodating biological variations. Results: A progressive approach is proposed to address the approximation and alleviate the computational requirements. Firstly, similar TFBSs are grouped from the available TF-TFBS data (TRANSFAC database). Secondly, approximate and highly conserved binding cores are discovered from the TF sequences corresponding to each TFBS group. A customized algorithm is developed for this specific objective. We discover the approximate TF-TFBS rules by associating the grouped TFBS consensuses and TF cores. The discovered rules are evaluated by matching (verifying with) the actual protein-DNA binding pairs from Protein Data Bank (PDB) 3D structures. The approximate results exhibit many more verified rules and up to 300% better verification ratios than the exact ones. The customized algorithm achieves over 73% better verification ratios than traditional methods. Approximate rules (64-79%) are shown to be statistically significant. Detailed variation analysis and conservation verification on NCBI records demonstrate that the approximate rules accurately reveal both the flexible and the specific protein-DNA interactions. The approximate TF-TFBS rules discovered show a great generalized capability for exploring more informative binding rules. © The Author 2010. Published by Oxford University Press. All rights reserved.

  4. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs. CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation and filtering approaches for enhancing computed tomography images. Another

  5. Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise

    Science.gov (United States)

    Groeneboom, N. E.; Dahle, H.

    2014-03-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
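    To make the idea of an intensity profile modulated by a procedural noise field concrete, here is a small, hedged Python sketch: a Sersic-like radial profile multiplied by a smoothed random field. The function name and parameter values are illustrative only, and the smoothed Gaussian field is merely a stand-in for GAMER's actual procedural noise components.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def synthetic_galaxy(n=256, r_e=40.0, sersic_n=2.0, noise_amp=0.4, seed=0):
            """Toy galaxy image: smooth Sersic-like profile times a 'procedural' noise field.
            Illustrative only; this is not GAMER's noise model."""
            y, x = np.mgrid[0:n, 0:n]
            r = np.hypot(x - n / 2, y - n / 2)
            b = 2.0 * sersic_n - 1.0 / 3.0                    # common approximation to b_n
            profile = np.exp(-b * ((r / r_e) ** (1.0 / sersic_n) - 1.0))
            rng = np.random.default_rng(seed)
            noise = gaussian_filter(rng.standard_normal((n, n)), sigma=4.0)  # smoothed random field
            noise = 1.0 + noise_amp * noise / np.abs(noise).max()
            return profile * noise

        img = synthetic_galaxy()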

  6. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    International Nuclear Information System (INIS)

    Groeneboom, N. E.; Dahle, H.

    2014-01-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  7. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    Energy Technology Data Exchange (ETDEWEB)

    Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo (Norway)

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  8. A method to accurately estimate the muscular torques of human wearing exoskeletons by torque sensors.

    Science.gov (United States)

    Hwang, Beomsoo; Jeon, Doyoung

    2015-04-09

    In exoskeletal robots, the quantification of the user's muscular effort is important for recognizing the user's motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users' muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user's limb accurately from the measured torque. The user's limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user's muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions.
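    The separation the abstract describes can be written schematically using the standard rigid-body dynamics terms; this is a generic form, not the paper's exact user-specific model:

        \tau_{\text{muscle}} \;\approx\; \tau_{\text{sensor}} \;-\; \bigl[\, M(q)\,\ddot{q} \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q) \,\bigr],

    where \tau_{\text{sensor}} is the joint torque measurement and M, C and g are the inertia matrix, the Coriolis/centrifugal term and the gravitational torque of the user's limb, whose user-specific parameters the paper identifies experimentally.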

  9. A Method to Accurately Estimate the Muscular Torques of Human Wearing Exoskeletons by Torque Sensors

    Directory of Open Access Journals (Sweden)

    Beomsoo Hwang

    2015-04-01

    Full Text Available In exoskeletal robots, the quantification of the user’s muscular effort is important for recognizing the user’s motion intentions and evaluating motor abilities. In this paper, we attempt to estimate users’ muscular efforts accurately using joint torque sensors, whose measurements contain the dynamic effects of the human body, such as the inertial, Coriolis, and gravitational torques, as well as the torque produced by active muscular effort. It is important to extract the dynamic effects of the user’s limb accurately from the measured torque. The user’s limb dynamics are formulated, and a convenient method for identifying user-specific parameters is suggested for estimating the user’s muscular torque in robotic exoskeletons. Experiments were carried out on a wheelchair-integrated lower limb exoskeleton, EXOwheel, which was equipped with torque sensors in the hip and knee joints. The proposed methods were evaluated on 10 healthy participants during body-weight-supported gait training. The experimental results show that the torque sensors are able to estimate the muscular torque accurately under both relaxed and activated muscle conditions.

  10. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    Science.gov (United States)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.

  11. Methods of Approximation Theory in Complex Analysis and Mathematical Physics

    CERN Document Server

    Saff, Edward

    1993-01-01

    The book incorporates research papers and surveys written by participants of an International Scientific Programme on Approximation Theory jointly supervised by the Institute for Constructive Mathematics of the University of South Florida at Tampa, USA, and the Euler International Mathematical Institute at St. Petersburg, Russia. The aim of the Programme was to present new developments in Constructive Approximation Theory. The topics of the papers are: asymptotic behaviour of orthogonal polynomials, rational approximation of classical functions, quadrature formulas, theory of n-widths, nonlinear approximation in Hardy algebras, numerical results on best polynomial approximations, wavelet analysis. FROM THE CONTENTS: E.A. Rakhmanov: Strong asymptotics for orthogonal polynomials associated with exponential weights on R.- A.L. Levin, E.B. Saff: Exact Convergence Rates for Best Lp Rational Approximation to the Signum Function and for Optimal Quadrature in Hp.- H. Stahl: Uniform Rational Approximation of x.- M. Rahman, S.K. ...

  12. Energy stable and high-order-accurate finite difference methods on staggered grids

    Science.gov (United States)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.
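    For readers who want a concrete picture of a staggered grid, the hedged Python sketch below advances the 1-D acoustic system on staggered pressure and velocity nodes with second-order interior stencils and periodic boundaries. It illustrates the staggering only; the paper's contribution (high-order summation-by-parts operators and the weak boundary/interface treatment) is not reproduced here.

        import numpy as np

        # 1-D acoustic system:  p_t = -K v_x,   v_t = -(1/rho) p_x
        # Pressure lives at x = (i + 1/2) dx, velocity at x = i dx (staggered).
        nx, L, K, rho = 200, 1.0, 1.0, 1.0
        dx = L / nx
        c = np.sqrt(K / rho)
        dt = 0.5 * dx / c                          # CFL-limited time step
        x_p = (np.arange(nx) + 0.5) * dx
        p = np.exp(-200.0 * (x_p - 0.5 * L) ** 2)  # initial pressure pulse
        v = np.zeros(nx)

        for _ in range(400):
            dpdx = (p - np.roll(p, 1)) / dx        # derivative centered at velocity nodes
            v -= dt / rho * dpdx
            dvdx = (np.roll(v, -1) - v) / dx       # derivative centered at pressure nodes
            p -= dt * K * dvdx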

  13. An accurate method for the determination of unlike potential parameters from thermal diffusion data

    International Nuclear Information System (INIS)

    El-Geubeily, S.

    1997-01-01

    A new method is introduced by means of which the unlike intermolecular potential parameters can be determined from experimental measurements of the thermal diffusion factor as a function of temperature. The method proved to be easy, accurate, and applicable to two-, three-, and four-parameter potential functions whose collision integrals are available. The potential parameters computed by this method are found to provide a faithful representation of the thermal diffusion data under consideration. 3 figs., 4 tabs

  14. Adaptive approximation of higher order posterior statistics

    KAUST Repository

    Lee, Wonjung

    2014-02-01

    Filtering is an approach for incorporating observed data into time-evolving systems. Instead of a family of Dirac delta masses that is widely used in Monte Carlo methods, we here use the Wiener chaos expansion for the parametrization of the conditioned probability distribution to solve the nonlinear filtering problem. The Wiener chaos expansion is not the best method for uncertainty propagation without observations. Nevertheless, the projection of the system variables onto a fixed polynomial basis spanning the probability space might be a competitive representation in the presence of relatively frequent observations, because the Wiener chaos approach not only leads to an accurate and efficient prediction for short-time uncertainty quantification, but also allows one to apply several data assimilation methods that can be used to yield a better approximate filtering solution. The aim of the present paper is to investigate this hypothesis. We answer in the affirmative for the (stochastic) Lorenz-63 system, based on numerical simulations in which the uncertainty quantification method and the data assimilation method are adaptively selected according to whether the dynamics is driven by Brownian motion and according to the near-Gaussianity of the measure to be updated, respectively. © 2013 Elsevier Inc.

  15. New finite volume methods for approximating partial differential equations on arbitrary meshes

    International Nuclear Information System (INIS)

    Hermeline, F.

    2008-12-01

    This dissertation presents some new methods of finite volume type for approximating partial differential equations on arbitrary meshes. The main idea lies in solving the problem to be dealt with twice. The methods address elliptic equations with variable (anisotropic, antisymmetric, discontinuous) coefficients, linear and nonlinear parabolic equations (heat equation, radiative diffusion, magnetic diffusion with Hall effect), wave-type equations (Maxwell, acoustics), and the elasticity and Stokes' equations. Numerous numerical experiments show the good behaviour of this type of method. (author)

  16. SET: A Pupil Detection Method Using Sinusoidal Approximation

    Directory of Open Access Journals (Sweden)

    Amir-Homayoun eJavadi

    2015-04-01

    Full Text Available Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as ‘SET’) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (‘Natural’) and images of less challenging indoor scenes (‘CASIA-Iris-Thousand’). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (‘DLL’), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).

  17. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    Science.gov (United States)

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve the SE and DE for atoms and molecules that include many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter, simpler idea led to immediate and surprisingly accurate solutions for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement functions.
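    For orientation, the scaled Schrödinger equation and the simplest free-complement (free-ICI) recursion described in this line of work can be written schematically as follows; this is a sketch of the published form, where g denotes a positive scaling function chosen to remove the Coulomb singularities and C_n is a variational coefficient:

        g\,(H - E)\,\psi = 0,
        \qquad
        \psi_{n+1} \;=\; \bigl[\,1 + C_n\, g\,(H - E_n)\,\bigr]\,\psi_n,

    with the independent terms generated by expanding \psi_n serving as the complement functions whose coefficients are then determined variationally.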

  18. The place of highly accurate methods by RNAA in metrology

    International Nuclear Information System (INIS)

    Dybczynski, R.; Danko, B.; Polkowska-Motrenko, H.; Samczynski, Z.

    2006-01-01

    With the introduction of physical metrological concepts to chemical analysis, which require that the result should be accompanied by an uncertainty statement written down in terms of SI units, several researchers started to consider ID-MS as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results whose uncertainty statements did not overlap, which theoretically should not happen. This shows that no monopoly is good in science, and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need to search for other methods of metrological quality similar to that of ID-MS seems obvious. In this paper, our long-term experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods, based on the combination of neutron activation with highly selective and quantitative isolation of the indicator radionuclide by column chromatography followed by gamma spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units

  19. Approximation methods for the stability analysis of complete synchronization on duplex networks

    Science.gov (United States)

    Han, Wenchen; Yang, Junzhong

    2018-01-01

    Recently, synchronization on multi-layer networks has drawn a lot of attention. In this work, we study the stability of complete synchronization on duplex networks and investigate the effects of the coupling function on it. We propose two approximation methods to deal with the stability of complete synchronization on duplex networks. In the first method, we introduce a modified master stability function and, in the second method, we only take into consideration the contributions of the few most unstable transverse modes to the stability of complete synchronization. We find that both methods work well for predicting the stability of complete synchronization for small networks. For large networks, the second method still works pretty well.

  20. An approximate inversion method of geoelectrical sounding data using linear and bayesian statistical approaches. Examples of Tritrivakely volcanic lake and Mahitsy area (central part of Madagascar)

    International Nuclear Information System (INIS)

    Ranaivo Nomenjanahary, F.; Rakoto, H.; Ratsimbazafy, J.B.

    1994-08-01

    This paper is concerned with resistivity sounding measurements performed at a single site (vertical sounding) or at several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. The achievement of this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which take into account the probable effect of a heterogeneous structure. Faced with such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation), and using the rough but correct model obtained as an a-priori model for any more refined interpretation. Related to this point of view, special attention should be paid to the inverse problem applied to resistivity sounding data. This inverse problem is nonlinear, whereas linearity is inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher-dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Some illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and in the Mahitsy area (several sites) are given. (author). 28 refs, 7 figs

  1. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William

    2011-02-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems that the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  2. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William; Hanke, Martin

    2011-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems that the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  3. An approximation method for diffusion based leaching models

    International Nuclear Information System (INIS)

    Shukla, B.S.; Dignam, M.J.

    1987-01-01

    In connection with the fixation of nuclear waste in a glassy matrix, equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations is solved, to a satisfactory approximation, for a matrix dissolving at a constant rate in a finite volume of leachant, to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contributions to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)

  4. Complete two-loop effective potential approximation to the lightest Higgs scalar boson mass in supersymmetry

    International Nuclear Information System (INIS)

    Martin, Stephen P.

    2003-01-01

    I present a method for accurately calculating the pole mass of the lightest Higgs scalar boson in supersymmetric extensions of the standard model, using a mass-independent renormalization scheme. The Higgs scalar self-energies are approximated by supplementing the exact one-loop results with the second derivatives of the complete two-loop effective potential in Landau gauge. I discuss the dependence of this approximation on the choice of renormalization scale, and note the existence of particularly poor choices, which fortunately can be easily identified and avoided. For typical input parameters, the variation in the calculated Higgs boson mass over a wide range of renormalization scales is found to be of the order of a few hundred MeV or less, and is significantly improved over previous approximations

  5. Parente2: a fast and accurate method for detecting identity by descent

    KAUST Repository

    Rodriguez, Jesse M.

    2014-10-01

    Identity-by-descent (IBD) inference is the problem of establishing a genetic connection between two individuals through a genomic segment that is inherited by both individuals from a recent common ancestor. IBD inference is an important preceding step in a variety of population genomic studies, ranging from demographic studies to linking genomic variation with phenotype and disease. The problem of accurate IBD detection has become increasingly challenging with the availability of large collections of human genotypes and genomes: Given a cohort's size, a quadratic number of pairwise genome comparisons must be performed. Therefore, computation time and the false discovery rate can also scale quadratically. To enable accurate and efficient large-scale IBD detection, we present Parente2, a novel method for detecting IBD segments. Parente2 is based on an embedded log-likelihood ratio and uses a model that accounts for linkage disequilibrium by explicitly modeling haplotype frequencies. Parente2 operates directly on genotype data without the need to phase data prior to IBD inference. We evaluate Parente2's performance through extensive simulations using real data, and we show that it provides substantially higher accuracy compared to previous state-of-the-art methods while maintaining high computational efficiency.

  6. Approximations for Large Deflection of a Cantilever Beam under a Terminal Follower Force and Nonlinear Pendulum

    Directory of Open Access Journals (Sweden)

    H. Vázquez-Leal

    2013-01-01

    Full Text Available In the field of theoretical mechanics, solution methods for nonlinear differential equations are very important because many problems are modelled using such equations. In particular, the large deflection of a cantilever beam under a terminal follower force and the nonlinear pendulum problem can be described by the same nonlinear differential equation. Therefore, in this work, we propose some approximate solutions for both problems using the nonlinearities distribution homotopy perturbation method, the homotopy perturbation method, and combinations with a Laplace-Padé post-treatment. We show the high accuracy of the proposed cantilever solutions, which are in good agreement with other reported solutions. Finally, for the pendulum case, the proposed approximation was useful to predict, accurately, the period for an angle up to 179.99999999∘, yielding a relative error of 0.01222747.

  7. A discontinuous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin; Bagci, Hakan

    2011-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

  8. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  9. Multifrequency Excitation Method for Rapid and Accurate Dynamic Test of Micromachined Gyroscope Chips

    Directory of Open Access Journals (Sweden)

    Yan Deng

    2014-10-01

    Full Text Available A novel multifrequency excitation (MFE method is proposed to realize rapid and accurate dynamic testing of micromachined gyroscope chips. Compared with the traditional sweep-frequency excitation (SFE method, the computational time for testing one chip under four modes at a 1-Hz frequency resolution and 600-Hz bandwidth was dramatically reduced from 10 min to 6 s. A multifrequency signal with an equal amplitude and initial linear-phase-difference distribution was generated to ensure test repeatability and accuracy. The current test system based on LabVIEW using the SFE method was modified to use the MFE method without any hardware changes. The experimental results verified that the MFE method can be an ideal solution for large-scale dynamic testing of gyroscope chips and gyroscopes.
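    To illustrate what a multifrequency excitation signal of this kind looks like, here is a hedged Python sketch that sums equal-amplitude tones on a 1-Hz grid with a linear initial-phase progression. The bandwidth and resolution follow the abstract, but the sampling rate, duration and phase increment are illustrative choices, not the paper's actual schedule.

        import numpy as np

        def multifrequency_signal(f_start=1.0, f_stop=600.0, df=1.0, fs=10_000.0,
                                  duration=1.0, amplitude=1.0, dphi=np.pi / 4):
            """Equal-amplitude multitone signal with a linear initial-phase-difference
            distribution (phase schedule is an assumption for this sketch)."""
            t = np.arange(0.0, duration, 1.0 / fs)
            freqs = np.arange(f_start, f_stop + df, df)
            phases = dphi * np.arange(freqs.size)      # linear phase-difference distribution
            sig = amplitude * np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)
            return t, sig

        t, sig = multifrequency_signal()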

  10. An accurate clone-based haplotyping method by overlapping pool sequencing.

    Science.gov (United States)

    Li, Cheng; Cao, Changchang; Tu, Jing; Sun, Xiao

    2016-07-08

    Chromosome-long haplotyping of human genomes is important to identify genetic variants with differing gene expression, in human evolution studies, clinical diagnosis, and other biological and medical fields. Although several methods have realized haplotyping based on sequencing technologies or population statistics, accuracy and cost are factors that prohibit their wide use. Borrowing ideas from group testing theories, we proposed a clone-based haplotyping method by overlapping pool sequencing. The clones from a single individual were pooled combinatorially and then sequenced. According to the distinct pooling pattern for each clone in the overlapping pool sequencing, alleles for the recovered variants could be assigned to their original clones precisely. Subsequently, the clone sequences could be reconstructed by linking these alleles accordingly and assembling them into haplotypes with high accuracy. To verify the utility of our method, we constructed 130 110 clones in silico for the individual NA12878 and simulated the pooling and sequencing process. Ultimately, 99.9% of variants on chromosome 1 that were covered by clones from both parental chromosomes were recovered correctly, and 112 haplotype contigs were assembled with an N50 length of 3.4 Mb and no switch errors. A comparison with current clone-based haplotyping methods indicated our method was more accurate. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  12. An analytic, approximate method for modeling steady, three-dimensional flow to partially penetrating wells

    Science.gov (United States)

    Bakker, Mark

    2001-05-01

    An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, in a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over a small area around the well only; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision in three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.

  13. Approximate and renormgroup symmetries

    International Nuclear Information System (INIS)

    Ibragimov, Nail H.; Kovalev, Vladimir F.

    2009-01-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  14. Perturbation method for periodic solutions of nonlinear jerk equations

    International Nuclear Information System (INIS)

    Hu, H.

    2008-01-01

    A Lindstedt-Poincare type perturbation method with bookkeeping parameters is presented for determining accurate analytical approximate periodic solutions of some third-order (jerk) differential equations with cubic nonlinearities. In the process of the solution, higher-order approximate angular frequencies are obtained by Newton's method. A typical example is given to illustrate the effectiveness and simplicity of the proposed method
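    To make the setting concrete, a representative cubic jerk equation and the Lindstedt-Poincaré ansatz look schematically as follows; this is an illustrative example, not necessarily one of the equations or bookkeeping-parameter schemes treated in the paper:

        \dddot{x} + \dot{x} + \varepsilon\,\dot{x}^{3} = 0,
        \qquad
        \tau = \omega t,\quad
        x(\tau) = x_0(\tau) + \varepsilon\, x_1(\tau) + \varepsilon^{2} x_2(\tau) + \cdots,\quad
        \omega^{2} = 1 + \varepsilon\,\omega_1 + \varepsilon^{2}\omega_2 + \cdots,

    where the frequency corrections \omega_1, \omega_2, \ldots are fixed order by order by eliminating secular terms, and the paper obtains the higher-order approximate angular frequencies with Newton's method.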

  15. Uniform approximation is more appropriate for Wilcoxon Rank-Sum Test in gene set analysis.

    Directory of Open Access Journals (Sweden)

    Zhide Fang

    Full Text Available Gene set analysis is widely used to facilitate biological interpretation in analyses of differential expression from high-throughput profiling data. The Wilcoxon Rank-Sum (WRS) test is one of the commonly used methods in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the gene set. This method is easy to implement and it eliminates the dichotomization of genes into significant and non-significant in a competitive hypothesis test. Due to the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test. Therefore, the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with a relatively small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less computationally intensive. We demonstrate the advantage of the uniform approximation in Gene Ontology (GO) term analysis using simulations and real data sets.
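    As a quick illustration of the issue described above, the following hedged Python sketch compares the normal-approximation tail p-value of the rank-sum of a small gene set with a Monte Carlo estimate. The gene-set size, total gene count and sampling shortcut are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        N, m, n_sim = 10_000, 10, 500_000      # total genes, gene-set size, Monte Carlo draws

        # Null rank-sums of the gene set.  For m << N we sample ranks with replacement;
        # the chance of a tie within a draw (< 0.5 % here) is negligible for this sketch.
        W = rng.integers(1, N + 1, size=(n_sim, m)).sum(axis=1)

        mu = m * (N + 1) / 2.0
        sigma = np.sqrt(m * (N - m) * (N + 1) / 12.0)   # standard WRS null variance

        for q in (0.999, 0.9999):
            w_obs = np.quantile(W, q)
            p_mc = (W >= w_obs).mean()                  # Monte Carlo tail probability
            p_norm = norm.sf((w_obs - mu) / sigma)      # normal-approximation p-value
            print(f"W >= {w_obs:.0f}: MC p = {p_mc:.1e}, normal approx p = {p_norm:.1e}")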

  16. Peculiarities of cyclotron magnetic system calculation with the finite difference method using two-dimensional approximation

    International Nuclear Information System (INIS)

    Shtromberger, N.L.

    1989-01-01

    The legitimacy of applying two-dimensional approximations to the design of a cyclotron magnetic system is discussed. In all the calculations the finite difference method is used, and the linearization method, followed by the conjugate gradient method, is used to solve the set of finite-difference equations. 3 refs.; 5 figs.

  17. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  18. Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information.

    Science.gov (United States)

    Jia, Bin; Wang, Xiaodong

    2013-12-17

    The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
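    The point-based Gaussian approximation filters mentioned above all rest on propagating a small set of deterministically chosen points through the nonlinearity. Below is a minimal, hedged Python sketch of the standard unscented transform, a generic UKF building block; the gene-regulation models and the prior-information machinery of the paper are not reproduced, and the Hill-type nonlinearity is purely hypothetical.

        import numpy as np

        def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=2.0):
            """Standard unscented-transform sigma points and weights."""
            n = mean.size
            lam = alpha ** 2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)
            pts = np.vstack([mean, mean + S.T, mean - S.T])      # 2n + 1 points
            w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            w_c = w_m.copy()
            w_m[0] = lam / (n + lam)
            w_c[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
            return pts, w_m, w_c

        def unscented_transform(f, mean, cov):
            """Propagate a Gaussian (mean, cov) through a nonlinearity f via sigma points."""
            pts, w_m, w_c = sigma_points(mean, cov)
            Y = np.array([f(p) for p in pts])
            y_mean = w_m @ Y
            d = Y - y_mean
            y_cov = (w_c[:, None] * d).T @ d
            return y_mean, y_cov

        # Hypothetical Hill-type regulation nonlinearity, for illustration only.
        f = lambda x: x ** 2 / (1.0 + x ** 2)
        m, P = np.array([0.5, 1.0]), 0.05 * np.eye(2)
        print(unscented_transform(f, m, P))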

  19. Green-Ampt approximations: A comprehensive analysis

    Science.gov (United States)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (e.g., percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in the selection of accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
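    For readers unfamiliar with why explicit approximations are sought at all, the hedged Python sketch below solves the implicit Green-Ampt equation by Newton iteration and contrasts it with a simple explicit early-time approximation obtained from a two-term expansion of the logarithm. Parameter values are illustrative, and none of the nine published approximations benchmarked in the study is reproduced here.

        import numpy as np

        def green_ampt_implicit(t, K=1.0, psi=11.01, dtheta=0.3, tol=1e-10):
            """Cumulative infiltration F(t) from the implicit Green-Ampt equation
               F = K*t + psi*dtheta*ln(1 + F/(psi*dtheta)),
            solved by Newton iteration.  Units must be consistent; values are illustrative."""
            S = psi * dtheta
            F = max(K * t, 1e-9)                  # initial guess
            for _ in range(100):
                g = F - K * t - S * np.log(1.0 + F / S)
                dg = 1.0 - S / (S + F)
                F_new = F - g / dg
                if abs(F_new - F) < tol:
                    return F_new
                F = F_new
            return F

        def green_ampt_early_time(t, K=1.0, psi=11.01, dtheta=0.3):
            """Explicit early-time approximation F ~ sqrt(2*K*psi*dtheta*t), from a
            two-term expansion of the logarithm (not one of the nine benchmarked models)."""
            return np.sqrt(2.0 * K * psi * dtheta * t)

        for t in (0.1, 1.0, 10.0):
            print(t, green_ampt_implicit(t), green_ampt_early_time(t))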

  20. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    Science.gov (United States)

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement

  1. Comparison of the methods for discrete approximation of the fractional-order operator

    Directory of Open Access Journals (Sweden)

    Zborovjan Martin

    2003-12-01

    Full Text Available In this paper we present some alternative types of discretization methods (discrete approximations) for the fractional-order (FO) differentiator and their application to an FO dynamical system described by an FO differential equation (FDE). Two effective methods, the Muir expansion of the Tustin operator and the continued fraction expansion (CFE) method with the Tustin operator and the Al-Alaoui operator, are compared with the analytical solution and with a numerical solution obtained by the power series expansion (PSE) method. In addition to a detailed mathematical description, simulation results are also presented. From the Bode plots of the FO differentiator and the FDE, and from the solution in the time domain, we can see that the CFE is a more effective method than the PSE method, although there are some restrictions on the choice of the time step. The Muir expansion is almost unusable.
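    As a concrete reference point for the PSE route mentioned above, here is a hedged Python sketch of the power-series (Grünwald-Letnikov-style) discretization of an order-alpha differentiator based on the simple backward-difference (Euler) operator. The Tustin and Al-Alaoui CFE discretizations compared in the paper are not implemented here, and the test signal is only an illustration.

        import numpy as np

        def gl_weights(alpha, n):
            """Binomial weights of the power-series expansion of (1 - z^-1)^alpha,
            via the recursion w_0 = 1, w_k = (1 - (alpha + 1)/k) * w_{k-1}."""
            w = np.empty(n + 1)
            w[0] = 1.0
            for k in range(1, n + 1):
                w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
            return w

        def fractional_derivative(x, alpha, T):
            """Grunwald-Letnikov-style order-alpha derivative of a sampled signal x,
            step T.  Illustrates the PSE/Euler route only, not the CFE of Tustin/Al-Alaoui."""
            w = gl_weights(alpha, len(x) - 1)
            y = np.array([np.dot(w[:k + 1], x[k::-1]) for k in range(len(x))])
            return y / T ** alpha

        T = 0.01
        t = np.arange(0, 1, T)
        print(fractional_derivative(t ** 2, 0.5, T)[-5:])   # half-derivative of t^2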

  2. Approximate and renormgroup symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling

    2009-07-01

    "Approximate and Renormgroup Symmetries" deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)

  3. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integration methods, a novel vibration signal integration method based on feature information extraction was proposed. This method took full advantage of the self-adaptive filtering characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. This research merged the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of the four indexes mentioned above were combined into a feature vector. Then, the latent characteristic components in the vibration signal were accurately extracted by a Euclidean distance search, and the desired integrated signals were precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is effectively removed. The large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integration methods, this method is outstanding at removing noise while retaining useful feature information, and shows higher accuracy.

  4. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  5. An implicit second order numerical method for two-fluid models

    International Nuclear Information System (INIS)

    Toumi, I.

    1995-01-01

    We present an implicit upwind numerical method for a six-equation two-fluid model based on a linearized Riemann solver. The construction of this approximate Riemann solver uses an extension of Roe's scheme. Extension to a second-order accurate method is achieved using a piecewise linear approximation of the solution and a slope limiter method. For advancing in time, a linearized implicit integration step is used. In practice this new numerical method has proved to be stable and capable of generating accurate non-oscillating solutions for two-phase flow calculations. The scheme was applied both to shock tube problems and to standard tests for two-fluid codes. (author)

  6. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio; Ibeid, Huda; Keyes, David E.

    2018-01-01

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is twofold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.
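    To show the algebraic property that both the FMM and hierarchical matrix methods exploit, here is a hedged Python sketch: the interaction block between two well-separated point clusters under a 1/r kernel has rapidly decaying singular values, so it compresses to low rank. This is purely illustrative and does not reproduce either of the benchmarked implementations.

        import numpy as np

        rng = np.random.default_rng(0)
        src = rng.random((500, 3))                                # source cluster in [0,1]^3
        trg = rng.random((500, 3)) + np.array([3.0, 0.0, 0.0])    # well-separated target cluster

        # Dense 1/r interaction block and its truncated SVD.
        K = 1.0 / np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)
        U, s, Vt = np.linalg.svd(K, full_matrices=False)

        tol = 1e-6
        rank = int(np.searchsorted(-s / s[0], -tol))              # numerical rank at relative tol
        U_r, s_r, Vt_r = U[:, :rank], s[:rank], Vt[:rank]
        err = np.linalg.norm(K - (U_r * s_r) @ Vt_r) / np.linalg.norm(K)

        full_mem = K.size
        compressed_mem = U_r.size + s_r.size + Vt_r.size
        print(f"rank {rank} of {min(K.shape)}, rel. error {err:.1e}, "
              f"memory ratio {compressed_mem / full_mem:.3f}")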

  7. Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

    KAUST Repository

    Yokota, Rio

    2018-01-03

    There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is twofold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.

  8. An accurate segmentation method for volumetry of brain tumor in 3D MRI

    Science.gov (United States)

    Wang, Jiahui; Li, Qiang; Hirai, Toshinori; Katsuragawa, Shigehiko; Li, Feng; Doi, Kunio

    2008-03-01

    Accurate volumetry of brain tumors in magnetic resonance imaging (MRI) is important for evaluating the interval changes in tumor volumes during and after treatment, and also for planning of radiation therapy. In this study, an automated volumetry method for brain tumors in MRI was developed by use of a new three-dimensional (3-D) image segmentation technique. First, the central location of a tumor was identified by a radiologist, and then a volume of interest (VOI) was determined automatically. To substantially simplify tumor segmentation, we transformed the 3-D image of the tumor into a two-dimensional (2-D) image by use of a "spiral-scanning" technique, in which a radial line originating from the center of the tumor scanned the 3-D image spirally from the "north pole" to the "south pole". The voxels scanned by the radial line provided a transformed 2-D image. We employed dynamic programming to delineate an "optimal" outline of the tumor in the transformed 2-D image. We then transformed the optimal outline back into 3-D image space to determine the volume of the tumor. The volumetry method was trained and evaluated by use of 16 cases with 35 brain tumors. The agreement between tumor volumes provided by computer and a radiologist was employed as a performance metric. Our method provided relatively accurate results with a mean agreement value of 88%.

  9. Approximation of the Thomas-Fermi-Dirac potential for neutral atoms

    International Nuclear Information System (INIS)

    Jablonski, A.

    1992-01-01

    The frequently used analytical expression of Bonham and Strand approximating the Thomas-Fermi-Dirac (TFD) potential is closely analyzed. This expression does not satisfy the boundary conditions of the TFD differential equation; in particular, it does not account for the finite radius of the TFD potential. A modification of the analytical expression is proposed to adjust it to the boundary conditions. A new fit is made on the basis of the variational formulation of the TFD problem. An attempt is also made in the present work to develop a new numerical procedure providing very accurate solutions of this problem. Such solutions form a reference to check the quality of analytical approximations. Exemplary calculations of the elastic scattering cross sections are made for different expressions approximating the TFD potential to visualize the influence of the inaccuracies of the fit. It seems that the elastic scattering calculations should be based on extensive tables with the accurate values of the TFD screening function rather than on fitted analytical expressions. (orig.)

  10. METHODS OF THE APPROXIMATE ESTIMATIONS OF FATIGUE DURABILITY OF COMPOSITE AIRFRAME COMPONENT TYPICAL ELEMENTS

    Directory of Open Access Journals (Sweden)

    V. E. Strizhius

    2015-01-01

    Methods for the approximate estimation of the fatigue durability of typical composite airframe component elements, which can be recommended for application at the preliminary (outline) design stage of an airplane, are developed and presented.

  11. The generalized Mayer theorem in the approximating hamiltonian method

    International Nuclear Information System (INIS)

    Bakulev, A.P.; Bogoliubov, N.N. Jr.; Kurbatov, A.M.

    1982-07-01

    With the help of the generalized Mayer theorem we obtain an improved inequality for the free energies of the model and approximating systems, where only "connected parts" over the approximating Hamiltonian are taken into account. For a concrete system we discuss the convergence of the corresponding series of "connected parts". (author)

  12. An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay

    International Nuclear Information System (INIS)

    Horoi, Mihai; Neacsu, Andrei

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  13. Optimized implementations of rational approximations for the Voigt and complex error function

    International Nuclear Information System (INIS)

    Schreier, Franz

    2011-01-01

    Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued functions. For the complex error function w(x+iy), whose real part is the Voigt function K(x,y), code optimizations of rational approximations are investigated. An assessment of requirements for atmospheric radiative transfer modeling indicates a y range over many orders of magnitude and accuracy better than 10⁻⁴. Following a brief survey of complex error function algorithms in general and rational function approximations in particular, the problems associated with subdivisions of the x, y plane (i.e., conditional branches in the code) are discussed and practical aspects of Fortran and Python implementations are considered. Benchmark tests of a variety of algorithms demonstrate that programming language, compiler choice, and implementation details influence computational speed and there is no unique ranking of algorithms. A new implementation, based on subdivision of the upper half-plane in only two regions, combining Weideman's rational approximation for small |x|+y < 15 and Humlicek's rational approximation otherwise, is shown to be efficient and accurate for all x, y.
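
    As an illustration of the two-region idea, the hedged sketch below switches between a one-pole rational approximation of Humlicek's region-I type for |x|+y ≥ 15 and a reference routine elsewhere; scipy.special.wofz stands in for Weideman's rational approximation used in the paper, and the function name w_two_region is an assumption of this sketch.

    ```python
    # Sketch of the two-region evaluation of w(z); scipy.special.wofz stands in for
    # Weideman's rational approximation in the inner region (an assumption of this sketch).
    import numpy as np
    from scipy.special import wofz

    SQRT_PI = np.sqrt(np.pi)

    def w_two_region(x, y):
        """Complex error function w(x + i y), y >= 0, evaluated region-wise."""
        z = x + 1j * y
        outer = (np.abs(x) + y) >= 15.0
        w = np.empty_like(z)
        # Outer region: one-pole rational approximation (Humlicek region-I form).
        w[outer] = 1j * z[outer] / (SQRT_PI * (z[outer] ** 2 - 0.5))
        # Inner region: reference evaluation, placeholder for Weideman's approximation.
        w[~outer] = wofz(z[~outer])
        return w

    x = np.linspace(-30.0, 30.0, 201)
    y = np.full_like(x, 0.1)
    # Only the outer region differs from wofz; the difference stays well below 1e-4 there.
    print(np.abs(w_two_region(x, y) - wofz(x + 1j * y)).max())
    ```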

  14. Applicability of point-dipoles approximation to all-dielectric metamaterials

    DEFF Research Database (Denmark)

    Kuznetsova, S. M.; Andryieuski, Andrei; Lavrinenko, Andrei

    2015-01-01

    All-dielectric metamaterials consisting of high-dielectric inclusions in a low-dielectric matrix are considered as a low-loss alternative to resonant metal-based metamaterials. In this paper we investigate the applicability of the point electric and magnetic dipole approximation to dielectric meta-atoms on the example of a dielectric ring metamaterial. Despite the large electrical size of high-dielectric meta-atoms, the dipole approximation allows for accurate prediction of the metamaterial properties for rings with diameters up to approximately 0.8 of the lattice constant. The results provide important guidelines for the design and optimization of all-dielectric metamaterials.

  15. Accuracy of the Hartree-Fock and local density approximations for electron densities: a study for light atoms

    International Nuclear Information System (INIS)

    Almbladh, C.-O.; Ekenberg, U.; Pedroza, A.C.

    1983-01-01

    The authors compare the electron densities and Hartree potentials in the local density and the Hartree-Fock approximations to the corresponding quantities obtained from more accurate correlated wavefunctions. The comparison is made for a number of two-electron atoms, Li, and for Be. The Hartree-Fock approximation is more accurate than the local density approximation within the 1s shell and for the spin polarization in Li, while the local density approximation is slightly better than the Hartree-Fock approximation for charge densities in the 2s shell. The inaccuracy of the Hartree-Fock and local density approximations to the Hartree potential is substantially smaller than the inaccuracy of the local density approximation to the ground-state exchange-correlation potential. (Auth.)

  16. An improved corrective smoothed particle method approximation for second‐order derivatives

    NARCIS (Netherlands)

    Korzilius, S.P.; Schilders, W.H.A.; Anthonissen, M.J.H.

    2013-01-01

    To solve (partial) differential equations it is necessary to have good numerical approximations. In SPH, most approximations suffer from the presence of boundaries. In this work a new approximation for the second-order derivative is derived and numerically compared with two other approximations.

  17. S-curve networks and an approximate method for estimating degree distributions of complex networks

    Science.gov (United States)

    Guo, Jin-Li

    2010-12-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve), and the growing trend of IPv4 addresses in China is forecasted. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth is proposed; this model is called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and it is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
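
    The S-curve idea can be sketched with a logistic fit; the sketch below uses synthetic yearly counts (not the actual Chinese IPv4 statistics) and the assumed parameter names K, r, t0.

    ```python
    # Logistic (S-curve) fit to synthetic yearly counts; data are illustrative only.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        """Logistic growth with carrying capacity K, rate r and midpoint t0."""
        return K / (1.0 + np.exp(-r * (t - t0)))

    rng = np.random.default_rng(0)
    years = np.arange(2000, 2011, dtype=float)
    counts = logistic(years, K=3.3e8, r=0.6, t0=2007.0) * (1 + 0.02 * rng.standard_normal(years.size))

    popt, _ = curve_fit(logistic, years, counts, p0=(counts.max() * 2, 0.5, years.mean()))
    K_hat, r_hat, t0_hat = popt
    print(f"estimated growth limit K = {K_hat:.3e}, rate r = {r_hat:.2f}")
    print("forecast for 2015:", logistic(2015.0, *popt))
    ```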

  18. S-curve networks and an approximate method for estimating degree distributions of complex networks

    International Nuclear Information System (INIS)

    Guo Jin-Li

    2010-01-01

    In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S curve (logistic curve), and the growing trend of IPv4 addresses in China is forecasted. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, a finite network model with bulk growth is proposed; this model is called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. An approximate method is developed to predict the growth dynamics of the individual nodes, and it is used to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulation, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research. (general)

  19. Time line cell tracking for the approximation of lagrangian coherent structures with subgrid accuracy

    KAUST Repository

    Kuhn, Alexander

    2013-12-05

    Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCSs is to compute height ridges in the finite-time Lyapunov exponent field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCS on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application for a set of computational fluid dynamics examples, as well as trajectories acquired by Lagrangian methods, and discuss its benefits and limitations. © 2013 The Authors Computer Graphics Forum © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  20. Efficient approximation of the incomplete gamma function for use in cloud model applications

    Directory of Open Access Journals (Sweden)

    U. Blahak

    2010-07-01

    This paper describes an approximation to the lower incomplete gamma function γl(a,x) which has been obtained by nonlinear curve fitting. It comprises a fixed number of terms and yields moderate accuracy (the absolute approximation error of the corresponding normalized incomplete gamma function P is smaller than 0.02 in the range 0.9 ≤ a ≤ 45 and x ≥ 0). Monotonicity and the asymptotic behaviour of the original incomplete gamma function are preserved.

    While providing a slight to moderate performance gain on scalar machines (depending on whether a stays the same for subsequent function evaluations or not) compared to established and more accurate methods based on series or continued-fraction expansions with a variable number of terms, a big advantage over these more accurate methods is the applicability on vector CPUs. Here the fixed number of terms enables proper and efficient vectorization. The fixed number of terms might also be beneficial on massively parallel machines to avoid load imbalances caused by a possibly vastly different number of terms in series expansions to reach convergence at different grid points. For many cloud microphysical applications, the provided moderate accuracy should be enough. However, on scalar machines and if a is the same for subsequent function evaluations, the most efficient method to evaluate incomplete gamma functions is perhaps interpolation of pre-computed regular lookup tables (the simplest example being equidistant tables).
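
    The fixed-number-of-terms argument can be illustrated with the sketch below, which evaluates the normalized lower incomplete gamma function from a truncated Kummer series with a fixed trip count and compares it with scipy. This is not Blahak's fitted formula, only an illustration of why a fixed term count vectorizes well; accuracy degrades when x is much larger than a.

    ```python
    # Fixed-term evaluation of P(a, x) via the truncated Kummer series (not Blahak's fit).
    import numpy as np
    from scipy.special import gammainc, gammaln

    def gammainc_fixed_terms(a, x, n_terms=32):
        """P(a, x) = x^a e^-x / Gamma(a+1) * sum_{n>=0} x^n / ((a+1)...(a+n)), truncated."""
        a = np.asarray(a, dtype=float)
        x = np.asarray(x, dtype=float)
        term = np.ones(np.broadcast(a, x).shape)   # n = 0 term of the inner sum
        total = term.copy()
        for n in range(1, n_terms):                # fixed trip count -> easy to vectorize
            term = term * x / (a + n)
            total = total + term
        log_prefactor = a * np.log(x) - x - gammaln(a + 1.0)
        return np.exp(log_prefactor) * total

    a = np.linspace(0.9, 45.0, 5)
    x = a.copy()                                   # x of order a keeps the series converged
    print(np.abs(gammainc_fixed_terms(a, x) - gammainc(a, x)).max())
    ```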

  1.  Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  2. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.
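
    A hedged sketch of a chemical Langevin equation integrated with Euler-Maruyama is given below for a single-species birth-death system; the rate constants and the system itself are illustrative and unrelated to the gene-network models studied in the paper.

    ```python
    # Euler-Maruyama integration of a chemical Langevin equation for a birth-death system.
    import numpy as np

    rng = np.random.default_rng(0)
    k_prod, k_deg = 10.0, 0.1          # propensities: a1 = k_prod, a2 = k_deg * X
    dt, n_steps = 0.01, 20000
    x = 0.0
    trajectory = np.empty(n_steps)

    for i in range(n_steps):
        a1 = k_prod
        a2 = k_deg * max(x, 0.0)       # keep propensities non-negative
        drift = (a1 - a2) * dt
        # One independent Wiener increment per reaction channel, variance a_j * dt.
        noise = (np.sqrt(a1 * dt) * rng.standard_normal()
                 - np.sqrt(a2 * dt) * rng.standard_normal())
        x = x + drift + noise
        trajectory[i] = x

    # The stationary mean of the underlying jump process is k_prod / k_deg = 100.
    print("CLE long-run average:", trajectory[n_steps // 2:].mean())
    ```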

  3. Validity of the Born approximation for beyond Gaussian weak lensing observables

    Science.gov (United States)

    Petri, Andrea; Haiman, Zoltán; May, Morgan

    2017-06-01

    Accurate forward modeling of weak lensing (WL) observables from cosmological parameters is necessary for upcoming galaxy surveys. Because WL probes structures in the nonlinear regime, analytical forward modeling is very challenging, if not impossible. Numerical simulations of WL features rely on ray tracing through the outputs of N-body simulations, which requires knowledge of the gravitational potential and accurate solvers for light ray trajectories. A less accurate procedure, based on the Born approximation, only requires knowledge of the density field, and can be implemented more efficiently and at a lower computational cost. In this work, we use simulations to show that deviations of the Born-approximated convergence power spectrum, skewness and kurtosis from their fully ray-traced counterparts are consistent with the smallest nontrivial O(Φ³) post-Born corrections (so-called geodesic and lens-lens terms). Our results imply a cancellation among the larger O(Φ⁴) (and higher order) terms, consistent with previous analytic work. We also find that cosmological parameter bias induced by the Born-approximated power spectrum is negligible even for an LSST-like survey, once galaxy shape noise is considered. When considering higher order statistics such as the κ skewness and kurtosis, however, we find significant bias of up to 2.5σ. Using the LensTools software suite, we show that the Born approximation saves a factor of 4 in computing time with respect to the full ray tracing in reconstructing the convergence.

  4. Comment on 'Approximation for a large-angle simple pendulum period'

    International Nuclear Information System (INIS)

    Yuan Qingxin; Ding Pei

    2009-01-01

    In a recent letter, Belendez et al (2009 Eur. J. Phys. 30 L25-8) proposed an alternative approximation for the period of a simple pendulum to the one suggested earlier by Hite (2005 Phys. Teach. 43 290-2), who set out to improve on the Kidd and Fogg formula (2002 Phys. Teach. 40 81-3). As a response to that approximation scheme, we obtain another analytical approximation for the large-angle pendulum period, which is both simple and accurate in evaluating the exact period; moreover, for amplitudes less than 144 deg. the analytical approximate expression is more accurate than others in the literature. (letters and comments)
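
    For orientation, the hedged sketch below evaluates the exact large-angle period from the complete elliptic integral and compares it with the Kidd-Fogg closed form mentioned above; the approximation proposed in the comment itself is not reproduced here.

    ```python
    # Exact large-angle period vs. the Kidd-Fogg closed form (periods in units of T0).
    import numpy as np
    from scipy.special import ellipk

    theta0 = np.deg2rad(np.array([30.0, 90.0, 144.0]))
    m = np.sin(theta0 / 2.0) ** 2               # scipy's ellipk takes the parameter m = k^2
    T_exact = (2.0 / np.pi) * ellipk(m)         # T / T0 = (2 / pi) K(m)
    T_kidd_fogg = 1.0 / np.sqrt(np.cos(theta0 / 2.0))

    for th, te, tk in zip(np.rad2deg(theta0), T_exact, T_kidd_fogg):
        print(f"amplitude {th:5.0f} deg: exact {te:.4f} T0, Kidd-Fogg {tk:.4f} T0")
    ```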

  5. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    …based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD.

  6. Hybridization of Sensing Methods of the Search Domain and Adaptive Weighted Sum in the Pareto Approximation Problem

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    We consider a relatively new and rapidly developing class of methods for solving multi-objective optimization problems, based on a preliminarily built finite-dimensional approximation of the Pareto set, and thereby of the Pareto front, of the problem. The work investigates the efficiency of several modifications of the adaptive weighted sum (AWS) method. This method, proposed in the paper of J. H. Ryu, S. Kim, and H. Wan, is intended to build a Pareto approximation of the multi-objective optimization problem. The AWS method uses a quadratic approximation of the objective functions in the current sub-domain of the search space (the trust region), based on the gradient and Hessian matrix of the objective functions. To build the (quadratic) meta objective functions, this work uses methods of experimental design theory, which involve calculating the values of these functions at grid nodes covering the trust region (a sensing method of the search domain). Two groups of sensing methods are considered: hypercube- and hypersphere-based methods. For each of these groups, a number of test multi-objective optimization tasks have been used to study the efficiency of the following grids: the "Latin Hypercube" grid; a grid that is uniformly random for each dimension; and a grid based on the LPτ sequences.
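
    A minimal sketch of the "Latin Hypercube" sensing grid is given below: one stratified sample per equal-width slice along each dimension of the unit cube. The function name and sample counts are illustrative assumptions; the AWS trust-region machinery itself is not shown. In recent SciPy versions, scipy.stats.qmc.LatinHypercube provides an equivalent generator.

    ```python
    # Latin hypercube sampling of the unit cube: one point per stratum along each axis.
    import numpy as np

    def latin_hypercube(n_points, n_dims, seed=None):
        rng = np.random.default_rng(seed)
        # A random permutation of the strata per dimension, plus jitter inside each stratum.
        strata = np.stack([rng.permutation(n_points) for _ in range(n_dims)], axis=1)
        return (strata + rng.random((n_points, n_dims))) / n_points

    pts = latin_hypercube(8, 2, seed=0)
    print(np.sort(np.floor(pts * 8), axis=0))   # each column hits every stratum 0..7 once
    ```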

  7. Vacancy-rearrangement theory in the first Magnus approximation

    International Nuclear Information System (INIS)

    Becker, R.L.

    1984-01-01

    In the present paper we employ the first Magnus approximation (M1A), a unitarized Born approximation, in semiclassical collision theory. We have found previously that the M1A gives a substantial improvement over the first Born approximation (B1A) and can give a good approximation to a full coupled channels calculation of the mean L-shell vacancy probability per electron, p_L, when the L-vacancies are accompanied by a K-shell vacancy (p_L is obtained experimentally from measurements of Kα-satellite intensities). For sufficiently strong projectile-electron interactions (sufficiently large Z_p or small v) the M1A ceases to reproduce the coupled channels results, but it is accurate over a much wider range of Z_p and v than the B1A. 27 references

  8. Screened Coulomb interactions in metallic alloys. II. Screening beyond the single-site and atomic-sphere approximations

    DEFF Research Database (Denmark)

    Ruban, Andrei; Simak, S.I.; Korzhavyi, P.A.

    2002-01-01

    …-electron potential and energy. In the case of a random alloy such interactions can be accounted for only by lifting the atomic-sphere and single-site approximations, in order to include the polarization due to local environment effects. Nevertheless, a simple parametrization of the screened Coulomb interactions for the ordinary single-site methods, including the generalized perturbation method, is still possible. We obtained such a parametrization for bulk and surface NiPt alloys, which allows one to obtain quantitatively accurate effective interactions in this system.

  9. Accurate diffraction data integration by the EVAL15 profile prediction method : Application in chemical and biological crystallography

    NARCIS (Netherlands)

    Xian, X.

    2009-01-01

    Accurate integration of reflection intensities plays an essential role in structure determination of the crystallized compound. A new diffraction data integration method, EVAL15, is presented in this thesis. This method uses the principle of general impacts to predict ab initio three-dimensional

  10. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim; Tempone, Raul; Nobile, Fabio; Tamellini, Lorenzo

    2012-01-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  11. On the optimal polynomial approximation of stochastic PDEs by galerkin and collocation methods

    KAUST Repository

    Beck, Joakim

    2012-09-01

    In this work we focus on the numerical approximation of the solution u of a linear elliptic PDE with stochastic coefficients. The problem is rewritten as a parametric PDE and the functional dependence of the solution on the parameters is approximated by multivariate polynomials. We first consider the stochastic Galerkin method, and rely on sharp estimates for the decay of the Fourier coefficients of the spectral expansion of u on an orthogonal polynomial basis to build a sequence of polynomial subspaces that features better convergence properties, in terms of error versus number of degrees of freedom, than standard choices such as Total Degree or Tensor Product subspaces. We consider then the Stochastic Collocation method, and use the previous estimates to introduce a new class of Sparse Grids, based on the idea of selecting a priori the most profitable hierarchical surpluses, that, again, features better convergence properties compared to standard Smolyak or tensor product grids. Numerical results show the effectiveness of the newly introduced polynomial spaces and sparse grids. © 2012 World Scientific Publishing Company.

  12. Shape theory categorical methods of approximation

    CERN Document Server

    Cordier, J M

    2008-01-01

    This in-depth treatment uses shape theory as a "case study" to illustrate situations common to many areas of mathematics, including the use of archetypal models as a basis for systems of approximations. It offers students a unified and consolidated presentation of extensive research from category theory, shape theory, and the study of topological algebras. A short introduction to geometric shape explains specifics of the construction of the shape category and relates it to an abstract definition of shape theory. Upon returning to the geometric base, the text considers simplicial complexes and

  13. A discontinous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin

    2011-07-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.

  14. Multilevel Approximations of Markovian Jump Processes with Applications in Communication Networks

    KAUST Repository

    Vilanova, Pedro

    2015-05-04

    This thesis focuses on the development and analysis of efficient simulation and inference techniques for Markovian pure jump processes with a view towards applications in dense communication networks. These techniques are especially relevant for modeling networks of smart devices —tiny, abundant microprocessors with integrated sensors and wireless communication abilities— that form highly complex and diverse communication networks. During 2010, the number of devices connected to the Internet exceeded the number of people on Earth: over 12.5 billion devices. By 2015, Cisco’s Internet Business Solutions Group predicts that this number will exceed 25 billion. The first part of this work proposes novel numerical methods to estimate, in an efficient and accurate way, observables from realizations of Markovian jump processes. In particular, hybrid Monte Carlo type methods are developed that combine the exact and approximate simulation algorithms to exploit their respective advantages. These methods are tailored to keep a global computational error below a prescribed global error tolerance and within a given statistical confidence level. Indeed, the computational work of these methods is similar to the one of an exact method, but with a smaller constant. Finally, the methods are extended to systems with a disparity of time scales. The second part develops novel inference methods to estimate the parameters of Markovian pure jump process. First, an indirect inference approach is presented, which is based on upscaled representations and does not require sampling. This method is simpler than dealing directly with the likelihood of the process, which, in general, cannot be expressed in closed form and whose maximization requires computationally intensive sampling techniques. Second, a forward-reverse Monte Carlo Expectation-Maximization algorithm is provided to approximate a local maximum or saddle point of the likelihood function of the parameters given a set of
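
    The exact simulation algorithm that such hybrid schemes combine with approximate ones can be sketched as a plain Gillespie/SSA loop; the birth-death example below is illustrative only and is not one of the network models from the thesis.

    ```python
    # Exact stochastic simulation (Gillespie / SSA) of a birth-death jump process.
    import numpy as np

    rng = np.random.default_rng(1)
    k_birth, k_death = 10.0, 0.1
    t, t_end, x = 0.0, 200.0, 0
    states = [x]

    while t < t_end:
        a = np.array([k_birth, k_death * x])   # propensities of the two channels
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)         # exponential waiting time to the next jump
        x += 1 if rng.random() < a[0] / a0 else -1
        states.append(x)

    # Crude average over the later jumps; the stationary mean is k_birth / k_death = 100.
    print("average state over the second half of the jumps:", np.mean(states[len(states) // 2:]))
    ```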

  15. Application of the homotopy perturbation method and the homotopy analysis method for the dynamics of tobacco use and relapse

    Directory of Open Access Journals (Sweden)

    Anant Kant Shukla

    2014-11-01

    We obtain approximate analytical solutions of two mathematical models of the dynamics of tobacco use and relapse, including peer pressure, using the homotopy perturbation method (HPM) and the homotopy analysis method (HAM). To enlarge the domain of convergence we apply the Padé approximation to the HPM and HAM series solutions. We show graphically that the results obtained by both methods are very accurate in comparison with the numerical solution for a period of 30 years.

  16. Approximate Solutions of Nonlinear Partial Differential Equations by Modified q-Homotopy Analysis Method

    Directory of Open Access Journals (Sweden)

    Shaheed N. Huseen

    2013-01-01

    A modified q-homotopy analysis method (mq-HAM) was proposed for solving nth-order nonlinear differential equations. This method improves the convergence of the series solution in the nHAM, which was proposed earlier (see Hassan and El-Tawil 2011, 2012). The proposed method provides an approximate solution by rewriting the nth-order nonlinear differential equation in the form of n first-order differential equations. The solution of these n differential equations is obtained as a power series solution. This scheme is tested on two nonlinear exactly solvable differential equations. The results demonstrate the reliability and efficiency of the algorithm developed.

  17. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  18. A safe and accurate method to perform esthetic mandibular contouring surgery for Far Eastern Asians.

    Science.gov (United States)

    Hsieh, A M-C; Huon, L-K; Jiang, H-R; Liu, S Y-C

    2017-05-01

    A tapered mandibular contour is popular with Far Eastern Asians. This study describes a safe and accurate method of using preoperative virtual surgical planning (VSP) and an intraoperative ostectomy guide to maximize the esthetic outcomes of mandibular symmetry and tapering while mitigating injury to the inferior alveolar nerve (IAN). Twelve subjects with chief complaints of a wide and square lower face underwent this protocol from January to June 2015. VSP was used to confirm symmetry and preserve the IAN while maximizing the surgeon's ability to taper the lower face via mandibular inferior border ostectomy. The accuracy of this method was confirmed by superimposition of the perioperative computed tomography scans in all subjects. No subjects complained of prolonged paresthesia after 3 months. A safe and accurate protocol for achieving an esthetic lower face in indicated Far Eastern individuals is described. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    Science.gov (United States)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by

  20. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul, E-mail: paul.tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig–Maximilians Universität München, Oettingenstr. 67, 80538 München (Germany)

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ_i of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ_i. A summarizing discussion highlights the achievements of the new theory and of its approximate solution

  1. Approximate rational Jacobi elliptic function solutions of the fractional differential equations via the enhanced Adomian decomposition method

    International Nuclear Information System (INIS)

    Song Lina; Wang Weiguo

    2010-01-01

    In this Letter, an enhanced Adomian decomposition method, which introduces the h-curve of the homotopy analysis method into the standard Adomian decomposition method, is proposed. Several examples show that this method can successfully derive approximate rational Jacobi elliptic function solutions of fractional differential equations.

  2. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    Science.gov (United States)

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, which corresponds to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.

  3. Exact and approximate interior corner problem in neutron diffusion by integral transform methods

    International Nuclear Information System (INIS)

    Bareiss, E.H.; Chang, K.S.J.; Constatinescu, D.A.

    1976-09-01

    The mathematical solution of the neutron diffusion equation exhibits singularities in its derivatives at material corners. A mathematical treatment of the nature of these singularities and their impact on coarse network approximation methods in computational work is presented. The mathematical behavior is deduced from Green's functions, based on a generalized theory for two space dimensions, and the resulting systems of integral equations, as well as from the Kontorovich-Lebedev transform. The effect on numerical calculations is demonstrated for finite difference and finite element methods for a two-region corner problem

  4. Efficient Method to Approximately Solve Retrial Systems with Impatience

    Directory of Open Access Journals (Sweden)

    Jose Manuel Gimenez-Guzman

    2012-01-01

    We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately these systems do not admit an exact analytic solution, so it is mandatory to resort to approximate techniques. This novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system, but instead considers the system in its Markov Decision Process setting. This technique, known as value extrapolation, truncates the infinite state space, using a polynomial extrapolation method to approach the states outside the truncated state space. A numerical evaluation is carried out to evaluate this technique and to compare its performance with previous techniques. The obtained results show that value extrapolation greatly outperforms the previous approaches that have appeared in the literature, not only in terms of accuracy but also in terms of computational cost.

  5. Higher-Order Approximations of Motion of a Nonlinear Oscillator Using the Parameter Expansion Technique

    Science.gov (United States)

    Ganji, S. S.; Domairry, G.; Davodi, A. G.; Babazadeh, H.; Seyedalizadeh Ganji, S. H.

    The main objective of this paper is to apply the parameter expansion technique (a modified Lindstedt-Poincaré method) to calculate the first-, second-, and third-order approximations of the motion of a nonlinear oscillator arising in rigid rod rocking back. The dynamics and frequency of motion of this nonlinear mechanical system are analyzed. Careful attention is paid to the study of the effects of the introduced nonlinearity on the amplitudes of the oscillatory states and on the bifurcation structures. We examine the synchronization and the frequency of the systems using both the strong and special method. Numerical simulations confirm and complement the results obtained by the analytical approach. The approach offers a way to overcome the difficulty of computing the periodic behavior of oscillation problems in engineering. The solutions of this method are compared with the exact ones in order to validate the approach and assess the accuracy of the solutions. In particular, the APL-PM works well for the whole range of oscillation amplitudes, and excellent agreement of the approximate frequency with the exact one has been demonstrated. The approximate period derived here is accurate and close to the exact solution. This method has a distinguished feature which makes it simple to use, and it also agrees with the exact solutions for various parameters.

  6. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
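
    A hedged sketch of the idea: a second-order finite-difference eigenvalue of the harmonic oscillator computed on two meshes whose spacings differ by a factor of two and then Richardson-extrapolated. The domain size and mesh counts are arbitrary choices for illustration, not those of the paper.

    ```python
    # Finite-difference ground-state energy of the harmonic oscillator on two meshes,
    # combined by Richardson extrapolation (exact value is 0.5).
    import numpy as np

    def ground_state_energy(n):
        """Lowest eigenvalue of -psi''/2 + x^2 psi/2 on [-8, 8] with n interior points."""
        x, h = np.linspace(-8.0, 8.0, n + 2, retstep=True)
        xi = x[1:-1]
        main = 1.0 / h**2 + 0.5 * xi**2                 # diagonal of -D2/2 + V
        off = -0.5 / h**2 * np.ones(n - 1)
        H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        return np.linalg.eigvalsh(H)[0]

    e_h = ground_state_energy(200)                       # spacing h = 16 / 201
    e_h2 = ground_state_energy(401)                      # 401 interior points halve the spacing
    e_rich = (4.0 * e_h2 - e_h) / 3.0                    # cancels the leading O(h^2) error
    print(f"E(h) = {e_h:.8f}, E(h/2) = {e_h2:.8f}, Richardson = {e_rich:.8f}")
    ```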

  7. A method for the approximate solutions of the unsteady boundary layer equations

    International Nuclear Information System (INIS)

    Abdus Sattar, Md.

    1990-12-01

    The approximate integral method proposed by Bianchini et al. to solve the unsteady boundary layer equations is considered here with a simple modification to the scale function for the similarity variable. This is done by introducing a time-dependent length scale. The closed-form solutions thus obtained give satisfactory results for the velocity profile and the skin friction, up to a limiting case, in comparison with the results of past investigators. (author). 7 refs, 2 figs

  8. Born approximation to a perturbative numerical method for the solution of the Schroedinger equation

    International Nuclear Information System (INIS)

    Adam, Gh.

    1978-01-01

    A step function perturbative numerical method (SF-PN method) is developed for the solution of the Cauchy problem for the second order linear differential equation in normal form. An important point stressed in the present paper, which seems to have been previously ignored in the literature devoted to the PN methods, is the close connection between the first order perturbation theory of the PN approach and the well-known Born approximation, and, in general, the connection between the various orders of the PN corrections and the Neumann series. (author)

  9. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    Science.gov (United States)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, by referring to the band structure and the total energy of the local density approximation of density functional theory. The parameters are adjusted by computer so that the result reproduces the band structure and the total energy, and the algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of the lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals of several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.

  10. Generalized Gradient Approximation Made Simple

    International Nuclear Information System (INIS)

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-01-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. copyright 1996 The American Physical Society

  11. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    CERN Document Server

    Öz, E.; Muggli, P.

    2016-01-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method and has been described in great detail in the work by W. Tendell Hill et al. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based, Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to be able to make accurate relative density measurements between these two locations. This can then be used to infer the vapor density gradient along the AWAKE plasma source and also change it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prot...
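
    The cosine-fit step can be sketched as a standard nonlinear least-squares fit to a synthetic fringe; the fringe frequency, noise level and parameter names below are invented for illustration and are unrelated to the actual AWAKE diagnostic.

    ```python
    # Cosine fit to a synthetic interferogram fringe with nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def fringe(x, amplitude, frequency, phase, offset):
        return amplitude * np.cos(2.0 * np.pi * frequency * x + phase) + offset

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 1.0, 500)                           # detector coordinate (arb. units)
    data = fringe(x, 1.0, 7.3, 0.4, 2.0) + 0.05 * rng.standard_normal(x.size)

    p0 = (data.std() * np.sqrt(2.0), 7.0, 0.0, data.mean())  # rough starting guesses
    popt, pcov = curve_fit(fringe, x, data, p0=p0)
    print("fitted frequency and phase:", popt[1], popt[2])
    ```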

  12. Magnetocaloric effect (MCE): Microscopic approach within Tyablikov approximation for anisotropic ferromagnets

    Energy Technology Data Exchange (ETDEWEB)

    Kotelnikova, O.A.; Prudnikov, V.N. [Physical Faculty, Lomonosov State University, Department of Magnetism, Moscow (Russian Federation); Rudoy, Yu.G., E-mail: rudikar@mail.ru [People's Friendship University of Russia, Department of Theoretical Physics, Moscow (Russian Federation)

    2015-06-01

    The aim of this paper is to generalize the microscopic approach to the description of the magnetocaloric effect (MCE) started by Kokorina and Medvedev (E.E. Kokorina, M.V. Medvedev, Physica B 416 (2013) 29) by applying it to an anisotropic ferromagnet of the "easy axis" type in two settings: with the external magnetic field parallel and perpendicular to the axis of easy magnetization. In the latter case a field-induced (or spin-reorientation) phase transition appears, which occurs at a critical value of the external magnetic field. This value is proportional to the exchange anisotropy constant at low temperatures, but with the rise of temperature it may be renormalized (as a rule, proportionally to the magnetization). We use the explicit form of the Hamiltonian of the anisotropic ferromagnet and apply the widely used random phase approximation (RPA) (known also as the Tyablikov approximation in the Green function method), which is more accurate than the well-known molecular field approximation (MFA). It is shown that in the first case the magnitude of the MCE is raised, whereas in the second case the MCE disappears due to compensation of the critical field renormalized with the magnetization.

  13. The Bateman method for multichannel scattering theory

    International Nuclear Information System (INIS)

    Kim, Y. E.; Kim, Y. J.; Zubarev, A. L.

    1997-01-01

    Accuracy and convergence of the Bateman method are investigated for calculating the transition amplitude in multichannel scattering theory. This approximation method is applied to the calculation of the elastic amplitude. The calculated results are remarkably accurate compared with those of an exactly solvable multichannel model

  14. Padé approximant for normal stress differences in large-amplitude oscillatory shear flow

    Science.gov (United States)

    Poungthong, P.; Saengow, C.; Giacomin, A. J.; Kolitawong, C.; Merger, D.; Wilhelm, M.

    2018-04-01

    Analytical solutions for the normal stress differences in large-amplitude oscillatory shear flow (LAOS), for continuum or molecular models, normally take the inexact form of the first few terms of a series expansion in the shear rate amplitude. Here, we improve the accuracy of these truncated expansions by replacing them with rational functions called Padé approximants. The recent advent of exact solutions in LAOS presents an opportunity to identify accurate and useful Padé approximants. For this identification, we replace the truncated expansion for the corotational Jeffreys fluid with its Padé approximants for the normal stress differences. We uncover the most accurate and useful approximant, the [3,4] approximant, and then test its accuracy against the exact solution [C. Saengow and A. J. Giacomin, "Normal stress differences from Oldroyd 8-constant framework: Exact analytical solution for large-amplitude oscillatory shear flow," Phys. Fluids 29, 121601 (2017)]. We use Ewoldt grids to show the stunning accuracy of our [3,4] approximant in LAOS. We quantify this accuracy with an objective function and then map it onto the Pipkin space. Our two applications illustrate how to use our new approximant reliably. For this, we use the Spriggs relations to generalize our best approximant to multimode, and then, we compare with measurements on molten high-density polyethylene and on dissolved polyisobutylene in isobutylene oligomer.
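
    The series-to-Padé replacement described above can be sketched with scipy.interpolate.pade; the Taylor coefficients of exp(x) below stand in for the stress-difference expansion, since the corotational Jeffreys coefficients are not reproduced here.

    ```python
    # Replace a truncated Taylor series by a Pade approximant (exp(x) as a stand-in series).
    import numpy as np
    from math import factorial
    from scipy.interpolate import pade

    taylor = [1.0 / factorial(k) for k in range(8)]     # 1 + x + x^2/2! + ... + x^7/7!
    p, q = pade(taylor, 4, 3)                           # numerator order 3, denominator order 4

    x = 2.5
    print("truncated series:", np.polyval(taylor[::-1], x))
    print("[3/4] Pade      :", p(x) / q(x))
    print("exact exp(x)    :", np.exp(x))
    ```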

  15. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low

  16. Sparse approximation with bases

    CERN Document Server

    2015-01-01

    This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications.  The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...

  17. Approximate method for solving the velocity dependent transport equation in a slab lattice

    International Nuclear Information System (INIS)

    Ferrari, A.

    1966-01-01

    A method is described that is intended to provide an approximate solution of the transport equation in a medium simulating a water-moderated plate-filled reactor core. This medium is constituted by a periodic array of water channels and absorbing plates. The velocity-dependent transport equation in slab geometry is considered. The computation is performed in a water channel; the absorbing plates are accounted for by the boundary conditions. The scattering of neutrons in water is assumed isotropic, which allows the use of a double Pn approximation to deal with the angular dependence. This method is able to represent the discontinuity of the angular distribution at the channel boundary. The set of equations thus obtained depends only on x and v, and the coefficients are independent of x. This suggests trying solutions involving Legendre polynomials. Such an expansion leads to a set of equations that depend only on v. To obtain an explicit solution, a thermalization model must now be chosen. Using the secondary model of Cadilhac, a solution of this set is easy to obtain. The numerical computations were performed with a particular secondary model, the well-known model of Wigner and Wilkins. (author) [fr

  18. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  19. Testing approximate predictions of displacements of cosmological dark matter halos

    Energy Technology Data Exchange (ETDEWEB)

    Munari, Emiliano; Monaco, Pierluigi; Borgani, Stefano [Department of Physics, Astronomy Unit, University of Trieste, via Tiepolo 11, I-34143 Trieste (Italy); Koda, Jun [INAF – Osservatorio Astronomico di Brera, via E. Bianchi 46, I-23807 Merate (Italy); Kitaura, Francisco-Shu [Instituto de Astrofísica de Canarias, 38205 San Cristóbal de La Laguna, Santa Cruz de Tenerife (Spain); Sefusatti, Emiliano, E-mail: munari@oats.inaf.it, E-mail: monaco@oats.inaf.it, E-mail: jun.koda@brera.inaf.it, E-mail: fkitaura@iac.es, E-mail: sefusatti@oats.inaf.it, E-mail: borgani@oats.inaf.it [INAF – Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34143 Trieste (Italy)

    2017-07-01

    We present a test to quantify how well some approximate methods, designed to reproduce the mildly non-linear evolution of perturbations, are able to reproduce the clustering of DM halos once the grouping of particles into halos is defined and kept fixed. The following methods have been considered: Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT, Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by applying a friends-of-friends (FoF) halo finder to the output of an N-body simulation. The approximate methods are then applied to the same initial conditions of the simulation, producing, for all particles, displacements from their starting positions and velocities. The position and velocity of each halo are computed by averaging over the particles that belong to that halo, according to the FoF halo finder. This procedure allows us to perform a well-posed test of how the clustering of the matter density and halo density fields is recovered, without asking the approximate method for an accurate reconstruction of halos. We have considered the results at z = 0, 0.5 and 1, and we have analysed the power spectrum in real and redshift space, the object-by-object difference in position and velocity, the density Probability Distribution Function (PDF) and its moments, and the phase difference of Fourier modes. We find that higher LPT orders are generally able to better reproduce the clustering of halos, while little or no improvement is found for the matter density field when going to 2LPT and 3LPT. Augmentation provides some improvement when coupled with 2LPT, while its effect is limited when coupled with 3LPT. Little improvement is brought by MUSCLE with respect to Augmentation. The more expensive particle-mesh code COLA outperforms all LPT methods, and this is true even for mesh sizes as large as the inter-particle distance. This test sets an upper limit on the ability of these methods to reproduce the clustering of halos, for the cases when these objects are
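
    A hedged sketch of the halo-averaging step described above (particle positions and velocities averaged over fixed FoF memberships) is given below; the array layout and the use of -1 for field particles are assumptions, and periodic-box wrapping is ignored for brevity.

        import numpy as np

        def halo_centres(pos, vel, halo_id):
            """Average positions/velocities of particles over fixed FoF halo labels.

            pos, vel : (N, 3) arrays produced by the approximate method
            halo_id  : (N,) integer FoF label per particle, -1 for field particles
            (Periodic boundary wrapping is ignored in this toy version.)
            """
            members = halo_id >= 0
            ids = halo_id[members]
            n_halo = ids.max() + 1
            counts = np.bincount(ids, minlength=n_halo).astype(float)
            cen = np.empty((n_halo, 3))
            vcen = np.empty((n_halo, 3))
            for a in range(3):
                cen[:, a] = np.bincount(ids, weights=pos[members, a], minlength=n_halo) / counts
                vcen[:, a] = np.bincount(ids, weights=vel[members, a], minlength=n_halo) / counts
            return cen, vcen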

  20. Testing approximate predictions of displacements of cosmological dark matter halos

    Science.gov (United States)

    Munari, Emiliano; Monaco, Pierluigi; Koda, Jun; Kitaura, Francisco-Shu; Sefusatti, Emiliano; Borgani, Stefano

    2017-07-01

    We present a test to quantify how well some approximate methods, designed to reproduce the mildly non-linear evolution of perturbations, are able to reproduce the clustering of DM halos once the grouping of particles into halos is defined and kept fixed. The following methods have been considered: Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT, Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by applying a friends-of-friends (FoF) halo finder to the output of an N-body simulation. The approximate methods are then applied to the same initial conditions of the simulation, producing, for all particles, displacements from their starting positions and velocities. The position and velocity of each halo are computed by averaging over the particles that belong to that halo, according to the FoF halo finder. This procedure allows us to perform a well-posed test of how the clustering of the matter density and halo density fields is recovered, without asking the approximate method for an accurate reconstruction of halos. We have considered the results at z = 0, 0.5 and 1, and we have analysed the power spectrum in real and redshift space, the object-by-object difference in position and velocity, the density Probability Distribution Function (PDF) and its moments, and the phase difference of Fourier modes. We find that higher LPT orders are generally able to better reproduce the clustering of halos, while little or no improvement is found for the matter density field when going to 2LPT and 3LPT. Augmentation provides some improvement when coupled with 2LPT, while its effect is limited when coupled with 3LPT. Little improvement is brought by MUSCLE with respect to Augmentation. The more expensive particle-mesh code COLA outperforms all LPT methods, and this is true even for mesh sizes as large as the inter-particle distance. This test sets an upper limit on the ability of these methods to reproduce the clustering of halos, for the cases when these objects are

  1. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC methods

  2. Nonlinear dynamic analysis using Petrov-Galerkin natural element method

    International Nuclear Information System (INIS)

    Lee, Hong Woo; Cho, Jin Rae

    2004-01-01

    According to our previous study, it is confirmed that the Petrov-Galerkin Natural Element Method (PG-NEM) completely resolves the numerical integration inaccuracy in the conventional Bubnov-Galerkin Natural Element Method (BG-NEM). This paper extends the PG-NEM to the two-dimensional nonlinear dynamic problem. For the analysis, a constant average acceleration method and a linearized total Lagrangian formulation are introduced with the PG-NEM. At every time step, the grid points are updated and the shape functions are reproduced from the relocated nodal distribution. This process enables the PG-NEM to provide more accurate and robust approximations. Representative numerical experiments were performed with the test Fortran program, and the numerical results confirmed that the PG-NEM effectively and accurately approximates the nonlinear dynamic problem
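
    The constant average acceleration scheme mentioned above is the Newmark method with beta = 1/4 and gamma = 1/2; the sketch below applies it to a linear single-degree-of-freedom oscillator only, as a hedged illustration of the time stepping, not of the PG-NEM discretization itself. The oscillator parameters and step load are illustrative.

        import numpy as np

        def newmark_avg_accel(m, c, k, f, x0, v0, dt, beta=0.25, gamma=0.5):
            # Newmark time stepping (constant average acceleration for the default beta, gamma)
            n = len(f)
            x = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
            x[0], v[0] = x0, v0
            a[0] = (f[0] - c * v0 - k * x0) / m
            a1 = m / (beta * dt**2) + gamma * c / (beta * dt)
            a2 = m / (beta * dt) + (gamma / beta - 1.0) * c
            a3 = (1.0 / (2.0 * beta) - 1.0) * m + dt * (gamma / (2.0 * beta) - 1.0) * c
            k_eff = k + a1
            for i in range(n - 1):
                p_eff = f[i + 1] + a1 * x[i] + a2 * v[i] + a3 * a[i]
                x[i + 1] = p_eff / k_eff
                v[i + 1] = (gamma / (beta * dt)) * (x[i + 1] - x[i]) \
                           + (1.0 - gamma / beta) * v[i] \
                           + dt * (1.0 - gamma / (2.0 * beta)) * a[i]
                a[i + 1] = (x[i + 1] - x[i]) / (beta * dt**2) - v[i] / (beta * dt) \
                           - (1.0 / (2.0 * beta) - 1.0) * a[i]
            return x, v, a

        # toy damped oscillator driven by a step load (illustrative values)
        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        x, v, a = newmark_avg_accel(m=1.0, c=0.1, k=4.0, f=np.ones_like(t),
                                    x0=0.0, v0=0.0, dt=dt)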

  3. Nebulizer calibration using lithium chloride: an accurate, reproducible and user-friendly method.

    Science.gov (United States)

    Ward, R J; Reid, D W; Leonard, R F; Johns, D P; Walters, E H

    1998-04-01

    Conventional gravimetric (weight loss) calibration of jet nebulizers overestimates their aerosol output by up to 80% due to unaccounted evaporative loss. We examined two methods of measuring true aerosol output from jet nebulizers. A new adaptation of a widely available clinical assay for lithium (determined by flame photometry, LiCl method) was compared to an existing electrochemical method based on fluoride detection (NaF method). The agreement between the two methods and the repeatability of each method were examined. Ten Mefar jet nebulizers were studied using a Mefar MK3 inhalation dosimeter. There was no significant difference between the two methods (p=0.76), with mean aerosol output of the 10 nebulizers being 7.40 mg·s⁻¹ (SD 1.06; range 5.86-9.36 mg·s⁻¹) for the NaF method and 7.27 mg·s⁻¹ (SD 0.82; range 5.52-8.26 mg·s⁻¹) for the LiCl method. The LiCl method had a coefficient of repeatability of 13 mg·s⁻¹ compared with 3.7 mg·s⁻¹ for the NaF method. The LiCl method accurately measured true aerosol output and was considerably easier to use. It was also more repeatable, and hence more precise, than the NaF method. Because the LiCl method uses an assay that is routinely available from hospital biochemistry laboratories, it is easy to use and, thus, can readily be adopted by busy respiratory function departments.

  4. ACCELERATED METHODS FOR ESTIMATING THE DURABILITY OF PLAIN BEARINGS

    Directory of Open Access Journals (Sweden)

    Myron Czerniec

    2014-09-01

    Full Text Available The paper presents methods for determining the durability of slide bearings. The developed methods speed up the calculation process by a factor of up to 100,000 compared with the accurate solution obtained with the generalized cumulative model of wear. The paper determines the accuracy of the durability estimates as a function of the size of the blocks of constant conditions of contact interaction between a shaft with small out-of-roundness and a bush with a circular contour. The paper also gives an approximate dependence for determining accurate durability using either a more accurate or an additional method.

  5. Generalized weighted ratio method for accurate turbidity measurement over a wide range.

    Science.gov (United States)

    Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying

    2015-12-14

    Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.

  6. Improving the Weizsäcker-Williams approximation in electron-proton collisions

    CERN Document Server

    Frixione, Stefano; Nason, P; Ridolfi, G

    1993-01-01

    We critically examine the validity of the Weizsäcker-Williams approximation in electron-hadron collisions. We show that in its commonly used form it can lead to large errors, and we show how to improve it in order to get accurate results. In particular, we present an improved form that is valid beyond the leading logarithmic approximation in the case when a small-angle cut is applied to the scattered electron. Furthermore we include comparisons of the approximate expressions with the exact electroproduction calculation in the case of heavy-quark production.

  7. A new way of obtaining analytic approximations of Chandrasekhar's H function

    International Nuclear Information System (INIS)

    Vukanic, J.; Arsenovic, D.; Davidovic, D.

    2007-01-01

    Applying the mean value theorem for definite integrals in the non-linear integral equation for Chandrasekhar's H function describing conservative isotropic scattering, we have derived a new, simple analytic approximation for it, with a maximal relative error below 2.5%. With this new function as a starting-point, after a single iteration in the corresponding integral equation, we have obtained a new, highly accurate analytic approximation for the H function. As its maximal relative error is below 0.07%, it significantly surpasses the accuracy of other analytic approximations
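
    For readers who want a numerical reference point, the sketch below iterates the standard alternative form of Chandrasekhar's H-equation for conservative isotropic scattering, 1/H(mu) = (1/2) ∫_0^1 mu' H(mu')/(mu + mu') dmu', on a Gauss-Legendre grid. It is a plain (damped) fixed-point iteration, not the analytic approximation derived in the paper; the quadrature order and tolerances are illustrative.

        import numpy as np

        # Gauss-Legendre quadrature on [0, 1]
        x, w = np.polynomial.legendre.leggauss(64)
        mu = 0.5 * (x + 1.0)
        wq = 0.5 * w

        H = np.ones_like(mu)
        for _ in range(20000):
            # 1/H(mu_i) = 0.5 * sum_j wq_j * mu_j * H(mu_j) / (mu_i + mu_j)
            inv_H = 0.5 * ((wq * mu * H)[None, :] / (mu[:, None] + mu[None, :])).sum(axis=1)
            H_new = 0.5 * (H + 1.0 / inv_H)        # damped update for robustness
            if np.max(np.abs(H_new - H)) < 1e-12:
                H = H_new
                break
            H = H_new

        # zeroth moment should approach 2 in the conservative case
        print("integral of H over [0,1]:", float((wq * H).sum()))
        print("H near mu = 1           :", float(H[-1]))   # tabulated H(1) is about 2.9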

  8. Approximation for the adjoint neutron spectrum

    International Nuclear Information System (INIS)

    Suster, Luis Carlos; Martinez, Aquilino Senra; Silva, Fernando Carvalho da

    2002-01-01

    The proposal of this work is the determination of an analytical approximation capable of reproducing the adjoint neutron flux in the energy range of the narrow resonances (NR). In a previous work we developed a method for calculating the adjoint spectrum from the adjoint neutron balance equations, which were obtained by the collision probabilities method; that method involved a considerable amount of numerical calculation. In the analytical method, some approximations were made, such as multiplying the escape probability in the fuel by the adjoint flux in the moderator; these approximations, restricted to the case of the narrow resonances, were then substituted into the adjoint neutron balance equation for the fuel, resulting in an analytical approximation for the adjoint flux. The results obtained in this work were compared to those generated with the reference method and demonstrated good and precise results for the adjoint neutron flux in the narrow resonances. (author)

  9. A simple approximation for the current-voltage characteristics of high-power, relativistic diodes

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Carl, E-mail: cekdahl@lanl.gov [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

    2016-06-15

    A simple approximation for the current-voltage characteristics of a relativistic electron diode is presented. The approximation is accurate from non-relativistic through relativistic electron energies. Although it is empirically developed, it has many of the fundamental properties of the exact diode solutions. The approximation is simple enough to be remembered and worked on almost any pocket calculator, so it has proven to be quite useful on the laboratory floor.

  10. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    Science.gov (United States)

    Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.

    2017-07-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.

  11. APPROXIMATIONS TO PERFORMANCE MEASURES IN QUEUING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kambo, N. S.

    2012-11-01

    Full Text Available Approximations to various performance measures in queuing systems have received considerable attention because these measures have wide applicability. In this paper we propose two methods to approximate the queuing characteristics of a GI/M/1 system. The first method is non-parametric in nature, using only the first three moments of the arrival distribution. The second method treads the known path of approximating the arrival distribution by a mixture of two exponential distributions by matching the first three moments. Numerical examples and optimal analysis of performance measures of GI/M/1 queues are provided to illustrate the efficacy of the methods, and are compared with benchmark approximations.
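
    As a concrete reminder of the quantities being approximated, the sketch below computes the classical GI/M/1 characteristics for Erlang-2 arrivals by solving the fixed-point equation sigma = A*(mu(1 - sigma)), where A* is the Laplace-Stieltjes transform of the interarrival distribution. The arrival distribution and rates are illustrative choices; the paper's two approximation methods are not implemented here.

        import numpy as np

        lam, mu = 0.8, 1.0                                       # arrival and service rates (rho = 0.8)
        A_lst = lambda s: (2.0 * lam / (2.0 * lam + s)) ** 2     # LST of Erlang-2 interarrival times

        # fixed point sigma = A*(mu (1 - sigma)), unique in (0, 1) when rho < 1
        sigma = 0.0
        for _ in range(10_000):
            new = A_lst(mu * (1.0 - sigma))
            if abs(new - sigma) < 1e-14:
                sigma = new
                break
            sigma = new

        print("sigma = P(wait > 0)  :", sigma)
        print("mean waiting time Wq :", sigma / (mu * (1.0 - sigma)))
        print("mean queue length Lq :", lam * sigma / (mu * (1.0 - sigma)))   # Little's law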

  12. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are based

  13. Technical notes. Rational approximations for cross-section space-shielding in doubly heterogeneous systems

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.

    1976-01-01

    A simple yet accurate method of space-shielding cross sections in a doubly heterogeneous high-temperature gas-cooled reactor (HTGR) system using collision probabilities and rational approximations is presented. Unlike other more elaborate methods, this method does not require point-wise cross sections that are not explicitly generated in most popular cross-section codes. Consequently, this method makes double heterogeneity space-shielding possible for cross-section codes that do not proceed via point-wise cross sections and that usually allow only for single (fuel-rod) heterogeneity cross-section space-shielding. Results of calculations based on this method compare well with results of calculations based on more elaborate methods using point-wise cross sections. Moreover, the systematic trend of the difference between the results from this method and those from the more elaborate methods used for comparison supports the already existent opinion that the latter methods tend to overestimate the space-shielding cross-section correction in doubly heterogeneous HTGR systems

  14. An accurate method of 131I dosimetry in the rat thyroid

    International Nuclear Information System (INIS)

    Lee, W.; Shleien, B.; Telles, N.C.; Chiacchierini, R.P.

    1979-01-01

    An accurate method of thyroid 131 I dosimetry was developed by employing the dose formulation recommended by the Medical Internal Radiation Dose (MIRD) Committee. Six-week-old female Long-Evans rats were injected intraperitoneally with 0.5, 1.9, and 5.4 μCi of Na 131 I. The accumulated 131 I activities in the thyroid were precisely determined by integrating the 131 I activities per gram of thyroid as functions of post-injection time. When the mean thyroid doses derived from this method are compared to those derived from the conventional method, the conventional method overestimated the doses by 60 to 70%. Similarly, the conventional method yielded effective half-lives of 2.5 to 2.8 days; these estimates were found to be high by factors of 1.4 to 2.0. This finding implies that the biological elimination of iodide from the rat thyroid is much more rapid (up to 2.5 times) than once believed. Results from this study showed that the basic assumption in the conventional method of thyroid 131 I dosimetry in the rat, i.e., that the thyroid iodide retention function is a single exponential, is invalid. Results from this study also demonstrated that variations in body weight among 6 to 7-week-old animals and diurnal variation have no significant influence on the mean thyroid doses for a given injected activity of 131 I. However, as expected, variation in the iodide content of the animal diets significantly altered the thyroid doses for a given injected activity of 131 I

  15. Simple and Accurate Analytical Solutions of the Electrostatically Actuated Curled Beam Problem

    KAUST Repository

    Younis, Mohammad I.

    2014-08-17

    We present analytical solutions of the electrostatically actuated initially deformed cantilever beam problem. We use a continuous Euler-Bernoulli beam model combined with a single-mode Galerkin approximation. We derive simple analytical expressions for two commonly observed deformed beam configurations: the curled and tilted configurations. The derived analytical formulas are validated by comparing their results to experimental data in the literature and numerical results of a multi-mode reduced order model. The derived expressions do not involve any complicated integrals or complex terms and can be conveniently used by designers for quick, yet accurate, estimations. The formulas are found to yield accurate results for most commonly encountered microbeams with initial tip deflections of a few microns. For largely deformed beams, we found that these formulas yield less accurate results due to the limitations of the single-mode approximations they are based on. In such cases, multi-mode reduced order models need to be utilized.

  16. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    Science.gov (United States)

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  17. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    Science.gov (United States)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of the derivative is utilized to convert the FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.

  18. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  19. A fast approximation method for reliability analysis of cold-standby systems

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Amari, Suprasad V.

    2012-01-01

    Analyzing reliability of large cold-standby systems has been a complicated and time-consuming task, especially for systems with components having non-exponential time-to-failure distributions. In this paper, an approximation model, which is based on the central limit theorem, is presented for the reliability analysis of binary cold-standby systems. The proposed model can estimate the reliability of large cold-standby systems with binary-state components having arbitrary time-to-failure distributions in an efficient and easy way. The accuracy and efficiency of the proposed method are illustrated using several different types of distributions for both 1-out-of-n and k-out-of-n cold-standby systems.
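
    A hedged sketch of the central-limit-theorem idea for a 1-out-of-n cold-standby system with perfect switching: the system lifetime is the sum of the component lifetimes, so its reliability is approximated by a normal tail and checked here against the exact gamma result for i.i.d. gamma-distributed components. The distribution and parameters are illustrative; the paper's general model is not reproduced.

        import numpy as np
        from scipy import stats

        n, shape, scale = 20, 2.0, 100.0          # 20 cold-standby units, Gamma(2, 100) lifetimes
        t = np.linspace(1000.0, 6000.0, 6)        # mission times

        # CLT approximation: sum of lifetimes ~ Normal(n*k*theta, n*k*theta^2)
        mean = n * shape * scale
        var = n * shape * scale ** 2
        R_clt = stats.norm.sf(t, loc=mean, scale=np.sqrt(var))

        # exact for this special case: a sum of i.i.d. gammas with common scale is gamma
        R_exact = stats.gamma.sf(t, a=n * shape, scale=scale)

        for ti, rc, re in zip(t, R_clt, R_exact):
            print(f"t = {ti:6.0f}   CLT = {rc:.4f}   exact = {re:.4f}")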

  20. SU-F-BRF-09: A Non-Rigid Point Matching Method for Accurate Bladder Dose Summation in Cervical Cancer HDR Brachytherapy

    International Nuclear Information System (INIS)

    Chen, H; Zhen, X; Zhou, L; Zhong, Z; Pompos, A; Yan, H; Jiang, S; Gu, X

    2014-01-01

    Purpose: To propose and validate a deformable point matching scheme for surface deformation to facilitate accurate bladder dose summation for fractionated HDR cervical cancer treatment. Method: A deformable point matching scheme based on the thin plate spline robust point matching (TPS-RPM) algorithm is proposed for bladder surface registration. The surface of the bladders segmented from fractional CT images is extracted and discretized with a triangular surface mesh. Deformation between the two bladder surfaces is obtained by matching the two meshes' vertices via the TPS-RPM algorithm, and the deformation vector fields (DVFs) characterizing this deformation are estimated by B-spline approximation. Numerically, the algorithm is quantitatively compared with the Demons algorithm using five clinical cervical cancer cases and several metrics: vertex-to-vertex distance (VVD), Hausdorff distance (HD), percent error (PE), and conformity index (CI). Experimentally, the algorithm is validated on a balloon phantom with 12 surface fiducial markers. The balloon is inflated with different amounts of water, and the displacement of the fiducial markers is benchmarked as ground truth to study the accuracy of the TPS-RPM calculated DVFs. Results: In the numerical evaluation, the mean VVD is 3.7 (±2.0) mm after Demons, and 1.3 (±0.9) mm after TPS-RPM. The mean HD is 14.4 mm after Demons, and 5.3 mm after TPS-RPM. The mean PE is 101.7% after Demons and decreases to 18.7% after TPS-RPM. The mean CI is 0.63 after Demons, and increases to 0.90 after TPS-RPM. In the phantom study, the mean Euclidean distance of the fiducials is 7.4±3.0 mm and 4.2±1.8 mm after Demons and TPS-RPM, respectively. Conclusions: The bladder wall deformation is more accurate using the feature-based TPS-RPM algorithm than the intensity-based Demons algorithm, indicating that TPS-RPM has the potential for accurate bladder dose deformation and dose summation for multi-fractional cervical HDR brachytherapy. This work is supported

  1. The generalized successive approximation and Padé Approximants method for solving an elasticity problem of based on the elastic ground with variable coefficients

    Directory of Open Access Journals (Sweden)

    Mustafa Bayram

    2017-01-01

    Full Text Available In this study, we have applied a generalized successive numerical technique to solve an elasticity problem based on an elastic ground with variable coefficients. In the first stage, we have calculated the generalized successive approximation of the given BVP, and in the second stage we have transformed it into a Padé series. At the end of the study, a test problem is given to clarify the method.
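
    The second stage described above, turning a truncated series into a Padé approximant, can be illustrated with SciPy's pade helper; the series (for exp(x)) and the orders are illustrative stand-ins and unrelated to the elasticity problem itself.

        import math
        import numpy as np
        from scipy.interpolate import pade

        # truncated Taylor series of exp(x) around 0 (illustrative stand-in for the
        # series produced by the successive-approximation stage)
        coeffs = [1.0 / math.factorial(i) for i in range(6)]
        p, q = pade(coeffs, 2)        # [3/2] Padé approximant: numerator p, denominator q

        x = 2.0
        taylor = sum(c * x ** i for i, c in enumerate(coeffs))
        print("exp(2)     :", np.exp(x))
        print("Taylor(5)  :", taylor)
        print("Pade [3/2] :", p(x) / q(x))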

  2. Simple and accurate solution for convective-radiative fin with temperature dependent thermal conductivity using double optimal linearization

    International Nuclear Information System (INIS)

    Bouaziz, M.N.; Aziz, Abdul

    2010-01-01

    A novel concept of double optimal linearization is introduced and used to obtain a simple and accurate solution for the temperature distribution in a straight rectangular convective-radiative fin with temperature dependent thermal conductivity. The solution is built from the classical solution for a pure convection fin of constant thermal conductivity which appears in terms of hyperbolic functions. When compared with the direct numerical solution, the double optimally linearized solution is found to be accurate within 4% for a range of radiation-conduction and thermal conductivity parameters that are likely to be encountered in practice. The present solution is simple and offers superior accuracy compared with the fairly complex approximate solutions based on the homotopy perturbation method, variational iteration method, and the double series regular perturbation method. The fin efficiency expression resembles the classical result for the constant thermal conductivity convecting fin. The present results are easily usable by the practicing engineers in their thermal design and analysis work involving fins.

  3. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.

  4. Multilocus lod scores in large pedigrees: combination of exact and approximate calculations.

    Science.gov (United States)

    Tong, Liping; Thompson, Elizabeth

    2008-01-01

    To detect the positions of disease loci, lod scores are calculated at multiple chromosomal positions given trait and marker data on members of pedigrees. Exact lod score calculations are often impossible when the size of the pedigree and the number of markers are both large. In this case, a Markov Chain Monte Carlo (MCMC) approach provides an approximation. However, to provide accurate results, mixing performance is always a key issue in these MCMC methods. In this paper, we propose two methods to improve MCMC sampling and hence obtain more accurate lod score estimates in shorter computation time. The first improvement generalizes the block-Gibbs meiosis (M) sampler to multiple meiosis (MM) sampler in which multiple meioses are updated jointly, across all loci. The second one divides the computations on a large pedigree into several parts by conditioning on the haplotypes of some 'key' individuals. We perform exact calculations for the descendant parts where more data are often available, and combine this information with sampling of the hidden variables in the ancestral parts. Our approaches are expected to be most useful for data on a large pedigree with a lot of missing data. (c) 2007 S. Karger AG, Basel

  5. An accurate and efficient method for large-scale SSR genotyping and applications.

    Science.gov (United States)

    Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai

    2017-06-02

    Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as an effective genetic marker with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as genetic marker, we developed a novel method for SSR genotyping, named as AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using the current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100 and 94% with a single base resolution for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    Science.gov (United States)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  7. Ordered cones and approximation

    CERN Document Server

    Keimel, Klaus

    1992-01-01

    This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those that want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.

  8. A simple approximation of moments of the quasi-equilibrium distribution of an extended stochastic theta-logistic model with non-integer powers.

    Science.gov (United States)

    Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi

    2016-01-01

    The stochastic versions of the logistic and extended logistic growth models are applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To understand the randomness in the population dynamics of the underlying processes completely, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) made a pioneering attempt at estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtained a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using the cumulant truncation method. The method was extended to the stochastic power-law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods, e.g. saddle point approximation and the derivative matching approach, can be applied if the powers involved in the extended logistic setup are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for the mean, variance and skewness of the quasi-equilibrium distribution under a more general family of growth curves, which is applicable for both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which the theta-logistic is a special case. The approximations accurately estimate the first three moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from the global population dynamics database. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Short overview of PSA quantification methods, pitfalls on the road from approximate to exact results

    International Nuclear Information System (INIS)

    Banov, Reni; Simic, Zdenko; Sterc, Davor

    2014-01-01

    Over time, Probabilistic Safety Assessment (PSA) models have become an invaluable companion in the identification and understanding of key nuclear power plant (NPP) vulnerabilities. PSA is an effective tool for this purpose, as it assists plant management to target resources where the largest benefit for plant safety can be obtained. PSA has quickly become an established technique to numerically quantify risk measures in nuclear power plants. As the complexity of PSA models increases, the computational approaches become more or less feasible. The various computational approaches can basically be classified in two major groups: approximate and exact (BDD based) methods. In recent times, modern commercially available PSA tools have started to provide both methods for PSA model quantification. Even though both methods are available in proven PSA tools, they must still be used carefully, since there are many pitfalls which can lead to wrong conclusions and prevent efficient usage of the PSA tool. For example, typical pitfalls involve using a higher-precision approximation method and getting a less precise result, or mixing minimal cut sets and prime implicants in the exact computation method. The exact methods are sensitive to the selected computational paths, in which case a simple human-assisted rearrangement may help and even switch a computation from non-feasible to feasible. Further improvements to the exact method are possible and desirable, which opens space for new research. In this paper we show how these pitfalls may be detected and how careful actions must be taken, especially when working with large PSA models. (authors)
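
    To make the approximate-versus-exact distinction concrete, the sketch below quantifies the top-event probability of a tiny hypothetical fault tree from its minimal cut sets three ways: the rare-event approximation, the min-cut upper bound, and an exact inclusion-exclusion evaluation (the value a BDD engine would return for independent basic events). The cut sets and probabilities are made up for illustration.

        from itertools import combinations
        import math

        p = {"A": 0.02, "B": 0.05, "C": 0.03, "D": 0.001}       # basic event probabilities
        cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]              # hypothetical minimal cut sets

        def prob(events):
            # probability that all (independent) basic events in 'events' occur
            return math.prod(p[e] for e in events)

        rare_event = sum(prob(cs) for cs in cut_sets)
        mcub = 1.0 - math.prod(1.0 - prob(cs) for cs in cut_sets)

        # exact union probability by inclusion-exclusion over the cut sets
        exact = 0.0
        for k in range(1, len(cut_sets) + 1):
            for combo in combinations(cut_sets, k):
                exact += (-1.0) ** (k + 1) * prob(set().union(*combo))

        print(f"rare-event approx. : {rare_event:.6e}")
        print(f"min-cut upper bound: {mcub:.6e}")
        print(f"exact (incl.-excl.): {exact:.6e}")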

  10. Approximate Analytical Solutions for Mathematical Model of Tumour Invasion and Metastasis Using Modified Adomian Decomposition and Homotopy Perturbation Methods

    Directory of Open Access Journals (Sweden)

    Norhasimah Mahiddin

    2014-01-01

    Full Text Available The modified decomposition method (MDM) and the homotopy perturbation method (HPM) are applied to obtain an approximate solution of the nonlinear model of tumour invasion and metastasis. The study highlights the significant features of the employed methods and their ability to handle nonlinear partial differential equations. The methods do not need linearization or weak nonlinearity assumptions. Although the main difference between the MDM and the Adomian decomposition method (ADM) is a slight variation in the definition of the initial condition, the modification eliminates massive computational work. The approximate analytical solution obtained by the MDM logically contains the solution obtained by the HPM. It shows that the HPM does not involve the Adomian polynomials when dealing with nonlinear problems.
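
    Since the Adomian polynomials are the ingredient that distinguishes the decomposition methods from the HPM here, a small SymPy sketch of their generic definition A_n = (1/n!) d^n/dλ^n N(Σ λ^i u_i)|_{λ=0} may help; the quadratic nonlinearity is only an example, not the tumour-model nonlinearity.

        import sympy as sp

        def adomian_polynomials(N, order):
            """First `order`+1 Adomian polynomials of the nonlinearity N(u)."""
            lam = sp.symbols("lambda")
            u = sp.symbols(f"u0:{order + 1}")             # u0, u1, ..., u_order
            series = sum(lam ** i * u[i] for i in range(order + 1))
            expr = N(series)
            return [sp.expand(sp.diff(expr, lam, n).subs(lam, 0) / sp.factorial(n))
                    for n in range(order + 1)]

        # example nonlinearity N(u) = u**2
        for n, A in enumerate(adomian_polynomials(lambda u: u ** 2, 3)):
            print(f"A_{n} =", A)
        # expected: A_0 = u0**2, A_1 = 2*u0*u1, A_2 = u1**2 + 2*u0*u2, A_3 = 2*u0*u3 + 2*u1*u2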

  11. Reduced-rank approximations to the far-field transform in the gridded fast multipole method

    Science.gov (United States)

    Hesford, Andrew J.; Waag, Robert C.

    2011-05-01

    The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
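
    A hedged toy version of the compress-then-recompress idea is sketched below: a cross approximation of a smooth kernel matrix is built (with full pivoting on an explicit residual, for simplicity, whereas practical ACA uses partial pivoting and never forms the full matrix), and the factors are then recompressed with a truncated SVD via QR factorizations. The kernel, sizes and tolerances are illustrative.

        import numpy as np

        def aca_full_pivot(A, tol=1e-8):
            """Toy cross approximation A ~ U @ Vt (full pivoting on an explicit residual)."""
            R = A.astype(float).copy()
            ref = np.abs(A).max()
            us, vs = [], []
            while True:
                i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
                if abs(R[i, j]) <= tol * ref:
                    break
                u = R[:, j].copy()
                v = R[i, :] / R[i, j]
                us.append(u); vs.append(v)
                R -= np.outer(u, v)
            return np.column_stack(us), np.vstack(vs)

        def recompress(U, Vt, tol=1e-8):
            """Recompress a low-rank factorization with a truncated SVD of the small core."""
            Qu, Ru = np.linalg.qr(U)
            Qv, Rv = np.linalg.qr(Vt.T)
            uw, sw, vwh = np.linalg.svd(Ru @ Rv.T)
            r = int(np.sum(sw > tol * sw[0]))
            return Qu @ (uw[:, :r] * sw[:r]), vwh[:r] @ Qv.T

        # smooth kernel matrix (illustrative stand-in for a far-field transform block)
        x = np.linspace(0.0, 1.0, 300)
        y = np.linspace(2.0, 3.0, 250)
        A = 1.0 / (1.0 + (x[:, None] - y[None, :]) ** 2)

        U, Vt = aca_full_pivot(A, tol=1e-10)
        U2, V2t = recompress(U, Vt, tol=1e-8)
        print("ACA rank:", U.shape[1], " recompressed rank:", U2.shape[1])
        print("relative error:", np.linalg.norm(A - U2 @ V2t) / np.linalg.norm(A))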

  12. Fast multipole acceleration of the MEG/EEG boundary element method

    International Nuclear Information System (INIS)

    Kybic, Jan; Clerc, Maureen; Faugeras, Olivier; Keriven, Renaud; Papadopoulo, Theo

    2005-01-01

    The accurate solution of the forward electrostatic problem is an essential first step before solving the inverse problem of magneto- and electroencephalography (MEG/EEG). The symmetric Galerkin boundary element method is accurate but cannot be used for very large problems because of its computational complexity and memory requirements. We describe a fast multipole-based acceleration for the symmetric boundary element method (BEM). It creates a hierarchical structure of the elements and approximates far interactions using spherical harmonics expansions. The accelerated method is shown to be as accurate as the direct method, yet for large problems it is both faster and more economical in terms of memory consumption

  13. Scaling laws and accurate small-amplitude stationary solution for the motion of a planar vortex filament in the Cartesian form of the local induction approximation.

    Science.gov (United States)

    Van Gorder, Robert A

    2013-04-01

    We provide a formulation of the local induction approximation (LIA) for the motion of a vortex filament in the Cartesian reference frame (the extrinsic coordinate system) which allows for scaling of the reference coordinate. For general monotone scalings of the reference coordinate, we derive an equation for the planar solution to the derivative nonlinear Schrödinger equation governing the LIA. We proceed to solve this equation perturbatively in small amplitude through an application of multiple-scales analysis, which allows for accurate computation of the period of the planar vortex filament. The perturbation result is shown to agree strongly with numerical simulations, and we also relate this solution back to the solution obtained in the arclength reference frame (the intrinsic coordinate system). Finally, we discuss nonmonotone coordinate scalings and their application for finding self-intersections of vortex filaments. These self-intersecting vortex filaments are likely unstable and collapse into other structures or dissipate completely.

  14. [An accurate identification method for Chinese materia medica--systematic identification of Chinese materia medica].

    Science.gov (United States)

    Wang, Xue-Yong; Liao, Cai-Li; Liu, Si-Qi; Liu, Chun-Sheng; Shao, Ai-Juan; Huang, Lu-Qi

    2013-05-01

    This paper puts forward a more accurate method for the identification of Chinese materia medica (CMM), the systematic identification of Chinese materia medica (SICMM), which may solve difficulties in CMM identification that the ordinary traditional ways cannot. Concepts, mechanisms and methods of SICMM are systematically introduced, and its feasibility is demonstrated by experiments. The establishment of SICMM will solve problems in the identification of Chinese materia medica not only in phenotypic characters such as morphology, microstructure and chemical constituents, but also in the further discovery of the evolution and classification of species, subspecies and populations of medicinal plants. The establishment of SICMM will improve the development of CMM identification and create a more extensive study space.

  15. Distorted wave method in reactions with composite particles

    International Nuclear Information System (INIS)

    Zelenskaya, N.S.; Teplov, I.B.

    1980-01-01

    The work deals with the distorted wave method with a finite radius of interaction (DWBAFR) as applied to the quantitative analysis of direct nuclear reactions with composite particles (including heavy ions), considering reaction mechanisms other than the cluster stripping mechanism, in particular the exchange processes. The exact equations of the distorted wave method in the three-body problem and the general formula for calculating differential cross-sections of arbitrary binary reactions with DWBAFR are presented. Exact and approximate methods allowing for a finite interaction radius are discussed. Two main versions of the exact account of recoil effects are analysed: separation of variables in the wave functions of relative motion of the particles and in the interaction potentials, and separation of variables in the distorted waves. A characterization is given of the known computational programs that take recoil effects into account approximately and exactly for direct and exchange processes [ru

  16. Combined Forecasting Method of Landslide Deformation Based on MEEMD, Approximate Entropy, and WLS-SVM

    Directory of Open Access Journals (Sweden)

    Shaofeng Xie

    2017-01-01

    Full Text Available Given the chaotic characteristics of landslide time series, a new method based on modified ensemble empirical mode decomposition (MEEMD), approximate entropy, and the weighted least squares support vector machine (WLS-SVM) was proposed. The method starts from the time-frequency analysis of the chaotic sequence and improves the model performance as follows: first, a deformation time series is decomposed into a series of subsequences with significantly different complexity using MEEMD. Then the approximate entropy method is used to generate a new subsequence for the combination of subsequences with similar complexity, which can effectively concentrate the component feature information and reduce the computational scale. Finally, the WLS-SVM prediction model is established for each new subsequence. At the same time, phase space reconstruction theory and the grid search method are used to select the input dimension and the optimal parameters of the model, and the superposition of the predicted values gives the final forecasting result. Taking the landslide deformation data of Danba as an example, experiments were carried out and compared with a wavelet neural network, a support vector machine, a least squares support vector machine and various combination schemes. The experimental results show that the algorithm has high prediction accuracy. It can ensure a better prediction effect even in periods of rapid fluctuation of the landslide deformation, and it can also better control the residual value and effectively reduce the error interval.
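
    Approximate entropy itself is simple enough to state in a few lines; the hedged sketch below follows Pincus' standard definition (self-matches included) with the usual choices m = 2 and r = 0.2 times the standard deviation, on a synthetic series. It is not the MEEMD/WLS-SVM pipeline of the paper, only the complexity measure used to group subsequences.

        import numpy as np

        def approximate_entropy(x, m=2, r=None):
            """Approximate entropy ApEn(m, r) of a 1-D series (Pincus' definition)."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            if r is None:
                r = 0.2 * x.std()

            def phi(mm):
                emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])     # templates
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                c = (dist <= r).mean(axis=1)                                 # self-match included
                return np.mean(np.log(c))

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        t = np.arange(600)
        print("regular sine :", approximate_entropy(np.sin(0.2 * t)))
        print("sine + noise :", approximate_entropy(np.sin(0.2 * t) + 0.5 * rng.standard_normal(600)))
        print("white noise  :", approximate_entropy(rng.standard_normal(600)))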

  17. Solution of two-dimensional equations of neutron transport in 4P0-approximation of spherical harmonics method

    International Nuclear Information System (INIS)

    Polivanskij, V.P.

    1989-01-01

    The method to solve two-dimensional equations of neutron transport using the 4P0 approximation is presented. Previously such an approach was efficiently used for the solution of one-dimensional problems. Now an attempt is made to apply the approach to the solution of two-dimensional problems. The algorithm of the solution is given, as well as results of test neutron-physics calculations. A considerable improvement compared with the diffusion approximation is shown. 11 refs

  18. The comparison of DYNA3D to approximate solutions for a partially-full waste storage tank subjected to seismic loading

    International Nuclear Information System (INIS)

    Zaslawsky, M.; Kennedy, W.N.

    1992-01-01

    Mathematical solutions to the problem consisting of a partially-full waste tank embedded in soil and subjected to seismic loading are classically difficult in that one has to address soil-structure interaction, fluid-structure interaction, non-linear behavior of material, and dynamic effects. Separating the problem and applying numerous assumptions will yield approximate solutions. This paper explores methods for generating these solutions accurately

  19. Application of the N-quantum approximation method to bound state problems

    International Nuclear Information System (INIS)

    Raychaudhuri, A.

    1977-01-01

    The N-quantum approximation (NQA) method is examined in the light of its application to bound state problems. Bound state wave functions are obtained as expansion coefficients in a truncated Haag expansion. From the equations of motion for the Heisenberg field and the NQA expansion, an equation satisfied by the wave function is derived. Two different bound state systems are considered. In one case, the bound state problem of two identical scalars interacting by scalar exchange is analyzed using the NQA. An integral equation satisfied by the wave function is derived. In the nonrelativistic limit, the equation is shown to reduce to the Schroedinger equation. The equation is solved numerically, and the results are compared with those obtained for this system by other methods. The NQA method is also applied to the bound state of two spin 1/2 particles with electromagnetic interaction. The integral equation for the wave function is shown to agree with the corresponding Bethe-Salpeter equation in the nonrelativistic limit. Using the Dirac (4 x 4) matrices, the wave function is expanded in terms of structure functions, and the equation for the wave function is reduced to two disjoint sets of coupled equations for the structure functions

  20. On the Application of Iterative Methods of Nondifferentiable Optimization to Some Problems of Approximation Theory

    Directory of Open Access Journals (Sweden)

    Stefan M. Stefanov

    2014-01-01

    Full Text Available We consider the data fitting problem, that is, the problem of approximating a function of several variables, given by tabulated data, and the corresponding problem for inconsistent (overdetermined) systems of linear algebraic equations. Such problems, connected with the measurement of physical quantities, arise, for example, in physics, engineering, and so forth. A traditional approach for solving these two problems is the discrete least squares data fitting method, which is based on the discrete l2-norm. In this paper, an alternative approach is proposed: with each of these problems, we associate a nondifferentiable (nonsmooth) unconstrained minimization problem with an objective function based on the discrete l1- and/or l∞-norm, respectively; that is, these two norms are used as proximity criteria. In other words, the problems under consideration are solved by minimizing the residual using these two norms. The respective subgradients are calculated, and a subgradient method is used for solving these two problems. The emphasis is on the implementation of the proposed approach. Some computational results, obtained by an appropriate iterative method, are given at the end of the paper. These results are compared with the results obtained by the iterative gradient method for the corresponding “differentiable” discrete least squares problems, that is, approximation problems based on the discrete l2-norm.
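
    A minimal sketch of the subgradient iteration for the l1 proximity criterion is given below, using A^T sign(Ax - b) as a subgradient of ||Ax - b||_1 and a diminishing step size; the data, step rule and iteration count are illustrative assumptions rather than the paper's implementation, and the l2 least squares fit is included only for comparison.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.normal(size=(200, 5))
        x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
        b = A @ x_true + 0.1 * rng.standard_t(df=2, size=200)     # heavy-tailed residuals

        x = np.zeros(5)
        best_x, best_f = x.copy(), np.inf
        for k in range(1, 5001):
            res = A @ x - b
            f = np.abs(res).sum()
            if f < best_f:
                best_f, best_x = f, x.copy()
            g = A.T @ np.sign(res)                                # a subgradient of ||Ax - b||_1
            x = x - (0.5 / np.sqrt(k)) * g / (np.linalg.norm(g) + 1e-12)   # diminishing step

        x_ls = np.linalg.lstsq(A, b, rcond=None)[0]               # discrete l2 fit, for comparison
        print("l1 (subgradient) estimate  :", np.round(best_x, 3))
        print("l2 (least squares) estimate:", np.round(x_ls, 3))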

  1. Solution of the point kinetics equations in the presence of Newtonian temperature feedback by Pade approximations via the analytical inversion method

    International Nuclear Information System (INIS)

    Aboanber, A E; Nahla, A A

    2002-01-01

    A method based on the Padé approximations is applied to the solution of the point kinetics equations with a time varying reactivity. The technique consists of treating explicitly the roots of the inhour formula. A significant improvement has been observed by treating explicitly the most dominant roots of the inhour equation, which usually would make the Padé approximation inaccurate. Also the analytical inversion method, which permits a fast inversion of polynomials of the point kinetics matrix, is applied to the Padé approximations. Results are presented for several cases of Padé approximations using various options of the method with different types of reactivity. The formalism is applicable equally well to non-linear problems, where the reactivity depends on the neutron density through temperature feedback. It was evident that the presented method is particularly good for cases in which the reactivity can be represented by a series of steps and performed quite well for more general cases

  2. Symbolic computation of analytic approximate solutions for nonlinear differential equations with initial conditions

    Science.gov (United States)

    Lin, Yezhi; Liu, Yinping; Li, Zhibin

    2012-01-01

    The Adomian decomposition method (ADM) is one of the most effective methods for constructing analytic approximate solutions of nonlinear differential equations. In this paper, based on the new definition of the Adomian polynomials, and the two-step Adomian decomposition method (TSADM) combined with the Padé technique, a new algorithm is proposed to construct accurate analytic approximations of nonlinear differential equations with initial conditions. Furthermore, a MAPLE package is developed, which is user-friendly and efficient. One only needs to input a system, initial conditions and several necessary parameters, then our package will automatically deliver analytic approximate solutions within a few seconds. Several different types of examples are given to illustrate the validity of the package. Our program provides a helpful and easy-to-use tool in science and engineering to deal with initial value problems. Program summary: Program title: NAPA. Catalogue identifier: AEJZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 4060. No. of bytes in distributed program, including test data, etc.: 113 498. Distribution format: tar.gz. Programming language: MAPLE R13. Computer: PC. Operating system: Windows XP/7. RAM: 2 Gbytes. Classification: 4.3. Nature of problem: Solve nonlinear differential equations with initial conditions. Solution method: Adomian decomposition method and Padé technique. Running time: Seconds at most in routine uses of the program. Special tasks may take up to some minutes.

  3. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Background: Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results: We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
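    The Gauss-Hermite idea can be conveyed on a scalar toy problem (a sketch only, not the paper's MAPK case study): with an uncertain parameter X ~ N(μ, σ²), the mean and variance of an output f(X), the building blocks of variance-based sensitivity indices, are approximated by an n-point Gauss-Hermite rule after the change of variables x = μ + √2 σ t. The output map f(x) = exp(x) is chosen only because its exact moments are known.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gh_mean_var(f, mu, sigma, n=10):
    """Mean and variance of f(X), X ~ N(mu, sigma^2), via n-point Gauss-Hermite.

    hermgauss provides nodes/weights for the weight exp(-t^2); the substitution
    x = mu + sqrt(2)*sigma*t turns the Gaussian expectation into that form.
    """
    t, w = hermgauss(n)
    x = mu + np.sqrt(2.0) * sigma * t
    w = w / np.sqrt(np.pi)
    m1 = np.sum(w * f(x))
    m2 = np.sum(w * f(x) ** 2)
    return m1, m2 - m1 ** 2

# Hypothetical output map: f(x) = exp(x), so the exact (lognormal) moments are known.
mu, sigma = 0.0, 0.3
mean_gh, var_gh = gh_mean_var(np.exp, mu, sigma)
mean_exact = np.exp(mu + sigma**2 / 2)
var_exact = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
print("mean:", mean_gh, "vs", mean_exact, " variance:", var_gh, "vs", var_exact)
```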

  4. Numerical approximations of nonlinear fractional differential difference equations by using modified He-Laplace method

    Directory of Open Access Journals (Sweden)

    J. Prakash

    2016-03-01

    Full Text Available In this paper, a numerical algorithm based on a modified He-Laplace method (MHLM) is proposed to solve space and time nonlinear fractional differential-difference equations (NFDDEs) arising in physical phenomena such as wave phenomena in fluids, coupled nonlinear optical waveguides and nanotechnology fields. The modified He-Laplace method is a combined form of the fractional homotopy perturbation method and the Laplace transform method. The nonlinear terms can be easily decomposed by the use of He’s polynomials. This algorithm has been tested against time-fractional differential-difference equations such as the modified Lotka-Volterra and discrete (modified) KdV equations. The proposed scheme yields the solution in the form of a rapidly convergent series. Three examples are employed to illustrate the precision and effectiveness of the proposed method. The results show that the MHLM is very accurate, efficient, and simple, and can be applied to other nonlinear FDDEs.

  5. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation

    KAUST Repository

    Wu, Zedong

    2018-04-05

    Numerical simulation of the acoustic wave equation in either isotropic or anisotropic media is crucial to seismic modeling, imaging and inversion. Actually, it represents the core computation cost of these highly advanced seismic processing methods. However, the conventional finite-difference method suffers from severe numerical dispersion errors and S-wave artifacts when solving the acoustic wave equation for anisotropic media. We propose a method to obtain the finite-difference coefficients by comparing the resulting numerical dispersion with the exact form. We find the optimal finite difference coefficients that share the dispersion characteristics of the exact equation with minimal dispersion error. The method is extended to solve the acoustic wave equation in transversely isotropic (TI) media without S-wave artifacts. Numerical examples show that the method is highly accurate and efficient.
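    The coefficient-fitting step can be illustrated in a simple isotropic 1D setting (a generic sketch, not the authors' TI scheme): coefficients of a central second-derivative stencil are chosen by least-squares matching of the numerical dispersion term against the exact one over a band of wavenumbers, and the resulting maximum dispersion error is compared with that of the conventional Taylor-series stencil. The stencil width and wavenumber band are arbitrary demonstration values.

```python
import numpy as np

def dispersion_fit_coeffs(M=2, khmax=2.5, nk=200):
    """Central FD coefficients for d^2/dx^2 chosen by least-squares matching of the
    numerical dispersion term c0 + 2*sum_m c_m*cos(m*kh) to the exact -(kh)^2 over
    kh in (0, khmax]. Taylor-series coefficients are recovered as khmax -> 0."""
    kh = np.linspace(1e-3, khmax, nk)
    G = np.column_stack([np.ones_like(kh)] +
                        [2.0 * np.cos(m * kh) for m in range(1, M + 1)])
    c, *_ = np.linalg.lstsq(G, -(kh ** 2), rcond=None)
    return c, kh

def dispersion_error(c, kh):
    """Difference between the numerical symbol and the exact -(kh)^2."""
    num = c[0] + 2.0 * sum(c[m] * np.cos(m * kh) for m in range(1, len(c)))
    return num + kh ** 2

c_opt, kh = dispersion_fit_coeffs()
c_taylor = np.array([-5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0])   # conventional 4th-order stencil
print("fitted coefficients:", c_opt)
print("max dispersion error, fitted :", np.max(np.abs(dispersion_error(c_opt, kh))))
print("max dispersion error, Taylor :", np.max(np.abs(dispersion_error(c_taylor, kh))))
```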

  6. Cold pasta phase in the extended Thomas-Fermi approximation

    Science.gov (United States)

    Avancini, S. S.; Bertolino, B. P.

    2015-10-01

    In this paper, we aim to obtain more accurate values for the transition density to the homogeneous phase in the nuclear pasta that occurs in the inner crust of neutron stars. To that end, we use the nonlinear Walecka model at zero temperature and an approach based on the extended Thomas-Fermi (ETF) approximation.

  7. Cold pasta phase in the extended Thomas–Fermi approximation

    International Nuclear Information System (INIS)

    Avancini, S.S.; Bertolino, B.P.

    2015-01-01

    In this paper, we aim to obtain more accurate values for the transition density to the homogeneous phase in the nuclear pasta that occurs in the inner crust of neutron stars. To that end, we use the nonlinear Walecka model at zero temperature and an approach based on the extended Thomas–Fermi (ETF) approximation. (author)

  8. An asymptotically consistent approximant for the equatorial bending angle of light due to Kerr black holes

    International Nuclear Information System (INIS)

    Barlow, Nathaniel S; Faber, Joshua A; Weinstein, Steven J

    2017-01-01

    An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21–48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations. (paper)

  9. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to achieve. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
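    The record above concerns the LSB-tree itself; purely as background, the snippet below sketches the random-projection (E2LSH-style) hashing idea that such structures build on: points are hashed by quantized random projections in several tables, and only colliding candidates are compared exactly. The bucket width, number of tables, and data are arbitrary demonstration choices, not the LSB-tree or its guarantees.

```python
import numpy as np
from collections import defaultdict

class SimpleE2LSH:
    """Toy locality-sensitive hashing for Euclidean NN: each of L tables hashes a
    point by k quantized random projections floor((a.x + b)/w)."""

    def __init__(self, dim, k=8, L=10, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((L, k, dim))
        self.b = rng.uniform(0.0, w, size=(L, k))
        self.w = w
        self.tables = [defaultdict(list) for _ in range(L)]
        self.data = None

    def _keys(self, x):
        return [tuple(np.floor((self.a[l] @ x + self.b[l]) / self.w).astype(int))
                for l in range(len(self.tables))]

    def fit(self, X):
        self.data = X
        for i, x in enumerate(X):
            for table, key in zip(self.tables, self._keys(x)):
                table[key].append(i)

    def query(self, q):
        candidates = {i for table, key in zip(self.tables, self._keys(q))
                      for i in table.get(key, [])}
        if not candidates:                      # fall back to brute force if nothing collides
            candidates = range(len(self.data))
        cand = list(candidates)
        d = np.linalg.norm(self.data[cand] - q, axis=1)
        return cand[int(np.argmin(d))]

rng = np.random.default_rng(1)
X = rng.standard_normal((5000, 32))
q = X[123] + 0.01 * rng.standard_normal(32)
lsh = SimpleE2LSH(dim=32)
lsh.fit(X)
print("approximate NN index:", lsh.query(q))    # very likely 123
```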

  10. An extension of the fenske-hall LCAO method for approximate calculations of inner-shell binding energies of molecules

    Science.gov (United States)

    Zwanziger, Ch.; Reinhold, J.

    1980-02-01

    The approximate LCAO MO method of Fenske and Hall has been extended to an all-electron method allowing the calculation of inner-shell binding energies of molecules and their chemical shifts. Preliminary results are given.

  11. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    Science.gov (United States)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.

  12. Local density approximation for exchange in excited-state density functional theory

    OpenAIRE

    Harbola, Manoj K.; Samal, Prasanjit

    2004-01-01

    Local density approximation for the exchange energy is made for treatment of excited-states in density-functional theory. It is shown that taking care of the state-dependence of the LDA exchange energy functional leads to accurate excitation energies.

  13. Evaluation of Gaussian approximations for data assimilation in reservoir models

    KAUST Repository

    Iglesias, Marco A.

    2013-07-14

    implementation of the MCMC method provides the gold standard against which the aforementioned Gaussian approximations are assessed. We present numerical synthetic experiments where we quantify the capability of each of the ad hoc Gaussian approximation in reproducing the mean and the variance of the posterior distribution (characterized via MCMC) associated to a data assimilation problem. Both single-phase and two-phase (oil-water) reservoir models are considered so that fundamental differences in the resulting forward operators are highlighted. The main objective of our controlled experiments was to exhibit the substantial discrepancies of the approximation properties of standard ad hoc Gaussian approximations. Numerical investigations of the type we present here will lead to the greater understanding of the cost-efficient, but ad hoc, Bayesian techniques used for data assimilation in petroleum reservoirs and hence ultimately to improved techniques with more accurate uncertainty quantification. © 2013 Springer Science+Business Media Dordrecht.

  14. Diophantine approximation and badly approximable sets

    DEFF Research Database (Denmark)

    Kristensen, S.; Thorn, R.; Velani, S.

    2006-01-01

    . The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...

  15. Accurate detection of carcinoma cells by use of a cell microarray chip.

    Directory of Open Access Journals (Sweden)

    Shohei Yamamura

    Full Text Available BACKGROUND: Accurate detection and analysis of circulating tumor cells plays an important role in the diagnosis and treatment of metastatic cancer. METHODS AND FINDINGS: A cell microarray chip was used to detect spiked carcinoma cells among leukocytes. The chip, with 20,944 microchambers (105 µm width and 50 µm depth), was made from polystyrene, and the formation of monolayers of leukocytes in the microchambers was observed. Cultured human T lymphoblastoid leukemia (CCRF-CEM) cells were used to examine the potential of the cell microarray chip for the detection of spiked carcinoma cells. A T lymphoblastoid leukemia suspension was dispersed on the chip surface, followed by 15 min standing to allow the leukocytes to settle down into the microchambers. Approximately 29 leukocytes were found in each microchamber when about 600,000 leukocytes in total were dispersed onto a cell microarray chip. Similarly, when leukocytes isolated from human whole blood were used, approximately 89 leukocytes entered each microchamber when about 1,800,000 leukocytes in total were placed onto the cell microarray chip. After washing the chip surface, PE-labeled anti-cytokeratin monoclonal antibody and APC-labeled anti-CD326 (EpCAM) monoclonal antibody solution were dispersed onto the chip surface and allowed to react for 15 min; and then a microarray scanner was employed to detect any fluorescence-positive cells within 20 min. In the experiments using spiked carcinoma cells (NCI-H1650, 0.01 to 0.0001%), accurate detection of carcinoma cells was achieved with PE-labeled anti-cytokeratin monoclonal antibody. Furthermore, verification of carcinoma cells in the microchambers was performed by double staining with the above monoclonal antibodies. CONCLUSION: The potential application of the cell microarray chip for the detection of CTCs was shown, thus demonstrating accurate detection by double staining for cytokeratin and EpCAM at the single carcinoma cell level.

  16. A variational nodal diffusion method of high accuracy; Varijaciona nodalna difuziona metoda visoke tachnosti

    Energy Technology Data Exchange (ETDEWEB)

    Tomasevic, Dj; Altiparmarkov, D [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    A variational nodal diffusion method with accurate treatment of the transverse leakage shape is developed and presented in this paper. Using Legendre expansion in the transverse coordinates, higher order quasi-one-dimensional nodal equations are formulated. The numerical solution has been carried out using analytical solutions in alternating directions, assuming a Legendre expansion of the RHS term. The method has been tested against the 2D and 3D IAEA benchmark problems, as well as the 2D CANDU benchmark problem. The results are highly accurate. The first order approximation yields the same order of accuracy as the standard nodal methods with quadratic leakage approximation, while the second order reaches the reference solution. (author)

  17. An approximate fractional Gaussian noise model with computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
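    The fitting step described above is simple to reproduce in outline. The sketch below is a simplified version: the AR(1) coefficients are fixed on a small grid and only the mixture weights are fitted by least squares to the fGn autocorrelation (the paper fits both), so it should be read as an illustration of the idea rather than the published approximation.

```python
import numpy as np

def fgn_acf(H, lags):
    """Autocorrelation of fractional Gaussian noise with Hurst index H."""
    k = np.asarray(lags, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def fit_ar1_mixture(H, phis, max_lag=200):
    """Weights w_j such that sum_j w_j * phi_j**k matches the fGn autocorrelation
    on lags 0..max_lag (least squares; the phi_j are fixed on a grid here purely
    for illustration)."""
    lags = np.arange(max_lag + 1)
    target = fgn_acf(H, lags)
    G = np.column_stack([phi ** lags for phi in phis])
    w, *_ = np.linalg.lstsq(G, target, rcond=None)
    return w, np.max(np.abs(G @ w - target))

H = 0.8
phis = np.array([0.30, 0.75, 0.95, 0.995])      # four AR(1) components, as in the abstract
w, max_err = fit_ar1_mixture(H, phis)
print("weights:", w, " max autocorrelation error:", max_err)
```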

  18. New simple method for fast and accurate measurement of volumes

    International Nuclear Information System (INIS)

    Frattolillo, Antonio

    2006-01-01

    A new simple method is presented, which allows us to measure in just a few minutes but with reasonable accuracy (less than 1%) the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. To this purpose, an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, thus contracting the variable volume until the pressure returns exactly to its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed actually does not require the test gas to be rigorously held at a constant temperature, thus resulting in a huge simplification as compared to complex arrangements commonly used in metrology (gas expansion method), which can provide extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process. The
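    In outline, the measurement rests on a simple isothermal gas balance; the relation below is a hedged reconstruction from the description above, assuming ideal-gas behaviour and negligible temperature drift between the initial and final states. Because the feedback loop restores the pressure to its initial value, the gas that initially occupied only the variable volume finally occupies the contracted variable volume plus the unknown volume, so the unknown volume equals the net contraction:

```latex
\[
  p\,V_{\mathrm{var}}^{\text{initial}}
    = p\left(V_{\mathrm{var}}^{\text{final}} + V_{\text{unknown}}\right)
  \quad\Longrightarrow\quad
  V_{\text{unknown}}
    = V_{\mathrm{var}}^{\text{initial}} - V_{\mathrm{var}}^{\text{final}}
    = \Delta V_{\mathrm{var}} .
\]
% The common factor nRT cancels because the pressure is servoed back to its
% initial value and the temperature is held (approximately) constant.
```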

  19. Validation of the New Interpretation of Gerasimov's Nasal Projection Method for Forensic Facial Approximation Using CT Data

    DEFF Research Database (Denmark)

    Maltais Lapointe, Genevieve; Lynnerup, Niels; Hoppa, Robert D

    2016-01-01

    The most common method to predict nasal projection for forensic facial approximation is Gerasimov's two-tangent method. Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) argued that the method has not been properly implemented and a revised interpretation was proposed. The aim of this study......, and Ullrich H, Stephan CN (J Forensic Sci, 2011; 56: 470) interpretation should be used instead....

  20. Modeling Rocket Flight in the Low-Friction Approximation

    Directory of Open Access Journals (Sweden)

    Logan White

    2014-09-01

    Full Text Available In a realistic model for rocket dynamics, in the presence of atmospheric drag and altitude-dependent gravity, the exact kinematic equation cannot be integrated in closed form; even when neglecting friction, the exact solution is a combination of elliptic functions of Jacobi type, which are not easy to use in a computational sense. This project provides a precise analysis of the various terms in the full equation (such as gravity, drag, and exhaust momentum), and the numerical ranges for which various approximations are accurate to within 1%. The analysis leads to optimal approximations expressed through elementary functions, which can be implemented for efficient flight prediction on simple computational devices, such as smartphone applications.

  1. A Method for Generating Approximate Similarity Solutions of Nonlinear Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Mazhar Iqbal

    2014-01-01

    Full Text Available Standard application of the similarity method to find solutions of PDEs mostly results in reduction to ODEs which are not easily integrable in terms of elementary or tabulated functions. Such situations usually demand solving the reduced ODEs numerically. However, there are no systematic procedures available to utilize these numerical solutions of the reduced ODE to obtain the solution of the original PDE. A practical and tractable approach is proposed to deal with such situations and is applied to obtain approximate similarity solutions to different cases of an initial-boundary value problem of unsteady gas flow through a semi-infinite porous medium.

  2. Cheap contouring of costly functions: the Pilot Approximation Trajectory algorithm

    International Nuclear Information System (INIS)

    Huttunen, Janne M J; Stark, Philip B

    2012-01-01

    The Pilot Approximation Trajectory (PAT) contour algorithm can find the contour of a function accurately when it is not practical to evaluate the function on a grid dense enough to use a standard contour algorithm, for instance, when evaluating the function involves conducting a physical experiment or a computationally intensive simulation. PAT relies on an inexpensive pilot approximation to the function, such as interpolating from a sparse grid of inexact values, or solving a partial differential equation (PDE) numerically using a coarse discretization. For each level of interest, the location and ‘trajectory’ of an approximate contour of this pilot function are used to decide where to evaluate the original function to find points on its contour. Those points are joined by line segments to form the PAT approximation of the contour of the original function. Approximating a contour numerically amounts to estimating a lower level set of the function, the set of points on which the function does not exceed the contour level. The area of the symmetric difference between the true lower level set and the estimated lower level set measures the accuracy of the contour. PAT measures its own accuracy by finding an upper confidence bound for this area. In examples, PAT can estimate a contour more accurately than standard algorithms, using far fewer function evaluations than standard algorithms require. We illustrate PAT by constructing a confidence set for viscosity and thermal conductivity of a flowing gas from simulated noisy temperature measurements, a problem in which each evaluation of the function to be contoured requires solving a different set of coupled nonlinear PDEs. (paper)

  3. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.

  4. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    Science.gov (United States)

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs was prepared as follows: one bottle with the gram weight of powdered formula recommended by the manufacturer for the respective serving size; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within 5% equivalence bounds with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote
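    The two one-sided t tests (TOST) procedure referenced above is straightforward to reproduce in outline. The sketch below is a generic implementation on hypothetical paired gram weights (not the study's data), with an equivalence margin δ set to 5% of the mean actual weight: equivalence is concluded when both one-sided tests reject, i.e., when the larger of the two p-values is below α.

```python
import numpy as np
from scipy import stats

def tost_paired(estimated, actual, delta):
    """Two one-sided t tests for equivalence of paired measurements.

    H0: |mean(estimated - actual)| >= delta; equivalence is concluded when the
    larger of the two one-sided p-values is below the chosen alpha.
    """
    d = np.asarray(estimated, float) - np.asarray(actual, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() + delta) / se          # tests H0: mean <= -delta
    t_high = (d.mean() - delta) / se         # tests H0: mean >= +delta
    p_low = 1.0 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)

# Hypothetical paired gram weights (actual vs photo-estimated), 7 bottles per serving size.
rng = np.random.default_rng(2)
actual = np.repeat([8.7, 17.4, 26.1, 34.8], 7)        # illustrative serving weights only
estimated = actual + rng.normal(-0.05, 0.4, actual.size)
p = tost_paired(estimated, actual, delta=0.05 * actual.mean())
print("TOST p-value:", p,
      "-> equivalent at alpha=0.05" if p < 0.05 else "-> equivalence not shown")
```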

  5. Approximation methods in loop quantum cosmology: from Gowdy cosmologies to inhomogeneous models in Friedmann–Robertson–Walker geometries

    International Nuclear Information System (INIS)

    Martín-Benito, Mercedes; Martín-de Blas, Daniel; Marugán, Guillermo A Mena

    2014-01-01

    We develop approximation methods in the hybrid quantization of the Gowdy model with linear polarization and a massless scalar field, for the case of three-torus spatial topology. The loop quantization of the homogeneous gravitational sector of the Gowdy model (according to the improved dynamics prescription) and the presence of inhomogeneities lead to a very complicated Hamiltonian constraint. Therefore, the extraction of physical results calls for the introduction of well justified approximations. We first show how to approximate the homogeneous part of the Hamiltonian constraint, corresponding to Bianchi I geometries, as if it described a Friedmann–Robertson–Walker (FRW) model corrected with anisotropies. This approximation is valid in the sector of high energies of the FRW geometry (concerning its contribution to the constraint) and for anisotropy profiles that are sufficiently smooth. In addition, for certain families of states related to regimes of physical interest, with negligible quantum effects of the anisotropies and small inhomogeneities, one can approximate the Hamiltonian constraint of the inhomogeneous system by that of an FRW geometry with a relatively simple matter content, and then obtain its solutions. (paper)

  6. Accurate quasiparticle calculation of x-ray photoelectron spectra of solids.

    Science.gov (United States)

    Aoki, Tsubasa; Ohno, Kaoru

    2018-05-31

    It has been highly desired to provide an accurate and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated results show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA have a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The resulting C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve the highly accurate prediction of CEBEs without any empirical parameter for band-gapped crystals, and provide a more reliable theoretical approach than the conventional ΔSCF-DFT method.

  7. Accurate quasiparticle calculation of x-ray photoelectron spectra of solids

    Science.gov (United States)

    Aoki, Tsubasa; Ohno, Kaoru

    2018-05-01

    It has been highly desired to provide an accurate and reliable method to calculate core electron binding energies (CEBEs) of crystals and to understand the final state screening effect on a core hole in high resolution x-ray photoelectron spectroscopy (XPS), because the ΔSCF method cannot be simply used for bulk systems. We propose to use the quasiparticle calculation based on many-body perturbation theory for this problem. In this study, CEBEs of band-gapped crystals, silicon, diamond, β-SiC, BN, and AlP, are investigated by means of the GW approximation (GWA) using the full ω integration and compared with the preexisting XPS data. The screening effect on a deep core hole is also investigated in detail by evaluating the relaxation energy (RE) from the core and valence contributions separately. Calculated results show that not only the valence electrons but also the core electrons have an important contribution to the RE, and the GWA have a tendency to underestimate CEBEs due to the excess RE. This underestimation can be improved by introducing the self-screening correction to the GWA. The resulting C1s, B1s, N1s, Si2p, and Al2p CEBEs are in excellent agreement with the experiments within 1 eV absolute error range. The present self-screening corrected GW approach has the capability to achieve the highly accurate prediction of CEBEs without any empirical parameter for band-gapped crystals, and provide a more reliable theoretical approach than the conventional ΔSCF-DFT method.

  8. On a Convergence of Rational Approximations by the Modified Fourier Basis

    Directory of Open Access Journals (Sweden)

    Tigran Bakaryan

    2017-12-01

    Full Text Available We continue investigations of the modified-trigonometric-rational approximations that arise while accelerating the convergence of the modified Fourier expansions by means of rational corrections. Previously, we investigated the pointwise convergence of the rational approximations away from the endpoints and the $L_2$-convergence on the entire interval. Here, we study the convergence at the endpoints and derive the exact constants for the main terms of asymptotic errors. We show that the Fourier-Padé approximations are much more accurate in all frameworks than the modified expansions for sufficiently smooth functions. Moreover, we consider a simplified version of the rational approximations and explore the optimal values of parameters that lead to better accuracy in the framework of the $L_2$-error. Numerical experiments compare the rational approximations with the modified Fourier expansions.

  9. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
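    The "synergy extrapolation" step can be sketched generically on synthetic excitations (not the study's EMG data or its exact pipeline): non-negative matrix factorization of the included channels yields time-varying synergy excitations, the excluded channels are then reconstructed by non-negative regression onto those excitations, and accuracy is summarized by an (uncentered) variance-accounted-for measure. All dimensions and signals below are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(3)
T, n_syn, n_incl, n_excl = 500, 4, 8, 16

# Synthetic ground truth: all muscle excitations share n_syn synergy excitations.
H_true = np.abs(rng.standard_normal((n_syn, T)))            # time-varying synergy excitations
W_incl = np.abs(rng.standard_normal((n_incl, n_syn)))
W_excl = np.abs(rng.standard_normal((n_excl, n_syn)))
E_incl = W_incl @ H_true + 0.02 * rng.random((n_incl, T))   # "included" channels
E_excl = W_excl @ H_true + 0.02 * rng.random((n_excl, T))   # "excluded" channels

# Extract synergies from the included channels only.
model = NMF(n_components=n_syn, init='nndsvda', max_iter=2000, random_state=0)
W_fit = model.fit_transform(E_incl)                         # (n_incl, n_syn) weights
H_fit = model.components_                                   # (n_syn, T) synergy excitations

# Reconstruct each excluded channel by non-negative regression onto H_fit.
recon = np.vstack([nnls(H_fit.T, e)[0] @ H_fit for e in E_excl])

vaf = 1.0 - np.sum((E_excl - recon) ** 2) / np.sum(E_excl ** 2)
print(f"extrapolation VAF: {vaf:.3f}")
```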

  10. A general approach to the construction of 'very accurate' or 'definitive' methods by radiochemical NAA and the role of these methods in QA

    International Nuclear Information System (INIS)

    Dybczynski, R.

    1998-01-01

    Constant progress in instrumentation and methodology of inorganic trace analysis is not always paralleled by improvement in reliability of analytical results. Our approach to construction of 'very accurate' methods for the determination of selected trace elements in biological materials by RNAA is based on the assumption that: (i) The radionuclide in question should be selectively and quantitatively isolated from the irradiated sample by a suitable radiochemical scheme, optimized with respect to this particular radionuclide, finally yielding the analyte in a state of high radiochemical purity, which ensures interference-free measurement by gamma-ray spectrometry. (ii) The radiochemical scheme should be based on ion exchange and/or extraction column chromatography, resulting in easy automatic repetition of the elementary act of distribution of the analyte and accompanying radionuclides between the stationary and mobile phases. (iii) The method should have some intrinsic mechanisms incorporated into the procedure that prevent any possibility of gross errors. Based on these general assumptions, several more specific rules for devising 'very accurate' methods were formulated and applied when elaborating our methods for the determination of copper, cobalt, nickel, cadmium, molybdenum and uranium in biological materials. The significance of such methods for Quality Assurance is pointed out and illustrated by their use in the certification campaign of the new Polish biological CRMs based on tobacco

  11. Application of simple approximate system analysis methods for reliability and availability improvement of reactor WWER-1000

    International Nuclear Information System (INIS)

    Manchev, B.; Marinova, B.; Nenkova, B.

    2001-01-01

    The method described in this report provides a set of simple, easily understood 'approximate' models applicable to a large class of system architectures. The approximation models are developed by constructing a Markov model of each redundant subsystem and then replacing it by a pseudo-component. Of equal importance, the models can be easily understood even by non-experts, including managers, high-level decision-makers and unsophisticated consumers. A necessary requirement for their application is that the systems be repairable and that the mean time to repair be much smaller than the mean time to failure. This is the case most often met in practice. Results of applying the 'approximate' models to a technological system of Kozloduy NPP are also presented. The results obtained compare quite favorably with the results obtained by using the SAPHIRE software.
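    To give the flavour of such an 'approximate' model (a generic sketch under the stated assumption that mean repair time is much shorter than mean time to failure, not the report's Kozloduy analysis), the snippet below builds the continuous-time Markov model of a 1-out-of-2 redundant subsystem with a single repair crew, computes its steady-state unavailability, and collapses it into an equivalent pseudo-component failure rate, which can then be compared with the familiar approximation 2λ²/μ.

```python
import numpy as np

def redundant_pair_pseudo_component(lam, mu):
    """Steady state of a 1-out-of-2 redundant subsystem (states: 2 up, 1 up, 0 up;
    single repair crew) and its collapse into a pseudo-component.

    Returns (unavailability, equivalent failure rate); the well-known approximation
    for mu >> lam is lambda_eq ~ 2*lam**2/mu.
    """
    Q = np.array([[-2 * lam, 2 * lam, 0.0],
                  [mu, -(mu + lam), lam],
                  [0.0, mu, -mu]])
    # Solve pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    unavailability = pi[2]
    failure_freq = pi[1] * lam                    # rate of entering the failed state
    lam_eq = failure_freq / (pi[0] + pi[1])       # conditional on the subsystem being up
    return unavailability, lam_eq

lam, mu = 1.0e-4, 1.0e-1                          # per hour; MTTR << MTTF as required
U, lam_eq = redundant_pair_pseudo_component(lam, mu)
print(f"unavailability = {U:.3e}, equivalent failure rate = {lam_eq:.3e} 1/h")
print(f"approximation 2*lam^2/mu = {2 * lam**2 / mu:.3e} 1/h")
```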

  12. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2009-06-19

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^{λ}exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.

  13. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2009-01-01

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^{λ}exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.

  14. An accurate optical design method for synchrotron radiation beamlines with wave-front aberration theory

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xiaojiang, E-mail: slsyxj@nus.edu.sg; Diao, Caozheng; Breese, Mark B. H. [Singapore Synchrotron Light Source, National University of Singapore, Singapore 117603 (Singapore)

    2016-07-27

    An aberration calculation method developed by Lu [1] can treat individual aberration terms precisely. The spectral aberration is the linear sum of these aberration terms, and the aberrations of multi-element systems can also be calculated correctly when the stretching ratio, defined herein, is unity. Focusing mirror-grating systems optimized according to Lu's method are evaluated, along with the Light Path Function (LPF) and Spot Diagram (SD) methods, to confirm the advantage of Lu's methodology. Lu's aberration terms are derived from a precise wave-front treatment, whereas the terms of the power series expansion of the light path function do not yield an accurate sum of the aberrations. Moreover, Lu's aberration terms can be individually optimized. This is not possible with the analytical spot diagram formulae.

  15. An Accurate Method for Inferring Relatedness in Large Datasets of Unphased Genotypes via an Embedded Likelihood-Ratio Test

    KAUST Repository

    Rodriguez, Jesse M.

    2013-01-01

    Studies that map disease genes rely on accurate annotations that indicate whether individuals in the studied cohorts are related to each other or not. For example, in genome-wide association studies, the cohort members are assumed to be unrelated to one another. Investigators can correct for individuals in a cohort with previously-unknown shared familial descent by detecting genomic segments that are shared between them, which are considered to be identical by descent (IBD). Alternatively, elevated frequencies of IBD segments near a particular locus among affected individuals can be indicative of a disease-associated gene. As genotyping studies grow to use increasingly large sample sizes and meta-analyses begin to include many data sets, accurate and efficient detection of hidden relatedness becomes a challenge. To enable disease-mapping studies of increasingly large cohorts, a fast and accurate method to detect IBD segments is required. We present PARENTE, a novel method for detecting related pairs of individuals and shared haplotypic segments within these pairs. PARENTE is a computationally-efficient method based on an embedded likelihood ratio test. As demonstrated by the results of our simulations, our method exhibits better accuracy than the current state of the art, and can be used for the analysis of large genotyped cohorts. PARENTE's higher accuracy becomes even more significant in more challenging scenarios, such as detecting shorter IBD segments or when an extremely low false-positive rate is required. PARENTE is publicly and freely available at http://parente.stanford.edu/. © 2013 Springer-Verlag.

  16. Weighted density approximation for bonding in molecules: ring and cage polymers

    CERN Document Server

    Sweatman, M B

    2003-01-01

    The focus of this work is the bonded contribution to the intrinsic Helmholtz free energy of molecules. A weighted density approximation (WDA) for this contribution is presented within the interaction site model (ISM) for ring and cage polymers. The resulting density functional theory (ISM/WDA) for these systems is no more complex than theories for a pure simple fluid, and much less complex than density functional approaches that treat the bonding functional exactly. The ISM/WDA bonding functional is much more accurate than either the ISM/HNC or ISM/PY bonding functionals, which are related to the reference interaction-site model (RISM)/HNC and RISM/PY integral equations respectively, for ideal ring polymers. This means that the ISM/WDA functional should generally be more accurate for most 'real' ring or cage polymer systems when any reasonable approximation for the 'excess' contribution to the intrinsic Helmholtz free energy is employed.

  17. Weighted density approximation for bonding in molecules: ring and cage polymers

    International Nuclear Information System (INIS)

    Sweatman, M B

    2003-01-01

    The focus of this work is the bonded contribution to the intrinsic Helmholtz free energy of molecules. A weighted density approximation (WDA) for this contribution is presented within the interaction site model (ISM) for ring and cage polymers. The resulting density functional theory (ISM/WDA) for these systems is no more complex than theories for a pure simple fluid, and much less complex than density functional approaches that treat the bonding functional exactly. The ISM/WDA bonding functional is much more accurate than either the ISM/HNC or ISM/PY bonding functionals, which are related to the reference interaction-site model (RISM)/HNC and RISM/PY integral equations respectively, for ideal ring polymers. This means that the ISM/WDA functional should generally be more accurate for most 'real' ring or cage polymer systems when any reasonable approximation for the 'excess' contribution to the intrinsic Helmholtz free energy is employed

  18. Approximate Bayesian computation.

    Directory of Open Access Journals (Sweden)

    Mikael Sunnåker

    Full Text Available Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years and in particular for the analysis of complex problems arising in biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
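    A minimal rejection-ABC sketch is given below on an arbitrary conjugate toy model (Gaussian data with a Gaussian prior), chosen so the exact posterior is available for comparison: parameters are drawn from the prior, a summary statistic is simulated, and only draws whose summary lies within a tolerance of the observed summary are kept. Sample sizes and the tolerance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: data are N(theta, 1); prior theta ~ N(0, 10^2); summary = sample mean.
n_obs = 30
theta_true = 1.5
observed = rng.normal(theta_true, 1.0, n_obs)
s_obs = observed.mean()

def rejection_abc(n_draws=200_000, eps=0.05):
    """Keep prior draws whose simulated summary statistic is within eps of s_obs."""
    theta = rng.normal(0.0, 10.0, n_draws)                  # draws from the prior
    # Simulate the summary directly: the sample mean of n_obs N(theta, 1) draws
    # is N(theta, 1/n_obs).
    s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))
    return theta[np.abs(s_sim - s_obs) < eps]

posterior_draws = rejection_abc()
# For this conjugate toy problem the exact posterior is Gaussian, so we can compare.
post_var = 1.0 / (1.0 / 10.0**2 + n_obs / 1.0)
post_mean = post_var * (n_obs * s_obs / 1.0)
print("ABC   mean/sd:", posterior_draws.mean(), posterior_draws.std())
print("exact mean/sd:", post_mean, np.sqrt(post_var))
```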

  19. Multilevel weighted least squares polynomial approximation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-06-30

    Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
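    A one-dimensional sketch of the weighted least-squares idea (generic, not the paper's multilevel algorithm): sample points on [-1, 1] are drawn from the Chebyshev (arcsine) density, a standard practical choice of sampling measure for polynomial spaces, and the least-squares problem is weighted by the ratio of the uniform density to the sampling density so that it approximates the L²(uniform) projection onto Legendre polynomials. The target function and degree are arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre

def weighted_ls_poly(f, degree, n_samples, rng):
    """Weighted least-squares projection of f onto Legendre polynomials of the
    given degree on [-1, 1]. Samples follow the Chebyshev (arcsine) density and
    are weighted by (uniform density)/(sampling density)."""
    u = rng.uniform(0.0, np.pi, n_samples)
    x = np.cos(u)                                        # Chebyshev-distributed samples
    w = 0.5 * np.pi * np.sqrt(1.0 - x ** 2)              # density ratio weights
    V = legendre.legvander(x, degree)                    # Vandermonde in the Legendre basis
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(V * sw[:, None], f(x) * sw, rcond=None)
    return coef

rng = np.random.default_rng(5)
f = lambda x: np.exp(x) * np.sin(3 * x)                  # arbitrary smooth target
coef = weighted_ls_poly(f, degree=10, n_samples=200, rng=rng)
xg = np.linspace(-1, 1, 1001)
print("max error on [-1, 1]:", np.max(np.abs(legendre.legval(xg, coef) - f(xg))))
```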

  20. Approximate spin projected spin-unrestricted density functional theory method: Application to diradical character dependences of second hyperpolarizabilities

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, Masayoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Minami, Takuya, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Fukui, Hitoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Yoneda, Kyohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Shigeta, Yasuteru, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Kishi, Ryohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp [Department of Materials Engineering Science, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan); Champagne, Benoît; Botek, Edith [Laboratoire de Chimie Théorique, Facultés Universitaires Notre-Dame de la Paix (FUNDP), rue de Bruxelles, 61, 5000 Namur (Belgium)

    2015-01-22

    We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.

  1. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio; Tempone, Raul; Wolfers, Sören

    2017-01-01

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  2. Sparse approximation of multilinear problems with applications to kernel-based methods in UQ

    KAUST Repository

    Nobile, Fabio

    2017-11-16

    We provide a framework for the sparse approximation of multilinear problems and show that several problems in uncertainty quantification fit within this framework. In these problems, the value of a multilinear map has to be approximated using approximations of different accuracy and computational work of the arguments of this map. We propose and analyze a generalized version of Smolyak’s algorithm, which provides sparse approximation formulas with convergence rates that mitigate the curse of dimension that appears in multilinear approximation problems with a large number of arguments. We apply the general framework to response surface approximation and optimization under uncertainty for parametric partial differential equations using kernel-based approximation. The theoretical results are supplemented by numerical experiments.

  3. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum-likelihood expectation-maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a

  4. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied, based on the determination of carbon in gallium arsenide using the ¹²C(d,n)¹³N reaction, and the following results are obtained: (1) The average stopping power method with thick-target yield is useful as an accurate and practical standardization method. (2) The front surface of the sample has to be etched for an accurate estimate of the incident energy. (3) CPAA is utilized for calibration of light-element analysis by physical methods. (4) The calibration factor for carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) × 10¹⁵ cm⁻¹. (author)

  5. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    Science.gov (United States)

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is rarely integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate and also quick and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method (a picture after the meal) and the pre-postMeal method (a picture before and after the meal), and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) compared to the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable in monitoring food intake in a clinical setting, which enables a quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.

  6. Coupled radiative transfer equation and diffusion approximation model for photon migration in turbid medium with low-scattering and non-scattering regions

    International Nuclear Information System (INIS)

    Tarvainen, Tanja; Vauhkonen, Marko; Kolehmainen, Ville; Arridge, Simon R; Kaipio, Jari P

    2005-01-01

    In this paper, a coupled radiative transfer equation and diffusion approximation model is extended for light propagation in a turbid medium with low-scattering and non-scattering regions. The light propagation is modelled with the radiative transfer equation in sub-domains in which the assumptions of the diffusion approximation are not valid. The diffusion approximation is used elsewhere in the domain. The two equations are coupled through their boundary conditions and they are solved simultaneously using the finite element method. The streamline diffusion modification is used to avoid the ray-effect problem in the finite element solution of the radiative transfer equation. The proposed method is tested with simulations. The results of the coupled model are compared with the finite element solutions of the radiative transfer equation and the diffusion approximation and with results of Monte Carlo simulation. The results show that the coupled model can be used to describe photon migration in a turbid medium with low-scattering and non-scattering regions more accurately than the conventional diffusion model.

  7. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  9. Traveltime approximations for inhomogeneous HTI media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2011-01-01

    Traveltime information is convenient for parameter estimation, especially if the medium is described by an anisotropic set of parameters. This is especially true if we could relate traveltimes analytically to these medium parameters, which is generally hard to do in inhomogeneous media. As a result, I develop traveltime approximations for horizontally transversely isotropic (HTI) media as simplified and even linear functions of the anisotropic parameters. This is accomplished by perturbing the solution of the HTI eikonal equation with respect to η and the azimuthal symmetry direction (usually used to describe the fracture direction) from a generally inhomogeneous elliptically anisotropic background medium. The resulting approximations can provide an accurate analytical description of the traveltime in a homogeneous background compared to other published moveout equations. These equations will allow us to readily extend the inhomogeneous background elliptically anisotropic model to an HTI medium with variable, but smoothly varying, η and horizontal symmetry direction values. © 2011 Society of Exploration Geophysicists.

  10. Fall with linear drag and Wien's displacement law: approximate solution and Lambert function

    International Nuclear Information System (INIS)

    Vial, Alexandre

    2012-01-01

    We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution involving the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest to undergraduate students, as they show that some transcendental equations found in physics may be solved without purely numerical methods. Moreover, as will be seen in the case of Wien's displacement law, solutions based on series expansion can be very accurate even with few terms. (paper)
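
    As a concrete illustration of solving such a transcendental equation with the Lambert function, the sketch below (assuming SciPy is available) recovers Wien's displacement constant from the closed form x = 5 + W(-5 e^{-5}); this reproduces the standard textbook result rather than the paper's series-expansion treatment.

    ```python
    import numpy as np
    from scipy.special import lambertw
    from scipy.constants import h, c, k   # Planck constant, speed of light, Boltzmann constant

    # Maximizing Planck's spectral radiance in wavelength leads to the transcendental
    # equation x = 5 * (1 - exp(-x)) with x = h*c/(lambda_max * k * T). Its exact
    # solution can be written with the Lambert W function:
    x = 5.0 + lambertw(-5.0 * np.exp(-5.0)).real   # approximately 4.965114
    b = h * c / (k * x)                            # Wien's displacement constant
    print(f"x = {x:.6f}, b = {b:.6e} m*K")         # b is approximately 2.898e-3 m*K
    ```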

  11. Approximate zero-variance Monte Carlo estimation of Markovian unreliability

    International Nuclear Information System (INIS)

    Delcoux, J.L.; Labeau, P.E.; Devooght, J.

    1997-01-01

    Monte Carlo simulation has become an important tool for the estimation of reliability characteristics, since conventional numerical methods are no longer efficient when the size of the system to be solved increases. However, evaluating the probability of occurrence of very rare events by simulation requires playing a very large number of histories of the system, which leads to unacceptable computation times. Acceleration and variance reduction techniques therefore have to be worked out. We show in this paper how to write the equations of Markovian reliability as a transport problem, and how the well-known zero-variance scheme can be adapted to this application. However, such a method is always specific to the estimation of a single quantity, while a Monte Carlo simulation allows several quantities to be estimated simultaneously. Therefore, the estimation of one of them could be made more accurate while at the same time degrading the variance of the other estimates. We propose here a method to reduce the variance of several quantities simultaneously, by using probability laws that would lead to zero variance in the estimation of a mean of these quantities. Just like the zero-variance scheme, the proposed method is impossible to implement exactly. However, we show that simple approximations of it may be very efficient. (author)
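
    The core idea, namely that sampling from a probability law proportional to the contribution of each history drives the variance toward zero, can be illustrated on a toy integral. The sketch below is a generic importance-sampling demonstration, not the Markovian-transport scheme of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    f = lambda x: 3.0 * x**2                      # integrand on [0, 1]; exact integral = 1

    # crude Monte Carlo: sample uniformly, average f
    x = rng.uniform(size=n)
    crude = f(x)

    # near-zero-variance sampling: draw x from a density proportional to the
    # contribution, here p(x) = 3x^2 (inverse-transform sampling: x = u**(1/3));
    # the weighted contributions f(x)/p(x) are then constant.
    x_is = rng.uniform(size=n) ** (1.0 / 3.0)
    weighted = f(x_is) / (3.0 * x_is**2)

    print(f"crude MC:      mean = {crude.mean():.4f}, std = {crude.std():.4f}")
    print(f"importance MC: mean = {weighted.mean():.4f}, std = {weighted.std():.4f}")
    ```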

  12. An improved method to accurately calibrate the gantry angle indicators of the radiotherapy linear accelerators

    International Nuclear Information System (INIS)

    Chang Liyun; Ho, S.-Y.; Du, Y.-C.; Lin, C.-M.; Chen Tainsong

    2007-01-01

    The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for the radiotherapy linear accelerator. In this study, we propose a new and practical method, which uses only a digital level, V-film, and general solid phantoms. By taking the star shot only, we can accurately calculate the true gantry angle according to the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11 deg. compared with the digital indicator, and the standard deviation was within 0.05 deg. This method can also be used for the simulator. In conclusion, the proposed method could be adopted as an annual QA item for mechanical QA of the accelerator.

  13. An efficient and accurate method to obtain the energy-dependent Green function for general potentials

    International Nuclear Information System (INIS)

    Kramer, T; Heller, E J; Parrott, R E

    2008-01-01

    Time-dependent quantum mechanics provides an intuitive picture of particle propagation in external fields. Semiclassical methods link the classical trajectories of particles with their quantum mechanical propagation. Many analytical results and a variety of numerical methods have been developed to solve the time-dependent Schrödinger equation. The time-dependent methods work for nearly arbitrarily shaped potentials, including sources and sinks via complex-valued potentials. Many quantities are measured at fixed energy, which is seemingly not well suited for a time-dependent formulation. Very few methods exist to obtain the energy-dependent Green function for complicated potentials without resorting to ensemble averages or using certain lead-in arrangements. Here, we demonstrate in detail a time-dependent approach, which can accurately and effectively construct the energy-dependent Green function for very general potentials. The applications of the method are numerous, including chemical, mesoscopic, and atomic physics.

  14. A scalable and accurate method for classifying protein-ligand binding geometries using a MapReduce approach.

    Science.gov (United States)

    Estrada, T; Zhang, B; Cicotti, P; Armen, R S; Taufer, M

    2012-07-01

    We present a scalable and accurate method for classifying protein-ligand binding geometries in molecular docking. Our method is a three-step process: the first step encodes the geometry of a three-dimensional (3D) ligand conformation into a single 3D point in the space; the second step builds an octree by assigning an octant identifier to every single point in the space under consideration; and the third step performs an octree-based clustering on the reduced conformation space and identifies the most dense octant. We adapt our method for MapReduce and implement it in Hadoop. The load-balancing, fault-tolerance, and scalability in MapReduce allow screening of very large conformation spaces not approachable with traditional clustering methods. We analyze results for docking trials for 23 protein-ligand complexes for HIV protease, 21 protein-ligand complexes for Trypsin, and 12 protein-ligand complexes for P38alpha kinase. We also analyze cross docking trials for 24 ligands, each docking into 24 protein conformations of the HIV protease, and receptor ensemble docking trials for 24 ligands, each docking in a pool of HIV protease receptors. Our method demonstrates significant improvement over energy-only scoring for the accurate identification of native ligand geometries in all these docking assessments. The advantages of our clustering approach make it attractive for complex applications in real-world drug design efforts. We demonstrate that our method is particularly useful for clustering docking results using a minimal ensemble of representative protein conformational states (receptor ensemble docking), which is now a common strategy to address protein flexibility in molecular docking. Copyright © 2012 Elsevier Ltd. All rights reserved.
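
    The octant-encoding and densest-octant steps can be sketched in a few lines. The code below is a simplified, single-machine stand-in (random points instead of reduced ligand conformations, a Counter instead of a Hadoop reduce), illustrating the idea rather than the authors' MapReduce implementation.

    ```python
    from collections import Counter
    import numpy as np

    def octant_id(point, lo, hi, depth):
        """Encode a 3D point as a tuple of octant indices down to a given octree depth."""
        lo, hi, p = np.array(lo, float), np.array(hi, float), np.asarray(point, float)
        key = []
        for _ in range(depth):
            mid = (lo + hi) / 2.0
            bits = (p >= mid).astype(int)               # one bit per axis
            key.append(int(bits[0] * 4 + bits[1] * 2 + bits[2]))
            lo = np.where(bits == 1, mid, lo)           # descend into the chosen child octant
            hi = np.where(bits == 1, hi, mid)
        return tuple(key)

    # "map": one reduced 3D point per ligand conformation (random stand-ins here)
    rng = np.random.default_rng(1)
    points = rng.normal(size=(1000, 3))
    # "reduce": count conformations per octant and report the densest one
    counts = Counter(octant_id(p, lo=(-5, -5, -5), hi=(5, 5, 5), depth=3) for p in points)
    densest, n = counts.most_common(1)[0]
    print(f"densest depth-3 octant {densest} holds {n} conformations")
    ```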

  15. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

    A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into zones which are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is made by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark pr

  16. A spectral nodal method for discrete ordinates problems in x,y geometry

    International Nuclear Information System (INIS)

    Barros, R.C. de; Larsen, E.W.

    1991-06-01

    A new nodal method is proposed for the solution of S_N problems in x,y-geometry. This method uses the Spectral Green's Function (SGF) scheme for solving the one-dimensional transverse-integrated nodal transport equations with no spatial truncation error. Thus, the only approximations in the x,y-geometry nodal method occur in the transverse leakage terms, as in diffusion theory. We approximate these leakage terms using a flat or constant approximation, and we refer to the resulting method as the SGF-Constant Nodal (SGF-CN) method. We show in numerical calculations that the SGF-CN method is much more accurate than other well-known transport nodal methods for coarse-mesh deep-penetration S_N problems, even though the transverse leakage terms are approximated rather simply. (author)

  17. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Science.gov (United States)

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way for the calculation. The explicit approximation error in terms of parameters is given numerically. Experiments demonstrate that the proposed approach results in significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.
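
    The "linear combination of rescaled Bessel functions" idea can be illustrated generically: fit a radial profile with basis functions J0(s_k r) by least squares. The target profile and the scale set below are arbitrary assumptions for demonstration; the paper fits the Gibson-Lanni integral itself.

    ```python
    import numpy as np
    from scipy.special import j0

    # fit an arbitrary radial profile (a stand-in, not the Gibson-Lanni integral)
    # as a linear combination of rescaled Bessel functions J0(s_k * r)
    r = np.linspace(0.0, 10.0, 400)
    target = np.exp(-0.3 * r) * np.cos(1.7 * r)

    scales = np.linspace(0.1, 3.0, 25)                   # rescaling factors s_k (assumed)
    design = np.stack([j0(s * r) for s in scales], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
    fit = design @ coeffs
    print(f"max abs error of the Bessel-series fit: {np.max(np.abs(fit - target)):.2e}")
    ```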

  18. Approximated calculation of the vacuum wave function and vacuum energy of the LGT with RPA method

    International Nuclear Information System (INIS)

    Hui Ping

    2004-01-01

    The coupled cluster method is improved with the random phase approximation (RPA) to calculate the vacuum wave function and vacuum energy of (2+1)-D SU(2) lattice gauge theory. In this calculation, the trial wave function is composed of single-hollow graphs. The calculated vacuum wave functions show very good scaling behavior in the weak coupling region 1/g² > 1.2 from the third order to the sixth order, and the vacuum energy obtained with the RPA method is lower than that obtained without it, which indicates that this method is the more efficient one.

  19. Sub-micron accurate track navigation method ''Navi'' for the analysis of Nuclear Emulsion

    International Nuclear Information System (INIS)

    Yoshioka, T; Yoshida, J; Kodama, K

    2011-01-01

    Sub-micron accurate track navigation in Nuclear Emulsion is realized by using low-energy signals detected by automated Nuclear Emulsion read-out systems. Using this much denser ''noise'', about 10⁴ times more abundant than the real tracks, the track position navigation reaches sub-micron accuracy using only the information within a single microscope field of view of 200 micron × 200 micron. This method is applied to the OPERA analysis in Japan, i.e. supporting human eye checks of the candidate tracks, confirming neutrino interaction vertices, and embedding missing track segments into the track data read out by automated systems.

  1. Approximation by Cylinder Surfaces

    DEFF Research Database (Denmark)

    Randrup, Thomas

    1997-01-01

    We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...

  2. Algorithmic implementation of particle-particle ladder diagram approximation to study strongly-correlated metals and semiconductors

    Science.gov (United States)

    Prayogi, A.; Majidi, M. A.

    2017-07-01

    In condensed-matter physics, strongly-correlated systems refer to materials that exhibit a variety of fascinating properties and ordered phases, depending on temperature, doping, and other factors. Such unique properties most notably arise from strong electron-electron interactions, and in some cases from interactions involving other quasiparticles as well. Electronic correlation effects are sufficiently non-trivial that one may need an accurate approximation technique with quite heavy computation, such as Quantum Monte Carlo, in order to capture particular material properties arising from such effects. Meanwhile, less accurate techniques may come with lower numerical cost, but their ability to capture particular properties may depend strongly on the choice of approximation. Among the many-body techniques derivable from Feynman diagrams, we aim to formulate an algorithmic implementation of the ladder-diagram approximation to capture the effects of electron-electron interactions. We wish to investigate how these correlation effects influence the temperature-dependent properties of strongly-correlated metals and semiconductors. As we are interested in the temperature-dependent properties of the system, the ladder-diagram method needs to be applied in the Matsubara frequency domain to obtain the self-consistent self-energy. However, in the end we also need to compute dynamical properties such as the density of states (DOS) and optical conductivity, which are defined in the real frequency domain. For this purpose, we need to perform an analytic continuation procedure. At the end of this study, we test the technique by observing the occurrence of the metal-insulator transition in strongly-correlated metals and the renormalization of the band gap in strongly-correlated semiconductors.

  3. 3-D numerical investigation of subsurface flow in anisotropic porous media using multipoint flux approximation method

    KAUST Repository

    Negara, Ardiansyah

    2013-01-01

    Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature that arises from the different geologic processes these formations undergo over long geologic time scales. With respect to petroleum reservoirs, anisotropy in many cases plays a significant role in dictating the direction of flow, which no longer depends only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving the flow of multiphase fluids in which gravity and capillarity play an important role, anisotropy can also have important influences. Therefore, there has been a great deal of motivation to consider anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full-tensor permeability fields. Lately, however, it has been possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method, the stencil of approximation is more involved, i.e., it requires a 9-point stencil for the 2-D model and a 27-point stencil for the 3-D model. This is challenging and cumbersome when assembling the global system of equations. In this work, we apply the equation-type approach, namely the experimenting pressure field approach, which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost during the simulation. We have applied this technique to a variety of anisotropy scenarios of 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation.

  4. An Integrated GNSS/INS/LiDAR-SLAM Positioning Method for Highly Accurate Forest Stem Mapping

    Directory of Open Access Journals (Sweden)

    Chuang Qian

    2016-12-01

    Full Text Available Forest mapping, one of the main components of performing a forest inventory, is an important driving force in the development of laser scanning. Mobile laser scanning (MLS), in which laser scanners are installed on moving platforms, has been studied as a convenient measurement method for forest mapping in the past several years. Positioning and attitude accuracies are important for forest mapping using MLS systems. Inertial Navigation Systems (INSs) and Global Navigation Satellite Systems (GNSSs) are typical and popular positioning and attitude sensors used in MLS systems. In forest environments, because of the loss of signal due to occlusion and severe multipath effects, the positioning accuracy of GNSS is severely degraded, and even that of GNSS/INS decreases considerably. Light Detection and Ranging (LiDAR)-based Simultaneous Localization and Mapping (SLAM) can achieve higher positioning accuracy in environments containing many features and is commonly implemented in GNSS-denied indoor environments. Forests are different from an indoor environment in that the GNSS signal is available to some extent in a forest. Although the positioning accuracy of GNSS/INS is reduced, estimates of heading angle and velocity can remain highly accurate even with fewer satellites. GNSS/INS and the LiDAR-based SLAM technique can be effectively integrated to form a sustainable, highly accurate positioning and mapping solution for use in forests without additional hardware costs. In this study, information such as heading angles and velocities extracted from a GNSS/INS is utilized to improve the positioning accuracy of the SLAM solution, and two information-aided SLAM methods are proposed. First, a heading angle-aided SLAM (H-aided SLAM) method is proposed that supplies the heading angle from GNSS/INS to SLAM. Field test results show that the horizontal positioning accuracy of an entire trajectory of 800 m is 0.13 m and is significantly improved (by 70%) compared to that

  5. WKB approximation in atomic physics

    International Nuclear Information System (INIS)

    Karnakov, Boris Mikhailovich

    2013-01-01

    Provides extensive coverage of the Wentzel-Kramers-Brillouin approximation and its applications. Presented as a sequence of problems with highly detailed solutions. Gives a concise introduction for calculating Rydberg states, potential barriers and quasistationary systems. This book has evolved from lectures devoted to applications of the Wentzel-Kramers-Brillouin (WKB, or quasi-classical) approximation and of the method of 1/N-expansion for solving various problems in atomic and nuclear physics. The intent of this book is to help students and investigators in this field to extend their knowledge of these important calculation methods in quantum mechanics. Much material is contained herein that is not to be found elsewhere. The WKB approximation, while constituting a fundamental area in atomic physics, has not been the focus of many books. A novel method has been adopted for the presentation of the subject matter: the material is presented as a succession of problems, each followed by a detailed solution. The methods introduced are then used to calculate Rydberg states in atomic systems and to evaluate potential barriers and quasistationary states. Finally, adiabatic transition and ionization of quantum systems are covered.

  6. Approximate method for calculating heat conditions in the magnetic circuits of transformers and betatrons

    International Nuclear Information System (INIS)

    Loginov, V.S.

    1986-01-01

    A technique is suggested for the engineering calculation of the two-dimensional stationary temperature field of a blending pile of rectangular cross section with internal heat release under nonsymmetrical cooling conditions. The area of its practical application is determined on the basis of experimental data known in the literature. Different methods for calculating the temperature distribution in the betatron magnetic circuit are compared. A graph of the maximum error of the temperature calculated from the approximate expressions with respect to the exact solution is given.

  7. An efficient discontinuous Galerkin finite element method for highly accurate solution of Maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
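
    For reference, a classical PE(CE)^m step (forward-Euler predictor, trapezoidal corrector applied m times) looks like the sketch below, here driven on a scalar test ODE. The paper's scheme replaces the classical coefficients with numerically tuned ones and couples the stepper to a DG-FEM spatial discretization.

    ```python
    import numpy as np

    def pece_step(f, t, y, h, m=2):
        """One classical PE(CE)^m step: Euler predictor, trapezoidal corrector applied m times."""
        fp = f(t, y)
        y_new = y + h * fp                               # P: predict
        for _ in range(m):                               # (CE)^m: evaluate, then correct
            y_new = y + 0.5 * h * (fp + f(t + h, y_new))
        return y_new

    f = lambda t, y: -y                                  # scalar test ODE y' = -y
    t, y, h = 0.0, 1.0, 0.1
    for _ in range(100):
        y = pece_step(f, t, y, h, m=2)
        t += h
    print(f"y(10) ~= {y:.6e}   (exact: {np.exp(-10.0):.6e})")
    ```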

  9. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  10. Mixed multiscale finite element methods using approximate global information based on partial upscaling

    KAUST Repository

    Jiang, Lijian

    2009-10-02

    The use of limited global information in multiscale simulations is needed when there is no scale separation. Previous approaches entail fine-scale simulations in the computation of the global information. The computation of the global information is expensive. In this paper, we propose the use of approximate global information based on partial upscaling. A requirement for partial homogenization is to capture long-range (non-local) effects present in the fine-scale solution, while homogenizing some of the smallest scales. The local information at these smallest scales is captured in the computation of basis functions. Thus, the proposed approach allows us to avoid the computations at the scales that can be homogenized. This results in coarser problems for the computation of global fields. We analyze the convergence of the proposed method. Mathematical formalism is introduced, which allows estimating the errors due to small scales that are homogenized. The proposed method is applied to simulate two-phase flows in heterogeneous porous media. Numerical results are presented for various permeability fields, including those generated using two-point correlation functions and channelized permeability fields from the SPE Comparative Project (Christie and Blunt, SPE Reserv Evalu Eng 4:308-317, 2001). We consider simple cases where one can identify the scales that can be homogenized. For more general cases, we suggest the use of upscaling on the coarse grid with the size smaller than the target coarse grid where multiscale basis functions are constructed. This intermediate coarse grid renders a partially upscaled solution that contains essential non-local information. Numerical examples demonstrate that the use of approximate global information provides better accuracy than purely local multiscale methods. © 2009 Springer Science+Business Media B.V.

  11. Low-complexity computation of plate eigenmodes with Vekua approximations and the method of particular solutions

    Science.gov (United States)

    Chardon, Gilles; Daudet, Laurent

    2013-11-01

    This paper extends the method of particular solutions (MPS) to the computation of eigenfrequencies and eigenmodes of thin plates, in the framework of the Kirchhoff-Love plate theory. Specific approximation schemes are developed, with plane waves (MPS-PW) or Fourier-Bessel functions (MPS-FB). This framework also requires a suitable formulation of the boundary conditions. Numerical tests, on two plates with various boundary conditions, demonstrate that the proposed approach provides competitive results with standard numerical schemes such as the finite element method, at reduced complexity, and with large flexibility in the implementation choices.

  12. Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes

    Science.gov (United States)

    Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca

    2018-01-01

    Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
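
    As a small, concrete example of the degree-based mean-field (DBMF) equations mentioned above, the sketch below integrates the standard DBMF SIS system (one ODE per degree class) for an assumed power-law degree distribution. The rates, maximum degree, and distribution are illustrative choices, and the paper's AME/PA lumping goes beyond this simplest description.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # degree-based mean-field (DBMF) SIS: one ODE per degree class k, where rho_k is the
    # infected fraction among degree-k nodes and the network enters only through P(k)
    k = np.arange(1, 51, dtype=float)           # degree classes 1..kmax with kmax = 50
    Pk = k**-2.5 / np.sum(k**-2.5)              # assumed power-law degree distribution
    mean_k = np.sum(k * Pk)
    beta, mu = 0.5, 1.0                         # assumed infection and recovery rates

    def dbmf_sis(t, rho):
        theta = np.sum(k * Pk * rho) / mean_k   # prob. a random edge points to an infected node
        return -mu * rho + beta * k * (1.0 - rho) * theta

    sol = solve_ivp(dbmf_sis, (0.0, 100.0), np.full(k.size, 0.01), rtol=1e-8)
    prevalence = np.sum(Pk * sol.y[:, -1])      # global infected fraction at the final time
    print(f"DBMF steady-state prevalence ~= {prevalence:.4f}")
    ```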

  13. Towards the accurate electronic structure descriptions of typical high-constant dielectrics

    Science.gov (United States)

    Jiang, Ting-Ting; Sun, Qing-Qing; Li, Ye; Guo, Jiao-Jiao; Zhou, Peng; Ding, Shi-Jin; Zhang, David Wei

    2011-05-01

    High-constant dielectrics have gained considerable attention due to their wide applications in advanced devices, such as gate oxides in metal-oxide-semiconductor devices and insulators in high-density metal-insulator-metal capacitors. However, the theoretical investigations of these materials cannot keep up with the requirements of experimental development, especially the requirement for an accurate description of band structures. We performed first-principles calculations based on hybrid density functional theory to investigate several typical high-k dielectrics such as Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2. The band structures of these materials are well described within the framework of hybrid density functional theory. The band gaps of Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2 are calculated to be 8.0 eV, 5.6 eV, 6.2 eV, 7.1 eV, 5.3 eV and 5.0 eV, respectively, which are very close to the experimental values and far more accurate than those obtained by the traditional generalized gradient approximation method.

  14. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    Science.gov (United States)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate the solution of sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The method uses collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are accurate and close to both the exact solutions and the results of other methods. The proposed method is computationally more efficient and leads to more accurate results than other methods from the literature.
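
    The collocation mechanism (expand in a polynomial basis, enforce the equation at collocation points plus the boundary conditions, and solve the resulting algebraic system) can be illustrated on a much simpler problem. The sketch below uses plain Legendre polynomials on a second-order test problem, not the Legendre-wavelet basis or a sixth-order equation as in the paper.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    # collocation with a plain Legendre basis on the test problem
    #   u'' = -pi^2 sin(pi x),  u(-1) = u(1) = 0,  exact solution u = sin(pi x)
    N = 16                                                   # basis functions P_0 .. P_{N-1}
    x_col = np.cos(np.pi * np.arange(1, N - 1) / (N - 1))    # N-2 interior collocation points

    A = np.zeros((N, N))
    rhs = np.zeros(N)
    for i in range(N):
        e = np.zeros(N); e[i] = 1.0                          # coefficient vector selecting P_i
        A[:N - 2, i] = leg.legval(x_col, leg.legder(e, 2))   # P_i'' at the collocation points
        A[N - 2, i] = leg.legval(-1.0, e)                    # boundary condition at x = -1
        A[N - 1, i] = leg.legval(1.0, e)                     # boundary condition at x = +1
    rhs[:N - 2] = -np.pi**2 * np.sin(np.pi * x_col)

    c = np.linalg.solve(A, rhs)                              # expansion coefficients of u
    x_test = np.linspace(-1.0, 1.0, 201)
    err = np.max(np.abs(leg.legval(x_test, c) - np.sin(np.pi * x_test)))
    print(f"max error of the collocation solution: {err:.2e}")
    ```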

  15. Eyeball Position in Facial Approximation: Accuracy of Methods for Predicting Globe Positioning in Lateral View.

    Science.gov (United States)

    Zednikova Mala, Pavla; Veleminska, Jana

    2018-01-01

    This study measured the accuracy of traditional methods and validated newly proposed methods for globe positioning in lateral view. Eighty lateral head cephalograms of adult subjects from Central Europe were taken, and the actual and predicted dimensions were compared. The anteroposterior eyeball position was most accurately estimated by the method based on the proportion of the orbital height (SEE = 1.9 mm), followed by the "tangent to the iris" method with SEE = 2.4 mm. The traditional "tangent to the cornea" method underestimated the eyeball projection, with SEE = 5.8 mm. Concerning the superoinferior eyeball position, the results showed a deviation from a central to a more superior position by 0.3 mm, on average, and the traditional method of central positioning of the globe could not be rejected as inaccurate (SEE = 0.3 mm). Based on regression analyses or the proportionality of the orbital height, the SEE = 2.1 mm. © 2017 American Academy of Forensic Sciences.

  16. Inertia effects on the rigid displacement approximation of tokamak plasma vertical motion

    International Nuclear Information System (INIS)

    Carrera, R.; Khayrutdinov, R.R.; Azizov, E.A.; Montalvo, E.; Dong, J.Q.

    1991-01-01

    Elongated plasmas in tokamaks are unstable to axisymmetric vertical displacements. The vacuum vessel and passive conductors can stabilize the plasma motion on the short time scale. For stabilization of the plasma movement on the long time scale, an active feedback control system is required. A widely used method of plasma stability analysis uses the Rigid Displacement Model (RDM) of plasma behavior. In the RDM it is assumed that the plasma displacement is small, and usually plasma inertia effects are neglected. In addition, it is assumed that no changes in plasma shape, plasma current, or plasma current profile take place during the plasma motion. It has been demonstrated that the massless-filament approximation (instantaneous force-balance) accurately reproduces the unstable root of the passive stabilization problem. Then, on the basis that the instantaneous force-balance approximation is correct in the passive stabilization analysis, the massless approximation is also utilized in the study of plasma vertical stabilization by active feedback. The authors show here that the RDM (without mass effects included) does not provide correct stability results for a tokamak configuration (plasma column, passive conductors, and feedback control coils). Therefore, it is concluded that inertia effects have to be retained in the RDM system of equations. It is shown analytically and numerically that stability diagrams with and without plasma-mass corrections differ significantly. When inertia effects are included, the stability region is more restricted than that obtained in the massless approximation.

  17. Approximation of the inverse G-frame operator

    Indian Academy of Sciences (India)

    ... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.

  18. Well-Balanced Second-Order Approximation of the Shallow Water Equations With Friction via Continuous Galerkin Finite Elements

    Science.gov (United States)

    Quezada de Luna, M.; Farthing, M.; Guermond, J. L.; Kees, C. E.; Popov, B.

    2017-12-01

    The Shallow Water Equations (SWEs) are popular for modeling non-dispersive incompressible water waves where the horizontal wavelength is much larger than the vertical scales. They can be derived from the incompressible Navier-Stokes equations assuming a constant vertical velocity. The SWEs are important in Geophysical Fluid Dynamics for modeling surface gravity waves in shallow regimes; e.g., in the deep ocean. Some common geophysical applications are the evolution of tsunamis, river flooding and dam breaks, storm surge simulations, atmospheric flows and others. This work is concerned with the approximation of the time-dependent Shallow Water Equations with friction using explicit time stepping and continuous finite elements. The objective is to construct a method that is at least second-order accurate in space and third or higher-order accurate in time, positivity preserving, well-balanced with respect to rest states, well-balanced with respect to steady sliding solutions on inclined planes and robust with respect to dry states. Methods fulfilling the desired goals are common within the finite volume literature. However, to the best of our knowledge, schemes with the above properties are not well developed in the context of continuous finite elements. We start this work based on a finite element method that is second-order accurate in space, positivity preserving and well-balanced with respect to rest states. We extend it by: modifying the artificial viscosity (via the entropy viscosity method) to deal with issues of loss of accuracy around local extrema, considering a singular Manning friction term handled via an explicit discretization under the usual CFL condition, considering a water height regularization that depends on the mesh size and is consistent with the polynomial approximation, reducing dispersive errors introduced by lumping the mass matrix and others. After presenting the details of the method we show numerical tests that demonstrate the well

  19. Approximate Implicitization Using Linear Algebra

    Directory of Open Access Journals (Sweden)

    Oliver J. D. Barrowclough

    2012-01-01

    Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
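
    A minimal, point-sampled variant of SVD-based approximate implicitization is easy to sketch: evaluate all monomials up to the chosen degree at samples of the parametric curve and take the right singular vector with the smallest singular value as the implicit coefficients. The circle example below is an assumption for illustration and ignores the orthogonal-polynomial refinements discussed in the paper.

    ```python
    import numpy as np

    def approx_implicitize(x, y, m):
        """Point-based approximate implicitization: the smallest right singular vector of
        the monomial collocation matrix gives the implicit polynomial coefficients."""
        exps = [(i, j) for i in range(m + 1) for j in range(m + 1 - i)]   # total degree <= m
        D = np.stack([x**i * y**j for (i, j) in exps], axis=1)
        _, _, Vt = np.linalg.svd(D, full_matrices=False)
        return exps, Vt[-1]

    # example: the unit circle (cos t, sin t); its exact degree-2 implicit form is x^2 + y^2 - 1
    t = np.linspace(0.0, 2.0 * np.pi, 200)
    exps, coeffs = approx_implicitize(np.cos(t), np.sin(t), m=2)
    coeffs = coeffs / coeffs[exps.index((2, 0))]          # normalize the x^2 coefficient to 1
    for (i, j), cij in zip(exps, coeffs):
        if abs(cij) > 1e-8:
            print(f"x^{i} y^{j}: {cij:+.4f}")
    ```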

  20. A Simple yet Accurate Method for Students to Determine Asteroid Rotation Periods from Fragmented Light Curve Data

    Science.gov (United States)

    Beare, R. A.

    2008-01-01

    Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
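
    One common spreadsheet-friendly strategy for this task is to fold the light curve at trial periods and minimize the scatter of the folded curve. The Python sketch below implements a string-length-style version of that idea on synthetic data; the 6.2 h period and noise level are assumed, and the paper's Excel-based procedure may differ in detail.

    ```python
    import numpy as np

    def fold_dispersion(t, mag, period):
        """String-length-style dispersion of the light curve folded at a trial period."""
        phase = (t / period) % 1.0
        m_sorted = mag[np.argsort(phase)]
        return np.sum(np.diff(m_sorted) ** 2)     # a smooth fold gives small phase-adjacent jumps

    def best_period(t, mag, trial_periods):
        disp = [fold_dispersion(t, mag, p) for p in trial_periods]
        return trial_periods[int(np.argmin(disp))]

    # synthetic fragmented light curve with an assumed 6.2 h period
    rng = np.random.default_rng(0)
    true_period = 6.2 / 24.0                      # days
    t = np.sort(rng.uniform(0.0, 5.0, 300))       # irregular sampling over several nights
    mag = 0.3 * np.sin(2.0 * np.pi * t / true_period) + 0.02 * rng.standard_normal(t.size)
    trials = np.linspace(0.1, 1.0, 20000)         # trial periods in days
    print(f"recovered period ~= {best_period(t, mag, trials) * 24.0:.2f} h")
    ```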