Approximate models for the analysis of laser velocimetry correlation functions
Robinson, D.P.
1981-01-01
Velocity distributions in the subchannels of an eleven-pin test section representing a slice through a Fast Reactor sub-assembly were measured with a dual-beam laser velocimeter system using a Malvern K 7023 digital photon correlator for signal processing. Two techniques were used for data reduction of the correlation function to obtain velocity and turbulence values. Whilst both techniques were in excellent agreement on the velocity, marked discrepancies were apparent in the turbulence levels; as a consequence, the turbulence data were not reported. Subsequent investigation has shown that the approximate technique used as the basis of Malvern's Data Processor 7023V is restricted in its range of application. In this note alternative approximate models are described and evaluated. The objective of this investigation was to develop an approximate model which could be used for on-line determination of the turbulence level. (author)
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
Øjelund, Henrik; Sadegh, Payman
2000-01-01
Local function approximations concern fitting low-order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation … be obtained. This paper presents a new approach for system modelling under partial (global) information (the so-called grey-box modelling) that seeks to preserve the benefits of the global as well as the local methodologies within a unified framework. While the proposed technique relies on local approximations … simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time-variant dynamic system under a prior linear time-invariant structure, where local regression fails as a result of high dimensionality.
Delta-function Approximation SSC Model in 3C 273
Kang, S. J.
Abstract. We obtain an approximate analytical solution using a δ-function approximation to the traditional one-zone synchrotron self-Compton (SSC) model. In this model, we describe the electron energy distribution by a broken power-law function with a sharp cut-off, and non-thermal photons are produced by both ...
Two site spin correlation function in Bethe-Peierls approximation for Ising model
Kumar, D. [Roorkee Univ. (India), Dept. of Physics]
1976-07-01
The two-site spin correlation function for an Ising model above the Curie temperature has been calculated by generalising the Bethe-Peierls approximation. The results, derived by a graphical method due to Englert, are essentially the same as those obtained earlier by Elliott and Marshall, and by Oguchi and Ono. The earlier results were obtained by a direct generalisation of the cluster method of Bethe, while the present results are derived by retaining the class of diagrams which is exact on the Bethe lattice.
Towards the Accuracy of Cybernetic Strategy Planning Models: Causal Proof and Function Approximation
Christian A. Hillbrand
2003-04-01
All kinds of strategic tasks within an enterprise require a deep understanding of its critical key success factors and their interrelations, as well as an in-depth analysis of relevant environmental influences. Due to the openness of the underlying system, there seems to be an indefinite number of unknown variables influencing strategic goals. Cybernetic or systemic planning techniques try to overcome this intricacy by modeling the most important cause-and-effect relations within such a system. Although it seems obvious that there are specific influences between business variables, it is mostly impossible to identify the functional dependencies underlying such relations. Hence simulation or evaluation techniques based on such hypothetically assumed models deliver inaccurate results or fail completely. This paper addresses the need for accurate strategy planning models and proposes an approach to prove their cause-and-effect relations by empirical evidence. Based on this foundation, an approach for the approximation of the underlying cause-and-effect function by means of Artificial Neural Networks is developed.
Efficient approximation of the incomplete gamma function for use in cloud model applications
U. Blahak
2010-07-01
This paper describes an approximation to the lower incomplete gamma function γ(a, x) which has been obtained by nonlinear curve fitting. It comprises a fixed number of terms and yields moderate accuracy (the absolute approximation error of the corresponding normalized incomplete gamma function P is smaller than 0.02) in the range 0.9 ≤ a ≤ 45 and x ≥ 0. Monotonicity and asymptotic behaviour of the original incomplete gamma function are preserved.
While providing a slight to moderate performance gain on scalar machines (depending on whether a stays the same for subsequent function evaluations or not) compared to established and more accurate methods based on series or continued-fraction expansions with a variable number of terms, a big advantage over these more accurate methods is the applicability on vector CPUs. Here the fixed number of terms enables proper and efficient vectorization. The fixed number of terms might also be beneficial on massively parallel machines to avoid load imbalances, caused by a possibly vastly different number of terms in series expansions to reach convergence at different grid points. For many cloud-microphysical applications, the provided moderate accuracy should be enough. However, on scalar machines, and if a is the same for subsequent function evaluations, the most efficient method to evaluate incomplete gamma functions is perhaps interpolation of pre-computed regular lookup tables (simplest example: equidistant tables).
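The abstract does not give the fitted coefficients of the fixed-term formula, but the established variable-term series expansion it benchmarks against can be sketched in a few lines (a minimal illustration of the reference method, not the paper's approximation):

```python
import math

def lower_incomplete_gamma_P(a, x, tol=1e-12, max_terms=500):
    """Regularized lower incomplete gamma P(a, x) via the standard
    power series, using a variable number of terms until convergence:
    P(a, x) = x^a e^{-x} / Gamma(a) * sum_{n>=0} x^n / (a (a+1) ... (a+n))."""
    if x < 0 or a <= 0:
        raise ValueError("require x >= 0 and a > 0")
    if x == 0:
        return 0.0
    term = 1.0 / a
    total = term
    for n in range(1, max_terms):
        term *= x / (a + n)
        total += term
        if term < tol * total:  # converged: number of terms varies with a, x
            break
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))
```

The data-dependent loop length is exactly what hinders vectorization and can cause load imbalance on parallel machines, which motivates a fixed-term fit.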
Rosolen, A.; Peco, C.; Arroyo, M.
2013-01-01
We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler–Lagrange partial differential equations (PDE). The solutions develop sharp i...
Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles
2011-06-01
Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.
Bessems, D.; Rutten, M.C.M.; Vosse, van de F.N.
2007-01-01
Lumped-parameter models (zero-dimensional) and wave-propagation models (one-dimensional) for pressure and flow in large vessels, as well as fully three-dimensional fluid–structure interaction models for pressure and velocity, can contribute valuably to answering physiological and patho-physiological
Measure Fields for Function Approximation
1993-06-01
intelligence research is provided by ONR contract N00014-91-J-4038. J.L. Marroquin was supported in part by a grant from the Consejo Nacional de Ciencia y Tecnología, Mexico. … approximating functions are always discontinuous, and the discontinuities are … capacity and generalization capabilities. … panel (a) of figure 1 shows a function z(x, y) that is equal to a tilted plane inside an L
HU Qi-guo
2017-01-01
To reduce vehicle-compartment low-frequency noise, the optimal Latin hypercube sampling method was applied to perform experimental design for sampling in the factorial design space. The thickness parameters of the panels with larger acoustic contribution were considered as factors, with the vehicle mass, the seventh-rank modal frequency of the body, the peak sound pressure at the test point and the sound-pressure root-mean-square value as responses. By using the RBF (radial basis function) neural-network method, an approximation model of the four responses in terms of the six factors was established, and an error analysis of the established approximation model was performed. To optimize the panel thickness parameters, the adaptive simulated annealing algorithm was implemented. Optimization results show that the peak sound pressure at the driver's head was reduced by 4.45 dB and 5.47 dB at 158 Hz and 134 Hz respectively; the test-point pressures were significantly reduced at other frequencies as well. The results indicate that through the optimization the vehicle interior cavity noise was reduced effectively, and the acoustical comfort of the vehicle was improved significantly.
Function approximation of tasks by neural networks
Gougam, L.A.; Chikhi, A.; Mekideche-Chafa, F.
2008-01-01
For several years now, neural network models have enjoyed wide popularity, being applied to problems of regression, classification and time series analysis. Neural networks have recently been seen as attractive tools for developing efficient solutions for many real-world problems in function approximation. The latter is a very important task in environments where computation has to be based on extracting information from data samples in real-world processes. In a previous contribution, we have used a well-known simplified architecture to show that it provides a reasonably efficient, practical and robust multi-frequency analysis. We have investigated the universal approximation theory of neural networks whose transfer functions are: sigmoid (because of biological relevance), Gaussian and two specified families of wavelets. The latter have been found to be more appropriate to use. The aim of the present contribution is therefore to use a Mexican hat wavelet as transfer function to approximate different tasks relevant and inherent to various applications in physics. The results complement and provide new insights into previously published results on this problem.
Nonlinear Ritz approximation for Fredholm functionals
Mudhir A. Abdul Hussain
2015-11-01
In this article we use a modified Lyapunov-Schmidt reduction to find a nonlinear Ritz approximation for a Fredholm functional. This functional corresponds to a nonlinear Fredholm operator defined by a nonlinear fourth-order differential equation.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional-order Sobolev spaces is treated, as well as the usual integer-order spaces and several nonstandard Sobolev-like spaces.
Semiclassical initial value approximation for Green's function.
Kay, Kenneth G
2010-06-28
A semiclassical initial value approximation is obtained for the energy-dependent Green's function. For a system with f degrees of freedom the Green's function expression has the form of a (2f-1)-dimensional integral over points on the energy surface and an integral over time along classical trajectories initiated from these points. This approximation is derived by requiring an integral ansatz for Green's function to reduce to Gutzwiller's semiclassical formula when the integrations are performed by the stationary phase method. A simpler approximation is also derived involving only an (f-1)-dimensional integral over momentum variables on a Poincare surface and an integral over time. The relationship between the present expressions and an earlier initial value approximation for energy eigenfunctions is explored. Numerical tests for two-dimensional systems indicate that good accuracy can be obtained from the initial value Green's function for calculations of autocorrelation spectra and time-independent wave functions. The relative advantages of initial value approximations for the energy-dependent Green's function and the time-dependent propagator are discussed.
RATIONAL APPROXIMATIONS TO GENERALIZED HYPERGEOMETRIC FUNCTIONS.
Under weak restrictions on the various free parameters, general theorems for rational representations of the generalized hypergeometric functions and certain Meijer G-functions are developed. Upon specialization, these theorems yield a sequence of rational approximations which converge to the …
Smooth function approximation using neural networks.
Ferrari, Silvia; Stengel, Robert F
2005-01-01
An algebraic approach for representing multidimensional nonlinear functions by feedforward neural networks is presented. In this paper, the approach is implemented for the approximation of smooth batch data containing the function's input, output, and possibly, gradient information. The training set is associated to the network adjustable parameters by nonlinear weight equations. The cascade structure of these equations reveals that they can be treated as sets of linear systems. Hence, the training process and the network approximation properties can be investigated via linear algebra. Four algorithms are developed to achieve exact or approximate matching of input-output and/or gradient-based training sets. Their application to the design of forward and feedback neurocontrollers shows that algebraic training is characterized by faster execution speeds and better generalization properties than contemporary optimization techniques.
Multidimensional stochastic approximation using locally contractive functions
Lawton, W. M.
1975-01-01
A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
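A minimal one-dimensional sketch of the Robbins-Monro iteration underlying such algorithms (illustrative only; Lawton's multidimensional, locally contractive setting and the normal-mixture application are not reproduced here, and the gain sequence is a conventional choice):

```python
import random

def robbins_monro(noisy_g, x0, steps=20000, seed=0):
    """Robbins-Monro iteration x_{k+1} = x_k - a_k * noisy_g(x_k),
    with gains a_k = 1/(k+1) satisfying sum a_k = inf, sum a_k^2 < inf,
    converging to the root of the (unobservable) mean of noisy_g."""
    rng = random.Random(seed)
    x = x0
    for k in range(steps):
        a_k = 1.0 / (k + 1)
        x = x - a_k * noisy_g(x, rng)
    return x

# Toy regression function: g(x) = x - 3, observed with additive noise.
def noisy_g(x, rng):
    return (x - 3.0) + rng.gauss(0.0, 1.0)
```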
Traa, M.R.M.J.; Traa, M.R.M.J.; Caspers, W.J.; Caspers, W.J.; Banning, E.J.; Banning, E.J.
1994-01-01
In this paper the Hubbard-Anderson model on a square lattice with two holes is studied. The ground state (GS) is approximated by a variational RVB-type wave function. The holes interact by exchange of a localized spin excitation (SE), which is created or absorbed if a hole moves to a
Efficient approximation of black-box functions and Pareto sets
Rennen, G.
2009-01-01
In the case of time-consuming simulation models or other so-called black-box functions, we determine a metamodel which approximates the relation between the input- and output-variables of the simulation model. To solve multi-objective optimization problems, we approximate the Pareto set, i.e. the
A partition function approximation using elementary symmetric functions.
Ramu Anandakrishnan
In statistical mechanics, the canonical partition function Z can be used to compute equilibrium properties of a physical system. Calculating Z, however, is in general computationally intractable, since the computation scales exponentially with the number of particles N in the system. A commonly used method for approximating equilibrium properties is the Monte Carlo (MC) method. For some problems the MC method converges slowly, requiring a very large number of MC steps; for such problems the computational cost of the Monte Carlo method can be prohibitive. Presented here is a deterministic algorithm, the direct interaction algorithm (DIA), for approximating the canonical partition function Z in [Formula: see text] operations. The DIA approximates the partition function as a combinatorial sum of products known as elementary symmetric functions (ESFs), which can be computed in [Formula: see text] operations. The DIA was used to compute equilibrium properties for the isotropic 2D Ising model, and the accuracy of the DIA was compared to that of the basic Metropolis Monte Carlo method. Our results show that the DIA may be a practical alternative for some problems where the Monte Carlo method converges slowly and computational speed is a critical constraint, such as for very large systems or web-based applications.
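The elementary symmetric functions at the heart of the DIA can be computed with the standard O(N²) product recurrence over the factors of ∏ᵢ(1 + xᵢt) (a generic ESF sketch; the DIA's own combinatorial sum is not detailed in the abstract):

```python
def elementary_symmetric(values):
    """All elementary symmetric functions e_0..e_N of `values`,
    computed in O(N^2) by expanding prod_i (1 + x_i * t) one factor
    at a time; e[k] is the coefficient of t^k."""
    e = [1.0]
    for x in values:
        e.append(0.0)
        # update in place from high degree down so each e[k-1] is
        # still the value from the previous factor
        for k in range(len(e) - 1, 0, -1):
            e[k] += x * e[k - 1]
    return e
```

For example, (1 + t)(1 + 2t)(1 + 3t) = 1 + 6t + 11t² + 6t³, so the ESFs of {1, 2, 3} are 1, 6, 11, 6.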
On approximation of functions by product operators
Hare Krishna Nigam
2013-12-01
In the present paper, two quite new results on the degree of approximation of a function f belonging to the class Lip(α, r), 1 ≤ r < ∞, and the weighted class W(L_r, ξ(t)), 1 ≤ r < ∞, by (C,2)(E,1) product operators have been obtained. The results obtained in the present paper generalize various known results on single operators.
Simultaneous perturbation stochastic approximation for tidal models
Altaf, M.U.; Heemink, A.W.; Verlaan, M.; Hoteit, Ibrahim
2011-05-12
The Dutch continental shelf model (DCSM) is a shallow-sea model of the entire continental shelf, used operationally in the Netherlands to forecast storm surges in the North Sea. The forecasts are necessary to support the decision on the timely closure of the movable storm-surge barriers that protect the land. In this study, an automated model calibration method, simultaneous perturbation stochastic approximation (SPSA), is implemented for tidal calibration of the DCSM. The method uses objective-function evaluations to obtain the gradient approximations; unlike the central-difference method, the gradient approximation uses only two objective-function evaluations, independent of the number of parameters being optimized. The calibration parameter in this study is the model bathymetry. A number of calibration experiments are performed, and the effectiveness of the algorithm is evaluated in terms of the accuracy of the final results as well as the computational cost required to produce them. In doing so, comparison is made with a traditional steepest-descent method and with a newly developed proper-orthogonal-decomposition-based calibration method. The main findings are: (1) the SPSA method gives results comparable to the steepest-descent method at little computational cost; (2) the SPSA method can therefore be used at little computational cost to estimate a large number of parameters.
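The two-evaluation gradient estimate described above can be sketched on a toy quadratic (a generic SPSA sketch; the gain sequences and constants here are conventional textbook choices, not taken from the DCSM study):

```python
import random

def spsa_minimize(loss, theta, iters=2000, a=0.1, c=0.1, seed=1):
    """SPSA: each iteration perturbs ALL parameters simultaneously with
    a random +/-1 vector and estimates the gradient from just two loss
    evaluations, regardless of the parameter count."""
    rng = random.Random(seed)
    theta = list(theta)
    n = len(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # standard decaying gain sequences
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = loss(plus) - loss(minus)  # only two evaluations per step
        theta = [t - ak * diff / (2.0 * ck * d)
                 for t, d in zip(theta, delta)]
    return theta
```

A finite-difference gradient would need 2n evaluations per step; SPSA's cost per step is constant in n, which is what makes it attractive when each evaluation is an expensive model run.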
When Density Functional Approximations Meet Iron Oxides.
Meng, Yu; Liu, Xing-Wu; Huo, Chun-Fang; Guo, Wen-Ping; Cao, Dong-Bo; Peng, Qing; Dearden, Albert; Gonze, Xavier; Yang, Yong; Wang, Jianguo; Jiao, Haijun; Li, Yongwang; Wen, Xiao-Dong
2016-10-11
Three density functional approximations (DFAs), PBE, PBE+U, and the Heyd-Scuseria-Ernzerhof screened hybrid functional (HSE), were employed to investigate the geometric, electronic, magnetic, and thermodynamic properties of four iron oxides, namely, α-FeOOH, α-Fe2O3, Fe3O4, and FeO. Comparing our calculated results with available experimental data, we found that HSE (a = 0.15) (containing 15% "screened" Hartree-Fock exchange) can provide reliable values of lattice constants, Fe magnetic moments, band gaps, and formation energies of all four iron oxides, while standard HSE (a = 0.25) seriously overestimates the band gaps and formation energies. For PBE+U, a suitable U value can give quite good results for the electronic properties of each iron oxide, but it is challenging to accurately get other properties of the four iron oxides using the same U value. Subsequently, we calculated the Gibbs free energies of transformation reactions among iron oxides using the HSE (a = 0.15) functional and plotted the equilibrium phase diagrams of the iron oxide system under various conditions, which provide reliable theoretical insight into the phase transformations of iron oxides.
Discovery of functional and approximate functional dependencies in relational databases
Ronald S. King
2003-01-01
This study develops the foundation for a simple yet efficient method for uncovering functional and approximate functional dependencies in relational databases. The technique is based upon the mathematical theory of partitions defined over a relation's row identifiers. Using a levelwise algorithm, the minimal non-trivial functional dependencies can be found using computations conducted on integers; therefore, the required operations on partitions are both simple and fast. Additionally, the row identifiers provide the added advantage of nominally identifying the exceptions to approximate functional dependencies, which can be used effectively in practical data mining applications.
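The partition-based test can be sketched as follows: a functional dependency X → Y holds exactly when refining the partition of row identifiers by X ∪ Y produces no more groups than partitioning by X alone (a minimal illustration of the partition idea, not the paper's levelwise algorithm):

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row identifiers by their values on `attrs`.
    `rows` is a list of dicts mapping attribute name -> value."""
    groups = defaultdict(list)
    for rid, row in enumerate(rows):
        groups[tuple(row[a] for a in attrs)].append(rid)
    return list(groups.values())

def fd_holds(rows, lhs, rhs):
    """X -> Y holds iff adding Y's attributes does not split any
    group of the partition induced by X (same number of groups)."""
    return len(partition(rows, lhs)) == len(partition(rows, lhs + rhs))
```

Counting how many rows would have to be removed from the split groups gives the natural error measure for *approximate* dependencies, with the row identifiers naming the exceptions directly.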
Approximation of Analytic Functions by Bessel's Functions of Fractional Order
Soon-Mo Jung
2011-01-01
We solve the inhomogeneous Bessel differential equation x²y″(x) + xy′(x) + (x² − ν²)y(x) = ∑_{m=0}^{∞} a_m x^m, where ν is a positive nonintegral number, and apply this result to approximate analytic functions of a special type by Bessel functions of fractional order.
Complexity of Gaussian-Radial-Basis Networks Approximating Smooth Functions
Kainen, P.C.; Kůrková, Věra; Sanguineti, M.
2009-01-01
Vol. 25, No. 1 (2009), pp. 63-74. ISSN 0885-064X. R&D Projects: GA ČR GA201/08/1744. Institutional research plan: CEZ:AV0Z10300504. Keywords: Gaussian-radial-basis-function networks; rates of approximation; model complexity; variation norms; Bessel and Sobolev norms; tractability of approximation. Subject RIV: IN - Informatics, Computer Science. Impact factor: 1.227, year: 2009.
Yuan, Shifei; Jiang, Lei; Yin, Chengliang; Wu, Hongjie; Zhang, Xi
2017-06-01
To guarantee the safety, high efficiency and long lifetime of lithium-ion batteries, an advanced battery management system requires a physics-meaningful yet computationally efficient battery model. The pseudo-two-dimensional (P2D) electrochemical model can provide physical information about the lithium concentration and potential distributions across the cell dimension. However, the extensive computational burden caused by the temporal and spatial discretization limits its real-time application. In this research, we propose a new simplified electrochemical model (SEM) by modifying the boundary conditions for the electrolyte diffusion equations, which significantly facilitates the analytical solving process. Then, to obtain a reduced-order transfer function, the Padé approximation method is adopted to simplify the derived transcendental impedance solution. The proposed model with the reduced-order transfer function is cheap to compute and preserves physical meaning through the presence of parameters such as the solid/electrolyte diffusion coefficients (Ds & De) and the particle radius. Simulation illustrates that the proposed simplified model maintains high accuracy for electrolyte-phase concentration (Ce) predictions, with modeling errors of 0.8% and 0.24% respectively, when compared to the rigorous model under 1C-rate pulse charge/discharge and urban dynamometer driving schedule (UDDS) profiles. Meanwhile, the simplified model yields a significantly reduced computational burden, which benefits its real-time application.
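The Padé step can be illustrated generically: given the Taylor coefficients of a transcendental transfer function, the [m/n] approximant follows from a small linear solve (a textbook sketch under that assumption, not the paper's battery-specific reduction):

```python
def solve(A, b):
    """Small dense linear solve by Gaussian elimination with
    partial pivoting (sufficient for low-order Pade systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

def pade(c, m, n):
    """Pade approximant [m/n] from Taylor coefficients c[0..m+n].
    Returns (p, q): numerator and denominator coefficients, q[0] = 1.
    Denominator: sum_j q[j] * c[m+k-j] = 0 for k = 1..n."""
    A = [[c[m + k - j] if 0 <= m + k - j < len(c) else 0.0
          for j in range(1, n + 1)] for k in range(1, n + 1)]
    b = [-c[m + k] for k in range(1, n + 1)]
    q = [1.0] + solve(A, b)
    p = [sum(q[j] * c[i - j] for j in range(0, min(i, n) + 1))
         for i in range(m + 1)]
    return p, q
```

For exp(s) with Taylor coefficients [1, 1, 1/2], the [1/1] approximant is the familiar (1 + s/2)/(1 − s/2).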
Using function approximation to determine neural network accuracy
Wichman, R.F.; Alexander, J.
2013-01-01
Many, if not most, control processes demonstrate nonlinear behavior in some portion of their operating range, and the ability of neural networks to model nonlinear dynamics makes them very appealing for control. Control of high-reliability safety systems, and autonomous control in process or robotic applications, however, require accurate and consistent control, and neural networks are only approximators of various functions, so their degree of approximation becomes important. In this paper, the factors affecting the ability of a feed-forward back-propagation neural network to accurately approximate a nonlinear function are explored. Compared to pattern recognition, using a neural network for function approximation provides an easy and accurate method for determining the network's accuracy. In contrast to other techniques, we show that errors arising in function approximation or curve fitting are caused by the neural network itself rather than by scatter in the data. A method is proposed that improves the accuracy achieved during training and the resulting ability of the network to generalize after training. Binary input vectors provided a more accurate model than scalar inputs, and retraining using a small number of the outlier x,y pairs improved generalization. (author)
Function approximation with polynomial regression splines
Urbanski, P.
1996-01-01
Principles of the polynomial regression splines as well as algorithms and programs for their computation are presented. The programs prepared using software package MATLAB are generally intended for approximation of the X-ray spectra and can be applied in the multivariate calibration of radiometric gauges. (author)
Dynamical Vertex Approximation for the Hubbard Model
Toschi, Alessandro
A full understanding of correlated electron systems in the physically relevant situations of three and two dimensions represents a challenge for the contemporary condensed matter theory. However, in the last years considerable progress has been achieved by means of increasingly more powerful quantum many-body algorithms, applied to the basic model for correlated electrons, the Hubbard Hamiltonian. Here, I will review the physics emerging from studies performed with the dynamical vertex approximation, which includes diagrammatic corrections to the local description of the dynamical mean field theory (DMFT). In particular, I will first discuss the phase diagram in three dimensions with a special focus on the commensurate and incommensurate magnetic phases, their (quantum) critical properties, and the impact of fluctuations on electronic lifetimes and spectral functions. In two dimensions, the effects of non-local fluctuations beyond DMFT grow enormously, determining the appearance of a low-temperature insulating behavior for all values of the interaction in the unfrustrated model: here the prototypical features of the Mott-Hubbard metal-insulator transition, as well as the existence of magnetically ordered phases, are completely overwhelmed by antiferromagnetic fluctuations of exponentially large extension, in accordance with the Mermin-Wagner theorem. Eventually, by a fluctuation diagnostics analysis of cluster DMFT self-energies, the same magnetic fluctuations are identified as responsible for the pseudogap regime in the hole-doped frustrated case, with important implications for the theoretical modeling of the cuprate physics.
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing-data imputation, data compression and planning.
Approximation Algorithms for Model-Based Diagnosis
Feldman, A.B.
2010-01-01
Model-based diagnosis is an area of abductive inference that uses a system model, together with observations about system behavior, to isolate sets of faulty components (diagnoses) that explain the observed behavior, according to some minimality criterion. This thesis presents greedy approximation
Discontinuous approximate molecular electronic wave-functions
Stuebing, E.W.; Weare, J.H.; Parr, R.G.
1977-01-01
Following Kohn, Schlosser and Marcus, and Weare and Parr, an energy functional is defined for a molecular problem which is stationary in the neighborhood of the exact solution and permits the use of trial functions that are discontinuous. The functional differs from the functional of the standard Rayleigh-Ritz method in the replacement of the usual kinetic energy operators T̂(μ) with operators T̂′(μ) = T̂(μ) + Î(μ), where Î(μ) generates contributions from surfaces of nonsmooth behavior. If one uses the ∇Ψ·∇Ψ way of writing the usual kinetic energy contributions, one must add surface integrals of the product of the average of ∇Ψ and the change of Ψ across surfaces of discontinuity. Various calculations are carried out for the hydrogen molecule-ion and the hydrogen molecule. It is shown that ab initio calculations on molecules can be carried out quite generally with a basis of atomic orbitals exactly obeying the zero-differential-overlap (ZDO) condition, and a firm basis is thereby provided for theories of molecular electronic structure invoking the ZDO approximation. It is demonstrated that a valence bond theory employing orbitals exactly obeying ZDO can provide an adequate account of chemical bonding, and several suggestions are made regarding molecular orbital methods.
Approximation methods for the partition functions of anharmonic systems
Lew, P.; Ishida, T.
1979-07-01
The analytical approximations for the classical, quantum mechanical and reduced partition functions of the diatomic molecule oscillating internally under the influence of the Morse potential have been derived and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on the 2-D and 3-D model systems agree well with numerically exact calculations.
Messica, A.
2016-10-01
The probability distribution function of a weighted sum of non-identical lognormal random variables is required in various fields of science and engineering, and specifically in finance for portfolio management as well as exotic options valuation. Unfortunately, it has no known closed form and therefore has to be approximated. Most of the approximations presented to date are complex and complicated to implement. This paper presents a simple, easy-to-implement approximation method via modified moment matching and a polynomial asymptotic series expansion correction for a central limit theorem of a finite sum. The method results in an intuitively appealing and computationally efficient approximation for a finite sum of lognormals of at least ten summands, and naturally improves as the number of summands increases. The accuracy of the method is tested against the results of Monte Carlo simulations and also compared against the standard central limit theorem and the commonly practiced Markowitz portfolio equations.
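As a baseline for this kind of moment matching, the classic Fenton-Wilkinson approach matches the mean and variance of the sum to a single lognormal. The sketch below is illustrative (the ten-summand parameters are invented, and the paper's modified moments and polynomial correction are not reproduced); it checks the matched mean against Monte Carlo:

```python
import numpy as np

def lognormal_sum_moment_match(mus, sigmas):
    """Match the mean and variance of a sum of independent lognormals
    to a single lognormal (classic Fenton-Wilkinson moment matching)."""
    means = np.exp(mus + sigmas**2 / 2)
    variances = (np.exp(sigmas**2) - 1) * np.exp(2 * mus + sigmas**2)
    m, v = means.sum(), variances.sum()      # moments of the sum
    sigma2 = np.log(1 + v / m**2)            # matched lognormal parameters
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)

rng = np.random.default_rng(0)
mus = np.array([0.1, 0.3, -0.2, 0.0, 0.2, 0.1, -0.1, 0.0, 0.3, -0.3])
sigmas = np.full(10, 0.4)
mu, sigma = lognormal_sum_moment_match(mus, sigmas)

# Monte Carlo check of the matched mean
samples = rng.lognormal(mus, sigmas, size=(200_000, 10)).sum(axis=1)
print(abs(samples.mean() - np.exp(mu + sigma**2 / 2)) / samples.mean())
```

The matched mean is exact by construction; the approximation error shows up mostly in the tails, which is what a polynomial series correction like the paper's targets.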
An inductive algorithm for smooth approximation of functions
Kupenova, T.N.
2011-01-01
An inductive algorithm is presented for the smooth approximation of functions, based on the Tikhonov regularization method and applied to a specific kind of Tikhonov parametric functional. The discrepancy principle is used to estimate the regularization parameter, and the principle of heuristic self-organization is applied to assess some parameters of the approximating function.
Comparison of four support-vector based function approximators
de Kruif, B.J.; de Vries, Theodorus J.A.
2004-01-01
One of the uses of the support vector machine (SVM), as introduced in V.N. Vapnik (2000), is as a function approximator. The SVM, and approximators based on it, approximate a relation in data by applying interpolation between so-called support vectors, being a limited number of samples that have been
Quasi-fractional approximation to the Bessel functions
Guerrero, P.M.L.
1989-01-01
In this paper the author presents a simple quasi-fractional approximation for the Bessel functions J_ν(x), −1 ≤ ν < 0.5. This has been obtained by extending a previously published method which uses power series and asymptotic expansions simultaneously. The exact and approximated functions coincide in at least two digits for positive x and ν between −1 and 0.4.
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
On root mean square approximation by exponential functions
Sharipov, Ruslan
2014-01-01
The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. Then the nonlinear problem is studied in some particular example.
Function approximation using combined unsupervised and supervised learning.
Andras, Peter
2014-03-01
Function approximation is one of the core tasks that are solved using neural networks in the context of many engineering problems. However, good approximation results need good sampling of the data space, which usually requires exponentially increasing volume of data as the dimensionality of the data increases. At the same time, often the high-dimensional data is arranged around a much lower dimensional manifold. Here we propose the breaking of the function approximation task for high-dimensional data into two steps: (1) the mapping of the high-dimensional data onto a lower dimensional space corresponding to the manifold on which the data resides and (2) the approximation of the function using the mapped lower dimensional data. We use over-complete self-organizing maps (SOMs) for the mapping through unsupervised learning, and single hidden layer neural networks for the function approximation through supervised learning. We also extend the two-step procedure by considering support vector machines and Bayesian SOMs for the determination of the best parameters for the nonlinear neurons in the hidden layer of the neural networks used for the function approximation. We compare the approximation performance of the proposed neural networks using a set of functions and show that indeed the neural networks using combined unsupervised and supervised learning outperform in most cases the neural networks that learn the function approximation using the original high-dimensional data.
Jie Shen
2015-01-01
We describe an extension of the redistributed technique from the classical proximal bundle method to the inexact situation for minimizing nonsmooth nonconvex functions. The cutting-plane model we construct is not an approximation to the whole nonconvex function, but to a local convexification of the approximate objective function, and this local convexification is modified dynamically in order to always yield nonnegative linearization errors. Since we only employ approximate function values and approximate subgradients, theoretical convergence analysis shows that an approximate stationary point, or some double approximate stationary point, can be obtained under mild conditions.
Legendre-tau approximations for functional differential equations
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
Precise analytic approximations for the Bessel function J1 (x)
Maass, Fernando; Martin, Pablo
2018-03-01
Precise and straightforward analytic approximations for the Bessel function J1 (x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with an error of less than 0.04 per cent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
Approximate formulas for elasticity of the Tornquist functions and some their advantages
Issin, Meyram
2017-09-01
In this article, functions of the demand for prime-necessity, second-necessity and luxury goods depending on income are considered. These functions are called Tornquist functions. By means of the return model, the demand for prime-necessity and second-necessity goods is approximately described. Then, on the basis of the method of least squares, approximate formulas for the elasticity of these Tornquist functions are obtained. To obtain an approximate formula for the elasticity of the demand function for luxury goods, a linear asymptotic formula is constructed for this function. Some benefits of the approximate formulas for the elasticity of Tornquist functions are pointed out.
Approximate analytical modeling of leptospirosis infection
Ismail, Nur Atikah; Azmi, Amirah; Yusof, Fauzi Mohamed; Ismail, Ahmad Izani
2017-11-01
Leptospirosis is an infectious disease carried by rodents which can cause death in humans. The disease spreads directly through contact with the feces or urine of infected rodents, or through their bites, and indirectly via water contaminated with their urine and droppings. A significant increase in the number of leptospirosis cases in Malaysia, caused by the recent severe floods, was recorded during the heavy rainfall season. Therefore, to understand the dynamics of leptospirosis infection, a mathematical model based on fractional differential equations has been developed and analyzed. In this paper an approximate analytical method, the multi-step Laplace Adomian decomposition method, has been used to conduct numerical simulations so as to gain insight into the spread of leptospirosis infection.
Reducing Approximation Error in the Fourier Flexible Functional Form
Tristan D. Skolrud
2017-12-01
The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
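The Box-Cox transformation that replaces the logarithmic expansion reduces to the logarithm in the limit of a zero transform parameter, which is what makes the nesting of functional forms possible. A minimal sketch (the helper name is illustrative):

```python
import math

def box_cox(x, lam):
    """Box-Cox transform: (x**lam - 1)/lam, with log(x) as its lam -> 0 limit."""
    if abs(lam) < 1e-12:
        return math.log(x)
    return (x**lam - 1.0) / lam

# The transform interpolates between log (lam = 0) and linear (lam = 1)
print(box_cox(2.0, 0.001), math.log(2.0))   # nearly identical
print(box_cox(2.0, 1.0))                    # exactly x - 1
```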
Approximate models for neutral particle transport calculations in ducts
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)
On approximation and energy estimates for delta 6-convex functions.
Saleem, Muhammad Shoaib; Pečarić, Josip; Rehman, Nasir; Khan, Muhammad Wahab; Zahoor, Muhammad Sajid
2018-01-01
The smooth approximation and weighted energy estimates for delta 6-convex functions are derived in this research. Moreover, we conclude that if 6-convex functions are closed in uniform norm, then their third derivatives are closed in weighted L^2-norm.
Cheap contouring of costly functions: the Pilot Approximation Trajectory algorithm
Huttunen, Janne M J; Stark, Philip B
2012-01-01
The Pilot Approximation Trajectory (PAT) contour algorithm can find the contour of a function accurately when it is not practical to evaluate the function on a grid dense enough to use a standard contour algorithm, for instance, when evaluating the function involves conducting a physical experiment or a computationally intensive simulation. PAT relies on an inexpensive pilot approximation to the function, such as interpolating from a sparse grid of inexact values, or solving a partial differential equation (PDE) numerically using a coarse discretization. For each level of interest, the location and ‘trajectory’ of an approximate contour of this pilot function are used to decide where to evaluate the original function to find points on its contour. Those points are joined by line segments to form the PAT approximation of the contour of the original function. Approximating a contour numerically amounts to estimating a lower level set of the function, the set of points on which the function does not exceed the contour level. The area of the symmetric difference between the true lower level set and the estimated lower level set measures the accuracy of the contour. PAT measures its own accuracy by finding an upper confidence bound for this area. In examples, PAT can estimate a contour more accurately than standard algorithms, using far fewer function evaluations than standard algorithms require. We illustrate PAT by constructing a confidence set for viscosity and thermal conductivity of a flowing gas from simulated noisy temperature measurements, a problem in which each evaluation of the function to be contoured requires solving a different set of coupled nonlinear PDEs. (paper)
An approximate fractional Gaussian noise model with computational cost
Sørbye, Sigrunn H.
2017-09-18
Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
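The construction can be sketched by fitting the weights of a few fixed-coefficient AR(1) autocorrelations to the fGn autocorrelation function. The coefficient grid and the plain least-squares fit below are illustrative simplifications of the paper's four-component fitting procedure:

```python
import numpy as np

def fgn_acf(k, H):
    """Autocorrelation of fractional Gaussian noise at lag k, Hurst index H."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))

H, lags = 0.8, np.arange(0, 200)
target = fgn_acf(lags, H)

# Fixed AR(1) coefficients spread over (0, 1); only the weights are fitted
phis = np.array([0.30, 0.75, 0.95, 0.995])
A = phis[None, :] ** lags[:, None]           # columns: AR(1) autocorrelations
w, *_ = np.linalg.lstsq(A, target, rcond=None)
fit = A @ w
print(np.max(np.abs(fit - target)))          # small over 200 lags
```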
Mathieu functions and its useful approximation for elliptical waveguides
Pillay, Shamini; Kumar, Deepak
2017-11-01
The standard form of the Mathieu differential equation is d²y/dx² + (a − 2q cos 2x)y = 0, where a and q are real parameters and q > 0. In this paper we obtain a closed formula for the generic term of expansions of modified Mathieu functions in terms of Bessel and modified Bessel functions in the following cases: let ξ0 = ξ0i, where i can take the values 1 and 2 corresponding to the first and the second boundary. These approximations also provide alternative methods for the numerical evaluation of Mathieu functions.
Analytical approximations to seawater optical phase functions of scattering
Haltrin, Vladimir I.
2004-11-01
This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. The three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.
On Approximate Solutions of Functional Equations in Vector Lattices
Bogdan Batko
2014-01-01
We provide a method of approximation of approximate solutions of functional equations in the class of functions acting into a Riesz space (algebra). The main aim of the paper is to provide a general theorem that can act as a tool applicable to a possibly wide class of functional equations. The idea is based on the use of the Spectral Representation Theory for Riesz spaces. The main result will be applied to prove the stability of an alternative Cauchy functional equation F(x+y)+F(x)+F(y) ≠ 0 ⇒ F(x+y) = F(x)+F(y) in Riesz spaces, the Cauchy equation with squares F(x+y)² = (F(x)+F(y))² in f-algebras, and the quadratic functional equation F(x+y)+F(x−y) = 2F(x)+2F(y) in Riesz spaces.
Approximation of the Doppler broadening function by Frobenius method
Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.
2005-01-01
An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the methods of Frobenius and the parameters variation. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances mainly for the calculations of multigroup parameters and self-protection factors of the resonances, being the last used to correct microscopic cross-sections measurements by the activation technique. (author)
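For reference, under one common normalization the Doppler broadening function is ψ(x,ξ) = ξ/(2√π) ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy, which tends to 1/(1+x²) as ξ grows. A direct quadrature sketch of this definition (a numerical check, not the paper's Frobenius-based analytical form):

```python
import numpy as np

def psi(x, xi, n=100001, span=60.0):
    """Doppler broadening function, one common normalization:
    psi(x, xi) = xi/(2 sqrt(pi)) * Int exp(-xi^2 (x-y)^2 / 4) / (1+y^2) dy,
    evaluated by trapezoidal quadrature on a truncated interval."""
    y = np.linspace(-span, span, n)
    g = np.exp(-(xi**2 / 4.0) * (x - y)**2) / (1.0 + y**2)
    dy = y[1] - y[0]
    integral = (g[:-1] + g[1:]).sum() * dy / 2.0
    return xi / (2.0 * np.sqrt(np.pi)) * integral

# As xi grows the Gaussian acts as a delta function: psi -> 1/(1+x^2)
print(psi(0.0, 50.0))   # close to 1
print(psi(2.0, 50.0))   # close to 1/5
```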
APPROXIMATION OF PROBABILITY DISTRIBUTIONS IN QUEUEING MODELS
T. I. Aliev
2013-03-01
For probability distributions with a coefficient of variation not equal to unity, mathematical dependences for approximating distributions on the basis of the first two moments are derived by making use of multi-exponential distributions. It is proposed to approximate distributions with a coefficient of variation less than unity by using the hypoexponential distribution, which makes it possible to generate random variables with a coefficient of variation taking any value in the range (0, 1), as opposed to the Erlang distribution, which has only discrete values of the coefficient of variation.
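For the two-phase case, matching the first two moments of a hypoexponential distribution reduces to a quadratic equation; two phases can only reach coefficients of variation down to 1/√2, so values below that need more phases. A sketch (function and parameter names are illustrative):

```python
import math

def hypoexp_rates(mean, cv):
    """Two-phase hypoexponential matching a given mean and coefficient of
    variation; feasible with two phases only for 1/sqrt(2) <= cv < 1."""
    var = (cv * mean) ** 2
    prod = (mean**2 - var) / 2.0             # a*b, with a, b the phase means
    disc = mean**2 - 4.0 * prod              # = 2*var - mean**2, must be >= 0
    if disc < 0:
        raise ValueError("cv below 1/sqrt(2): need more than two phases")
    a = (mean + math.sqrt(disc)) / 2.0
    b = (mean - math.sqrt(disc)) / 2.0
    return 1.0 / a, 1.0 / b                  # the two phase rates

lam1, lam2 = hypoexp_rates(1.0, 0.8)
m = 1/lam1 + 1/lam2                          # mean of the hypoexponential
v = 1/lam1**2 + 1/lam2**2                    # its variance
print(m, math.sqrt(v) / m)                   # recovers mean 1.0 and cv 0.8
```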
Approximate convex hull of affine iterated function system attractors
Mishkinis, Anton; Gentil, Christian; Lanquetin, Sandrine; Sokolov, Dmitry
2012-01-01
Highlights: We present an iterative algorithm to approximate the affine IFS attractor convex hull. Elimination of the interior points significantly reduces the complexity. To optimize calculations, we merge the convex hull images at each iteration. Approximation by ellipses increases the speed of convergence to the exact convex hull. We present a method of output convex hull simplification. Abstract: In this paper, we present an algorithm to construct an approximate convex hull of the attractors of an affine iterated function system (IFS). We construct a sequence of convex hull approximations for any required precision using the self-similarity property of the attractor in order to optimize calculations. Due to the affine properties of IFS transformations, the number of points considered in the construction is reduced. The time complexity of our algorithm is a linear function of the number of iterations and the number of points in the output approximate convex hull. The number of iterations and the execution time increases logarithmically with increasing accuracy. In addition, we introduce a method to simplify the approximate convex hull without loss of accuracy.
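The chaos game plus a standard planar hull routine gives a naive point-cloud baseline for comparison (this is not the paper's self-similarity-based algorithm); the sketch below uses Andrew's monotone chain on the Sierpinski triangle IFS:

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Chaos game for the Sierpinski triangle IFS: w_i(p) = (p + v_i) / 2
random.seed(1)
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
p, pts = (0.2, 0.2), []
for i in range(20000):
    v = random.choice(verts)
    p = ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2)
    if i > 50:                  # discard burn-in before reaching the attractor
        pts.append(p)

hull = convex_hull(pts)
area = 0.5 * abs(sum(a[0]*b[1] - b[0]*a[1]
                     for a, b in zip(hull, hull[1:] + hull[:1])))
print(len(hull), area)          # hull area approaches 0.5, the full triangle
```

The attractor's true convex hull here is the triangle itself (the IFS fixed points are its vertices), so the shoelace area of the point-cloud hull converges to 0.5 from below.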
Bessel collocation approach for approximate solutions of Hantavirus infection model
Suayip Yuzbasi
2017-11-01
In this study, a collocation method is introduced to find approximate solutions of the Hantavirus infection model, which is a system of nonlinear ordinary differential equations. The method is based on the Bessel functions of the first kind, matrix operations and collocation points. It converts the Hantavirus infection model into a matrix equation in terms of the Bessel functions of the first kind; the matrix equation corresponds to a system of nonlinear equations in the unknown Bessel coefficients. The reliability and efficiency of the suggested scheme are demonstrated by numerical applications, and all numerical calculations have been done using a program written in Maple.
An Approximate Proximal Bundle Method to Minimize a Class of Maximum Eigenvalue Functions
Wei Wang
2014-01-01
We present an approximate nonsmooth algorithm to solve a minimization problem in which the objective function is the sum of a maximum eigenvalue function of matrices and a convex function. The essential idea for solving the optimization problem is similar to that of the proximal bundle method, but the difference is that we choose approximate subgradients and function values to construct an approximate cutting-plane model for the problem mentioned above. An important advantage of the approximate cutting-plane model for the objective function is that it is more stable than the cutting-plane model. In addition, the corresponding approximate proximal bundle algorithm is given. Furthermore, the sequences generated by the algorithm converge to the optimal solution of the original problem.
Integral approximants for functions of higher monodromic dimension
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose ''monodromic'' dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (sub-class of Hermite-Pade approximants) and discuss results for both ''horizontal'' and ''diagonal'' sequences of approximants. Some theorems are now available in both cases and make clear the natural domain of convergence of the horizontal sequences is a disk centered on the origin and that of the diagonal sequences is a suitably cut complex-plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
Sequential function approximation on arbitrarily distributed point sets
Wu, Kailiang; Xiu, Dongbin
2018-02-01
We present a randomized iterative method for approximating an unknown function sequentially on an arbitrary point set. The method is based on a recently developed sequential approximation (SA) method, which approximates a target function using one data point at each step and avoids matrix operations. The focus of this paper is on data sets with a highly irregular distribution of points. We present a nearest neighbor replacement (NNR) algorithm, which allows one to sample irregular data sets in a near optimal manner. We provide mathematical justification and error estimates for the NNR algorithm. Extensive numerical examples are also presented to demonstrate that the NNR algorithm can deliver satisfactory convergence for the SA method on data sets with high irregularity in their point distributions.
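A matrix-free, one-point-per-step update in the spirit of the SA method is the Kaczmarz projection step. The sketch below is a simplification with an illustrative quadratic target that lies exactly in the span of the basis, so the coefficients are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(42)
true_c = np.array([1.0, 2.0, 3.0])          # target f(x) = 1 + 2x + 3x^2

def phi(x):
    return np.array([1.0, x, x * x])        # monomial basis

c = np.zeros(3)
for _ in range(20000):
    x = rng.uniform(-1.0, 1.0)              # one random data point per step
    f = phi(x)
    y = true_c @ f                          # noise-free sample of the target
    c += (y - c @ f) * f / (f @ f)          # Kaczmarz projection update
print(c)                                    # converges to [1, 2, 3]
```

Each step projects the coefficient vector onto the hyperplane consistent with the new sample, so no matrix is ever formed or inverted.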
Pade approximants for entire functions with regularly decreasing Taylor coefficients
Rusak, V N; Starovoitov, A P
2002-01-01
For a class of entire functions the asymptotic behaviour of the Hadamard determinants D_{n,m} as 0 ≤ m ≤ m(n) → ∞ and n → ∞ is described. This enables one to study the behaviour of parabolic sequences from the Pade and Chebyshev tables for many individual entire functions. The central result of the paper is as follows: for some sequences {(n, m(n))} in certain classes of entire functions (with regular Taylor coefficients) the Pade approximants {π_{n,m(n)}}, which provide the locally best possible rational approximations, converge to the given function uniformly on the compact set D = {z : |z| ≤ 1} at an asymptotically best rate.
Approximation solutions for indifference pricing under general utility functions
Chen, An; Pelsser, Antoon; Vellekoop, M.H.
2008-01-01
With the aid of Taylor-based approximations, this paper presents results for pricing insurance contracts by using indifference pricing under general utility functions. We discuss the connection between the resulting "theoretical" indifference prices and the pricing rule-of-thumb that practitioners
Animating Nested Taylor Polynomials to Approximate a Function
Mazzone, Eric F.; Piper, Bruce R.
2010-01-01
The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…
Approximate Solutions for Indifference Pricing under General Utility Functions
Chen, A.; Pelsser, A.; Vellekoop, M.
2007-01-01
With the aid of Taylor-based approximations, this paper presents results for pricing insurance contracts by using indifference pricing under general utility functions. We discuss the connection between the resulting "theoretical" indifference prices and the pricing rule-of-thumb that practitioners
Applications of exponential approximation by integer shifts of Gaussian functions
S. M. Sitnik
2013-01-01
In this paper we consider approximations of functions using integer shifts of Gaussians (quadratic exponentials). A method is proposed to find the coefficients of the node functions by solving linear systems of equations. An explicit formula for the determinant of the system is found; based on it, the solvability of the linear system under consideration and the uniqueness of its solution are proved. We compare the results with known ones and briefly indicate applications to signal theory.
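Finding the coefficients by solving a linear system, as described, can be sketched directly; the node range, target function and unit Gaussian width below are illustrative assumptions:

```python
import numpy as np

nodes = np.arange(-10, 11)                  # integer shift centers
f = lambda x: np.exp(-0.1 * x**2)           # function to approximate

# Coefficients from the linear system G c = f(nodes), G[i,j] = exp(-(i-j)^2)
G = np.exp(-(nodes[:, None] - nodes[None, :])**2)
c = np.linalg.solve(G, f(nodes))

approx = lambda x: np.sum(c * np.exp(-(x - nodes)**2))
print(approx(0.0), f(0.0))                  # interpolates exactly at the nodes
```

The Gram matrix is strongly diagonally dominant because the Gaussians decay fast, which is what makes the system comfortably solvable.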
Are there approximate relations among transverse momentum dependent distribution functions?
Harutyun AVAKIAN; Anatoli Efremov; Klaus Goeke; Andreas Metz; Peter Schweitzer; Tobias Teckentrup
2007-10-11
Certain exact relations among transverse momentum dependent parton distribution functions due to QCD equations of motion turn into approximate ones upon the neglect of pure twist-3 terms. On the basis of available data from HERMES we test the practical usefulness of one such "Wandzura-Wilczek-type approximation", namely that connecting $h_{1L}^{\perp(1)a}(x)$ to $h_L^a(x)$, and discuss how it can be further tested by future CLAS and COMPASS data.
Strong semiclassical approximation of Wigner functions for the Hartree dynamics
Athanassoulis, Agissilaos; Paul, Thierry; Pezzotti, Federica; Pulvirenti, Mario
2011-01-01
We consider the Wigner equation corresponding to a nonlinear Schrödinger evolution of the Hartree type in the semiclassical limit h → 0. Under appropriate assumptions on the initial data and the interaction potential, we show that the Wigner function is close in L 2 to its weak limit, the solution of the corresponding Vlasov equation. The strong approximation allows the construction of semiclassical operator-valued observables, approximating their quantum counterparts in Hilbert-Schmidt topology. The proof makes use of a pointwise-positivity manipulation, which seems necessary in working with the L 2 norm and the precise form of the nonlinearity. We employ the Husimi function as a pivot between the classical probability density and the Wigner function, which - as it is well known - is not pointwise positive in general.
Quantal density functional theory II. Approximation methods and applications
Sahni, Viraht
2010-01-01
This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximation methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)
APPROXIMATING INNOVATION POTENTIAL WITH NEUROFUZZY ROBUST MODEL
Kasa, Richard
2015-01-01
In a remarkably short time, economic globalisation has changed the world’s economic order, bringing new challenges and opportunities to SMEs. These processes pushed the need to measure innovation capability, which has become a crucial issue for today’s economic and political decision makers. Companies cannot compete in this new environment unless they become more innovative and respond more effectively to consumers’ needs and preferences – as mentioned in the EU’s innovation strategy. Decision makers cannot make accurate and efficient decisions without knowing the capability for innovation of companies in a sector or a region. This need is forcing economists to develop an integrated, unified and complete method of measuring, approximating and even forecasting the innovation performance not only on a macro but also a micro level. In this recent article a critical analysis of the literature on innovation potential approximation and prediction is given, showing their weaknesses and a possible alternative that eliminates the limitations and disadvantages of classical measuring and predictive methods.
On the approximation of the limit cycles function
L. Cherkas
2007-11-01
We consider planar vector fields depending on a real parameter. It is assumed that this vector field has a family of limit cycles which can be described by means of the limit cycles function $l$. We prove a relationship between the multiplicity of a limit cycle of this family and the order of a zero of the limit cycles function. Moreover, we present a procedure to approximate $l(x)$, which is based on the Newton scheme applied to the Poincaré function and represents a continuation method. Finally, we demonstrate the effectiveness of the proposed procedure by means of a Liénard system.
Numerical approximations of difference functional equations and applications
Zdzisław Kamont
2005-01-01
We give a theorem on the error estimate of approximate solutions for difference functional equations of the Volterra type. We apply this general result in the investigation of the stability of difference schemes generated by nonlinear first order partial differential functional equations and by parabolic problems. We show that all known results on difference methods for initial or initial boundary value problems can be obtained as particular cases of this general and simple result. We assume that the right hand sides of equations satisfy nonlinear estimates of the Perron type with respect to functional variables.
Modeling Rocket Flight in the Low-Friction Approximation
Logan White
2014-09-01
In a realistic model for rocket dynamics, in the presence of atmospheric drag and altitude-dependent gravity, the exact kinematic equation cannot be integrated in closed form; even when neglecting friction, the exact solution is a combination of elliptic functions of Jacobi type, which are not easy to use in a computational sense. This project provides a precise analysis of the various terms in the full equation (such as gravity, drag, and exhaust momentum) and of the numerical ranges for which various approximations are accurate to within 1%. The analysis leads to optimal approximations expressed through elementary functions, which can be implemented for efficient flight prediction on simple computational devices, such as smartphone applications.
Approximation of the exponential integral (well function) using sampling methods
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as well function) is often used in hydrogeology to solve Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, namely Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 10^{-8}. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
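The sampling idea can be illustrated with a minimal stratified (Latin-hypercube-style) estimator. This is our own sketch, not the paper's implementation: the substitution t = u/s maps E1(u) = ∫_u^∞ e^{-t}/t dt onto the unit interval, which is then sampled with one uniform draw per stratum.

```python
import math
import random

def e1_lhs(u, n=20000, rng=None):
    # E1(u) = integral_u^inf exp(-t)/t dt.  Substituting t = u/s gives
    # E1(u) = integral_0^1 exp(-u/s)/s ds, sampled with one
    # Latin-hypercube-style draw per stratum [i/n, (i+1)/n).
    rng = rng or random.Random(0)
    total = 0.0
    for i in range(n):
        s = (i + rng.random()) / n      # uniform draw inside stratum i
        total += math.exp(-u / s) / s   # integrand vanishes as s -> 0
    return total / n
```

Stratification makes the estimator converge much faster than plain Monte Carlo in one dimension; for u = 1 the known value is E1(1) ≈ 0.219384.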
Corrected Fourier series and its application to function approximation
Qing-Hua Zhang
2005-01-01
Any quasismooth function f(x) in a finite interval [0, x0], which has only a finite number of finite discontinuities and only a finite number of extremes, can be approximated by a uniformly convergent Fourier series and a correction function. The correction function consists of algebraic polynomials and Heaviside step functions and is required by the aperiodicity at the endpoints (i.e., f(0) ≠ f(x0)) and the finite discontinuities in between. The uniformly convergent Fourier series and the correction function are collectively referred to as the corrected Fourier series. We prove that in order for the mth derivative of the Fourier series to be uniformly convergent, the order of the polynomial need not exceed (m+1). In other words, including the no-more-than-(m+1) polynomial has eliminated the Gibbs phenomenon of the Fourier series until its mth derivative. The corrected Fourier series is then applied to function approximation; the procedures to determine the coefficients of the corrected Fourier series are illustrated in detail using examples.
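The endpoint-correction idea can be sketched in a few lines. Below is our own minimal illustration (not the paper's procedure): for f(x) = x² on [0, 1], subtracting the first-degree polynomial that interpolates the endpoint values leaves a residual that vanishes at both ends, so its sine series converges uniformly and the Gibbs oscillations of the naive expansion disappear.

```python
import math

def sine_coeffs(g, n_terms, n_quad=2000):
    # b_k = 2 * integral_0^1 g(x) sin(k*pi*x) dx via midpoint quadrature.
    bs = []
    for k in range(1, n_terms + 1):
        s = 0.0
        for j in range(n_quad):
            x = (j + 0.5) / n_quad
            s += g(x) * math.sin(k * math.pi * x)
        bs.append(2.0 * s / n_quad)
    return bs

def sine_sum(bs, x):
    return sum(b * math.sin((k + 1) * math.pi * x) for k, b in enumerate(bs))

f = lambda x: x * x
# Correction polynomial p(x) = x interpolates (0, f(0)) and (1, f(1)),
# so the residual r = f - p vanishes at both endpoints (no jump in the
# odd periodic extension, hence no Gibbs phenomenon).
p = lambda x: x
r = lambda x: f(x) - p(x)

N = 20
bs_raw = sine_coeffs(f, N)    # naive sine series of f itself
bs_corr = sine_coeffs(r, N)   # sine series of the corrected residual

grid = [i / 200 for i in range(1, 200)]
err_raw = max(abs(f(x) - sine_sum(bs_raw, x)) for x in grid)
err_corr = max(abs(f(x) - (p(x) + sine_sum(bs_corr, x))) for x in grid)
```

With 20 terms the corrected expansion is uniformly accurate to a few times 10⁻⁴, while the naive series still shows large Gibbs error near x = 1.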
Discrete approximations to vector spin models
Van Enter, Aernout C D [University of Groningen, Johann Bernoulli Institute of Mathematics and Computing Science, Postbus 407, 9700 AK Groningen (Netherlands); Kuelske, Christof [Ruhr-Universitaet Bochum, Fakultaet fuer Mathematik, D44801 Bochum (Germany); Opoku, Alex A, E-mail: A.C.D.v.Enter@math.rug.nl, E-mail: Christof.Kuelske@ruhr-uni-bochum.de, E-mail: opoku@math.leidenuniv.nl [Mathematisch Instituut, Universiteit Leiden, Postbus 9512, 2300 RA, Leiden (Netherlands)
2011-11-25
We strengthen a result from Kuelske and Opoku (2008 Electron. J. Probab. 13 1307-44) on the existence of effective interactions for discretized continuous-spin models. We also point out that such an interaction cannot exist at very low temperatures. Moreover, we compare two ways of discretizing continuous-spin models, and show that except for very low temperatures, they behave similarly in two dimensions. We also discuss some possibilities in higher dimensions. (paper)
Approximate inference for spatial functional data on massively parallel processors
Raket, Lars Lau; Markussen, Bo
2014-01-01
With continually increasing data sizes, the relevance of the big n problem of classical likelihood approaches is greater than ever. The functional mixed-effects model is a well established class of models for analyzing functional data. Spatial functional data in a mixed-effects setting...... in linear time. An extremely efficient GPU implementation is presented, and the proposed methods are illustrated by conducting a classical statistical analysis of 2D chromatography data consisting of more than 140 million spatially correlated observation points....
Diffusion approximation of neuronal models revisited
Čupera, Jakub
2014-01-01
Vol. 11, No. 1 (2014), pp. 11-25. ISSN 1547-1063. [10th International Workshop on Neural Coding (NC), Prague, 02.09.2012-07.09.2012]. R&D Projects: GA ČR(CZ) GAP103/11/0282. Institutional support: RVO:67985823. Keywords: stochastic model; neuronal activity; first-passage time. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 0.840, year: 2014
Topological approximation of the nonlinear Anderson model
Milovanov, Alexander V.; Iomin, Alexander
2014-06-01
We study the phenomena of Anderson localization in the presence of nonlinear interaction on a lattice. A class of nonlinear Schrödinger models with arbitrary power nonlinearity is analyzed. We conceive the various regimes of behavior, depending on the topology of resonance overlap in phase space, ranging from a fully developed chaos involving Lévy flights to pseudochaotic dynamics at the onset of delocalization. It is demonstrated that the quadratic nonlinearity plays a dynamically very distinguished role in that it is the only type of power nonlinearity permitting an abrupt localization-delocalization transition with unlimited spreading already at the delocalization border. We describe this localization-delocalization transition as a percolation transition on the infinite Cayley tree (Bethe lattice). It is found in the vicinity of the criticality that the spreading of the wave field is subdiffusive in the limit t → +∞. The second moment of the associated probability distribution grows with time as a power law ∝ t^α, with the exponent α = 1/3 exactly. Also we find for superquadratic nonlinearity that the analog pseudochaotic regime at the edge of chaos is self-controlling in that it has feedback on the topology of the structure on which the transport processes concentrate. Then the system automatically (without tuning of parameters) develops its percolation point. We classify this type of behavior in terms of self-organized criticality dynamics in Hilbert space. For subquadratic nonlinearities, the behavior is shown to be sensitive to the details of definition of the nonlinear term. A transport model is proposed based on modified nonlinearity, using the idea of "stripes" propagating the wave process to large distances. Theoretical investigations, presented here, are the basis for consistency analysis of the different localization-delocalization patterns in systems with many coupled degrees of freedom in association with the asymptotic properties of the
Approximating chiral quark models with linear σ-models
Broniowski, Wojciech; Golli, Bojan
2003-01-01
We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea.
John H. Summerfield
2015-01-01
This work investigates a one-dimensional model for the solid-state diffusion in a LiC6/LiMnO2 rechargeable cell. This cell is used in hybrid electric vehicles. In this environment the cell experiences low frequency electrical pulses that degrade the electrodes. The model’s starting point is Fick’s second law of diffusion. The Laplace transform is used to move from time as the independent variable to frequency as the independent variable. To better understand the effect of frequency changes on the cell, a transfer function is constructed. The transfer function is a transcendental function, so a Padé approximant is found to better describe the model at the origin. The governing equation is ∂c(r,t)/∂t = D(∂²c(r,t)/∂r² + (2/r) ∂c(r,t)/∂r).
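Building a Padé approximant from Taylor coefficients is a small linear-algebra exercise. The sketch below is a generic illustration (the paper's transfer function is not reproduced here; we use the exponential series as a stand-in): the denominator coefficients solve a small linear system, after which the numerator follows by convolution.

```python
import math

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Returns (p, q): numerator and denominator coefficient lists with
    q[0] = 1, matching the power series through order m + n.
    """
    # Denominator: sum_{j=1..n} c[m+i-j]*q[j] = -c[m+i] for i = 1..n.
    A = [[(c[m + i - j] if m + i - j >= 0 else 0.0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    rhs = [-c[m + i] for i in range(1, n + 1)]
    # Naive Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, n):
            f = A[row][col] / A[col][col]
            for k in range(col, n):
                A[row][k] -= f * A[col][k]
            rhs[row] -= f * rhs[col]
    q_tail = [0.0] * n
    for row in range(n - 1, -1, -1):
        q_tail[row] = (rhs[row] - sum(A[row][k] * q_tail[k]
                                      for k in range(row + 1, n))) / A[row][row]
    q = [1.0] + q_tail
    # Numerator by convolving the series with the denominator.
    p = [sum(c[i - j] * q[j] for j in range(min(i, n) + 1))
         for i in range(m + 1)]
    return p, q

def horner(coeffs, x):
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

c = [1.0 / math.factorial(k) for k in range(5)]  # Taylor of exp, order 4
p, q = pade(c, 2, 2)
approx = horner(p, 0.5) / horner(q, 0.5)
```

For the exponential series the [2/2] approximant is the classical (1 + x/2 + x²/12)/(1 - x/2 + x²/12), and at x = 0.5 it beats the degree-4 Taylor polynomial of the same data.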
Multi-level methods and approximating distribution functions
Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E. [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)
2016-07-15
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
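The telescoping idea behind the multi-level method can be shown on a toy expectation. The sketch below is our own illustration (not the paper's algorithm, which targets stochastic simulation of reaction networks): the level-ℓ "approximation" is a Taylor truncation of exp, corrections are estimated from coupled samples sharing the same random draw, and sample counts shrink as the corrections get cheaper to estimate.

```python
import math
import random

def f_level(w, level):
    # Level-l approximation: Taylor series of exp truncated at degree 2**l.
    return sum(w ** k / math.factorial(k) for k in range(2 ** level + 1))

def mlmc_estimate(n_per_level, rng):
    # Telescoping sum: E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}].
    # Each correction is estimated from *coupled* samples: the same
    # normal draw feeds both the fine and the coarse level.
    est = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            w = rng.gauss(0.0, 1.0)
            if level == 0:
                acc += f_level(w, 0)
            else:
                acc += f_level(w, level) - f_level(w, level - 1)
        est += acc / n
    return est

rng = random.Random(42)
# Many cheap samples at level 0, progressively fewer for the corrections.
est = mlmc_estimate([40000, 10000, 2500, 600], rng)
# Exact target: E[exp(W)] = exp(1/2) for W ~ N(0, 1).
```

Because the coupled corrections have small variance, only a handful of expensive high-level samples are needed, which is exactly the cost saving the multi-level method exploits.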
Approximated Function Based Spectral Gradient Algorithm for Sparse Signal Recovery
Weifeng Wang
2014-02-01
Numerical algorithms for the l0-norm regularized non-smooth non-convex minimization problems have recently become a topic of great interest within signal processing, compressive sensing, statistics, and machine learning. Nevertheless, the l0-norm makes the problem combinatorial and generally computationally intractable. In this paper, we construct a new surrogate function to approximate l0-norm regularization, and subsequently make the discrete optimization problem continuous and smooth. Then we use the well-known spectral gradient algorithm to solve the resulting smooth optimization problem. Experiments are provided which illustrate that this method is very promising.
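The approach can be sketched generically. In this illustration (our own choices, not the paper's: the surrogate x²/(x²+ε) is one common smooth stand-in for the l0 penalty, and the spectral step is the Barzilai-Borwein rule) a tiny least-squares problem with a sparse ground truth is minimized by spectral gradient descent, keeping the best iterate since the raw BB iteration is nonmonotone.

```python
def grad_obj(x, A, b, lam, eps):
    # Gradient of 0.5*||A x - b||^2 + lam * sum_i x_i^2 / (x_i^2 + eps).
    m, n = len(A), len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
    for j in range(n):
        # derivative of the smooth l0 surrogate: 2*eps*x / (x^2 + eps)^2
        g[j] += lam * 2.0 * eps * x[j] / (x[j] ** 2 + eps) ** 2
    return g

def objective(x, A, b, lam, eps):
    m, n = len(A), len(x)
    r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
    fit = 0.5 * sum(v * v for v in r)
    pen = lam * sum(t * t / (t * t + eps) for t in x)
    return fit + pen

def spectral_gradient(A, b, lam=0.05, eps=1e-2, iters=200):
    # Barzilai-Borwein (spectral) step sizes; keep the best iterate seen.
    n = len(A[0])
    x = [0.0] * n
    g = grad_obj(x, A, b, lam, eps)
    best, best_val = x[:], objective(x, A, b, lam, eps)
    alpha = 1e-2
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad_obj(x_new, A, b, lam, eps)
        s = [xn - xo for xn, xo in zip(x_new, x)]
        y = [gn - go for gn, go in zip(g_new, g)]
        sy = sum(si * yi for si, yi in zip(s, y))
        alpha = sum(si * si for si in s) / sy if sy > 1e-12 else 1e-2
        x, g = x_new, g_new
        val = objective(x, A, b, lam, eps)
        if val < best_val:
            best, best_val = x[:], val
    return best

# Tiny demo: 5 measurements of a 4-dimensional sparse signal.
A = [[1, 0, 2, -1], [0, 1, -1, 1], [2, 1, 0, 0], [1, -1, 1, 0], [0, 2, 0, 1]]
b = [2.5, -1.0, 3.0, 1.5, -1.0]          # = A applied to [1.5, 0, 0, -1]
x_hat = spectral_gradient(A, b)
```

The surrogate is close to an indicator of nonzeroness for |x| much larger than √ε, which is what lets a smooth solver mimic l0 regularization.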
Perturbative corrections for approximate inference in gaussian latent variable models
Opper, Manfred; Paquet, Ulrich; Winther, Ole
2013-01-01
Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....
Approximate models for broken clouds in stochastic radiative transfer theory
Doicu, Adrian; Efremenko, Dmitry S.; Loyola, Diego; Trautmann, Thomas
2014-01-01
This paper presents approximate models in stochastic radiative transfer theory. The independent column approximation and its modified version with a solar source computed in a full three-dimensional atmosphere are formulated in a stochastic framework and for arbitrary cloud statistics. The nth-order stochastic models describing the independent column approximations are equivalent to the nth-order stochastic models for the original radiance fields in which the gradient vectors are neglected. Fast approximate models are further derived on the basis of zeroth-order stochastic models and the independent column approximation. The so-called “internal mixing” models assume a combination of the optical properties of the cloud and the clear sky, while the “external mixing” models assume a combination of the radiances corresponding to completely overcast and clear skies. A consistent treatment of internal and external mixing models is provided, and a new parameterization of the closure coefficient in the effective thickness approximation is given. An efficient computation of the closure coefficient for internal mixing models, using a previously derived vector stochastic model as a reference, is also presented. Equipped with appropriate look-up tables for the closure coefficient, these models can easily be integrated into operational trace gas retrieval systems that exploit absorption features in the near-IR solar spectrum. - Highlights: • Independent column approximation in a stochastic setting. • Fast internal and external mixing models for total and diffuse radiances. • Efficient optimization of internal mixing models to match reference models
Approximate Bayesian computation for forward modeling in cosmology
Akeret, Joël; Refregier, Alexandre; Amara, Adam; Seehars, Sebastian; Hasner, Caspar
2015-01-01
Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, the likelihood function may however be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, the simulation of mock data sets can often be made through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on the sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte-Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
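The core ABC loop is short enough to sketch. Below is a plain rejection version for a Gaussian toy model, in the spirit of the paper's own toy test but not its PMC implementation: the summary statistic, distance, tolerance, and prior range are all our own choices.

```python
import random

def abc_rejection(observed, n_draws=20000, eps=0.1, rng=None):
    # Likelihood-free inference of a Gaussian mean: draw a parameter
    # from the prior, forward-simulate a data set, and accept the draw
    # if the summary statistic (the sample mean) lands within eps of
    # the observed one.
    rng = rng or random.Random(7)
    n = len(observed)
    obs_mean = sum(observed) / n
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)                 # prior draw
        sim = [rng.gauss(mu, 1.0) for _ in range(n)]  # forward model
        if abs(sum(sim) / n - obs_mean) < eps:      # distance on summaries
            accepted.append(mu)
    return accepted

rng = random.Random(1)
data = [rng.gauss(2.0, 1.0) for _ in range(50)]  # "observed" data, mean 2
post = abc_rejection(data)
post_mean = sum(post) / len(post)
```

The accepted draws approximate the posterior; PMC refines exactly this scheme by iteratively shrinking eps and importance-reweighting the population instead of discarding most prior draws.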
Approximate Treatment of the Dirac Equation with Hyperbolic Potential Function
Durmus, Aysen
2018-03-01
The time independent Dirac equation is solved analytically for equal scalar and vector hyperbolic potential function in the presence of Greene and Aldrich approximation scheme. The bound state energy equation and spinor wave functions expressed by the hypergeometric function have been obtained in detail with asymptotic iteration approach. In order to indicate the accuracy of this different approach proposed to solve second order linear differential equations, we present that in the non-relativistic limit, analytical solutions of the Dirac equation converge to those of the Schrödinger one. We introduce numerical results of the theoretical analysis for hyperbolic potential function. Bound states corresponding to arbitrary values of n and l are reported for potential parameters covering a wide range of interaction. Also, we investigate relativistic vibrational energy spectra of alkali metal diatomic molecules in the different electronic states. It is observed that theoretical vibrational energy values are consistent with experimental Rydberg-Klein-Rees (RKR) results and vibrational energies of NaK, K_2 and KRb diatomic molecules interacting with hyperbolic potential smoothly converge to the experimental dissociation limit D_e=2508cm^{-1}, 254cm^{-1} and 4221cm^{-1}, respectively.
Reply to Steele & Ferrer: Modeling Oscillation, Approximately or Exactly?
Oud, Johan H. L.; Folmer, Henk
2011-01-01
This article addresses modeling oscillation in continuous time. It criticizes Steele and Ferrer's article "Latent Differential Equation Modeling of Self-Regulatory and Coregulatory Affective Processes" (2011), particularly the approximate estimation procedure applied. This procedure is the latent version of the local linear approximation procedure…
Controlled Nonlinear Stochastic Delay Equations: Part I: Modeling and Approximations
Kushner, Harold J.
2012-01-01
This two-part paper deals with “foundational” issues that have not been previously considered in the modeling and numerical optimization of nonlinear stochastic delay systems. There are new classes of models, such as those with nonlinear functions of several controls (such as products), each with its own delay, controlled random Poisson measure driving terms, admissions control with delayed retrials, and others. There are two basic and interconnected themes for these models. The first, dealt with in this part, concerns the definition of admissible control. The classical definition of an admissible control as a nonanticipative relaxed control is inadequate for these models and needs to be extended. This is needed for the convergence proofs of numerical approximations for optimal controls as well as to have a well-defined model. It is shown that the new classes of admissible controls do not enlarge the range of the value functions, are closed (together with the associated paths) under weak convergence, and are approximatable by ordinary controls. The second theme, dealt with in Part II, concerns transportation equation representations and their role in the development of numerical algorithms with much reduced memory and computational requirements.
Bayesian Parameter Estimation via Filtering and Functional Approximations
Matthies, Hermann G.; Litvinenko, Alexander; Rosic, Bojana V.; Zander, Elmar
2016-01-01
The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This note describes what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
Linear approximation model network and its formation via ...
To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...
Analysis of the dynamical cluster approximation for the Hubbard model
Aryanpour, K.; Hettler, M. H.; Jarrell, M.
2002-01-01
We examine a central approximation of the recently introduced Dynamical Cluster Approximation (DCA) by example of the Hubbard model. By both analytical and numerical means we study non-compact and compact contributions to the thermodynamic potential. We show that approximating non-compact diagrams by their cluster analogs results in a larger systematic error as compared to the compact diagrams. Consequently, only the compact contributions should be taken from the cluster, whereas non-compact ...
Singlet structure function F_1 in double-logarithmic approximation
Ermolaev, B. I.; Troyan, S. I.
2018-03-01
The conventional ways to calculate the perturbative component of the DIS singlet structure function F_1 involve approaches based on BFKL which account for the single-logarithmic contributions accompanying the Born factor 1/x. In contrast, we account for the double-logarithmic (DL) contributions unrelated to 1/x, which had for that reason been disregarded as negligibly small. We calculate the singlet F_1 in the double-logarithmic approximation (DLA) and account at the same time for the running α_s effects. We start with a total resummation of both quark and gluon DL contributions and obtain the explicit expression for F_1 in DLA. Then, applying the saddle-point method, we calculate the small-x asymptotics of F_1, which proves to be of the Regge form with the leading singularity ω_0 = 1.066. Its large value compensates for the lack of the factor 1/x in the DLA contributions. Therefore, this Reggeon can be identified as a new Pomeron, which can be quite important for the description of all QCD processes involving the vacuum (Pomeron) exchanges at very high energies. We prove that the expression for the small-x asymptotics of F_1 scales: it depends on a single variable Q^2/x^2 only instead of x and Q^2 separately. Finally, we show that the small-x asymptotics reliably represent F_1 at x ≤ 10^{-6}.
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-01-01
of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous
Davies, Patrick Laurie
2014-01-01
Contents: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; A Concept of Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness; Discrete Models; The Empirical Density; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data; Outliers; Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...
Finite Element Approximation of the FENE-P Model
Barrett, John; Boyaval, Sébastien
2017-01-01
We extend our analysis on the Oldroyd-B model in Barrett and Boyaval [1] to consider the finite element approximation of the FENE-P system of equations, which models a dilute polymeric fluid, in a bounded domain D ⊂ R^d, d = 2 or 3, subject to no flow boundary conditions. Our schemes are based on approximating the pressure and the symmetric conformation tensor by either (a) piecewise constants or (b) continuous piecewise linears. In case (a) the velocity field is approximated by c...
Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
2016-12-01
Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
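The free energy used as the value estimate in FERL-style methods has a closed form for an RBM with binary hidden units. The sketch below (our own toy weights, not the authors' networks) verifies that closed form against brute-force summation over all hidden configurations.

```python
import math
from itertools import product

def free_energy(v, W, b_vis, b_hid):
    # Closed form: F(v) = -b_vis.v - sum_j log(1 + exp(b_hid_j + W[:,j].v)).
    # The negative free energy -F(v) serves as the value estimate.
    act = sum(bv * vi for bv, vi in zip(b_vis, v))
    for j, bh in enumerate(b_hid):
        pre = bh + sum(W[i][j] * v[i] for i in range(len(v)))
        act += math.log(1.0 + math.exp(pre))
    return -act

def free_energy_brute(v, W, b_vis, b_hid):
    # Definition: F(v) = -log sum_h exp(-E(v, h)), summing over every
    # binary hidden vector h, with the usual RBM energy E(v, h).
    n_hid = len(b_hid)
    total = 0.0
    for h in product([0, 1], repeat=n_hid):
        energy = -(sum(bv * vi for bv, vi in zip(b_vis, v))
                   + sum(bh * hj for bh, hj in zip(b_hid, h))
                   + sum(W[i][j] * v[i] * h[j]
                         for i in range(len(v)) for j in range(n_hid)))
        total += math.exp(-energy)
    return -math.log(total)

# Toy network: 2 visible units, 3 hidden units, arbitrary weights.
W = [[0.5, -0.3, 0.8], [-0.2, 0.4, 0.1]]
b_vis = [0.1, -0.1]
b_hid = [0.0, 0.2, -0.4]
v = [1.0, 0.0]
```

Because the hidden units factorize given v, the exponential-size sum collapses to one log-sum term per hidden unit, which is what makes the free energy cheap enough to use as a function approximator.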
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed
2015-07-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced order bilinear state space model, the well established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi discretized approximation highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the gaussian interpolation has been studied.
Optimized implementations of rational approximations for the Voigt and complex error function
Schreier, Franz
2011-01-01
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued functions. For the complex error function w(x+iy), whose real part is the Voigt function K(x,y), code optimizations of rational approximations are investigated. An assessment of requirements for atmospheric radiative transfer modeling indicates a y range over many orders of magnitude and accuracy better than 10^{-4}. Following a brief survey of complex error function algorithms in general and rational function approximations in particular, the problems associated with subdivisions of the x, y plane (i.e., conditional branches in the code) are discussed and practical aspects of Fortran and Python implementations are considered. Benchmark tests of a variety of algorithms demonstrate that programming language, compiler choice, and implementation details influence computational speed and there is no unique ranking of algorithms. A new implementation, based on subdivision of the upper half-plane in only two regions, combining Weideman's rational approximation for small |x|+y<15 and Humlicek's rational approximation otherwise, is shown to be efficient and accurate for all x, y.
A full scale approximation of covariance functions for large spatial data sets
Sang, Huiyan; Huang, Jianhua Z.
2011-10-10
Gaussian process models have been widely used in spatial statistics but face tremendous computational challenges for very large data sets. The model fitting and spatial prediction of such models typically require O(n³) operations for a data set of size n. Various approximations of the covariance functions have been introduced to reduce the computational cost. However, most existing approximations cannot simultaneously capture both the large- and the small-scale spatial dependence. A new approximation scheme is developed to provide a high quality approximation to the covariance function at both the large and the small spatial scales. The new approximation is the summation of two parts: a reduced rank covariance and a compactly supported covariance obtained by tapering the covariance of the residual of the reduced rank approximation. Whereas the former part mainly captures the large-scale spatial variation, the latter part captures the small-scale, local variation that is unexplained by the former part. By combining the reduced rank representation and sparse matrix techniques, our approach allows for efficient computation for maximum likelihood estimation, spatial prediction and Bayesian inference. We illustrate the new approach with simulated and real data sets. © 2011 Royal Statistical Society.
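A toy numerical sketch of the full-scale idea (our construction, not the authors' code): a reduced-rank part built from m knots plus a compactly supported taper applied to the residual covariance. All kernel choices and parameter values here are illustrative.

```python
import numpy as np

def exp_cov(a, b, rho=0.1):
    """Exponential covariance on 1-D locations."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / rho)

def spherical_taper(a, b, gamma=0.05):
    """Spherical covariance taper: compactly supported on |h| < gamma."""
    h = np.abs(a[:, None] - b[None, :]) / gamma
    return np.where(h < 1, (1 - h) ** 2 * (1 + h / 2), 0.0)

n, m = 400, 20
s = np.linspace(0, 1, n)          # observation locations
knots = np.linspace(0, 1, m)      # knots for the reduced-rank part

C = exp_cov(s, s)
C_sk = exp_cov(s, knots)
C_kk = exp_cov(knots, knots)
C_lr = C_sk @ np.linalg.solve(C_kk, C_sk.T)        # reduced-rank approximation
C_fs = C_lr + (C - C_lr) * spherical_taper(s, s)   # add tapered residual

err_lr = np.linalg.norm(C - C_lr)   # reduced rank alone
err_fs = np.linalg.norm(C - C_fs)   # full-scale: rank + tapered residual
```

Since the taper keeps the near-diagonal residual intact, the combined approximation is strictly closer to C than the reduced-rank part alone, which is the abstract's main point in miniature.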
Śmiga, Szymon; Fabiano, Eduardo; Laricchia, Savio; Constantin, Lucian A; Della Sala, Fabio
2015-04-21
We analyze the methodology and the performance of subsystem density functional theory (DFT) with meta-generalized gradient approximation (meta-GGA) exchange-correlation functionals for non-bonded molecular systems. Meta-GGA functionals depend on the Kohn-Sham kinetic energy density (KED), which is not known as an explicit functional of the density. Therefore, they cannot be directly applied in subsystem DFT calculations. We propose a Laplacian-level approximation to the KED which overcomes this limitation and provides a simple and accurate way to apply meta-GGA exchange-correlation functionals in subsystem DFT calculations. The resulting density and energy errors, with respect to the corresponding supermolecular calculations, are comparable with those of conventional approaches, depending almost exclusively on the approximations in the non-additive kinetic embedding term. An embedding energy error decomposition explains the accuracy of our method.
Efficient and accurate log-Lévy approximations to Lévy driven LIBOR models
Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David
2011-01-01
The LIBOR market model is very popular for pricing interest rate derivatives, but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). In this work, we con...... ratchet caps show that the approximations perform very well. In addition, we also consider the log-Lévy approximation of annuities, which offers good approximations for high volatility regimes....
Adaptive control using neural networks and approximate models.
Narendra, K S; Mukhopadhyay, S
1997-01-01
The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
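A minimal sketch of why a model that is linear in the control input simplifies the controller, in a NARMA-L2-style structure. Known functions f and g stand in for the neural networks, and the plant here is chosen by us to coincide with the approximate model, so the point about the explicit control law is isolated from identification error.

```python
import math

def f(y):            # drift part of the approximate model
    return 0.6 * y + 0.1 * math.sin(y)

def g(y):            # control-gain part, kept bounded away from zero
    return 1.0 + 0.2 * y * y

def plant(y, u):     # the "true" plant; here it matches the model exactly
    return f(y) + g(y) * u

r = 1.5              # constant reference to track
y = 0.0
for _ in range(20):
    # Because the model is linear in u, the controller is explicit:
    u = (r - f(y)) / g(y)
    y = plant(y, u)
```

With a model nonlinear in u, this inversion would instead require an iterative numerical solve at every step, which is the computational burden the abstract refers to.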
Analytical models approximating individual processes: a validation method.
Favier, C; Degallier, N; Menkès, C E
2010-12-01
Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed: it estimates the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, showing how to estimate whether an approximation over- or under-fits the original model, how to invalidate an approximation, and how to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
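A rough sketch of the comparison the abstract proposes, on a toy model of our own: the error of a deterministic approximation is measured against the stochastic variability of the individual-based model it approximates. All rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_birth, p_death, steps, runs = 200, 0.12, 0.08, 60, 400

def stochastic_run():
    """Individual-based density-dependent birth-death chain."""
    n = 20
    for _ in range(steps):
        births = rng.binomial(n, p_birth * (1 - n / N))
        deaths = rng.binomial(n, p_death)
        n = n + births - deaths
    return n

def deterministic_run():
    """Mean-field (logistic-type) approximation of the same process."""
    n = 20.0
    for _ in range(steps):
        n = n + n * p_birth * (1 - n / N) - n * p_death
    return n

finals = np.array([stochastic_run() for _ in range(runs)])
bias = abs(finals.mean() - deterministic_run())  # approximation error
sd = finals.std()                                # stochastic variability
```

The validation criterion in the spirit of the abstract is then whether the bias is small compared with the model's own variability.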
Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis
Layton, William J
2012-01-01
This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.
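A minimal numerical sketch of approximate deconvolution in the van Cittert form D_N = sum over n up to N of (I - G)^n, using a simple periodic moving-average filter for G. This is our toy, not code from the book.

```python
import numpy as np

def filter_G(u):
    """Discrete smoothing filter on a periodic grid."""
    return (np.roll(u, -1) + 2 * u + np.roll(u, 1)) / 4.0

def deconvolve(ubar, N):
    """Van Cittert approximate deconvolution: D_N(ubar) = sum_{n=0}^N (I-G)^n ubar."""
    term, total = ubar.copy(), ubar.copy()
    for _ in range(N):
        term = term - filter_G(term)
        total = total + term
    return total

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)   # "true" field
ubar = filter_G(u)                     # filtered field

# Reconstruction error shrinks as the deconvolution order N grows
errors = [np.max(np.abs(u - deconvolve(ubar, N))) for N in (0, 1, 2, 3)]
```

In Fourier space each mode carries the factor 1 - (1 - g_hat)^(N+1), so for a filter with symbol in (0, 1] the error decreases monotonically with N, which the errors list exhibits.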
Approximating methods for intractable probabilistic models: Applications in neuroscience
Højen-Sørensen, Pedro
2002-01-01
This thesis investigates various methods for carrying out approximate inference in intractable probabilistic models. By capturing the relationships between random variables, the framework of graphical models hints at which sets of random variables pose a problem to the inferential step. The appro...
Diffusion approximation for modeling of 3-D radiation distributions
Zardecki, A.; Gerstl, S.A.W.; De Kinder, R.E. Jr.
1985-01-01
A three-dimensional transport code DIF3D, based on the diffusion approximation, is used to model the spatial distribution of radiation energy arising from volumetric isotropic sources. Future work will be concerned with the determination of irradiances and modeling of realistic scenarios, relevant to the battlefield conditions. 8 refs., 4 figs
Fuzzy Approximate Model for Distributed Thermal Solar Collectors Control
Elmetennani, Shahrazed
2014-07-01
This paper deals with the problem of controlling concentrated solar collectors, where the objective is to make the outlet temperature of the collector track a desired reference. The performance of the novel approximate model based on fuzzy theory, introduced by the authors in [1], is evaluated in comparison with other methods in the literature. The proposed approximation is a low-order state representation derived from the physical distributed model. It reproduces the temperature transfer dynamics through the collectors accurately and allows the simplification of the control design. Simulation results show interesting performance of the proposed controller.
On the functional integral approach in quantum statistics. 1. Some approximations
Dai Xianxi.
1990-08-01
In this paper the susceptibility of a Kondo system in a fairly wide temperature region is calculated in the first harmonic approximation in a functional integral approach. The comparison with that of the renormalization group theory shows that in this region the two results agree quite well. The expansion of the partition function with infinite independent harmonics for the Anderson model is studied. Some symmetry relations are generalized. It is a challenging problem to develop a functional integral approach including diagram analysis, mixed mode effects and some exact relations in the Anderson system proved in the functional integral approach. These topics will be discussed in the next paper. (author). 22 refs, 1 fig
Wavelet series approximation using wavelet function with compactly ...
The wavelets generated by a scaling function with compact support are useful in various applications, especially for the reconstruction of functions. Generally, the computational process is faster when the support of the scaling function is smaller, so computational errors accumulate from one level to the next. In this article, the ...
Yunfeng Wu
2014-01-01
This paper presents a novel adaptive linear and normalized combination (ALNC) method that can be used to combine the component radial basis function networks (RBFNs) to implement better function approximation and regression tasks. The optimization of the fusion weights is obtained by solving a constrained quadratic programming problem. According to the instantaneous errors generated by the component RBFNs, the ALNC is able to perform the selective ensemble of multiple learners by adaptively adjusting the fusion weights from one instance to another. The results of the experiments on eight synthetic function approximation and six benchmark regression data sets show that the ALNC method can effectively help the ensemble system achieve a higher accuracy (measured in terms of mean-squared error) and better fidelity (characterized by the normalized correlation coefficient) of approximation, in relation to the popular simple average, weighted average, and Bagging methods.
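The combination step can be sketched as a constrained quadratic programme: fusion weights are nonnegative and sum to one. Plain polynomial fits stand in for the RBF networks here; this is our illustration, not the ALNC implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 80)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(80)

# Three crude component approximators (polynomial fits of varying order)
preds = np.stack([np.polyval(np.polyfit(x, y, d), x) for d in (3, 5, 7)])

def mse(w):
    """Mean-squared error of the weighted combination."""
    return np.mean((y - w @ preds) ** 2)

# Constrained QP over the probability simplex (SLSQP handles it here)
w0 = np.full(3, 1 / 3)
res = minimize(mse, w0, bounds=[(0, 1)] * 3,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
mse_opt, mse_avg = mse(res.x), mse(w0)
```

Because the uniform average is itself a feasible point of the simplex, the optimized combination can never do worse than simple averaging on the fitting data, which mirrors the comparison reported in the abstract.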
Investigation of approximate models of experimental temperature characteristics of machines
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to the approximation of experimental data and to the creation of simulation mathematical models of thermal processes in machines, with the aim of reducing the duration of field tests and the temperature-induced error of machining. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using various approaches; and analysis and evaluation of the quality of the resulting models of the temperature characteristics of machines and of their time derivatives up to the third order. As a result of the performed research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
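A hedged toy version of the fitting step described above: polynomial models of increasing order are fitted to a synthetic first-order heating curve and their residuals compared. The machine-tool data themselves are not public, so the curve below is invented.

```python
import numpy as np

t = np.linspace(0.0, 8.0, 100)              # time, hours
temp = 12.0 * (1.0 - np.exp(-t / 2.5))      # synthetic temperature rise, deg C

resid = {}
for order in (1, 2, 3, 4):
    coeffs = np.polyfit(t, temp, order)      # least-squares polynomial model
    resid[order] = np.max(np.abs(temp - np.polyval(coeffs, t)))
```

The residual dictionary makes the model-complexity trade-off in the abstract concrete: higher polynomial order buys a smaller worst-case temperature error on this smooth curve.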
The dilute random field Ising model by finite cluster approximation
Benyoussef, A.; Saber, M.
1987-09-01
Using the finite cluster approximation, phase diagrams of bond and site diluted three-dimensional simple cubic Ising models with a random field have been determined. The resulting phase diagrams have the same general features for both bond and site dilution. (author). 7 refs, 4 figs
Evaluation of Gaussian approximations for data assimilation in reservoir models
Iglesias, Marco A.
2013-07-14
The Bayesian framework is the standard approach for data assimilation in reservoir modeling. This framework involves characterizing the posterior distribution of geological parameters in terms of a given prior distribution and data from the reservoir dynamics, together with a forward model connecting the space of geological parameters to the data space. Since the posterior distribution quantifies the uncertainty in the geologic parameters of the reservoir, the characterization of the posterior is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based on Gaussian approximations, are often used for characterizing the posterior distribution. Evaluating the performance of those Gaussian approximations is typically conducted by assessing their ability to reproduce the truth within the confidence interval provided by the ad hoc technique under consideration. This has the disadvantage of mixing up the approximation properties of the history matching algorithm employed with the information content of the particular observations used, making it hard to evaluate the effect of the ad hoc approximations alone. In this paper, we avoid this disadvantage by comparing the ad hoc techniques with a fully resolved state-of-the-art probing of the Bayesian posterior distribution. The ad hoc techniques whose performance we assess are based on (1) linearization around the maximum a posteriori estimate, (2) randomized maximum likelihood, and (3) ensemble Kalman filter-type methods. In order to fully resolve the posterior distribution, we implement a state-of-the-art Markov chain Monte Carlo (MCMC) method that scales well with respect to the dimension of the parameter space, enabling us to study realistic forward models, in two space dimensions, at a high level of grid refinement. Our
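As a pocket-sized illustration of the Gaussian-approximation theme (our own scalar toy, far from a reservoir model): in a linear-Gaussian setting an ensemble Kalman update with perturbed observations reproduces the exact Bayesian posterior, which is precisely the regime where the ad hoc techniques are unimpeachable.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000                      # ensemble size
prior = rng.normal(0.0, 1.0, n)  # prior N(0, 1)
obs_noise = 0.5                  # observation y = x + N(0, 0.5^2)
y = 1.2

# Ensemble Kalman analysis with perturbed observations
K = np.var(prior) / (np.var(prior) + obs_noise ** 2)   # Kalman gain
analysis = prior + K * (y + rng.normal(0, obs_noise, n) - prior)

# Exact conjugate posterior for comparison: mean y*s_p^2/(s_p^2+s_o^2)
exact_mean = y * 1.0 / (1.0 + obs_noise ** 2)
```

For nonlinear forward models this agreement breaks down, and that gap is what the paper quantifies against a fully resolved MCMC posterior.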
Mathematics of epidemics on networks from exact to approximate models
Kiss, István Z; Simon, Péter L
2017-01-01
This textbook provides an exciting new addition to the area of network science featuring a stronger and more methodical link of models to their mathematical origin and explains how these relate to each other with special focus on epidemic spread on networks. The content of the book is at the interface of graph theory, stochastic processes and dynamical systems. The authors set out to make a significant contribution to closing the gap between model development and the supporting mathematics. This is done by: Summarising and presenting the state-of-the-art in modeling epidemics on networks with results and readily usable models signposted throughout the book; Presenting different mathematical approaches to formulate exact and solvable models; Identifying the concrete links between approximate models and their rigorous mathematical representation; Presenting a model hierarchy and clearly highlighting the links between model assumptions and model complexity; Providing a reference source for advanced undergraduate...
Leaky-box approximation to the fractional diffusion model
Uchaikin, V V; Sibatov, R T; Saenko, V V
2013-01-01
Two models based on fractional differential equations for galactic cosmic ray diffusion are applied in the leaky-box approximation. One of them (Lagutin-Uchaikin, 2000) assumes a finite mean free path of cosmic ray particles; the other (Lagutin-Tyumentsev, 2004) uses a distribution with infinite mean distance between collisions with magnetic clouds, so that the trajectories are close to ballistic. Calculations demonstrate that imposing boundary conditions is incompatible with the spatial distributions given by the second model.
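The leaky-box picture itself reduces transport to a single balance equation, dN/dt = Q - N/tau_esc, whose steady state is N = Q*tau_esc. A quick sketch with arbitrary parameter values (standard textbook form, not the fractional models of the entry):

```python
Q, tau = 3.0, 2.0            # injection rate and escape time (arbitrary units)
N, dt = 0.0, 1e-3
for _ in range(20000):       # forward-Euler integration to t = 20 = 10 * tau
    N += dt * (Q - N / tau)

steady = Q * tau             # analytic steady-state particle number
```

After ten escape times the transient has decayed to a negligible level, so the integrated value sits on the analytic steady state.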
Foot trajectory approximation using the pendulum model of walking.
Fang, Juan; Vuckovic, Aleksandra; Galen, Sujay; Conway, Bernard A; Hunt, Kenneth J
2014-01-01
Generating a natural foot trajectory is an important objective in robotic systems for rehabilitation of walking. Human walking has pendular properties, so the pendulum model of walking has been used in bipedal robots which produce rhythmic gait patterns. Whether natural foot trajectories can be produced by the pendulum model needs to be addressed as a first step towards applying the pendulum concept in gait orthosis design. This study investigated circle approximation of the foot trajectories, with focus on the geometry of the pendulum model of walking. Three able-bodied subjects walked overground at various speeds, and foot trajectories relative to the hip were analysed. Four circle approximation approaches were developed, and best-fit circle algorithms were derived to fit the trajectories of the ankle, heel and toe. The study confirmed that the ankle and heel trajectories during stance and the toe trajectory in both the stance and the swing phases during walking at various speeds could be well modelled by a rigid pendulum. All the pendulum models were centred around the hip with pendular lengths approximately equal to the segment distances from the hip. This observation provides a new approach for using the pendulum model of walking in gait orthosis design.
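The "best-fit circle" step can be sketched with the algebraic (Kasa) least-squares fit; the arc below is synthetic and the hip-frame numbers are hypothetical, not the subjects' data.

```python
import numpy as np

theta = np.linspace(0.2, 1.4, 40)       # an arc, not a full circle
cx, cy, r = 0.1, 0.9, 0.85              # hypothetical centre and radius, metres
x = cx + r * np.cos(theta)
y = cy + r * np.sin(theta)

# Kasa fit: the circle (x-a)^2 + (y-b)^2 = r^2 rearranges to the linear
# system x^2 + y^2 = 2*a*x + 2*b*y + c, solvable in least squares.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
radius = np.sqrt(c + a ** 2 + b ** 2)
```

On noise-free data the fit recovers the generating centre and radius exactly; with measured trajectories the same linear solve gives the least-squares circle directly, with no iteration.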
Approximate dynamic fault tree calculations for modelling water supply risks
Lindhe, Andreas; Norberg, Tommy; Rosén, Lars
2012-01-01
Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
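The standard-Monte-Carlo flavour of the approach can be sketched on a two-component system (our toy, not the water-supply models): exponential lifetimes are sampled and the OR- and AND-gate failure probabilities at a mission time are checked against the closed forms.

```python
import numpy as np

rng = np.random.default_rng(7)
lam1, lam2, t = 0.4, 0.7, 1.0    # hypothetical failure rates and mission time
n = 200_000

t1 = rng.exponential(1 / lam1, n)
t2 = rng.exponential(1 / lam2, n)

p_or = np.mean((t1 < t) | (t2 < t))     # OR gate: either component fails
p_and = np.mean((t1 < t) & (t2 < t))    # AND gate: both components fail

q1, q2 = 1 - np.exp(-lam1 * t), 1 - np.exp(-lam2 * t)
exact_or = 1 - (1 - q1) * (1 - q2)
exact_and = q1 * q2
```

The compensation gates of the paper would enter as extra conditions on the sampled lifetimes; the sampling skeleton stays the same, which is why the method avoids building the full Markov model.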
An approximation method for diffusion based leaching models
Shukla, B.S.; Dignam, M.J.
1987-01-01
In connection with the fixation of nuclear waste in a glassy matrix, equations have been derived for leaching models based on a uniform concentration gradient approximation, and hence a uniform flux, therefore requiring the use of only Fick's first law. In this paper we improve on the uniform flux approximation, developing and justifying the approach. The resulting set of equations is solved to a satisfactory approximation for a matrix dissolving at a constant rate in a finite volume of leachant, to give analytical expressions for the time dependence of the thickness of the leached layer, the diffusional and dissolutional contributions to the flux, and the leachant composition. Families of curves are presented which cover the full range of all the physical parameters for this system. The same procedure can be readily extended to more complex systems. (author)
Deep-inelastic structure functions in an approximation to the bag theory
Jaffe, R.L.
1975-01-01
A cavity approximation to the bag theory developed earlier is extended to the treatment of forward virtual Compton scattering. In the Bjorken limit and for small values of ω (ω = |2p·q/q²|) it is argued that the operator nature of the bag boundaries might be ignored. Structure functions are calculated in one and three dimensions. Bjorken scaling is obtained. The model provides a realization of light-cone current algebra and possesses a parton interpretation. The structure functions show a quasielastic peak. The spreading of the structure functions about the peak is associated with confinement. As expected, Regge behavior is not obtained for large ω. The "momentum sum rule" is saturated, indicating that the hadron's charged constituents carry all the momentum in this model. νW_L is found to scale and is calculable. Application of the model to the calculation of spin-dependent and chiral-symmetry-violating structure functions is proposed. The nature of the intermediate states in this approximation is discussed. Problems associated with the cavity approximation are also discussed
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
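The weak-approximation notion used in this entry can be illustrated on a one-dimensional SDE with a known solution (geometric Brownian motion), far simpler than the infinite-dimensional HJM setting: the Monte Carlo-Euler estimate of E[X_T] is compared with the exact expectation.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0   # illustrative GBM parameters
n_paths, n_steps = 100_000, 50
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x = x + mu * x * dt + sigma * x * dw   # Euler-Maruyama step

# Weak error: bias of the sampled mean against E[X_T] = x0 * exp(mu * T)
weak_err = abs(x.mean() - x0 * np.exp(mu * T))
```

The observed error mixes the O(dt) time-discretization bias with the O(1/sqrt(n_paths)) statistical error, exactly the two contributions the paper's dual-based estimates are designed to separate.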
Approximate Model Checking of PCTL Involving Unbounded Path Properties
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with the state-space explosion that makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate the probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase identifies the bound k₀; (b) the second phase computes the probability of satisfying the k₀-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
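A small sketch of the two-phase idea on a toy DTMC of our own: a biased walk absorbed at 0 ("bad") or 3 ("goal"). Phase (a) grows the bound until the sampled reachability probability stabilises; phase (b) reports the bounded estimate as a stand-in for the unbounded-until probability, checked here against the gambler's-ruin closed form.

```python
import numpy as np

rng = np.random.default_rng(11)
p_up = 0.6          # probability of moving toward the goal state

def sample_reach(k, n=20_000):
    """Estimate P(reach state 3 within k steps | start at state 1)."""
    hits = 0
    for _ in range(n):
        s = 1
        for _ in range(k):
            if s in (0, 3):
                break
            s = s + 1 if rng.random() < p_up else s - 1
        hits += (s == 3)
    return hits / n

# Phase (a): double the bound until the estimate stops changing appreciably
k, est = 2, sample_reach(2)
while True:
    nxt = sample_reach(2 * k)
    if abs(nxt - est) < 0.01:
        break
    k, est = 2 * k, nxt

# Closed form for the corresponding unbounded until property
q = (1 - p_up) / p_up
exact = (1 - q) / (1 - q ** 3)
```

The stopping rule here is a crude surrogate for the paper's principled choice of k₀, but the shape of the procedure, bounded sampling only, is the same.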
Approximate scattering wave functions for few-particle continua
Briggs, J.S.
1990-01-01
An operator identity which allows the wave operator for N particles interacting pairwise to be expanded as products of operators in which fewer than N particles interact is given. This identity is used to derive approximate scattering wave functions for N-particle continua that avoid certain difficulties associated with Faddeev-type expansions. For example, a derivation is given of a scattering wave function used successfully recently to describe the three-particle continuum occurring in the electron impact ionization of the hydrogen atom
Fuzzy Universal Model Approximator for Distributed Solar Collector Field Control
Elmetennani, Shahrazed
2014-07-01
This paper deals with the control of concentrating parabolic solar collectors by forcing the outlet oil temperature to track a set reference. A fuzzy universal approximate model is introduced in order to accurately reproduce the behavior of the system dynamics. The proposed model is a low order state space representation derived from the partial differential equation describing the oil temperature evolution using fuzzy transform theory. The resulting set of ordinary differential equations simplifies the system analysis and the control law design and is suitable for real time control implementation. Simulation results show good performance of the proposed model.
Piecewise quadratic Lyapunov functions for stability verification of approximate explicit MPC
Morten Hovd
2010-04-01
Explicit MPC of constrained linear systems is known to result in a piecewise affine controller and therefore also piecewise affine closed-loop dynamics. The complexity of such analytic formulations of the control law can grow exponentially with the prediction horizon. Suboptimal solutions offer a trade-off in terms of complexity, and several approaches can be found in the literature for the construction of approximate MPC laws. In the present paper a piecewise quadratic (PWQ) Lyapunov function is used for the stability verification of an approximate explicit Model Predictive Control (MPC) law. A novel relaxation method is proposed for the LMI criteria on the Lyapunov function design. This relaxation is applicable to the design of PWQ Lyapunov functions for discrete-time piecewise affine systems in general.
Swiss-cheese models and the Dyer-Roeder approximation
Fleury, Pierre, E-mail: fleury@iap.fr [Institut d' Astrophysique de Paris, UMR-7095 du CNRS, Université Pierre et Marie Curie, 98 bis, boulevard Arago, 75014 Paris (France)
2014-06-01
In view of interpreting the cosmological observations precisely, especially when they involve narrow light beams, it is crucial to understand how light propagates in our statistically homogeneous, clumpy, Universe. Among the various approaches to tackle this issue, Swiss-cheese models propose an inhomogeneous spacetime geometry which is an exact solution of Einstein's equation, while the Dyer-Roeder approximation deals with inhomogeneity in an effective way. In this article, we demonstrate that the distance-redshift relation of a certain class of Swiss-cheese models is the same as the one predicted by the Dyer-Roeder approach, at a well-controlled level of approximation. Both methods are therefore equivalent when applied to the interpretation of, e.g., supernova observations. The proof relies on completely analytical arguments, and is illustrated by numerical results.
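In the fully homogeneous limit (smoothness parameter equal to one) of an Einstein-de Sitter background, the Dyer-Roeder distance reduces to the standard angular-diameter distance, D_A(z) proportional to (1 - (1+z)^(-1/2))/(1+z), which peaks at z = 5/4. A quick numerical check of that textbook formula (our sketch, not the paper's Swiss-cheese computation):

```python
import numpy as np

z = np.linspace(0.01, 4.0, 4000)
# EdS angular-diameter distance in units of 2c/H0
d_a = (1.0 - (1.0 + z) ** -0.5) / (1.0 + z)

z_peak = z[np.argmax(d_a)]   # analytic maximum is at z = 5/4
```

Setting the derivative of u^(-1) - u^(-3/2) to zero with u = 1 + z gives u = 9/4, i.e. z = 1.25, and the grid search lands on the same value.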
Finite element approximation to a model problem of transonic flow
Tangmanee, S.
1986-12-01
A model problem of transonic flow, the Tricomi equation, posed in a domain Ω ⊂ ℝ² bounded by a rectangular-curved boundary, is formulated as a system of symmetric positive differential equations. The finite element method is then applied. When the triangulation of Ω̄ is made of quadrilaterals and the approximation space consists of Lagrange polynomials, we obtain the error estimates. 14 refs, 1 fig
Nucleon-pair approximation to the nuclear shell model
Zhao, Y.M., E-mail: ymzhao@sjtu.edu.cn [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Arima, A. [Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240 (China); Musashi Gakuen, 1-26-1 Toyotamakami Nerima-ku, Tokyo 176-8533 (Japan)
2014-12-01
Atomic nuclei are complex systems of nucleons (protons and neutrons). Nucleons interact with each other via an attractive, short-range force. This feature of the interaction leads to a pattern of dominantly monopole and quadrupole correlations between like particles (i.e., proton-proton and neutron-neutron correlations) in low-lying states of atomic nuclei. As a consequence, among dozens or even hundreds of possible types of nucleon pairs, very few nucleon pairs, such as proton and neutron pairs with spin zero, two (in some cases spin four), and occasionally isoscalar spin-aligned proton-neutron pairs, play important roles in low-energy nuclear structure. The nucleon-pair approximation therefore provides us with an efficient truncation scheme of the full shell model configurations, which are otherwise too large to handle for medium and heavy nuclei in the foreseeable future. Furthermore, the nucleon-pair approximation leads to simple pictures in physics, as the dimension of the nucleon-pair subspace is always small. The present paper aims at a sound review of its history, formulation, validity, applications, as well as its link to previous approaches, with a focus on the new developments in the last two decades. The applicability of the nucleon-pair approximation and numerical calculations of low-lying states for realistic atomic nuclei are demonstrated with examples. Applications of pair approximations to other problems are also discussed.
Cafiero, Mauricio; Gonzalez, Carlos
2005-01-01
We show that potentials for exchange-correlation functionals within the Kohn-Sham density-functional-theory framework may be written as potentials for simpler functionals multiplied by a factor close to unity, and that in a self-consistent field calculation, these effective potentials find the correct self-consistent solutions. This simple theory is demonstrated with self-consistent exchange-only calculations of the atomization energies of some small molecules using the Perdew-Kurth-Zupan-Blaha (PKZB) meta-generalized-gradient-approximation (meta-GGA) exchange functional. The atomization energies obtained with our method agree with or surpass previous meta-GGA calculations performed in a non-self-consistent manner. The results of this work suggest the utility of this simple theory for approximating exchange-correlation potentials corresponding to energy functionals too complicated to generate closed forms for their potentials. We hope that this method will encourage the development of complex functionals which have correct boundary conditions and are free of self-interaction errors, without the worry that the functionals are too complex to differentiate to obtain potentials.
H4: A challenging system for natural orbital functional approximations
Ramos-Cordoba, Eloy; Lopez, Xabier; Piris, Mario; Matito, Eduard
2015-01-01
The correct description of nondynamic correlation by electronic structure methods not belonging to the multireference family is a challenging issue. The transition of D2h to D4h symmetry in the H4 molecule is among the simplest archetypal examples to illustrate the consequences of missing nondynamic correlation effects. The resurgence of interest in density matrix functional methods has brought several new methods, including the family of Piris Natural Orbital Functionals (PNOF). In this work, we compare PNOF5 and PNOF6, which include nondynamic electron correlation effects to some extent, with other standard ab initio methods in the H4 D4h/D2h potential energy surface (PES). Thus far, the wrongful behavior of single-reference methods at the D2h–D4h transition of H4 has been attributed to a wrong account of nondynamic correlation effects, whereas in geminal-based approaches, it has been assigned to a wrong coupling of spins and the localized nature of the orbitals. We will show that actually interpair nondynamic correlation is the key to a cusp-free, qualitatively correct description of the H4 PES. By introducing interpair nondynamic correlation, PNOF6 is shown to avoid cusps and provide the correct smooth PES features at distances close to the equilibrium, total and local spin properties along with the correct electron delocalization, as reflected by natural orbitals and multicenter delocalization indices.
Approximating Smooth Step Functions Using Partial Fourier Series Sums
2012-09-01
interp1(xt(ii), smoothstepbez(t(ii), min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); ii = find(abs(t - tau/2) <= epi); iii = t(ii...interp1(xt(ii), smoothstepbez(rt, min(rt), max(rt), 'y'), t(ii), 'linear', 'extrap'); % stepm(ii) = 1 - interp1(xt(ii), smoothstepbez(t(ii...min(t(ii)), max(t(ii)), 'y'), t(ii), 'linear', 'extrap'); In this case, because x is also defined as a function of the independent parameter
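The MATLAB fragments above are extraction-damaged implementation details; the underlying idea, a partial Fourier series sum approximating a step function, can be sketched in a few lines. This is a hedged illustration (a plain square wave with an assumed term count), not the report's actual code:

```python
import math

def square_wave_partial_sum(t, n_terms, period=2 * math.pi):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum_{k=1..N} sin((2k-1) w t) / (2k-1)."""
    w = 2 * math.pi / period
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * w * t) / (2 * k - 1) for k in range(1, n_terms + 1)
    )

# More terms bring the sum closer to the ideal step away from the jump,
# while Gibbs oscillations persist near the discontinuity -- the motivation
# for smoothed step constructions like the one excerpted above.
approx = square_wave_partial_sum(math.pi / 2, 200)
```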
Approximate relativistic corrections to atomic radial wave functions
Cowan, R.D.; Griffin, D.C.
1976-01-01
The mass-velocity and Darwin terms of the one-electron-atom Pauli equation have been added to the Hartree-Fock differential equations by using the HX formula to calculate a local central field potential for use in these terms. Introduction of the quantum number j is avoided by omitting the spin-orbit term of the Pauli equation. The major relativistic effects, both direct and indirect, are thereby incorporated into the wave functions, while allowing retention of the commonly used nonrelativistic formulation of energy level calculations. The improvement afforded in calculated total binding energies, excitation energies, spin-orbit parameters, and expectation values of r^m is comparable with that provided by fully relativistic Dirac-Hartree-Fock calculations.
D’Amore, L; Campagna, R; Murli, A; Galletti, A; Marcellino, L
2012-01-01
The scientific and application-oriented interest in the Laplace transform and its inversion is attested by more than 1000 publications in the last century. Most of the inversion algorithms available in the literature assume that the Laplace transform function is available everywhere. Unfortunately, such an assumption is not fulfilled in the applications of the Laplace transform. Very often, one only has a finite set of data and one wants to recover an estimate of the inverse Laplace function from that. We propose a model that fits such data. More precisely, given a finite set of measurements on the real axis, arising from an unknown Laplace transform function, we construct a dth degree generalized polynomial smoothing spline, where d = 2m − 1, such that internally to the data interval it is a dth degree polynomial complete smoothing spline minimizing a regularization functional, and outside the data interval, it mimics the Laplace transform asymptotic behavior, i.e. it is a rational or an exponential function (the end behavior model), and at the boundaries of the data set it joins with regularity up to order m − 1, with the end behavior model. We analyze in detail the generalized polynomial smoothing spline of degree d = 3. This choice was motivated by the (ill)conditioning of the numerical computation, which strongly depends on the degree of the complete spline. We prove existence and uniqueness of this spline. We derive the approximation error and give a priori and computable bounds of it on the whole real axis. In such a way, the generalized polynomial smoothing spline may be used in any real inversion algorithm to compute an approximation of the inverse Laplace function. Experimental results concerning Laplace transform approximation, numerical inversion of the generalized polynomial smoothing spline and comparisons with the exponential smoothing spline conclude the work. (paper)
Multiple Scattering Model for Optical Coherence Tomography with Rytov Approximation
Li, Muxingzi
2017-04-24
Optical Coherence Tomography (OCT) is a coherence-gated, micrometer-resolution imaging technique that focuses a broadband near-infrared laser beam to penetrate into optical scattering media, e.g. biological tissues. The OCT resolution is split into two parts, with the axial resolution defined by half the coherence length, and the depth-dependent lateral resolution determined by the beam geometry, which is well described by a Gaussian beam model. The depth dependence of lateral resolution directly results in the defocusing effect outside the confocal region and restricts current OCT probes to small numerical aperture (NA) at the expense of lateral resolution near the focus. Another limitation on OCT development is the presence of a mixture of speckles due to multiple scatterers within the coherence length, and other random noise. Motivated by the above two challenges, a multiple scattering model based on Rytov approximation and Gaussian beam optics is proposed for the OCT setup. Some previous papers have adopted the first Born approximation with the assumption of small perturbation of the incident field in inhomogeneous media. The Rytov method of the same order with smooth phase perturbation assumption benefits from a wider spatial range of validity. A deconvolution method for solving the inverse problem associated with the first Rytov approximation is developed, significantly reducing the defocusing effect through depth and therefore extending the feasible range of NA.
Sparse linear models: Variational approximate inference and Bayesian experimental design
Seeger, Matthias W
2009-01-01
A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
Approximate Riemann solver for the two-fluid plasma model
Shumlak, U.; Loverich, J.
2003-01-01
An algorithm is presented for the simulation of plasma dynamics using the two-fluid plasma model. The two-fluid plasma model is more general than the magnetohydrodynamic (MHD) model often used for plasma dynamic simulations. The two-fluid equations are derived in divergence form and an approximate Riemann solver is developed to compute the fluxes of the electron and ion fluids at the computational cell interfaces and an upwind characteristic-based solver to compute the electromagnetic fields. The source terms that couple the fluids and fields are treated implicitly to relax the stiffness. The algorithm is validated with the coplanar Riemann problem, Langmuir plasma oscillations, and the electromagnetic shock problem that has been simulated with the MHD plasma model. A numerical dispersion relation is also presented that demonstrates agreement with analytical plasma waves
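The flavour of an approximate Riemann solver can be conveyed with a minimal scalar analogue. The sketch below uses the Rusanov (local Lax-Friedrichs) flux for Burgers' equation, far simpler than the paper's two-fluid plasma system, but it illustrates the same construction of interface fluxes from left and right states and the largest local wave speed (function names are ours):

```python
def physical_flux(u):
    """Burgers' flux F(u) = u^2 / 2."""
    return 0.5 * u * u

def rusanov_flux(u_left, u_right):
    """Approximate Riemann solver: central flux plus dissipation scaled by
    the largest local wave speed (here |u|, the characteristic speed)."""
    max_speed = max(abs(u_left), abs(u_right))
    return (0.5 * (physical_flux(u_left) + physical_flux(u_right))
            - 0.5 * max_speed * (u_right - u_left))

def step(u, dt, dx):
    """One finite-volume update with interface fluxes from the approximate
    solver; boundary cells are held fixed for simplicity."""
    fluxes = [rusanov_flux(u[i], u[i + 1]) for i in range(len(u) - 1)]
    return ([u[0]]
            + [u[i] - dt / dx * (fluxes[i] - fluxes[i - 1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])
```

For a CFL number below one, the scheme is monotone: a step-shaped initial state stays within its initial bounds as the shock spreads over a few cells.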
Analytic approximation for the modified Bessel function I -2/3(x)
Martin, Pablo; Olivares, Jorge; Maass, Fernando
2017-12-01
In the present work an analytic approximation to the modified Bessel function of negative fractional order I_{-2/3}(x) is presented. The approximation is valid for every positive value of the independent variable. The accuracy is high in spite of the small number (4) of parameters used. The approximation is a combination of elementary functions with rational ones. Power series and asymptotic expansions are simultaneously used to obtain the approximation.
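The paper's specific four-parameter formula is not reproduced in the abstract, but the two ingredients it bridges, the power series at small x and the asymptotic expansion at large x, can be sketched with the standard formulas for I_{-2/3}(x) (a hedged illustration; the 30-term series cutoff is our choice):

```python
import math

NU = -2.0 / 3.0  # order of the modified Bessel function

def bessel_i_series(x, terms=30):
    """Small-x power series:
    I_nu(x) = sum_k (x/2)^(2k+nu) / (k! * Gamma(k + nu + 1))."""
    return sum((x / 2) ** (2 * k + NU)
               / (math.factorial(k) * math.gamma(k + NU + 1))
               for k in range(terms))

def bessel_i_asymptotic(x):
    """Leading large-x behaviour with one correction term:
    I_nu(x) ~ e^x / sqrt(2 pi x) * (1 - (4 nu^2 - 1) / (8 x))."""
    return math.exp(x) / math.sqrt(2 * math.pi * x) * (1 - (4 * NU ** 2 - 1) / (8 * x))
```

In an intermediate region (x around 10) the two expansions agree to better than a percent, which is what makes a single blended rational-plus-elementary formula feasible.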
New realisation of Preisach model using adaptive polynomial approximation
Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young
2012-09-01
Modelling systems with hysteresis has received considerable attention recently due to increasingly demanding accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis, and it can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, enabling accurate tracking of hysteresis model parameters.
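The core numerical step, a least-squares polynomial fit of sampled curve data, can be sketched with the normal equations (a plain-Python illustration of the technique, not the article's adaptive algorithm; real use would prefer an orthogonal decomposition for conditioning):

```python
def polyfit_least_squares(xs, ys, degree):
    """Least-squares polynomial coefficients (constant term first) via the
    normal equations A^T A c = A^T y, solved by Gaussian elimination."""
    n = degree + 1
    # Normal-equation matrix and right-hand side from the sample moments.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Forward elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        aty[col], aty[pivot] = aty[pivot], aty[col]
        for row in range(col + 1, n):
            f = ata[row][col] / ata[col][col]
            for j in range(col, n):
                ata[row][j] -= f * ata[col][j]
            aty[row] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = aty[row] - sum(ata[row][j] * coeffs[j] for j in range(row + 1, n))
        coeffs[row] = s / ata[row][row]
    return coeffs

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))
```

Storing a handful of coefficients per reversal curve in place of a dense sample table is exactly the memory saving the article argues for.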
Rudolph, E [Max-Planck-Institut fuer Physik und Astrophysik, Muenchen (F.R. Germany)
1975-01-01
As a model for gravitational radiation damping of a planet the electromagnetic radiation damping of an extended charged body moving in an external gravitational field is calculated in harmonic coordinates using a weak field, slow-motion approximation. Special attention is paid to the case where this gravitational field is a weak Schwarzschild field. Using Green's function methods for this purpose it is shown that in a slow-motion approximation there is a strange connection between the tail part and the sharp part: radiation reaction terms of the tail part can cancel corresponding terms of the sharp part. Due to this cancelling mechanism the lowest order electromagnetic radiation damping force in an external gravitational field in harmonic coordinates remains the flat space Abraham Lorentz force. It is demonstrated in this simplified model that a naive slow-motion approximation may easily lead to divergent higher order terms. It is shown that this difficulty does not arise up to the considered order.
Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon
2017-12-01
Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real-data studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.
Efficient and Accurate Log-Levy Approximations of Levy-Driven LIBOR Models
Papapantoleon, Antonis; Schoenmakers, John; Skovmand, David
2012-01-01
The LIBOR market model is very popular for pricing interest rate derivatives but is known to have several pitfalls. In addition, if the model is driven by a jump process, then the complexity of the drift term grows exponentially fast (as a function of the tenor length). We consider a Lévy-driven ...... ratchet caps show that the approximations perform very well. In addition, we also consider the log-Lévy approximation of annuities, which offers good approximations for high-volatility regimes....
Pistorius, M.; Stolte, J.
2012-01-01
We present a new numerical method to price vanilla options quickly in time-changed Brownian motion models. The method is based on rational function approximations of the Black-Scholes formula. Detailed numerical results are given for a number of widely used models. In particular, we use the
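The kind of rational-function machinery the abstract refers to can be illustrated with a classical example: the Abramowitz-Stegun rational approximation of the standard normal CDF inside the Black-Scholes formula. This is a textbook approximation sketched for illustration, not the authors' method, which targets time-changed Brownian motion models:

```python
import math

def norm_cdf(x):
    """Rational approximation of the standard normal CDF
    (Abramowitz & Stegun 26.2.17); absolute error below ~7.5e-8."""
    if x < 0:
        return 1.0 - norm_cdf(-x)
    t = 1.0 / (1.0 + 0.2316419 * x)
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
               + t * (-1.821255978 + t * 1.330274429))))
    return 1.0 - math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi) * poly

def bs_call(spot, strike, rate, vol, maturity):
    """Black-Scholes European call price using the rational CDF approximation."""
    sqrt_t = math.sqrt(maturity)
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt_t)
    d2 = d1 - vol * sqrt_t
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)
```

Because the rational part is a fixed short polynomial, the price evaluation is branch-free and fast, which is the general appeal of rational approximations in pricing loops.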
Model Selection in Historical Research Using Approximate Bayesian Computation
Rubio-Campillo, Xavier
2016-01-01
Formal Models and History Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester’s laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
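The ABC rejection step behind this kind of model selection can be sketched in a toy setting. The two uniform data models, the tolerance, and the summary statistic below are all assumptions of the sketch; the study itself fits Lanchester-type combat equations to battle data:

```python
import random

def abc_model_selection(observed_mean, n_sims=20000, eps=0.05, seed=0):
    """Toy ABC rejection sampling for model choice: model A simulates the
    summary statistic from U(0, 1), model B from U(0, 2). With a uniform
    prior over models, the acceptance counts approximate the posterior
    model probabilities, and their ratio approximates the Bayes factor."""
    rng = random.Random(seed)
    accepted = {"A": 0, "B": 0}
    for _ in range(n_sims):
        model = rng.choice(["A", "B"])
        simulated = rng.uniform(0, 1) if model == "A" else rng.uniform(0, 2)
        if abs(simulated - observed_mean) < eps:
            accepted[model] += 1
    return accepted

counts = abc_model_selection(0.3)
```

For an observed statistic of 0.3, model A places twice the density there, so it should be accepted roughly twice as often, a Bayes factor of about 2 in A's favour.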
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data as well.
Weighted Low-Rank Approximation of Matrices and Background Modeling
Dutta, Aritra
2018-04-15
We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, and the other operates in batch-incremental mode on the data, naturally capturing more background variations while being computationally more efficient. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to outliers, similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
Sato, M.
1991-01-01
The Saha equation for a plasma in thermodynamic equilibrium (TE) is approximately solved to give the temperature as an explicit function of population densities. It is shown that the derived expressions for the Saha temperature are valid approximations to the exact solution. An application of the approximate temperature to the calculation of TE plasma parameters is also described. (orig.)
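The paper's explicit closed-form expressions for the Saha temperature are not reproduced in the abstract, but the inversion problem itself, recovering T from a given density ratio, can be illustrated by bisection on the monotone right-hand side of the Saha equation. The single-ionisation form and the statistical-weight ratio below are assumptions of this sketch (constants in SI units):

```python
import math

M_E = 9.1093837e-31   # electron mass, kg
K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J s

def saha_rhs(temperature, ionisation_energy_j, g_ratio=1.0):
    """Right-hand side of the Saha equation,
    2 g * (2 pi m_e k T / h^2)^(3/2) * exp(-chi / kT),
    which is monotone increasing in T."""
    thermal = (2 * math.pi * M_E * K_B * temperature / H ** 2) ** 1.5
    return 2 * g_ratio * thermal * math.exp(-ionisation_energy_j / (K_B * temperature))

def saha_temperature(density_ratio, ionisation_energy_j, lo=1e3, hi=1e6, tol=1e-6):
    """Invert the Saha equation for T by bisection on the monotone RHS."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if saha_rhs(mid, ionisation_energy_j) < density_ratio:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol * mid:
            break
    return 0.5 * (lo + hi)
```

An explicit approximate formula, as in the paper, trades the iteration above for a closed-form (and much cheaper) evaluation.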
Approximate Stokes Drift Profiles and their use in Ocean Modelling
Breivik, Oyvind; Bidlot, Jea-Raymond; Janssen, Peter A. E. M.; Mogensen, Kristian
2016-04-01
Deep-water approximations to the Stokes drift velocity profile are explored as alternatives to the monochromatic profile. The alternative profiles investigated rely on the same two quantities required for the monochromatic profile, viz the Stokes transport and the surface Stokes drift velocity. Comparisons against parametric spectra and profiles under wave spectra from the ERA-Interim reanalysis and buoy observations reveal much better agreement than the monochromatic profile, even for complex sea states. That the profiles give a closer match and a more correct shear has implications for ocean circulation models, since the Coriolis-Stokes force depends on the magnitude and direction of the Stokes drift profile and Langmuir turbulence parameterizations depend sensitively on the shear of the profile. Of the two Stokes drift profiles explored here, the profile based on the Phillips spectrum is by far the best. In particular, the shear near the surface is almost identical to that implied by the f^-5 tail of spectral wave models. The NEMO general circulation ocean model was recently extended to incorporate the Stokes-Coriolis force along with two other wave-related effects. The ECMWF coupled atmosphere-wave-ocean ensemble forecast system now includes these wave effects in the ocean model component (NEMO).
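The monochromatic profile that the alternatives are benchmarked against is fully determined by the two quantities mentioned above, the surface Stokes drift and the Stokes transport; a minimal sketch:

```python
import math

def monochromatic_stokes_profile(surface_drift, transport, z):
    """Monochromatic deep-water Stokes drift profile u(z) = u0 * exp(2 k z)
    for z <= 0, with the wavenumber k = u0 / (2 V) chosen so that the
    profile integrates over depth to the prescribed transport V."""
    k = surface_drift / (2.0 * transport)
    return surface_drift * math.exp(2.0 * k * z)
```

The exponential decays on a single e-folding scale 1/(2k); the alternative profiles in the paper keep the same two inputs but redistribute the shear with depth.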
Gaj, E.V.; Badikov, S.A.; Gusejnov, M.A.; Rabotnov, N.S.
1988-01-01
Possible applications of rational functions in the analysis of neutron cross sections, angular distributions and neutron constants generation are described. Results of investigations in this direction obtained since the preceding conference in Kiev are presented: the method of simultaneous treatment of several cross sections for one compound nucleus in the resonance range; the use of the Padé approximation for approximating elastically scattered neutron angular distributions; the derivation of subgroup constants on the basis of rational approximation of the cross section's functional dependence on the dilution cross section; and the first experience in approximating functions of two variables.
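A Padé approximant of the kind referred to here is computed from Taylor coefficients by solving a small linear system for the denominator; a generic sketch (checked below against the classical [2/2] approximant of e^x):

```python
def solve(matrix, rhs):
    """Solve a small dense linear system by Gaussian elimination with pivoting."""
    n = len(rhs)
    a = [row[:] for row in matrix]
    b = rhs[:]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        for row in range(col + 1, n):
            f = a[row][col] / a[col][col]
            for j in range(col, n):
                a[row][j] -= f * a[col][j]
            b[row] -= f * b[col]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        x[row] = (b[row] - sum(a[row][j] * x[j] for j in range(row + 1, n))) / a[row][row]
    return x

def pade(taylor, num_deg, den_deg):
    """[L/M] Pade coefficients (a, b) with b[0] = 1 from Taylor coefficients
    c_k: the denominator solves sum_{j=0..M} b_j c_{k-j} = 0, k = L+1..L+M."""
    L, M = num_deg, den_deg
    rows = [[taylor[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
            for k in range(L + 1, L + M + 1)]
    rhs = [-taylor[k] for k in range(L + 1, L + M + 1)]
    b = [1.0] + solve(rows, rhs)
    a = [sum(b[j] * taylor[k - j] for j in range(min(k, M) + 1)) for k in range(L + 1)]
    return a, b
```

The [2/2] approximant of e^x recovered this way is (1 + x/2 + x²/12) / (1 − x/2 + x²/12), the textbook result.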
New fuzzy approximate model for indirect adaptive control of distributed solar collectors
Elmetennani, Shahrazed
2014-06-01
This paper studies the problem of controlling parabolic solar collectors, which consists of forcing the outlet oil temperature to track a set reference despite possible environmental disturbances. An approximate model is proposed to simplify the controller design. The presented controller is an indirect adaptive law designed on the fuzzy model with soft sensing of the solar irradiance intensity. The proposed approximate model yields a simple, low-dimensional set of nonlinear ordinary differential equations that reproduces the dynamical behavior of the system while taking into account its infinite-dimensional nature. Stability of the closed-loop system is ensured by resorting to Lyapunov control functions for the indirect adaptive controller.
Shiju, S.; Sumitra, S.
2017-12-01
In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for that function in the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps the knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference
Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah
1998-01-01
Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.
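A drastically simplified stand-in for the branch-splitting idea can be sketched as follows: the sign of x plays the role of the discriminating function, and piecewise-linear lookup stands in for the fitted Sugeno consequents (all names and the example function y = x² are ours):

```python
from bisect import bisect_left

def build_inverse_branches(xs, forward):
    """Split samples of a non-injective forward function into monotone
    branches (here simply by the sign of x, acting as the discriminating
    function), storing (y, x) pairs sorted by y for later interpolation."""
    branches = {"neg": [], "pos": []}
    for x in xs:
        branches["neg" if x < 0 else "pos"].append((forward(x), x))
    for pairs in branches.values():
        pairs.sort()
    return branches

def inverse_lookup(pairs, y):
    """Piecewise-linear interpolation of x as a function of y on one branch,
    giving one value of the multi-valued inverse."""
    ys = [p[0] for p in pairs]
    i = min(max(bisect_left(ys, y), 1), len(pairs) - 1)
    (y0, x0), (y1, x1) = pairs[i - 1], pairs[i]
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)
```

Querying both branches at the same y returns the two values of the inverse, which is the behaviour the per-branch Sugeno approximators provide in the paper's scheme.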
Marrero, S. I.; Turibus, S. N.; Assis, J. T. De; Monin, V. I.
2011-01-01
Data processing for most diffraction experiments is based on determining the diffraction line position and measuring the broadening of the diffraction profile. High precision and automation of these procedures can be achieved by approximating experimental diffraction profiles with analytical functions. There are various functions for these purposes: simple ones, like the Gauss function, which are not suitable for a wide range of experimental profiles, and good approximating functions, like the Voigt or Pearson VII functions, which are complicated to use in practice. The proposed analytical function is a modified Cauchy function with two variable parameters, allowing it to describe any experimental diffraction profile. In the presented paper the modified function was applied to the approximation of diffraction lines of steels after various physical and mechanical treatments, and to the simulation of diffraction profiles used in the study of stress gradients and distortions of crystal structure. (Author)
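The exact modified Cauchy form used by the authors is not given in the abstract; the closely related Pearson VII profile shows how a Cauchy-type function with an extra shape parameter can span Lorentzian-like to Gaussian-like peaks (an illustrative stand-in, not the paper's function):

```python
def pearson_vii(x, center, intensity, width, shape):
    """Pearson VII peak profile: reduces to a Cauchy (Lorentzian) profile
    for shape = 1 and approaches a Gaussian as shape grows. The factor
    (2^(1/shape) - 1) normalises width to be the half-width at half maximum
    for every value of the shape exponent."""
    u = (x - center) / width
    return intensity / (1.0 + u * u * (2.0 ** (1.0 / shape) - 1.0)) ** shape
```

By construction the profile takes its full intensity at the center and exactly half of it at center ± width, which makes both variable parameters directly interpretable when fitting measured diffraction lines.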
Global Approximations to Cost and Production Functions using Artificial Neural Networks
Efthymios G. Tsionas
2009-06-01
The estimation of cost and production functions in economics relies on standard specifications which are less than satisfactory in numerous situations. Instead of fitting the data with a pre-specified model, Artificial Neural Networks (ANNs) let the data itself serve as evidence to support the model's estimation of the underlying process. In this context, the proposed approach combines the strengths of economics, statistics and machine learning research, and the paper proposes a global approximation to arbitrary cost and production functions given by ANNs. Suggestions on implementation are proposed, and the empirical application relies on standard techniques. All relevant measures, such as Returns to Scale (RTS) and Total Factor Productivity (TFP), may be computed routinely.
First approximations in avalanche model validations using seismic information
Roig Lafon, Pere; Suriñach, Emma; Bartelt, Perry; Pérez-Guillén, Cristina; Tapia, Mar; Sovilla, Betty
2017-04-01
Avalanche dynamics modelling is an essential tool for snow hazard management. Scenario-based numerical modelling provides quantitative arguments for decision-making. The software tool RAMMS (WSL Institute for Snow and Avalanche Research SLF) is one such tool, often used by government authorities and geotechnical offices. As avalanche models improve, the quality of the numerical results will depend increasingly on user experience in the specification of input (e.g. release and entrainment volumes, secondary releases, snow temperature and quality). New model developments must continue to be validated using real phenomena data, to improve performance and reliability. The avalanche group from the University of Barcelona (RISKNAT - UB) has studied the seismic signals generated by avalanches since 1994. Presently, the group manages the seismic installation at SLF's Vallée de la Sionne experimental site (VDLS). At VDLS the recorded seismic signals can be correlated with other avalanche measurement techniques, including both advanced remote sensing methods (radars, videogrammetry) and obstacle-based sensors (pressure, capacitance, optical sender-reflector barriers). This comparison between different measurement techniques allows the group to address the question of whether seismic analysis can be used alone, on additional avalanche tracks, to gain insight into and validate numerical avalanche dynamics models in different terrain conditions. In this study, we aim to add the seismic data as an external record of the phenomena, able to validate RAMMS models. Seismic sensors are considerably easier and cheaper to install than other physical measuring tools, and are able to record data from the phenomena in all atmospheric conditions (e.g. bad weather, low light or freezing conditions make photography and other kinds of sensors unusable). With seismic signals, we record the temporal evolution of the inner and denser parts of the avalanche. We are able to recognize the approximate position
A new way of obtaining analytic approximations of Chandrasekhar's H function
Vukanic, J.; Arsenovic, D.; Davidovic, D.
2007-01-01
Applying the mean value theorem for definite integrals in the non-linear integral equation for Chandrasekhar's H function describing conservative isotropic scattering, we have derived a new, simple analytic approximation for it, with a maximal relative error below 2.5%. With this new function as a starting-point, after a single iteration in the corresponding integral equation, we have obtained a new, highly accurate analytic approximation for the H function. As its maximal relative error is below 0.07%, it significantly surpasses the accuracy of other analytic approximations
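A numerical reference for the H function can be produced by fixed-point iteration of the "inverse" form of its integral equation, 1/H(μ) = √(1−ϖ) + ∫₀¹ (ϖ/2) μ′ H(μ′)/(μ+μ′) dμ′. The sketch below is a generic textbook scheme, not the authors' analytic approximation, and it uses albedo ϖ = 0.9 for fast convergence (the conservative case ϖ = 1 treated in the paper converges more slowly). The quadrature resolution and iteration count are pragmatic choices, not tuned values.

```python
# Fixed-point iteration for Chandrasekhar's H function (isotropic
# scattering, albedo w = 0.9), midpoint quadrature on (0, 1).
# Generic reference scheme, not the paper's analytic approximation.

import math

def h_chandrasekhar(w, n=60, iters=200):
    mu = [(i + 0.5) / n for i in range(n)]     # midpoint quadrature nodes
    wq = 1.0 / n                               # midpoint weights
    s = math.sqrt(1.0 - w)
    H = [1.0] * n
    for _ in range(iters):
        # 1/H(mu) = sqrt(1-w) + int_0^1 (w/2) mu' H(mu') / (mu + mu') dmu'
        H = [1.0 / (s + sum(wq * (w / 2.0) * mu[j] * H[j] / (mu[i] + mu[j])
                            for j in range(n)))
             for i in range(n)]
    return mu, wq, H

mu, wq, H = h_chandrasekhar(0.9)
moment = sum(wq * h for h in H)                # int_0^1 H(mu) dmu
# exact moment identity: int H dmu = (2/w) * (1 - sqrt(1 - w)) = 1.5195...
```

The zeroth-moment identity gives a built-in accuracy check on both the quadrature and the convergence of the iteration.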
Ramazanov, A.-R K
2005-01-01
Necessary and sufficient conditions for the best polynomial approximation with an arbitrary and, generally speaking, unbounded sign-sensitive weight to a continuous function are obtained; the components of the weight can also take infinite values, therefore the conditions obtained cover, in particular, approximation with interpolation at fixed points and one-sided approximation; in the case of the weight with components equal to 1 one arrives at Chebyshev's classical alternation theorem.
W. Łenski
2015-01-01
Results generalizing some theorems on (N, pₙ)(E, γ) summability are shown. The same degrees of pointwise approximation as in earlier papers are obtained under weaker assumptions on the functions considered and the summability methods examined. From the pointwise results presented, an estimate on norm approximation is derived. Some special cases are also formulated as corollaries.
Aarts, Ronald M; Janssen, Augustus J E M
2016-12-01
The Struve functions H_n(z), n = 0, 1, ... are approximated in a simple, accurate form that is valid for all z ≥ 0. The authors previously treated the case n = 1 that arises in impedance calculations for the rigid-piston circular radiator mounted in an infinite planar baffle [Aarts and Janssen, J. Acoust. Soc. Am. 113, 2635-2637 (2003)]. The more general Struve functions occur when other acoustical quantities and/or non-rigid pistons are considered. The key step in the paper just cited is to express H_1(z) as (2/π) - J_0(z) + (2/π) I(z), where J_0 is the Bessel function of order zero and the first kind and I(z) is the Fourier cosine transform of [(1-t)/(1+t)]^(1/2), 0 ≤ t ≤ 1. The square-root function is optimally approximated by a linear function ĉt + d̂, 0 ≤ t ≤ 1, and the resulting approximated Fourier integral is readily computed explicitly in terms of sin(z)/z and (1 - cos z)/z^2. The same approach has been used by Maurel, Pagneux, Barra, and Lund [Phys. Rev. B 75, 224112 (2007)] to approximate H_0(z) for all z ≥ 0. In the present paper, the square-root function is optimally approximated by a piecewise linear function consisting of two linear functions supported by [0, t̂_0] and [t̂_0, 1], with t̂_0 the optimal take-over point. It is shown that the optimal two-piece linear function is actually continuous at the take-over point, causing a reduction of the additional complexity in the resulting approximations of H_0 and H_1. Furthermore, this allows analytic computation of the optimal two-piece linear function. By using the two-piece instead of the one-piece linear approximation, the root-mean-square approximation error is reduced by roughly a factor of 3, while the maximum approximation error is reduced by a factor of 4.5 for H_0 and of 2.6 for H_1. Recursion relations satisfied by Struve functions, initialized with the approximations of H_0 and H_1, yield approximations for higher-order Struve functions.
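The structure described above (a J_0 term plus sin(z)/z and (1 − cos z)/z² terms) can be seen in the earlier one-piece approximation of Aarts and Janssen (2003), reproduced below and checked against the power series of H_1. The series implementations of J_0 and H_1 are generic stand-ins, adequate only for the moderate arguments used here.

```python
# One-piece approximation for the Struve function H_1 (Aarts & Janssen
# 2003), compared against direct power-series evaluation.  The series
# routines are illustrative and accurate only for moderate |z|.

import math

def j0(z):
    """Bessel J_0 by power series (adequate for |z| up to ~15)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(z * z / 4.0) / (k * k)
        total += term
    return total

def h1_series(z):
    """Reference Struve H_1 from its power series."""
    return sum((-1) ** k * (z / 2.0) ** (2 * k + 2)
               / (math.gamma(k + 1.5) * math.gamma(k + 2.5))
               for k in range(40))

def h1_approx(z):
    """H_1(z) ~ 2/pi - J_0(z) + (16/pi - 5) sin(z)/z
                + (12 - 36/pi) (1 - cos z)/z^2."""
    return (2.0 / math.pi - j0(z)
            + (16.0 / math.pi - 5.0) * math.sin(z) / z
            + (12.0 - 36.0 / math.pi) * (1.0 - math.cos(z)) / z ** 2)

errs = [abs(h1_approx(z) - h1_series(z)) for z in (0.5, 1.0, 2.0, 5.0, 10.0)]
```

Expanding both sides around z = 0 shows the constant terms cancel exactly and the z² coefficients agree to three decimals, which is why the absolute error stays small over the whole range.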
Approximation of functions in two variables by some linear positive operators
Mariola Skorupka
1995-12-01
We introduce some linear positive operators of the Szasz-Mirakjan type in the weighted spaces of continuous functions in two variables. We study the degree of the approximation of functions by these operators. The similar results for functions in one variable are given in [5]. Some operators of the Szasz-Mirakjan type are examined also in [3], [4].
Franke, Richard
2001-01-01
.... It was found that for all levels the approximation of the covariance data for pressure height innovations by Legendre functions led to positive coefficients for up to 25 terms, except at some low and high levels...
Approximate Dynamic Programming Based on High Dimensional Model Representation
Pištěk, Miroslav
2013-01-01
Roč. 49, č. 5 (2013), s. 720-737 ISSN 0023-5954 R&D Projects: GA ČR(CZ) GAP102/11/0437 Institutional support: RVO:67985556 Keywords : approximate dynamic programming * Bellman equation * approximate HDMR minimization * trust region problem Subject RIV: BC - Control Systems Theory Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/pistek-0399560.pdf
Capelle, K.; Gross, E.
1997-01-01
It is shown that the exchange-correlation functional of spin-density functional theory is identical, on a certain set of densities, with the exchange-correlation functional of current-density functional theory. This rigorous connection is used to construct new approximations of the exchange-correlation functionals. These include a conceptually new generalized-gradient spin-density functional and a nonlocal current-density functional. copyright 1997 The American Physical Society
Gougam, L.A.; Taibi, H.; Chikhi, A.; Mekideche-Chafa, F.
2009-01-01
The problem of determining an analytical description for a set of data arises in numerous sciences and applications, and can be referred to as data modeling or system identification. Neural networks are a convenient means of representation because they are known to be universal approximators that can learn data. The desired task is usually obtained by a learning procedure which consists in adjusting the synaptic weights. For this purpose, many learning algorithms have been proposed to update these weights. The convergence of these learning algorithms is a crucial criterion for neural networks to be useful in different applications. The aim of the present contribution is to use a training algorithm for feed-forward wavelet networks used for function approximation. The training is based on the minimization of the least-squares cost function. The minimization is performed by iterative second-order gradient-based methods. We make use of the Levenberg-Marquardt algorithm to train the architecture of the chosen network; the training procedure starts with a simple gradient method, which is followed by a BFGS (Broyden, Fletcher, Goldfarb and Shanno) algorithm. The performances of the two algorithms are then compared. Our method is then applied to determine the energy of the ground state associated with a sextic potential. In fact, the Schrodinger equation does not always admit an exact solution and one has, generally, to solve it numerically. To this end, the sextic potential is, firstly, approximated with the above outlined wavelet network and, secondly, implemented into a numerical scheme. Our results are in good agreement with the ones found in the literature.
Kushwaha, Jitendra Kumar
2013-01-01
Approximation theory is a very important field which has various applications in pure and applied mathematics. The present study deals with a new theorem on the approximation of functions of Lipschitz class by using Euler's mean of conjugate series of Fourier series. In this paper, the degree of approximation by using Euler's means of conjugate of functions belonging to Lip (ξ(t), p) class has been obtained. Lipα and Lip (α, p) classes are the particular cases of Lip (ξ(t), p) class. The main result of this paper generalizes some well-known results in this direction. PMID:24379744
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Stipanović, Dušan M.; Tomlin, Claire J.; Leitmann, George
2012-01-01
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
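One standard family of differentiable approximations that converge monotonically to min and max over positive arguments is the power-sum family sketched below; it is of the kind the paper employs, though whether it matches the authors' exact construction is not asserted here.

```python
# Differentiable, monotonically convergent approximations of max and min
# over positive arguments (illustrative power-sum family):
#   (sum_i x_i^p)^(1/p)    >= max(x), decreasing in p toward max(x)
#   (sum_i x_i^-p)^(-1/p)  <= min(x), increasing in p toward min(x)

def max_approx(xs, p):
    """Smooth upper bound on max(xs), tightening as p grows."""
    return sum(x ** p for x in xs) ** (1.0 / p)

def min_approx(xs, p):
    """Smooth lower bound on min(xs), tightening as p grows."""
    return sum(x ** -p for x in xs) ** (-1.0 / p)

xs = [1.0, 2.0, 3.5, 4.0]
ps = (2, 4, 8, 16, 32)
upper = [max_approx(xs, p) for p in ps]     # decreases toward 4.0
lower = [min_approx(xs, p) for p in ps]     # increases toward 1.0
```

The one-sidedness is what makes such approximations useful for sufficient conditions: guaranteeing the smooth upper bound is below a threshold guarantees the true maximum is too.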
Rational function approximation method for discrete ordinates problems in slab geometry
Leal, Andre Luiz do C.; Barros, Ricardo C.
2009-01-01
In this work we use rational function approaches to obtain the transfer functions that appear in the spectral Green's function (SGF) auxiliary equations for one-speed isotropic scattering SN equations in one-dimensional Cartesian geometry. For this task we compute Padé approximants and compare the results with those of the standard SGF method applied to deep penetration problems in homogeneous domains. This work is a preliminary investigation of a new proposal for handling the leakage terms that appear in the two transverse-integrated one-dimensional SN equations in the exponential SGF method (SGF-ExpN). Numerical results are presented to illustrate the accuracy of the rational function approximation. (author)
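The general mechanics of building a Padé approximant from Taylor coefficients can be shown in a few lines. The example below constructs the [2/2] approximant of exp(x); it illustrates the rational-function machinery only, not the SGF-specific transfer functions, and the solver and coefficient layout are generic choices.

```python
# Generic [L/M] Pade approximant from Taylor coefficients, applied to
# exp(x).  Illustration of the rational-function machinery only; not the
# SGF-specific transfer functions of the paper.

import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def pade(c, L, M):
    """Numerator/denominator coefficients (b[0] = 1) of the [L/M] Pade
    approximant of the series sum_i c[i] x^i."""
    get = lambda i: c[i] if 0 <= i < len(c) else 0.0
    A = [[get(L + k - j) for j in range(1, M + 1)] for k in range(1, M + 1)]
    rhs = [-get(L + k) for k in range(1, M + 1)]
    b = [1.0] + solve(A, rhs)
    a = [sum(b[j] * get(i - j) for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return a, b

def horner(p, x):
    v = 0.0
    for coef in reversed(p):
        v = v * x + coef
    return v

c = [1.0 / math.factorial(i) for i in range(8)]   # Taylor coeffs of exp
a, b = pade(c, 2, 2)
approx = horner(a, 0.5) / horner(b, 0.5)          # close to e^0.5
```

For exp this recovers the classical (1 + x/2 + x²/12)/(1 − x/2 + x²/12), which matches the function to fifth order at the origin.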
Inference Under a Wright-Fisher Model Using an Accurate Beta Approximation
Tataru, Paula; Bataillon, Thomas; Hobolth, Asger
2015-01-01
frequencies and the influence of evolutionary pressures, such as mutation and selection. Despite its simple mathematical formulation, exact results for the distribution of allele frequency (DAF) as a function of time are not available in closed analytic form. Existing approximations build......, the probability of being on the boundary can be positive, corresponding to the allele being either lost or fixed. Here, we introduce the beta with spikes, an extension of the beta approximation, which explicitly models the loss and fixation probabilities as two spikes at the boundaries. We show that the addition...
Correlation energy functional within the GW-RPA: Exact forms, approximate forms, and challenges
Ismail-Beigi, Sohrab
2010-05-01
In principle, the Luttinger-Ward Green’s-function formalism allows one to compute simultaneously the total energy and the quasiparticle band structure of a many-body electronic system from first principles. We present approximate and exact expressions for the correlation energy within the GW random-phase approximation that are more amenable to computation and allow for developing efficient approximations to the self-energy operator and correlation energy. The exact form is a sum over differences between plasmon and interband energies. The approximate forms are based on summing over screened interband transitions. We also demonstrate that blind extremization of such functionals leads to unphysical results: imposing physical constraints on the allowed solutions (Green’s functions) is necessary. Finally, we present some relevant numerical results for atomic systems.
Description logics with approximate definitions precise modeling of vague concepts
Schlobach, Stefan; Klein, Michel; Peelen, Linda
2007-01-01
We extend traditional Description Logics (DL) with a simple mechanism to handle approximate concept definitions in a qualitative way. Often, for example in medical applications, concepts are not definable in a crisp way but can fairly exhaustively be constrained through a particular sub- and a
Higher-Order Approximation of Cubic-Quintic Duffing Model
Ganji, S. S.; Barari, Amin; Babazadeh, H.
2011-01-01
We apply an Artificial Parameter Lindstedt-Poincaré Method (APL-PM) to find improved approximate solutions for strongly nonlinear Duffing oscillations with cubic-quintic nonlinear restoring force. This approach yields simple linear algebraic equations instead of nonlinear algebraic equations...
Guidelines for Use of the Approximate Beta-Poisson Dose-Response Model.
Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie
2017-07-01
For dose-response analysis in quantitative microbial risk assessment (QMRA), the exact beta-Poisson model is a two-parameter mechanistic dose-response model with parameters α>0 and β>0, which involves the Kummer confluent hypergeometric function. Evaluation of a hypergeometric function is a computational challenge. Denoting PI(d) as the probability of infection at a given mean dose d, the widely used dose-response model PI(d) = 1-(1+d/β)^-α is an approximate formula for the exact beta-Poisson model, whose accuracy depends on conditions on α and β. Issues related to the validity and approximation accuracy of this approximate formula have remained largely ignored in practice, partly because these conditions are too general to provide clear guidance. Consequently, this study proposes a probability measure of the validity of the approximate formula, together with a simple rule of thumb for its use. This validity measure and rule of thumb were validated by application to all the completed beta-Poisson models (related to 85 data sets) from the QMRA community portal (QMRA Wiki). The results showed that the higher the validity measure, the more closely the approximate formula follows the exact beta-Poisson dose-response curve. © 2016 Society for Risk Analysis.
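The gap between the exact and approximate models can be demonstrated directly. The sketch below evaluates the exact beta-Poisson model P(d) = 1 − ₁F₁(α, α+β, −d) with a series implementation of the Kummer function (adequate only for the moderate doses used here; production QMRA code would use a library routine) and compares it with the approximate formula for two illustrative parameter choices.

```python
# Exact beta-Poisson, P(d) = 1 - 1F1(alpha, alpha + beta, -d), versus
# the approximate formula P(d) = 1 - (1 + d/beta)^(-alpha).  The series
# 1F1 below is illustrative and valid only for moderate |z|.

def kummer_1f1(a, b, z, terms=200):
    """Kummer confluent hypergeometric function by its defining series."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
        if abs(term) < 1e-16 * abs(total):
            break
    return total

def p_exact(d, alpha, beta):
    return 1.0 - kummer_1f1(alpha, alpha + beta, -d)

def p_approx(d, alpha, beta):
    return 1.0 - (1.0 + d / beta) ** (-alpha)

# beta large relative to alpha and to 1: the two curves agree closely.
good = [(p_exact(d, 0.1, 50.0), p_approx(d, 0.1, 50.0))
        for d in (0.1, 1.0, 5.0)]
# small beta: the curves separate noticeably at the same dose.
bad = (p_exact(1.0, 0.1, 0.5), p_approx(1.0, 0.1, 0.5))
```

With α = 0.1, β = 50 the relative discrepancy stays well under 1%, while with β = 0.5 it exceeds 10% at d = 1, which is the kind of regime a validity measure needs to flag.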
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components. © 2012 Copyright Taylor and Francis Group, LLC.
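The core numerical kernel described above can be sketched compactly: approximate exp(tA)v using an m-dimensional Krylov (Arnoldi) subspace, so that only matrix-vector products with the Jacobian are needed, with the small m-by-m exponential done by scaling-and-squaring. This is a generic textbook construction, not the authors' code; the diagonal test matrix and the Taylor-based small exponential are simplifying assumptions.

```python
# Krylov-subspace approximation of exp(t*A) @ v: project A onto an
# m-dimensional Krylov space with Arnoldi, exponentiate the small
# Hessenberg matrix, and lift back.  Generic sketch, not the paper's code.

import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def expm(A):
    """exp(A) for a small dense matrix: scaling-and-squaring + Taylor."""
    n = len(A)
    norm = max(sum(abs(x) for x in row) for row in A)
    s = max(0, math.ceil(math.log2(norm))) if norm > 0 else 0
    B = [[x / 2 ** s for x in row] for row in A]
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    T = [row[:] for row in E]
    for k in range(1, 20):                      # Taylor series of exp(B)
        T = [[x / k for x in row] for row in matmul(T, B)]
        E = [[E[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):                          # undo the scaling
        E = matmul(E, E)
    return E

def krylov_expv(matvec, v, t, m):
    """Approximate exp(t*A) v from an m-step Arnoldi decomposition."""
    n = len(v)
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]                 # orthonormal Krylov basis
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(V[j])
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i][j] = sum(w[k] * V[i][k] for k in range(n))
            w = [w[k] - H[i][j] * V[i][k] for k in range(n)]
        if j + 1 < m:
            h = math.sqrt(sum(x * x for x in w))
            H[j + 1][j] = h
            V.append([x / h for x in w])
    F = expm([[t * H[i][j] for j in range(m)] for i in range(m)])
    # y = beta * V_m @ expm(t * H_m) @ e1
    return [beta * sum(V[i][k] * F[i][0] for i in range(m)) for k in range(n)]

# Stiff linear test system dy/dt = A y, A = diag(-1, -10, -100); with m
# equal to the dimension the Krylov result equals exp(t*A) y0 exactly.
A = [[-1.0, 0.0, 0.0], [0.0, -10.0, 0.0], [0.0, 0.0, -100.0]]
mv = lambda x: [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
y = krylov_expv(mv, [1.0, 1.0, 1.0], 0.01, 3)
```

In practice m is much smaller than the system size, and (as the abstract notes) is best chosen adaptively to keep the projection error within a user-defined tolerance.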
Many-body perturbation theory using the density-functional concept: beyond the GW approximation.
Bruneval, Fabien; Sottile, Francesco; Olevano, Valerio; Del Sole, Rodolfo; Reining, Lucia
2005-05-13
We propose an alternative formulation of many-body perturbation theory that uses the density-functional concept. Instead of the usual four-point integral equation for the polarizability, we obtain a two-point one, which leads to excellent optical absorption and energy-loss spectra. The corresponding three-point vertex function and self-energy are then simply calculated via an integration, for any level of approximation. Moreover, we show the direct impact of this formulation on the time-dependent density-functional theory. Numerical results for the band gap of bulk silicon and solid argon illustrate corrections beyond the GW approximation for the self-energy.
Physical Applications of a Simple Approximation of Bessel Functions of Integer Order
Barsan, V.; Cojocaru, S.
2007-01-01
Applications of a simple approximation of Bessel functions of integer order, in terms of trigonometric functions, are discussed for several examples from electromagnetism and optics. The method may be applied in the intermediate regime, bridging the "small values regime" and the "asymptotic" one, and covering, in this way, an area of great…
Agarwal, Mukul
2018-01-01
It is proved that the limit of the normalized rate-distortion functions of block independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution of the Markoff chain and thus, is also equal to the rate-distortion function of the Markoff chain.
Local density approximation for exchange in excited-state density functional theory
Harbola, Manoj K.; Samal, Prasanjit
2004-01-01
Local density approximation for the exchange energy is made for treatment of excited-states in density-functional theory. It is shown that taking care of the state-dependence of the LDA exchange energy functional leads to accurate excitation energies.
Bessel harmonic analysis and approximation of functions on the half-line
Platonov, Sergei S
2007-01-01
We study problems of approximation of functions on [0,+∞) in the metric of L p with power weight using generalized Bessel shifts. We prove analogues of direct Jackson theorems for the modulus of smoothness of arbitrary order defined in terms of generalized Bessel shifts. We establish the equivalence of the modulus of smoothness and the K-functional. We define function spaces of Nikol'skii-Besov type and describe them in terms of best approximations. As a tool for approximation, we use a certain class of entire functions of exponential type. In this class, we prove analogues of Bernstein's inequality and others for the Bessel differential operator and its fractional powers. The main tool we use to solve these problems is Bessel harmonic analysis
Validation of ecological state space models using the Laplace approximation
Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte
2017-01-01
Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....
Big geo data surface approximation using radial basis functions: A comparative study
Majdisova, Zuzana; Skala, Vaclav
2017-12-01
Approximation of scattered data is often a task in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and a comparison for different Compactly Supported RBFs (CS-RBFs) is made with respect to the accuracy of the computation. The proposed approach uses symmetry of a matrix, partitioning the matrix into blocks and data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
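A minimal one-dimensional sketch of the method: fit data with a compactly supported Wendland C2 basis φ(r) = (1−r)⁴(4r+1) for r < 1, one of the CS-RBF families such comparisons typically include. Compact support is what makes the system matrix sparse for big datasets, although this tiny example solves it densely; the node layout, support radius, and test function are illustrative assumptions.

```python
# 1-D RBF fit with a compactly supported Wendland C2 basis.  Compact
# support zeroes most matrix entries for big datasets; this small
# example solves the (symmetric) system densely.

import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def wendland(r):
    """Wendland C2 CS-RBF: (1-r)^4 (4r+1) for r < 1, else 0."""
    return (1.0 - r) ** 4 * (4.0 * r + 1.0) if r < 1.0 else 0.0

centers = [i / 20.0 for i in range(21)]
support = 0.3
fvals = [math.sin(2.0 * math.pi * x) for x in centers]
A = [[wendland(abs(xi - xj) / support) for xj in centers] for xi in centers]
coef = solve(A, fvals)

def rbf_eval(x):
    return sum(c * wendland(abs(x - xc) / support)
               for c, xc in zip(coef, centers))

# maximum error between the nodes
err = max(abs(rbf_eval((i + 0.5) / 20.0)
              - math.sin(2.0 * math.pi * (i + 0.5) / 20.0))
          for i in range(20))
```

The distance-based formulation carries over unchanged to n-dimensional scattered data: only the |x − xc| computation changes.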
Marginalized approximate filtering of state-space models
Dedecius, Kamil
2018-01-01
Roč. 32, č. 1 (2018), s. 1-12 ISSN 0890-6327 R&D Projects: GA ČR(CZ) GA16-09848S Institutional support: RVO:67985556 Keywords : approximate filtering * marginalized filters * particle filtering Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.708, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/dedecius-0478074.pdf
Adaptive kernels in approximate filtering of state-space models
Dedecius, Kamil
2017-01-01
Roč. 31, č. 6 (2017), s. 938-952 ISSN 0890-6327 R&D Projects: GA ČR(CZ) GP14-06678P Institutional support: RVO:67985556 Keywords : filtering * nonlinear filters * Bayesian filtering * sequential Monte Carlo * approximate filtering Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 1.708, year: 2016 http://library.utia.cs.cz/separaty/2016/AS/dedecius-0466448.pdf
Numerical analysis of different neural transfer functions used for best approximation
Gougam, L.A.; Chikhi, A.; Biskri, S.; Chafa, F.
2006-01-01
It is widely recognised that the choice of transfer functions in neural networks is of great importance to their performance. In this paper, different neural transfer functions used for approximation are discussed. We begin with sigmoidal functions, used most often by different authors. At a second step, we use Gaussian functions, as previously suggested in the references. Finally, we deal with a specified wavelet family. A comparison between the three cases cited above is made, exhibiting the advantages of each transfer function. The approximation of a function improves as the dimension N of the elementary task basis increases
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation
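The idea of a parametric likelihood approximation inside a conventional MCMC can be shown with a toy stand-in: the "model" below is a deliberately trivial stochastic simulator (Normal(θ, 1) observations, sample mean as the summary statistic), not FORMIND, and all sizes and tuning constants are illustrative assumptions. At each proposed parameter, repeated simulations yield summaries to which a Gaussian is fitted, and that Gaussian supplies the likelihood of the observed summary.

```python
# Toy simulation-based (parametric) likelihood in a Metropolis sampler:
# a Gaussian fitted to summaries of repeated stochastic model runs
# approximates the likelihood of the observed summary.  The "model"
# here is a trivial stand-in, not FORMIND.

import math
import random

random.seed(1)

def simulate(theta, n=30):
    """Stochastic stand-in model: n noisy observations at parameter theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

def summary(data):
    """Summary statistic: the sample mean."""
    return sum(data) / len(data)

def log_synthetic_likelihood(theta, s_obs, reps=50):
    """Gaussian likelihood fitted to summaries of repeated simulations."""
    sims = [summary(simulate(theta)) for _ in range(reps)]
    mu = sum(sims) / reps
    var = sum((s - mu) ** 2 for s in sims) / (reps - 1)
    return -0.5 * math.log(2.0 * math.pi * var) - (s_obs - mu) ** 2 / (2.0 * var)

s_obs = summary(simulate(3.0))        # virtual "field data", true theta = 3
theta, ll = 5.0, log_synthetic_likelihood(5.0, s_obs)
chain = []
for it in range(1000):                # Metropolis with a uniform(0,10) prior
    prop = theta + random.gauss(0.0, 0.5)
    if 0.0 < prop < 10.0:
        ll_prop = log_synthetic_likelihood(prop, s_obs)
        if math.log(random.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    chain.append(theta)
post_mean = sum(chain[200:]) / len(chain[200:])   # discard burn-in
```

The posterior concentrates around the parameter that generated the virtual data, which is exactly the retrieval check described in the abstract; the noise in the likelihood estimate makes the chain slightly "sticky," a known trait of such samplers.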
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can
Abdon Atangana
2014-01-01
The notion of uncertainty in groundwater hydrology is of great importance, as it is known to result in misleading output when neglected or not properly accounted for. In this paper we examine this effect in groundwater flow models. To achieve this, we first introduce the uncertainty function u as a function of time and space. The function u accounts for the lack of knowledge or variability of the geological formations in which flow occurs (the aquifer) in time and space. We next make use of the Riemann-Liouville fractional derivatives that were introduced by Kobelev and Romano in 2000, and their approximation, to modify the standard version of the groundwater flow equation. Some properties of the modified Riemann-Liouville fractional derivative approximation are presented. The classical model for groundwater flow, in the case of density-independent flow in a uniform homogeneous aquifer, is reformulated by replacing the classical derivative with the Riemann-Liouville fractional derivative approximations. The modified equation is solved via the Green function technique and the variational iteration method.
Lee, M.W.; Bigeleisen, J.
1978-01-01
The MINIMAX finite polynomial approximation to an arbitrary function has been generalized to include a weighting function (WINIMAX). It is suggested that an exponential is a reasonable weighting function for the logarithm of the reduced partition function of a harmonic oscillator. Comparison of the error functions for finite orthogonal polynomial (FOP), MINIMAX, and WINIMAX expansions of the logarithm of the reduced vibrational partition function shows WINIMAX to be the best of the three approximations. A condensed table of WINIMAX coefficients is presented. The FOP, MINIMAX, and WINIMAX approximations are compared with exact calculations of the logarithm of the reduced partition function ratios for isotopic substitution in H2O, CH4, CH2O, C2H4, and C2H6 at 300 K. Both deuterium and heavy-atom isotope substitution are studied. Except for a third-order expansion involving deuterium substitution, the WINIMAX method is superior to FOP and MINIMAX. At the level of a second-order expansion, WINIMAX approximations to ln(s/s')f are good to 2.5% and 6.5% for deuterium and heavy-atom substitution, respectively
Smith, Kyle K. G.; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J.
2015-01-01
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA), focused on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline only considers straight geometries without fittings. To address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated in a virtual length, which for practical purposes is not a complete solution. This research proposes, as a solution to the problem of leak isolation in a virtual length, the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
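The virtual-to-real position conversion described above can be sketched as follows. All calibration numbers are invented for illustration, and numpy's `polyfit` stands in for whatever interpolation function the authors actually fitted:

```python
import numpy as np

# Hypothetical calibration pairs: leak positions expressed on the
# equivalent-straight-length ("virtual") pipeline vs. their true
# coordinates on the real pipeline (e.g. at known fitting locations).
virtual_pos = np.array([0.0, 18.0, 41.0, 55.0, 78.0, 100.0])  # m, virtual
real_pos = np.array([0.0, 15.0, 33.0, 47.0, 66.0, 85.0])      # m, real

# Low-order polynomial interpolation of the virtual -> real conversion.
coeffs = np.polyfit(virtual_pos, real_pos, deg=3)

def to_real(z_virtual):
    """Map a leak position located on the virtual pipeline to real coordinates."""
    return float(np.polyval(coeffs, z_virtual))

print(to_real(41.0))
```

Once fitted, the isolation algorithm keeps working on the simple straight-pipe model, and only the final reported leak position is mapped back to real coordinates.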
FUNPACK-2, Subroutine Library, Bessel Function, Elliptical Integrals, Min-max Approximation
Cody, W.J.; Garbow, Burton S.
1975-01-01
1 - Description of problem or function: FUNPACK is a collection of FORTRAN subroutines to evaluate certain special functions. The individual subroutines are (Identification/Description): NATSI0 F2I0 Bessel function I0; NATSI1 F2I1 Bessel function I1; NATSJ0 F2J0 Bessel function J0; NATSJ1 F2J1 Bessel function J1; NATSK0 F2K0 Bessel function K0; NATSK1 F2K1 Bessel function K1; NATSBESY F2BY Bessel function Yν; DAW F1DW Dawson's integral; DELIPK F1EK Complete elliptic integral of the first kind; DELIPE F1EE Complete elliptic integral of the second kind; DEI F1EI Exponential integrals; NATSPSI F2PS Psi (logarithmic derivative of the gamma function); MONERR F1MO Error monitoring package. 2 - Method of solution: FUNPACK uses evaluation of min-max approximations.
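The flavor of such fits can be imitated cheaply. Below, a least-squares polynomial in the classical Abramowitz-Stegun variable t = (x/3.75)² stands in for a true min-max (equal-ripple) fit to I0, with numpy's built-in `np.i0` supplying reference values; FUNPACK's actual coefficients are different:

```python
import numpy as np

# Fit I0(x) on [0, 3.75] by a degree-6 polynomial in t = (x/3.75)^2,
# the variable used in the classical Abramowitz-Stegun approximations.
x = np.linspace(0.0, 3.75, 400)
t = (x / 3.75) ** 2
coeffs = np.polyfit(t, np.i0(x), deg=6)

# Maximum absolute error of the fitted polynomial over the interval.
max_abs_err = float(np.max(np.abs(np.polyval(coeffs, t) - np.i0(x))))
print(max_abs_err)
```

A genuine min-max fit (e.g. via the Remez algorithm) would equalize the error peaks and push this maximum error somewhat lower for the same degree.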
Data-Driven Model Reduction and Transfer Operator Approximation
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
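As a concrete illustration of the operator-approximation idea, here is a minimal exact-DMD sketch (not any specific method from the review; the toy system and data are invented so the answer is known):

```python
import numpy as np

# Toy linear system: snapshots X and their images Y = A_true @ X.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.5]])
X = rng.standard_normal((2, 200))
Y = A_true @ X

# Exact DMD on full-rank data: least-squares operator estimate A ≈ Y X^+,
# whose eigenvalues approximate those of the underlying dynamics.
A_est = Y @ np.linalg.pinv(X)
eigvals = np.sort(np.linalg.eigvals(A_est).real)
print(eigvals)  # recovers 0.5 and 0.9
```

TICA and its relatives follow the same pattern with time-lagged covariance matrices in place of the raw snapshot matrices.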
Cristinel Mortici
2015-01-01
In this survey we present our recent results on the analysis of the gamma function and related functions. The results obtained are in the theory of asymptotic analysis, approximation of the gamma and polygamma functions, and the theory of completely monotonic functions. The motivation of the first part is the work of C. Mortici [Product Approximations via Asymptotic Integration, Amer. Math. Monthly 117 (2010) 434-441], where a simple strategy for constructing asymptotic series is presented. The classical asymptotic series associated with Stirling, Wallis, and Glaisher-Kinkelin are rediscovered. In the second section we discuss some new inequalities related to the Landau constants and establish some asymptotic formulas.
Haeggblom, H.
1968-08-01
The method of calculating the resonance interaction effect by series expansions has been studied. Starting from the assumption that the neutron flux in a homogeneous mixture is inversely proportional to the total cross section, the expression for the flux can be simplified by series expansions. Two types of expansions are investigated and it is shown that only one of them is generally applicable. It is also shown that this expansion gives sufficient accuracy if the approximate resonance line shape function is reasonably representative. An investigation is made of the approximation of the resonance shape function with a Gaussian function which in some cases has been used to calculate the interaction effect. It is shown that this approximation is not sufficiently accurate in all cases which can occur in practice. Then, a rational approximation is introduced which in the first order approximation gives the same order of accuracy as a practically exact shape function. The integrations can be made analytically in the complex plane and the method is therefore very fast compared to purely numerical integrations. The method can be applied both to statistically correlated and uncorrelated resonances
Bridge density functional approximation for non-uniform hard core repulsive Yukawa fluid
Zhou Shiqi
2008-01-01
In this work, a bridge density functional approximation (BDFA) (J. Chem. Phys. 112, 8079 (2000)) for a non-uniform hard-sphere fluid is extended to a non-uniform hard-core repulsive Yukawa (HCRY) fluid. It is found that the choice of the bulk bridge functional approximation is crucial for both uniform and non-uniform HCRY fluids. A new bridge functional approximation is proposed which accurately predicts the radial distribution function of the bulk HCRY fluid. With the new bridge functional approximation and its associated bulk second-order direct correlation function as input, the BDFA can be used to accurately calculate the density profile of the HCRY fluid subjected to varying external fields, and the theoretical predictions are in good agreement with the corresponding simulation data. The calculated results indicate that the present BDFA captures quantitatively such phenomena as the coexistence of a solid-like high-density phase and a low-density gas phase, and the adsorption properties of the HCRY fluid, which qualitatively differ from those of fluids combining both hard-core repulsion and an attractive tail. (condensed matter: structure, thermal and mechanical properties)
Balancing Exchange Mixing in Density-Functional Approximations for Iron Porphyrin.
Berryman, Victoria E J; Boyd, Russell J; Johnson, Erin R
2015-07-14
Predicting the correct ground-state multiplicity for iron(II) porphyrin, a high-spin quintet, remains a significant challenge for electronic-structure methods, including commonly employed density functionals. An even greater challenge for these methods is correctly predicting favorable binding of O2 to iron(II) porphyrin, due to the open-shell singlet character of the adduct. In this work, the performance of a modest set of contemporary density-functional approximations is assessed and the results interpreted using Bader delocalization indices. It is found that inclusion of greater proportions of Hartree-Fock exchange, in hybrid or range-separated hybrid functionals, has opposing effects; it improves the ability of the functional to identify the ground state but is detrimental to predicting favorable dioxygen binding. Because of the uncomplementary nature of these properties, accurate prediction of both the relative spin-state energies and the O2 binding enthalpy eludes conventional density-functional approximations.
Irina-Carmen ANDREI
2017-09-01
Following the demands of design and performance analysis for liquid-fuel rocket engines, as well as trajectory optimization, the development of efficient codes that frequently need to call the fuel combustion charts has become an important matter. This paper presents an efficient solution to this issue: the author has developed an original approach to determine a non-linear approximation function of two variables, the chamber pressure and the nozzle exit pressure ratio. The numerical algorithm based on this two-variable approximation function is more efficient due to its simplicity, its numerical accuracy, and the prospect of an increased convergence rate of the optimization codes.
Slow Growth and Optimal Approximation of Pseudoanalytic Functions on the Disk
Devendra Kumar
2013-07-01
Pseudoanalytic functions (PAF) are constructed as complex combinations of real-valued analytic solutions to the Stokes-Beltrami system. These solutions include the generalized biaxisymmetric potentials. McCoy [10] considered the approximation of pseudoanalytic functions on the disk. Kumar et al. [9] studied the generalized order and generalized type of PAF in terms of the Fourier coefficients occurring in its local expansion and optimal approximation errors in the Bernstein sense on the disk. The aim of this paper is to improve the results of McCoy [10] and Kumar et al. [9]. Our results apply satisfactorily for slow growth.
Mean-field approximation for spacing distribution functions in classical systems
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
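For comparison with such spacing statistics, here is a quick numerical illustration (not the paper's mean-field method): sample a GOE random matrix and look at its nearest-neighbor spacings, which the Wigner surmise p(s) = (π/2)·s·exp(−πs²/4) describes well, including the level repulsion p(s) → 0 as s → 0:

```python
import numpy as np

# Nearest-neighbor eigenvalue spacings of a GOE matrix.
rng = np.random.default_rng(3)
N = 400
A = rng.standard_normal((N, N))
H = (A + A.T) / 2.0
ev = np.linalg.eigvalsh(H)          # eigenvalues in ascending order

# Use the central half of the spectrum, where the density varies least,
# and rescale to unit mean spacing (a crude "unfolding").
mid = ev[N // 4 : 3 * N // 4]
s = np.diff(mid)
s = s / s.mean()

# Level repulsion: very small spacings should be rare.
frac_small = float(np.mean(s < 0.05))
print(frac_small)
```

For the surmise, the probability of a spacing below 0.05 is about 0.002, so almost no spacings in the sample should fall there.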
An approximate fractional Gaussian noise model with computational cost
Sørbye, Sigrunn H.; Myrvoll-Nilsen, Eirik; Rue, Haavard
2017-01-01
Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood
Modeling Large Time Series for Efficient Approximate Query Processing
Perera, Kasun S; Hahmann, Martin; Lehner, Wolfgang
2015-01-01
query statistics derived from experiments and when running the system. Our approach can also reduce communication load by exchanging models instead of data. To allow seamless integration of model-based querying into traditional data warehouses, we introduce a SQL compatible query terminology. Our...
Rational Approximations to Rational Models: Alternative Algorithms for Category Learning
Sanborn, Adam N.; Griffiths, Thomas L.; Navarro, Daniel J.
2010-01-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models…
Approximate self-similarity in models of geological folding
Budd, C.J.; Peletier, M.A.
2000-01-01
We propose a model for the folding of rock under the compression of tectonic plates. This models an elastic rock layer imbedded in a viscous foundation by a fourth-order parabolic equation with a nonlinear constraint. The large-time behavior of solutions of this problem is examined and found to be
Modeling and identification of centrifugal compressor dynamics with approximate realizations
Helvoirt, van J.; Jager, de A.G.; Steinbuch, M.; Smeulers, J.P.M.
2005-01-01
This paper deals with the parameter identification of a model for the dynamic behavior of a large industrial centrifugal compression system. Experimental results are presented to evaluate a new approach for determining the parameters of the modified version of the well-known Greitzer model. This
Comparison of approximations to the transition rate in the DDHMS preequilibrium model
Brito, L.; Carlson, B.V.
2014-01-01
The double differential hybrid Monte Carlo simulation model (DDHMS) originally used exciton model densities and transition densities with approximate angular distributions obtained using linear momentum conservation. Because the model uses only the simplest transition rates, calculations using more complex approximations to these are still viable. We compare calculations using the original approximation to ones using nonrelativistic Fermi gas transition densities with the approximate angular distributions and with exact nonrelativistic and relativistic transition densities. (author)
Frydel, Derek; Ma, Manman
2016-06-01
Using the adiabatic connection, we formulate the free energy in terms of the correlation function of a fictitious system, h_λ(r,r'), in which interactions λu(r,r') are gradually switched on as λ changes from 0 to 1. The function h_λ(r,r') is then obtained from the inhomogeneous Ornstein-Zernike equation, and the two equations constitute a general liquid-state framework for treating inhomogeneous fluids. The two equations do not yet constitute a closed set. In the present work we use the closure c_λ(r,r') ≈ -λβu(r,r'), known as the random-phase approximation (RPA). We demonstrate that the RPA is identical to the variational Gaussian approximation derived within the field-theoretical framework, originally derived and used for charged particles. We apply our generalized RPA approximation to the Gaussian core model and to Coulomb charges.
Kok, S
2012-07-01
continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...
Ideal Coulomb Plasma Approximation in Line Shape Models: Problematic Issues
Joel Rosato
2014-06-01
In weakly coupled plasmas, it is common to describe the microfield using a Debye model. We examine here an “artificial” ideal one-component plasma with an infinite Debye length, which has been used for the testing of line shape codes. We show that the infinite Debye length assumption can lead to a misinterpretation of numerical simulation results, in particular regarding the convergence of calculations. Our discussion is done within an analytical collision operator model developed for hydrogen line shapes in near-impact regimes. When properly employed, this model can serve as a reference for testing the convergence of simulations.
Universality for 1d Random Band Matrices: Sigma-Model Approximation
Shcherbina, Mariya; Shcherbina, Tatyana
2018-02-01
The paper continues the development of the rigorous supersymmetric transfer matrix approach to random band matrices started in (J Stat Phys 164:1233-1260, 2016; Commun Math Phys 351:1009-1044, 2017). We consider random Hermitian block band matrices consisting of W×W random Gaussian blocks (parametrized by j,k ∈ Λ = [1,n]^d ∩ Z^d) with a fixed entry variance J_{jk} = δ_{j,k}W^{-1} + βΔ_{j,k}W^{-2}, β > 0, in each block. Taking the limit W → ∞ with fixed n and β, we derive the sigma-model approximation of the second correlation function similar to Efetov's. Then, considering the limit β, n → ∞, we prove that in dimension d = 1 the behaviour of the sigma-model approximation in the bulk of the spectrum, as β ≫ n, is determined by the classical Wigner-Dyson statistics.
A new formulation for the Doppler broadening function relaxing the approximations of Beth–Plackzec
Palma, Daniel A.P.; Gonçalves, Alessandro C.; Martinez, Aquilino S.; Mesquita, Amir Z.
2016-01-01
Highlights: • One of the Beth–Placzek approximations is relaxed. • An additional term in the form of an integral is obtained. • A new mathematical formulation for the Doppler broadening function is proposed. - Abstract: In all nuclear reactors some neutrons can be absorbed in the resonance region and, in the design of these reactors, an accurate treatment of the resonant absorptions is essential. Apart from that, the resonant absorption varies with fuel temperature due to the Doppler broadening of the resonances. The thermal agitation movement in the reactor core is adequately represented in the microscopic cross-section of the neutron-core interaction through the Doppler broadening function. This function is calculated numerically in modern systems for the calculation of macro-group constants, which are necessary to determine the power distribution of a nuclear reactor. It can also be applied to the calculation of self-shielding factors to correct measurements of microscopic cross-sections by the activation technique, and used for approximate calculations of the resonance integrals in heterogeneous fuel cells. These applications point to the need to develop precise analytical approximations for the Doppler broadening function to be used in the codes that calculate the values of this function. However, the Doppler broadening function is based on a series of approximations proposed by Beth–Placzek. In this work a relaxation of these approximations is proposed, generating an additional term in the form of an integral. Analytical solutions of this additional term are discussed. The results obtained show that the new term is important for high temperatures.
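For reference, the Doppler broadening function under discussion can be evaluated by brute-force quadrature from its standard integral form ψ(x, ξ) = ξ/(2√π) · ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy; the analytical approximations mentioned above replace exactly this integral. Grid parameters below are arbitrary choices, not from the paper:

```python
import numpy as np

def psi(x, xi, y_max=200.0, n=200001):
    """Doppler broadening function via trapezoidal quadrature."""
    y = np.linspace(-y_max, y_max, n)
    dy = y[1] - y[0]
    f = np.exp(-(xi ** 2) * (x - y) ** 2 / 4.0) / (1.0 + y ** 2)
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dy
    return xi / (2.0 * np.sqrt(np.pi)) * integral

# Sanity check: as xi grows, the Gaussian approaches a delta function,
# so psi(x, xi) tends to the natural line shape 1 / (1 + x^2).
print(psi(0.0, 50.0))
```

At high temperature ξ is small and the Gaussian is broad, which is precisely the regime where corrections to approximate evaluations of this integral matter most.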
Evaluation of Gaussian approximations for data assimilation in reservoir models
Iglesias, Marco A.; Law, Kody J H; Stuart, Andrew M.
2013-01-01
is fundamental for the optimal management of reservoirs. Unfortunately, due to the large-scale highly nonlinear properties of standard reservoir models, characterizing the posterior is computationally prohibitive. Instead, more affordable ad hoc techniques, based
Approximating prediction uncertainty for random forest regression models
John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne
2016-01-01
Machine learning approaches such as random forest have increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
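One common surrogate for such prediction uncertainty, sketched here with numpy only, is the spread of predictions across an ensemble trained on bootstrap resamples. This is the general bagging idea rather than the paper's specific method; k-NN base learners stand in for trees, and all parameters are illustrative:

```python
import numpy as np

# Toy regression data: noisy sine curve.
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, 400)
y = np.sin(X) + rng.normal(0.0, 0.1, 400)

def knn_predict(x_tr, y_tr, x0, k=10):
    """Mean of the k nearest training targets: a stand-in base learner."""
    idx = np.argsort(np.abs(x_tr - x0))[:k]
    return y_tr[idx].mean()

# Train B base learners on bootstrap resamples; the spread of their
# predictions at a new point serves as an uncertainty proxy.
B, x0 = 200, 0.5
preds = []
for _ in range(B):
    b = rng.integers(0, len(X), len(X))   # bootstrap resample indices
    preds.append(knn_predict(X[b], y[b], x0))
preds = np.array(preds)
print(preds.mean(), preds.std())
```

In a real random forest the same quantity can be read off from the per-tree predictions of the fitted ensemble.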
Enhanced Vehicle Beddown Approximations for the Improved Theater Distribution Model
2014-03-27
processed utilizing a heuristic routing and scheduling procedure the authors called the Airlift Planning Algorithm (APA). The linear programming model... LINGO 13 environment. The model is then solved by LINGO 13 and solution data is passed back to the Excel environment in a readable format. All original... DSS is relatively unchanged when solutions to the ITDM are referenced for comparison testing. Readers are encouraged to see Appendix I for ITDM VBA
An Improved QTM Subdivision Model with Approximate Equal-area
ZHAO Xuesheng
2016-01-01
To overcome the defect of large area deformation in the traditional QTM subdivision model, an improved subdivision model is proposed, based on the “parallel method” and the idea of equal-area subdivision with changed longitude-latitude. By adjusting the position of the parallel, this model ensures that the grid area between two adjacent parallels is combined with no variation, so as to control the area variation and its accumulation in the QTM grid. The experimental results show that this improved model not only retains some advantages of the traditional QTM model (such as simple calculation and a clear correspondence with the longitude/latitude grid, etc.), but also has the following advantages: ① the improved model converges better than the traditional one; the ratio area_max/min finally converges to 1.38, far less than the 1.73 of the “parallel method”; ② the grid units in middle and low latitude regions have small area variations and successive distributions; meanwhile, with increasing subdivision level, the grid units with large variations gradually concentrate toward the poles; ③ the area variation of a grid unit does not accumulate with increasing subdivision level.
Lublinsky, Michael
2004-01-01
A simple analytic expression for the nonsinglet structure function f_NS is given. The expression is derived from the result of Ermolaev, Manaenkov, and Ryskin obtained by low-x resummation of the quark ladder diagrams in the double logarithmic approximation of perturbative QCD.
Approximation Algorithms for the Highway Problem under the Coupon Model
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, it wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items; each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for the item is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. Most previous works considered the item pricing problem under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a “loss-leader” and showed that the seller can obtain more total profit when pi < 0 is allowed than when it is not. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of items), and give approximation algorithms for both problems when the smallest valuation is s and the largest valuation is l (an [s, l]-valuation setting) or all valuations are identical (a single valuation setting).
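A brute-force sketch of the line highway problem on a toy instance makes the setup concrete (invented data; the paper's approximation algorithms replace this exhaustive search, which is exponential in the number of items). Items are positions on a line, each customer wants a contiguous interval and buys it iff the summed price is at most her valuation; costs are taken as zero, so profit equals revenue:

```python
from itertools import product

# Each customer: ((lo, hi), valuation) wanting items lo..hi-1 on the line.
customers = [((0, 2), 3.0),   # wants items 0,1
             ((1, 3), 4.0),   # wants items 1,2
             ((0, 3), 5.0)]   # wants items 0,1,2
n_items = 3
grid = [0.0, 1.0, 2.0, 3.0]   # candidate prices per item

best = 0.0
for prices in product(grid, repeat=n_items):
    revenue = sum(sum(prices[lo:hi])
                  for (lo, hi), v in customers
                  if sum(prices[lo:hi]) <= v)
    best = max(best, revenue)
print(best)  # 12.0: prices (1, 2, 2) make every customer pay exactly her valuation
```

Here the optimum extracts the full surplus 3 + 4 + 5 = 12; in general, and especially with negative prices (loss-leaders) allowed, finding optimal prices is the hard part.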
Approximate Solutions of Interactive Dynamic Influence Diagrams Using Model Clustering
Zeng, Yifeng; Doshi, Prashant; Qiongyu, Cheng
2007-01-01
Interactive dynamic influence diagrams (I-DIDs) offer a transparent and semantically clear representation for the sequential decision-making problem over multiple time steps in the presence of other interacting agents. Solving I-DIDs exactly involves knowing the solutions of possible models...
Investigation of some approximation used in promptly emitted particle models
Leray, S.; La Rana, G.; Lucas, R.; Ngo, C.; Barranco, M.; Pi, M.; Vinas, X.
1984-01-01
We investigate three effects which can be taken into account in a model for promptly emitted particles: the Pauli blocking, the velocity of the window separating the two ions with respect to each of the fragments and the spatial extension of the window
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
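The factor-plus-sparse-residual idea can be sketched in a few lines in the spirit of POET-style estimators, not the paper's exact procedure: remove the top principal-component factors from the sample covariance, then soft-threshold the residual covariance toward sparsity. All dimensions and the threshold below are illustrative:

```python
import numpy as np

# Simulate a one-factor model: X = F B' + U, with idiosyncratic noise U.
rng = np.random.default_rng(2)
T, p, K = 500, 20, 1
B = rng.standard_normal((p, K))
F = rng.standard_normal((T, K))
U = rng.standard_normal((T, p)) * 0.5
X = F @ B.T + U

# Sample covariance and its top-K principal-component part.
S = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(S)              # ascending eigenvalues
lead = vecs[:, -K:] * np.sqrt(vals[-K:])    # top-K loadings
S_factor = lead @ lead.T
R = S - S_factor                            # residual covariance

# Soft-threshold off-diagonal residuals; keep variances untouched.
tau = 0.1
R_thr = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)
np.fill_diagonal(R_thr, np.diag(R))
Sigma_hat = S_factor + R_thr
print(Sigma_hat.shape)
```

The adaptive thresholding of Cai and Liu (2011) refines this by letting the threshold vary entry by entry with the estimated variance of each covariance estimate.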
OBE approximation of NN scattering in bag-model QCD
Bakker, B.L.G.; Maslow, J.N.; Weber, H.J.
1981-01-01
A partial-wave helicity-state analysis of nucleon-nucleon scattering is carried out in momentum space. Its basis is a one-boson and two-pion exchange amplitude from bag-model quantum chromodynamics. The resulting phase shifts and bound-state parameters of the deuteron are compared with data up to laboratory energies of ≈350 MeV. (orig.)
Modeling opinion dynamics: Theoretical analysis and continuous approximation
Pinasco, Juan Pablo; Semeshenko, Viktoriya; Balenzuela, Pablo
2017-01-01
Highlights: • We study a simple model of persuasion dynamics with long range pairwise interactions. • The continuous limit of the master equation is a nonlinear, nonlocal, first order partial differential equation. • We compute the analytical solutions of this equation and compare them with simulations of the dynamics. - Abstract: Frequently we revise our first opinions after talking with other individuals, because we become convinced. Argumentation is a verbal and social process aimed at convincing; it includes conversation and persuasion, and agreement is reached because new arguments are incorporated. Despite the wide range of mathematical approaches to opinion formation, there are no analytically solvable models of opinion dynamics with nonlocal pair interactions. In this paper we present a novel analytical framework developed to solve master equations with non-local kernels. For this we used a simple model of opinion formation where individuals tend to become more similar after each interaction, no matter their opinion differences, giving rise to a nonlinear differential master equation with non-local terms. Simulation results show excellent agreement with the results obtained by the theoretical estimation.
APPROX, 1-D and 2-D Function Approximation by Polynomials, Splines, Finite Elements Method
Tollander, Bengt
1975-01-01
1 - Nature of physical problem solved: Approximates one- and two-dimensional functions using different forms of the approximating function: polynomials, rational functions, splines and/or the finite element method. Different kinds of transformations of the dependent and/or independent variables can easily be made by data cards using a FORTRAN-like language. 2 - Method of solution: Approximations by polynomials, splines and/or the finite element method are made in the L2 norm using the least squares method, by which the answer is given directly. For rational functions in one dimension, the result, given in the L∞ norm, is achieved by iteratively moving the zero points of the error curve. For rational functions in two dimensions, the norm is L2 and the result is achieved by iteratively changing the coefficients of the denominator and then solving for the coefficients of the numerator by the least squares method. The transformation of the dependent and/or independent variables is made by compiling the given transform data card(s) into an array of integers from which the transformation can be made.
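The direct least-squares step can be sketched as follows: a generic L2 polynomial fit via the Vandermonde system with numpy, standing in for the original FORTRAN (function and degree are illustrative):

```python
import numpy as np

# L2 (least-squares) polynomial approximation of f(x) = e^x on [0, 1]:
# build the Vandermonde matrix and solve the least-squares problem directly,
# with no iteration required.
x = np.linspace(0.0, 1.0, 50)
f = np.exp(x)

deg = 3
V = np.vander(x, deg + 1)                  # columns x^3, x^2, x, 1
coeffs, *_ = np.linalg.lstsq(V, f, rcond=None)

# Maximum pointwise error of the fit over the sample points.
err = float(np.max(np.abs(V @ coeffs - f)))
print(err)
```

The iterative L∞ (rational) case differs precisely in that no such one-shot solve exists: the error curve's zero points must be moved until the error peaks equalize.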
Zhou, Chenyi; Guo, Hong
2017-01-01
We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.
Approximated calculation of the vacuum wave function and vacuum energy of the LGT with RPA method
Hui Ping
2004-01-01
The coupled cluster method is improved with the random phase approximation (RPA) to calculate the vacuum wave function and vacuum energy of (2+1)-D SU(2) lattice gauge theory. In this calculation, the trial wave function is composed of single-hollow graphs. The calculated vacuum wave functions show very good scaling behavior in the weak coupling region 1/g² > 1.2 from the third order to the sixth order, and the vacuum energy obtained with the RPA method is lower than that obtained without it, which means that this method is the more efficient one.
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low-rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximate algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximate and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a speed-up factor as high as 150 is observed). A non-recursive solution of the inelastic NEGF transport equations for a 1000 nm long resistor on standard hardware nicely illustrates the capability of this new method.
FDTD subcell graphene model beyond the thin-film approximation
Valuev, Ilya; Belousov, Sergei; Bogdanova, Maria; Kotov, Oleg; Lozovik, Yurii
2017-01-01
A subcell technique for calculation of optical properties of graphene with the finite-difference time-domain (FDTD) method is presented. The technique takes into account the surface conductivity of graphene which allows the correct calculation of its dispersive response for arbitrarily polarized incident waves interacting with the graphene. The developed technique is verified for a planar graphene sheet configuration against the exact analytical solution. Based on the same test case scenario, we also show that the subcell technique demonstrates a superior accuracy and numerical efficiency with respect to the widely used thin-film FDTD approach for modeling graphene. We further apply our technique to the simulations of a graphene metamaterial containing periodically spaced graphene strips (graphene strip-grating) and demonstrate good agreement with the available theoretical results.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Hartree-Fock-Bogolubov approximation in the models with general four-fermion interaction
Bogolubov, N.N. Jr.; Soldatov, A.V.
1995-12-01
The foundation of this work was established by the lectures of Prof. N.N. Bogolubov (senior) written at the beginning of 1990. We would like to develop some of his ideas connected with the Hartree-Fock-Bogolubov method and to show how this approximation works in connection with general equations for Green's functions with source terms for a sufficiently general model Hamiltonian of four-fermion interaction type, and how, for example, to obtain some results of superconductivity theory by means of this method. (author). 5 refs
An approximate framework for quantum transport calculation with model order reduction
Chen, Quan, E-mail: quanchen@eee.hku.hk [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Li, Jun [Department of Chemistry, The University of Hong Kong (Hong Kong); Yam, Chiyung [Beijing Computational Science Research Center (China); Zhang, Yu [Department of Chemistry, The University of Hong Kong (Hong Kong); Wong, Ngai [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Chen, Guanhua [Department of Chemistry, The University of Hong Kong (Hong Kong)
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
Chen Yuan; Song Chuangchuang; Xiang Ying
2010-01-01
In this paper, we apply the two-time Green's function method and provide a simple way to study the magnetic properties of one-dimensional spin-(S,s) Heisenberg ferromagnets. The magnetic susceptibility and correlation functions are obtained by using the Tyablikov decoupling approximation. Our results show that the magnetic susceptibility and correlation length are monotonically decreasing functions of temperature regardless of the mixed spins. It is found that in the case of S=s, our results for the one-dimensional mixed-spin model reduce to those of the isotropic ferromagnetic Heisenberg chain over the whole temperature region. Our results for the susceptibility are in agreement with those obtained by other theoretical approaches. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
A deterministic width function model
C. E. Puente
2003-01-01
Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features, such as their overall shape and texture and the observed power-law scaling of their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.
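A fractal interpolating function of the kind the FM method builds on can be generated with a chaos-game iteration; the interpolation nodes and vertical scaling factors below are arbitrary illustrations, not data from the paper:

```python
import numpy as np

# Chaos-game sketch of a fractal interpolation function through three
# data points; the vertical scalings d_i (|d_i| < 1) control texture.
xs = np.array([0.0, 0.5, 1.0])            # interpolation nodes
ys = np.array([0.0, 0.8, 0.2])
d = np.array([0.4, -0.4])                 # vertical scaling factors

# Affine maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i) chosen so
# that each map sends the endpoints onto consecutive interpolation nodes.
N = len(xs) - 1
a = (xs[1:] - xs[:-1]) / (xs[-1] - xs[0])
e = xs[:-1] - a * xs[0]
c = (ys[1:] - ys[:-1] - d * (ys[-1] - ys[0])) / (xs[-1] - xs[0])
f = ys[:-1] - c * xs[0] - d * ys[0]

rng = np.random.default_rng(0)
x, y = xs[0], ys[0]                       # start on the attractor
pts = []
for _ in range(20000):
    i = rng.integers(N)                   # pick a map at random
    x, y = a[i] * x + e[i], c[i] * x + d[i] * y + f[i]
    pts.append((x, y))
pts = np.array(pts)
```

The resulting point cloud traces the graph of a continuous function through the nodes, whose roughness is tuned by the d_i.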
The approximation function of bridge deck vibration derived from the measured eigenmodes
Sokol Milan
2017-12-01
This article deals with a method of acquiring approximate displacement vibration functions. The input values are discrete, experimentally obtained mode shapes. A new improved approximation method based on the modal vibrations of the deck is derived using the least-squares method. An alternative approach employed in this paper is to approximate the displacement vibration function by a sum of sine functions, whose periodicity is determined by spectral analysis adapted for non-uniformly sampled data and whose scale and phase parameters are estimated, as usual, by the least-squares method. Moreover, this periodic component is supplemented by a cubic regression spline (fitted to its residuals) that captures individual displacements between piers. The statistical evaluation of the stiffness parameter is performed using more vertical modes obtained from experimental results. The previous method (Sokol and Flesch, 2005), which was derived for the areas near the piers, has been extended to the whole length of the bridge. The experimental data describing the mode shapes are not appropriate for direct use; in particular, the higher derivatives calculated from these data are very sensitive to data precision.
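The sum-of-sines fitting step can be sketched as follows; the "mode shape" signal and the candidate frequencies are made-up stand-ins for the experimental data, and the frequencies are taken as known rather than found by spectral analysis:

```python
import numpy as np

# Fit amplitudes of a sum of sine terms to a sampled deflection shape
# by ordinary least squares (frequencies assumed known here).
x = np.linspace(0.0, 1.0, 200)                 # normalized span coordinate
signal = 1.5 * np.sin(np.pi * x) + 0.4 * np.sin(3 * np.pi * x)

freqs = [1, 2, 3]                              # candidate mode numbers
A = np.column_stack([np.sin(k * np.pi * x) for k in freqs])
amps, *_ = np.linalg.lstsq(A, signal, rcond=None)

residual = np.max(np.abs(A @ amps - signal))
```

In the paper's setting a cubic regression spline would then be fitted to the residuals of this periodic component.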
Long-range-corrected Rung 3.5 density functional approximations
Janesko, Benjamin G.; Proynov, Emil; Scalmani, Giovanni; Frisch, Michael J.
2018-03-01
Rung 3.5 functionals are a new class of approximations for density functional theory. They provide a flexible intermediate between exact (Hartree-Fock, HF) exchange and semilocal approximations for exchange. Existing Rung 3.5 functionals inherit semilocal functionals' limitations in atomic cores and density tails. Here we address those limitations using range-separated admixture of HF exchange. We present three new functionals. LRC-ωΠLDA combines long-range HF exchange with short-range Rung 3.5 ΠLDA exchange. SLC-ΠLDA combines short- and long-range HF exchange with middle-range ΠLDA exchange. LRC-ωΠLDA-AC incorporates a combination of HF, semilocal, and Rung 3.5 exchange in the short range, based on an adiabatic connection. We test these in a new Rung 3.5 implementation including up to analytic fourth derivatives. LRC-ωΠLDA and SLC-ΠLDA improve atomization energies and reaction barriers by a factor of 8 compared to the full-range ΠLDA. LRC-ωΠLDA-AC brings further improvement approaching the accuracy of standard long-range corrected schemes LC-ωPBE and SLC-PBE. The new functionals yield highest occupied orbital energies closer to experimental ionization potentials and describe correctly the weak charge-transfer complex of ethylene and dichlorine and the hole-spin distribution created by an Al defect in quartz. This study provides a framework for more flexible range-separated Rung 3.5 approximations.
Li, Chen; Requist, Ryan; Gross, E. K. U.
2018-02-01
We perform model calculations for a stretched LiF molecule, demonstrating that nonadiabatic charge transfer effects can be accurately and seamlessly described within a density functional framework. In alkali halides like LiF, there is an abrupt change in the ground state electronic distribution due to an electron transfer at a critical bond length R = Rc, where an avoided crossing of the lowest adiabatic potential energy surfaces calls the validity of the Born-Oppenheimer approximation into doubt. Modeling the R-dependent electronic structure of LiF within a two-site Hubbard model, we find that nonadiabatic electron-nuclear coupling produces a sizable elongation of the critical Rc by 0.5 bohr. This effect is very accurately captured by a simple and rigorously derived correction, with an M⁻¹ prefactor (M being the reduced nuclear mass), to the exchange-correlation potential in density functional theory. Since this nonadiabatic term depends on gradients of the nuclear wave function and conditional electronic density, ∇Rχ(R) and ∇Rn(r, R), it couples the Kohn-Sham equations at neighboring R points. Motivated by an observed localization of nonadiabatic effects in nuclear configuration space, we propose a local conditional density approximation—an approximation that reduces the search for nonadiabatic density functionals to the search for a single function y(n).
Application of the resonating Hartree-Fock random phase approximation to the Lipkin model
Nishiyama, S.; Ishida, K.; Ido, M.
1996-01-01
We have applied the resonating Hartree-Fock (Res-HF) approximation to the exactly solvable Lipkin model by utilizing a newly developed orbital-optimization algorithm. The Res-HF wave function was superposed by two Slater determinants (S-dets) which give two corresponding local energy minima of monopole ''deformations''. The self-consistent Res-HF calculation gives an excellent ground-state correlation energy. There exist excitations due to small vibrational fluctuations of the orbitals and mixing coefficients around their stationary values. They are described by a new approximation called the resonating Hartree-Fock random phase approximation (Res-HF RPA). Matrices of the second-order variation of the Res-HF energy have the same structures as those of the Res-HF RPA's matrices. The quadratic steepest descent of the Res-HF energy in the orbital optimization is considered to include certainly both effects of RPA-type fluctuations up to higher orders and their mode-mode couplings. It is a very important and interesting task to apply the Res-HF RPA to the Lipkin model with the use of the stationary values and to prove the above argument. It turns out that the Res-HF RPA works far better than the usual HF RPA and the renormalized one. We also show some important features of the Res-HF RPA. (orig.)
Many-body perturbation theory using the density-functional concept: beyond the GW approximation
Bruneval, Fabien; Sottile, Francesco; Olevano, Valerio; Del Sole, Rodolfo; Reining, Lucia
2005-01-01
We propose an alternative formulation of Many-Body Perturbation Theory that uses the density-functional concept. Instead of the usual four-point integral equation for the polarizability, we obtain a two-point one, that leads to excellent optical absorption and energy loss spectra. The corresponding three-point vertex function and self-energy are then simply calculated via an integration, for any level of approximation. Moreover, we show the direct impact of this formulation on the time-depend...
Lobanov, Yu.Yu.; Shidkov, E.P.
1987-01-01
A method for the numerical evaluation of path integrals in Euclidean quantum mechanics without lattice discretization is elaborated. The method is based on the representation of these integrals in the form of functional integrals with respect to the conditional Wiener measure and on the use of derived approximation formulas that are exact on a class of polynomial functionals of a given degree. By computations of non-perturbative characteristics concerning the topological structure of the vacuum, the advantages of this method over lattice Monte Carlo calculations are demonstrated
Theory and application of an approximate model of saltwater upconing in aquifers
McElwee, C.; Kemblowski, M.
1990-01-01
Motion and mixing of salt water and fresh water are vitally important for water-resource development throughout the world. An approximate model of saltwater upconing in aquifers is developed, which results in three non-linear coupled equations for the freshwater zone, the saltwater zone, and the transition zone. The description of the transition zone uses the concept of a boundary layer. This model invokes some assumptions to give a reasonably tractable model, considerably better than the sharp interface approximation but considerably simpler than a fully three-dimensional model with variable density. We assume the validity of the Dupuit-Forchheimer approximation of horizontal flow in each layer. Vertical hydrodynamic dispersion into the base of the transition zone is assumed and concentration of the saltwater zone is assumed constant. Solute in the transition zone is assumed to be moved by advection only. Velocity and concentration are allowed to vary vertically in the transition zone by using shape functions. Several numerical techniques can be used to solve the model equations, and simple analytical solutions can be useful in validating the numerical solution procedures. We find that the model equations can be solved with adequate accuracy using the procedures presented. The approximate model is applied to the Smoky Hill River valley in central Kansas. This model can reproduce earlier sharp interface results as well as evaluate the importance of hydrodynamic dispersion for feeding salt water to the river. We use a wide range of dispersivity values and find that unstable upconing always occurs. Therefore, in this case, hydrodynamic dispersion is not the only mechanism feeding salt water to the river. Calculations imply that unstable upconing and hydrodynamic dispersion could be equally important in transporting salt water. For example, if groundwater flux to the Smoky Hill River were only about 40% of its expected value, stable upconing could exist where
Bricka, M.
1962-03-01
This report addresses the problem of determining a neutron spectrum by using a set of detectors. The spectrum approximation method based on a polygonal function is studied in particular. The author shows that the coefficients of the usual mathematical model can be simply formulated and assessed. The study of spectrum approximation by a polygonal function shows that the dose can be expressed as a linear function of the activities of the different detectors [fr]
Exact solutions for fermionic Green's functions in the Bloch-Nordsieck approximation of QED
Kernemann, A.; Stefanis, N.G.
1989-01-01
A set of new closed-form solutions for fermionic Green's functions in the Bloch-Nordsieck approximation of QED is presented. A manifestly covariant phase-space path-integral method is applied for calculating the n-fermion Green's function in a classical external field. In the case of one and two fermions, explicit expressions for the full Green's functions are analytically obtained, with renormalization carried out in the modified minimal subtraction scheme. The renormalization constants and the corresponding anomalous dimensions are determined. The mass-shell behavior of the two-fermion Green's function is investigated in detail. No assumptions are made concerning the structure of asymptotic states and no IR cutoff is used in the calculations
Otero, F A; Frontini, G L; Elicabe, G E
2011-01-01
An analytic model for the scattering of a spherical particle with spherical inclusions has been proposed under the RG approximation. The model can be used without limitations to describe an X-ray scattering experiment; for light scattering, however, several conditions must be fulfilled. Based on this model, an inverse methodology is proposed to estimate the radii of the host particle and inclusions, the number of inclusions, and the distance distribution functions (DDFs) of the distances between inclusions and of the distances between inclusions and the origin of coordinates. The methodology is numerically tested on a light scattering example in which the host particle is eliminated by matching the refractive indices of the host particle and the medium. The results obtained for this cluster particle are very satisfactory.
Collective excitations in the Penson-Kolb model: A generalized random-phase-approximation study
Roy, G.K.; Bhattacharyya, B.
1997-01-01
The evolution of the superconducting ground state of the half-filled Penson-Kolb model is examined as a function of the coupling constant using a mean-field approach and the generalized random phase approximation (RPA) in two and three dimensions. On-site singlet pairs hop to compete against single-particle motion in this model, giving the coupling constant a strong momentum dependence. There is a pronounced bandwidth enhancement effect that converges smoothly to a finite value in the strong-coupling (Bose) regime. The low-lying collective excitations evaluated in generalized RPA show a linear dispersion and a gradual crossover from the weak-coupling (BCS) limit to the Bose regime; the mode velocity increases monotonically in sharp contrast to the attractive Hubbard model. Analytical results are derived in the asymptotic limits. copyright 1997 The American Physical Society
On Approximation of Hyper-geometric Function Values of a Special Class
P. L. Ivankov
2017-01-01
Investigations of the arithmetic properties of hyper-geometric function values make it possible to single out two trends, namely Siegel's method and methods based on the effective construction of a linear approximating form; there are also methods combining both approaches. Siegel's method allows one to obtain the most general results concerning the abovementioned problems, and in many cases it has been used to establish the algebraic independence of the values of the corresponding functions. Although the effective methods do not yield propositions of such generality, they nevertheless have some advantages, of which one can distinguish at least two: the higher precision of the quantitative results obtained by effective methods, and the possibility of studying hyper-geometric functions with irrational parameters. In this paper we apply the effective construction to estimate a measure of the linear independence of hyper-geometric function values over an imaginary quadratic field. The functions themselves were chosen in a special way so as to demonstrate a new approach to the effective construction of a linear approximating form. This approach also makes it possible to extend the well-known effective construction methods of linear approximating forms for poly-logarithms to functions of a more general type. To obtain the arithmetic result we had to establish the linear independence of the functions under consideration over the field of rational functions. It is apparently impossible to directly apply known theorems containing sufficient (and in some cases necessary and sufficient) conditions for the systems of functions appearing in those theorems; for this reason, a special technique has been developed to solve this problem. The paper presents the obtained arithmetic results concerning the values of integral functions, but, with appropriate alterations, the theorems proved can be adapted to
An Emulator Toolbox to Approximate Radiative Transfer Models with Statistical Learning
Juan Pablo Rivera
2015-07-01
Physically-based radiative transfer models (RTMs) help in understanding the processes occurring on the Earth's surface and their interactions with vegetation and atmosphere. When it comes to studying vegetation properties, RTMs allow us to study light interception by plant canopies and are used in the retrieval of biophysical variables through model inversion. However, advanced RTMs can take a long computational time, which makes them unfeasible in many real applications. To overcome this problem, it has been proposed to substitute RTMs with so-called emulators. Emulators are statistical models that approximate the functioning of RTMs and are advantageous in practice because of their computational efficiency and excellent accuracy and flexibility for extrapolation. We hereby present an "Emulator toolbox" that enables analysing multi-output machine learning regression algorithms (MO-MLRAs) on their ability to approximate an RTM. The toolbox is included in the free-access ARTMO MATLAB suite for parameter retrieval and model inversion and currently contains both linear and non-linear MO-MLRAs, namely partial least squares regression (PLSR), kernel ridge regression (KRR) and neural networks (NN). These MO-MLRAs have been evaluated on their precision and speed in approximating the soil-vegetation-atmosphere transfer model SCOPE (Soil Canopy Observation, Photochemistry and Energy balance). SCOPE generates, amongst others, sun-induced chlorophyll fluorescence as the output signal. KRR and NN proved capable of reconstructing fluorescence spectra with great precision: relative errors fell below 0.5% when trained with 500 or more samples, using cross-validation and principal component analysis to alleviate the underdetermination problem. Moreover, NN reconstructed fluorescence spectra about 50 times faster, and KRR about 800 times faster, than SCOPE. The Emulator toolbox is foreseen to open new opportunities in the use of advanced
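The emulation idea can be illustrated with a tiny kernel ridge regression emulator; the "expensive model" here is a cheap stand-in function, and the kernel length scale and regularization are assumed values, not toolbox settings:

```python
import numpy as np

# Minimal KRR emulator: train on a few runs of an "expensive" model,
# then predict its output at new inputs without re-running it.
def rbf(X1, X2, length=0.3):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-d2 / (2 * length ** 2))

def expensive_model(x):          # placeholder for an RTM such as SCOPE
    return np.sin(2 * np.pi * x)

Xtr = np.linspace(0, 1, 40)      # training inputs (model runs)
ytr = expensive_model(Xtr)

lam = 1e-6                       # ridge regularization (assumed)
alpha = np.linalg.solve(rbf(Xtr, Xtr) + lam * np.eye(len(Xtr)), ytr)

Xte = np.linspace(0, 1, 101)
emulated = rbf(Xte, Xtr) @ alpha
err = np.max(np.abs(emulated - expensive_model(Xte)))
```

A multi-output emulator of fluorescence spectra would simply replace the scalar targets with spectral vectors (one weight vector per output band).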
Hozejowski Leszek
2012-04-01
The paper is devoted to the computational problem of predicting a local heat transfer coefficient from experimental temperature data. The experimental part refers to boiling flow of a refrigerant in a minichannel. Heat is dissipated from the heating alloy to the flowing liquid due to forced convection. The mathematical model of the problem consists of the governing Poisson equation and the proper boundary conditions. Accurate results require smoothing of the measurements, which was achieved using Trefftz functions: the measurements were approximated with a linear combination of Trefftz functions. Because the measurement errors are known in the computational procedure, it was possible to smooth the data and also to reduce the residuals of the approximation on the boundaries.
Schneiderbauer, Simon; Saeedipour, Mahdi
2018-02-01
Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
Combi, Carlo; Mantovani, Matteo; Sabaini, Alberto; Sala, Pietro; Amaddeo, Francesco; Moretti, Ugo; Pozzi, Giuseppe
2015-07-01
Functional dependencies (FDs) typically represent associations over facts stored by a database, such as "patients with the same symptom get the same therapy." In more recent years, some extensions have been introduced to represent both temporal constraints (temporal functional dependencies - TFDs), as in "for any given month, patients with the same symptom must have the same therapy, but their therapy may change from one month to the next one," and approximate properties (approximate functional dependencies - AFDs), as in "patients with the same symptom generally have the same therapy." An AFD holds for most of the facts stored by the database, enabling some data to deviate from the defined property: the percentage of data which may violate the given property is user-defined. According to this scenario, in this paper we introduce approximate temporal functional dependencies (ATFDs) and use them to mine clinical data. Specifically, we considered the need for deriving new knowledge from psychiatric and pharmacovigilance data. ATFDs may be defined and measured either on temporal granules (e.g. grouping data by day, week, month, year) or on sliding windows (e.g. a fixed-length time interval which moves over the time axis): in this regard, we propose and discuss some specific and efficient data mining techniques for ATFDs. We also developed two running prototypes and showed the feasibility of our proposal by mining two real-world clinical data sets. The clinical interest of the dependencies derived from the psychiatry and pharmacovigilance domains confirms the soundness and usefulness of the proposed techniques. Copyright © 2014 Elsevier Ltd. All rights reserved.
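The support measure of an AFD can be sketched as follows; the table, the dependency symptom → therapy, and the tolerance threshold are made-up illustrations, not the paper's algorithm:

```python
from collections import defaultdict

# Measure how well the FD "symptom -> therapy" holds on a fact table:
# rows that disagree with the majority therapy of their symptom violate it.
rows = [
    ("p1", "cough", "syrup"),
    ("p2", "cough", "syrup"),
    ("p3", "cough", "antibiotic"),   # deviating fact
    ("p4", "fever", "antipyretic"),
    ("p5", "fever", "antipyretic"),
]

groups = defaultdict(list)
for _, symptom, therapy in rows:
    groups[symptom].append(therapy)

# Keep, per symptom, the rows with the most common therapy; the rest violate.
kept = sum(max(g.count(t) for t in set(g)) for g in groups.values())
support = kept / len(rows)           # fraction of rows satisfying the AFD
holds = support >= 0.75              # user-defined threshold (assumed)
```

A temporal variant (an ATFD) would first partition the rows by granule (e.g. month) or sliding window and apply the same count within each partition.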
Computational modeling of fully-ionized, magnetized plasmas using the fluid approximation
Schnack, Dalton
2005-10-01
Strongly magnetized plasmas are rich in spatial and temporal scales, making a computational approach useful for studying these systems. The most accurate model of a magnetized plasma is based on a kinetic equation that describes the evolution of the distribution function for each species in six-dimensional phase space. However, the high dimensionality renders this approach impractical for computations over long time scales in relevant geometry. Fluid models, derived by taking velocity moments of the kinetic equation [1] and truncating (closing) the hierarchy at some level, are an approximation to the kinetic model. The reduced dimensionality allows a wider range of spatial and/or temporal scales to be explored. Several approximations have been used [2-5]. Successful computational modeling requires understanding the ordering and closure approximations, the fundamental waves supported by the equations, and the numerical properties of the discretization scheme. We review and discuss several ordering schemes, their normal modes, and several algorithms that can be applied to obtain a numerical solution. The implementation of kinetic parallel closures is also discussed [6]. [1] S. Chapman and T.G. Cowling, "The Mathematical Theory of Non-Uniform Gases", Cambridge University Press, Cambridge, UK (1939). [2] R.D. Hazeltine and J.D. Meiss, "Plasma Confinement", Addison-Wesley Publishing Company, Redwood City, CA (1992). [3] L.E. Sugiyama and W. Park, Physics of Plasmas 7, 4644 (2000). [4] J.J. Ramos, Physics of Plasmas 10, 3601 (2003). [5] P.J. Catto and A.N. Simakov, Physics of Plasmas 11, 90 (2004). [6] E.D. Held et al., Phys. Plasmas 11, 2419 (2004)
Eikonal Approximation in AdS/CFT From Shock Waves to Four-Point Functions
Cornalba, L; Costa, Miguel S; Penedones, Joao; Cornalba, Lorenzo; Costa, M S; Penedones, J; Schiappa, Ricardo
2007-01-01
We initiate a program to generalize the standard eikonal approximation to compute amplitudes in Anti-de Sitter spacetimes. Inspired by the shock wave derivation of the eikonal amplitude in flat space, we study the two-point function E ≡ ⟨O₁O₁⟩_shock in the presence of a shock wave in Anti-de Sitter, where O₁ is a scalar primary operator in the dual conformal field theory. At tree level in the gravitational coupling, we relate the shock two-point function E to the discontinuity across a kinematical branch cut of the conformal field theory four-point function A ≡ ⟨O₁O₂O₁O₂⟩, where O₂ creates the shock geometry in Anti-de Sitter. Finally, we extend the above results by computing E in the presence of shock waves along the horizon of Schwarzschild BTZ black holes. This work gives new tools for the study of Planckian physics in Anti-de Sitter spacetimes.
Spherical Bessel transform via exponential sum approximation of spherical Bessel function
Ikeno, Hidekazu
2018-02-01
A new algorithm for the numerical evaluation of the spherical Bessel transform is proposed in this paper. In this method, the spherical Bessel function is approximately represented as an exponential sum with complex parameters. This is obtained by expressing an integral representation of the spherical Bessel function in the complex plane, and discretizing contour integrals along steepest descent paths and a contour path parallel to the real axis using a numerical quadrature rule with the double-exponential transformation. The number of terms in the expression is reduced using the modified balanced truncation method. The residual part of the integrand is also expanded in exponential functions using a Prony-like method. The spherical Bessel transform can then be evaluated analytically at arbitrary points in the half-open interval.
The phase transition lines in pair approximation for the basic reinfection model SIRI
Stollenwerk, Nico; Martins, Jose; Pinto, Alberto
2007-01-01
For a spatial stochastic epidemic model we investigate, in the pair approximation scheme, the differential equations for the moments. The basic reinfection model of susceptible-infected-recovered-reinfected (SIRI) type is analysed, and its phase transition lines are calculated analytically in this pair approximation.
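The transition structure of the SIRI model can be made concrete with a mean-field (not pair-approximation) integration; all rates and the explicit Euler scheme below are illustrative choices:

```python
# Mean-field SIRI reinfection model: S -> I (infection), I -> R (recovery),
# R -> I (reinfection at a reduced rate). Fractions S + I + R sum to one.
beta, beta_re, gamma = 2.0, 0.5, 1.0       # infection, reinfection, recovery
S, I, R = 0.99, 0.01, 0.0
dt, steps = 0.01, 5000

for _ in range(steps):
    inf = beta * S * I
    reinf = beta_re * R * I
    rec = gamma * I
    S += dt * (-inf)
    I += dt * (inf + reinf - rec)
    R += dt * (rec - reinf)

total = S + I + R                          # conserved up to round-off
```

The pair approximation studied in the paper replaces these single-site densities with equations for pair densities such as [SI] and [RI], closing the hierarchy at the pair level.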
Guliyev, Namig; Ismailov, Vugar
2016-01-01
The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hid...
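The single-hidden-layer sigmoidal approximation discussed above can be sketched numerically; unlike the paper's one-neuron construction, this illustration fixes several hidden units on a grid (an assumed setup) and solves only for the output weights by least squares:

```python
import numpy as np

# Approximate a continuous function on an interval by a one-hidden-layer
# sigmoidal network with fixed hidden weights and least-squares output weights.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1, 1, 200)
target = np.abs(x)                       # continuous function to approximate

centers = np.linspace(-1, 1, 15)         # assumed hidden-unit placement
H = sigmoid(20.0 * (x[:, None] - centers[None, :]))   # steep sigmoids
H = np.column_stack([np.ones_like(x), H])             # bias term
w, *_ = np.linalg.lstsq(H, target, rcond=None)

err = np.max(np.abs(H @ w - target))
```

Increasing the number of hidden units (or, in the paper's constructive approach, tuning a single neuron's parameters) drives the approximation error down.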
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization and reliability-based design optimization are among the methodologies employed to take the uncertainties of a system into account at the design stage. For applying such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; further, the results of the sensitivity analysis, which is needed for the search direction during the optimization process, should also be accurate. The aim of this study is to employ the function approximation moment method in the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formulation in integral form is efficient for evaluating sensitivity because no additional function calculations are needed once the failure probability or statistical moments have been calculated.
Mejia-Rodriguez, Daniel; Trickey, S. B.
2017-11-01
We explore the simplification of widely used meta-generalized-gradient approximation (mGGA) exchange-correlation functionals to the Laplacian level of refinement by use of approximate kinetic-energy density functionals (KEDFs). Such deorbitalization is motivated by the prospect of reducing computational cost while recovering a strictly Kohn-Sham local potential framework (rather than the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been rather successful in solid simulations proves to be inadequate for deorbitalization, but we produce other forms which, with parametrization to Kohn-Sham results (not experimental data) on a small training set, yield rather good results on standard molecular test sets when used to deorbitalize the meta-GGA made very simple, Tao-Perdew-Staroverov-Scuseria, and strongly constrained and appropriately normed functionals. We also study the difference between high-fidelity and best-performing deorbitalizations and discuss possible implications for use in ab initio molecular dynamics simulations of complicated condensed phase systems.
Sanz, Luis; Alonso, Juan Antonio
2017-12-01
In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes acting on different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced systems. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.
Four-quadrant propeller modeling: A low-order harmonic approximation
Haeusler, A.J; Saccon, A.; Hauser, J; Pascoal, A.M.; Aguiar, A.P.
We explore the connection between the propeller thrust, torque, and efficiency curves and the lift and drag curves of the propeller blades. The model originates from a well-known four-quadrant model, based on a sinusoidal approximation...
Fall with linear drag and Wien's displacement law: approximate solution and Lambert function
Vial, Alexandre
2012-01-01
We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution implying the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest for undergraduate students, as they show that some transcendental equations found in physics may be solved without purely numerical methods. Moreover, as will be seen in the case of Wien's displacement law, solutions based on series expansion can be very accurate even with few terms. (paper)
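The transcendental equation behind Wien's displacement law mentioned above is x = 5(1 - exp(-x)), whose exact solution can be written with the Lambert W function as x = 5 + W(-5 e^{-5}). A minimal sketch of the non-purely-numerical route the abstract alludes to, using a simple fixed-point iteration:

```python
import math

def wien_x(n_iter=50):
    """Solve x = 5 * (1 - exp(-x)) by fixed-point iteration."""
    x = 5.0                      # starting guess; exp(-5) is already small
    for _ in range(n_iter):
        x = 5.0 * (1.0 - math.exp(-x))
    return x

x = wien_x()                     # ~4.965114, the Wien constant in reduced units
# Displacement constant b = h*c / (x*k) in m*K (CODATA values for h, c, k)
h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
b = h * c / (x * k)
```

Each iteration corresponds to one step of the series-expansion idea mentioned in the abstract: since exp(-x) is small near x = 5, even a few iterations are very accurate.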
A Method of Approximating Expectations of Functions of Sums of Independent Random Variables
Klass, Michael J.
1981-01-01
Let $X_1, X_2, \cdots$ be a sequence of independent random variables with $S_n = \sum^n_{i = 1} X_i$. Fix $\alpha > 0$. Let $\Phi(\cdot)$ be a continuous, strictly increasing function on $[0, \infty)$ such that $\Phi(0) = 0$ and $\Phi(cx) \leq c^\alpha\Phi(x)$ for all $x > 0$ and all $c \geq 2$. Suppose $a$ is a real number and $J$ is a finite nonempty subset of the positive integers. In this paper we are interested in approximating $E \max_{j \in J} \Phi(|a + S_j|)$. We construct a nu...
Approximate Stream Function wavemaker theory for highly non-linear waves in wave flumes
Zhang, H.W.; Schäffer, Hemming Andreas
2007-01-01
An approximate Stream Function wavemaker theory for highly non-linear regular waves in flumes is presented. This theory is based on an ad hoc unified wave-generation method that combines linear fully dispersive wavemaker theory and wave generation for non-linear shallow water waves. This is done by applying a dispersion correction to the paddle position obtained for non-linear long waves. The method is validated by a number of wave flume experiments while comparing with results of linear wavemaker theory, second-order wavemaker theory and Cnoidal wavemaker theory within its range of application.
Druskin, V.; Lee, Ping [Schlumberger-Doll Research, Ridgefield, CT (United States); Knizhnerman, L. [Central Geophysical Expedition, Moscow (Russian Federation)
1996-12-31
There is now a growing interest in the area of using Krylov subspace approximations to compute the actions of matrix functions. The main application of this approach is the solution of ODE systems, obtained after discretization of partial differential equations by method of lines. In the event that the cost of computing the matrix inverse is relatively inexpensive, it is sometimes attractive to solve the ODE using the extended Krylov subspaces, originated by actions of both positive and negative matrix powers. Examples of such problems can be found frequently in computational electromagnetics.
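A minimal sketch of the standard (polynomial) Krylov-subspace approximation of a matrix-function action, here exp(A) @ b via the Arnoldi process; the extended-Krylov variant described above would additionally use actions of negative powers A^{-1} b. The test matrix, sizes, and subspace dimension are illustrative assumptions.

```python
import numpy as np

def krylov_expm_action(A, b, m=20):
    """Approximate exp(A) @ b from an m-dimensional Krylov subspace (Arnoldi)."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                 # lucky breakdown: exact subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # exp(Hm) of the small projected matrix via its eigendecomposition
    evals, evecs = np.linalg.eig(Hm)
    expHm = (evecs * np.exp(evals)) @ np.linalg.inv(evecs)
    return beta * V[:, :m] @ expHm[:, 0].real

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = -(M @ M.T) / 50.0        # symmetric negative semidefinite, diffusion-like
b = rng.standard_normal(50)
approx = krylov_expm_action(A, b, m=20)
# Reference: exact exp(A) @ b through the symmetric eigendecomposition
w, Q = np.linalg.eigh(A)
exact = Q @ (np.exp(w) * (Q.T @ b))
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

This is the typical use case in method-of-lines ODE solves: only matrix-vector products with A are needed, and the small m-by-m projected problem carries the matrix function.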
Garza, Alejandro J.
Perhaps the most important approximations to the electronic structure problem in quantum chemistry are those based on coupled cluster and density functional theories. Coupled cluster theory has been called the ``gold standard'' of quantum chemistry due to the high accuracy that it achieves for weakly correlated systems. Kohn-Sham density functionals based on semilocal approximations are, without a doubt, the most widely used methods in chemistry and materials science because of their high accuracy/cost ratio. The root of the success of coupled cluster and density functionals is their ability to efficiently describe the dynamic part of the electron correlation. However, both traditional coupled cluster and density functional approximations may fail catastrophically when substantial static correlation is present. This severely limits the applicability of these methods to a plethora of important chemical and physical problems, such as the description of bond breaking, transition states, transition metal-, lanthanide- and actinide-containing compounds, and superconductivity. In an attempt to tackle this problem, nonstandard (single-reference) coupled cluster-based techniques that aim to describe static correlation have recently been developed: pair coupled cluster doubles (pCCD) and singlet-paired coupled cluster doubles (CCD0). The ability to describe static correlation in pCCD and CCD0 comes, however, at the expense of important amounts of dynamic correlation, so that the high accuracy of standard coupled cluster becomes unattainable. Thus, the reliable and efficient description of static and dynamic correlation in a simultaneous manner remains an open problem for quantum chemistry and many-body theory in general. In this thesis, different ways to combine pCCD and CCD0 with density functionals in order to describe static and dynamic correlation simultaneously (and efficiently) are explored. The combination of wavefunction and density functional methods has a long
Impaired neural networks for approximate calculation in dyscalculic children: a functional MRI study
Dosch Mengia
2006-09-01
Background: Developmental dyscalculia (DD) is a specific learning disability affecting the acquisition of mathematical skills in children with otherwise normal general intelligence. The goal of the present study was to examine cerebral mechanisms underlying DD. Methods: Eighteen children with DD aged 11.2 ± 1.3 years and twenty age-matched typically achieving schoolchildren were investigated using functional magnetic resonance imaging (fMRI) during trials testing approximate and exact mathematical calculation, as well as magnitude comparison. Results: Children with DD showed greater inter-individual variability and had weaker activation in almost the entire neuronal network for approximate calculation, including the intraparietal sulcus and the middle and inferior frontal gyrus of both hemispheres. In particular, the left intraparietal sulcus, the left inferior frontal gyrus and the right middle frontal gyrus seem to play crucial roles in correct approximate calculation, since brain activation correlated with accuracy rate in these regions. In contrast, no differences between groups could be found for exact calculation and magnitude comparison. In general, fMRI revealed similar parietal and prefrontal activation patterns in DD children compared to controls for all conditions. Conclusion: There is evidence for a deficient recruitment of neural resources in children with DD when processing analog magnitudes of numbers.
Bayesian leave-one-out cross-validation approximations for Gaussian latent variable models
Vehtari, Aki; Mononen, Tommi; Tolvanen, Ville
2016-01-01
The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation. In this article, we consider Gaussian latent variable models where the integration over the latent values is approximated using the Laplace method or expectation propagation (EP). We study the properties of several Bayesian leave-one-out (LOO) cross-validation approximations that in most cases can be computed with a small additional cost after forming the posterior approximation given the full data. Our main objective is to assess the accuracy of the approximate LOO cross-validation estimators...
Hinuma, Yoyo; Hayashi, Hiroyuki; Kumagai, Yu; Tanaka, Isao; Oba, Fumiyasu
2017-09-01
High-throughput first-principles calculations based on density functional theory (DFT) are a powerful tool in data-oriented materials research. The choice of approximation to the exchange-correlation functional is crucial as it strongly affects the accuracy of DFT calculations. This study compares the performance of seven approximations, six of which are based on the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) with and without Hubbard U and van der Waals corrections (PBE, PBE+U, PBED3, PBED3+U, PBEsol, and PBEsol+U), and the strongly constrained and appropriately normed (SCAN) meta-GGA, on the energetics and crystal structure of elementary substances and binary oxides. For the latter, only those with closed-shell electronic structures are considered, examples of which include Cu2O, Ag2O, MgO, ZnO, CdO, SnO, PbO, Al2O3, Ga2O3, In2O3, La2O3, Bi2O3, SiO2, SnO2, PbO2, TiO2, ZrO2, HfO2, V2O5, Nb2O5, Ta2O5, MoO3, and WO3. Prototype crystal structures are selected from the Inorganic Crystal Structure Database (ICSD) and cation substitution is used to make a set of existing and hypothetical oxides. Two indices are proposed to quantify the extent of lattice and internal coordinate relaxation during a calculation. The former is based on the second invariant and determinant of the transformation matrix of basis vectors from before relaxation to after relaxation, and the latter is derived from shifts of internal coordinates of atoms in the unit cell. PBED3, PBEsol, and SCAN reproduce experimental lattice parameters of elementary substances and oxides well with few outliers. Notably, PBEsol and SCAN predict the lattice parameters of low dimensional structures comparably well with PBED3, even though these two functionals do not explicitly treat van der Waals interactions. SCAN gives formation enthalpies and Gibbs free energies closest to experimental data, with mean errors (MEs) of 0.01 and -0.04 eV, respectively, and root
Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann
2017-04-01
Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of the results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed, and the resulting values became the input of parameter estimation at each point of interest (POI). In this way, at each POI a parameter set is generated that is suitable for spatial analysis. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) the residuals may vary considerably, and (2) neighbouring POIs often give rather differing estimates in both horizontal and vertical directions, of which the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes in places, presumably related to the vertical structure of the forest. In low relief areas horizontal similarity is more typical; in higher relief areas
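The fitting step described above (approximating the log(log(L(r))) transform by an exponential) can be sketched as a linear fit in the log domain. The synthetic lacunarity curve below is purely an illustrative assumption, standing in for one POI of a real data cube.

```python
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(0.1, 3.0, 40)
A_true, k_true = 1.2, 0.5                     # assumed "true" exponential parameters
y = A_true * np.exp(-k_true * r) * (1 + 0.01 * rng.standard_normal(r.size))
L = np.exp(np.exp(y))                         # corresponding synthetic lacunarity values

y_obs = np.log(np.log(L))                     # the transform used in the study
# Fit y_obs ~ A * exp(-k * r) by linearizing: log(y_obs) = log(A) - k * r
slope, intercept = np.polyfit(r, np.log(y_obs), 1)
k_fit, A_fit = -slope, np.exp(intercept)
residual = np.max(np.abs(A_fit * np.exp(-k_fit * r) - y_obs))
```

Repeating this per POI yields the (A, k) parameter maps whose horizontal and vertical variation the abstract discusses.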
Aft-body loading function for penetrators based on the spherical cavity-expansion approximation.
Longcope, Donald B., Jr.; Warren, Thomas Lynn; Duong, Henry
2009-12-01
In this paper we develop an aft-body loading function for penetration simulations that is based on the spherical cavity-expansion approximation. This loading function assumes that there is a preexisting cavity of radius a{sub o} before the expansion occurs. This causes the radial stress on the cavity surface to be less than what is obtained if the cavity is opened from a zero initial radius. This in turn causes less resistance on the aft body as it penetrates the target which allows for greater rotation of the penetrator. Results from simulations are compared with experimental results for oblique penetration into a concrete target with an unconfined compressive strength of 23 MPa.
Random phase approximations for the screening function in high Tc superconductors
Lopez-Aguilar, F.; Costa-Quintana, J.; Sanchez, A.; Puig, T.; Aurell, M.T.; Martinez, L.M.; Munoz, J.S.
1990-01-01
This paper reports on the electronic transfer from the CuO2 sheets toward the CuO3 linear chains, which locates electrons in the p_y/p_z orbitals of O4/O1 and d_{z^2-y^2} of Cu1, and holes in the d_{x^2-y^2}-p_z/p_y orbitals of Cu2-O2/O3. These hole states present large interatomic overlap. In this paper, we determine the screening function within the random phase approximation applied to the high-Tc superconductors. This screening function vanishes for certain values of the frequency, which correspond to renormalized plasmon frequencies. These frequencies depend on the band parameters, and their knowledge is essential for determining the self-energy. This self-energy is deduced and contains independent terms for each of the channels for the localization
Analytic number theory, approximation theory, and special functions in honor of Hari M. Srivastava
Rassias, Michael
2014-01-01
This book, in honor of Hari M. Srivastava, discusses essential developments in mathematical research in a variety of problems. It contains thirty-five articles, written by eminent scientists from the international mathematical community, including both research and survey works. Subjects covered include analytic number theory, combinatorics, special sequences of numbers and polynomials, analytic inequalities and applications, approximation of functions and quadratures, orthogonality, and special and complex functions. The mathematical results and open problems discussed in this book are presented in a simple and self-contained manner. The book contains an overview of old and new results, methods, and theories toward the solution of longstanding problems in a wide scientific field, as well as new results in rapidly progressing areas of research. The book will be useful for researchers and graduate students in the fields of mathematics, physics, and other computational and applied sciences.
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
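A minimal sketch of an approximated Heaviside function (AHF) of the kind referred to above: a smooth arctan-based step whose parameter controls the degree of smoothness. This particular parametrization is an assumption for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def ahf(x, eps):
    """Smooth approximation to the Heaviside step H(x); eps sets the smoothness."""
    return 0.5 + np.arctan(x / eps) / np.pi

x = np.linspace(-1.0, 1.0, 5)
sharp = ahf(x, 1e-3)   # nearly a hard step: suited to non-smooth image components
smooth = ahf(x, 0.5)   # gentle transition: suited to smooth components
```

Varying eps gives the "different degrees of smoothness and non-smoothness" described in the abstract: a dictionary of AHFs at several eps values can represent both edges and slowly varying regions.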
Nonparametric Transfer Function Models
Liu, Jun M.; Chen, Rong; Yao, Qiwei
2009-01-01
In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584
Galatolo, Stefano; Monge, Maurizio; Nisoli, Isaia
2016-01-01
We study the problem of the rigorous computation of the stationary measure and of the rate of convergence to equilibrium of an iterated function system (IFS) described by a stochastic mixture of two or more dynamical systems that are either all uniformly expanding on the interval or all contracting. In the expanding case, the associated transfer operators satisfy a Lasota–Yorke inequality; we show how to compute a rigorous approximation of the stationary measure in the L^1 norm and an estimate for the rate of convergence. The rigorous computation requires a computer-aided proof of the contraction of the transfer operators for the maps, and we show that this property propagates to the transfer operators of the IFS. In the contracting case we perform a rigorous approximation of the stationary measure in the Wasserstein–Kantorovich distance and rate of convergence, using the same functional analytic approach. We show that a finite computation can produce a realistic computation of all contraction rates for the whole parameter space. We conclude with a description of the implementation and numerical experiments. (paper)
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L_1 norm or even sub-linear potentials corresponding to quasinorms L_p (0 < p < 1) [...] application of min-plus algebra. The approach can be applied in most existing machine learning methods, including methods of data approximation and regularized and sparse regression, leading to an improvement in the computational cost/accuracy trade-off. We demonstrate that on synthetic and real-life datasets PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods, while having similar or better approximation accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
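The PQSQ (piece-wise quadratic of subquadratic growth) idea can be sketched for the L1 potential u(x) = |x|: approximate it from below by quadratics a_k x^2 + b_k that match u at a set of trimming thresholds, and take their pointwise minimum. The threshold values here are illustrative assumptions.

```python
import numpy as np

def pqsq_abs(x, thresholds=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Piece-wise quadratic minorant approximation of |x| (PQSQ-style sketch)."""
    r = np.concatenate(([0.0], thresholds))
    pieces = []
    for k in range(1, len(r)):
        # a*x^2 + b matching |x| at r_{k-1} and r_k
        a = 1.0 / (r[k - 1] + r[k])
        b = r[k - 1] * r[k] / (r[k - 1] + r[k])
        pieces.append(a * np.asarray(x) ** 2 + b)
    # trimmed (constant) tail beyond the last threshold
    pieces.append(np.full_like(np.asarray(x, dtype=float), r[-1]))
    return np.min(pieces, axis=0)

x = np.linspace(-3.0, 3.0, 121)
err = np.max(np.abs(pqsq_abs(x) - np.abs(x)))
```

Because every piece is quadratic, minimizing a PQSQ potential reduces to a sequence of cheap quadratic subproblems, which is the computational advantage the abstract describes; the pointwise minimum over pieces is where the min-plus (tropical) algebra viewpoint enters.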
Collapse of the random-phase approximation: Examples and counter-examples from the shell model
Johnson, Calvin W.; Stetcu, Ionel
2009-01-01
The Hartree-Fock approximation to the many-fermion problem can break exact symmetries, and in some cases by changing a parameter in the interaction one can drive the Hartree-Fock minimum from a symmetry-breaking state to a symmetry-conserving state (also referred to as a 'phase transition' in the literature). The order of the transition is important when one applies the random-phase approximation (RPA) to the Hartree-Fock wave function: if first order, the RPA is stable through the transition, but if second order, the RPA amplitudes become large and lead to unphysical results. The latter is known as 'collapse' of the RPA. While the difference between first- and second-order transitions in the RPA was first pointed out by Thouless, we present for the first time nontrivial examples of both first- and second-order transitions in a uniform model, the interacting shell model, where we can compare to exact numerical results.
Wang, Jie; Chen, Li; Yu, Zhongbo
2018-02-01
Rainfall infiltration on hillslopes is an important issue in hydrology, which is related to many environmental problems, such as flood, soil erosion, and nutrient and contaminant transport. This study aimed to improve the quantification of infiltration on hillslopes under both steady and unsteady rainfalls. Starting from Darcy's law, an analytical integral infiltrability equation was derived for hillslope infiltration by use of the flux-concentration relation. Based on this equation, a simple scaling relation linking the infiltration times on hillslopes and horizontal planes was obtained which is applicable for both small and large times and can be used to simplify the solution procedure of hillslope infiltration. The infiltrability equation also improved the estimation of ponding time for infiltration under rainfall conditions. For infiltration after ponding, the time compression approximation (TCA) was applied together with the infiltrability equation. To improve the computational efficiency, the analytical integral infiltrability equation was approximated with a two-term power-like function by nonlinear regression. Procedures of applying this approach to both steady and unsteady rainfall conditions were proposed. To evaluate the performance of the new approach, it was compared with the Green-Ampt model for sloping surfaces by Chen and Young (2006) and Richards' equation. The proposed model outperformed the sloping Green-Ampt, and both ponding time and infiltration predictions agreed well with the solutions of Richards' equation for various soil textures, slope angles, initial water contents, and rainfall intensities for both steady and unsteady rainfalls.
Kwato-Njock, K
2002-01-01
A search is conducted for the determination of expectation values of $r^q$ between Dirac and quasirelativistic radial wave functions in the quantum-defect approximation. The phenomenological and supersymmetry-inspired quantum-defect models which have so far proven to yield accurate results are used. The recursive structure of formulae derived on the basis of the hypervirial theorem enables us to develop explicit relations for arbitrary values of $q$. Detailed numerical calculations concerning alkali-metal-like ions of the Li-, Na- and Cu-isoelectronic sequences confirm the superiority of supersymmetry-based quantum-defect theory over quantum-defect orbital and exact orbital quantum number approximations. It is also shown that relativistic rather than quasirelativistic treatment may be used for consistent inclusion of relativistic effects.
Kwato-Njock, M G; Oumarou, B
2002-01-01
A search is conducted for the determination of expectation values of $r^q$ between Dirac and quasirelativistic radial wave functions in the quantum-defect approximation. The phenomenological and supersymmetry-inspired quantum-defect models which have so far proven to yield accurate results are used. The recursive structure of formulae derived on the basis of the hypervirial theorem enables us to develop explicit relations for arbitrary values of $q$. Detailed numerical calculations concerning alkali-metal-like ions of the Li-, Na- and Cu-isoelectronic sequences confirm the superiority of supersymmetry-based quantum-defect theory over quantum-defect orbital and exact orbital quantum number approximations. It is also shown that relativistic rather than quasirelativistic treatment may be used for consistent inclusion of relativistic effects.
The Bogolubov Representation of the Polaron Model and Its Completely Integrable RPA-Approximation
Bogolubov, Nikolai N. Jr.; Prykarpatsky, Yarema A.; Ghazaryan, Anna A.
2009-12-01
The polaron model in an ionic crystal is studied in the N. Bogolubov representation using a special RPA approximation. A new exactly solvable approximate polaron model is derived and described in detail. Its free energy at finite temperature is calculated analytically. The polaron free energy in a constant magnetic field at finite temperature is also discussed. Based on the structure of the N. Bogolubov unitarily transformed polaron Hamiltonian, a very important new result is stated: the full polaron model is exactly solvable. (author)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
Kaplan, T.; Gray, L.J.
1984-01-01
The self-consistent approximation of Kaplan, Leath, Gray, and Diehl is applied to models for substitutional random alloys with muffin-tin potentials. The particular advantage of this approximation is that, in addition to including cluster scattering, the muffin-tin potentials in the alloy can depend on the occupation of the surrounding sites (i.e., environmental disorder is included)
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
On function classes pertaining to strong approximation of double Fourier series
Baituyakova, Zhuldyz
2015-09-01
The investigation of embedding of function classes began a long time ago. After Alexits [1], Leindler [2], and Gogoladze [3] investigated estimates of strong approximation by Fourier series in 1965, G. Freud [4] raised the corresponding saturation problem in 1969. The list of authors dealing with embedding problems is also very long. It suffices to mention some names: V. G. Krotov, W. Lenski, S. M. Mazhar, J. Nemeth, E. M. Nikisin, K. I. Oskolkov, G. Sunouchi, J. Szabados, R. Taberski and V. Totik. Study on this topic has been carried on for over a decade, but it seems that most of the results obtained are limited to the case of one dimension. In this paper, embedding results are considered which arise in strong approximation by double Fourier series. We prove a theorem on the interrelation between the classes $W^{r_1,r_2}H^{S,M}_{\omega}$ and $H(\lambda, p, r_1, r_2, \omega(\delta_1, \delta_2))$, which in the one-dimensional case was proved by L. Leindler.
Christer Dalen
2017-10-01
A model reduction technique based on optimization theory is presented, where a possibly higher-order system/model is approximated with an unstable DIPTD model using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possibly higher-order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.
Salajegheh, Maral; Nejad, S. Mohammad Moosavi; Khanpour, Hamzeh; Tehrani, S. Atashbar
2018-05-01
In this paper, we present the SMKA18 analysis, a first attempt to extract the set of next-to-next-to-leading-order (NNLO) spin-dependent parton distribution functions (spin-dependent PDFs) and their uncertainties, determined through the Laplace transform technique and the Jacobi polynomial approach. Using the Laplace transformations, we present an analytical solution for the spin-dependent Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution equations at NNLO approximation. The results are extracted using a wide range of proton $g_1^p(x, Q^2)$, neutron $g_1^n(x, Q^2)$, and deuteron $g_1^d(x, Q^2)$ spin-dependent structure function data sets, including the most recent high-precision measurements from COMPASS16 experiments at CERN, which are playing an increasingly important role in global spin-dependent fits. Careful estimation of the uncertainties has been done using the standard Hessian error propagation. We compare our results with the available spin-dependent inclusive deep inelastic scattering data set and other results for the spin-dependent PDFs in the literature. The results obtained for the spin-dependent PDFs as well as the spin-dependent structure functions are clearly explained both at small and large values of x.
Wyatt, Robert E.; Kouri, Donald J.; Hoffman, David K.
2000-01-01
The quantum trajectory method (QTM) was recently developed to solve the hydrodynamic equations of motion in the Lagrangian, moving-with-the-fluid, picture. In this approach, trajectories are integrated for N fluid elements (particles) moving under the influence of both the force from the potential surface and from the quantum potential. In this study, distributed approximating functionals (DAFs) are used on a uniform grid to compute the necessary derivatives in the equations of motion. Transformations between the physical grid where the particle coordinates are defined and the uniform grid are handled through a Jacobian, which is also computed using DAFs. A difficult problem associated with computing derivatives on finite grids is the edge problem. This is handled effectively by using DAFs within a least squares approach to extrapolate from the known function region into the neighboring regions. The QTM-DAF is then applied to wave packet transmission through a one-dimensional Eckart potential. Emphasis is placed upon computation of the transmitted density and wave function. A problem that develops when part of the wave packet reflects back into the reactant region is avoided in this study by introducing a potential ramp to sweep the reflected particles away from the barrier region. (c) 2000 American Institute of Physics
Pin, F.G.
1993-11-01
Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecision and the unpredictability of the environment, i.e., lack of full knowledge of the environment's characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish the elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component of the perception data. The paper first reviews the basic concepts of this approach and discusses its pragmatic feasibility when embodied in a behaviorist framework. The second principle deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.
Shvets', D.V.
2009-01-01
Stability conditions for the unperturbed solution of a one-dimensional dynamic model with magnetic interaction between two superconducting rings are obtained by a first-approximation analysis. The stability region is constructed in the space of frozen-magnetic-flux parameters.
Cosmological models in globally geodesic coordinates. II. Near-field approximation
Liu Hongya
1987-01-01
A near-field approximation dealing with the cosmological field near a typical freely falling observer is developed within the framework established in the preceding paper [J. Math. Phys. 28, xxxx(1987)]. It is found that for the matter-dominated era the standard cosmological model of general relativity contains the Newtonian cosmological model, proposed by Zel'dovich, as its near-field approximation in the observer's globally geodesic coordinate system
Dalmasse, Kevin; Nychka, Douglas W.; Gibson, Sarah E.; Fan, Yuhong; Flyer, Natasha
2016-01-01
The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 and 10798 lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast, and efficient optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analog. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well-suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
Computational Modeling of Proteins based on Cellular Automata: A Method of HP Folding Approximation.
Madain, Alia; Abu Dalhoum, Abdel Latif; Sleit, Azzam
2018-06-01
The design of a protein folding approximation algorithm is not straightforward even when a simplified model is used. The folding problem is a combinatorial problem, where approximation and heuristic algorithms are usually used to find near-optimal folds of proteins' primary structures. Approximation algorithms provide guarantees on the distance to the optimal solution. The folding approximation approach proposed here depends on two-dimensional cellular automata to fold proteins represented in a well-studied simplified model called the hydrophobic-hydrophilic (HP) model. Cellular automata are discrete computational models that rely on local rules to produce some overall global behavior. One-third and one-fourth approximation algorithms choose a subset of the hydrophobic amino acids to form H-H contacts. Those algorithms start by finding a point at which to fold the protein sequence into two sides, where one side ignores H's at even positions and the other side ignores H's at odd positions. In addition, blocks or groups of amino acids fold the same way according to a predefined normal form. We intend to improve on these approximation algorithms by considering all hydrophobic amino acids and folding based on the local neighborhood instead of using normal forms. The CA does not assume a fixed folding point. The proposed approach guarantees a one-half approximation minus the H-H endpoints. This guaranteed lower bound applies to short sequences only; it is proved by showing that, for all short sequences, the core and the folds of the protein have two identical sides.
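The objective these approximation guarantees refer to is easy to state concretely. As a minimal sketch (Python; the function name and interface are illustrative, not from the paper), the following counts topological H-H contacts — hydrophobic residues adjacent on the 2D lattice but not consecutive in the sequence — for a fold given as a self-avoiding walk:

```python
def hh_contacts(sequence, path):
    """Count H-H topological contacts for an HP sequence folded along `path`,
    a list of (x, y) lattice points forming a self-avoiding walk."""
    pos = {p: i for i, p in enumerate(path)}
    contacts = 0
    for (x, y), i in pos.items():
        if sequence[i] != 'H':
            continue
        # check only the right and upper neighbors, so each pair counts once
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts
```

For example, the sequence HHHH folded into a 2×2 square forms exactly one such contact (between the first and last residues), while the fully extended chain forms none.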
On an elastic dissipation model as continuous approximation for discrete media
I. V. Andrianov
2006-01-01
Construction of an accurate continuous model for discrete media is an important topic in various fields of science. We deal with a 1D differential-difference equation governing the behavior of an n-mass oscillator with linear relaxation. It is known that a string-type approximation is justified for the low-frequency part of the spectrum of a continuous model, but for free and forced vibrations the solutions of the discrete and continuous models can be quite different. The difference operator makes analysis difficult due to its nonlocal form. Approximate equations can be obtained by replacing the difference operator with a local derivative operator. Although applying a model with derivatives of order higher than two improves the continuous model, a higher order of the approximating differential equation seriously complicates the solution of the continuous problem. It is known that the accuracy of the approximation can be dramatically increased by using Padé approximations. In this paper, one- and two-point Padé approximations suitable for justifying the choice of structural damping models are used.
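To make the role of Padé approximants concrete, here is a minimal sketch (Python/NumPy; the function name and interface are illustrative, not from the paper) of the standard construction of an [L/M] approximant from Taylor coefficients:

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator a[0..L] and denominator b[0..M] with b[0] = 1:
    f(x) ~ (a0 + a1 x + ... + aL x^L) / (1 + b1 x + ... + bM x^M)."""
    c = np.asarray(c, dtype=float)
    get = lambda n: c[n] if n >= 0 else 0.0
    # denominator conditions: sum_j b_j c_{L+i-j} = 0 for i = 1..M
    A = np.array([[get(L + i - j) for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([get(L + i) for i in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # numerator follows by matching the first L+1 Taylor coefficients
    a = np.array([sum(b[j] * get(n - j) for j in range(min(n, M) + 1))
                  for n in range(L + 1)])
    return a, b
```

For exp(x) with L = M = 1 this recovers (1 + x/2)/(1 − x/2), which already tracks the function noticeably better near the origin than the two-term Taylor polynomial — the same accuracy gain the authors exploit when replacing difference operators.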
Numerical approximations for speeding up MCMC inference in the infinite relational model
Schmidt, Mikkel Nørgaard; Albers, Kristoffer Jon
2015-01-01
The infinite relational model (IRM) is a powerful model for discovering clusters in complex networks; however, the computational speed of Markov chain Monte Carlo inference in the model can be a limiting factor when analyzing large networks. We investigate how using numerical approximations...
Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.
2013-01-01
We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561
Askar, S.S.; Alnowibet, K.
2016-01-01
Isoelastic demand functions have been used in the literature to study the dynamic features of systems constructed on the basis of economic market structure. In this paper, we adopt the so-called Cobb–Douglas production function and study its impact on the steady state of an oligopolistic game consisting of four competing firms. Briefly, the paper handles three different scenarios. The first scenario introduces four oligopolistic firms who play rationally against each other in the market. The firms use the myopic (bounded rationality) mechanism to update their production in the next time period. The steady state of the system obtained in this scenario, which is the Nash equilibrium, is unique, and its characteristics are investigated. In the second scenario, based on a local monopolistic approximation (LMA) strategy, one competitor plays against the three rational firms. The last scenario discusses the case when three competitors use the LMA strategy against a rational one. Discrete dynamical systems are used to describe the game in all scenarios. The stability of the Nash equilibrium is investigated analytically, and some numerical simulations are used to confirm the obtained analytical results.
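The myopic (bounded rationality) adjustment described above can be sketched in a few lines. The specification below — isoelastic inverse demand p = 1/Q, constant marginal costs, and adjustment speed alpha — is an illustrative choice, not necessarily the paper's exact system:

```python
import numpy as np

def cournot_step(q, c, alpha):
    """One myopic (gradient) update: q_i <- q_i + alpha * q_i * marginal profit.
    With inverse demand p = 1/Q and cost c_i * q_i, firm i's marginal profit
    is (Q - q_i) / Q**2 - c_i."""
    Q = q.sum()
    return q + alpha * q * ((Q - q) / Q**2 - c)

# four symmetric firms, started away from equilibrium
q = np.array([0.2, 0.3, 0.4, 0.5])
c = np.full(4, 0.5)
for _ in range(500):
    q = cournot_step(q, c, alpha=0.3)
```

For this specification the symmetric Nash output solves (Q − q)/Q² = c, i.e. q* = (n − 1)/(n²c) = 0.375 for n = 4 and c = 0.5; a small adjustment speed converges to it, while larger speeds can destabilize the equilibrium — the kind of behavior such stability analyses characterize.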
A point-value enhanced finite volume method based on approximate delta functions
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
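The defining property of an ADF — a finite-order polynomial that reproduces p(x0) when integrated against any polynomial p up to the same order — can be illustrated with a Legendre construction. This is a generic sketch of the idea (Python/NumPy), not the specific polynomials of Huynh or of the PFV scheme:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def adf_coeffs(x0, order):
    """Legendre coefficients of a polynomial 'approximate delta function'
    centered at x0 on [-1, 1]: delta_N(x) = sum_k (2k+1)/2 * P_k(x0) * P_k(x).
    By orthogonality, integrating delta_N * p over [-1, 1] reproduces p(x0)
    exactly for any polynomial p of degree <= order."""
    k = np.arange(order + 1)
    Pk_x0 = np.array([legval(x0, np.eye(order + 1)[j]) for j in k])
    return (2 * k + 1) / 2 * Pk_x0

def integrate_against_adf(p, x0, order, nquad=20):
    """Gauss-Legendre quadrature of p(x) * delta_N(x) over [-1, 1]."""
    x, w = leggauss(nquad)
    c = adf_coeffs(x0, order)
    return np.sum(w * p(x) * legval(x, c))
```

Integrating the order-4 ADF against the cubic x³ − 2x + 1, for instance, returns its value at x0 to rounding error, as orthogonality guarantees.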
Lorenzana, J.; Grynberg, M.D.; Yu, L.; Yonemitsu, K.; Bishop, A.R.
1992-11-01
The ground state energy and static and dynamic correlation functions are investigated in the inhomogeneous Hartree-Fock (HF) plus random phase approximation (RPA) approach applied to a one-dimensional spinless fermion model showing self-trapped doping states at the mean-field level. Results are compared with homogeneous HF and exact diagonalization. RPA fluctuations added to the generally inhomogeneous HF ground state allow the computation of dynamical correlation functions that compare well with exact diagonalization results. The RPA correction to the ground state energy agrees well with the exact results in the strong- and weak-coupling limits. We also compare it with a related quasi-boson approach. The instability towards self-trapped behaviour is signaled by an RPA mode with frequency approaching zero. (author). 21 refs, 10 figs
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space searching capabilities of genetic algorithms they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
Approximating Behavioral Equivalence of Models Using Top-K Policy Paths
Zeng, Yifeng; Chen, Yingke; Prashant, Doshi
2011-01-01
Decision making and game play in multiagent settings must often contend with behavioral models of other agents in order to predict their actions. One approach that reduces the complexity of the unconstrained model space is to group models that tend to be behaviorally equivalent. In this paper, we seek to further compress the model space by introducing an approximate measure of behavioral equivalence and using it to group models.
Fernando Racimo
2014-11-01
Quantifying the proportion of polymorphic mutations that are deleterious or neutral is of fundamental importance to our understanding of evolution, disease genetics and the maintenance of variation genome-wide. Here, we develop an approximation to the distribution of fitness effects (DFE) of segregating single-nucleotide mutations in humans. Unlike previous methods, we do not assume that synonymous mutations are neutral or not strongly selected, and we do not rely on fitting the DFE of all new nonsynonymous mutations to a single probability distribution, which is poorly motivated on a biological level. We rely on a previously developed method that utilizes a variety of published annotations (including conservation scores, protein deleteriousness estimates and regulatory data) to score all mutations in the human genome based on how likely they are to be affected by negative selection, controlling for mutation rate. We map this and other conservation scores to a scale of fitness coefficients via maximum likelihood, using diffusion theory and a Poisson random field model on SNP data. Our method serves to approximate the deleterious DFE of mutations that are segregating, regardless of their genomic consequence. We can then compare the proportion of mutations that are negatively selected or neutral across various categories, including different types of regulatory sites. We observe that the distribution of intergenic polymorphisms is highly peaked at neutrality, while the distribution of nonsynonymous polymorphisms has a second peak at [Formula: see text]. Other types of polymorphisms have shapes that fall roughly in between these two. We find that transcriptional start sites, strong CTCF-enriched elements and enhancers are the regulatory categories with the largest proportion of deleterious polymorphisms.
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweeps induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from the measurement data. An initial FEM was developed, and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of bearing stiffness, and of the mass density and Young's modulus of materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a summation of squared relative errors of natural frequencies between the FEM and experimentation. All the identified modes were used as target responses, with the purpose of putting more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies in most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
Nakayama, Hiromasa
2006-01-01
We give an algorithm to compute the local $b$-function. In this algorithm, we use the Mora division algorithm in the ring of differential operators and an approximate division algorithm in the ring of differential operators with power series coefficients.
Baron, H.E.; Zakrzewski, W.J.
2016-01-01
We investigate the validity of collective coordinate approximations to the scattering of two solitons in several classes of (1+1)-dimensional field theory models. We consider models which are deformations of the sine-Gordon (SG) or the nonlinear Schrödinger (NLS) model and which possess soliton solutions (topological for SG, non-topological for NLS). Our deformations preserve their topology (SG), but change their integrability properties, either completely or partially (the models become 'quasi-integrable'). As the collective coordinate approximation does not allow for the radiation of energy out of a system, we look in some detail at how the approximation fares in models which are 'quasi-integrable' and therefore have asymptotically conserved charges (i.e. charges Q(t) for which Q(t→−∞)=Q(t→∞)). We find that our collective coordinate approximation, based on geodesic motion etc., works amazingly well in all cases where it is expected to work. This is true for the physical properties of the solitons and even for their quasi-conserved (or not) charges. The only time the approximation is not very reliable (and even then the qualitative features are reasonable, though some details are not reproduced well) involves processes in which the solitons come very close together (within one width of each other) during their scattering.
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-04-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady-state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudo-measurements is examined, including velocity, density, and volume transport at localized arrays, as well as realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracy of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
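One of the approximations mentioned — replacing the time-varying error covariance by its asymptotic steady-state limit so that the Kalman gain is computed once — can be sketched as follows. This is a generic linear-Gaussian toy, not the ocean model itself; the names are illustrative:

```python
import numpy as np

def steady_state_gain(A, H, Q, R, iters=500):
    """Iterate the discrete Riccati recursion until the error covariance
    reaches its asymptotic limit, and return the constant Kalman gain."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        P = A @ P @ A.T + Q                            # forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # gain
        P = (np.eye(A.shape[0]) - K @ H) @ P           # analysis covariance
    return K

def assimilate(A, H, K, x0, ys):
    """Sequentially assimilate observations ys using the fixed gain K."""
    x, out = x0, []
    for y in ys:
        x = A @ x                    # forecast step
        x = x + K @ (y - H @ x)      # analysis (measurement update)
        out.append(x.copy())
    return out
```

Freezing the gain removes the online covariance propagation — the dominant cost for a state of 170,000 elements — at the price of some suboptimality during the initial transient.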
Ugarte, Juan P; Orozco-Duque, Andrés; Tobón, Catalina; Kremen, Vaclav; Novak, Daniel; Saiz, Javier; Oesterlein, Tobias; Schmitt, Clauss; Luik, Armin; Bustamante, John
2014-01-01
There is evidence that rotors could be drivers that maintain atrial fibrillation. Complex fractionated atrial electrograms have been located in rotor tip areas. However, the concept of electrogram fractionation, defined using time intervals, is still controversial as a tool for locating target sites for ablation. We hypothesize that the fractionation phenomenon is better described using non-linear dynamic measures, such as approximate entropy, and that this tool could be used for locating the rotor tip. The aim of this work has been to determine the relationship between approximate entropy and fractionated electrograms, and to develop a new tool for rotor mapping based on fractionation levels. Two episodes of chronic atrial fibrillation were simulated in a 3D human atrial model, in which rotors were observed. Dynamic approximate entropy maps were calculated using unipolar electrogram signals generated over the whole surface of the 3D atrial model. In addition, we optimized the approximate entropy calculation using two real multi-center databases of fractionated electrogram signals, labeled in 4 levels of fractionation. We found that the values of approximate entropy and the levels of fractionation are positively correlated. This allows the dynamic approximate entropy maps to localize the tips from stable and meandering rotors. Furthermore, we assessed the optimized approximate entropy using bipolar electrograms generated over a vicinity enclosing a rotor, achieving rotor detection. Our results suggest that high approximate entropy values are able to detect a high level of fractionation and to locate rotor tips in simulated atrial fibrillation episodes. We suggest that dynamic approximate entropy maps could become a tool for atrial fibrillation rotor mapping.
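Approximate entropy itself is straightforward to compute. Below is a minimal sketch of Pincus's definition (Python/NumPy); the parameter defaults m = 2 and r = 0.2·SD are common conventions, not necessarily the values optimized in the study:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D signal.
    m: embedding dimension; r: tolerance (default 0.2 * standard deviation)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all template pairs (self-matches included)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = np.mean(d <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)
```

A regular signal such as a sine wave scores much lower than white noise — the property that lets entropy maps separate organized activation from fractionated electrograms.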
Fitting Social Network Models Using Varying Truncation Stochastic Approximation MCMC Algorithm
Jin, Ick Hoon
2013-10-01
The exponential random graph model (ERGM) plays a major role in social network analysis. However, parameter estimation for the ERGM is a hard problem due to the intractability of its normalizing constant and the model degeneracy. The existing algorithms, such as Monte Carlo maximum likelihood estimation (MCMLE) and stochastic approximation, often fail for this problem in the presence of model degeneracy. In this article, we introduce the varying truncation stochastic approximation Markov chain Monte Carlo (SAMCMC) algorithm to tackle this problem. The varying truncation mechanism enables the algorithm to choose an appropriate starting point and an appropriate gain factor sequence, and thus to produce a reasonable parameter estimate for the ERGM even in the presence of model degeneracy. The numerical results indicate that the varying truncation SAMCMC algorithm can significantly outperform the MCMLE and stochastic approximation algorithms: for degenerate ERGMs, MCMLE and stochastic approximation often fail to produce any reasonable parameter estimates, whereas SAMCMC can; for nondegenerate ERGMs, SAMCMC can work as well as or better than MCMLE and stochastic approximation. The data and source codes used for this article are available online as supplementary materials. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
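SAMCMC builds on the basic stochastic approximation recursion of Robbins and Monro, augmented with the varying truncation safeguard and MCMC sampling, which are omitted here. As a minimal, generic sketch of that underlying recursion only (not the paper's algorithm; names are illustrative):

```python
import numpy as np

def robbins_monro(sample, theta0, n_steps, gain=lambda n: 1.0 / (n + 1)):
    """Robbins-Monro recursion: seek theta with E[sample(theta)] = 0 via
    theta_{n+1} = theta_n - a_n * sample(theta_n), with decreasing gain a_n."""
    theta = theta0
    for n in range(n_steps):
        theta = theta - gain(n) * sample(theta)
    return theta

rng = np.random.default_rng(0)
# noisy root-finding: E[theta - X] = 0 with X ~ N(2, 1), so theta* = 2
theta = robbins_monro(lambda t: t - (2.0 + rng.standard_normal()), 0.0, 20000)
```

With the gain a_n = 1/(n + 1), this recursion reduces to the running sample mean of the draws, so it converges to 2. Roughly speaking, the varying truncation mechanism guards this recursion against divergence: when iterates escape a bounded region, the region is enlarged and the recursion is restarted with a fresh gain sequence.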
Approximating Matsubara dynamics using the planetary model: Tests on liquid water and ice
Willatt, Michael J.; Ceriotti, Michele; Althorpe, Stuart C.
2018-03-01
Matsubara dynamics is the quantum-Boltzmann-conserving classical dynamics which remains when real-time coherences are taken out of the exact quantum Liouvillian [T. J. H. Hele et al., J. Chem. Phys. 142, 134103 (2015)]; because of a phase term, it cannot be used as a practical method without further approximation. Recently, Smith et al. [J. Chem. Phys. 142, 244112 (2015)] developed a "planetary" model dynamics which conserves the Feynman-Kleinert (FK) approximation to the quantum-Boltzmann distribution. Here, we show that for moderately anharmonic potentials, the planetary dynamics gives a good approximation to Matsubara trajectories on the FK potential surface by decoupling the centroid trajectory from the locally harmonic Matsubara fluctuations, which reduce to a single phase-less fluctuation particle (the "planet"). We also show that the FK effective frequency can be approximated by a direct integral over these fluctuations, obviating the need to solve iterative equations. This modification, together with use of thermostatted ring-polymer molecular dynamics, allows us to test the planetary model on water (gas-phase, liquid, and ice) using the q-TIP4P/F potential surface. The "planetary" fluctuations give a poor approximation to the rotational/librational bands in the infrared spectrum, but a good approximation to the bend and stretch bands, where the fluctuation lineshape is found to be motionally narrowed by the vibrations of the centroid.
Finite approximations in discrete-time stochastic control: quantized models and asymptotic optimality
Saldi, Naci; Yüksel, Serdar
2018-01-01
In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems, with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for the reduction of a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for discretization of actions, and the computational approach for discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems. This volume is perfect for researchers and graduate students interested in stochastic controls. With the tools presented, readers will be able to establish the convergence of approximation models to original mo...
Bulk and interface dielectric functions: New results within the tight-binding approximation
Elvira, V.D.; Duran, J.C.
1991-01-01
A tight-binding approach is used to analyze the dielectric behaviour of bulk semiconductors and semiconductor interfaces. This time, interactions between second-nearest neighbours are taken into account and several electrostatic models are proposed for the induced charge density around the atoms. The bulk dielectric functions of different semiconductors (Si, Ge, GaAs and AlAs) are obtained and compared with other theoretical and experimental results. Finally, the energy band offset for the GaAs-AlAs(1,0,0) interface is obtained and related to bulk properties of both semiconductors. The results presented in this paper show how the use of very simple but more realistic electrostatic models improves the analysis of the screening properties in semiconductors, giving new support to the consistent tight-binding method for studying characteristics related to those properties. (Author)
Relaxation and Numerical Approximation of a Two-Fluid Two-Pressure Diphasic Model
Ambroso, A.; Chalons, Ch.; Galie, Th.; Coquel, F.
2009-01-01
This paper is concerned with the numerical approximation of the solutions of a two-fluid two-pressure model used in the modelling of two-phase flows. We present a relaxation strategy for easily dealing with both the nonlinearities associated with the pressure laws and the nonconservative terms that are inherently present in the set of convective equations and that couple the two phases. In particular, the proposed approximate Riemann solver is given by explicit formulas, preserves the natural phase space, and exactly captures the coupling waves between the two phases. Numerical evidence is given to corroborate the validity of our approach. (authors)
Short-distance behavior of the Bethe--Salpeter wave function in the ladder approximation
Guth, A.H.; Soper, D.E.
1975-01-01
We investigate the short-distance behavior of the (Wick-rotated) Bethe--Salpeter wave function for two spin-1/2 quarks bound by the exchange of a massive vector meson. We use the ladder-model kernel, which has the same p^{-4} scaling behavior as the true kernel in a theory with a fixed point of the renormalization group at g ≠ 0. For a bound state with the quantum numbers of the pion, the leading asymptotic behavior is χ(q^μ) ≈ c q^{-4+ε(g)} γ_5, where ε(g) = 1 − (1 − g²/π²)^{1/2}. Our method also provides the full asymptotic series, although it should be noted that the nonleading terms will depend on the nonleading behavior of the ladder-model kernel. A general term has the form c q^{-a} (ln q)^n φ(q^μ), where c is an unknown constant, a may be integral or nonintegral, n is an integer, and φ(q^μ) is a representation function of the rotation group in four dimensions
Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models
Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas
2017-02-01
A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge on to the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally
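The level-by-level tolerance shrinking that ABC-SubSim performs can be illustrated with a minimal sketch. The toy linear model, the uniform prior bounds, and all tuning constants below are hypothetical, chosen only to show the mechanics of nested data-approximating regions with component-wise Metropolis moves; this is not the hierarchical state-space setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: observed output from a linear model y = theta * x.
x = np.linspace(0.0, 1.0, 20)
theta_true = 2.5
y_obs = theta_true * x + rng.normal(0.0, 0.05, x.size)

def discrepancy(theta):
    # distance between predicted and observed output vectors
    return np.linalg.norm(theta * x - y_obs)

# Subset-simulation-style ABC: nested levels with an adaptively shrinking
# data-approximating region (tolerance eps) and Metropolis moves inside it.
n, p0, n_levels = 1000, 0.2, 4
theta = rng.uniform(0.0, 5.0, n)                 # samples from the prior
d = np.array([discrepancy(t) for t in theta])
for _ in range(n_levels):
    eps = np.quantile(d, p0)                     # next, tighter region
    seeds = np.resize(theta[d <= eps], n)        # recycle accepted seeds
    theta, d = [], []
    for t in seeds:
        cand = t + rng.normal(0.0, 0.2)          # Metropolis proposal
        dc = discrepancy(cand)
        if 0.0 <= cand <= 5.0 and dc <= eps:     # stay inside the region
            t, dt_ = cand, dc
        else:
            dt_ = discrepancy(t)
        theta.append(t)
        d.append(dt_)
    theta, d = np.array(theta), np.array(d)

print(round(float(theta.mean()), 2))             # should be near theta_true
```

Because the prior is uniform, the accept rule inside the region reduces to a bounds check; with a non-flat prior the Metropolis ratio of prior densities would appear here as well.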
Effects of model approximations for electron, hole, and photon transport in swift heavy ion tracks
Rymzhanov, R.A. [Joint Institute for Nuclear Research, Joliot-Curie 6, 141980 Dubna, Moscow Region (Russian Federation); Medvedev, N.A., E-mail: nikita.medvedev@fzu.cz [Department of Radiation and Chemical Physics, Institute of Physics, Czech Academy of Sciences, Na Slovance 2, 182 21 Prague 8 (Czech Republic); Laser Plasma Department, Institute of Plasma Physics, Czech Academy of Sciences, Za Slovankou 3, 182 00 Prague 8 (Czech Republic); Volkov, A.E. [Joint Institute for Nuclear Research, Joliot-Curie 6, 141980 Dubna, Moscow Region (Russian Federation); National Research Centre ‘Kurchatov Institute’, Kurchatov Sq. 1, 123182 Moscow (Russian Federation); Lebedev Physical Institute of the Russian Academy of Sciences, Leninskij pr., 53,119991 Moscow (Russian Federation); National University of Science and Technology MISiS, Leninskij pr., 4, 119049 Moscow (Russian Federation); National Research Nuclear University MEPhI, Kashirskoye sh., 31, 115409 Moscow (Russian Federation)
2016-12-01
The event-by-event Monte Carlo code TREKIS was recently developed to describe excitation of the electron subsystems of solids in the nanometric vicinity of the trajectory of a nonrelativistic swift heavy ion (SHI) decelerated in the electronic stopping regime. The complex dielectric function (CDF) formalism was applied in the cross sections used to account for the collective response of matter to excitation. Using this model we investigate effects of the basic assumptions on the modeled kinetics of the electronic subsystem, which ultimately determine the parameters of an excited material in an SHI track. In particular, (a) effects of different momentum dependencies of the CDF on scattering of projectiles on the electron subsystem are investigated. The ‘effective one-band’ approximation for target electrons produces good coincidence of the calculated electron mean free paths with those obtained in experiments in metals. (b) Effects of the collective response of the lattice appear to dominate in randomization of electron motion. We study how sensitive these effects are to the target temperature. We also compare results of applications of different model forms of (quasi-)elastic cross sections in simulations of the ion track kinetics, e.g. those calculated taking into account optical phonons in the CDF form vs. Mott’s atomic cross sections. (c) It is demonstrated that the kinetics of valence holes significantly affects redistribution of the excess electronic energy in the vicinity of an SHI trajectory as well as its conversion into lattice excitation in dielectrics and semiconductors. (d) It is also shown that induced transport of photons originating from radiative decay of core holes brings the excess energy faster and farther away from the track core; however, the amount of this energy is relatively small.
Stuchbery, A. E.; Ryan, C. G.; Bolotin, H. H.; Morrison, I.; Sie, S. H.
1981-07-01
The enhanced transient hyperfine field manifest at the nuclei of swiftly recoiling ions traversing magnetized ferromagnetic materials was utilized to measure the gyromagnetic ratios of the 2₁⁺, 2₂⁺ and 4₁⁺ states in ¹⁹⁸Pt by the thin-foil technique. The states of interest were populated by Coulomb excitation using a beam of 220 MeV ⁵⁸Ni ions. The results obtained were: g(2₁⁺) = 0.324 ± 0.026; g(2₂⁺) = 0.34 ± 0.06; g(4₁⁺) = 0.34 ± 0.06. In addition, these measurements served to discriminate between the otherwise essentially equally probable values previously reported for the E2/M1 ratio of the 2₂⁺ → 2₁⁺ transition in ¹⁹⁸Pt. We also performed interacting boson approximation (IBA) model-based calculations in the O(6) limit symmetry, with and without inclusion of a small degree of symmetry breaking, and employed the M1 operator in both first and second order to obtain M1 selection rules and to calculate gyromagnetic ratios of levels. When O(6) symmetry is broken, there is a predicted departure from constancy of the g-factors which provides a good test of the nuclear wave function. Evaluative comparisons are made between these experimental and predicted g-factors.
Yun Wang
2016-01-01
The gamma Gaussian inverse Wishart cardinalized probability hypothesis density (GGIW-CPHD) algorithm is widely used to track group targets in the presence of cluttered measurements and missed detections. A multiple-model GGIW-CPHD algorithm based on the best-fitting Gaussian approximation method (BFG) and the strong tracking filter (STF) is proposed to address the defect that the tracking error of the GGIW-CPHD algorithm increases when the group targets are maneuvering. The best-fitting Gaussian approximation method is used to implement the fusion of multiple models, with the strong tracking filter correcting the predicted covariance matrix of the GGIW component. The corresponding likelihood functions are deduced to update the probabilities of the multiple tracking models. The simulation results show that the proposed tracking algorithm MM-GGIW-CPHD can effectively deal with the combination/spawning of groups, and the tracking error of group targets in the maneuvering stage is decreased.
Liu Yang
2017-01-01
We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; we then use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution while avoiding the introduction of new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
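The SAA step, replacing an expected cost by an average over sampled demand scenarios, can be illustrated on a single-product newsvendor-style cost. The cost function, demand distribution, and all numbers below are hypothetical stand-ins, not the paper's multi-factory supply-chain model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical newsvendor-style cost for one perishable product: order x units
# at unit cost c, sell min(x, D) at price p; unsold units perish.
c, p = 1.0, 4.0

def saa_objective(x, demand_sample):
    # sample average approximation of E[c*x - p*min(x, D)]
    return np.mean(c * x - p * np.minimum(x, demand_sample))

N = 20000
demand = rng.exponential(scale=10.0, size=N)     # assumed demand distribution

grid = np.linspace(0.0, 60.0, 601)
x_saa = grid[int(np.argmin([saa_objective(x, demand) for x in grid]))]

# For exponential demand the true optimum is the critical fractile
# F(x*) = (p - c)/p, i.e. x* = scale * ln(p / c).
x_exact = 10.0 * np.log(p / c)
print(round(float(x_saa), 1), round(float(x_exact), 1))
```

As the sample size N grows, the SAA minimizer converges to the true optimizer, which is the convergence behaviour the abstract describes (here checked against a closed-form optimum available for this toy cost).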
Palma, Daniel A.; Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C.
2008-01-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors in the epithermal range, G_epi(τ,ξ). Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of the ψ(x,ξ) function, namely the 4-pole Padé method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provided satisfactory accuracy. (authors)
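Reference values against which such analytical approximations can be checked follow from the standard integral representation of the Doppler broadening function, ψ(x,ξ) = (ξ/2√π) ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy, evaluated by brute-force quadrature. The sketch below is only this reference quadrature, not one of the approximations proposed in the paper; the grid extent and resolution are arbitrary choices.

```python
import numpy as np

# Doppler broadening function by direct quadrature of
#   psi(x, xi) = (xi / (2*sqrt(pi))) * Int exp(-xi^2 (x-y)^2 / 4) / (1+y^2) dy
def psi(x, xi, y_max=200.0, n=400001):
    y = np.linspace(-y_max, y_max, n)
    dy = y[1] - y[0]
    integrand = np.exp(-xi**2 * (x - y) ** 2 / 4.0) / (1.0 + y**2)
    return xi / (2.0 * np.sqrt(np.pi)) * integrand.sum() * dy

# Sanity check: for large xi the Gaussian kernel tends to a delta function,
# so psi(x, xi) -> 1 / (1 + x^2), the natural line shape.
print(abs(psi(0.0, 50.0) - 1.0) < 0.01, abs(psi(2.0, 50.0) - 0.2) < 0.01)
```

Such a quadrature is far too slow for routine multigroup processing, which is exactly why fast analytical approximations of ψ(x,ξ) are of interest.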
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.
2014-01-01
of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov
Higher order saddlepoint approximations in the Vasicek portfolio credit loss model
Huang, X.; Oosterlee, C.W.; van der Weide, J.A.M.
2006-01-01
This paper utilizes the saddlepoint approximation as an efficient tool to estimate the portfolio credit loss distribution in the Vasicek model. Value at Risk (VaR), the risk measure chosen in the Basel II Accord for the evaluation of capital requirement, can then be found by inverting the loss
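As background to the method: conditional on the systematic factor, the default count of a homogeneous Vasicek portfolio is binomial, so the saddlepoint machinery can be shown directly on a binomial cumulant generating function. The portfolio size and default probability below are illustrative only, not values from the paper.

```python
import math

# Conditional on the systematic factor, the default count of a homogeneous
# portfolio is Binomial(n, p), with CGF K(t) = n * log(1 - p + p * exp(t)).
def saddlepoint_pmf(x, n, p):
    q = 1.0 - p
    t_hat = math.log(q * x / (p * (n - x)))      # solves K'(t_hat) = x
    et = math.exp(t_hat)
    K = n * math.log(q + p * et)
    K2 = n * p * q * et / (q + p * et) ** 2      # K''(t_hat)
    # saddlepoint density: exp(K - t*x) / sqrt(2*pi*K'')
    return math.exp(K - t_hat * x) / math.sqrt(2.0 * math.pi * K2)

n, p = 100, 0.03                         # illustrative: 100 obligors, PD = 3%
exact = math.comb(n, 8) * p**8 * (1 - p) ** (n - 8)
approx = saddlepoint_pmf(8, n, p)
print(abs(approx / exact - 1.0) < 0.05)  # prints True
```

The appeal for VaR computation is that the tail is captured accurately far from the mean at negligible cost, where a normal approximation would fail badly.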
Loglinear Approximate Solutions to Real-Business-Cycle Models: Some Observations
Lau, Sau-Him Paul; Ng, Philip Hoi-Tak
2007-01-01
Following the analytical approach suggested in Campbell, the authors consider a baseline real-business-cycle (RBC) model with endogenous labor supply. They observe that the coefficients in the loglinear approximation of the dynamic equations characterizing the equilibrium are related to the fundamental parameters in a relatively simple manner.…
Guedes, J.M.; Rodrigues, H.C.; Bendsøe, Martin P.
2003-01-01
This paper describes a computational model, based on inverse homogenization and topology design, for approximating energy bounds for two-phase composites under multiple load cases. The approach allows for the identification of possible single-scale cellular materials that give rise to the optimal...
High-intensity ionization approximations: test of convergence in a one-dimensional model
Antunes Neto, H.S. (Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro); Davidovich, L.; Marchesin, D.
1983-06-01
By solving a one-dimensional model numerically, the range of validity of some non-perturbative treatments proposed for the problem of atomic ionization by strong laser fields is examined. Some scaling properties of the ionization probability are established, and a new approximation, which converges to the exact results in the limit of very strong fields, is proposed. (Author) [pt
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with the van der Waals fluid approximation, where the van der Waals fluid equation of state contains a single parameter ω_v. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion where the acceleration grows in an exponential way and the van der Waals fluid behaves like an inflaton for an initial epoch of the universe. The model also describes that, as time evolves, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.
Development of nodal interface conditions for a PN approximation nodal model
Feiz, M.
1993-01-01
A relation was developed for approximating higher-order odd moments from lower-order odd moments at the nodal interfaces of a Legendre polynomial nodal model. Two sample problems were tested using different-order P_N expansions in adjacent nodes. The developed relation proved to be adequate and matched the nodal interface flux accurately. The development allows the use of different-order expansions in adjacent nodes, and will be used in a hybrid diffusion-transport nodal model. (author)
Molecular Model of a Quantum Dot Beyond the Constant Interaction Approximation
Temirov, Ruslan; Green, Matthew F. B.; Friedrich, Niklas; Leinen, Philipp; Esat, Taner; Chmielniak, Pawel; Sarwar, Sidra; Rawson, Jeff; Kögerler, Paul; Wagner, Christian; Rohlfing, Michael; Tautz, F. Stefan
2018-05-01
We present a physically intuitive model of molecular quantum dots beyond the constant interaction approximation. It accurately describes their charging behavior and allows the extraction of important molecular properties that are otherwise experimentally inaccessible. The model is applied to data recorded with a noncontact atomic force microscope on three different molecules that act as a quantum dot when attached to the microscope tip. The results are in excellent agreement with first-principles simulations.
A local adaptive method for the numerical approximation in seismic wave modelling
Galuzzi Bruno G.
2017-12-01
We propose a new numerical approach for the solution of the 2D acoustic wave equation to model the predicted data in the field of active-source seismic inverse problems. This method consists in using an explicit finite difference technique with an adaptive order of approximation of the spatial derivatives that takes into account the local velocity at the grid nodes. Testing our method to simulate the recorded seismograms in a marine seismic acquisition, we found that the low computational time and the low approximation error of the proposed approach make it suitable in the context of seismic inversion problems.
Bayesian model comparison using Gauss approximation on multicomponent mass spectra from CH4 plasma
Kang, H.D.; Dose, V.
2004-01-01
We performed Bayesian model comparison on mass spectra from CH4 rf process plasmas to detect radicals produced in the plasma. The key ingredient for its implementation is the high-dimensional evidence integral. We apply a Gauss approximation to evaluate the evidence. The results were compared with those calculated by the thermodynamic integration method using a Markov Chain Monte Carlo technique. In spite of the very large difference in computation time between the two methods, very good agreement was obtained. Alternatively, a Monte Carlo integration method based on the approximated Gaussian posterior density is presented. Its applicability to the problem of mass spectrometry is discussed
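The Gauss (Laplace) approximation of an evidence integral replaces the integrand by a Gaussian matched at the posterior mode. A minimal sketch, with a test integrand deliberately chosen so the approximation is exact (an unnormalized Gaussian, not a mass-spectrometry model):

```python
import numpy as np

# Laplace (Gauss) approximation of the evidence Z = Int f(theta) d(theta):
# expand log f to second order around its maximum theta*, which gives
#   Z ~ f(theta*) * (2*pi)^(d/2) / sqrt(det H),  H = -Hessian[log f](theta*).
def laplace_evidence(log_f, theta_star, H):
    d = len(theta_star)
    _, logdet = np.linalg.slogdet(H)
    return np.exp(log_f(theta_star) + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet)

# Check on an unnormalized 2-D Gaussian, where the approximation is exact.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
log_f = lambda t: -0.5 * t @ A @ t
Z = laplace_evidence(log_f, np.zeros(2), A)
Z_exact = 2.0 * np.pi / np.sqrt(np.linalg.det(A))
print(abs(Z - Z_exact) < 1e-10)  # prints True
```

For a non-Gaussian posterior the same formula gives only an approximation, which is what the comparison against thermodynamic integration in the abstract is probing.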
Approximating the Ising model on fractal lattices of dimension less than two
Codello, Alessandro; Drach, Vincent; Hietanen, Ari
2015-01-01
We construct periodic approximations to the free energies of Ising models on fractal lattices of dimension smaller than two, in the case of a zero external magnetic field, based on the combinatorial method of Feynman and Vdovichenko. We show that the procedure is applicable to any fractal obtained...... with, possibly, arbitrary accuracy and paves the way for the determination of Tc of any fractal of dimension less than two. Critical exponents are more difficult to determine since the free energy of any periodic approximation still has a logarithmic singularity at the critical point, implying α = 0. We also...
RCS estimation of linear and planar dipole phased arrays approximate model
Singh, Hema; Jha, Rakesh Mohan
2016-01-01
In this book, the RCS of a parallel-fed linear and planar dipole array is derived using an approximate method. The signal propagation within the phased array system determines the radar cross section (RCS) of the phased array. The reflection and transmission coefficients for a signal at different levels of the phased-in scattering array system depend on the impedance mismatch and the design parameters. Moreover, the mutual coupling effect between the antenna elements is an important factor. A phased array system comprises radiating elements followed by phase shifters, couplers, and terminating load impedance. These components present their respective impedances to the incoming signal, which travels through them before reaching the receive port of the array system. In this book, the RCS is approximated in terms of the array factor, neglecting the phase terms. The mutual coupling effect is taken into account. The dependence of the RCS pattern on the design parameters is analyzed. The approximate model is established as a...
Evaluation of rate law approximations in bottom-up kinetic models of metabolism
Du, Bin; Zielinski, Daniel C.; Kavvas, Erol S.
2016-01-01
Background: The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law...
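The qualitative difference between a Michaelis-Menten rate law and a pure mass action rate law can be seen in a one-reaction sketch. All parameter values are illustrative, not drawn from the red-blood-cell model of the paper.

```python
# Illustrative comparison of two rate laws for enzymatic conversion S -> P.
Vmax, Km = 1.0, 0.5

def mm_rate(s):
    # Michaelis-Menten: saturates at Vmax for S >> Km
    return Vmax * s / (Km + s)

def mass_action_rate(s):
    # pure mass action, k matched to the low-substrate limit Vmax / Km
    return (Vmax / Km) * s

def integrate(rate, s0=2.0, dt=1e-3, t_end=10.0):
    # forward-Euler integration of dS/dt = -rate(S)
    s = s0
    for _ in range(int(t_end / dt)):
        s -= rate(s) * dt
    return s

s_mm = integrate(mm_rate)
s_ma = integrate(mass_action_rate)
# Starting at S = 2.0 >> Km the enzyme saturates, so the mass action law
# over-predicts consumption and leaves less substrate:
print(s_ma < s_mm)  # prints True
```

The two laws agree in the low-substrate limit and diverge as substrate approaches and exceeds Km, which is why the physiological concentration regime matters for the comparison in the abstract.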
Prudhomme, Serge; Bryant, Corey M.
2015-01-01
Parameter estimation for complex models using Bayesian inference is usually a very costly process as it requires a large number of solves of the forward problem. We show here how the construction of adaptive surrogate models using a posteriori error estimates for quantities of interest can significantly reduce the computational cost in problems of statistical inference. As surrogate models provide only approximations of the true solutions of the forward problem, it is nevertheless necessary to control these errors in order to construct an accurate reduced model with respect to the observables utilized in the identification of the model parameters. Effectiveness of the proposed approach is demonstrated on a numerical example dealing with the Spalart–Allmaras model for the simulation of turbulent channel flows. In particular, we illustrate how Bayesian model selection using the adapted surrogate model in place of solving the coupled nonlinear equations leads to the same quality of results while requiring fewer nonlinear PDE solves.
Larin, S.A.; Ritbergen, T. van; Vermaseren, J.A.M.
1993-12-01
We obtain the analytic next-to-next-to-leading perturbative QCD corrections in the leading twist approximation for the moments N = 2, 4, 6, 8 of the non-singlet deep inelastic structure functions F_2 and F_L. We calculate the three-loop anomalous dimensions of the corresponding non-singlet operators and the three-loop coefficient functions of the structure function F_L. (orig.)
Hybrid diffusion and two-flux approximation for multilayered tissue light propagation modeling
Yudovsky, Dmitry; Durkin, Anthony J.
2011-07-01
Accurate and rapid estimation of fluence, reflectance, and absorbance in multilayered biological media has been essential in many biophotonics applications that aim to diagnose, cure, or model in vivo tissue. The radiative transfer equation (RTE) rigorously models light transfer in absorbing and scattering media. However, analytical solutions to the RTE are limited even in simple homogeneous or plane media. Monte Carlo simulation has been used extensively to solve the RTE. However, Monte Carlo simulation is computationally intensive and may not be practical for applications that demand real-time results. Instead, the diffusion approximation has been shown to provide accurate estimates of light transport in strongly scattering tissue. The diffusion approximation is a greatly simplified model and produces analytical solutions for the reflectance and absorbance in tissue. However, the diffusion approximation breaks down if tissue is strongly absorbing, which is common in the visible part of the spectrum or in applications that involve darkly pigmented skin and/or high local volumes of blood such as port-wine stain therapy or reconstructive flap monitoring. In these cases, a model of light transfer that can accommodate both strongly and weakly absorbing regimes is required. Here we present a model of light transfer through layered biological media that represents skin with two strongly scattering and one strongly absorbing layer.
Target-mediated drug disposition model and its approximations for antibody-drug conjugates.
Gibiansky, Leonid; Gibiansky, Ekaterina
2014-02-01
Antibody-drug conjugate (ADC) is a complex structure composed of an antibody linked to several molecules of a biologically active cytotoxic drug. The number of ADC compounds in clinical development now exceeds 30, with two of them already on the market. However, there is no rigorous mechanistic model that describes pharmacokinetic (PK) properties of these compounds. PK modeling of ADCs is even more complicated than that of other biologics as the model should describe distribution, binding, and elimination of antibodies with different toxin load, and also the deconjugation process and PK of the released toxin. This work extends the target-mediated drug disposition (TMDD) model to describe ADCs, derives the rapid binding (quasi-equilibrium), quasi-steady-state, and Michaelis-Menten approximations of the TMDD model as applied to ADCs, derives the TMDD model and its approximations for ADCs with load-independent properties, and discusses further simplifications of the system under various assumptions. The developed models are shown to describe data simulated from the available clinical population PK models of trastuzumab emtansine (T-DM1), one of the two currently approved ADCs. Identifiability of model parameters is also discussed and illustrated on the simulated T-DM1 examples.
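The quasi-equilibrium (rapid binding) idea underlying these approximations can be sketched for a single binding step: given total drug and total target, the free drug concentration follows from a quadratic. The rate constants and concentrations below are illustrative, not the T-DM1 model of the paper.

```python
import math

# Quasi-equilibrium approximation for one binding step C + R <-> RC:
# with totals Ctot = C + RC and Rtot = R + RC, and KD = koff / kon, free drug
# solves C^2 + (KD + Rtot - Ctot) * C - KD * Ctot = 0.
def free_drug_qe(ctot, rtot, kd):
    b = kd + rtot - ctot
    return 0.5 * (-b + math.sqrt(b * b + 4.0 * kd * ctot))

# Compare with explicitly integrated binding kinetics (illustrative values).
kon, koff = 100.0, 10.0            # KD = 0.1
ctot, rtot = 1.0, 0.5
c, r, rc = ctot, rtot, 0.0
dt = 1e-5
for _ in range(200000):            # integrate to t = 2, well past equilibration
    bind = kon * c * r - koff * rc
    c -= bind * dt
    r -= bind * dt
    rc += bind * dt

print(abs(c - free_drug_qe(ctot, rtot, koff / kon)) < 1e-4)  # prints True
```

The full TMDD system adds elimination, distribution, deconjugation, and toxin PK on top of this binding step; the approximations discussed in the abstract replace the fast binding kinetics by algebraic relations of exactly this kind.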
Gabor, A.F.; Ommeren, van J.C.W.
2006-01-01
In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present three facility location problems with stochastic demand and exponential servers, respectively inventory. We present a (1+ε,1)-reduction of the facility
Approximation algorithms for facility location problems with discrete subadditive cost functions
Gabor, A.F.; van Ommeren, Jan C.W.
2005-01-01
In this article we focus on approximation algorithms for facility location problems with subadditive costs. As examples of such problems, we present two facility location problems with stochastic demand and exponential servers, respectively inventory. We present a (1+ε,1)-reduction of the
Tang Xiangyang; Hsieh Jiang
2007-01-01
A cone-angle-based window function is defined in this manuscript for image reconstruction using helical cone beam filtered backprojection (CB-FBP) algorithms. Rather than defining the window boundaries on a two-dimensional detector acquiring projection data for computed tomographic imaging, the cone-angle-based window function deals with data redundancy by selecting rays with the smallest cone angle relative to the reconstruction plane. To be computationally efficient, an asymptotic approximation of the cone-angle-based window function is also given and analyzed in this paper. The benefit of using such an asymptotic approximation also includes the avoidance of functional discontinuities that cause artifacts in reconstructed tomographic images. The cone-angle-based window function and its asymptotic approximation provide a way, equivalent to the Tam-Danielsson window, for helical CB-FBP reconstruction algorithms to deal with data redundancy, regardless of whether the helical pitch is constant or dynamically variable during a scan. Taking the cone-parallel geometry as an example, a computer simulation study is conducted to evaluate the proposed window function and its asymptotic approximation for a helical CB-FBP reconstruction algorithm to handle data redundancy. The computer-simulated Forbild head and thorax phantoms are utilized in the performance evaluation, showing that the proposed cone-angle-based window function and its asymptotic approximation can deal with data redundancy very well in cone beam image reconstruction from projection data acquired along helical source trajectories. Moreover, a numerical study carried out in this paper reveals that the proposed cone-angle-based window function is actually equivalent to the Tam-Danielsson window, and rigorous mathematical proofs are being investigated
Palma, Daniel A.P. [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis/RJ (CEFET), RJ (Brazil)]. E-mail: dpalma@cefeteq.br; Martinez, Aquilino S.; Silva, Fernando C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br; fernando@lmn.con.ufrj.br
2005-07-01
An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the Frobenius method and variation of parameters. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and resonance self-protection factors, the latter being used to correct microscopic cross-section measurements by the activation technique. (author)
On the derivation of approximations to cellular automata models and the assumption of independence.
Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V
2014-07-01
Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes; however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence.
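The role of the independence assumption can be illustrated by comparing a toy proliferation CA with its logistic mean-field limit. The lattice, rates, and update rule below are hypothetical simplifications for illustration, not the models analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy proliferation CA on a periodic 1-D lattice (hypothetical rule): each
# occupied site attempts, with probability p, to place a daughter on a random
# neighbour; the attempt fails if the neighbour is already occupied.
def ca_density(n_sites=10000, p=0.01, steps=2000, rho0=0.05):
    occ = rng.random(n_sites) < rho0
    for _ in range(steps):
        parents = np.flatnonzero(occ)
        attempts = parents[rng.random(parents.size) < p]
        targets = (attempts + rng.choice([-1, 1], attempts.size)) % n_sites
        occ[targets[~occ[targets]]] = True       # only empty targets filled
    return occ.mean()

# Mean-field limit under the independence assumption: logistic growth
#   rho(t+1) = rho(t) + p * rho(t) * (1 - rho(t))
def mean_field(p=0.01, steps=2000, rho0=0.05):
    rho = rho0
    for _ in range(steps):
        rho += p * rho * (1.0 - rho)
    return rho

d_ca, d_mf = ca_density(), mean_field()
# Nearest-neighbour correlations (clumping) slow the CA relative to the
# independent mean-field prediction:
print(d_ca <= d_mf + 0.02)  # prints True
```

The gap between the simulated density and the logistic prediction is exactly the kind of discrepancy the paper attributes to the independence assumption between the states of neighbouring sites.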
Malpetti, Daniele; Roscilde, Tommaso
2017-02-01
The mean-field approximation is at the heart of our understanding of complex systems, despite its fundamental limitation of completely neglecting correlations between the elementary constituents. In a recent work [Phys. Rev. Lett. 117, 130401 (2016), 10.1103/PhysRevLett.117.130401], we have shown that in quantum many-body systems at finite temperature, two-point correlations can be formally separated into a thermal part and a quantum part and that quantum correlations are generically found to decay exponentially at finite temperature, with a characteristic, temperature-dependent quantum coherence length. The existence of these two different forms of correlation in quantum many-body systems suggests the possibility of formulating an approximation, which affects quantum correlations only, without preventing the correct description of classical fluctuations at all length scales. Focusing on lattice boson and quantum Ising models, we make use of the path-integral formulation of quantum statistical mechanics to introduce such an approximation, which we dub quantum mean-field (QMF) approach, and which can be readily generalized to a cluster form (cluster QMF or cQMF). The cQMF approximation reduces to cluster mean-field theory at T = 0, while at any finite temperature it produces a family of systematically improved, semi-classical approximations to the quantum statistical mechanics of the lattice theory at hand. Contrary to standard MF approximations, the correct nature of thermal critical phenomena is captured by any cluster size. In the two exemplary cases of the two-dimensional quantum Ising model and of two-dimensional quantum rotors, we study systematically the convergence of the cQMF approximation towards the exact result, and show that the convergence is typically linear or sublinear in the boundary-to-bulk ratio of the clusters as T → 0, while it becomes faster than linear as T grows. These results pave the way towards the development of semiclassical numerical
Probabilistic image processing by means of the Bethe approximation for the Q-Ising model
Tanaka, Kazuyuki; Inoue, Jun-ichi; Titterington, D M
2003-01-01
The framework of Bayesian image restoration for multi-valued images by means of the Q-Ising model with nearest-neighbour interactions is presented. Hyperparameters in the probabilistic model are determined so as to maximize the marginal likelihood. A practical algorithm is described for multi-valued image restoration based on the Bethe approximation. The algorithm corresponds to loopy belief propagation in artificial intelligence. We conclude that, in real-world grey-level images, the Q-Ising model can give good results.
Kainen, P.C.; Kůrková, Věra; Sanguineti, M.
2012-01-01
Roč. 58, č. 2 (2012), s. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others:CNR-AV ČR(CZ-IT) Project 2010–2012 Complexity of Neural -Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012
Opper, Manfred; Winther, Ole
2001-01-01
We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set.
Frank Technow
Genomic selection, enabled by whole genome prediction (WGP methods, is revolutionizing plant breeding. Existing WGP methods have been shown to deliver accurate predictions in the most common settings, such as prediction of across environment performance for traits with additive gene effects. However, prediction of traits with non-additive gene effects and prediction of genotype by environment interaction (G×E, continues to be challenging. Previous attempts to increase prediction accuracy for these particularly difficult tasks employed prediction methods that are purely statistical in nature. Augmenting the statistical methods with biological knowledge has been largely overlooked thus far. Crop growth models (CGMs attempt to represent the impact of functional relationships between plant physiology and the environment in the formation of yield and similar output traits of interest. Thus, they can explain the impact of G×E and certain types of non-additive gene effects on the expressed phenotype. Approximate Bayesian computation (ABC, a novel and powerful computational procedure, allows the incorporation of CGMs directly into the estimation of whole genome marker effects in WGP. Here we provide a proof of concept study for this novel approach and demonstrate its use with synthetic data sets. We show that this novel approach can be considerably more accurate than the benchmark WGP method GBLUP in predicting performance in environments represented in the estimation set as well as in previously unobserved environments for traits determined by non-additive gene effects. We conclude that this proof of concept demonstrates that using ABC for incorporating biological knowledge in the form of CGMs into WGP is a very promising and novel approach to improving prediction accuracy for some of the most challenging scenarios in plant breeding and applied genetics.
Prudhomme, Serge
2015-01-07
The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular to the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.
Prudhomme, Serge
2015-01-01
The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular to the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.
Soneson, Joshua E
2017-04-01
Wide-angle parabolic models are commonly used in geophysics and underwater acoustics but have seen little application in medical ultrasound. Here, a wide-angle model for continuous-wave high-intensity ultrasound beams is derived, which approximates the diffraction process more accurately than the commonly used Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation without increasing implementation complexity or computing time. A method for preventing the high spatial frequencies often present in source boundary conditions from corrupting the solution is presented. Simulations of shallowly focused axisymmetric beams using both the wide-angle and standard parabolic models are compared to assess the accuracy with which they model diffraction effects. The wide-angle model proposed here offers improved focusing accuracy and less error throughout the computational domain than the standard parabolic model, offering a facile method for extending the utility of existing KZK codes.
Johnson, E.
1977-01-01
A theory for site-site pair distribution functions of molecular fluids is derived from the Ornstein-Zernike equation. Atom-atom pair distribution functions of this theory which were obtained by using different approximations for the Percus-Yevick site-site direct correlation functions are compared
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
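The Zassenhaus idea the authors exploit — writing the exponential of a sum of matrices as a product of exponentials — can be checked on a small example. A sketch with illustrative 2×2 matrices (not the regulatory-network tensors of the paper), using the first-order Zassenhaus correction e^{t(A+B)} ≈ e^{tA} e^{tB} e^{-t²[A,B]/2}:

```python
import numpy as np

def expm(M, terms=30):
    """Plain Taylor-series matrix exponential; adequate for the small,
    well-scaled matrices used below (not a general-purpose expm)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
t = 0.1
comm = A @ B - B @ A                     # the commutator [A, B]

exact = expm(t * (A + B))
order1 = expm(t * A) @ expm(t * B)                          # Lie-Trotter: O(t^2)
order2 = expm(t * A) @ expm(t * B) @ expm(-t**2 / 2 * comm)  # Zassenhaus: O(t^3)

err_trotter = float(np.abs(order1 - exact).max())
err_zassenhaus = float(np.abs(order2 - exact).max())
print(err_trotter, err_zassenhaus)
```

The commutator correction visibly tightens the splitting error, which is the mechanism behind the complexity reduction described above.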
Numerical approximation for HIV infection of CD4+ T cells mathematical model
Vineet K. Srivastava
2014-06-01
A dynamical model of HIV infection of CD4+ T cells is solved numerically using an approximate analytical method, the so-called differential transform method (DTM). The solution obtained by the method is an infinite power series for an appropriate initial condition, without any discretization, transformation, perturbation, or restrictive conditions. A comparative study between the present method and the classical Euler and fourth-order Runge–Kutta (RK4) methods is also carried out.
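The RK4 comparison integrator mentioned in the abstract is straightforward to set up for a model of this type. A sketch with a generic CD4+ T-cell system (susceptible T, infected I, virus V); the right-hand side and all parameter values here are illustrative, not taken from the paper:

```python
def hiv_rhs(state, q=0.1, a=0.02, b=0.3, g=2.4, r=3.0, Tmax=1500.0,
            k=0.0027, N=10.0):
    """Illustrative CD4+ T-cell model: supply q, death rates a/b/g,
    logistic growth r, infection rate k, burst size N."""
    T, I, V = state
    dT = q - a * T + r * T * (1.0 - (T + I) / Tmax) - k * V * T
    dI = k * V * T - b * I
    dV = N * b * I - g * V
    return (dT, dI, dV)

def rk4_step(f, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
    k3 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
    k4 = f(tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for yi, a1, a2, a3, a4 in zip(y, k1, k2, k3, k4))

y = (0.1, 0.0, 0.1)      # T(0), I(0), V(0) -- illustrative initial data
h, n_steps = 0.01, 100   # integrate to t = 1
for _ in range(n_steps):
    y = rk4_step(hiv_rhs, y, h)
print(y)
```

A DTM power series for the same system would be compared against such an RK4 reference on a common time grid.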
Matrix model approximations of fuzzy scalar field theories and their phase diagrams
Tekel, Juraj [Department of Theoretical Physics, Faculty of Mathematics, Physics and Informatics, Comenius University, Mlynska Dolina, Bratislava, 842 48 (Slovakia)]
2015-12-29
We present an analysis of two different approximations to the scalar field theory on the fuzzy sphere, a nonperturbative and a perturbative one, which are both multitrace matrix models. We show that the former reproduces a phase diagram with correct features in a qualitative agreement with the previous numerical studies and that the latter gives a phase diagram with features not expected in the phase diagram of the field theory.
Modeling shock waves in an ideal gas: combining the Burnett approximation and Holian's conjecture.
He, Yi-Guang; Tang, Xiu-Zhang; Pu, Yi-Kang
2008-07-01
We model a shock wave in an ideal gas by combining the Burnett approximation and Holian's conjecture. We use the temperature in the direction of shock propagation rather than the average temperature in the Burnett transport coefficients. The shock wave profiles and shock thickness are compared with other theories. The results are found to agree better with the nonequilibrium molecular dynamics (NEMD) and direct simulation Monte Carlo (DSMC) data than the Burnett equations and the modified Navier-Stokes theory.
Picard Approximation of Stochastic Differential Equations and Application to LIBOR Models
Papapantoleon, Antonis; Skovmand, David
The aim of this work is to provide fast and accurate approximation schemes for the Monte Carlo pricing of derivatives in LIBOR market models. Standard methods can be applied to solve the stochastic differential equations of the successive LIBOR rates, but the methods are generally slow. Our … exponential to quadratic using truncated expansions of the product terms. We include numerical illustrations of the accuracy and speed of our method pricing caplets, swaptions and forward rate agreements.
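The Picard scheme underlying the title can be illustrated in its deterministic form — successive substitution into the integral equation — before any SDE machinery. A sketch (illustrative only; the paper applies the idea to the coupled LIBOR-rate SDEs, which are not reproduced here):

```python
import math

def picard(f, y0, t_grid, n_iter):
    """Picard iteration y_{k+1}(t) = y0 + Int_0^t f(s, y_k(s)) ds,
    with the integral evaluated by the trapezoidal rule on t_grid."""
    y = [y0 for _ in t_grid]
    for _ in range(n_iter):
        integrand = [f(t, yt) for t, yt in zip(t_grid, y)]
        new, acc = [y0], y0
        for i in range(1, len(t_grid)):
            acc += 0.5 * (t_grid[i] - t_grid[i - 1]) * (integrand[i] + integrand[i - 1])
            new.append(acc)
        y = new
    return y

ts = [i / 200.0 for i in range(201)]           # grid on [0, 1]
approx = picard(lambda t, y: y, 1.0, ts, 12)   # y' = y, y(0) = 1
print(abs(approx[-1] - math.e))                # small after a few iterations
```

Each iteration extends the region over which the approximation is accurate; for an SDE, the integral term additionally carries a stochastic (Itô) integral against the driving Brownian motion.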
Many particle approximation of the Aw-Rascle-Zhang second order model for vehicular traffic.
Francesco, Marco Di; Fagioli, Simone; Rosini, Massimiliano D
2017-02-01
We consider the follow-the-leader approximation of the Aw-Rascle-Zhang (ARZ) model for traffic flow in a multi population formulation. We prove rigorous convergence to weak solutions of the ARZ system in the many particle limit in presence of vacuum. The result is based on uniform BV estimates on the discrete particle velocity. We complement our result with numerical simulations of the particle method compared with some exact solutions to the Riemann problem of the ARZ system.
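A discrete sketch of the follow-the-leader construction: each vehicle carries a Lagrangian marker w_i, the local density is read off from the headway, and the velocity is v_i = w_i − p(ρ_i). The pressure p(ρ) = ρ, the forward-Euler time stepping, and all numbers below are illustrative; the paper's rigorous many-particle limit uses the exact particle dynamics:

```python
def ftl_step(x, w, ell, dt, p=lambda rho: rho):
    """One follow-the-leader step for the ARZ model:
    rho_i = ell / (x_{i+1} - x_i),  v_i = w_i - p(rho_i),
    and the leader (last vehicle) drives at its free speed w[-1]."""
    n = len(x)
    v = [w[i] - p(ell / (x[i + 1] - x[i])) for i in range(n - 1)] + [w[-1]]
    return [xi + dt * vi for xi, vi in zip(x, v)], v

n, ell = 20, 0.025
x = [0.05 * i for i in range(n)]   # uniform headways 0.05 -> density 0.5
w = [1.0] * n                      # identical Lagrangian markers
v = []
for _ in range(100):               # integrate to t = 1
    x, v = ftl_step(x, w, ell, dt=0.01)
print(v[0], v[-1])                 # rarefaction fan: speeds spread toward the leader
```

Because the leader pulls away, headways open up from the front, densities drop, and a discrete rarefaction wave propagates backwards — the particle analogue of the corresponding ARZ Riemann solution.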
Self-consistent Random Phase Approximation applied to a schematic model of the field theory
Bertrand, Thierry
1998-01-01
The self-consistent Random Phase Approximation (SCRPA) is a method that allows the inclusion of correlations in the ground and excited states within mean-field theory. It has the advantage of not violating the Pauli principle, in contrast to RPA, which is based on the quasi-bosonic approximation; in addition, numerous applications in different domains of physics show a possible variational character. However, the latter should be formally demonstrated. The first model studied with SCRPA is the anharmonic oscillator in the region where one of its symmetries is spontaneously broken. The ground state energy is reproduced by SCRPA more accurately than by RPA, with no violation of the Ritz variational principle, which is not the case for the latter approximation. SCRPA is equally successful for the ground state energy of a model mixing bosons and fermions. At the transition point SCRPA corrects RPA drastically, but far from this region the correction becomes negligible, both methods being of similar precision. In the deformed region, a spurious mode occurred in RPA due to the microscopic character of the model. SCRPA reproduces this mode very accurately, and it actually coincides with an excitation in the exact spectrum
Gaussian and Affine Approximation of Stochastic Diffusion Models for Interest and Mortality Rates
Marcus C. Christiansen
2013-10-01
In the actuarial literature, it has become common practice to model future capital returns and mortality rates stochastically in order to capture market risk and forecasting risk. Although interest rates often should be, and mortality rates always have to be, non-negative, many authors use stochastic diffusion models with an affine drift term and additive noise. As a result, the diffusion process is Gaussian and, thus, analytically tractable, but negative values occur with positive probability. The argument is that the class of Gaussian diffusions is a good approximation of the real future development. We challenge that reasoning and study the asymptotics of diffusion processes with affine drift and a general noise term, comparing them with corresponding diffusion processes with an affine drift term and an affine noise term or additive noise. Our study helps to quantify the error that is made by approximating diffusive interest and mortality rate models with Gaussian diffusions and affine diffusions. In particular, we discuss forward interest and forward mortality rates and the error that these approximations cause in the valuation of life insurance claims.
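The negativity issue with Gaussian (additive-noise) diffusions is easy to exhibit by simulation. A sketch with illustrative mean-reversion parameters (not from the paper), contrasting an additive noise term with a CIR-type square-root noise term under the same affine drift:

```python
import math
import random

def euler_maruyama(x0, drift, diffusion, dt, n, rng):
    """Euler-Maruyama path of dX = drift(X) dt + diffusion(X) dW."""
    x, path = x0, [x0]
    for _ in range(n):
        x += drift(x) * dt + diffusion(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

kappa, theta, sigma = 0.5, 0.02, 0.05   # illustrative rate-model parameters
drift = lambda x: kappa * (theta - x)
rng = random.Random(0)

paths_neg = 0
for _ in range(200):                    # 200 ten-year paths, daily steps
    gauss_path = euler_maruyama(0.02, drift, lambda x: sigma, 1 / 252, 2520, rng)
    if min(gauss_path) < 0:
        paths_neg += 1
print(paths_neg, "of 200 Gaussian paths go negative")

# Square-root (CIR-type) noise shrinks near zero and suppresses sign changes:
cir_path = euler_maruyama(0.02, drift,
                          lambda x: sigma * math.sqrt(max(x, 0.0)),
                          1 / 252, 2520, rng)
```

With a stationary standard deviation larger than the mean, most Gaussian paths dip below zero at some point — the tractability-versus-realism trade-off the abstract quantifies.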
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further introduced in the image reconstruction of a Laminar Optical Tomography system.
Song Lina; Wang Weiguo
2010-01-01
In this Letter, an enhanced Adomian decomposition method is proposed, which introduces the h-curve of the homotopy analysis method into the standard Adomian decomposition method. Several examples show that this method can successfully derive approximate rational Jacobi elliptic function solutions of fractional differential equations.
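The standard Adomian decomposition this Letter builds on can be sketched for the Riccati equation y' = −y², y(0) = 1, whose exact solution is 1/(1 + t); for a quadratic nonlinearity the Adomian polynomials reduce to A_n = Σ_{i+j=n} y_i y_j. This sketch works with plain polynomial coefficient lists and does not include the h-curve modification proposed in the Letter:

```python
def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_int(p):
    """Antiderivative with zero constant term."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

def poly_eval(p, t):
    return sum(c * t**k for k, c in enumerate(p))

def adomian_riccati(n_terms=12):
    """ADM components for y' = -y^2, y(0) = 1:  y_{k+1} = -Int_0^t A_k ds."""
    ys = [[1.0]]                       # y_0 = initial condition
    for k in range(n_terms - 1):
        A_k = [0.0]
        for i in range(k + 1):         # A_k = sum_{i+j=k} y_i * y_j
            A_k = poly_add(A_k, poly_mul(ys[i], ys[k - i]))
        ys.append([-c for c in poly_int(A_k)])
    return ys

ys = adomian_riccati()
approx = sum(poly_eval(y, 0.5) for y in ys)
print(approx)   # converges to 1/(1 + 0.5) = 2/3 for |t| < 1
```

Here the components come out as y_k = (−t)^k, so the partial sums are the geometric series for 1/(1 + t); the h-curve modification is aimed at widening exactly this kind of convergence region.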
Lublinsky, M.
2004-01-01
A simple analytic expression for the non-singlet structure function fns is given. The expression is derived from the result of B. I. Ermolaev et al. (1996) obtained by low x resummation of the quark ladder diagrams in the double logarithmic approximation of perturbative QCD. (orig.)
Large-scale parameter extraction in electrocardiology models through Born approximation
He, Yuan
2012-12-04
One of the main objectives in electrocardiology is to extract physical properties of cardiac tissues from measured information on electrical activity of the heart. Mathematically, this is an inverse problem for reconstructing coefficients in electrocardiology models from partial knowledge of the solutions of the models. In this work, we consider such parameter extraction problems for two well-studied electrocardiology models: the bidomain model and the FitzHugh-Nagumo model. We propose a systematic reconstruction method based on the Born approximation of the original nonlinear inverse problem. We describe a two-step procedure that allows us to reconstruct not only perturbations of the unknowns, but also the backgrounds around which the linearization is performed. We show some numerical simulations under various conditions to demonstrate the performance of our method. We also introduce a parameterization strategy using eigenfunctions of the Laplacian operator to reduce the number of unknowns in the parameter extraction problem. © 2013 IOP Publishing Ltd.
Csordás, András; Graham, Robert; Szépfalusy, Péter
1997-01-01
The Bogoliubov equations of the quasi-particle excitations in a weakly interacting trapped Bose-condensate are solved in the WKB approximation in an isotropic harmonic trap, determining the discrete quasi-particle energies and wave functions by torus (Bohr-Sommerfeld) quantization of the integrable classical quasi-particle dynamics. The results are used to calculate the position and strengths of the peaks in the dynamic structure function which can be observed by off-resonance inelastic light...
Uniform approximations of Bernoulli and Euler polynomials in terms of hyperbolic functions
J.L. López; N.M. Temme (Nico)
1998-01-01
Bernoulli and Euler polynomials are considered for large values of the order. Convergent expansions are obtained for $B_n(nz+1/2)$ and $E_n(nz+1/2)$ in powers of $n^{-1}$, with coefficients being rational functions of $z$ and hyperbolic functions of argument $1/2z$. These expansions are
Evaluation of high-level waste pretreatment processes with an approximate reasoning model
Bott, T.F.; Eisenhawer, S.W.; Agnew, S.F.
1999-01-01
The development of an approximate-reasoning (AR)-based model to analyze pretreatment options for high-level waste is presented. AR methods are used to emulate the processes used by experts in arriving at a judgment. In this paper, the authors first consider two specific issues in applying AR to the analysis of pretreatment options. They examine how to combine quantitative and qualitative evidence to infer the acceptability of a process result using the example of cesium content in low-level waste. They then demonstrate the use of simple physical models to structure expert elicitation and to produce inferences consistent with a problem involving waste particle size effects
Time-dependent Hartree approximation and time-dependent harmonic oscillator model
Blaizot, J.P.
1982-01-01
We present an analytically soluble model for studying nuclear collective motion within the framework of the time-dependent Hartree (TDH) approximation. The model reduces the TDH equations to the Schroedinger equation of a time-dependent harmonic oscillator. Using canonical transformations and coherent states we derive a few properties of the time-dependent harmonic oscillator which are relevant for applications. We analyse the role of the normal modes in the time evolution of a system governed by TDH equations. We show how these modes couple together due to the anharmonic terms generated by the non-linearity of the theory. (orig.)
Ahmed, Hafiz; Salgado, Ivan; Ríos, Héctor
2018-02-01
Robust synchronization of master-slave chaotic systems is considered in this work. First, an approximate model of the error system is obtained using the ultra-local model concept. Then a Continuous Singular Terminal Sliding-Mode (CSTSM) controller is designed for the purpose of synchronization. The proposed approach is output-feedback-based and uses a fixed-time higher-order sliding-mode (HOSM) differentiator for state estimation. Numerical simulation and experimental results are given to show the effectiveness of the proposed technique. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Elwardani, Ahmed Elsaid
2013-09-01
Modelling of gasoline fuel droplet heating and evaporation processes is investigated using several approximations of this fuel. These are quasi-components used in the quasi-discrete model and the approximations of these quasi-components (Surrogate I (molar fractions: 83.0% n-C6H14 + 15.6% n-C10H22 + 1.4% n-C14H30) and Surrogate II (molar fractions: 83.0% n-C7H16 + 15.6% n-C11H24 + 1.4% n-C15H32)). Also, we have used Surrogate A (molar fractions: 56% n-C7H16 + 28% iso-C8H18 + 17% C7H8) and Surrogate B (molar fractions: 63% n-C7H16 + 20% iso-C8H18 + 17% C7H8), originally introduced based on the closeness of the ignition delay of surrogates to that of gasoline fuel. The predictions of droplet radii and temperatures based on three quasi-components and their approximations (Surrogates I and II) are shown to be much more accurate than the predictions using Surrogates A and B. © 2013 Elsevier Ltd. All rights reserved.
Ottoboni, A; Parenti-Castelli, V; Sancisi, N; Belvedere, C; Leardini, A
2010-01-01
In-depth comprehension of human joint function requires complex mathematical models, which are particularly necessary in applications of prosthesis design and surgical planning. Kinematic models of the knee joint, based on one-degree-of-freedom equivalent mechanisms, have been proposed to replicate the passive relative motion between the femur and tibia, i.e., the joint motion in virtually unloaded conditions. In the mechanisms analysed in the present work, some fibres within the anterior and posterior cruciate and medial collateral ligaments were taken as isometric during passive motion, and articulating surfaces as rigid. The shapes of these surfaces were described with increasing anatomical accuracy, i.e. from planar to spherical and general geometry, which consequently led to models with increasing complexity. Quantitative comparison of the results obtained from three models, featuring an increasingly accurate approximation of the articulating surfaces, was performed by using experimental measurements of joint motion and anatomical structure geometries of four lower-limb specimens. Corresponding computer simulations of joint motion were obtained from the different models. The results revealed a good replication of the original experimental motion by all models, although the simulations also showed that a limit exists beyond which description of the knee passive motion does not benefit considerably from further approximation of the articular surfaces.
Lister, G G; Sheverev, V A; Uhrlandt, D
2002-01-01
The applicability of 'fluid' models based on analytic approximations of the electron energy distribution function (EEDF) and of kinetic models for low-pressure discharge light sources is discussed. Traditionally, 'fluid' models of fluorescent lamps assume that the EEDF is Maxwellian up to the energy of the first excited state. It is shown that such an approach is sufficiently accurate in most cases of conventional as well as of 'highly loaded' fluorescent lamps. However, this assumption is strongly violated for many rare gas glow discharges for mercury free light sources. As an example, a neon dc discharge is studied. The densities of the four lowest excited states and the electric field have been measured. The experimental results can be fairly well reproduced by a kinetic positive column model. This article was scheduled to appear in issue 14 of J. Phys. D: Appl. Phys.
Single-particle properties of the Hubbard model in a novel three-pole approximation
Di Ciolo, Andrea; Avella, Adolfo
2018-05-01
We study the 2D Hubbard model using the Composite Operator Method within a novel three-pole approximation. Motivated by the long-standing experimental puzzle of the single-particle properties of the underdoped cuprates, we include in the operatorial basis, together with the usual Hubbard operators, a field describing the electronic transitions dressed by the nearest-neighbor spin fluctuations, which play a crucial role in the unconventional behavior of the Fermi surface and of the electronic dispersion. Then, we adopt this approximation to study the single-particle properties in the strong coupling regime and find an unexpected behavior of the van Hove singularity that can be seen as a precursor of a pseudogap regime.
Hopping system control with an approximated dynamics model and upper-body motion
Lee, Hyang Jun; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)]
2015-11-15
A hopping system is highly non-linear due to the nature of its dynamics, which alternates within each cycle between flight and stance phases and the related transitions. Every control method that stabilizes the hopping system satisfies the Poincaré stability condition. At the Poincaré section, a hopping cycle is considered as a discrete set of sectional data. By controlling the sectional data in a discrete control form, we can generate a stable hopping cycle. We utilize phase-mapping matrices to build a Poincaré return map by approximating the dynamics of the hopping system with the spring-loaded inverted pendulum (SLIP) model. We can generate various Poincaré-stable gait patterns with the approximated discrete control form, which uses upper-body motions as inputs.
Jung, J.; Alvarellos, J.E.; Garcia-Gonzalez, P.; Godby, R.W.
2004-01-01
The complex nature of electron-electron correlations is made manifest in the very simple but nontrivial problem of two electrons confined within a sphere. The description of highly nonlocal correlation and self-interaction effects by widely used local and semilocal exchange-correlation energy density functionals is shown to be unsatisfactory in most cases. Even the best such functionals exhibit significant errors in the Kohn-Sham potentials and density profiles
Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets
Zhang, Bohai
2014-01-01
Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.
Palma, Daniel A. [CEFET QUIMICA de Nilopolis/RJ, Rio de Janeiro (Brazil); Goncalves, Alessandro C.; Martinez, Aquilino S.; Silva, Fernando C. [COPPE/UFRJ - Programa de Engenharia Nuclear, Rio de Janeiro (Brazil)
2008-07-01
The activation technique allows much more precise measurements of neutron intensity, relative or absolute. The technique requires the knowledge of the Doppler broadening function ψ(x,ξ) to determine the resonance self-shielding factors in the epithermal range, G_epi(τ,ξ). Two new analytical approximations for the Doppler broadening function ψ(x,ξ) are proposed. The proposed approximations are compared with other methods found in the literature for the calculation of the ψ(x,ξ) function, namely the 4-pole Padé method and the Frobenius method, when applied to the calculation of G_epi(τ,ξ). The results obtained provided satisfactory accuracy. (authors)
Nakano, Masayoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Minami, Takuya, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Fukui, Hitoshi, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Yoneda, Kyohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Shigeta, Yasuteru, E-mail: mnaka@cheng.es.osaka-u.ac.jp; Kishi, Ryohei, E-mail: mnaka@cheng.es.osaka-u.ac.jp [Department of Materials Engineering Science, Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531 (Japan); Champagne, Benoît; Botek, Edith [Laboratoire de Chimie Théorique, Facultés Universitaires Notre-Dame de la Paix (FUNDP), rue de Bruxelles, 61, 5000 Namur (Belgium)
2015-01-22
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
Piteľ Ján
2016-01-01
For modelling and simulation of pneumatic muscle actuators it is necessary to know the mathematical dependence of the muscle force on the muscle contraction at different pressures in the muscles. For this purpose, the static characteristics of the FESTO MAS-20-250N pneumatic artificial muscle used in the experiments were approximated. The paper presents simulation results of the pneumatic muscle actuator dynamics using a modified Hill's muscle model, in which four different approximations of the static characteristics of the artificial muscle were used.
Holmquist, Jeffrey G.; Waddle, Terry J.
2013-01-01
We used two-dimensional hydrodynamic models for the assessment of water diversion effects on benthic macroinvertebrates and associated habitat in a montane stream in Yosemite National Park, Sierra Nevada Mountains, CA, USA. We sampled the macroinvertebrate assemblage via Surber sampling, recorded detailed measurements of bed topography and flow, and coupled a two-dimensional hydrodynamic model with macroinvertebrate indicators to assess habitat across a range of low flows in 2010 and representative past years. We also made zero flow approximations to assess response of fauna to extreme conditions. The fauna of this montane reach had a higher percentage of Ephemeroptera, Plecoptera, and Trichoptera (%EPT) than might be expected given the relatively low faunal diversity of the study reach. The modeled responses of wetted area and area-weighted macroinvertebrate metrics to decreasing discharge indicated precipitous declines in metrics as flows approached zero. Changes in area-weighted metrics closely approximated patterns observed for wetted area, i.e., area-weighted invertebrate metrics contributed relatively little additional information above that yielded by wetted area alone. Loss of habitat area in this montane stream appears to be a greater threat than reductions in velocity and depth or changes in substrate, and the modeled patterns observed across years support this conclusion. Our models suggest that step function losses of wetted area may begin when discharge in the Merced falls to 0.02 m3/s; proportionally reducing diversions when this threshold is reached will likely reduce impacts in low flow years.
Fast generation of macro basis functions for LEGO through the adaptive cross approximation
Lancellotti, V.
2015-01-01
We present a method for the fast generation of macro basis functions in the context of the linear embedding via Green's operators approach (LEGO) which is a domain decomposition technique based on the combination of electromagnetic bricks in turn described by means of scattering operators. We show
Approximation of Mixed-Type Functional Equations in Menger PN-Spaces
M. Eshaghi Gordji
2012-01-01
Let X and Y be vector spaces. We show that a function f:X→Y with f(0)=0 satisfies Δf(x1,…,xn)=0 for all x1,…,xn∈X, if and only if there exist functions C:X×X×X→Y, B:X×X→Y and A:X→Y such that f(x)=C(x,x,x)+B(x,x)+A(x) for all x∈X, where the function C is symmetric for each fixed variable and additive when two variables are fixed, B is symmetric bi-additive, A is additive, and Δf(x1,…,xn) = ∑_{k=2}^{n} (∑_{i1=2}^{k} ∑_{i2=i1+1}^{k+1} ⋯ ∑_{i_{n−k+1}=i_{n−k}+1}^{n}) f(∑_{i=1, i≠i1,…,i_{n−k+1}}^{n} x_i − ∑_{r=1}^{n−k+1} x_{i_r}) + f(∑_{i=1}^{n} x_i) − 2^{n−2} ∑_{i=2}^{n} (f(x1+xi) + f(x1−xi)) + 2^{n−1}(n−2) f(x1) (n∈ℕ, n≥3) for all x1,…,xn∈X. Furthermore, we solve the stability problem for a given function f satisfying Δf(x1,…,xn)=0 in the Menger probabilistic normed spaces.
Kryven, I.; Röblitz, S; Schütte, C.
2015-01-01
Background: The chemical master equation is the fundamental equation of stochastic chemical kinetics. This differential-difference equation describes temporal evolution of the probability density function for states of a chemical system. A state of the system, usually encoded as a vector, represents
Evaluation of stochastic differential equation approximation of ion channel gating models.
Bruce, Ian C
2009-04-01
Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.
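The approximation under study can be sketched for a single potassium-type gating variable at a fixed voltage. The rates below are illustrative, not Hodgkin–Huxley fits; the point is the uncorrelated, zero-mean Gaussian noise term whose adequacy the study questions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical voltage-clamped opening/closing rates for one gating
# particle; illustrative values, not fitted to any channel data.
alpha, beta = 0.1, 0.2
N = 1000            # number of channels (sets the noise magnitude)
dt, steps = 0.01, 20000

# Fox-Lu style Langevin equation, Euler-Maruyama discretization: the
# gating fraction n receives additive zero-mean Gaussian noise with
# state-dependent variance scaled by 1/N.
n = alpha / (alpha + beta)       # start at the deterministic steady state
trace = np.empty(steps)
for k in range(steps):
    drift = alpha * (1.0 - n) - beta * n
    sigma = np.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / N)
    n += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    n = min(max(n, 0.0), 1.0)    # clip to [0, 1], as in practical codes
    trace[k] = n

print(trace.mean())   # fluctuates around alpha/(alpha+beta) = 1/3
```

An "exact" Markov simulation would instead track the integer channel-state populations; the discrepancy between the two is precisely what the abstract's analysis quantifies.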
New second order Mumford-Shah model based on Γ-convergence approximation for image processing
Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li
2016-05-01
In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneously image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
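The shrinkage machinery mentioned above can be illustrated on the simplest related problem: split Bregman for first-order TV denoising of a 1D signal (a reduced stand-in for the MSTGV model; all parameters are illustrative):

```python
import numpy as np

def soft_threshold(v, lam):
    # Generalized soft thresholding (shrinkage): move |v| toward 0 by lam.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def tv_denoise_split_bregman(f, mu=5.0, lam=5.0, n_iter=200):
    # Minimize ||D u||_1 + (mu/2)||u - f||^2 via split Bregman:
    # auxiliary variable d ~ D u, Bregman variable b, alternating steps.
    n = len(f)
    D = np.zeros((n - 1, n))                  # forward-difference operator
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0
    A = mu * np.eye(n) + lam * D.T @ D        # u-subproblem system matrix
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        d = soft_threshold(D @ u + b, 1.0 / lam)    # shrinkage step
        b += D @ u - d
    return u

# Demo on a noisy step signal.
n = 100
clean = np.where(np.arange(n) < n // 2, 0.0, 1.0)
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(n)
u_hat = tv_denoise_split_bregman(noisy)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((u_hat - clean) ** 2)
print(mse_noisy, mse_denoised)   # TV denoising lowers the error
```

The MSTGV model replaces the first-order TV term with second-order TGV and adds the Mumford–Shah edge set, but the alternating solve-then-shrink structure is the same.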
Kuznetsov, Alexander M.; Medvedev, Igor G.
2006-01-01
Effects of deviation from the Born-Oppenheimer approximation (BOA) on the non-adiabatic transition probability for the transfer of a quantum particle in condensed media are studied within an exactly solvable model. The particle and the medium are modeled by a set of harmonic oscillators. The dynamic interaction of the particle with a single local mode is treated explicitly without the use of BOA. Two particular situations (symmetric and non-symmetric systems) are considered. It is shown that the difference between the exact solution and the true BOA is negligibly small at realistic parameters of the model. However, the exact results differ considerably from those of the crude Condon approximation (CCA) which is usually considered in the literature as a reference point for BOA (Marcus-Hush-Dogonadze formula). It is shown that the exact rate constant can be smaller (symmetric system) or larger (non-symmetric one) than that obtained in CCA. The non-Condon effects are also studied
Cruz-García, A.; Muné, P; Govea-Alcaide, E.
2008-01-01
This paper studies the transport properties of anisotropic polycrystalline superconductors. The presence of a certain degree of grain orientation in polycrystalline (Bi,Pb)2Sr2Ca2Cu3O10+δ superconductors is modeled by introducing an orientation probability, the gamma factor. In addition, the model includes the concentration c, which characterizes the contribution of porosity to the decrease in the conductivity of the crystal. Pores and grains are assumed to be flattened ellipsoids of similar dimensions, and the conductivity values of the grains in each direction are taken into account. The calculation is based on the application of a generalization of the effective medium approximation to the study of heterogeneous media, known as the coherent potential approximation (CPA). The results are compared with an empirical model developed recently for YBa2Cu3O7−δ (YBCO) samples, which broadens its use and applies it to ceramic superconductors in general. (author)
Bilinear Approximate Model-Based Robust Lyapunov Control for Parabolic Distributed Collectors
Elmetennani, Shahrazed
2016-11-09
This brief addresses the control problem of distributed parabolic solar collectors in order to maintain the field outlet temperature around a desired level. The objective is to design an efficient controller to force the outlet fluid temperature to track a set reference despite the unpredictable varying working conditions. In this brief, a bilinear model-based robust Lyapunov control is proposed to achieve the control objectives with robustness to the environmental changes. The bilinear model is a reduced order approximate representation of the solar collector, which is derived from the hyperbolic distributed equation describing the heat transport dynamics by means of a dynamical Gaussian interpolation. Using the bilinear approximate model, a robust control strategy is designed applying Lyapunov stability theory combined with a phenomenological representation of the system in order to stabilize the tracking error. On the basis of the error analysis, simulation results show good performance of the proposed controller, in terms of tracking accuracy and convergence time, with limited measurement even under unfavorable working conditions. Furthermore, the presented work is of interest for a large category of dynamical systems knowing that the solar collector is representative of physical systems involving transport phenomena constrained by unknown external disturbances.
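The control design can be illustrated on a scalar toy system. The sketch below uses a hypothetical bilinear model x' = a·x + b·u·x + w with made-up coefficients a, b, w and gain k, not the paper's collector model, and chooses u so that the Lyapunov function V = e²/2 of the tracking error decays:

```python
# Toy scalar bilinear model x' = a*x + b*u*x + w standing in for the
# reduced solar-collector dynamics; a, b, w, k, r are illustrative values.
a, b, w = -0.1, 0.5, 1.0
k = 2.0                 # Lyapunov gain
r = 2.0                 # constant set-point (r' = 0)
dt, steps = 0.001, 10000

x = 1.0
for _ in range(steps):
    e = x - r
    # Choosing u so that e' = -k*e makes V = e^2/2 decrease (V' = -k*e^2),
    # provided b*x stays away from zero along the trajectory.
    u = (-a * x - w - k * e) / (b * x)
    x += (a * x + b * u * x + w) * dt

print(x)   # converges to the set-point r = 2.0
```

The paper's controller must additionally cope with the distributed (PDE) nature of the plant and unknown disturbances, which this scalar sketch ignores.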
Relaxing the Small Particle Approximation for Dust-grain opacities in Carbon-star Wind Models
Mattsson, Lars; Höfner, Susanne
2010-01-01
We have computed wind models with time-dependent dust formation and grain-size dependent opacities, where (1) the problem is simplified by assuming a fixed dust-grain size, and where (2) the radiation pressure efficiency is approximated using grain sizes based on various means of the actual grain size distribution. It is shown that in critical cases, the effect of grain sizes can be significant. For well-developed winds, however, the effects on the mass-loss rate and the wind speed are small.
Pavelková, Lenka; Jirsa, Ladislav
2017-01-01
Roč. 31, č. 8 (2017), s. 1184-1192 ISSN 0890-6327 R&D Projects: GA MŠk 7D12004 Institutional support: RVO:67985556 Keywords : approximate parameter estimation * ARX model * Bayesian estimation * bounded noise * Kullback-Leibler divergence * parallelotope Subject RIV: BC - Control Systems Theory OBOR OECD: Computer sciences, information science, bioinformathics (hardware development to be 2.2, social aspect to be 5.8) Impact factor: 1.708, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/pavelkova-0472081.pdf
Studies on a one-dimensional model for the spontaneous emission in the semiclassical approximation
Crestana, S.
1983-01-01
Some generalizations are made on the spontaneous emission by a plane of excited atoms, described by a two-level atom model, in the Δl=1, Δm=1 transition and using the semiclassical radiation approximation, both discussed in the text. Initially, the radiation rate of an infinite plane of excited atoms is investigated using the Δl=0, Δm=0 transition. It is shown that a limiting solution can be observed, depending on the coupling between field and matter. (author)
Averaging principle for second-order approximation of heterogeneous models with homogeneous models.
Fibich, Gadi; Gavious, Arieh; Solan, Eilon
2012-11-27
Typically, models with a heterogeneous property are considerably harder to analyze than the corresponding homogeneous models, in which the heterogeneous property is replaced by its average value. In this study we show that any outcome of a heterogeneous model that satisfies the two properties of differentiability and symmetry is O(ε²) equivalent to the outcome of the corresponding homogeneous model, where ε is the level of heterogeneity. We then use this averaging principle to obtain new results in queuing theory, game theory (auctions), and social networks (marketing).
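The O(ε²) equivalence can be checked numerically on a queueing-style outcome. Below, W(μ) = 1/(μ − λ) stands in for a model outcome, and the heterogeneity is a symmetric two-point spread (a constructed illustration, not an example taken from the paper):

```python
# Illustration of the averaging principle with an M/M/1-style outcome:
# W(mu) = 1/(mu - lam) is the mean sojourn time at service rate mu.
lam = 1.0
W = lambda mu: 1.0 / (mu - lam)

def heterogeneity_gap(mu_bar, eps):
    # Symmetric two-point heterogeneity: half the servers at mu_bar + eps,
    # half at mu_bar - eps, versus the homogeneous model at mu_bar.
    hetero = 0.5 * (W(mu_bar + eps) + W(mu_bar - eps))
    return hetero - W(mu_bar)

g1 = heterogeneity_gap(2.0, 0.01)
g2 = heterogeneity_gap(2.0, 0.02)
print(g2 / g1)   # close to 4: doubling eps quadruples the gap, i.e. O(eps^2)
```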
De Backer, A; Sand, A; Ortiz, C J; Domain, C; Olsson, P; Berthod, E; Becquart, C S
2016-01-01
The damage produced by primary knock-on atoms (PKA) in W has been investigated from the threshold displacement energy (TDE) where it produces one self interstitial atom–vacancy pair to larger energies, up to 100 keV, where a large molten volume is formed. The TDE has been determined in different crystal directions using the Born–Oppenheimer density functional molecular dynamics (DFT-MD). A significant difference has been observed without and with the semi-core electrons. Classical MD has been used with two different empirical potentials characterized as ‘soft’ and ‘hard’ to obtain statistics on TDEs. Cascades of larger energy have been calculated, with these potentials, using a model that accounts for electronic losses (Sand et al 2013 Europhys. Lett. 103 46003). Two other sets of cascades have been produced using the binary collision approximation (BCA): a Monte Carlo BCA using SDTrimSP (Eckstein et al 2011 SDTrimSP: Version 5.00. Report IPP 12/8) (similar to SRIM www.srim.org) and MARLOWE (RSICC Home Page. (https://rsicc.ornl.gov/codes/psr/psr1/psr-137.html) (accessed May, 2014)). The comparison of these sets of cascades gave a recombination distance equal to 12 Å, which is significantly larger than the one we reported in Hou et al (2010 J. Nucl. Mater. 403 89) because, here, we used bulk cascades rather than surface cascades which produce more defects (Stoller 2002 J. Nucl. Mater. 307 935, Nordlund et al 1999 Nature 398 49). Investigations on the defect clustering aspect showed that the difference between BCA and MD cascades is considerably reduced after the annealing of the cascade debris at 473 K using our Object Kinetic Monte Carlo model, LAKIMOCA (Domain et al 2004 J. Nucl. Mater. 335 121). (paper)
Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support
Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar
This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.
Yongliang Wang
2015-01-01
Tilting pad bearings offer unique dynamic stability, enabling successful deployment of high-speed rotating machinery. The model of dynamic stiffness, damping, and added mass coefficients is often used for rotordynamic analyses, but this method does not suffice to describe the dynamic behaviour due to the nonlinear effects of the oil film force under larger shaft vibration or vertical rotor conditions. The objective of this paper is to present a nonlinear oil film force model for finite length tilting pad journal bearings. An approximate analytic oil film force model was established by analysing the dynamic characteristics of the oil film of a single pad journal bearing using the variable separation method under the dynamic π oil film boundary condition, and an oil film force model of a four-tilting-pad journal bearing was established by using the pad assembly technique and considering the pad tilting angle. The validity of the established model was proved by analysing the distribution of oil film pressure and the locus of the journal centre for tilting pad journal bearings, and by comparing the model established in this paper with one established using the finite difference method.
Frolov, Maxim; Chistiakova, Olga
2017-06-01
This paper is devoted to a numerical justification of the recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
Balenzategui, J. L.
1999-01-01
A new way of modelling the charge and discharge processes in electrochemical batteries, based on the use of integral equations, is presented. The proposed method models the charge curves by the so-called fractional or cumulative integrals of a certain objective function f(t) that must be sought. The charge figures can be easily fitted by breaking down this objective function into the sum of two different Lorentz-type functions: the first is associated with the charge process itself and the second with the overcharge process. The method allows calculating the starting voltage for overcharge as the intersection between both functions. The curve fitting of this model to different experimental charge curves, using the Marquardt algorithm, has shown very accurate results. In the case of discharge curves, two possible methods for modelling purposes are suggested: either by using the same kind of integral equations, or by simple subtraction of an objective function f(t) from a constant value V_OD. Many other aspects of the study and analysis of this method, in order to improve its results in further developments, are also discussed. (Author) 10 refs
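The construction can be sketched with two Lorentz-type components and illustrative (not fitted) parameters; the overcharge starting point is located at the intersection of the two components, as the abstract describes:

```python
import numpy as np

def lorentz(t, amp, t0, w):
    # Lorentz-type component of the objective function f(t)
    return amp * w**2 / ((t - t0)**2 + w**2)

# Illustrative parameters, not taken from any real battery data:
# one component for the charge process, one for overcharge.
t = np.linspace(0.0, 20.0, 2001)
f_charge = lorentz(t, 1.0, 5.0, 2.0)
f_over = lorentz(t, 0.8, 12.0, 1.5)

# The modelled charge curve is the cumulative (fractional) integral of
# the sum of both components (trapezoidal rule here).
f = f_charge + f_over
curve = np.concatenate(([0.0],
                        np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))

# Overcharge is taken to start where the two components intersect.
mask = (t > 5.0) & (t < 12.0)
t_start = t[mask][np.argmin(np.abs(f_charge[mask] - f_over[mask]))]
print(t_start)   # crossing point between the two peak centres
```

In practice the component amplitudes, centres, and widths would be fitted to measured charge curves, e.g. with the Marquardt algorithm mentioned above.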
Symmetries and modelling functions for diffusion processes
Nikitin, A G; Spichak, S V; Vedula, Yu S; Naumovets, A G
2009-01-01
A constructive approach to the theory of diffusion processes is proposed, which is based on application of both symmetry analysis and the method of modelling functions. An algorithm for construction of the modelling functions is suggested. This algorithm is based on the error function expansion (ERFEX) of experimental concentration profiles. The high-accuracy analytical description of the profiles provided by ERFEX approximation allows a convenient extraction of the concentration dependence of diffusivity from experimental data and prediction of the diffusion process. Our analysis is exemplified by its employment in experimental results obtained for surface diffusion of lithium on the molybdenum (1 1 2) surface precovered with dysprosium. The ERFEX approximation can be directly extended to many other diffusion systems.
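A minimal sketch of an ERFEX-style fit: the concentration profile is expanded in complementary-error-function terms and the coefficients are obtained by linear least squares (the basis widths and synthetic profile below are assumptions for illustration; the original algorithm's details are not reproduced):

```python
import math
import numpy as np

def erfc_vec(z):
    # Vectorized complementary error function using the stdlib math.erfc.
    return np.array([math.erfc(v) for v in z])

# Spatial grid and an ERFEX basis: erfc profiles with a few preset
# widths (illustrative choices).
x = np.linspace(-5.0, 5.0, 201)
widths = [0.5, 1.0, 2.0]
basis = np.column_stack([0.5 * erfc_vec(x / w) for w in widths])

# Synthetic "measured" concentration profile built from the same basis,
# so an exact expansion exists.
true_amps = np.array([0.2, 0.5, 0.3])
profile = basis @ true_amps

# ERFEX: recover the expansion coefficients by linear least squares.
amps, *_ = np.linalg.lstsq(basis, profile, rcond=None)
print(np.max(np.abs(basis @ amps - profile)))   # essentially zero residual
```

With the analytical expansion in hand, the concentration dependence of the diffusivity can then be extracted from the smooth fitted profile rather than from noisy raw data.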
Implicit approximate Riemann solver for two fluid two phase flow models
Raymond, P.; Toumi, I.; Kumbaro, A.
1993-01-01
This paper is devoted to the description of new numerical methods developed for the numerical treatment of two phase flow models with two velocity fields, which are now widely used in nuclear engineering for design or safety calculations. These are finite volume numerical methods based on the use of approximate Riemann solver concepts to define convective fluxes versus mean cell quantities. The first part of the communication describes the numerical method for a three dimensional drift flux model and the extensions performed to make the numerical scheme implicit and to obtain fast running calculations of steady states. Such a scheme is now implemented in the FLICA-4 computer code devoted to 3-D steady state and transient core computations. We present results obtained for a steady state flow with rod bow effect evaluation and for a Steam Line Break calculation where the 3-D core thermal computation was coupled with a 3-D kinetic calculation and a thermal-hydraulic transient calculation for the four loops of a Pressurized Water Reactor. The second part of the paper details the development of an equivalent numerical method, based on an approximate Riemann solver, for a two fluid model with two momentum balance equations for the liquid and the gas phases. The main difficulty for these models is due to the existence of differential modelling terms, such as added mass effects or interfacial pressure terms, which make the model hyperbolic. These terms do not permit writing the balance equation system in conservative form, and the classical theory of discontinuity propagation for non-linear systems cannot be applied. Meanwhile, the theory of non-conservative products allows the study of discontinuity propagation for a non-conservative model, and this permits the construction of a numerical scheme for the two fluid two phase flow model. These different points will be detailed in that section which will be illustrated by
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca
2013-01-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in detail, the number of random realizations of the solution required for a successful approximation grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
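The alternating least-squares construction can be sketched in the simplest rank-one, three-variable case. The tensor below is exactly separable by construction, so an exact separated representation exists; the paper's regularization and error indicator are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# A separable (rank-1) model response sampled on a tensor grid.
g1 = np.sin(np.linspace(0, 1, 8))
g2 = np.cos(np.linspace(0, 1, 9))
g3 = 1.0 + np.linspace(0, 1, 10) ** 2
T = np.einsum('i,j,k->ijk', g1, g2, g3)

# Alternating least squares for one separated term: each factor update
# is a closed-form least-squares solve with the other factors fixed.
u, v, w = rng.random(8), rng.random(9), rng.random(10)
for _ in range(10):
    u = np.einsum('ijk,j,k->i', T, v, w) / ((v @ v) * (w @ w))
    v = np.einsum('ijk,i,k->j', T, u, w) / ((u @ u) * (w @ w))
    w = np.einsum('ijk,i,j->k', T, u, v) / ((u @ u) * (v @ v))

rel_err = (np.linalg.norm(np.einsum('i,j,k->ijk', u, v, w) - T)
           / np.linalg.norm(T))
print(rel_err)   # essentially zero for an exactly separable response
```

For genuinely high-dimensional, non-separable responses one sums several such terms and adds regularization, which is where the linear sample-complexity result of the paper applies.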
Structure of Even-Even 218-230 Ra Isotopes within the Interacting Boson Approximation Model
Diab S. M.
2008-01-01
A good description of the excited positive and negative parity states of radium nuclei (Z=88, N=130–142) is achieved using the interacting boson approximation model (IBA-1). The potential energy surfaces, energy levels, parity shift, electromagnetic transition rates B(E1), B(E2) and electric monopole strength X(E0/E2) are calculated for each nucleus. The analysis of the eigenvalues of the model Hamiltonian reveals the presence of an interaction between the positive and negative parity bands. Due to this interaction, the ΔI = 1 staggering effect between the energies of the ground state band and the negative parity state band is produced, including beat patterns.
Wei Li
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. The finite element calculation can therefore be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, at excessive time cost, is avoided. XFEM thus facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of XFEM for optical imaging.
Eriksen, Janus Juul; Solanko, Lukasz Michal; Nåbo, Lina J.
2014-01-01
2) wave function coupled to PCM, we introduce dynamical PCM solvent effects only in the Random Phase Approximation (RPA) part of the SOPPA response equations, while the static solvent contribution is kept in both the RPA terms as well as in the higher order correlation matrix components of the SOPPA response equations. By dynamic terms, we refer to contributions that describe a change in environmental polarization which, in turn, reflects a change in the core molecular charge distribution upon an electronic excitation. This new combination of methods is termed PCM-SOPPA/RPA. We apply this newly defined method to the challenging cases of solvent effects on the lowest and intense electronic transitions in o-, m- and p-nitroaniline and o-, m- and p-nitrophenol and compare the performance of PCM-SOPPA/RPA with more conventional approaches. Compared to calculations based on time-dependent density...
Default risk modeling beyond the first-passage approximation: Extended Black-Cox model
Katz, Yuri A.; Shokhirev, Nikolai V.
2010-07-01
We develop a generalization of the Black-Cox structural model of default risk. The extended model captures uncertainty related to a firm's ability to avoid default even if the company's liabilities momentarily exceed its assets. Diffusion in a linear potential with the radiation boundary condition is used to mimic a company's default process. The exact solution of the corresponding Fokker-Planck equation allows for derivation of analytical expressions for the cumulative probability of default and the relevant hazard rate. The closed formulas obtained fit the historical data on global corporate defaults well and demonstrate the split behavior of credit spreads for bonds of companies in different categories of speculative-grade ratings with varying time to maturity. Introduction of a finite rate of default at the boundary improves valuation of credit risk for short time horizons, which is the key advantage of the proposed model. We also consider the influence of uncertainty in the initial distance to the default barrier on the outcome of the model and demonstrate that this additional source of incomplete information may be responsible for nonzero credit spreads for bonds with very short time to maturity.
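A crude Monte Carlo sketch of the "finite default rate at the boundary" idea, here modelled as a default intensity active while the firm value sits at or below the barrier (an occupation-time variant, not the paper's exact radiation-boundary solution; all parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-leverage X starts above the default barrier at 0 and diffuses
# with drift; parameters below are illustrative, not calibrated.
x0, mu, sigma = 0.5, 0.0, 0.3
T, steps, paths = 5.0, 500, 4000
dt = T / steps

X = x0 + np.cumsum(
    mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((paths, steps)),
    axis=1)

# Time each path spends at or below the barrier.
occupation = (X <= 0.0).sum(axis=1) * dt

def default_prob(rate):
    # Finite default intensity `rate` while below the barrier:
    # P(default) = 1 - E[exp(-rate * occupation time)].
    return np.mean(1.0 - np.exp(-rate * occupation))

p_slow, p_fast = default_prob(0.5), default_prob(5.0)
print(p_slow, p_fast)   # a larger boundary default rate means more defaults
```

As the rate tends to infinity this recovers the classical first-passage (perfectly absorbing) Black-Cox limit, while finite rates soften default near the barrier, which is the mechanism behind the improved short-horizon spreads.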
Freeze, G.A.; Larson, K.W. [INTERA, Inc., Albuquerque, NM (United States); Davies, P.B. [Sandia National Labs., Albuquerque, NM (United States)
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior due to its theoretical basis for modeling salt deformation as a viscous process. It is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because its results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation in the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.
Irvine, Michael A; Hollingsworth, T Déirdre
2018-05-26
Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open-access library, with examples to aid researchers in rapidly fitting models to data. This demonstrates that an adaptive ABC scheme with a general summary statistic and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community.
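An adaptive-tolerance ABC loop of the general kind described above can be sketched as follows. This is a hypothetical toy setup (Poisson model, mean as summary statistic, uniform prior, and all tuning constants are illustrative assumptions), not the authors' released library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: counts from a Poisson process with unknown rate (true rate 4.0).
observed = rng.poisson(lam=4.0, size=200)
obs_summary = observed.mean()

def simulate(rate):
    """Forward-simulate the model and return the same summary statistic."""
    return rng.poisson(lam=rate, size=200).mean()

# Adaptive scheme: each generation the tolerance is set to a quantile of the
# previous generation's accepted distances, so it shrinks automatically
# without hand-tuning a fixed epsilon.
n_particles, n_generations, quantile = 500, 5, 0.5
particles = rng.uniform(0.1, 10.0, size=n_particles)        # prior draws
distances = np.array([abs(simulate(p) - obs_summary) for p in particles])

for _ in range(n_generations):
    tol = np.quantile(distances, quantile)                  # adaptive tolerance
    keep = distances <= tol
    particles, distances = particles[keep], distances[keep]
    # Rejuvenate: perturb survivors and re-simulate to refill the population.
    new = rng.choice(particles, size=n_particles) + rng.normal(0, 0.1, n_particles)
    new = np.clip(new, 0.1, 10.0)
    distances = np.array([abs(simulate(p) - obs_summary) for p in new])
    particles = new

print(round(particles.mean(), 2))  # approximate posterior mean, near the true rate
```

The quantile-based tolerance is one common choice for such schemes; kernel-density proposals, as in the paper, would replace the simple Gaussian perturbation used here.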
Jiameng Wu
2018-01-01
The infinite-depth free-surface Green function (GF) and its high-order derivatives for diffraction and radiation of water waves are considered. Second-order derivatives in particular are essential requirements in the high-order panel method. In this paper, concerning the classical representation, composed of a semi-infinite integral involving a Bessel function and a Cauchy singularity, not only the GF and its first-order derivatives but also its second-order derivatives are derived from four kinds of analytical series expansion and a refined division of the whole calculation domain. The approximations of special functions, particularly the hypergeometric function, and the algorithmic applicability to the different subdomains are implemented. As a result, the computation accuracy can reach 10^-9 over the whole domain, compared with conventional methods based on direct numerical integration. Furthermore, the numerical efficiency is almost equivalent to that of the classical method.
Yanqi Hao
2015-07-01
Alternative splicing acts on transcripts from almost all human multi-exon genes. Notwithstanding its ubiquity, fundamental ramifications of splicing on protein expression remain unresolved. The number and identity of spliced transcripts that form stably folded proteins remain sources of considerable debate, due largely to the low coverage of experimental methods and the resulting absence of negative data. We circumvent this issue by developing a semi-supervised learning algorithm, positive unlabeled learning for splicing elucidation (PULSE; http://www.kimlab.org/software/pulse), which uses 48 features spanning various categories. We validated its accuracy on sets of bona fide protein isoforms and directly on mass spectrometry (MS) spectra for an overall AU-ROC of 0.85. We predict that around 32% of “exon skipping” alternative splicing events produce stable proteins, suggesting that the process engenders a significant number of previously uncharacterized proteins. We also provide insights into the distribution of positive isoforms in various functional classes and into the structural effects of alternative splicing.
Tyynelae, Jani; Nousiainen, Timo; Goeke, Sabine; Muinonen, Karri
2009-01-01
We study the applicability of the discrete-dipole approximation by modeling centimeter (C-band) radar echoes for hydrometeors, and compare the results to exact theories. We use ice and water particles of various shapes with varying water content to investigate how the backscattering, extinction, and absorption cross sections change as a function of particle radius. We also compute radar parameters, such as the differential reflectivity, the linear depolarization ratio, and the copolarized correlation coefficient. We find that using the discrete-dipole approximation (DDA) to model pure ice and pure water particles at the C-band is considerably more accurate than modeling particles containing both ice and water. For coated particles, a large grid size is recommended so that the coating is modeled adequately. We also find that the absorption cross section is significantly less accurate than the scattering and backscattering cross sections. The accuracy of DDA can be increased by increasing the number of dipoles, but also by using the filtered coupled dipole option for the polarizability, which halved the relative errors in the cross sections.
Sergeev, A.; Alharbi, F. H.; Jovanovic, R.; Kais, S.
2016-04-01
The gradient expansion of the kinetic energy density functional, when applied to atoms or finite systems, usually grossly overestimates the energy in the fourth order and generally diverges in the sixth order. We avoid the divergence of the integral by replacing the asymptotic series including the sixth order term in the integrand by a rational function. Padé approximants show moderate improvements in accuracy in comparison with partial sums of the series. The results are discussed for atoms and Hooke’s law model for two-electron atoms.
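Replacing a divergent partial sum by a rational function can be illustrated with a generic [m/n] Padé construction from Taylor coefficients. The series used below (ln(1+x), evaluated outside its radius of convergence) and the chosen degrees are illustrative stand-ins, not the kinetic-energy functional of the paper:

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator coeffs a (ascending, len m+1) and denominator b (b[0]=1)."""
    c = np.asarray(c, dtype=float)
    # Denominator: solve sum_{j=1..n} b_j c_{k-j} = -c_k for k = m+1..m+n.
    A = np.array([[c[k - j] if k - j >= 0 else 0.0 for j in range(1, n + 1)]
                  for k in range(m + 1, m + n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator follows by matching the low-order terms of the product.
    a = np.array([sum(b[j] * c[i - j] for j in range(0, min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# Taylor coefficients of ln(1+x): 0, 1, -1/2, 1/3, -1/4, 1/5 (radius 1).
c = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 6)]
a, b = pade(c, 2, 3)

x = 3.0  # well outside the radius of convergence
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
partial = np.polyval(np.array(c)[::-1], x)
print(approx, np.log(4.0), partial)  # Pade stays close to ln 4; partial sum diverges
```

The same mechanism, a rational function agreeing with the truncated series term by term, is what tames the divergent sixth-order gradient expansion in the abstract above.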
Yang Pei; Li Zhibin; Chen Yong
2010-01-01
In this paper, the short-wave model equations are investigated, which are associated with the Camassa-Holm (CH) and Degasperis-Procesi (DP) shallow-water wave equations. Firstly, by means of a transformation of the independent variables and the travelling wave transformation, the partial differential equation is reduced to an ordinary differential equation. Secondly, the equation is solved by the homotopy analysis method. Lastly, by transforming back to the original independent variables, the solution of the original partial differential equation is obtained. The two types of solutions of the short-wave models are obtained in parametric form: a one-cusp soliton for the CH equation and a one-loop soliton for the DP equation. The approximate analytic solutions, expressed by a series of exponential functions, agree well with the exact solutions. This demonstrates the validity and great potential of the homotopy analysis method for complicated nonlinear solitary wave problems.
Approximate critical surface of the bond-mixed square-lattice Ising model
Levy, S.V.F.; Tsallis, C.; Curado, E.M.F.
1979-09-01
The critical surface of the quenched bond-mixed square-lattice spin-1/2 first-neighbour-interaction ferromagnetic Ising model (with exchange interactions J_1 and J_2) has been investigated. Through renormalization-group and heuristic procedures, a very accurate approximate numerical proposal for all points of this surface (error below 3×10^-4 in the variables t_i = tanh(J_i/k_B T)) is presented. This proposal simultaneously satisfies all the available exact results concerning the surface, namely p_c = 1/2, t_c = √2 - 1, both limiting slopes at these points, and t_2 = (1 - t_1)/(1 + t_1) for p = 1/2. Furthermore, an analytic approximation (namely (1 - p) ln(1 + t_1) + p ln(1 + t_2) = (1/2) ln 2) is also proposed. With respect to the available exact results, it fails only in reproducing one of the two limiting slopes, where there is an error of 1% in the derivative; these facts result in an estimated error of less than 10^-3 (in the t-variables) for any point on the surface. (Author)
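Assuming the standard notation t_i = tanh(J_i/k_B T), the proposed analytic form can be checked numerically against the exact constraints quoted in the abstract, the pure-lattice Onsager point and the p = 1/2 duality line:

```python
import numpy as np

def surface(p, t1, t2):
    """Left-hand side of the proposed analytic critical surface:
    (1 - p) ln(1 + t1) + p ln(1 + t2), equal to (1/2) ln 2 on the surface."""
    return (1 - p) * np.log(1 + t1) + p * np.log(1 + t2)

half_ln2 = 0.5 * np.log(2.0)

# Pure-lattice point: t_c = sqrt(2) - 1 gives ln(1 + t_c) = ln sqrt(2) exactly.
tc = np.sqrt(2.0) - 1.0
assert abs(surface(1.0, tc, tc) - half_ln2) < 1e-12

# Duality line: for p = 1/2 the exact relation t2 = (1 - t1)/(1 + t1)
# satisfies the proposed equation identically, since the (1 + t) factors cancel.
for t1 in np.linspace(0.05, 0.9, 10):
    t2 = (1 - t1) / (1 + t1)
    assert abs(surface(0.5, t1, t2) - half_ln2) < 1e-12

print("exact constraints satisfied")
```

This confirms two of the exact properties algebraically; the limiting slopes, where the abstract reports a 1% derivative error, are not reproduced exactly.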
When is the Anelastic Approximation a Valid Model for Compressible Convection?
Alboussiere, T.; Curbelo, J.; Labrosse, S.; Ricard, Y. R.; Dubuffet, F.
2017-12-01
Compressible convection is ubiquitous in large natural systems such as planetary atmospheres and stellar and planetary interiors. Its modelling is notoriously more difficult than the case where the Boussinesq approximation applies. One reason for that difficulty was put forward by Ogura and Phillips (1961): the compressible equations generate sound waves with very short time scales which need to be resolved. This is why they introduced an anelastic model, based on an expansion of the solution around an isentropic hydrostatic profile. How accurate is that anelastic model? What are the conditions for its validity? To answer these questions, we have developed a numerical model for the full set of compressible equations and compared its solutions with those of the corresponding anelastic model. We considered a simple rectangular 2D Rayleigh-Bénard configuration and restricted the analysis to infinite Prandtl numbers. This choice is valid for convection in the mantles of rocky planets and, more importantly, leads to a zero Mach number, removing the question of the interference of acoustic waves with convection. In that simplified context, we used the entropy balances (that of the full set of equations and that of the anelastic model) to investigate the differences between exact and anelastic solutions. We found that the validity of the anelastic model is dictated by two conditions: first, as expected, the superadiabatic temperature difference must be small compared with the adiabatic temperature difference, ε = ΔT_SA / ΔT_a ≪ 1; and second, the product of ε with the Nusselt number must also be small.
Duan, Jinli; Jiao, Feng; Zhang, Qishan; Lin, Zhibin
2017-08-06
The sharp increase of the aging population has raised the pressure on the currently limited medical resources in China. To better allocate resources, a more accurate prediction of medical service demand is urgently needed. This study aims to improve the prediction of medical services demand in China. To achieve this aim, the study incorporates a Taylor approximation into the Grey Markov Chain model and develops a new model named Taylor-Markov Chain GM(1,1) (T-MCGM(1,1)). The new model has been tested using historical data on medical services for the treatment of diabetes, heart disease, and cerebrovascular disease from 1997 to 2015 in China. The model provides a prediction of medical service demand for these three types of disease up to 2022. The results reveal an enormous growth of urban medical service demand in the future. The findings provide practical implications for the Health Administrative Department to allocate medical resources and help hospitals to manage investments in medical facilities.
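The classical grey GM(1,1) forecaster underlying the T-MCGM(1,1) hybrid can be sketched as follows. The demand series below is invented for illustration (roughly 5% annual growth), not the study's data, and the Taylor and Markov-chain correction stages are omitted:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """Classical grey GM(1,1) forecast: fit the whitened equation
    dx1/dt + a*x1 = b to the accumulated series and difference back."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])               # consecutive-neighbour means
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development coeffs
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # solution of whitened eqn
    return np.diff(x1_hat, prepend=0.0)         # back to the original scale

# Illustrative (not real) demand series growing roughly 5% per period.
series = np.array([100.0, 105.0, 110.3, 115.8, 121.6])
pred = gm11_forecast(series, horizon=2)
print(np.round(pred[-2:], 1))  # next two projected values, continuing the trend
```

GM(1,1) fits near-exponential trends well; the Markov-chain step in the paper then corrects the residuals between such a fit and the observed fluctuations.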
Approximate symmetries in atomic nuclei from a large-scale shell-model perspective
Launey, K. D.; Draayer, J. P.; Dytrych, T.; Sun, G.-H.; Dong, S.-H.
2015-05-01
In this paper, we review recent developments that aim to achieve further understanding of the structure of atomic nuclei, by capitalizing on exact symmetries as well as approximate symmetries found to dominate low-lying nuclear states. The findings confirm the essential role played by the Sp(3, ℝ) symplectic symmetry to inform the interaction and the relevant model spaces in nuclear modeling. The significance of the Sp(3, ℝ) symmetry for a description of a quantum system of strongly interacting particles naturally emerges from the physical relevance of its generators, which directly relate to particle momentum and position coordinates, and represent important observables, such as the many-particle kinetic energy, the monopole operator, the quadrupole moment and the angular momentum. We show that it is imperative that shell-model spaces be expanded well beyond the current limits to accommodate particle excitations that appear critical to enhanced collectivity in heavier systems and to highly-deformed spatial structures, exemplified by the second 0+ state in 12C (the challenging Hoyle state) and 8Be. While such states are presently inaccessible by large-scale no-core shell models, symmetry-based considerations are found to be essential.
Frishman, A.; Hoffman, D.K.; Kouri, D.J.
1997-01-01
We report a distributed approximating functional (DAF) fit of the ab initio potential-energy data of Liu [J. Chem. Phys. 58, 1925 (1973)] and Siegbahn and Liu [ibid. 68, 2457 (1978)]. The DAF-fit procedure is based on a variational principle, and is systematic and general. Only two adjustable parameters occur in the DAF, leading to a fit which is both accurate (to the level inherent in the input data; RMS error of 0.2765 kcal/mol) and smooth ("well-tempered" in DAF terminology). In addition, the LSTH surface of Truhlar and Horowitz based on this same data [J. Chem. Phys. 68, 2466 (1978)] is itself approximated using only the values of the LSTH surface at the same grid coordinate points as the ab initio data, and the same DAF parameters. The purpose of this exercise is to demonstrate that the DAF delivers a well-tempered approximation to a known function that closely mimics the true potential-energy surface. As is to be expected, since only roundoff error is present in the LSTH input data, even more significant figures of fitting accuracy are obtained. The RMS error of the DAF fit of the LSTH surface at the input points is 0.0274 kcal/mol, and a smooth fit, accurate to better than 1 cm^-1, can be obtained using more than 287 input data points.
Heßelmann, Andreas
2015-04-14
Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.
A stochastic model for immunological feedback in carcinogenesis analysis and approximations
Dubin, Neil
1976-01-01
Stochastic processes often pose the difficulty that, as soon as a model deviates from the simplest kinds of assumptions, the differential equations obtained for the density and the generating functions become mathematically formidable. Worse still, one is very often led to equations which have no known solution and don't yield to standard analytical methods for differential equations. In the model considered here, one for tumor growth with an immunological response from the normal tissue, a nonlinear term in the transition probability for the death of a tumor cell leads to the above-mentioned complications. Despite the mathematical disadvantages of this nonlinearity, we are able to consider a more sophisticated model biologically. Ultimately, in order to achieve a more realistic representation of a complicated phenomenon, it is necessary to examine mechanisms which allow the model to deviate from the more mathematically tractable linear format. Thus far, stochastic models for tumor growth have almost ex...
Dutta, Aritra
2017-07-02
Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
Interacting-fermion approximation in the two-dimensional ANNNI model
Grynberg, M.D.; Ceva, H.
1990-12-01
We investigate the effect of including domain-wall interactions in the two-dimensional axial next-nearest-neighbor Ising (ANNNI) model. At low temperatures this problem reduces to a one-dimensional system of interacting fermions which can be treated exactly. It is found that the critical boundaries of the low-temperature phases are in good agreement with those obtained using a free-fermion approximation. In contrast with the monotonic behavior derived from the free-fermion approach, the wall density or wave number displays reentrant phenomena when the ratio of the next-nearest-neighbor and nearest-neighbor interactions is greater than one-half. (author). 17 refs, 2 figs
Dutta, Aritra; Li, Xin; Richtarik, Peter
2017-01-01
Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
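The low-rank-plus-sparse decomposition behind the PCP/RPCA family above can be illustrated with a toy, non-robust sketch: stack vectorized frames as columns, take a rank-1 truncated SVD as the static background, and treat the residual as foreground. This is a plain SVD illustration, not the authors' weighted incremental method, and the synthetic "video" is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic video: each column is a vectorized frame = static background
# plus a small, bright, moving foreground patch.
n_pixels, n_frames = 400, 60
background = rng.uniform(0, 1, n_pixels)
frames = np.tile(background[:, None], (1, n_frames))
for t in range(n_frames):
    start = (t * 5) % n_pixels
    frames[start:start + 8, t] += 2.0          # the moving "object"

# Rank-1 approximation of the frame matrix: the dominant singular pair
# captures the static background; the residual isolates the foreground.
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])
foreground = frames - low_rank

est_bg = low_rank.mean(axis=1)
print(float(np.abs(est_bg - background).mean()))  # small background error
```

Robust methods (RPCA, GRASTA, incPCP) replace this least-squares SVD with formulations that are insensitive to the sparse outliers, and incremental variants avoid ever forming the full frame matrix.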
Gauss-Arnoldi quadrature for ⟨(zI - A)^{-1}φ, φ⟩ and rational Padé-type approximation for Markov-type functions
Knizhnerman, L A
2008-01-01
The efficiency of Gauss-Arnoldi quadrature for the calculation of the quantity ⟨(zI - A)^{-1}φ, φ⟩ is studied, where A is a bounded operator in a Hilbert space and φ is a non-trivial vector in this space. A necessary condition and a sufficient condition are found for the efficiency of the quadrature in the case of a normal operator. An example of a non-normal operator for which this quadrature is inefficient is presented. It is shown that Gauss-Arnoldi quadrature is related in certain cases to rational Padé-type approximation (with poles at the Ritz numbers) for functions of Markov type and, in particular, can be used for the localization of the poles of a rational perturbation. Error estimates are found, which can also be used when classical Padé approximation does not work or may not be efficient. Theoretical results and conjectures are illustrated by numerical experiments. Bibliography: 44 titles.
Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki
2018-06-01
A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and the total energy from the local density approximation of density functional theory. The parameters are adjusted computationally so that the result reproduces the band structure and the total energy, and an algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals of several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.
Olague, N.E.; Price, L.L.
1991-01-01
The greater confinement disposal (GCD) project is an ongoing project examining the disposal of orphan wastes in Area 5 of the Nevada Test Site. One of the major tasks for the project is performance assessment. With regard to performance assessment, a preliminary conceptual model for ground-water flow and radionuclide transport to the accessible environment at the GCD facilities has been developed. One of the transport pathways that has been postulated is diffusion of radionuclides in the liquid phase upward to the land surface. This pathway is not usually considered in a performance assessment, but is included in the GCD conceptual model because of the relatively low recharge estimates at the GCD site and the proximity of the waste to the land surface. These low recharge estimates indicate that convective flow downward to the water table may be negligible; thus, diffusion upward to the land surface may then become important. As part of a preliminary performance assessment which considered a base-case scenario and a climate-change scenario, a first approximation for modeling the liquid-diffusion pathway was formulated. The model includes an analytical solution that incorporates both diffusion and radioactive decay. Overall, these results indicate that, despite the configuration of the GCD facilities that establishes the need for considering the liquid-diffusion pathway, the GCD disposal concept appears to be a technically feasible method for disposing of orphan wastes. Future analyses will consist of investigating the underlying assumptions of the liquid-diffusion model, refining the model as necessary, and reducing uncertainty in the input parameters. 11 refs., 6 figs
Approximating a retarded-advanced differential equation that models human phonation
Teodoro, M. Filomena
2017-11-01
In [1, 2, 3] we obtained the numerical solution of a linear mixed-type functional differential equation (MTFDE), introduced initially in [4], considering the autonomous and non-autonomous cases by collocation, least squares, and finite element methods with a B-spline basis set. The present work introduces a numerical scheme using the least squares method (LSM) and Gaussian basis functions to solve numerically a nonlinear mixed-type equation with symmetric delay and advance which models human phonation. The preliminary results are promising: we obtain an accuracy comparable with the previous results.
Single-site approximation for the s-f model of antiferromagnetic semiconductors
Takahashi, Masao; Nolting, Wolfgang
2001-01-01
For the s-f model of an antiferromagnetic semiconductor, the effect of the antiferromagnetic ordering of the localized spins on the conduction-electron state is investigated over a wide range of exchange strengths by combining the effective-medium approach with the Green's function in the 2x2 sublattice Bloch function representation. The band splitting due to the reduced magnetic Brillouin zone occurs below the Neel temperature. There is a marked effect of the thermal fluctuation of the antiferromagnetically ordered localized spins on the conduction electron at energies near the top (bottom) of the lower- (higher-) energy subband.
Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E
2012-03-01
In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood/approximate Bayesian computation (ABC)-Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking.
Extended sudden approximation model for high-energy nucleon removal reactions
Carstoiu, F.; Sauvan, E.; Orr, N.A. [Caen Univ., Lab. de Physique Corpusculaire, Institut des Sciences de la Matiere et du Rayonnement, IN2P3-CNRS ISMRA, 14 (France); Carstoiu, F. [IFIN-HH, Bucharest-Magurele (Romania); Bonaccorso, A. [Istituto Nazionale di Fisica Nucleare, Pisa (Italy)
2004-04-01
A model based on the sudden approximation has been developed to describe high energy single nucleon removal reactions. Within this approach, which takes as its starting point the formalism of Hansen, the nucleon-removal cross section and the full 3-dimensional momentum distributions of the core fragments including absorption, diffraction, Coulomb and nuclear-Coulomb interference amplitudes, have been calculated. The Coulomb breakup has been treated to all orders for the dipole interaction. The model has been compared to experimental data for a range of light, neutron-rich psd-shell nuclei. Good agreement was found for both the inclusive cross sections and momentum distributions. In the case of 17C, comparison is also made with the results of calculations using the transfer-to-the-continuum model. The calculated 3-dimensional momentum distributions exhibit longitudinal and transverse momentum components that are strongly coupled by the reaction for s-wave states, whilst no such effect is apparent for d-waves. Incomplete detection of transverse momenta arising from limited experimental acceptances thus leads to a narrowing of the longitudinal distributions for nuclei with significant s-wave valence neutron configurations, as confirmed by the data. Asymmetries in the longitudinal momentum distributions attributed to diffractive dissociation are also explored. (authors)
A model of spontaneous CP violation and neutrino phenomenology with approximate Lμ-Lτ symmetry
Adhikary, Biswajit
2013-01-01
We introduce a model where CP and a Z_2 symmetry are violated spontaneously through a singlet complex scalar S which acquires a vacuum expectation value with a phase, ⟨S⟩ = V e^{iα}/2, and this is the only source of CP violation in the model. Low-energy CP violation in the leptonic sector is connected to the high-scale phase by three generations of left- and right-handed singlet fermions in the inverse see-saw like structure of the model. We have considered an approximate Lμ-Lτ symmetry to study neutrino phenomenology. Considering the two mass-squared differences and three mixing angles, including non-zero θ_13, within their experimental 3σ limits, we have constrained the Lagrangian parameters for reasonably small values of the Lμ-Lτ symmetry-breaking parameters. We have predicted the three masses, the Dirac phase, and the two Majorana phases. We also evaluate the CP-violating parameter J_CP, the sum of masses, and the effective mass parameter involved in neutrinoless double beta decay. (author)
Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.
2015-04-01
Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times but each run has slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets, limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles, hundreds of runs, for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date within-GCM uncertainty has received little attention in the hydrologic climate change impact literature and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from
Approximate Bayesian computation.
Mikael Sunnåker
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity over the last years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
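The likelihood-free idea described above can be made concrete with a minimal rejection-ABC sketch. Everything here is an illustrative assumption (a Gaussian toy model whose likelihood we pretend is unavailable, the sample mean as summary statistic, a uniform prior, and a fixed tolerance):

```python
import numpy as np

rng = np.random.default_rng(42)

# "Observed" data from a model with unknown mean mu (true value 2.0), known sd 1.
observed = rng.normal(2.0, 1.0, size=100)
s_obs = observed.mean()                    # summary statistic

# Rejection ABC: draw mu from the prior, forward-simulate data, and keep the
# draws whose simulated summary lands within tolerance eps of the observed one.
# The likelihood is never evaluated, only the simulator is run.
n_draws, eps = 20000, 0.05
mu_prior = rng.uniform(-5, 5, size=n_draws)
s_sim = np.array([rng.normal(mu, 1.0, size=100).mean() for mu in mu_prior])
accepted = mu_prior[np.abs(s_sim - s_obs) <= eps]

print(len(accepted), round(accepted.mean(), 2))  # posterior sample; mean near 2
```

The accepted draws approximate the posterior only as well as the summary statistic captures the data and only in the limit of small eps, which is exactly the assumptions-and-approximations caveat the abstract raises.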
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Vega Corona, Antonio; Zárate Banda, Magdalena; Barron Adame, Jose Miguel; Martínez Celorio, René Alfredo; Andina de la Fuente, Diego
2008-01-01
The present study describes the design of an Artificial Neural Network to synthesize the Approximation Function of a Pedometer for the Healthy Life Style Promotion. Experimentally, the approximation function is synthesized using three basic low-cost digital pedometers, which were calibrated with an advanced pedometer that calculates calories consumed and computes distance travelled from a personal stride input. The approximation function synthesized by means of the designed neural...
Kutzler, F.W.; Painter, G.S.
1992-01-01
A fully self-consistent series of nonlocal (gradient) density-functional calculations has been carried out using the augmented-Gaussian-orbital method to determine the magnitude of gradient corrections to the potential-energy curves of the first-row diatomics, Li2 through F2. Both the Langreth-Mehl-Hu and the Perdew-Wang gradient-density functionals were used in calculations of the binding energy, bond length, and vibrational frequency for each dimer. Comparison with results obtained in the local-spin-density approximation (LSDA) using the Vosko-Wilk-Nusair functional, and with experiment, reveals that bond lengths and vibrational frequencies are rather insensitive to details of the gradient functionals, including self-consistency effects, but the gradient corrections reduce the overbinding commonly observed in LSDA calculations of first-row diatomics (with the exception of Li2, the gradient-functional binding-energy error is only 50-12% of the LSDA error). The improved binding energies result from a large differential energy lowering, which occurs in open-shell atoms relative to the diatomics. The stabilization of the atom arises from the use of nonspherical charge and spin densities in the gradient-functional calculations. This stabilization is negligibly small in LSDA calculations performed with nonspherical densities.
Bakker, Mark
2001-05-01
An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, in a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over a small area around the well only; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision in three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.
Dhaundiyal Alok
2017-12-01
This paper describes the influence of some parameters significant to biomass pyrolysis on the numerical solutions of the non-isothermal nth-order distributed activation energy model (DAEM) using the Gamma distribution, and discusses the special case of a positive integer value of the scale parameter (λ), i.e. the Erlang distribution. Investigated parameters are the integral upper limit, the frequency factor, the heating rate, the reaction order, and the shape and rate parameters of the Gamma distribution. The influence of these parameters has been considered for the determination of the kinetic parameters of the non-isothermal nth-order Gamma distribution from experimentally derived thermoanalytical data of biomass pyrolysis. Mathematically, the effect of the parameters on the numerical solution is also used for predicting the behaviour of the unpyrolyzed fraction of biomass with respect to temperature. Analysis of the mathematical model is based upon asymptotic expansions, which lead to systematic methods for efficiently determining accurate approximations. The proposed method therefore provides a rapid and highly effective way of estimating the kinetic parameters and the distribution of activation energies.
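As a rough illustration of how a DAEM with a Gamma distribution of activation energies is evaluated numerically, the sketch below computes the unreacted fraction for the first-order case (n = 1), using the common closed-form approximation of the inner temperature integral. All parameter values (frequency factor A, heating rate β, and the Gamma shape/scale) are invented for demonstration and do not come from the paper:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gamma_pdf(E, k, theta):
    """Gamma distribution of activation energies: shape k, scale theta (J/mol)."""
    return E ** (k - 1) * math.exp(-E / theta) / (math.gamma(k) * theta ** k)

def unpyrolyzed_fraction(T, A, beta, k, theta, n_steps=200, E_max=4.0e5):
    """Unreacted fraction x(T) for a first-order DAEM,
    x(T) = ∫ exp(-psi(E, T)) f(E) dE, with the standard approximation
    psi ≈ (A R T^2) / (beta E) * exp(-E / (R T)) for the inner integral."""
    dE = E_max / n_steps
    total = 0.0
    for i in range(1, n_steps + 1):
        E = i * dE
        psi = (A * R * T * T) / (beta * E) * math.exp(-E / (R * T))
        total += math.exp(-psi) * gamma_pdf(E, k, theta) * dE
    return total

# Invented parameters: A = 1e13 1/s, beta = 10 K/min, Gamma mean 200 kJ/mol.
f_cold = unpyrolyzed_fraction(T=400.0, A=1e13, beta=10.0 / 60.0, k=20.0, theta=1.0e4)
f_hot = unpyrolyzed_fraction(T=900.0, A=1e13, beta=10.0 / 60.0, k=20.0, theta=1.0e4)
```

At low temperature almost nothing has reacted, while at high temperature only the high-activation-energy tail of the Gamma distribution survives, which is the qualitative behaviour the abstract's asymptotic analysis is built around.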
Calegari, E J; Lausmann, A C; Magalhaes, S G; Chaves, C M; Troper, A
2015-01-01
In this work the specific heat of a two-dimensional Hubbard model, suitable to discuss high-Tc superconductors (HTSC), is studied taking into account hopping to first (t) and second (t2) nearest neighbors. Experimental results for the specific heat of HTSC's, for instance, the YBCO and LSCO, indicate a close relation between the pseudogap and the specific heat. In the present work, we investigate the specific heat by the Green's function method within an n-pole approximation. The specific heat is calculated on the pseudogap and on the superconducting regions. In the present scenario, the pseudogap emerges when the antiferromagnetic (AF) fluctuations become sufficiently strong. The specific heat jump coefficient Δγ decreases when the total occupation per site (nT) reaches a given value. Such behavior of Δγ indicates the presence of a pseudogap in the regime of high occupation.
Calegari, E. J.; Lausmann, A. C.; Magalhaes, S. G.; Chaves, C. M.; Troper, A.
2015-03-01
In this work the specific heat of a two-dimensional Hubbard model, suitable to discuss high-Tc superconductors (HTSC), is studied taking into account hopping to first (t) and second (t2) nearest neighbors. Experimental results for the specific heat of HTSC's, for instance, the YBCO and LSCO, indicate a close relation between the pseudogap and the specific heat. In the present work, we investigate the specific heat by the Green's function method within an n-pole approximation. The specific heat is calculated on the pseudogap and on the superconducting regions. In the present scenario, the pseudogap emerges when the antiferromagnetic (AF) fluctuations become sufficiently strong. The specific heat jump coefficient Δγ decreases when the total occupation per site (nT) reaches a given value. Such behavior of Δγ indicates the presence of a pseudogap in the regime of high occupation.
Yonemitsu, K.; Bishop, A.R.
1992-01-01
As a convenient qualitative approach to strongly correlated electronic systems, an inhomogeneous Hartree-Fock plus random-phase approximation is applied to response functions for the two-dimensional multiband Hubbard model for cuprate superconductors. A comparison of the results with those obtained by exact diagonalization by Wagner, Hanke, and Scalapino [Phys. Rev. B 43, 10 517 (1991)] shows that overall structures in optical and magnetic particle-hole excitation spectra are well reproduced by this method. This approach is computationally simple, retains conceptual clarity, and can be calibrated by comparison with exact results on small systems. Most importantly, it is easily extended to larger systems and straightforward to incorporate additional terms in the Hamiltonian, such as electron-phonon interactions, which may play a crucial role in high-temperature superconductivity
Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.
2018-01-01
We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to the order O(αs^2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
A.L. Kataev
2018-01-01
We calculate the Adler D-function for N=1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N=1 SQCD is found in this scheme to the order O(αs^2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.
Ribeiro, M., E-mail: ribeiro.jr@oorbit.com.br [Office of Operational Research for Business Intelligence and Technology, Principal Office, Buffalo, Wyoming 82834 (United States)
2015-06-21
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2 to account for excited-state properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy compared with other theoretical results, such as GW, and with experiment, but at a low computational cost.
Ribeiro, M.
2015-01-01
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2 to account for excited-state properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy compared with other theoretical results, such as GW, and with experiment, but at a low computational cost.
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems such as the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is thus to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
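Value-function approximation with overlapping Gaussian kernels (receptive fields), as used in this abstract, can be sketched in a greatly reduced form. The 1-D toy task, the grid of 25 kernel centres, and the learning rate below are invented for illustration, far from the ~10,000 4-D kernels of the paper:

```python
import math
import random

class RBFQApproximator:
    """Q(s, a) approximated as a weighted sum of overlapping Gaussian
    kernels (receptive fields) covering the state-action space."""

    def __init__(self, centers, width, alpha):
        self.centers = centers          # (state, action) kernel centres
        self.width = width
        self.alpha = alpha              # learning rate
        self.w = [0.0] * len(centers)

    def _features(self, s, a):
        return [math.exp(-((s - cs) ** 2 + (a - ca) ** 2)
                         / (2.0 * self.width ** 2))
                for cs, ca in self.centers]

    def q(self, s, a):
        return sum(wi * fi for wi, fi in zip(self.w, self._features(s, a)))

    def update(self, s, a, target):
        """Move Q(s, a) toward a reward-derived target (LMS rule)."""
        err = target - self.q(s, a)
        for i, fi in enumerate(self._features(s, a)):
            self.w[i] += self.alpha * err * fi

# Toy 1-D "reaching" task: the best action equals the state, and the
# reward-on-approach signal is the negative squared distance to the goal.
random.seed(42)
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
approx = RBFQApproximator([(s, a) for s in grid for a in grid],
                          width=0.5, alpha=0.2)
for _ in range(3000):
    s, a = random.uniform(-1, 1), random.uniform(-1, 1)
    approx.update(s, a, target=-(s - a) ** 2)
```

After training, actions close to the goal state score higher than distant ones, which is the behaviour the kernel-based approximator must reproduce before any reward strategy comparison makes sense.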
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Suboptimal control of pressurized water reactor power plant using approximate model-following method
Tsuji, Masashi; Ogawa, Yuichi
1987-01-01
We attempted to develop an effective control system that can successfully manage the nuclear steam supply (NSS) system of a PWR power plant in an operational mode requiring relatively small variations of power. A procedure is proposed for synthesizing a simple yet practical suboptimal control system. The suboptimal control system is designed in two steps: application of optimal control theory, based on linear state-feedback control, and use of an approximate model-following method. This procedure can appreciably reduce the complexity of the controller structure by accepting a slight deviation from optimality and by using output-feedback control. This eliminates the engineering difficulty caused by the incomplete state feedback that is sometimes encountered in practical applications of optimal state-feedback control theory to complex large-scale dynamical systems. Digital simulations and graphical studies based on the Bode diagram demonstrate the effectiveness of the suboptimal control, and the applicability of the proposed design method as well. (author)
Modeling of pseudoacoustic P-waves in orthorhombic media with a low-rank approximation
Song, Xiaolei
2013-06-04
Wavefield extrapolation in pseudoacoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We use the dispersion relation for scalar wave propagation in pseudoacoustic orthorhombic media to model acoustic wavefields. The wavenumber-domain application of the Laplacian operator allows us to propagate the P-waves exclusively, without imposing any conditions on the parameter range of stability. It also allows us to avoid dispersion artifacts commonly associated with evaluating the Laplacian operator in the space domain using practical finite-difference stencils. To handle the corresponding space-wavenumber mixed-domain operator, we apply the low-rank approximation approach. Considering the number of parameters necessary to describe orthorhombic anisotropy, the low-rank approach yields a space-wavenumber decomposition of the extrapolation operator that is dependent on space location regardless of the parameters, a feature necessary for orthorhombic anisotropy. Numerical experiments demonstrate that the proposed wavefield extrapolator is accurate and practically free of dispersion. Furthermore, there is no coupling of qSV and qP waves because we use the analytical dispersion solution corresponding to the P-wave.
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. In previous research, the theoretical basis and benefits of the hybrid approach have been developed. However, lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods, to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical analysis on the comparison of the accuracy and performance of hybrid network theory, with pure Bayesian and Fuzzy systems and an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo Simulation, in comparison to situational ground truth to measure accuracy and fidelity. Following this, a rigorous statistical analysis of the performance results will be performed, to quantify the benefit of hybrid inference to other fusion tools.
Khakzad, Nima; Khan, Faisal; Amyotte, Paul
2015-07-01
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well-established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of major accidents' relevant data, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.
Liu Yang; Yao Xiong; Xiao-jiao Tong
2017-01-01
We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying r...
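The core SAA idea used above can be sketched without the SSD constraint: the expectation in the stochastic cost is replaced by an average over sampled scenarios, and the resulting deterministic problem is minimized. The newsvendor-style cost for a perishable product and all numbers below are invented for illustration:

```python
import random

def saa_newsvendor(demand_samples, price, cost, grid):
    """Sample average approximation: replace E[cost(q, D)] by the average
    over drawn demand scenarios, then minimize over the order quantity q."""
    def avg_cost(q):
        total = 0.0
        for d in demand_samples:
            sold = min(q, d)            # perishable: unsold units are lost
            total += cost * q - price * sold
        return total / len(demand_samples)
    return min(grid, key=avg_cost)

# Invented scenario: demand ~ N(100, 20), unit cost 1, selling price 2.
random.seed(1)
samples = [random.gauss(100, 20) for _ in range(5000)]
q_star = saa_newsvendor(samples, price=2.0, cost=1.0, grid=range(50, 151))
```

With a critical ratio of (price - cost)/price = 0.5, the SAA minimizer should sit near the median demand, and it converges to the true stochastic optimum as the number of sampled scenarios grows.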
Allen, Steve
2000-10-01
In this thesis we present a new non-perturbative method for calculating the properties of a system of fermions. Our method generalizes the two-particle self-consistent approximation proposed by Vilk and Tremblay for the repulsive Hubbard model. The method can be applied to the study of precritical behaviour when the symmetry of the order parameter is sufficiently high. We apply it to the pseudogap problem in the attractive Hubbard model. Our results show excellent agreement with Monte Carlo data for small systems. We observe that the regime in which the pseudogap appears in the single-particle spectral weight is a renormalized classical regime characterized by a characteristic frequency of the superconducting fluctuations that is lower than the temperature. Another characteristic is the low superfluid density of this phase, demonstrating that we are not in the presence of preformed pairs. The results obtained seem to show that the high symmetry of the order parameter and the two-dimensionality of the system studied widen the temperature range over which the pseudogap regime is observed. We argue that this result carries over to high-critical-temperature superconductors, where the pseudogap appears at temperatures much higher than the critical temperature. The strong symmetry in these systems could be related to Zhang's SO(5) theory. In an appendix, we prove a very recent result that would make it possible to ensure self-consistency between the one- and two-particle properties by adding dynamics to the irreducible vertex. This addition opens up the possibility of extending the method to the case of strong interaction.
Heng, Kevin; Mendonça, João M.; Lee, Jae-Min, E-mail: kevin.heng@csh.unibe.ch, E-mail: joao.mendonca@csh.unibe.ch, E-mail: lee@physik.uzh.ch [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012 Bern (Switzerland)
2014-11-01
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior), and solutions for the temperature-pressure profiles. Generally, the problem is mathematically underdetermined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat, and the properties of scattering in both the optical and infrared. Finally, we derive generalized expressions for the total, net, outgoing, and incoming fluxes in the convective regime.
Supersonic beams at high particle densities: model description beyond the ideal gas approximation.
Christen, Wolfgang; Rademann, Klaus; Even, Uzi
2010-10-28
Supersonic molecular beams constitute a very powerful technique in modern chemical physics. They offer several unique features such as a directed, collision-free flow of particles, very high luminosity, and an unsurpassed strong adiabatic cooling during the jet expansion. While it is generally recognized that their maximum flow velocity depends on the molecular weight and the temperature of the working fluid in the stagnation reservoir, little is known about the effects of elevated particle densities. Frequently, the characteristics of supersonic beams are treated in diverse approximations of an ideal gas expansion. In these simplified model descriptions, the real gas character of fluid systems is ignored, although particle associations are responsible for fundamental processes such as the formation of clusters, both in the reservoir at increased densities and during the jet expansion. In this contribution, the various assumptions of ideal gas treatments of supersonic beams and their shortcomings are reviewed. It is shown in detail that a straightforward thermodynamic approach considering the initial and final enthalpy is capable of characterizing the terminal mean beam velocity, even at the liquid-vapor phase boundary and the critical point. Fluid properties are obtained using the most accurate equations of state available at present. This procedure provides the opportunity to naturally include the dramatic effects of nonideal gas behavior for a large variety of fluid systems. Besides the prediction of the terminal flow velocity, thermodynamic models of isentropic jet expansions permit an estimate of the upper limit of the beam temperature and the amount of condensation in the beam. These descriptions can even be extended to include spinodal decomposition processes, thus providing a generally applicable tool for investigating the two-phase region of high supersaturations not easily accessible otherwise.
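The enthalpy-balance argument above can be sketched directly: energy conservation in the expansion gives v = sqrt(2(h0 - h)) per unit mass, which in the ideal-gas limit (complete conversion of cp*T0) reproduces the familiar maximum beam velocity. The helium example is an illustration only; the paper's point is precisely that real-fluid equations of state, not the ideal-gas cp used here, are needed at high densities:

```python
import math

R_GAS = 8.314  # gas constant, J/(mol K)

def terminal_velocity(h0, h_final):
    """Terminal mean beam velocity from the enthalpy balance
    h0 = h_final + v^2 / 2 (specific enthalpies in J/kg)."""
    return math.sqrt(2.0 * (h0 - h_final))

def ideal_gas_limit(T0, molar_mass, gamma=5.0 / 3.0):
    """Ideal-gas special case: complete conversion of cp * T0 into
    directed motion, v_max = sqrt(2 * cp * T0)."""
    cp = gamma * R_GAS / ((gamma - 1.0) * molar_mass)  # J/(kg K)
    return terminal_velocity(cp * T0, 0.0)

# Helium (monatomic, M = 4 g/mol) expanding from a 300 K reservoir.
v_helium = ideal_gas_limit(300.0, 0.004)
```

For helium at room temperature this gives roughly 1.8 km/s; replacing the ideal-gas enthalpies with tabulated real-fluid values is what extends the estimate to the liquid-vapor phase boundary and the critical point.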
Rights, Jason D; Sterba, Sonya K
2016-11-01
Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
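One of the analytic relationships described above, recovering an MLM-style intraclass correlation from NPMM classes, can be sketched as follows: the random-intercept variance τ00 is approximated by the between-class variance of the latent intercept classes. The two-class weights and means are invented for illustration:

```python
def npmm_intercept_variance(weights, means):
    """Approximate the MLM random-intercept variance tau00 by the
    between-class variance of the NPMM latent intercept classes."""
    grand_mean = sum(w * m for w, m in zip(weights, means))
    return sum(w * (m - grand_mean) ** 2 for w, m in zip(weights, means))

def icc(tau00, sigma2):
    """Intraclass correlation: share of total variance between clusters."""
    return tau00 / (tau00 + sigma2)

# Two equally weighted classes at -1 and +1 mimic tau00 = 1; with a
# level-1 residual variance of 3 the implied ICC is 0.25.
tau00 = npmm_intercept_variance([0.5, 0.5], [-1.0, 1.0])
rho = icc(tau00, sigma2=3.0)
```

Adding classes refines this discrete approximation of the continuous random-intercept distribution, which is the sense in which the NPMM indirectly approximates the MLM with relaxed distributional assumptions.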
Mroz, T A
1999-10-01
This paper contains a Monte Carlo evaluation of estimators used to control for endogeneity of dummy explanatory variables in continuous outcome regression models. When the true model has bivariate normal disturbances, estimators using discrete factor approximations compare favorably to efficient estimators in terms of precision and bias; these approximation estimators dominate all the other estimators examined when the disturbances are non-normal. The experiments also indicate that one should liberally add points of support to the discrete factor distribution. The paper concludes with an application of the discrete factor approximation to the estimation of the impact of marriage on wages.
Modified model of neutron resonance widths distribution. Results of total gamma-widths approximation
Sukhovoj, A.M.; Khitrov, V.A.
2011-01-01
Functional dependences of the probability to observe a given Γn0 value and algorithms for determination of the most probable magnitudes of the modified model of resonance parameter distributions were used for analysis of the experimental data on the total radiative widths of neutron resonances. As in the case of neutron widths, precise description of the Γγ spectra requires a superposition of three and more probability distributions for squares of the random normally distributed values with different nonzero average and nonunit dispersion. This result confirms the preliminary conclusion, obtained earlier in the analysis of Γn0, that in practically all 56 tested sets of total gamma widths there are several groups noticeably differing from each other in the structure of their wave functions. In addition, it was determined that radiative widths are much more sensitive than the neutron ones to the resonance wave function structure. Analysis of the earlier obtained neutron reduced width distribution parameters for 157 resonance sets in the mass region of nuclei 35 ≤ A ≤ 249 was also performed. It was shown that the experimental values of the widths can correspond with high probability to a superposition of several expected independent distributions with their nonzero mean values and nonunit dispersion.
The varying cosmological constant: a new approximation to the Friedmann equations and universe model
Öztaş, Ahmet M.; Dil, Emre; Smith, Michael L.
2018-05-01
We investigate the time-dependent nature of the cosmological constant, Λ, of the Einstein Field Equation (EFE). Beginning with the Einstein-Hilbert action as our fundamental principle we develop a modified version of the EFE allowing the value of Λ to vary as a function of time, Λ(t), indirectly, for an expanding universe. We follow the evolving Λ presuming four-dimensional space-time and a flat universe geometry and present derivations of Λ(t) as functions of the Hubble constant, matter density, and volume changes which can be traced back to the radiation epoch. The models are more detailed descriptions of the Λ dependence on cosmological factors than previous, allowing calculations of the important parameters, Ωm and Ωr, to deep lookback times. Since we derive these without the need for extra dimensions or other special conditions our derivations are useful for model evaluation with astronomical data. This should aid resolution of several difficult problems of astronomy such as the best value for the Hubble constant at present and at recombination.
$O(N)$ model in Euclidean de Sitter space: beyond the leading infrared approximation
Nacir, Diana López; Trombetta, Leonardo G
2016-01-01
We consider an $O(N)$ scalar field model with quartic interaction in $d$-dimensional Euclidean de Sitter space. In order to avoid the problems of the standard perturbative calculations for light and massless fields, we generalize to the $O(N)$ theory a systematic method introduced previously for a single field, which treats the zero modes exactly and the nonzero modes perturbatively. We compute the two-point functions taking into account not only the leading infrared contribution, coming from the self-interaction of the zero modes, but also corrections due to the interaction of the ultraviolet modes. For the model defined in the corresponding Lorentzian de Sitter spacetime, we obtain the two-point functions by analytical continuation. We point out that a partial resummation of the leading secular terms (which necessarily involves nonzero modes) is required to obtain a decay at large distances for massless fields. We implement this resummation along with a systematic double expansion in an effective coupling c...
Smorodin, F.K.; Druzhinin, G.V.
1991-01-01
A mathematical model is proposed which describes the fracture behavior of amorphous materials during laser cutting. The model, which is based on boundary layer equations, is reduced to ordinary differential equations with the corresponding boundary conditions. The reduced model is used to develop an approximate method for calculating the fracture characteristics of nonmetallic materials.
Functional model of biological neural networks.
Lo, James Ting-Ho
2010-12-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
Senjean, Bruno; Knecht, Stefan; Jensen, Hans Jørgen Aa
2015-01-01
Gross-Oliveira-Kohn density-functional theory (GOK-DFT) for ensembles is, in principle, very attractive but has been hard to use in practice. A practical model based on GOK-DFT for the calculation of electronic excitation energies is discussed. The model relies on two modifications of GOK-DFT: use...... promising results have been obtained for both single (including charge transfer) and double excitations with spin-independent short-range local and semilocal functionals. Even at the Kohn-Sham ensemble DFT level, which is recovered when the range-separation parameter is set to 0, LIM performs better than...
Sivers function in constituent quark models
Scopetta, S.; Fratini, F.; Vento, V.
2008-01-01
A formalism to evaluate the Sivers function, developed for calculations in constituent quark models, is applied to the Isgur-Karl model. A non-vanishing Sivers asymmetry, with opposite signs for the u and d flavor, is found; the Burkardt sum rule is fulfilled up to 2 %. Nuclear effects in the extraction of neutron single spin asymmetries in semi-inclusive deep inelastic scattering off 3He are also evaluated. In the kinematics of JLab, it is found that the nuclear effects described by an Impulse Approximation approach are under control.
A conceptual approach to approximate tree root architecture in infinite slope models
Schmaltz, Elmar; Glade, Thomas
2016-04-01
Vegetation-related properties - particularly tree root distribution and the coherent hydrologic and mechanical effects on the underlying soil mantle - are commonly not considered in infinite slope models. Indeed, from a geotechnical point of view, these effects appear difficult to reproduce reliably in a physically-based modelling approach. The growth of a tree and the expansion of its root architecture are directly connected with both intrinsic properties such as species and age, and extrinsic factors like topography, availability of nutrients, climate and soil type. These parameters control four main aspects of the tree root architecture: 1) type of rooting; 2) maximum growing distance from the tree stem (radius r); 3) maximum growing depth (height h); and 4) potential deformation of the root system. Geometric solids are able to approximate the distribution of a tree root system. The objective of this paper is to investigate whether it is possible to implement root systems, and the connected hydrological and mechanical attributes, sufficiently in a 3-dimensional slope stability model. Hereby, a spatio-dynamic vegetation module should cope with the demands of performance, computation time and significance. However, in this presentation we focus only on the distribution of roots. The assumption is that the horizontal root distribution around a tree stem on a 2-dimensional plane can be described by a circle with the stem located at the centroid and a distinct radius r that depends on age and species. We classified three main types of tree root systems and reproduced the species-age-related root distribution with respective mathematical solids in a synthetic 3-dimensional hillslope ambience. Thus, two solids in Euclidean space were distinguished to represent the three root systems: i) cylinders with radius r and height h, where the dimension of the latter defines the shape of a taproot system or a shallow-root system respectively; ii) elliptic
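The cylinder approximation for taproot and shallow-root systems can be sketched as a simple membership and volume test. The stem position, radius r and depth h below are hypothetical species/age-dependent inputs, not values from the study:

```python
# Sketch of the cylindrical root-solid approximation: a root zone around a
# stem at (x0, y0) with radius r and depth h. All numeric defaults are
# illustrative assumptions.
import math

def in_root_cylinder(px, py, pz, stem=(0.0, 0.0), r=2.5, h=1.2):
    """True if point (px, py, pz) lies in the cylindrical root volume.
    pz is depth below the surface (positive downwards)."""
    dx, dy = px - stem[0], py - stem[1]
    return math.hypot(dx, dy) <= r and 0.0 <= pz <= h

def root_volume(r=2.5, h=1.2):
    """Volume of the cylindrical root solid (m^3 for inputs in metres)."""
    return math.pi * r * r * h
```

A shallow-root system would use a large r and small h, a taproot system the reverse; the same test applies to both.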
Caricato, Marco
2018-04-01
We report the theory and the implementation of the linear response function of the coupled cluster (CC) with the single and double excitations method combined with the polarizable continuum model of solvation, where the correlation solvent response is approximated with the perturbation theory with energy and singles density (PTES) scheme. The singles name is derived from retaining only the contribution of the CC single excitation amplitudes to the correlation density. We compare the PTES working equations with those of the full-density (PTED) method. We then test the PTES scheme on the evaluation of excitation energies and transition dipoles of solvated molecules, as well as of the isotropic polarizability and specific rotation. Our results show a negligible difference between the PTED and PTES schemes, while the latter affords a significantly reduced computational cost. This scheme is general and can be applied to any solvation model that includes mutual solute-solvent polarization, including explicit models. Therefore, the PTES scheme is a competitive approach to compute response properties of solvated systems using CC methods.
Al-Hawat, Sh; Naddaf, M
2005-01-01
The electron energy distribution function (EEDF) was determined from the second derivative of the I-V Langmuir probe characteristics and, thereafter, theoretically calculated by solving the plasma kinetic equation, using the black wall (BW) approximation, in the positive column of a neon glow discharge. The pressure has been varied from 0.5 to 4 Torr and the current from 10 to 30 mA. The measured electron temperature, density and electric field strength were used as input data for solving the kinetic equation. Comparisons were made between the EEDFs obtained from experiment, the BW approach, the Maxwellian distribution and the Rutcher solution of the kinetic equation in the elastic energy range. The best conditions for the BW approach are found under the discharge conditions: current density j_d = 4.45 mA cm^-2 and normalized electric field strength E/p = 1.88 V cm^-1 Torr^-1.
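As a minimal illustration of the probe-based determination described above, the Druyvesteyn relation estimates the EEDF from the second derivative of the I-V characteristic, f(ε) ∝ √ε · d²I/dV². The sketch below applies central finite differences to a synthetic Maxwellian trace; the grid, temperature and omitted physical prefactors are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: Druyvesteyn-style EEDF estimate from d2I/dV2 of a
# Langmuir probe trace, via central finite differences on a synthetic
# (Maxwellian) electron-retardation current. Constant prefactors omitted.
import math

def second_derivative(V, I):
    """Central-difference d2I/dV2 on a uniform voltage grid."""
    h = V[1] - V[0]
    return [(I[k - 1] - 2.0 * I[k] + I[k + 1]) / h**2
            for k in range(1, len(I) - 1)]

def eedf(V, I, V_plasma):
    """Unnormalised EEDF: f(eps) ~ sqrt(eps) * d2I/dV2, eps = V_plasma - V."""
    d2 = second_derivative(V, I)
    out = []
    for k, d in enumerate(d2, start=1):
        eps = V_plasma - V[k]          # electron energy in eV
        if eps > 0.0:
            out.append((eps, math.sqrt(eps) * d))
    return out

# Synthetic retardation current for a Maxwellian plasma, T_e = 2 eV,
# plasma potential taken as 0 V (illustrative values).
Te = 2.0
V = [-10.0 + 0.05 * k for k in range(201)]
I = [math.exp(v / Te) for v in V]     # I ~ exp(V/Te) for V < V_plasma

for eps, f in eedf(V, I, 0.0)[::40]:
    print(f"eps = {eps:5.2f} eV, f ~ {f:.4g}")
```

For the synthetic Maxwellian input, the recovered f(ε) is proportional to √ε·exp(−ε/Te), as expected.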
Correlation functions of two-matrix models
Bonora, L.; Xiong, C.S.
1993-11-01
We show how to calculate correlation functions of two-matrix models without any approximation technique (except for the genus expansion). In particular we do not use any continuum limit technique. This allows us to find many solutions which are invisible to the latter technique. To reach our goal we make full use of the integrable hierarchies and their reductions, which were shown in previous papers to appear naturally in multi-matrix models. The second ingredient we use, though to a lesser extent, is the W-constraints. In fact, an explicit solution of the relevant hierarchy, satisfying the W-constraints (string equation), underlies the explicit calculation of the correlation functions. The correlation functions we compute lend themselves to a possible interpretation in terms of topological field theories. (orig.)
Madsen Per
2007-07-01
In a stochastic simulation study of a dairy cattle population, three multitrait models for the estimation of genetic parameters and the prediction of breeding values were compared. The first model was an approximate multitrait model using a two-step procedure. The first step was a single-trait model for all traits. The solutions for fixed effects from these analyses were subtracted from the phenotypes. A multitrait model containing only an overall mean, an additive genetic term and a residual term was applied to these preadjusted data. The second model was similar to the first, but its multitrait model also contained a year effect. The third model was a full multitrait model. Genetic trends for total merit and for the individual traits in the breeding goal were compared for the three scenarios to rank the models. The full multitrait model gave the highest genetic response, but was not significantly better than the approximate multitrait model including a year effect. The inclusion of a year effect in the second step of the approximate multitrait model significantly improved the genetic trend for total merit. In this study, estimating genetic parameters for breeding value estimation using models corresponding to the ones used for the prediction of breeding values increased the accuracy of the breeding values and thereby the genetic progress.
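The two-step preadjustment in the first model can be sketched as follows; here the single-trait fixed-effect model is reduced to an overall trait mean, as a stand-in for the full fixed-effect structure used in the study:

```python
# Minimal sketch of the two-step preadjustment: fit a single-trait fixed
# effect per trait (here just an overall mean, an illustrative stand-in),
# then subtract it to obtain preadjusted phenotypes for the multitrait step.

def preadjust(phenotypes):
    """phenotypes: list of records, each a list of trait values.
    Returns (adjusted records, per-trait fixed-effect estimates)."""
    n_traits = len(phenotypes[0])
    means = [sum(rec[t] for rec in phenotypes) / len(phenotypes)
             for t in range(n_traits)]
    adjusted = [[rec[t] - means[t] for t in range(n_traits)]
                for rec in phenotypes]
    return adjusted, means

adj, mu = preadjust([[10.0, 2.0], [12.0, 4.0], [14.0, 6.0]])
print(mu)   # per-trait fixed-effect estimates
```

Step two of the approximate model would then fit the reduced multitrait model (mean, additive genetic and residual terms, plus a year effect in the second variant) to `adj`.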
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour f(x) ≈ g(x, A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Padé approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
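A generic (unmodulated) Padé approximant of the kind this note extends can be built from the power-series coefficients by one small linear solve; the sketch below is the textbook construction, not the modulated variant proposed in the letter:

```python
# Textbook [L/M] Pade approximant from power-series coefficients
# c[0..L+M]: solve for denominator coefficients b, then read off the
# numerator. A small Gaussian-elimination solver keeps this self-contained.

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    aug = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for k in range(col, n + 1):
                aug[r][k] -= f * aug[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def pade(c, L, M):
    """Return (p, q): numerator/denominator coefficients, q[0] == 1."""
    A = [[(c[L + i - j] if 0 <= L + i - j < len(c) else 0.0)
          for j in range(1, M + 1)] for i in range(1, M + 1)]
    rhs = [-c[L + i] for i in range(1, M + 1)]
    b = [1.0] + solve(A, rhs)
    a = [sum(b[j] * c[n - j] for j in range(0, min(n, M) + 1))
         for n in range(L + 1)]
    return a, b

def horner(p, x):
    v = 0.0
    for coef in reversed(p):
        v = v * x + coef
    return v

# [1/1] Pade of exp(x) from 1 + x + x^2/2: gives (1 + x/2)/(1 - x/2)
p, q = pade([1.0, 1.0, 0.5], 1, 1)
print(p, q)
```

Evaluating `horner(p, x) / horner(q, x)` then gives the rational resummation of the truncated series.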
Models of few optical cycle solitons beyond the slowly varying envelope approximation
Leblond, H.; Mihalache, D.
2013-01-01
In recent years there has been huge interest in experimental and theoretical studies in the area of few-optical-cycle pulses and in the broader, fast-growing field of so-called extreme nonlinear optics. This review concentrates on theoretical studies performed in the past decade concerning the description of few optical cycle solitons beyond the slowly varying envelope approximation (SVEA). Here we systematically use the powerful reductive expansion method (alias multiscale analysis) in order to derive simple integrable and nonintegrable evolution models describing both nonlinear wave propagation and interaction of ultrashort (femtosecond) pulses. To this aim we perform the multiple scale analysis on the Maxwell–Bloch equations and the corresponding Schrödinger–von Neumann equation for the density matrix of two-level atoms. We analyze in detail both long-wave and short-wave propagation models. The propagation of ultrashort few-optical-cycle solitons in quadratic and cubic nonlinear media is adequately described by generic integrable and nonintegrable nonlinear evolution equations such as the Korteweg–de Vries equation, the modified Korteweg–de Vries equation, the complex modified Korteweg–de Vries equation, the sine–Gordon equation, the cubic generalized Kadomtsev–Petviashvili equation, and the two-dimensional sine–Gordon equation. Moreover, we consider the propagation of few-cycle optical solitons in both (1+1)- and (2+1)-dimensional physical settings. A generalized modified Korteweg–de Vries equation is introduced in order to describe robust few-optical-cycle dissipative solitons. We investigate in detail the existence and robustness of both linearly polarized and circularly polarized few-cycle solitons, that is, we also take into account the effect of the vectorial nature of the electric field. Some of these results concerning the systematic use of the reductive expansion method beyond the SVEA can be relatively easily extended to few
Nobile, F.
2015-10-30
In this work we provide a convergence analysis for the quasi-optimal version of the sparse-grids stochastic collocation method we presented in a previous work: “On the optimal polynomial approximation of stochastic PDEs by Galerkin and collocation methods” (Beck et al., Math Models Methods Appl Sci 22(09), 2012). The construction of a sparse grid is recast into a knapsack problem: a profit is assigned to each hierarchical surplus and only the most profitable ones are added to the sparse grid. The convergence rate of the sparse grid approximation error with respect to the number of points in the grid is then shown to depend on weighted summability properties of the sequence of profits. This is a very general argument that can be applied to sparse grids built with any uni-variate family of points, both nested and non-nested. As an example, we apply such quasi-optimal sparse grids to the solution of a particular elliptic PDE with stochastic diffusion coefficients, namely the “inclusions problem”: we detail the convergence estimates obtained in this case using polynomial interpolation on either nested (Clenshaw–Curtis) or non-nested (Gauss–Legendre) abscissas, verify their sharpness numerically, and compare the performance of the resulting quasi-optimal grids with a few alternative sparse-grid construction schemes recently proposed in the literature.
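The knapsack recasting can be illustrated with a greedy profit-ordering sketch: each hierarchical surplus carries an estimated error reduction and a work cost, and the most profitable surpluses are added until a work budget is exhausted. The profit values below are synthetic stand-ins for the paper's estimates:

```python
# Hedged sketch of quasi-optimal (knapsack-style) sparse-grid selection:
# rank candidate hierarchical surpluses by profit = error_gain / work and
# fill the grid greedily. Candidate data here are illustrative only.

def quasi_optimal_selection(candidates, budget):
    """candidates: list of (index, error_gain, work); budget: max total work.
    Greedy by profit, the classic relaxation of the knapsack problem."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    chosen, used = [], 0.0
    for idx, gain, work in ranked:
        if used + work <= budget:
            chosen.append(idx)
            used += work
    return chosen

grid = quasi_optimal_selection(
    [("a", 8.0, 4.0), ("b", 6.0, 2.0), ("c", 1.0, 1.0)], budget=6.0)
print(grid)
```

In the paper's setting the "profit" of a surplus is derived from sharp estimates of its contribution to the approximation error, and the convergence rate follows from the weighted summability of the profit sequence.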
Statistical modelling with quantile functions
Gilchrist, Warren
2000-01-01
Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one has put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of stati...
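The "modelling kit" idea - building new distributions by combining quantile functions rather than densities - can be sketched with the standard addition rule: any nonnegative-weighted sum of quantile functions is again a quantile function. The components below are textbook quantiles; the particular combination is illustrative:

```python
# Sketch of quantile-function model building: combine component quantile
# functions by addition and scaling. The specific model below is an
# illustrative example, not one from the book.
import math

def q_uniform(p):
    """Quantile function of U(0, 1)."""
    return p

def q_logistic(p):
    """Quantile function of the standard logistic distribution."""
    return math.log(p / (1.0 - p))

def q_model(p, mu=0.0, sigma1=1.0, sigma2=1.0):
    """Addition rule: Q(p) = mu + sigma1*Q_uniform(p) + sigma2*Q_logistic(p).
    Monotone in p, hence a valid quantile function for sigma1, sigma2 >= 0."""
    return mu + sigma1 * q_uniform(p) + sigma2 * q_logistic(p)

for p in (0.1, 0.5, 0.9):
    print(p, q_model(p))
```

Fitting such a model proceeds by matching the model quantiles to the ordered data, rather than by manipulating a density.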
Composite spectral functions for solving Volterra's population model
Ramezani, M.; Razzaghi, M.; Dehghan, M.
2007-01-01
An approximate method for solving Volterra's population model for the population growth of a species in a closed system is proposed. Volterra's model is a nonlinear integro-differential equation, where the integral term represents the effect of toxin. The approach is based upon composite spectral function approximations. The properties of composite spectral functions consisting of a few terms of orthogonal functions are presented and are utilized to reduce the solution of Volterra's model to the solution of a system of algebraic equations. The method is easy to implement and yields very accurate results.
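Volterra's model, κ u'(t) = u − u² − u ∫₀ᵗ u(s) ds, can also be integrated numerically for comparison; the sketch below uses a plain explicit Euler scheme with a running trapezoidal integral rather than the composite spectral functions of the paper, and the parameter values are illustrative:

```python
# Hedged numerical sketch of Volterra's population model,
#   kappa * u'(t) = u - u^2 - u * integral_0^t u(s) ds,
# with explicit Euler time stepping and a trapezoidal running integral.
# The toxin term (the integral) eventually drives the population to zero.

def volterra_population(kappa=0.1, u0=0.1, dt=1e-4, t_end=5.0):
    """Return (final population, peak population) on [0, t_end]."""
    u, integral, peak = u0, 0.0, u0
    for _ in range(int(t_end / dt)):
        du = (u - u * u - u * integral) / kappa
        u_new = u + dt * du
        integral += 0.5 * dt * (u + u_new)   # trapezoid on u(s)
        u = u_new
        peak = max(peak, u)
    return u, peak

final, peak = volterra_population()
print(f"peak = {peak:.4f}, final = {final:.4g}")
```

The known qualitative behaviour - growth to a single maximum followed by decay as toxin accumulates - is reproduced; for κ = 0.1 and u0 = 0.1 the peak is close to the analytic value 1 + κ ln(κ/(1 + κ − u0)) ≈ 0.77.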
Sato, Shunsuke A. [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba 305-8571 (Japan); Taniguchi, Yasutaka [Center for Computational Science, University of Tsukuba, Tsukuba 305-8571 (Japan); Department of Medical and General Sciences, Nihon Institute of Medical Science, 1276 Shimogawara, Moroyama-Machi, Iruma-Gun, Saitama 350-0435 (Japan); Shinohara, Yasushi [Max Planck Institute of Microstructure Physics, 06120 Halle (Germany); Yabana, Kazuhiro [Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba 305-8571 (Japan); Center for Computational Science, University of Tsukuba, Tsukuba 305-8571 (Japan)
2015-12-14
We develop methods to calculate electron dynamics in crystalline solids in real-time time-dependent density functional theory employing exchange-correlation potentials which reproduce band gap energies of dielectrics; a meta-generalized gradient approximation was proposed by Tran and Blaha [Phys. Rev. Lett. 102, 226401 (2009)] (TB-mBJ) and a hybrid functional was proposed by Heyd, Scuseria, and Ernzerhof [J. Chem. Phys. 118, 8207 (2003)] (HSE). In time evolution calculations employing the TB-mBJ potential, we have found it necessary to adopt a predictor-corrector step for a stable time evolution. We have developed a method to evaluate the electronic excitation energy without referring to the energy functional, which is unknown for the TB-mBJ potential. For the HSE functional, we have developed a method for the operation of the Fock-like term in Fourier space to facilitate efficient use of massively parallel computers equipped with graphics processing units. We compare electronic excitations in silicon and germanium induced by femtosecond laser pulses using the TB-mBJ, HSE, and a simple local density approximation (LDA). At low laser intensities, electronic excitations are found to be sensitive to the band gap energy: they are close to each other using TB-mBJ and HSE and are much smaller in LDA. At high laser intensities close to the damage threshold, electronic excitation energies do not differ much among the three cases.
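The predictor-corrector stabilisation mentioned for the TB-mBJ propagation can be illustrated, in a much-reduced form, by a generic Heun predictor-corrector step on a scalar ODE; the real-time TDDFT propagation acts on Kohn-Sham orbitals, so this is only a structural analogue:

```python
# Generic predictor-corrector (Heun) time step for u' = f(t, u): an Euler
# predictor followed by a trapezoidal corrector. A minimal analogue of the
# stabilised propagation scheme, not the actual TDDFT propagator.

def heun_step(f, t, u, dt):
    """One predictor-corrector step for u' = f(t, u)."""
    u_pred = u + dt * f(t, u)                            # predictor (Euler)
    return u + 0.5 * dt * (f(t, u) + f(t + dt, u_pred))  # corrector

def integrate(f, u0, t0, t1, n):
    """Integrate u' = f(t, u) from t0 to t1 in n predictor-corrector steps."""
    dt = (t1 - t0) / n
    t, u = t0, u0
    for _ in range(n):
        u = heun_step(f, t, u, dt)
        t += dt
    return u
```

The corrector re-evaluates the right-hand side at the predicted end state, which is what damps the instability a plain forward step can exhibit with stiff potentials.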