Approximation errors during variance propagation
International Nuclear Information System (INIS)
Dinsmore, Stephen
1986-01-01
Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
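The first-order (delta-method) variance propagation that such analyses rely on can be sketched for a two-input OR gate; the gate, means, and variances below are illustrative, not the paper's fault trees:

```python
import random

def top_or(p1, p2):
    # Top-event probability for a two-input OR gate
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def var_first_order(mu1, var1, mu2, var2):
    # First-order (delta-method) propagation:
    # Var[T] ~ sum_i (dT/dp_i)^2 * Var[p_i], derivatives taken at the means
    d1 = 1.0 - mu2   # dT/dp1
    d2 = 1.0 - mu1   # dT/dp2
    return d1 * d1 * var1 + d2 * d2 * var2

def var_monte_carlo(mu1, sd1, mu2, sd2, n=200_000, seed=1):
    # Brute-force reference: sample the inputs and compute the sample
    # variance of the top-event probability.
    rng = random.Random(seed)
    samples = [top_or(rng.gauss(mu1, sd1), rng.gauss(mu2, sd2)) for _ in range(n)]
    m = sum(samples) / n
    return sum((s - m) ** 2 for s in samples) / (n - 1)
```

With small input variances the two estimates agree closely; the approximation error grows as the input variances grow, which is the regime the paper's plots explore.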
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described that are capable of using approximate error in constrained conjugate gradient minimization problems.
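A generic sketch of the idea, assuming a linear least-squares setting where each row of a matrix A is one "ray": the error calculation uses only a selected subset of rays, and conjugate gradient minimizes the resulting subset objective. The matrix, subset, and function names are illustrative, not from the patent:

```python
def cg_normal_equations(A_rows, b_vals, n_unknowns, iters=25):
    # Linear conjugate gradient on the normal equations A^T A x = A^T b,
    # where A_rows/b_vals hold only the *selected subset* of rays.
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

    m = len(A_rows)
    # Form A^T A and A^T b for the ray subset.
    AtA = [[sum(A_rows[k][i] * A_rows[k][j] for k in range(m))
            for j in range(n_unknowns)] for i in range(n_unknowns)]
    Atb = [sum(A_rows[k][i] * b_vals[k] for k in range(m)) for i in range(n_unknowns)]

    x = [0.0] * n_unknowns
    r = [Atb[i] - y for i, y in enumerate(matvec(AtA, x))]   # residual
    d = r[:]                                                 # search direction
    rr = sum(v * v for v in r)
    for _ in range(iters):
        Ad = matvec(AtA, d)
        alpha = rr / sum(d[i] * Ad[i] for i in range(n_unknowns))  # exact line minimum
        x = [x[i] + alpha * d[i] for i in range(n_unknowns)]
        r = [r[i] - alpha * Ad[i] for i in range(n_unknowns)]
        rr_new = sum(v * v for v in r)
        if rr_new < 1e-24:
            break
        d = [r[i] + (rr_new / rr) * d[i] for i in range(n_unknowns)]
        rr = rr_new
    return x
```

If the full system is consistent and the subset has full column rank, minimizing the subset objective recovers the full solution, so the residual over all rays stays small even though only a third of them entered the error calculation.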
International Nuclear Information System (INIS)
Mikhin, V.I.; Matukhin, N.M.
2000-01-01
An approach to generalizing the non-stationary heat exchange data for the central zones of nuclear reactor fuel assemblies is proposed, together with approximate thermal-model-testing criteria. Fuel assemblies of fast and water-cooled reactors with different fuel compositions have been investigated. The non-stationary heat exchange is caused by the time dependence of the fuel energy release. (author)
Approximate calculation method for integral of mean square value of nonstationary response
International Nuclear Information System (INIS)
Aoki, Shigeru; Fukano, Azusa
2010-01-01
The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate a random response. The integral of the mean square value of the response corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
Pauls-Worm, K.G.J.; Hendrix, E.M.T.; Haijema, R.; Vorst, van der J.G.A.J.
2014-01-01
We study the practical production planning problem of a food producer facing a non-stationary erratic demand for a perishable product with a fixed life time. In meeting the uncertain demand, the food producer uses a FIFO issuing policy. The food producer aims at meeting a certain service level at
On the dipole approximation with error estimates
Boßmann, Lea; Grummt, Robert; Kolb, Martin
2018-01-01
The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.
Sang, Huiyan
2011-12-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models. Our method allows for a nonseparable and nonstationary cross-covariance structure. We also present a covariance approximation approach to facilitate the computation in the modeling and analysis of very large multivariate spatial data sets. The covariance approximation consists of two parts: a reduced-rank part to capture the large-scale spatial dependence, and a sparse covariance matrix to correct the small-scale dependence error induced by the reduced rank approximation. We pay special attention to the case that the second part of the approximation has a block-diagonal structure. Simulation results of model fitting and prediction show substantial improvement of the proposed approximation over the predictive process approximation and the independent blocks analysis. We then apply our computational approach to the joint statistical modeling of multiple climate model errors. © 2012 Institute of Mathematical Statistics.
Analyzing the errors of DFT approximations for compressed water systems
International Nuclear Information System (INIS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-01-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³ where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the
Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...
Indian Academy of Sciences (India)
Abstract. In this paper, an error analysis is carried out for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...
Error Estimates for the Approximation of the Effective Hamiltonian
International Nuclear Information System (INIS)
Camilli, Fabio; Capuzzo Dolcetta, Italo; Gomes, Diogo A.
2008-01-01
We study approximation schemes for the cell problem arising in the homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.
Fractal image coding by an approximation of the collage error
Salih, Ismail; Smith, Stanley H.
1998-12-01
In fractal image compression an image is coded as a set of contractive transformations, and is guaranteed to generate an approximation to the original image when iteratively applied to any initial image. In this paper we present a method for mapping similar regions within an image by an approximation of the collage error; that is, range blocks can be approximated by a linear combination of domain blocks.
Reducing Approximation Error in the Fourier Flexible Functional Form
Directory of Open Access Journals (Sweden)
Tristan D. Skolrud
2017-12-01
The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
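The Box-Cox transformation that replaces the logarithmic expansion reduces to the natural logarithm as its parameter tends to zero, which is what makes the nested testing possible; a minimal sketch (illustrative, not the authors' estimation code):

```python
import math

def box_cox(y, lam):
    # Box-Cox transform of y > 0; continuous in lam, with the
    # natural log recovered in the limit lam -> 0.
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

At lam = 1 the transform is linear up to a constant, so the usual specifications are recovered as special cases of the transformed expansion.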
Agarwal, Mukul
2018-01-01
It is proved that the limit of the normalized rate-distortion functions of block independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution of the Markoff chain and thus, is also equal to the rate-distortion function of the Markoff chain.
Errors due to the cylindrical cell approximation in lattice calculations
Energy Technology Data Exchange (ETDEWEB)
Newmarch, D A [Reactor Development Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)
1960-06-15
It is shown that serious errors in fine structure calculations may arise through the use of the cylindrical cell approximation together with transport theory methods. The effect of this approximation is to overestimate the ratio of the flux in the moderator to the flux in the fuel. It is demonstrated that the use of the cylindrical cell approximation gives a flux in the moderator which is considerably higher than in the fuel, even when the cell dimensions in units of mean free path tend to zero; whereas, for the case of real cells (e.g. square or hexagonal), the flux ratio must tend to unity. It is also shown that, for cylindrical cells of any size, the ratio of the flux in the moderator to flux in the fuel tends to infinity as the total neutron cross section in the moderator tends to zero; whereas the ratio remains finite for real cells. (author)
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower order non-stationary statistical moments of the response are derived from the stochastic differential equations of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations.
Transient error approximation in a Lévy queue
Mathijsen, B.; Zwart, A.P.
2017-01-01
Motivated by a capacity allocation problem within a finite planning period, we conduct a transient analysis of a single-server queue with Lévy input. From a cost minimization perspective, we investigate the error induced by using stationary congestion measures as opposed to time-dependent measures.
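The error in question is easy to visualize in the simplest Lévy-input queue, the M/M/1 queue: started empty, its time-dependent mean queue length lies well below the stationary mean for a long while. A simulation sketch (illustrative; the paper's analysis is for general Lévy input):

```python
import random

def mm1_queue_length_at(t_end, lam, mu, rng):
    # Simulate an M/M/1 queue started empty and return Q(t_end).
    t, q = 0.0, 0
    while True:
        rate = lam + (mu if q > 0 else 0.0)   # total event rate
        dt = rng.expovariate(rate)
        if t + dt > t_end:
            return q
        t += dt
        if rng.random() < lam / rate:
            q += 1      # arrival
        else:
            q -= 1      # departure (only possible when q > 0)

def transient_mean(t_end, lam, mu, reps=4000, seed=7):
    rng = random.Random(seed)
    return sum(mm1_queue_length_at(t_end, lam, mu, rng) for _ in range(reps)) / reps
```

For lam = 0.8 and mu = 1 the stationary mean is rho/(1-rho) = 4, while the transient mean at t = 5 (starting empty) is much smaller; using the stationary figure over a short planning period overstates congestion, which is exactly the error the paper quantifies.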
Bryant, C. M.; Prudhomme, S.; Wildey, T.
2015-01-01
In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.
Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.
Wei, Qinglai; Li, Benkai; Song, Ruizhuo
2018-04-01
In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite-horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors are explicitly considered in the GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, showing that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.
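The "finite neighborhood" conclusion can be reproduced on a toy problem: value iteration on a two-state system with a bounded error injected at every update stays within eps/(1-gamma) of the optimum by the standard contraction argument. This is a sketch of the phenomenon, not the paper's GPI algorithm:

```python
import random

def noisy_value_iteration(gamma=0.9, eps=0.05, iters=300, seed=0):
    # Two states; action "stay" pays reward equal to the state (0 or 1),
    # action "move" pays 0 and flips the state. Optimal values are
    # V*(1) = 1/(1 - gamma) = 10 and V*(0) = gamma * V*(1) = 9.
    rng = random.Random(seed)
    V = [0.0, 0.0]
    for _ in range(iters):
        V = [max(s + gamma * V[s],           # stay
                 0.0 + gamma * V[1 - s])     # move
             + rng.uniform(-eps, eps)        # injected approximation error
             for s in (0, 1)]
    return V
```

Contraction gives |V_k - V*| <= gamma^k |V_0 - V*| + eps (1 - gamma^k)/(1 - gamma), so after many iterations the iterates lie within eps/(1 - gamma) = 0.5 of the optimum, a finite neighborhood rather than the exact fixed point.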
Sandberg, Mattias
2015-01-07
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
Hall, Eric
2016-01-09
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormal distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.
An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems
Karlsson, Peer Jesper
2015-01-07
This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems
Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul
2015-01-01
This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
Maximum error-bounded Piecewise Linear Representation for online stream approximation
Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke
2014-01-01
Given a time series data stream, the generation of an error-bounded Piecewise Linear Representation (error-bounded PLR) constructs a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed in a transformed space rather than the time-value space, while optimal solutions based on the original time domain (i.e., the time-value space) are still lacking. In this article, we propose two linear-time algorithms, named OptimalPLR and GreedyPLR, to construct error-bounded PLR for data streams in the time domain. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for high-efficiency and resource-constrained environments. To evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and this transformed space, and also found that OptimalPLR is superior in processing efficiency in practice. Extensive empirical results demonstrate the effectiveness and efficiency of our proposed algorithms.
Hall, Eric Joseph
2016-12-08
We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.
Estimating the approximation error when fixing unessential factors in global sensitivity analysis
Energy Technology Data Exchange (ETDEWEB)
Sobol' , I.M. [Institute for Mathematical Modelling of the Russian Academy of Sciences, Moscow (Russian Federation); Tarantola, S. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre of the European Commission, TP361, Institute of the Protection and Security of the Citizen, Via E. Fermi 1, 21020 Ispra (Italy)]. E-mail: debora.gatelli@jrc.it; Kucherenko, S.S. [Imperial College London (United Kingdom); Mauntz, W. [Department of Biochemical and Chemical Engineering, Dortmund University (Germany)
2007-07-15
One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential factors. A simple function where analytical solutions are available is used to illustrate the theorem. The numerical estimation of small sensitivity indices is discussed.
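The quantity the theorem estimates — the mean-square error introduced by freezing a factor — can be checked by brute-force Monte Carlo on a toy function (illustrative; the paper's estimator avoids this brute-force double evaluation):

```python
import random

def fixing_error(f, x2_fixed, n=100_000, seed=2):
    # Brute-force Monte Carlo estimate of E[(f(X1, X2) - f(X1, x2_fixed))^2]
    # with X1, X2 independent Uniform(0, 1).
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x1, x2 = rng.random(), rng.random()
        d = f(x1, x2) - f(x1, x2_fixed)
        acc += d * d
    return acc / n
```

For f(x1, x2) = x1 + 0.01 x2 with the non-influential x2 frozen at its mean 0.5, the exact value is 1e-4 * Var[X2] = 1e-4/12 ≈ 8.3e-6, negligible next to Var[f] ≈ 1/12, which is why fixing such a factor is safe.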
Approximate damped oscillatory solutions and error estimates for the perturbed Klein–Gordon equation
International Nuclear Information System (INIS)
Ye, Caier; Zhang, Weiguo
2015-01-01
Highlights: • Analyze the dynamical behavior of the planar dynamical system corresponding to the perturbed Klein–Gordon equation. • Present the relations between the properties of traveling wave solutions and the perturbation coefficient. • Obtain all explicit expressions of approximate damped oscillatory solutions. • Investigate error estimates between exact damped oscillatory solutions and the approximate solutions and give some numerical simulations. - Abstract: The influence of perturbation on traveling wave solutions of the perturbed Klein–Gordon equation is studied by applying the bifurcation method and qualitative theory of dynamical systems. All possible approximate damped oscillatory solutions for this equation are obtained by the undetermined coefficient method. Error estimates indicate that the approximate solutions are meaningful. The results of numerical simulations also support our analysis.
Directory of Open Access Journals (Sweden)
Lee HyunYoung
2010-01-01
We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.
First error bounds for the porous media approximation of the Poisson-Nernst-Planck equations
Energy Technology Data Exchange (ETDEWEB)
Schmuck, Markus [Imperial College, London (United Kingdom). Depts. of Chemical Engineering and Mathematics
2012-04-15
We study the well-accepted Poisson-Nernst-Planck equations modeling transport of charged particles. By formal multiscale expansions we rederive the porous media formulation obtained by two-scale convergence in [42, 43]. The main result is the derivation of the error which occurs after replacing a highly heterogeneous solid-electrolyte composite by a homogeneous one. The derived estimates show that the approximation errors for both the ion densities, quantified in the L²-norm, and the electric potential, measured in the H¹-norm, are of order O(s^(1/2)). (orig.)
Capacitor Mismatch Error Cancellation Technique for a Successive Approximation A/D Converter
DEFF Research Database (Denmark)
Zheng, Zhiliang; Moon, Un-Ku; Steensgaard-Madsen, Jesper
1999-01-01
An error cancellation technique is described for suppressing capacitor mismatch in a successive approximation A/D converter. At the cost of a 50% increase in conversion time, the first-order capacitor mismatch error is cancelled. Methods for achieving top-plate parasitic insensitive operation are described, and the use of a gain- and offset-compensated opamp is explained. SWITCAP simulation results show that the proposed 16-bit SAR ADC can achieve an SNDR of over 91 dB under non-ideal conditions, including 1% 3 sigma nominal capacitor mismatch, 10-20% randomized parasitic capacitors, 66 dB opamp...
Optimized implementations of rational approximations for the Voigt and complex error function
International Nuclear Information System (INIS)
Schreier, Franz
2011-01-01
Rational functions are frequently used as efficient yet accurate numerical approximations for real and complex valued functions. For the complex error function w(x+iy), whose real part is the Voigt function K(x,y), code optimizations of rational approximations are investigated. An assessment of requirements for atmospheric radiative transfer modeling indicates a y range over many orders of magnitude and accuracy better than 10^(-4). Following a brief survey of complex error function algorithms in general, and rational function approximations in particular, the problems associated with subdivisions of the x, y plane (i.e., conditional branches in the code) are discussed, and practical aspects of Fortran and Python implementations are considered. Benchmark tests of a variety of algorithms demonstrate that programming language, compiler choice, and implementation details influence computational speed, and there is no unique ranking of algorithms. A new implementation, based on subdivision of the upper half-plane into only two regions, combining Weideman's rational approximation for |x| + y < 15 and Humlicek's rational approximation otherwise, is shown to be efficient and accurate for all x, y.
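The flavor of these rational approximations: for |x| + y roughly above 15, Humlicek's one-term region-I form w(z) ≈ t/√π/(1/2 + t²) with t = y − ix is already quite accurate, and the sketch below checks it against a deeper truncation of the Laplace continued fraction for w(z). This is an illustration of the two-region idea, not the paper's optimized implementation:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def w_continued_fraction(z, depth=12):
    # Laplace continued fraction for the complex error function w(z),
    # w(z) = (i/sqrt(pi)) / (z - (1/2)/(z - 1/(z - (3/2)/(z - ...)))),
    # accurate for large |z| in the upper half-plane.
    f = 0j
    for n in range(depth, 0, -1):
        f = (n / 2.0) / (z - f)
    return 1j / SQRT_PI / (z - f)

def w_humlicek_region1(z):
    # One-term rational approximation (Humlicek's region |x| + y >= 15).
    t = -1j * z   # t = y - i*x for z = x + i*y
    return t / SQRT_PI / (0.5 + t * t)
```

The one-term form is just the first truncation of the same continued fraction, which is why it degrades gracefully and only needs replacing (e.g. by Weideman's approximation) near the real axis.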
Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials
Finster, Felix; Smoller, Joel
2010-09-01
A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Nucleation theory - Is replacement free energy needed? [Error analysis of capillary approximation]
Doremus, R. H.
1982-01-01
It has been suggested that the classical theory of nucleation of liquid from its vapor as developed by Volmer and Weber (1926) needs modification with a factor referred to as the replacement free energy and that the capillary approximation underlying the classical theory is in error. Here, the classical nucleation equation is derived from fluctuation theory, Gibbs' result for the reversible work to form a critical nucleus, and the rate of collision of gas molecules with a surface. The capillary approximation is not used in the derivation. The chemical potential of small drops is then considered, and it is shown that the capillary approximation can be derived from thermodynamic equations. The results show that no corrections to Volmer's equation are needed.
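For reference, the classical (Volmer-type) expressions at issue, in their standard textbook form (generic notation, not taken from the paper itself): the steady-state nucleation rate, the reversible work to form a critical nucleus, and the critical radius for a drop condensing from supersaturated vapor,

```latex
J \;=\; K \exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right),
\qquad
\Delta G^{*} \;=\; \frac{16\pi\,\sigma^{3} v_{m}^{2}}{3\,\bigl(k_{B}T\,\ln S\bigr)^{2}},
\qquad
r^{*} \;=\; \frac{2\,\sigma\, v_{m}}{k_{B}T\,\ln S},
```

where σ is the surface tension, v_m the molecular volume, S the supersaturation ratio, and K a kinetic prefactor coming from the rate of collision of gas molecules with the nucleus surface.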
The Influence of Gaussian Signaling Approximation on Error Performance in Cellular Networks
Afify, Laila H.
2015-08-18
Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstracts away many important wireless communication aspects. Recently, a novel technique based on the Equivalent-in-Distribution (EiD) approach was proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this paper, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details, similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.
Directory of Open Access Journals (Sweden)
E. Castelli
2016-11-01
MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) is a mid-infrared limb emission sounder that operated on board the polar satellite ENVISAT from 2002 to 2012. The retrieval algorithm used by the European Space Agency to process MIPAS measurements exploits the assumption that the atmosphere is horizontally homogeneous. However, previous studies highlighted how this assumption causes significant errors in the retrieved profiles of some MIPAS target species. In this paper we quantify the errors induced by this assumption and evaluate the performance of three different algorithms that can be used to mitigate the problem. We generate synthetic observations with a high-spatial-resolution atmospheric model and carry out the retrievals with four alternative methods. The first assumes horizontal homogeneity (1-D retrieval), the second includes a model of the horizontal gradient of atmospheric temperature (1-D plus temperature gradient retrieval), the third accounts for a horizontal gradient of temperature and composition (1-D plus temperature and composition gradient retrieval), while the fourth is the full two-dimensional (2-D) inversion approach. Our results highlight that the 1-D retrieval implies errors that are significant for averages of profiles. Furthermore, for some targets (e.g. T, CH4 and N2O below 10 hPa) the error induced by the 1-D approximation also becomes visible in the individual retrieved profiles. The inclusion of any kind of horizontal variability model improves all the targets with respect to the horizontal homogeneity assumption. For temperature, HNO3 and CFC-11, the inclusion of a horizontal temperature gradient leads to a significant reduction of the error. For other targets, such as H2O, O3, N2O and CH4, the improvements due to the inclusion of a horizontal temperature gradient are minor. In these cases, the inclusion of a gradient in the target volume mixing ratio leads to significant improvements. Among all the [...]
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches stem from the principle of minimizing the mean squared distance, which admits computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1) [...] the application of min-plus algebra. The approach can be applied in most existing machine learning methods, including methods of data approximation and regularized and sparse regression, leading to an improvement in the computational cost/accuracy trade-off. We demonstrate that on synthetic and real-life datasets PQSQ-based machine learning methods achieve orders of magnitude faster computational performance than the corresponding state-of-the-art methods, with similar or better approximation accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Minimization of the effect of errors in approximate radiation view factors
International Nuclear Information System (INIS)
Clarksean, R.; Solbrig, C.
1993-01-01
The maximum temperature of irradiated fuel rods in storage containers was investigated, taking credit only for radiation heat transfer. Estimating view factors is often easy, but many references emphasize calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent errors in the view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivities. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux result (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison with experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors
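The reciprocity and enclosure rules mentioned above are straightforward to enforce on an approximate view-factor matrix by alternating projections; the iteration below is our own illustrative sketch, not the paper's procedure. Reciprocity says A_i F_ij = A_j F_ji; the enclosure rule says each row of F sums to 1.

```python
import numpy as np

def enforce_view_factor_rules(F, A, n_iter=100):
    """Project an approximate (positive) view-factor matrix onto one
    satisfying reciprocity (A_i F_ij = A_j F_ji) and the enclosure rule
    (sum_j F_ij = 1), by alternately symmetrizing G_ij = A_i F_ij and
    rescaling its rows to the surface areas."""
    A = np.asarray(A, dtype=float)
    G = A[:, None] * np.asarray(F, dtype=float)   # G_ij = A_i F_ij
    for _ in range(n_iter):
        G = 0.5 * (G + G.T)                       # reciprocity: G symmetric
        G *= (A / G.sum(axis=1))[:, None]         # enclosure: row sums = A_i
    return G / A[:, None]
```

For strictly positive matrices this alternating scheme converges quickly; in practice a few dozen iterations leave both residuals at round-off level.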
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Energy Technology Data Exchange (ETDEWEB)
Wilkening, Jon
2008-12-10
The Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds' equation may be thought of as the zeroth-order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
International Nuclear Information System (INIS)
Kulakovskij, M.Ya.; Savitskij, V.I.
1981-01-01
The errors in multigroup calculations of the neutron flux spatial and energy distribution in a fast reactor shield, caused by using the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 free-path lengths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results depend only weakly on the number of groups. As the distance increases, the multigroup diffusion calculations considerably overestimate the slowing-down density. The conclusion is drawn that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other.
DEFF Research Database (Denmark)
Picchini, Umberto; Forman, Julie Lyng
2016-01-01
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm [...] applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to large-size protein data. The suggested methodology is fairly general [...]
For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.
Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie
2013-08-01
The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, via an action-video naming task, the importance of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider this type of utterance from a new perspective: we propose to eliminate the label of "error", suggesting that such utterances may be viewed as semantic approximations based upon a relationship of inter-domain synonymy and ingrained in the heart of the lexical system.
Energy Technology Data Exchange (ETDEWEB)
Ju, Lili; Tian, Li; Wang, Desheng
2008-10-31
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
A simple nonstationary-volatility robust panel unit root test
Demetrescu, Matei; Hanck, Christoph
2012-01-01
We propose an IV panel unit root test robust to nonstationary error volatility. Its finite-sample performance is convincing even for many units and strong cross-correlation. An application to GDP prices illustrates the inferential impact of nonstationary volatility. (C) 2012 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Hyun Young Lee
2010-01-01
We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial and the temporal direction.
Sang, Huiyan; Jun, Mikyoung; Huang, Jianhua Z.
2011-01-01
This paper investigates the cross-correlations across multiple climate model errors. We build a Bayesian hierarchical model that accounts for the spatial dependence of individual models as well as cross-covariances across different climate models.
Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar
2012-01-01
In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.
Use and Subtleties of Saddlepoint Approximation for Minimum Mean-Square Error Estimation
DEFF Research Database (Denmark)
Beierholm, Thomas; Nuttall, Albert H.; Hansen, Lars Kai
2008-01-01
[...] integral representation. However, the examples also demonstrate that when two saddle points are close or coalesce, a saddle-point approximation based on isolated saddle points is not valid. A saddle-point approximation based on two close or coalesced saddle points is derived, and in the examples the validity and accuracy of the derivation are demonstrated.
The refractive index in electron microscopy and the errors of its approximations
Energy Technology Data Exchange (ETDEWEB)
Lentzen, M.
2017-05-15
In numerical calculations for electron diffraction a simplified form of the electron-optical refractive index, linear in the electric potential, is often used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second- and higher-order corrections thus determined have, however, a large error compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.
Nonstationary quantum mechanics
International Nuclear Information System (INIS)
Todorov, N.S.
1981-01-01
Some peculiarities of the results of nonstationary perturbation theory in the presence of a degenerate continuous energy spectrum are considered. Their relevance to the ideology of the preceding articles in this series is discussed. (author)
Schur Complement Reduction in the Mixed-Hybrid Approximation of Darcy's Law: Rounding Error Analysis
Czech Academy of Sciences Publication Activity Database
Maryška, Jiří; Rozložník, Miroslav; Tůma, Miroslav
2000-01-01
Roč. 117, - (2000), s. 159-173 ISSN 0377-0427 R&D Projects: GA AV ČR IAA2030706; GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : potential fluid flow problem * symmetric indefinite linear systems * Schur complement reduction * iterative methods * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000
Minimum mean square error estimation and approximation of the Bayesian update
Litvinenko, Alexander; Matthies, Hermann G.; Zander, Elmar
2015-01-01
Given: a physical system modeled by a PDE or ODE with uncertain coefficient q(ω), and a measurement operator Y(u(q); q), where u(q; ω) is the uncertain solution. Aim: to identify q(ω). The mapping from parameters to observations is usually not invertible, hence this inverse identification problem is generally ill-posed. To identify q(ω) we derive a non-linear Bayesian update from the variational problem associated with conditional expectation. To reduce the cost of the Bayesian update we offer a functional approximation, e.g. a polynomial chaos expansion (PCE). New: we derive linear, quadratic, etc., approximations of the full Bayesian update.
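The linear approximation of the conditional-expectation update can be pictured as a Kalman-like gain built from sample covariances; a minimal sketch (ensemble size and the identity measurement operator used in the test are our own illustration, not the authors' construction; note that without perturbed observations only the updated mean is meaningful, not the posterior spread):

```python
import numpy as np

def linear_bayesian_update(q_ens, y_ens, y_obs, noise_cov):
    """Linear (Kalman-like) approximation of the conditional-expectation
    Bayesian update: q_a = q_f + K (y_obs - Y(q_f)), with the gain K
    assembled from ensemble covariances."""
    Q = np.atleast_2d(q_ens)          # (n_ens, dim_q) prior samples of q
    Y = np.atleast_2d(y_ens)          # (n_ens, dim_y) predicted measurements
    Qc = Q - Q.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = Q.shape[0]
    C_qy = Qc.T @ Yc / (n - 1)        # cross-covariance of q and Y(q)
    C_yy = Yc.T @ Yc / (n - 1) + noise_cov
    K = C_qy @ np.linalg.inv(C_yy)    # linear minimum-mean-square-error gain
    return Q + (y_obs - Y) @ K.T      # updated ensemble
```

For the scalar linear Gaussian case q ~ N(0, 1) observed through y = q + noise with unit noise variance, the gain tends to 1/2 and the updated mean tends to y/2, the exact posterior mean.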
A FEM approximation of a two-phase obstacle problem and its a posteriori error estimate
Czech Academy of Sciences Publication Activity Database
Bozorgnia, F.; Valdman, Jan
2017-01-01
Roč. 73, č. 3 (2017), s. 419-432 ISSN 0898-1221 R&D Projects: GA ČR(CZ) GF16-34894L; GA MŠk(CZ) 7AMB16AT015 Institutional support: RVO:67985556 Keywords : A free boundary problem * A posteriori error analysis * Finite element method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.531, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/valdman-0470507.pdf
Frolov, Maxim; Chistiakova, Olga
2017-06-01
This paper is devoted to a numerical justification of a recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.
Nonstationary quantum mechanics
International Nuclear Information System (INIS)
Todorov, N.S.
1981-01-01
It is shown that the nonstationary Schrödinger equation does not satisfy a well-known adiabatic principle in thermodynamics. A ''renormalization procedure'' based on the possible existence of a time-irreversible basic evolution equation is proposed, with the help of which one comes to agreement in a variety of specific cases of adiabatic inclusion of a perturbing potential. The ideology of the present article rests essentially on that of the preceding articles, in particular article I. (author)
International Nuclear Information System (INIS)
Lin, Chang Sheng; Chiang, Dar Yun
2012-01-01
Modal identification is considered from response data of structural system under nonstationary ambient vibration. In a previous paper, we showed that by assuming the ambient excitation to be nonstationary white noise in the form of a product model, the nonstationary response signals can be converted into free-vibration data via the correlation technique. In the present paper, if the ambient excitation can be modeled as a nonstationary white noise in the form of a product model, then the nonstationary cross random decrement signatures of structural response evaluated at any fixed time instant are shown theoretically to be proportional to the nonstationary cross-correlation functions. The practical problem of insufficient data samples available for evaluating nonstationary random decrement signatures can be approximately resolved by first extracting the amplitude-modulating function from the response and then transforming the nonstationary responses into stationary ones. Modal-parameter identification can then be performed using the Ibrahim time-domain technique, which is effective at identifying closely spaced modes. The theory proposed can be further extended by using the filtering concept to cover the case of nonstationary colored excitations. Numerical simulations confirm the validity of the proposed method for identification of modal parameters from nonstationary ambient response data.
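The random decrement step can be illustrated with a bare-bones level-crossing estimator for the stationary case (trigger level and segment length below are arbitrary illustrative choices; the amplitude-demodulation step described in the abstract for the nonstationary case is not included):

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Average the response segments that begin at each up-crossing of the
    trigger level; for white-noise-driven linear systems this average is
    proportional to the free-decay (correlation-shaped) signature."""
    starts = np.where((x[:-1] < trigger) & (x[1:] >= trigger))[0] + 1
    starts = starts[starts + seg_len <= len(x)]
    segs = np.stack([x[s:s + seg_len] for s in starts])
    return segs.mean(axis=0)
```

By construction the signature starts at (or just above) the trigger level, and averaging over many segments cancels the random forcing while retaining the deterministic decay carrying the modal information.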
Nonstationary quantum mechanics. 5. Nonstationary quantum models of scattering
Energy Technology Data Exchange (ETDEWEB)
Todorov, N S [Low Temperature Department of the Institute of Solid State Physics of the Bulgarian Academy of Sciences, Sofia
1981-05-01
Some peculiarities of the results of nonstationary perturbation theory in the presence of a degenerate continuous energy spectrum are considered. Their relevance to the ideology of the preceding articles in this series is discussed.
Directory of Open Access Journals (Sweden)
2017-01-01
The article deals with a new approximation method for the error distribution of an enhanced-accuracy measurement system. The method is based upon the mistie analysis of this system and more robust design data. The method is considered on the example of comparing Automatic Dependent Surveillance - Broadcast (ADS-B) with the ground radar warning system used at present. The peculiarity of the considered problem is that the target parameter (aircraft swerve value) may drastically change on the scale of both measurement systems' errors during observation. That is why it is impossible to determine the position of the aircraft by repeatedly observing it with the ground radar warning system. It is only possible to compare the systems' one-shot measurements, which are called errors here. The article considers that the probability density of the robust measurement system's errors (the system that has been continuously in operation) is known, the histogram of errors is given, and an asymptotic estimate of the error distribution for a new improved measurement system is to be obtained. This approach is based on cumulant analysis of the measurement systems' error distribution functions, which allows a proper reduction of the corresponding infinite series. The author shows that due to the measurement systems' independence, the cumulants of their error distributions are connected by a simple ratio, which allows the values to be calculated easily. To reconstruct the initial form of the distribution one should use Edgeworth's asymptotic series, where a derivative of the normal distribution is used as a basis function. The latter is proportional to a Hermite polynomial, thus the series can be considered as an orthogonal decomposition. The author reveals the results of calculating the distribution of the coordinate error component, measured when the normal line lies towards the aircraft path, using experimental error statistics obtained in "RI of [...]
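The Edgeworth machinery mentioned above, in its standard first-correction form (textbook formulas using the third and fourth cumulants, not the article's specific series): the density is a Gaussian times Hermite-polynomial corrections whose coefficients come from the standardized cumulants.

```python
import numpy as np

def edgeworth_pdf(x, mean, var, k3, k4):
    """First Edgeworth corrections to a normal density from the third and
    fourth cumulants. The basis functions are derivatives of the normal
    pdf, i.e. (probabilists') Hermite polynomials times the Gaussian."""
    s = np.sqrt(var)
    z = (x - mean) / s
    phi = np.exp(-0.5 * z ** 2) / (s * np.sqrt(2.0 * np.pi))
    g1 = k3 / s ** 3                  # standardized skewness
    g2 = k4 / s ** 4                  # standardized excess kurtosis
    He3 = z ** 3 - 3 * z
    He4 = z ** 4 - 6 * z ** 2 + 3
    He6 = z ** 6 - 15 * z ** 4 + 45 * z ** 2 - 15
    return phi * (1.0 + g1 * He3 / 6.0 + g2 * He4 / 24.0
                  + g1 ** 2 * He6 / 72.0)
```

Because each Hermite term integrates to zero against the Gaussian, the corrected density still integrates to one (though, as with any truncated Edgeworth series, it may go slightly negative in the tails for large cumulants).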
Non-Stationary Internal Tides Observed with Satellite Altimetry
Ray, Richard D.; Zaron, E. D.
2011-01-01
Temporal variability of the internal tide is inferred from a 17-year combined record of the Topex/Poseidon and Jason satellite altimeters. A global sampling of along-track sea-surface height wavenumber spectra finds that non-stationary variance is generally 25% or less of the average variance at wavenumbers characteristic of mode-1 tidal internal waves. With some exceptions the non-stationary variance does not exceed 0.25 sq cm. The mode-2 signal, where detectable, contains a larger fraction of non-stationary variance, typically 50% or more. Temporal subsetting of the data reveals interannual variability barely significant compared with tidal estimation error from 3-year records. Comparison of summer vs. winter conditions shows only one region of noteworthy seasonal changes, the northern South China Sea. Implications for the anticipated SWOT altimeter mission are briefly discussed.
Nonstationary Narrow-Band Response and First-Passage Probability
DEFF Research Database (Denmark)
Krenk, Steen
1979-01-01
The notion of a nonstationary narrow-band stochastic process is introduced without reference to a frequency spectrum, and the joint distribution function of two consecutive maxima is approximated by use of an envelope. Based on these definitions the first-passage problem is treated as a Markov po [...]
Information retrieval for nonstationary data records
Su, M. Y.
1971-01-01
A review and critical discussion of existing methods for the analysis of nonstationary time series are given, and a new algorithm for splitting nonstationary time series is applied to the analysis of sunspot data.
Parametric modelling of nonstationary platform deck motions
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.
[...] with fast Fourier transform spectra and show good agreement. However, the higher-order maximum entropy model can be used for better representation of nonstationary motions. This method also reduces long time series of nonstationary offshore data into a few...
Photorefraction in crystals with nonstationary photovoltaic current
International Nuclear Information System (INIS)
Volk, T.R.; Astaf'ev, S.B.; Razumovskij, N.V.
1995-01-01
The effect of nonstationary photovoltaic current components, conditioned by the nonstationary character of photovoltaic centers, on the photorefractive properties of LiNbO3 crystals is considered. Analytic expressions describing the effect of nonstationary photovoltaic current on the kinetics of recording and optical erasure of photorefraction are obtained. The possibility of nonstationary photovoltaic current occurrence in crystals with a multilevel charge transfer scheme is considered. The effect of recording light pulse duration on photorefraction in LiNbO3 is discussed. 25 refs., 8 figs
Nonstationary statistical theory for multipactor
International Nuclear Information System (INIS)
Anza, S.; Vicente, C.; Gil, J.; Boria, V. E.; Gimeno, B.; Raboso, D.
2010-01-01
This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.
Enhanced tunneling through nonstationary barriers
International Nuclear Information System (INIS)
Palomares-Baez, J. P.; Rodriguez-Lopez, J. L.; Ivlev, B.
2007-01-01
Quantum tunneling through a nonstationary barrier is studied analytically and by direct numerical solution of the Schrödinger equation. Both methods agree and show that the main features of the phenomenon can be described in terms of classical trajectories which are solutions of Newton's equation in complex time. The probability of tunneling is governed by the analytical properties of the time-dependent perturbation and the classical trajectory in the plane of complex time. Some preliminary numerical calculations of Euclidean resonance (an easy penetration through a classical nonstationary barrier due to underbarrier interference) are presented
International Nuclear Information System (INIS)
Nkemzi, Boniface
2003-10-01
This paper is concerned with the effective implementation of the Fourier-finite-element method, which combines the approximating Fourier and finite-element methods, for treating the Dirichlet problem for the Lamé equations in axisymmetric domains Ω̂ ⊂ R³ with conical vertices and reentrant edges. The partial Fourier decomposition reduces the three-dimensional boundary value problem to an infinite sequence of decoupled two-dimensional boundary value problems on the plane meridian domain Ω_a ⊂ R²₊ of Ω̂, with solutions u_n (n = 0, 1, 2, ...) being the Fourier coefficients of the solution u of the 3D problem. The asymptotic behavior of the Fourier coefficients near the angular points of Ω_a is described by appropriate singular vector functions and treated numerically by linear finite elements on locally graded meshes. For a right-hand side f̂ ∈ (L₂(Ω̂))³ it is proved that with appropriate mesh grading the rate of convergence of the combined approximations in (W₂¹(Ω̂))³ is of the order O(h + N⁻¹), where h and N are the parameters of the finite-element and Fourier approximations, respectively, with h → 0 and N → ∞. (author)
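Schematically, in standard notation (our paraphrase; the paper's own splitting uses cosine/sine pairs rather than complex exponentials):

```latex
\hat{u}(r,\varphi,z) \;=\; \sum_{n=0}^{\infty} u_n(r,z)\, e^{\,\mathrm{i} n \varphi},
\qquad
\bigl\| \hat{u} - \hat{u}_{h,N} \bigr\|_{(W_2^1(\hat{\Omega}))^3}
\;=\; \mathcal{O}\!\left(h + N^{-1}\right),
\quad h \to 0,\; N \to \infty .
```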
International Nuclear Information System (INIS)
Ceolin, C.; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T.
2015-01-01
Recently the stationary neutron diffusion equation in heterogeneous rectangular geometry was solved by expanding the scalar fluxes in polynomials in the spatial variables (x, y), considering the two-group energy model. The present discussion focuses on an error analysis of that solution. More specifically, we show how the spatial subdomain segmentation is related to the degree of the polynomial and to the Lipschitz constant. This relation makes it possible to solve the 2-D neutron diffusion problem with second-degree polynomials in each subdomain. The solution is exact at the knots where the Lipschitz cone is centered. Moreover, the solution has an analytical representation in each subdomain, with supremum and infimum functions that show the convergence of the solution. We illustrate the analysis with a selection of numerical case studies. (author)
Energy Technology Data Exchange (ETDEWEB)
Ceolin, C., E-mail: celina.ceolin@gmail.com [Universidade Federal de Santa Maria (UFSM), Frederico Westphalen, RS (Brazil). Centro de Educacao Superior Norte; Schramm, M.; Bodmann, B.E.J.; Vilhena, M.T., E-mail: celina.ceolin@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica
2015-07-01
Directory of Open Access Journals (Sweden)
Francisco Javier Bucio
2017-10-01
Due to its nutritional and economic value, the tomato is considered one of the main vegetables in terms of production and consumption in the world. An important case to study is therefore fruit maturation, parametrized here by mass loss, a process that develops in the fruit mainly after harvest. Since mass loss affects the economic value of the crop, the scientific community has been progressively approaching the issue; however, there is not yet a practical state-of-the-art model for predicting tomato fruit mass loss. This study proposes a prediction model for tomato mass loss over a continuous, definite time frame using regression methods. The model is based on a combination of fitting methods, such as least-squares polynomial regression with error estimation, and cross-validation techniques. Experimental results from a sample of 50 tomato fruits studied over a 54-day period were compared to results from the model; a second-order polynomial was found to provide the optimal data fit, with a resulting efficiency of ~97%. The model also allows the design of precise logistic strategies centered on post-harvest tomato mass-loss prediction, usable by producers, distributors, and consumers.
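The fit-then-cross-validate procedure described in this abstract can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the authors' dataset or code; the mass-loss curve, noise level, and sample size per day are invented.

```python
import numpy as np

# Hypothetical illustration: fit polynomials to mass-loss data and estimate
# predictive error by leave-one-out cross-validation (LOOCV).
rng = np.random.default_rng(0)
days = np.arange(1, 55, dtype=float)                 # 54-day observation window
true_loss = 0.5 * days + 0.004 * days**2             # synthetic mass loss (%)
mass_loss = true_loss + rng.normal(0, 0.3, days.size)

def loocv_rmse(x, y, degree):
    """Leave-one-out cross-validated RMSE for a polynomial fit of given degree."""
    errors = []
    for i in range(x.size):
        mask = np.ones(x.size, dtype=bool)
        mask[i] = False                              # hold out one observation
        coeffs = np.polyfit(x[mask], y[mask], degree)
        errors.append(y[i] - np.polyval(coeffs, x[i]))
    return float(np.sqrt(np.mean(np.square(errors))))

# A quadratic captures the curvature of this smooth drying curve; a line does not.
rmse_linear = loocv_rmse(days, mass_loss, 1)
rmse_quad = loocv_rmse(days, mass_loss, 2)
print(rmse_linear, rmse_quad)
```

Selecting the degree by cross-validated error, rather than in-sample fit, is what guards against over-fitting in such small samples.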
Wavelet analysis for nonstationary signals
International Nuclear Information System (INIS)
Penha, Rosani Maria Libardi da
1999-01-01
Mechanical vibration signals play an important role in identifying anomalies resulting from equipment malfunction. Traditionally, Fourier spectral analysis is used, where the signals are assumed to be stationary. However, occasional transient impulses and start-up processes are examples of nonstationary signals that can be found in mechanical vibrations. These signals can provide important information about the equipment condition, such as early fault detection. Fourier analysis cannot adequately be applied to nonstationary signals because its results describe the frequency composition averaged over the duration of the signal. In this work, two methods for nonstationary signal analysis are used: the Short Time Fourier Transform (STFT) and the wavelet transform. The STFT adapts Fourier spectral analysis to nonstationary applications in the time-frequency domain; its main limitation is a single fixed resolution throughout the entire time-frequency domain. The wavelet transform is a newer analysis technique suitable for nonstationary signals, which overcomes the STFT drawbacks by providing multi-resolution frequency analysis and time localization in a single time-scale graphic. The multiple frequency resolutions are obtained by scaling (dilation/compression) the wavelet function. A comparison of the conventional Fourier transform, the STFT, and the wavelet transform is made by applying these techniques to simulated signals, a rotor rig vibration signal, and a rotating machine vibration signal. A Hanning window was used for the STFT analysis; Daubechies and harmonic wavelets were used for the continuous, discrete, and multi-resolution wavelet analyses. The results show that Fourier analysis was not able to detect changes in the signal frequencies or discontinuities. The STFT analysis detected the changes in the signal frequencies, but with time-frequency resolution problems. The continuous and discrete wavelet transforms proved to be highly efficient tools to detect ...
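The STFT side of the comparison can be sketched with a minimal synthetic example (not the rig data): a signal whose frequency jumps halfway through, which a plain Fourier spectrum only averages, but which a Hann-windowed STFT localizes in time.

```python
import numpy as np
from scipy import signal

# Synthetic nonstationary signal: 50 Hz for the first second, 120 Hz afterwards.
fs = 1024.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

# Short Time Fourier Transform with a Hann window (the abstract uses a Hanning
# window); 256-sample segments give 4 Hz frequency resolution.
f, tau, Zxx = signal.stft(x, fs=fs, window="hann", nperseg=256)

# Dominant frequency in an early and a late time slice:
early = f[np.argmax(np.abs(Zxx[:, 2]))]      # around t = 0.25 s
late = f[np.argmax(np.abs(Zxx[:, -3]))]      # around t = 1.75 s
print(early, late)
```

The fixed 256-sample window is exactly the "unique resolution" limitation the abstract mentions: shrinking it sharpens time localization but coarsens the 4 Hz frequency resolution, which is what motivates the wavelet transform's multi-resolution analysis.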
Mallak, Saed
1996-01-01
Ankara: Department of Mathematics and Institute of Engineering and Sciences of Bilkent University, 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references (leaf 29). In this work, we studied the ergodicity of non-stationary Markov chains. We give several examples covering different cases. We proved that given a sequence of Markov chains such that the limit of this sequence is an ergodic Markov chain, then the limit of the combination ...
Splines employment for inverse problem of nonstationary thermal conduction
International Nuclear Information System (INIS)
Nikonov, S.P.; Spolitak, S.I.
1985-01-01
An analytical solution has been obtained for an inverse problem of nonstationary thermal conduction, which arises in nonstationary heat transfer data processing when rewetting is investigated in channels with uniform annular fuel element imitators. In solving the problem, both the boundary conditions and the power density within the imitator are regularized via cubic splines constructed using the Reinsch algorithm. The solution can be applied to the calculation of the temperature distribution in the imitator and of the heat flux in a two-dimensional approximation (r-z geometry), under the condition that the rewetting front velocity is known, and in a one-dimensional r-approximation in cases with negligible axial transport or when data about the velocity of the temperature disturbance source along the channel are lacking.
Radiation of light impurities in a nonstationary plasma
International Nuclear Information System (INIS)
Abramov, V.A.; Krotova, G.I.
1984-01-01
In the framework of a nonstationary coronal model, taking into account the latest data on elementary process cross sections, calculations of the oxygen radiation power are performed. It is shown that when the electron temperature nonstationarity characteristic of the initial stage in present-day tokamaks is taken into account, the line emission power in the principal maximum region (T_e ≈ 40 eV) changes only slightly, whereas the radiation power in the second maximum (T_e ≈ 100 eV) increases approximately 20-fold compared with the stationary values.
Learning for Nonstationary Dirichlet Processes
Czech Academy of Sciences Publication Activity Database
Quinn, A.; Kárný, Miroslav
2007-01-01
Vol. 21, No. 10 (2007), pp. 827-855. ISSN 0890-6327. R&D Projects: GA AV ČR 1ET100750401. Grant - others: MŠk ČR (CZ) 2C06001, Program: 2C. Institutional research plan: CEZ:AV0Z10750506. Keywords: nonstationary processes * learning * Dirichlet processes * forgetting. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.776, year: 2007. http://library.utia.cas.cz/separaty/2007/as/karny- learning for nonstationary dirichlet processes.pdf
Study on statistical analysis of nonlinear and nonstationary reactor noises
International Nuclear Information System (INIS)
Hayashi, Koji
1993-03-01
For the purpose of identifying nonlinear mechanisms and diagnosing nuclear reactor systems, analysis methods for nonlinear reactor noise have been studied. By adding a newly developed approximate response function to GMDH, a conventional nonlinear identification method, a useful method for nonlinear spectral analysis and identification of nonlinear mechanisms has been established. Measurement experiments and analysis were performed on the reactor power oscillation observed in the NSRR installed at JAERI, and the cause of the instability was clarified. Furthermore, analysis and data recording methods for nonstationary noise have been studied. By improving the time resolution of the instantaneous autoregressive spectrum, a method for monitoring and diagnosing the operational status of a nuclear reactor has been established. A preprocessing system for recording nonstationary reactor noise was developed and its usability was demonstrated through a measurement experiment. (author) 139 refs
Deviations from uniform power law scaling in nonstationary time series
Viswanathan, G. M.; Peng, C. K.; Stanley, H. E.; Goldberger, A. L.
1997-01-01
A classic problem in physics is the analysis of highly nonstationary time series that typically exhibit long-range correlations. Here we test the hypothesis that the scaling properties of the dynamics of healthy physiological systems are more stable than those of pathological systems by studying beat-to-beat fluctuations in the human heart rate. We develop techniques based on the Fano factor and Allan factor functions, as well as on detrended fluctuation analysis, for quantifying deviations from uniform power-law scaling in nonstationary time series. By analyzing extremely long data sets of up to N = 10^5 beats for 11 healthy subjects, we find that the fluctuations in the heart rate scale approximately uniformly over several temporal orders of magnitude. By contrast, we find that in data sets of comparable length for 14 subjects with heart disease, the fluctuations grow erratically, indicating a loss of scaling stability.
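The detrended fluctuation analysis the abstract relies on can be sketched as a minimal first-order DFA. This is an illustrative implementation on synthetic white noise (not heart-rate data), for which the scaling exponent F(n) ~ n^alpha should come out near alpha = 0.5.

```python
import numpy as np

def dfa_fluctuation(x, window):
    """RMS fluctuation of the integrated, linearly detrended series in
    non-overlapping windows -- the basic building block of first-order DFA."""
    y = np.cumsum(x - np.mean(x))            # integrated profile of the series
    n = y.size // window
    f2 = []
    for i in range(n):
        seg = y[i * window:(i + 1) * window]
        k = np.arange(window)
        trend = np.polyval(np.polyfit(k, seg, 1), k)   # local linear trend
        f2.append(np.mean((seg - trend) ** 2))
    return np.sqrt(np.mean(f2))

# For uncorrelated noise F(n) ~ n^0.5; estimate alpha from a log-log fit.
rng = np.random.default_rng(1)
x = rng.normal(size=20000)
windows = np.array([16, 32, 64, 128, 256])
fluct = np.array([dfa_fluctuation(x, w) for w in windows])
alpha = np.polyfit(np.log(windows), np.log(fluct), 1)[0]
print(round(alpha, 2))
```

"Deviations from uniform power-law scaling," in this picture, show up as local departures of the log-log slope from a single constant alpha across window sizes.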
Non-Stationary Dependence Structures for Spatial Extremes
Huser, Raphaël
2016-03-03
Max-stable processes are natural models for spatial extremes because they provide suitable asymptotic approximations to the distribution of maxima of random fields. In the recent past, several parametric families of stationary max-stable models have been developed, and fitted to various types of data. However, a recurrent problem is the modeling of non-stationarity. In this paper, we develop non-stationary max-stable dependence structures in which covariates can be easily incorporated. Inference is performed using pairwise likelihoods, and its performance is assessed by an extensive simulation study based on a non-stationary locally isotropic extremal t model. Evidence that unknown parameters are well estimated is provided, and estimation of spatial return level curves is discussed. The methodology is demonstrated with temperature maxima recorded over a complex topography. Models are shown to satisfactorily capture extremal dependence.
A Novel Simulation Model for Nonstationary Rice Fading Channels
Directory of Open Access Journals (Sweden)
Kaili Jiang
2018-01-01
In this paper, we propose a new simulator for nonstationary Rice fading channels under nonisotropic scattering scenarios, together with an improved computation method for the simulation parameters. The simulator can also generate Rayleigh fading channels by adjusting its parameters. The proposed simulator takes into account the smooth transition of fading phases between adjacent channel states. The time-variant statistical properties of the proposed simulator, that is, the probability density functions (PDFs) of envelope and phase, the autocorrelation function (ACF), and the Doppler power spectrum density (DPSD), are also analyzed and derived. Simulation results demonstrate that the proposed simulator approximates the corresponding theoretical statistical properties well, which indicates its usefulness for the performance evaluation and validation of wireless communication systems under nonstationary and nonisotropic scenarios.
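The underlying Rice fading construction can be sketched with a simplified, stationary sum-of-sinusoids model: a fixed line-of-sight component plus N scattered components. This is not the paper's simulator (which additionally makes the parameters time-variant and handles phase transitions between channel states); all parameter values here are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                                   # number of scattered components
fd = 100.0                               # maximum Doppler shift in Hz (assumed)
K = 4.0                                  # Rice factor: LOS power / scattered power
t = np.arange(0, 1.0, 1e-4)              # 1 s of channel at 10 kHz sampling

theta = rng.uniform(0, 2 * np.pi, N)     # angles of arrival of the scatterers
phi = rng.uniform(0, 2 * np.pi, N)       # independent random initial phases
# Scattered (diffuse) component, normalized to unit mean power:
scatter = np.sum(
    np.exp(1j * (2 * np.pi * fd * np.cos(theta)[:, None] * t + phi[:, None])),
    axis=0) / np.sqrt(N)
# Line-of-sight component at half the maximum Doppler shift:
los = np.sqrt(K) * np.exp(1j * 2 * np.pi * fd * 0.5 * t)

envelope = np.abs(los + scatter)         # Rice-distributed fading envelope
mean_power = float(np.mean(envelope ** 2))
print(round(mean_power, 1))              # should be near K + 1
```

Setting K = 0 removes the line-of-sight term and the same construction yields a Rayleigh envelope, mirroring the abstract's remark that the simulator covers Rayleigh channels as a special case.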
Detrending of non-stationary noise data by spline techniques
International Nuclear Information System (INIS)
Behringer, K.
1989-11-01
An off-line method for detrending non-stationary noise data has been investigated. It uses a least squares spline approximation of the noise data with equally spaced breakpoints. Subtraction of the spline approximation from the noise signal at each data point gives a residual noise signal. The method acts as a high-pass filter with very sharp frequency cutoff. The cutoff frequency is determined by the breakpoint distance. The steepness of the cutoff is controlled by the spline order. (author) 12 figs., 1 tab., 5 refs
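The detrending idea can be sketched as follows: a least-squares cubic spline with equally spaced breakpoints is fitted to the signal and subtracted, leaving the high-frequency residual. This is a hedged reconstruction using SciPy's `LSQUnivariateSpline`, not the author's implementation; the drift, noise level, and breakpoint count are invented.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
slow_drift = 2.0 * np.sin(2 * np.pi * 0.1 * t)   # nonstationary trend (0.1 Hz)
noise = rng.normal(0, 0.2, t.size)               # fluctuation of interest
x = slow_drift + noise

# Equally spaced interior breakpoints; their spacing sets the cutoff frequency.
knots = np.linspace(t[0], t[-1], 22)[1:-1]
spline = LSQUnivariateSpline(t, x, knots, k=3)   # least-squares cubic spline
residual = x - spline(t)                          # high-pass filtered signal

# The drift is absorbed by the spline; the residual is close to the noise alone.
print(round(float(np.std(residual)), 2))
```

Moving the breakpoints closer together raises the effective cutoff frequency, and raising the spline order k sharpens the cutoff, which is the behavior the abstract describes.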
Nonstationary oscillations in gyrotrons revisited
International Nuclear Information System (INIS)
Dumbrajs, O.; Kalis, H.
2015-01-01
Development of gyrotrons requires careful understanding of the different regimes of gyrotron oscillations. It is known that, in the planes of the generalized gyrotron variables (cyclotron resonance mismatch versus dimensionless current, or cyclotron resonance mismatch versus dimensionless interaction length), complicated alternating sequences of regions of stationary, periodic, automodulation, and chaotic oscillations exist. In the past, these regions were investigated on the supposition that the transit time of electrons through the interaction space is much shorter than the cavity decay time. This assumption is valid for short and/or high-diffraction-quality resonators. However, for the long and/or low-diffraction-quality resonators that are often utilized, the assumption no longer holds, and a different mathematical formalism has to be used for studying nonstationary oscillations. One example of such a formalism is described in the present paper.
Loss energy states of nonstationary quantum systems
International Nuclear Information System (INIS)
Dodonov, V.V.; Man'ko, V.I.
1978-01-01
The concept of loss energy states is introduced. The loss energy states of the damped quantum harmonic oscillator are considered in detail. The method of constructing loss energy states for general multidimensional quadratic nonstationary quantum systems is briefly discussed.
Local polynomial Whittle estimation covering non-stationary fractional processes
DEFF Research Database (Denmark)
Nielsen, Frank
to the non-stationary region. By approximating the short-run component of the spectrum by a polynomial, instead of a constant, in a shrinking neighborhood of zero, we alleviate some of the bias that the classical local Whittle estimator is prone to. This bias reduction comes at a cost, as the variance is in... study illustrates the performance of the proposed estimator compared to the classical local Whittle estimator and the local polynomial Whittle estimator. The empirical justification of the proposed estimator is shown through an analysis of credit spreads.
Sparse Bayesian Learning for Nonstationary Data Sources
Fujimaki, Ryohei; Yairi, Takehisa; Machida, Kazuo
This paper proposes an online Sparse Bayesian Learning (SBL) algorithm for modeling nonstationary data sources. Although most learning algorithms implicitly assume that a data source does not change over time (stationarity), real-world sources usually do change, due to such factors as dynamically changing environments, device degradation, and sudden failures (nonstationarity). The proposed algorithm can be made usable for stationary online SBL by setting the time decay parameters to zero, and as such it can be interpreted as a single unified framework for online SBL with both stationary and nonstationary data sources. Tests both on four types of benchmark problems and on actual stock price data have shown it to perform well.
Results of nonlinear and nonstationary image processing
International Nuclear Information System (INIS)
Pizer, S.M.; Correla, J.A.; Chesler, D.A.; Metz, C.E.
1973-01-01
A nonstationary method, multiple z-divided filtering, and a nonlinear method, biased smearing, have been applied to scintigrams. Biased smearing does not appear to hold much promise. Multiple z-divided filtering, on the other hand, appears to be justified, and initial results at a minimum encourage further research into the possibility that this technique may become a method of choice.
Nonstationary stochastic charge fluctuations of a dust particle in plasmas.
Shotorban, B
2011-06-01
Stochastic charge fluctuations of a dust particle that are due to discreteness of electrons and ions in plasmas can be described by a one-step process master equation [T. Matsoukas and M. Russell, J. Appl. Phys. 77, 4285 (1995)] with no exact solution. In the present work, using the system size expansion method of Van Kampen along with the linear noise approximation, a Fokker-Planck equation with an exact Gaussian solution is developed by expanding the master equation. The Gaussian solution has time-dependent mean and variance governed by two ordinary differential equations modeling the nonstationary process of dust particle charging. The model is tested via the comparison of its results to the results obtained by solving the master equation numerically. The electron and ion currents are calculated through the orbital motion limited theory. At various times of the nonstationary process of charging, the model results are in a very good agreement with the master equation results. The deviation is more significant when the standard deviation of the charge is comparable to the mean charge in magnitude.
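The linear-noise-approximation model described above reduces to two coupled ordinary differential equations for the mean and variance of the charge. The sketch below uses *hypothetical* one-step gain/loss rates that are linear in the charge (the paper computes electron and ion currents from orbital motion limited theory); it only illustrates the mean/variance ODE structure of the Gaussian solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical one-step rates: ion attachment slows as the charge grows,
# electron loss is constant. All numbers are invented for illustration.
g0, c, l0 = 50.0, 0.5, 10.0
gain = lambda q: g0 - c * q          # attachment rate at charge q
loss = lambda q: l0                  # detachment rate at charge q

def moments(t, y):
    """ODEs for the time-dependent mean and variance (linear noise approx.):
    d<q>/dt = gain - loss;  dVar/dt = 2*(d(gain-loss)/dq)*Var + gain + loss."""
    mu, var = y
    drift = gain(mu) - loss(mu)
    jac = -c                         # derivative of (gain - loss) w.r.t. q
    dvar = 2 * jac * var + gain(mu) + loss(mu)
    return [drift, dvar]

sol = solve_ivp(moments, (0, 20), [0.0, 0.0])
mu_inf, var_inf = float(sol.y[0, -1]), float(sol.y[1, -1])
# Stationary values: mu* = (g0 - l0)/c = 80, var* = (gain + loss)/(2c) = 20.
print(round(mu_inf, 1), round(var_inf, 1))
```

The transient portion of `sol.y`, before the moments settle at their stationary values, is the nonstationary charging process the abstract validates against the full master equation.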
Likelihood inference for a nonstationary fractional autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Ørregård Nielsen, Morten
2010-01-01
This paper discusses model-based inference in an autoregressive model for fractional processes which allows the process to be fractional of order d or d-b. Fractional differencing involves infinitely many past values and because we are interested in nonstationary processes we model the data X_{1},...,X_{T} given the initial values X_{-n}, n=0,1,..., as is usually done. The initial values are not modeled but assumed to be bounded. This represents a considerable generalization relative to all previous work where it is assumed that initial values are zero. For the statistical analysis we assume the conditional Gaussian likelihood and for the probability analysis we also condition on initial values but assume that the errors in the autoregressive model are i.i.d. with suitable moment conditions. We analyze the conditional likelihood and its derivatives as stochastic processes in the parameters, including...
Non-stationary covariance function modelling in 2D least-squares collocation
Darbeheshti, N.; Featherstone, W. E.
2009-06-01
Standard least-squares collocation (LSC) assumes 2D stationarity and 3D isotropy, and relies on a covariance function to account for spatial dependence in the observed data. However, the assumption that the spatial dependence is constant throughout the region of interest may sometimes be violated. Assuming a stationary covariance structure can result in over-smoothing of, e.g., the gravity field in mountains and under-smoothing in great plains. We introduce the kernel convolution method from spatial statistics for non-stationary covariance structures, and demonstrate its advantage for dealing with non-stationarity in geodetic data. We then compared stationary and non-stationary covariance functions in 2D LSC to the empirical example of gravity anomaly interpolation near the Darling Fault, Western Australia, where the field is anisotropic and non-stationary. The results with non-stationary covariance functions are better than standard LSC in terms of formal errors and cross-validation against data not used in the interpolation, demonstrating that the use of non-stationary covariance functions can improve upon standard (stationary) LSC.
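The kernel convolution idea can be sketched in 1D: each location carries a Gaussian kernel whose bandwidth varies over space, and convolving pairs of kernels yields a closed-form non-stationary covariance (a Paciorek-Schervish-type construction). The bandwidth field below is invented, and this is a generic illustration of the method, not the authors' geodetic model.

```python
import numpy as np

def bandwidth(s):
    """Hypothetical spatially varying kernel bandwidth: the field is
    smoother (longer correlation) away from the origin."""
    return 0.2 + 0.3 * np.abs(s)

def nonstat_cov(x, y):
    """Closed-form covariance from convolving Gaussian kernels at x and y."""
    sx, sy = bandwidth(x), bandwidth(y)
    q = (x - y) ** 2 / ((sx ** 2 + sy ** 2) / 2.0)    # scaled squared distance
    prefactor = np.sqrt(2 * sx * sy / (sx ** 2 + sy ** 2))
    return prefactor * np.exp(-q)

s = np.linspace(-2, 2, 80)
C = np.array([[nonstat_cov(a, b) for b in s] for a in s])

# A valid covariance matrix must be (numerically) positive semidefinite:
eigmin = float(np.linalg.eigvalsh(C).min())
print(eigmin > -1e-8)
```

Because positive definiteness is guaranteed by the convolution construction itself, the bandwidth field can be modeled freely, e.g. made rougher in mountains and smoother in plains, which is exactly the flexibility the abstract exploits.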
Yoon, Heonjun; Kim, Miso; Park, Choon-Su; Youn, Byeng D.
2018-01-01
Piezoelectric vibration energy harvesting (PVEH) has received much attention as a potential solution that could ultimately realize self-powered wireless sensor networks. Since most ambient vibrations in nature are inherently random and nonstationary, the output performances of PVEH devices also randomly change with time. However, little attention has been paid to investigating the randomly time-varying electroelastic behaviors of PVEH systems both analytically and experimentally. The objective of this study is thus to make a step forward towards a deep understanding of the time-varying performances of PVEH devices under nonstationary random vibrations. Two typical cases of nonstationary random vibration signals are considered: (1) randomly-varying amplitude (amplitude modulation; AM) and (2) randomly-varying amplitude with randomly-varying instantaneous frequency (amplitude and frequency modulation; AM-FM). In both cases, this study pursues well-balanced correlations of analytical predictions and experimental observations to deduce the relationships between the time-varying output performances of the PVEH device and two primary input parameters, such as a central frequency and an external electrical resistance. We introduce three correlation metrics to quantitatively compare analytical prediction and experimental observation, including the normalized root mean square error, the correlation coefficient, and the weighted integrated factor. Analytical predictions are in an excellent agreement with experimental observations both mechanically and electrically. This study provides insightful guidelines for designing PVEH devices to reliably generate electric power under nonstationary random vibrations.
Tada, Kohei; Koga, Hiroaki; Okumura, Mitsutaka; Tanaka, Shingo
2018-06-01
Spin contamination error in the total energy of the Au2/MgO system was estimated using the density functional theory/plane-wave scheme and approximate spin projection methods. This is the first investigation in which the errors in chemical phenomena on a periodic surface are estimated. The spin contamination error of the system was 0.06 eV. This value is smaller than that of the dissociation of Au2 in the gas phase (0.10 eV). This is because of the destabilization of the singlet spin state due to the weakening of the Au-Au interaction caused by the Au-MgO interaction.
Effect of non-stationary climate on infectious gastroenteritis transmission in Japan
Onozuka, Daisuke
2014-06-01
Local weather factors are widely considered to influence the transmission of infectious gastroenteritis. Few studies, however, have examined the non-stationary relationships between global climatic factors and transmission of infectious gastroenteritis. We analyzed monthly data for cases of infectious gastroenteritis in Fukuoka, Japan from 2000 to 2012 using cross-wavelet coherency analysis to assess the pattern of associations between gastroenteritis incidence and indices for the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). Infectious gastroenteritis cases were non-stationary and significantly associated with the IOD and ENSO (Multivariate ENSO Index [MEI], Niño 1+2, Niño 3, Niño 4, and Niño 3.4) for a period of approximately 1 to 2 years. This association was non-stationary and appeared to have a major influence on the synchrony of infectious gastroenteritis transmission. Our results suggest that non-stationary patterns of association between global climate factors and incidence of infectious gastroenteritis should be considered when developing early warning systems for epidemics of infectious gastroenteritis.
Nonstationary interference and scattering from random media
International Nuclear Information System (INIS)
Nazikian, R.
1991-12-01
For the small angle scattering of coherent plane waves from inhomogeneous random media, the three dimensional mean square distribution of random fluctuations may be recovered from the interferometric detection of the nonstationary modulational structure of the scattered field. Modulational properties of coherent waves scattered from random media are related to nonlocal correlations in the double sideband structure of the Fourier transform of the scattering potential. Such correlations may be expressed in terms of a suitably generalized spectral coherence function for analytic fields.
Hazard function theory for nonstationary natural hazards
Read, L.; Vogel, R. M.
2015-12-01
Studies from the natural hazards literature indicate that many natural processes, including wind speeds, landslides, wildfires, precipitation, streamflow and earthquakes, show evidence of nonstationary behavior such as trends in magnitudes through time. Traditional probabilistic analysis of natural hazards based on partial duration series (PDS) generally assumes stationarity in the magnitudes and arrivals of events, i.e. that the probability of exceedance is constant through time. Given evidence of trends and the consequent expected growth in devastating impacts from natural hazards across the world, new methods are needed to characterize their probabilistic behavior. The field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (x) with its failure time series (t), enabling computation of corresponding average return periods and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose PDS magnitudes are assumed to follow the widely applied Poisson-GP model. We derive a 2-parameter Generalized Pareto hazard model and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series x, with corresponding failure time series t, should have application to a wide class of natural hazards.
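The core consequence of nonstationarity described above can be sketched numerically: under a generalized Pareto magnitude model, a trend in a parameter makes the exceedance probability of a fixed design event, and hence its return period, change through time. The shape, scale, trend, and design magnitude below are invented for illustration.

```python
import numpy as np
from scipy.stats import genpareto

# Assumed Poisson-GP magnitude model for peaks over a threshold.
shape, scale0 = 0.1, 10.0
design = 60.0                                 # fixed design magnitude above threshold

years = np.arange(1, 51)
scale_t = scale0 * (1 + 0.01 * years)         # hypothetical 1%/yr growth in GP scale
p_t = genpareto.sf(design, c=shape, scale=scale_t)   # exceedance probability per event

# Under stationarity 1/p is constant; under the trend the effective return
# period of the design event shrinks year by year.
rp_start = 1.0 / p_t[0]
rp_end = 1.0 / p_t[-1]
print(rp_start > rp_end)
```

Hazard function analysis goes a step further than this pointwise 1/p(t): it derives the full distribution of the failure time T, from which nonstationary average return periods and reliabilities follow.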
Non-stationary ¹³C-metabolic flux ratio analysis
Hörl, Manuel; Schnidder, Julian; Sauer, Uwe; Zamboni, Nicola
2013-12-01
¹³C-metabolic flux analysis (¹³C-MFA) has become a key method for metabolic engineering and systems biology. In the most common methodology, fluxes are calculated by global isotopomer balancing and iterative fitting to stationary ¹³C-labeling data. This approach requires a closed carbon balance, a long-lasting metabolic steady state, and the detection of ¹³C-patterns in a large number of metabolites. These restrictions have mostly limited the application of ¹³C-MFA to the central carbon metabolism of well-studied model organisms grown in minimal media with a single carbon source. Here we introduce non-stationary ¹³C-metabolic flux ratio analysis as a novel ¹³C-MFA method that estimates local, relative fluxes from ultra-short ¹³C-labeling experiments without the need for global isotopomer balancing. The approach relies on the acquisition of non-stationary ¹³C-labeling data exclusively for metabolites in the proximity of a node of converging fluxes and on local parameter estimation with a system of ordinary differential equations. We developed a generalized workflow that takes into account reaction types and the availability of mass spectrometric data on molecular ions or fragments for data processing, modeling, and parameter and error estimation. We demonstrate the approach by analyzing three key nodes of converging fluxes in the central metabolism of Bacillus subtilis. We obtained flux estimates that agree with published results from steady-state experiments, but reduced the duration of the necessary ¹³C-labeling experiment to less than a minute. These results show that our strategy enables formal estimation of relative pathway fluxes on extremely short time scales, neglecting cellular carbon balancing. This approach thus paves the road to targeted ¹³C-MFA in dynamic systems with multiple carbon sources and towards rich media. © 2013 Wiley Periodicals, Inc.
A risk-based approach to flood management decisions in a nonstationary world
Rosner, Ana; Vogel, Richard M.; Kirshen, Paul H.
2014-03-01
Traditional approaches to flood management in a nonstationary world begin with a null hypothesis test of "no trend" and its likelihood, with little or no attention given to the likelihood that we might ignore a trend if it really existed. Concluding a trend exists when it does not, or rejecting a trend when it exists are known as type I and type II errors, respectively. Decision-makers are poorly served by statistical and/or decision methods that do not carefully consider both over- and under-preparation errors, respectively. Similarly, little attention is given to how to integrate uncertainty in our ability to detect trends into a flood management decision context. We show how trend hypothesis test results can be combined with an adaptation's infrastructure costs and damages avoided to provide a rational decision approach in a nonstationary world. The criterion of expected regret is shown to be a useful metric that integrates the statistical, economic, and hydrological aspects of the flood management problem in a nonstationary world.
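The expected-regret criterion can be sketched with a toy calculation that combines the trend test's type I and type II error rates with the costs of over- and under-preparation. All numbers below are invented; the point is the structure of the comparison, not the values.

```python
# Hypothetical inputs to the regret calculation:
alpha = 0.05          # P(type I): conclude a trend exists when it does not
beta = 0.20           # P(type II): miss a trend that really exists
p_trend = 0.50        # assumed probability that the trend is real
cost_adapt = 10.0     # cost of adaptation infrastructure (over-preparation loss)
damages = 100.0       # flood damages avoided by adapting (under-preparation loss)

# Expected regret of a "test for trend, then decide" rule relative to a
# perfect-information decision-maker:
regret_test = (1 - p_trend) * alpha * cost_adapt + p_trend * beta * damages

# Expected regret of the two fixed policies, for comparison:
regret_never = p_trend * damages              # never adapt
regret_always = (1 - p_trend) * cost_adapt    # always adapt

print(regret_test, regret_never, regret_always)
```

With these particular numbers, always adapting carries less expected regret than deciding by hypothesis test, which illustrates the abstract's point: a rule driven only by the type I error rate can be a poor decision rule once both error types are costed.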
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-07-01
Many spatial processes in environmental applications, such as climate variables and climate model errors on a global scale, exhibit complex nonstationary dependence structure, in not only their marginal covariance but also their cross-covariance. Flexible cross-covariance models for processes on a global scale are critical for an accurate description of each spatial process as well as the cross-dependences between them and also for improved predictions. We propose various ways to produce cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters vary over space, coupled with a differential operators approach for modeling large-scale nonstationarity. We compare their performance to the performance of some existing models in terms of the AIC and spatial predictions in two applications: joint modeling of surface temperature and precipitation, and joint modeling of errors in climate model ensembles. © 2014 Elsevier Inc.
A comparison of three approaches to non-stationary flood frequency analysis
Debele, S. E.; Strupczewski, W. G.; Bogdanowicz, E.
2017-08-01
Non-stationary flood frequency analysis (FFA) is applied to the statistical analysis of seasonal flow maxima from Polish and Norwegian catchments. Three non-stationary estimation methods, namely maximum likelihood (ML), two-stage (WLS/TS), and GAMLSS (generalized additive model for location, scale and shape parameters), are compared in the context of capturing the effect of non-stationarity on the estimation of time-dependent moments and design quantiles. A multimodel approach is recommended to reduce the errors in the magnitude of quantiles caused by model misspecification. The results of calculations based on observed seasonal daily flow maxima and on computer simulation experiments showed that GAMLSS gave the best results with respect to the relative bias and root mean square error of the estimated trend in the standard deviation and of the constant shape parameter, while WLS/TS provided better accuracy in the estimated trend in the mean value. Among the three compared methods, WLS/TS is recommended for dealing with non-stationarity in short time series. Some practical aspects of applying the GAMLSS package are also presented. A detailed discussion of general issues related to the consequences of climate change for FFA is presented in the second part of the article, entitled "Around and about an application of the GAMLSS package in non-stationary flood frequency analysis".
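The maximum likelihood side of such a comparison can be sketched for one simple case: annual maxima with a linear trend in the location parameter of a Gumbel distribution, mu(t) = mu0 + mu1*t, fitted by minimizing the negative log-likelihood. This is a generic illustration with synthetic data, not the ML/WLS-TS/GAMLSS implementation of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic annual maxima with a known trend in the Gumbel location.
rng = np.random.default_rng(4)
t = np.arange(50, dtype=float)
mu_true = 100.0 + 0.8 * t                       # true trend: 0.8 units/year
data = mu_true + 15.0 * rng.gumbel(size=t.size) # Gumbel noise, scale 15

def neg_loglik(params):
    """Negative Gumbel log-likelihood with time-dependent location."""
    mu0, mu1, log_sigma = params
    sigma = np.exp(log_sigma)                   # keeps the scale positive
    z = (data - (mu0 + mu1 * t)) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

fit = minimize(neg_loglik, x0=[np.mean(data), 0.0, np.log(np.std(data))],
               method="Nelder-Mead", options={"maxiter": 5000})
mu0_hat, mu1_hat = float(fit.x[0]), float(fit.x[1])
print(round(mu1_hat, 2))
```

Time-dependent design quantiles then follow by inverting the fitted distribution at each year, which is where misspecifying the trend form propagates directly into quantile errors, the motivation for the paper's multimodel recommendation.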
Woźniak, M.; Smołka, M.; Cortes, Adriano Mauricio; Paszyński, M.; Schaefer, R.
2016-01-01
We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of an FEM approximation with respect to the space variable coupled with a 3-level time integration scheme
Hazard function theory for nonstationary natural hazards
Read, Laura K.; Vogel, Richard M.
2016-04-01
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
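The failure-time computation that HFA builds on can be sketched as follows. This is an illustrative reading of the framework (one POT event per period, generalized Pareto magnitudes with a time-varying scale), not the authors' exact derivation.

```python
import numpy as np
from scipy.stats import genpareto

def expected_waiting_time(design, shape, scales, horizon_tail=2000):
    """Expected time to the first exceedance of `design` when POT magnitudes
    follow a generalized Pareto with a time-varying scale (one event per
    period): E[T] = sum_t t * p_t * prod_{s<t}(1 - p_s)."""
    # hold the last scale value beyond the given record to close the sum
    scales = np.concatenate([scales, np.full(horizon_tail, scales[-1])])
    p = genpareto.sf(design, c=shape, scale=scales)   # per-period exceedance prob.
    no_exceed = np.concatenate([[1.0], np.cumprod(1.0 - p)[:-1]])
    periods = np.arange(1, len(p) + 1)
    return float(np.sum(periods * p * no_exceed))

shape, scale = 0.1, 10.0
design = genpareto.ppf(0.99, c=shape, scale=scale)    # the "1% event" magnitude
T_stationary = expected_waiting_time(design, shape, np.full(100, scale))
T_trend = expected_waiting_time(design, shape, np.linspace(scale, 2 * scale, 100))
```

In the stationary case this recovers the classical average return period (about 100 periods for the 1% event), while an upward trend in the scale parameter shortens the expected waiting time, which is the nonstationary effect the paper quantifies.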
Non-stationary compositions of Anosov diffeomorphisms
International Nuclear Information System (INIS)
Stenlund, Mikko
2011-01-01
Motivated by non-equilibrium phenomena in nature, we study dynamical systems whose time-evolution is determined by non-stationary compositions of chaotic maps. The constituent maps are topologically transitive Anosov diffeomorphisms on a two-dimensional compact Riemannian manifold, which are allowed to change with time—slowly, but in a rather arbitrary fashion. In particular, such systems admit no invariant measure. By constructing a coupling, we prove that any two sufficiently regular distributions of the initial state converge exponentially with time. Thus, a system of this kind loses memory of its statistical history rapidly.
Fermat principle for a nonstationary medium.
Voronovich, A G; Godin, O A
2003-07-25
One possible formulation of a variational principle of the Fermat type for systems with time-dependent parameters is suggested. In the stationary case, it reduces to the Maupertuis-Lagrange least-action principle. A class of Hamiltonians (dispersion relations) is indicated for which the variational principle reduces to the Fermat principle in the general nonstationary case. Hamiltonians that are homogeneous functions of the momenta are in this category. For the important case of nondispersive waves (corresponding to Hamiltonians that are homogeneous functions of the momenta of order 1), the Fermat principle fully determines the geometry of the rays. Equations relating the variation of signal frequency to the rate of change of propagation time are established.
A Novel Simulator of Nonstationary Random MIMO Channels in Rayleigh Fading Scenarios
Directory of Open Access Journals (Sweden)
Qiuming Zhu
2016-01-01
For simulations of nonstationary multiple-input multiple-output (MIMO) Rayleigh fading channels in time-variant scattering environments, a novel channel simulator is proposed based on the superposition of chirp signals. The new method retains the low complexity and implementation simplicity of the sum-of-sinusoids (SOS) method. In order to reproduce realistic time-varying statistics for dynamic channels, an efficient parameter computation method is also proposed for updating the frequency parameters of the employed chirp signals. Simulation results indicate that the proposed simulator is effective in generating nonstationary MIMO channels whose time-variant statistical characteristics closely approximate the expected theoretical counterparts.
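A minimal sketch of the sum-of-chirps idea for a single channel gain, assuming a linear Doppler sweep per path and uniform random phases. The parameter names and the Doppler model are illustrative, not the paper's exact parameter computation method.

```python
import numpy as np

def chirp_fading(fs, duration, f_d_start, f_d_end, n_paths=32, seed=1):
    """One complex channel gain as a superposition of chirp signals. Each
    path's Doppler shift sweeps linearly between values set by f_d_start and
    f_d_end, giving a nonstationary Doppler spread; random phases make the
    envelope approximately Rayleigh with unit mean power."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    angles = rng.uniform(0.0, 2 * np.pi, n_paths)           # angles of arrival
    f0 = f_d_start * np.cos(angles)                         # initial Doppler shifts
    k = (f_d_end - f_d_start) * np.cos(angles) / duration   # chirp rates
    phases = rng.uniform(0.0, 2 * np.pi, n_paths)
    # instantaneous phase of a linear chirp: 2*pi*(f0*t + k*t^2/2) + phi
    arg = 2 * np.pi * (np.outer(t, f0) + 0.5 * np.outer(t ** 2, k)) + phases
    return np.exp(1j * arg).sum(axis=1) / np.sqrt(n_paths)

h = chirp_fading(fs=1000.0, duration=2.0, f_d_start=50.0, f_d_end=200.0)
```

A MIMO extension would generate one such gain per transmit/receive antenna pair with correlated path parameters.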
Analysis of stress and deformation in non-stationary creep
International Nuclear Information System (INIS)
Feijoo, R.A.; Taroco, E.; Guerreiro, J.N.C.
1980-12-01
A variational method and its algorithm are presented that permit the analysis of stress and deformation in non-stationary creep. The algorithm is applied to an infinite cylinder subjected to internal pressure. The solution obtained is compared with the solution of non-stationary creep problems.
Fast Approximate Joint Diagonalization Incorporating Weight Matrices
Czech Academy of Sciences Publication Activity Database
Tichavský, Petr; Yeredor, A.
2009-01-01
Vol. 57, No. 3 (2009), pp. 878-891, ISSN 1053-587X. R&D Projects: GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: autoregressive processes * blind source separation * nonstationary random processes. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 2.212, year: 2009. http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf
Symmetric approximations of the Navier-Stokes equations
International Nuclear Information System (INIS)
Kobel'kov, G M
2002-01-01
A new method for the symmetric approximation of the non-stationary Navier-Stokes equations by a Cauchy-Kovalevskaya-type system is proposed. Properties of the modified problem are studied. In particular, the convergence as ε→0 of the solutions of the modified problem to the solutions of the original problem on an infinite interval is established
A procedure for the significance testing of unmodeled errors in GNSS observations
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
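The paper's combined testing procedure is not reproduced here, but the flavor of significance testing for time-correlated unmodeled errors can be sketched with a standard Ljung-Box portmanteau test on positioning residuals; the test choice and all data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

def ljung_box(residuals, n_lags=20):
    """Ljung-Box portmanteau statistic: tests H0 that a residual series is
    white noise against the alternative of time-correlated (unmodeled) errors."""
    x = np.asarray(residuals, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = np.dot(x, x) / n
    Q = 0.0
    for k in range(1, n_lags + 1):
        r_k = np.dot(x[:-k], x[k:]) / (n * var)   # lag-k sample autocorrelation
        Q += r_k ** 2 / (n - k)
    Q *= n * (n + 2)
    return Q, float(chi2.sf(Q, df=n_lags))        # statistic and p-value

rng = np.random.default_rng(2)
white = rng.normal(0.0, 0.01, 1000)               # purely white residuals (H0 true)
ar1 = np.zeros(1000)                              # multipath-like correlated bias
for i in range(1, 1000):
    ar1[i] = 0.9 * ar1[i - 1] + rng.normal(0.0, 0.01)
_, p_white = ljung_box(white)
_, p_corr = ljung_box(white + ar1)
```

A tiny p-value flags residuals that still contain time-correlated (e.g., multipath-like) structure, which is the situation in which the authors' more elaborate procedure would separate stationary and nonstationary signal components.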
Modeling coherent errors in quantum error correction
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model to describe the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
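The qualitative difference between coherent accumulation and a Pauli (incoherent) estimate can be checked with a single-qubit toy model; this sketch is not the paper's repetition-code analysis, just the underlying amplitude-versus-probability effect it rests on.

```python
import numpy as np

def x_rotation(eps):
    """exp(-i*eps*X) = cos(eps) I - i sin(eps) X."""
    return np.array([[np.cos(eps), -1j * np.sin(eps)],
                     [-1j * np.sin(eps), np.cos(eps)]])

def coherent_flip_probability(eps, n_steps):
    """Probability of measuring |1> after n identical coherent X-rotations
    applied to |0>: amplitudes add, giving sin^2(n*eps)."""
    psi = np.array([1.0, 0.0], dtype=complex)
    U = x_rotation(eps)
    for _ in range(n_steps):
        psi = U @ psi
    return float(abs(psi[1]) ** 2)

eps, n = 0.01, 50
coherent = coherent_flip_probability(eps, n)   # amplitudes add: sin^2(n*eps)
pauli = n * np.sin(eps) ** 2                   # probabilities add: n*sin^2(eps)
```

Because the rotation amplitudes add in phase, the coherent failure probability grows quadratically in the number of cycles, far faster than the linear Pauli estimate, which is why the Pauli approximation eventually breaks down.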
Approximate symmetries of Hamiltonians
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between the approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have sufficiently small norm. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations, and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
Energy Technology Data Exchange (ETDEWEB)
Jamet, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1964-07-01
Following on from work begun in a previous report, the author calculates, for a few examples, the transmission coefficient T using exact methods, and then deduces the error of the BKW (WKB) method. The calculations are carried out for values of T ranging down to 10^-200. Since modern computers make it possible to obtain the value of T to eight decimal places in a few seconds, the practical advantage of the BKW approximation appears considerably reduced. The author also gives a method which may be used for an exact calculation of the energy levels of a potential well. (author)
Faster Simulation Methods for the Non-Stationary Random Vibrations of Non-Linear MDOF Systems
DEFF Research Database (Denmark)
Askar, A.; Köylüoglu, H. U.; Nielsen, Søren R. K.
subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step. This assumption is tantamount to various linearizations of the equations… Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth order Runge-Kutta scheme with the white noise process approximated by a broad band Ruiz-Penzien broken line process…
Faster Simulation Methods for the Nonstationary Random Vibrations of Non-linear MDOF Systems
DEFF Research Database (Denmark)
Askar, A.; Köylüoglu, H. U.; Nielsen, Søren R. K.
1996-01-01
subject to nonstationary Gaussian white noise excitation, as an alternative to conventional direct simulation methods. These alternative simulation procedures rely on an assumption of local Gaussianity during each time step. This assumption is tantamount to various linearizations of the equations… Such a treatment offers higher rates of convergence, faster speed and higher accuracy. These procedures are compared to the direct Monte Carlo simulation procedure, which uses a fourth order Runge-Kutta scheme with the white noise process approximated by a broad band Ruiz-Penzien broken line process…
Damping Identification of Bridges Under Nonstationary Ambient Vibration
Directory of Open Access Journals (Sweden)
Sunjoong Kim
2017-12-01
This research focuses on identifying the damping ratio of bridges using nonstationary ambient vibration data. The damping ratios of bridges in service have generally been identified using operational modal analysis (OMA) based on a stationary white noise assumption for input signals. However, most bridges are generally subjected to nonstationary excitations while in service, and this violation of the basic assumption can lead to uncertainties in damping identification. To deal with nonstationarity, an amplitude-modulating function was calculated from measured responses to eliminate global trends caused by nonstationary input. A natural excitation technique (NExT)-eigensystem realization algorithm (ERA) was applied to estimate the damping ratio for the stationarized process. To improve the accuracy of OMA-based damping estimates, a comparative analysis was performed between an extracted stationary process and nonstationary data to assess the effect of eliminating nonstationarity. The mean value and standard deviation of the damping ratio for the first vertical mode decreased after signal stationarization. Keywords: Damping, Operational modal analysis, Traffic-induced vibration, Nonstationary, Signal stationarization, Amplitude-modulating, Bridge, Cable-stayed, Suspension
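A hedged sketch of the amplitude-modulating idea: estimate a slowly varying envelope (here a smoothed Hilbert envelope, an assumed implementation detail) and divide it out to suppress the global amplitude trend before applying NExT-ERA.

```python
import numpy as np
from scipy.signal import hilbert, savgol_filter

def stationarize(y, fs, win_seconds=2.0):
    """Divide a response record by a slowly varying amplitude-modulating
    function (a smoothed Hilbert envelope) to suppress global amplitude
    trends caused by nonstationary excitation."""
    envelope = np.abs(hilbert(y))
    win = int(win_seconds * fs) | 1          # odd window length for Savitzky-Golay
    modulating = savgol_filter(envelope, win, polyorder=2)
    return y / np.maximum(modulating, 1e-12)  # guard against division by zero

# Amplitude-modulated narrow-band response (stand-in for traffic-induced data)
fs = 100.0
t = np.arange(0.0, 60.0, 1.0 / fs)
amp = 1.0 + 0.8 * np.sin(2 * np.pi * 0.05 * t)   # slow amplitude modulation
y = amp * np.sin(2 * np.pi * 1.7 * t)            # a 1.7 Hz structural mode
z = stationarize(y, fs)
```

After division, the block-to-block amplitude of the record is much more uniform, which is the property the stationary white noise assumption of OMA requires.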
CERN. Geneva
2015-01-01
Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of machine learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the MEM and ABC approaches mentioned above aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting where only a simulator is available and one cannot directly compute the likelihood for the dat...
Schmidt, Wolfgang M
1980-01-01
"In 1970, at the U. of Colorado, the author delivered a course of lectures on his famous generalization, then just established, relating to Roth's theorem on rational approximations to algebraic numbers. The present volume is an expanded and updated version of the original mimeographed notes on the course. As an introduction to the author's own remarkable achievements relating to the Thue-Siegel-Roth theory, the text can hardly be bettered and the tract can already be regarded as a classic in its field."(Bull.LMS) "Schmidt's work on approximations by algebraic numbers belongs to the deepest and most satisfactory parts of number theory. These notes give the best accessible way to learn the subject. ... this book is highly recommended." (Mededelingen van het Wiskundig Genootschap)
H2 emission from non-stationary magnetized bow shocks
Tram, L. N.; Lesaffre, P.; Cabrit, S.; Gusdorf, A.; Nhung, P. T.
2018-01-01
When a fast moving star or a protostellar jet hits an interstellar cloud, the surrounding gas gets heated and illuminated: a bow shock is born that delineates the wake of the impact. In such a process, the new molecules that are formed and excited in the gas phase become accessible to observations. In this paper, we revisit models of H2 emission in these bow shocks. We approximate the bow shock by a statistical distribution of planar shocks computed with a magnetized shock model. We improve on previous works by considering arbitrary bow shapes, a finite irradiation field, and by including the age effect of non-stationary C-type shocks on the excitation diagram and line profiles of H2. We also examine the dependence of the line profiles on the shock velocity and on the viewing angle: we suggest that spectrally resolved observations may greatly help to probe the dynamics inside the bow shock. For reasonable bow shapes, our analysis shows that low-velocity shocks contribute substantially to the H2 excitation diagram. This can result in an observational bias towards low velocities when planar shocks are used to interpret H2 emission from an unresolved bow. We also report a large magnetization bias when the velocity of the planar model is set independently. Our 3D models reproduce excitation diagrams in the BHR 71 and Orion bow shocks better than previous 1D models. Our 3D model is also able to reproduce the shape and width of the broad H2 1-0 S(1) line profile in an Orion bow shock (Brand et al. 1989).
Matérn-based nonstationary cross-covariance models for global processes
Jun, Mikyoung
2014-01-01
…cross-covariance models, based on the Matérn covariance model class, that are suitable for describing prominent nonstationary characteristics of the global processes. In particular, we seek nonstationary versions of Matérn covariance models whose smoothness parameters…
Correlation, Regression, and Cointegration of Nonstationary Economic Time Series
DEFF Research Database (Denmark)
Johansen, Søren
Yule (1926) introduced the concept of spurious or nonsense correlation, and showed by simulation that for some nonstationary processes the empirical correlations seem not to converge in probability even if the processes are independent. This was later discussed by Granger and Newbold (1974), and Phillips (1986) found the limit distributions. We propose to distinguish between empirical and population correlation coefficients and show, in a bivariate autoregressive model for nonstationary variables, that the empirical correlation and regression coefficients do not converge to the relevant population values, due to the trending nature of the data. We conclude by giving a simple cointegration analysis of two interest rates. The analysis illustrates that much more insight can be gained about the dynamic behavior of the nonstationary variables than simply by calculating a correlation coefficient.
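Yule's nonsense-correlation effect is easy to reproduce: the sketch below correlates pairs of independent random walks and shows that the empirical correlation stays widely dispersed rather than concentrating at zero. All parameter values are illustrative.

```python
import numpy as np

def spurious_correlation_demo(n_reps=500, n_obs=400, seed=3):
    """Empirical correlations between pairs of independent random walks:
    they do not concentrate near zero, illustrating spurious correlation."""
    rng = np.random.default_rng(seed)
    corrs = np.empty(n_reps)
    for i in range(n_reps):
        x = np.cumsum(rng.standard_normal(n_obs))   # independent random walk
        y = np.cumsum(rng.standard_normal(n_obs))   # another, independent of x
        corrs[i] = np.corrcoef(x, y)[0, 1]
    return corrs

corrs = spurious_correlation_demo()
frac_large = float(np.mean(np.abs(corrs) > 0.5))
```

For independent white noise series the same experiment would give correlations tightly clustered around zero; the dispersion here is driven entirely by the stochastic trends, which is the point of the paper's distinction between empirical and population coefficients.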
An Integrated Real-Time Beamforming and Postfiltering System for Nonstationary Noise Environments
Directory of Open Access Journals (Sweden)
Gannot Sharon
2003-01-01
We present a novel approach for real-time multichannel speech enhancement in environments of nonstationary noise and time-varying acoustical transfer functions (ATFs). The proposed system integrates adaptive beamforming, ATF identification, soft signal detection, and multichannel postfiltering. The noise canceller branch of the beamformer and the ATF identification are adaptively updated online, based on hypothesis test results. The noise canceller is updated only during stationary noise frames, and the ATF identification is carried out only when desired source components have been detected. The hypothesis testing is based on the nonstationarity of the signals and the transient power ratio between the beamformer primary output and its reference noise signals. Following the beamforming and the hypothesis testing, estimates for the signal presence probability and for the noise power spectral density are derived. Subsequently, an optimal spectral gain function that minimizes the mean square error of the log-spectral amplitude (LSA) is applied. Experimental results demonstrate the usefulness of the proposed system in nonstationary noise environments.
Correlation, regression, and cointegration of nonstationary economic time series
DEFF Research Database (Denmark)
Johansen, Søren
Yule (1926) introduced the concept of spurious or nonsense correlation, and showed by simulation that for some nonstationary processes the empirical correlations seem not to converge in probability even if the processes are independent. This was later discussed by Granger and Newbold (1974), and Phillips (1986) found the limit distributions. We propose to distinguish between empirical and population correlation coefficients and show in a bivariate autoregressive model for nonstationary variables that the empirical correlation and regression coefficients do not converge to the relevant population…
Non-stationary flow of hydraulic oil in long pipe
Directory of Open Access Journals (Sweden)
Hružík Lumír
2014-03-01
The paper deals with experimental evaluation and numerical simulation of non-stationary flow of hydraulic oil in a long hydraulic line. The non-stationary flow is caused by quick closing of valves at the beginning and the end of the pipe. The time dependence of pressure is measured by means of pressure sensors at the beginning and the end of the pipe. A mathematical model of the given circuit is created using Matlab SimHydraulics software. The long line is simulated by means of a segmented pipe. The simulation is verified by experiment.
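For a rough cross-check of such simulations, the classical Joukowsky relation gives the pressure rise from a sudden valve closure; the numbers below are illustrative assumptions, not the paper's experimental values.

```python
def joukowsky_surge(rho, c, dv):
    """Joukowsky pressure rise for an instantaneous velocity change dv:
    dp = rho * c * dv (valid when closure is faster than 2L/c)."""
    return rho * c * dv

# Hydraulic oil in a long line (illustrative values)
rho = 870.0    # oil density, kg/m^3
c = 1250.0     # pressure-wave speed in the oil-filled pipe, m/s
L = 60.0       # pipe length, m
dv = 2.0       # flow velocity stopped by the quick valve closure, m/s

dp = joukowsky_surge(rho, c, dv)   # pressure rise in Pa (~21.8 bar here)
t_reflect = 2 * L / c              # round-trip reflection time, s
```

The reflection time 2L/c sets the period of the pressure oscillations seen at the sensors, which is why a segmented-pipe model (capturing wave travel time) is needed rather than a lumped one.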
Real-Time Emulation of Nonstationary Channels in Safety-Relevant Vehicular Scenarios
Directory of Open Access Journals (Sweden)
Golsa Ghiaasi
2018-01-01
This paper proposes and discusses the architecture for a real-time vehicular channel emulator capable of reproducing the input/output behavior of nonstationary time-variant radio propagation channels in safety-relevant vehicular scenarios. The architecture aims at a hardware implementation with minimal complexity for emulating channels with the varying delay-Doppler characteristics of safety-relevant vehicular scenarios; these characteristics require real-time updates of the multipath propagation model for each local stationarity region. The emulator is used for benchmarking the packet error performance of commercial off-the-shelf (COTS) vehicular IEEE 802.11p modems and a fully software-defined radio-based IEEE 802.11p modem stack. The packet error ratio (PER) estimated from temporal averaging over a single virtual drive and the packet error probability (PEP) estimated from ensemble averaging over repeated virtual drives are evaluated and compared for the same vehicular scenario. The proposed architecture is realized as a virtual instrument in National Instruments™ LabVIEW. The National Instruments universal software radio peripheral with reconfigurable input/output (USRP-RIO 2953R) is used as the software-defined radio platform for implementation; however, the results and considerations reported are of general purpose and can be applied to other platforms. Finally, we discuss the PER performance of the modem for two categories of vehicular channel models: a nonstationary channel model derived for the urban single-lane street-crossing scenario of the DRIVEWAY’09 measurement campaign, and the stationary ETSI models.
Finite approximations in fluid mechanics
International Nuclear Information System (INIS)
Hirschel, E.H.
1986-01-01
This book contains twenty papers on work which was conducted between 1983 and 1985 in the Priority Research Program "Finite Approximations in Fluid Mechanics" of the German Research Society (Deutsche Forschungsgemeinschaft). Scientists from numerical mathematics, fluid mechanics, and aerodynamics present their research on boundary-element methods, factorization methods, higher-order panel methods, multigrid methods for elliptic and parabolic problems, two-step schemes for the Euler equations, etc. Applications are made to channel flows, gas dynamical problems, large eddy simulation of turbulence, non-Newtonian flow, turbomachine flow, and zonal solutions for viscous flow problems. The contents include: multigrid methods for problems from fluid dynamics; development of a 2D transonic potential flow solver; a boundary element spectral method for nonstationary viscous flows in 3 dimensions; Navier-Stokes computations of two-dimensional laminar flows in a channel with a backward facing step; calculations and experimental investigations of the laminar unsteady flow in a pipe expansion; calculation of the flow field caused by shock wave and deflagration interaction; a multi-level discretization and solution method for potential flow problems in three dimensions; solutions of the conservation equations with the approximate factorization method; inviscid and viscous flow through rotating meridional contours; and zonal solutions for viscous flow problems.
Fetterly, Kenneth A; Favazza, Christopher P
2016-08-07
Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame^-1 resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum limited system. Over-estimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased d′ is the sum of the detectability indices associated with the test object (d′_o) and non-stationary noise (d′_ns). Given the nature of the imaging system and the experimental methods, d′_o cannot be directly determined independent of d′_ns. However, methods to estimate d′_ns independent of d′_o were developed. In accordance with the theory, d′_ns was subtracted from experimental estimates of d′, providing an unbiased estimate of d′_o. Estimates of d′_o exhibited trends consistent with expectations of an angiography system that is quantum limited for high DTD and compromised by detector electronic readout noise for low DTD conditions. Results suggest that these methods provide d′ estimates which are accurate and precise for [Formula: see text]. Further, results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the
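A minimal CHO computation on synthetic images, assuming white noise and simple Gaussian channels (the actual study used an angiography system, Laguerre-Gauss-style channels and bias-corrected estimators), illustrates the detectability index d′ = sqrt(Δv̄ᵀ S⁻¹ Δv̄) in channel space.

```python
import numpy as np

def gaussian_channels(size, widths):
    """Radially symmetric Gaussian channel profiles, flattened to unit vectors."""
    y, x = np.indices((size, size)) - (size - 1) / 2
    r2 = x ** 2 + y ** 2
    U = np.stack([np.exp(-r2 / (2 * w ** 2)).ravel() for w in widths], axis=1)
    return U / np.linalg.norm(U, axis=0)

def cho_dprime(imgs_signal, imgs_noise, U):
    """Channelized Hotelling detectability: d'^2 = dv^T S^{-1} dv."""
    v_s = imgs_signal.reshape(len(imgs_signal), -1) @ U   # channel outputs
    v_n = imgs_noise.reshape(len(imgs_noise), -1) @ U
    dv = v_s.mean(axis=0) - v_n.mean(axis=0)
    S = 0.5 * (np.cov(v_s.T) + np.cov(v_n.T))             # pooled covariance
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

rng = np.random.default_rng(4)
size, n = 32, 400
yy, xx = np.indices((size, size)) - (size - 1) / 2
disk = ((yy ** 2 + xx ** 2) <= 3 ** 2).astype(float)      # small disk-shaped object
noise = rng.normal(0.0, 1.0, (2 * n, size, size))
U = gaussian_channels(size, widths=[1.0, 2.0, 4.0, 8.0])
d_hi = cho_dprime(0.5 * disk + noise[:n], noise[n:], U)   # higher contrast
d_lo = cho_dprime(0.25 * disk + noise[:n], noise[n:], U)  # lower contrast
```

With finite samples this estimator is biased upward, which is the effect the paper corrects for; here only the monotonic dependence on object contrast is demonstrated.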
Numerical Clifford Analysis for the Non-stationary Schroedinger Equation
International Nuclear Information System (INIS)
Faustino, N.; Vieira, N.
2007-01-01
We construct a discrete fundamental solution for the parabolic Dirac operator which factorizes the non-stationary Schroedinger operator. With such fundamental solution we construct a discrete counterpart for the Teodorescu and Cauchy-Bitsadze operators and the Bergman projectors. We finalize this paper with convergence results regarding the operators and a concrete numerical example
Dynamic Factor Analysis of Nonstationary Multivariate Time Series.
Molenaar, Peter C. M.; And Others
1992-01-01
The dynamic factor model proposed by P. C. Molenaar (1985) is exhibited, and a dynamic nonstationary factor model (DNFM) is constructed with latent factor series that have time-varying mean functions. The use of a DNFM is illustrated using data from a television viewing habits study. (SLD)
A Phase Vocoder Based on Nonstationary Gabor Frames
DEFF Research Database (Denmark)
Ottosen, Emil Solsbæk; Dörfler, Monika
2017-01-01
We propose a new algorithm for time stretching music signals based on the theory of nonstationary Gabor frames (NSGFs). The algorithm extends the techniques of the classical phase vocoder (PV) by incorporating adaptive time-frequency (TF) representations and adaptive phase locking. The adaptive TF
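For context, the fixed-resolution classical PV that the NSGF algorithm generalizes can be sketched as below. Window, hop and FFT sizes are arbitrary choices here, and this is the baseline technique, not the adaptive algorithm of the paper.

```python
import numpy as np

def phase_vocoder(x, stretch, n_fft=1024, hop_a=256):
    """Classical fixed-resolution phase-vocoder time stretch: analyze with
    hop hop_a, resynthesize with hop hop_s = round(stretch*hop_a), and
    propagate per-bin phases at the estimated true frequencies."""
    hop_s = int(round(stretch * hop_a))
    win = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft, hop_a)])
    spec = np.fft.rfft(frames, axis=1)
    omega = 2 * np.pi * np.arange(spec.shape[1]) / n_fft  # bin freqs (rad/sample)
    phase = np.angle(spec[0])
    out = np.zeros(hop_s * len(frames) + n_fft)
    for m in range(len(frames)):
        if m > 0:
            # heterodyned phase increment, wrapped to [-pi, pi]
            dphi = np.angle(spec[m]) - np.angle(spec[m - 1]) - omega * hop_a
            dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
            phase = phase + (omega + dphi / hop_a) * hop_s
        y = np.fft.irfft(np.abs(spec[m]) * np.exp(1j * phase)) * win
        out[m * hop_s: m * hop_s + n_fft] += y
    return out / (np.sum(win ** 2) / hop_s)  # rough overlap-add normalization

fs = 8000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 440.0 * t)
y = phase_vocoder(x, stretch=1.5)   # ~50% longer, same pitch
```

The NSGF approach replaces the single fixed window with windows whose length adapts to the local signal content, addressing the transient smearing this fixed-resolution scheme suffers from.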
Elastic-plastic response characteristics during frequency nonstationary waves
International Nuclear Information System (INIS)
Miyama, T.; Kanda, J.; Iwasaki, R.; Sunohara, H.
1987-01-01
The purpose of this paper is to study the fundamental effects of frequency nonstationarity on inelastic responses. First, the inelastic response characteristics are examined by applying stationary waves. Then a simple representation of nonstationary characteristics is considered for general nonstationary input. The effects of frequency nonstationarity on the response are summarized for inelastic systems. The inelastic response characteristics under white noise and a simple frequency nonstationary wave were investigated, and the conclusions can be summarized as follows. 1) The maximum response values for both the BL model and the OO model correspond fairly well with those estimated from the energy constant law, even when R is small. For the OO model, the maximum displacement response forms a unique curve except for very small R. 2) The plastic deformation for the BL model is affected by a wide range of frequency components as R decreases. The plastic deformation for the OO model can be determined from the last stiffness. 3) The inelastic response of the BL model is considerably affected by the frequency nonstationarity of the input motion, while the response of the OO model is less affected by the nonstationarity. (orig./HP)
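The energy constant law referred to above can be sketched numerically. It is assumed here that R denotes the ratio of yield strength to the required elastic strength, in which case the equal-energy rule gives the ductility demand μ = ((1/R)² + 1)/2; the paper's exact definition of R may differ.

```python
def ductility_energy_rule(R):
    """Ductility demand from the equal-energy (energy constant) rule for an
    elastic-perfectly-plastic system, assuming R = F_y / F_e is the ratio of
    yield strength to the required elastic strength: mu = ((1/R)^2 + 1) / 2."""
    return ((1.0 / R) ** 2 + 1.0) / 2.0

mu_full = ductility_energy_rule(1.0)   # no yielding: ductility demand of 1
mu_half = ductility_energy_rule(0.5)   # yielding at half the elastic demand
```

Smaller R (weaker systems) demands rapidly growing ductility, which is why the correspondence with this law degrades for very small R in the paper's results.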
A bootstrap invariance principle for highly nonstationary long memory processes
Kapetanios, George
2004-01-01
This paper presents an invariance principle for highly nonstationary long memory processes, defined as processes with long memory parameter lying in (1, 1.5). This principle provides the tools for showing asymptotic validity of the bootstrap in the context of such processes.
Cointegration and Econometric Analysis of Non-Stationary Data in ...
African Journals Online (AJOL)
This is in conformity with the philosophy underlying the cointegration theory. Therefore, ignoring cointegration in non-stationary time series variables could lead to misspecification of the underlying process in the determination of corporate income tax in Nigeria. Thus, the study concludes that cointegration is greatly enhanced ...
Non-Stationary Dependence Structures for Spatial Extremes
Huser, Raphaël; Genton, Marc G.
2016-01-01
been developed, and fitted to various types of data. However, a recurrent problem is the modeling of non-stationarity. In this paper, we develop non-stationary max-stable dependence structures in which covariates can be easily incorporated. Inference
Dynamic Memory Model for Non-Stationary Optimization
DEFF Research Database (Denmark)
Bendtsen, Claus Nørgaard; Krink, Thiemo
2002-01-01
Real-world problems are often nonstationary and can cause cyclic, repetitive patterns in the search landscape. For this class of problems, we introduce a new GA with dynamic explicit memory, which showed superior performance compared to a classic GA and a previously introduced memory-based GA for...
Nonstationary Hydrological Frequency Analysis: Theoretical Methods and Application Challenges
Xiong, L.
2014-12-01
Because of its great implications for the design and operation of hydraulic structures under changing environments (either climate change or anthropogenic changes), nonstationary hydrological frequency analysis has become essential. Two important methodological achievements have been made. Without adhering to the consistency assumption of traditional hydrological frequency analysis, the time-varying probability distribution of any hydrological variable can be established by linking the distribution parameters to covariates such as time or physical variables, with the help of powerful tools like the Generalized Additive Model for Location, Scale and Shape (GAMLSS). With the help of copulas, multivariate nonstationary hydrological frequency analysis has also become feasible. However, applying nonstationary hydrological frequency formulae to the design and operation of hydraulic structures under changing environments still faces many challenges in practice. First, nonstationary formulae with time as the covariate can only be extrapolated for a very short period beyond the latest observation time, because such formulae are not physically constrained and the extrapolated outcomes can be unrealistic. There are two physically reasonable alternatives for changing environments: one is to directly link the quantiles or the distribution parameters to measurable physical factors, and the other is to use derived probability distributions based on hydrological processes. However, both methods carry a certain degree of uncertainty. For the design and operation of hydraulic structures under changing environments, it is recommended that the design results of both stationary and nonstationary methods be presented together and compared with each other, to help us understand the potential risks of each method.
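The GAMLSS idea of linking a distribution parameter to a covariate can be sketched with a time-varying GEV fit. This is a minimal illustration, not the paper's setup: the linear trend, sample size, and use of `scipy` are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Synthetic annual maxima whose GEV location parameter drifts linearly in
# time, mu(t) = a + b*t (the simplest GAMLSS-style covariate link)
t = np.arange(60.0)
x = genextreme.rvs(c=-0.1, loc=100.0 + 0.5 * t, scale=10.0, random_state=rng)

def neg_log_lik(theta):
    # Joint negative log-likelihood over intercept, trend, scale, shape
    a, b, log_scale, shape = theta
    return -genextreme.logpdf(x, c=shape, loc=a + b * t,
                              scale=np.exp(log_scale)).sum()

res = minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std()), 0.0],
               method="Nelder-Mead", options={"maxiter": 5000})
print(f"estimated location trend: {res.x[1]:.2f} per year (true 0.5)")
```

The same likelihood with `b` fixed to zero gives the stationary fit, so the two designs can be compared as the abstract recommends.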
Sampling design optimisation for rainfall prediction using a non-stationary geostatistical model
Wadoux, Alexandre M. J.-C.; Brus, Dick J.; Rico-Ramirez, Miguel A.; Heuvelink, Gerard B. M.
2017-09-01
The accuracy of spatial predictions of rainfall by merging rain-gauge and radar data is partly determined by the sampling design of the rain-gauge network. Optimising the locations of the rain-gauges may increase the accuracy of the predictions. Existing spatial sampling design optimisation methods are based on minimisation of the spatially averaged prediction error variance under the assumption of intrinsic stationarity. Over the past years, substantial progress has been made to deal with non-stationary spatial processes in kriging. Various well-documented geostatistical models relax the assumption of stationarity in the mean, while recent studies show the importance of considering non-stationarity in the variance for environmental processes occurring in complex landscapes. We optimised the sampling locations of rain-gauges using an extension of the Kriging with External Drift (KED) model for prediction of rainfall fields. The model incorporates both non-stationarity in the mean and in the variance, which are modelled as functions of external covariates such as radar imagery, distance to radar station and radar beam blockage. Spatial predictions are made repeatedly over time, each time recalibrating the model. The space-time averaged KED variance was minimised by Spatial Simulated Annealing (SSA). The methodology was tested using a case study predicting daily rainfall in the north of England for a one-year period. Results show that (i) the proposed non-stationary variance model outperforms the stationary variance model, and (ii) a small but significant decrease of the rainfall prediction error variance is obtained with the optimised rain-gauge network. In particular, it pays off to place rain-gauges at locations where the radar imagery is inaccurate, while keeping the distribution over the study area sufficiently uniform.
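Spatial Simulated Annealing can be sketched on a toy design problem. The criterion below, mean squared distance to the nearest gauge, is only an assumed stand-in for the space-time averaged KED variance minimized in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# 20 x 20 prediction grid and an initial random network of 8 "gauges"
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)
gauges = rng.uniform(0, 20, size=(8, 2))

def criterion(g):
    # Assumed proxy for the space-averaged prediction error variance:
    # mean squared distance from each grid node to its nearest gauge
    d = np.linalg.norm(grid[:, None, :] - g[None, :, :], axis=2)
    return np.mean(d.min(axis=1) ** 2)

init_crit = curr = criterion(gauges)
temp = 1.0
for step in range(2000):
    cand = gauges.copy()
    k = rng.integers(len(cand))
    cand[k] = np.clip(cand[k] + rng.normal(0.0, 1.0, 2), 0, 20)  # move one gauge
    c = criterion(cand)
    if c < curr or rng.random() < np.exp((curr - c) / temp):     # Metropolis rule
        gauges, curr = cand, c
    temp *= 0.998                                                # cooling schedule
print(f"design criterion: {init_crit:.2f} -> {curr:.2f}")
```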
EDITORIAL: CAMOP: Quantum Non-Stationary Systems CAMOP: Quantum Non-Stationary Systems
Dodonov, Victor V.; Man'ko, Margarita A.
2010-09-01
Although time-dependent quantum systems have been studied since the very beginning of quantum mechanics, they continue to attract the attention of many researchers, and almost every decade new important discoveries or new fields of application are made. Among the impressive results or by-products of these studies, one should note the discovery of the path integral method in the 1940s, coherent and squeezed states in the 1960-70s, quantum tunneling in Josephson contacts and SQUIDs in the 1960s, the theory of time-dependent quantum invariants in the 1960-70s, different forms of quantum master equations in the 1960-70s, the Zeno effect in the 1970s, the concept of geometric phase in the 1980s, decoherence of macroscopic superpositions in the 1980s, quantum non-demolition measurements in the 1980s, dynamics of particles in quantum traps and cavity QED in the 1980-90s, and time-dependent processes in mesoscopic quantum devices in the 1990s. All these topics continue to be the subject of many publications. Now we are witnessing a new wave of interest in quantum non-stationary systems in different areas, from cosmology (the very first moments of the Universe) and quantum field theory (particle pair creation in ultra-strong fields) to elementary particle physics (neutrino oscillations). A rapid increase in the number of theoretical and experimental works on time-dependent phenomena is also observed in quantum optics, quantum information theory and condensed matter physics. Time-dependent tunneling and time-dependent transport in nano-structures are examples of such phenomena. Another emerging direction of study, stimulated by impressive progress in experimental techniques, is related to attempts to observe the quantum behavior of macroscopic objects, such as mirrors interacting with quantum fields in nano-resonators. Quantum effects manifest themselves in the dynamics of nano-electromechanical systems; they are dominant in the quite new and very promising field of circuit
Prestack wavefield approximations
Alkhalifah, Tariq
2013-01-01
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers, and the image point, or in other words, prestack wavefields. Extrapolating such wavefields, nevertheless, suffers from limitations. Chief among them is the singularity associated with horizontally propagating waves. I have devised highly accurate approximations that are free of such singularities. Specifically, I use Padé expansions with denominators given by a power series that is an order lower than that of the numerator, and thus introduce a free variable to balance the series order and normalize the singularity. For the higher-order Padé approximation, the errors are negligible. Additional simplifications, like recasting the DSR formula as a function of scattering angle, allow for a singularity-free form that is useful for constant-angle-gather imaging. A dynamic form of this DSR formula can be supported by kinematic evaluations of the scattering angle to provide efficient prestack wavefield construction. Applying a similar approximation to the dip angle yields an efficient 1D wave equation with the scattering and dip angles extracted from, for example, DSR ray tracing. Application to the complex Marmousi data set demonstrates that these approximations, although they may provide less than optimal results, allow for efficient and flexible implementations. © 2013 Society of Exploration Geophysicists.
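The accuracy gain from replacing a truncated power series with a Padé approximant, which underlies the singularity-free DSR forms, can be illustrated on sqrt(1+x). This is a generic stand-in, not the DSR operator itself.

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of sqrt(1 + x) about x = 0, a stand-in for the
# square-root operators appearing in the DSR relation
coeffs = [1.0, 1 / 2, -1 / 8, 1 / 16, -5 / 128]
p, q = pade(coeffs, 2)                 # [2/2] Padé approximant p(x)/q(x)

x = 0.9                                # far from the expansion point
exact = np.sqrt(1 + x)
taylor_err = abs(np.polyval(coeffs[::-1], x) - exact)
pade_err = abs(p(x) / q(x) - exact)
print(f"Taylor error {taylor_err:.1e} vs Padé error {pade_err:.1e}")
```

The rational form stays bounded where the truncated series degrades, which is the same mechanism the paper exploits to normalize the horizontal-propagation singularity.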
Limitations of shallow nets approximation.
Lin, Shao-Bo
2017-10-01
In this paper, we aim at analyzing the approximation abilities of shallow networks in reproducing kernel Hilbert spaces (RKHSs). We prove that there is a probability measure such that the achievable lower bound for approximation by shallow nets is realized for all functions in balls of the reproducing kernel Hilbert space with high probability, which differs from the classical minimax approximation error estimates. This result, together with existing approximation results for deep nets, shows the limitations of shallow nets and provides a theoretical explanation of why deep nets perform better than shallow nets. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Winkler, R.; Wilhelm, J.
A detailed description is presented of calculating the nonstationary electron distribution function in a weakly ionized collision-dominated plasma from the Boltzmann kinetic equation, respecting the effects of the time-dependent electric field, collision processes, and electron formation and loss. A finite difference approximation was used for the numerical solution. Using the Crank-Nicolson method and parabolic interpolation between the grid points, the Boltzmann equation was transformed into a system of linear equations, which was then solved by iteration to a preset accuracy. Using the calculated distribution function values, the macroscopic plasma parameters were determined and the balance of electron density and energy was checked at each time step. The mathematical procedure is illustrated for a neon plasma perturbed by a rectangular electric pulse. The time development of the distribution function at the moments when the pulse is switched on and off demonstrates the great stability of the numerical solution. (J.U.)
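The Crank-Nicolson scheme used above can be sketched on the 1D heat equation rather than the full kinetic equation; grid sizes and boundary conditions here are illustrative assumptions.

```python
import numpy as np

# Crank-Nicolson for u_t = u_xx on [0,1] with u = 0 at both ends, as a
# stand-in for the implicit time stepping applied to the kinetic equation
nx, nt = 51, 200
dx, dt = 1 / (nx - 1), 1e-4
r = dt / (2 * dx**2)
x = np.linspace(0, 1, nx)
u = np.sin(np.pi * x)                           # initial condition

# Tridiagonal system (I - r*D2) u^{n+1} = (I + r*D2) u^n
A = np.eye(nx) * (1 + 2 * r) - np.eye(nx, k=1) * r - np.eye(nx, k=-1) * r
B = np.eye(nx) * (1 - 2 * r) + np.eye(nx, k=1) * r + np.eye(nx, k=-1) * r
A[0], A[-1] = 0, 0; A[0, 0] = A[-1, -1] = 1     # Dirichlet boundary rows
B[0], B[-1] = 0, 0; B[0, 0] = B[-1, -1] = 1

for _ in range(nt):
    u = np.linalg.solve(A, B @ u)

# Compare against the exact decaying mode exp(-pi^2 t) sin(pi x)
exact = np.exp(-np.pi**2 * nt * dt) * np.sin(np.pi * x)
print(f"max error after {nt} steps: {np.abs(u - exact).max():.2e}")
```

The scheme is unconditionally stable, which is what makes it attractive for the stiff, long-time integrations described in the abstract.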
Virtual cathode regime in nonstationary electric high-current discharge in hydrogen
International Nuclear Information System (INIS)
Baksht, F.G.; Borodin, V.S.; Zhuravlev, V.N.
1988-01-01
The virtual cathode (VC) regime in a non-stationary high-current hydrogen arc is modeled. Basic calculated characteristics of the near-cathode layer are presented. The calculation was conducted for a 1 cm long cathode at a pulse current density of 2×10⁴ A/cm² and a pressure of 10 atm. A rectangular current pulse was considered. It is shown that VC formation is caused by the reduction of the electron temperature in the near-cathode region. This results in a reduction of the ion flux from the plasma to the cathode surface and, finally, in a change of the sign of the space charge and of the field intensity near the surface. During the transition to the VC regime only the cathode temperature and its effective work function change appreciably, while the remaining parameters stay approximately constant
Salomatov, V. V.; Puzyrev, E. M.; Salomatov, A. V.
2018-05-01
A class of nonlinear problems of nonstationary radiative-convective heat transfer under the microwave action with a small penetration depth is considered in a stabilized coolant flow in a circular channel. The solutions to these problems are obtained, using asymptotic procedures at the stages of nonstationary and stationary convective heat transfer on the heat-radiating channel surface. The nonstationary and stationary stages of the solution are matched, using the "longitudinal coordinate-time" characteristic. The approximate solutions constructed on such principles correlate reliably with the exact ones at the limiting values of the operation parameters, as well as with numerical and experimental data of other researchers. An important advantage of these solutions is that they allow the determination of the main regularities of the microwave and thermal radiation influence on convective heat transfer in a channel even before performing cumbersome calculations. It is shown that, irrespective of the heat exchange regime (nonstationary or stationary), the Nusselt number decreases and the rate of the surface temperature change increases with increase in the intensity of thermal action.
Directory of Open Access Journals (Sweden)
Yue Hu
2018-01-01
Wind turbines usually operate under nonstationary conditions, such as wide-range speed fluctuations and time-varying loads. Their critical component, the planetary gearbox, is prone to malfunction or failure, which leads to downtime and repair costs. Therefore, fault diagnosis and condition monitoring for the planetary gearbox in wind turbines is a vital research topic. Meanwhile, the signals measured by the vibration sensors mounted in the gearbox exhibit time-varying and nonstationary features. In this study, a novel time-frequency method based on the high-order synchrosqueezing transform (SST) and the multi-taper empirical wavelet transform (MTEWT) is proposed for the wind turbine planetary gearbox under nonstationary conditions. The high-order SST uses accurate instantaneous frequency approximations to obtain a sharper time-frequency representation (TFR). As the acquired signal consists of many components, like the meshing and rotating components of the gear and bearing, the fault component may be masked by other unrelated components. The MTEWT is used to separate the fault feature from the masking components. A variety of experimental signals of the wind turbine planetary gearbox under nonstationary conditions have been analyzed to demonstrate the effectiveness and robustness of the proposed method. Results show that the proposed method is effective in diagnosing both gear and bearing faults.
Hu, Yue; Tu, Xiaotong; Li, Fucai; Meng, Guang
2018-01-07
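A minimal sketch of extracting a time-varying frequency from a nonstationary signal, using a plain STFT ridge rather than the paper's high-order SST; the chirp stands in (as an assumption) for a speed-varying gear-mesh tone.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# Linear chirp, 50 Hz -> 150 Hz: phase 2*pi*(50t + 25t^2), IF = 50 + 50t
x = np.cos(2 * np.pi * (50 * t + 25 * t**2))

f, frames, Z = stft(x, fs=fs, nperseg=256)
ridge = f[np.abs(Z).argmax(axis=0)]   # crude TF ridge: peak frequency per frame
print(f"estimated instantaneous frequency: {ridge[1]:.0f} Hz -> {ridge[-2]:.0f} Hz")
```

Synchrosqueezing sharpens exactly this kind of smeared STFT ridge by reassigning energy to refined instantaneous-frequency estimates.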
Inferential framework for non-stationary dynamics: theory and applications
International Nuclear Information System (INIS)
Duggento, Andrea; Luchinsky, Dmitri G; McClintock, Peter V E; Smelyanskiy, Vadim N
2009-01-01
An extended Bayesian inference framework is presented, aiming to infer time-varying parameters in non-stationary nonlinear stochastic dynamical systems. The convergence of the method is discussed. The performance of the technique is studied using, as an example, signal reconstruction for a system of neurons modeled by FitzHugh–Nagumo oscillators: it is applied to reconstruction of the model parameters and elements of the measurement matrix, as well as to inference of the time-varying parameters of the non-stationary system. It is shown that the proposed approach is able to reconstruct unmeasured (hidden) variables of the system, to determine the model parameters, to detect stepwise changes of control parameters for each oscillator and to track the continuous evolution of the control parameters in the adiabatic limit
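FitzHugh-Nagumo dynamics with a stepwise change of a control parameter, the kind of non-stationarity the inference framework is designed to track, can be sketched directly. The Euler stepping and parameter values are illustrative, not the authors' settings.

```python
import numpy as np

# Forward-Euler FitzHugh-Nagumo with a stepwise input current I:
# the oscillator rests for the first half and spikes for the second
dt, n = 0.01, 20000
v, w = -1.0, 1.0
trace = np.empty(n)
for k in range(n):
    I = 0.0 if k < n // 2 else 0.8          # stepwise control-parameter change
    dv = v - v**3 / 3 - w + I
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    v, w = v + dt * dv, w + dt * dw
    trace[k] = v

print(f"variance of v before: {trace[:n//2].var():.3f}, after: {trace[n//2:].var():.3f}")
```

A Bayesian filter of the kind described above would be run on `trace` (plus measurement noise) to recover the hidden variable w and detect the step in I.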
Compounding approach for univariate time series with nonstationary variances
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
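The windowing step of the compounding approach, estimating the distribution of local variances, can be sketched on synthetic data; the gamma law for the variances is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonstationary series: Gaussian within each window, but with a
# window-to-window random variance drawn from an (assumed) gamma law
n_windows, win = 400, 250
local_var = rng.gamma(shape=2.0, scale=0.5, size=n_windows)
series = np.concatenate([rng.normal(0.0, np.sqrt(v), win) for v in local_var])

# Windowing step of the compounding approach: recover the local variances
est_var = series.reshape(n_windows, win).var(axis=1)

# Compounding a Gaussian with a variance distribution fattens the tails
kurt = ((series - series.mean()) ** 4).mean() / series.var() ** 2
print(f"mean local variance {est_var.mean():.2f}, sample kurtosis {kurt:.2f} (Gaussian: 3)")
```

The excess kurtosis of the aggregated series is exactly the long-horizon effect the abstract describes: short windows look Gaussian, but the compounded distribution is heavy-tailed.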
Non-stationary condition monitoring through event alignment
DEFF Research Database (Denmark)
Pontoppidan, Niels Henrik; Larsen, Jan
2004-01-01
We present an event alignment framework which enables change detection in non-stationary signals. Classical condition monitoring frameworks have been restrained to laboratory settings with stationary operating conditions, which do not resemble real-world operation. In this paper we apply the technique to non-stationary condition monitoring of large diesel engines based on acoustical emission sensor signals. The performance of the event alignment is analyzed in an unsupervised probabilistic detection framework based on outlier detection with either Principal Component Analysis or Gaussian Process modeling. We are especially interested in the true condition monitoring performance with mixed aligned and unaligned data, e.g., detection of fault conditions in unaligned examples versus false alarms on aligned normal-condition data. Further, we expect ...
Nonstationary ARCH and GARCH with t-distributed Innovations
DEFF Research Database (Denmark)
Pedersen, Rasmus Søndergaard; Rahbek, Anders
Consistency and asymptotic normality are established for the maximum likelihood estimators in the nonstationary ARCH and GARCH models with general t-distributed innovations. The results hold for joint estimation of the (G)ARCH effects and the degrees of freedom parameter parametrizing the t-distribution. With T denoting sample size, classic square-root-T convergence is shown to hold, with closed form expressions for the multivariate covariances.
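A GARCH(1,1) recursion with standardized t innovations, the model class treated above, can be simulated directly. The parameters below are illustrative and sit inside the stationary region, unlike the nonstationary case the paper covers.

```python
import numpy as np

rng = np.random.default_rng(3)

# GARCH(1,1): sigma2[t] = omega + alpha*x[t-1]^2 + beta*sigma2[t-1],
# driven by t-distributed innovations standardised to unit variance
omega, alpha, beta, nu = 0.1, 0.10, 0.80, 8.0
n = 5000
x, sigma2 = np.zeros(n), np.ones(n)
for t in range(1, n):
    sigma2[t] = omega + alpha * x[t - 1] ** 2 + beta * sigma2[t - 1]
    z = rng.standard_t(nu) / np.sqrt(nu / (nu - 2))   # unit-variance t shock
    x[t] = np.sqrt(sigma2[t]) * z

# Volatility clustering shows up as autocorrelated squared returns
x2 = x**2
acf1 = np.corrcoef(x2[:-1], x2[1:])[0, 1]
print(f"lag-1 autocorrelation of squared returns: {acf1:.2f}")
```

Maximizing the corresponding t log-likelihood over (omega, alpha, beta, nu) jointly is the estimation problem whose asymptotics the paper establishes.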
Thin viscoelastic disc subjected to radial non-stationary loading
Directory of Open Access Journals (Sweden)
Adámek V.
2010-07-01
The investigation of non-stationary wave phenomena in isotropic viscoelastic solids using analytical approaches is the aim of this paper. Concretely, the problem of a thin homogeneous disc subjected to a radial pressure load, nonzero on part of its rim, is solved. The external excitation is described by the Heaviside function in time, so a nonstationary state of stress is induced in the disc. The dissipative behaviour of the material studied is represented by the discrete material model of a standard linear viscoelastic solid in the Zener configuration. After the derivation of the final form of the equations of motion, the method of integral transforms in combination with the Fourier method is used to find the solution of the problem. The solving process results in the derivation of the integral transforms of the radial and circumferential displacement components. Finally, the type of singularities of the derived functions and possible methods for their inverse Laplace transform are mentioned.
Learning in Non-Stationary Environments Methods and Applications
Lughofer, Edwin
2012-01-01
Recent decades have seen rapid advances in automatization processes, supported by modern machines and computers. The result is significant increases in system complexity and state changes, information sources, the need for faster data handling and the integration of environmental influences. Intelligent systems, equipped with a taxonomy of data-driven system identification and machine learning algorithms, can handle these problems partially. Conventional learning algorithms in a batch off-line setting fail whenever dynamic changes of the process appear due to non-stationary environments and external influences. Learning in Non-Stationary Environments: Methods and Applications offers a wide-ranging, comprehensive review of recent developments and important methodologies in the field. The coverage focuses on dynamic learning in unsupervised problems, dynamic learning in supervised classification and dynamic learning in supervised regression problems. A later section is dedicated to applications in which dyna...
ADSL Transceivers Applying DSM and Their Nonstationary Noise Robustness
Directory of Open Access Journals (Sweden)
Bostoen Tom
2006-01-01
Dynamic spectrum management (DSM) comprises a new set of techniques for multiuser power allocation and/or detection in digital subscriber line (DSL) networks. At the Alcatel Research and Innovation Labs, we have recently developed a DSM test bed, which allows the performance of DSM algorithms to be evaluated in practice. With this test bed, we have evaluated the performance of a DSM level-1 algorithm known as iterative water-filling in an ADSL scenario. This paper describes, on the one hand, the performance gains achieved with iterative water-filling and, on the other hand, the nonstationary noise robustness of DSM-enabled ADSL modems. It will be shown that DSM trades off nonstationary noise robustness for performance improvements. A new bit swap procedure is then introduced to increase the noise robustness when applying DSM.
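Iterative water-filling can be sketched for two users over a few subchannels; the noise levels, power budgets, and crosstalk gain below are illustrative assumptions, not DSL line parameters.

```python
import numpy as np

def waterfill(noise, budget):
    """Single-user water-filling over parallel subchannels: raise a common
    water level so that noise_k + p_k is constant wherever p_k > 0."""
    order = np.argsort(noise)
    for m in range(len(noise), 0, -1):
        level = (budget + noise[order[:m]].sum()) / m
        if level > noise[order[m - 1]]:          # all m channels stay active
            p = np.zeros_like(noise)
            p[order[:m]] = level - noise[order[:m]]
            return p
    return np.zeros_like(noise)

# Two-user iterative water-filling: each modem treats the other's signal,
# scaled by an assumed crosstalk gain, as extra noise and re-allocates
noise = np.array([1.0, 0.5, 0.2, 0.8])
cross = 0.3
p1 = p2 = np.zeros(4)
for _ in range(20):                               # iterate to a fixed point
    p1 = waterfill(noise + cross * p2, budget=2.0)
    p2 = waterfill(noise + cross * p1, budget=2.0)
print("user 1:", np.round(p1, 3), "user 2:", np.round(p2, 3))
```

Each user repeatedly best-responds to the interference created by the other; the iteration settles at a fixed point, which is the "level-1" coordination the abstract evaluates.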
Network simulation of nonstationary ionic transport through liquid junctions
International Nuclear Information System (INIS)
Castilla, J.; Horno, J.
1993-01-01
Nonstationary ionic transport across liquid junctions has been studied using Network Thermodynamics. A network model for the time-dependent Nernst-Planck-Poisson system of equations is proposed. With this network model and the electrical circuit simulation program PSPICE, the concentrations, charge density, and electrical potentials at short times have been simulated for the binary system NaCl/NaCl. (Author) 13 refs
On the dynamics of non-stationary binary stellar systems
International Nuclear Information System (INIS)
Bekov, A. A.; Bejsekov, A.N.; Aldibaeva, L.T.
2005-01-01
The motion of a test body in the external gravitational field of a binary stellar system, with slowly varying physical parameters of the radiating components, is considered on the basis of the restricted non-stationary photo-gravitational three- and two-body problem. Families of polar and coplanar solutions are obtained. These solutions make possible the dynamical and structural interpretation of young evolving binary stars and galaxies. (author)
Robust Forecasting of Non-Stationary Time Series
Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.
2010-01-01
This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...
A Generalized Framework for Non-Stationary Extreme Value Analysis
Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.
2017-12-01
Empirical trends in climate variables including precipitation, temperature, snow-water equivalent at regional to continental scales are evidence of changes in climate over time. The evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisal of the time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis which can better characterize the severity and likelihood of extreme climatic variables. This is critical to ensure a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) toolbox by Cheng et al. (2014), we present an improved version, i.e. NEVA2.0. The upgraded version herein builds upon a newly-developed hybrid evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time-covariate (e.g., CO2 emissions, large scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by physical conditions that underlie the extreme events (e.g. antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity - Duration - Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. Finally, a Graphical User Interface (GUI) of the package is provided, making NEVA
Nonstationary heat flow in the piston of the turbocharged engine
Directory of Open Access Journals (Sweden)
Piotr GUSTOF
2010-01-01
In this study, numerical computations of nonstationary heat flow, in the form of the temperature distribution on characteristic surfaces of the piston of a turbocharged engine during the initial phase of its operation, are presented. The computations were performed for partial engine load by means of a two-zone combustion model, boundary conditions of the third kind, and the finite element method (FEM), using the COSMOS/M program.
Stationary and nonstationary properties of evolving networks with preferential linkage
International Nuclear Information System (INIS)
Jezewski, W.
2002-01-01
Networks evolving by preferential attachment of both external and internal links are investigated. The rate of adding an external link is assumed to depend linearly on the degree of a preexisting node to which a new node is connected. The process of creating an internal link, between a pair of existing vertices, is assumed to be controlled entirely by the vertex that has more links than the other vertex in the pair, and the rate of creation of such a link is assumed to be, in general, nonlinear in the degree of the more strongly connected vertex. It is shown that degree distributions of networks evolving only by creating internal links display for large degrees a nonstationary power-law decay with a time-dependent scaling exponent. Nonstationary power-law behaviors are numerically shown to persist even when the number of nodes is not fixed and both external and internal connections are introduced, provided that the rate of preferential attachment of internal connections is nonlinear. It is argued that nonstationary effects are not unlikely in real networks, although these effects may not be apparent, especially in networks with a slowly varying mean degree
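Growth by preferential attachment of external links, the linear-rate case above, can be simulated in a few lines; the repeated-targets list is a standard trick for degree-proportional sampling, and the network size is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(4)

# Growth with linear preferential attachment of external links: each new
# node attaches to an existing node with probability proportional to degree
n = 20000
degree = np.ones(n, dtype=int)        # every node ends up with >= 1 link
targets = [0, 1]                      # each node listed once per link end
for new in range(2, n):
    t = targets[rng.integers(len(targets))]   # degree-proportional choice
    targets += [t, new]
    degree[t] += 1

# The stationary degree distribution has a power-law tail (asymptotic
# exponent 3 for this linear external-link rule)
counts = np.bincount(degree)
ks = np.arange(len(counts))
mask = (ks >= 2) & (ks <= 20) & (counts > 0)
slope = np.polyfit(np.log(ks[mask]), np.log(counts[mask]), 1)[0]
print(f"fitted tail exponent over k = 2..20: {-slope:.2f}")
```

Adding internal links with a nonlinear rate, as in the paper, is what produces the nonstationary, time-dependent scaling exponents; this sketch shows only the stationary baseline.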
Medina, Daniel C; Findley, Sally E; Guindo, Boubacar; Doumbia, Seydou
2007-11-21
Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods. In this longitudinal retrospective (01/1996-06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%. The multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions
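A minimal multiplicative Holt-Winters implementation shows the level/trend/seasonal decomposition described above; the smoothing constants and the synthetic monthly series are illustrative assumptions, not the Niono data.

```python
import numpy as np

def holt_winters_mult(y, season, alpha=0.3, beta=0.05, gamma=0.2, horizon=3):
    """Multiplicative Holt-Winters: forecast = (level + h*trend) * seasonal."""
    level, trend = y[:season].mean(), 0.0
    seas = list(y[:season] / level)               # initial seasonal indices
    for t in range(season, len(y)):
        s = seas[t - season]
        new_level = alpha * y[t] / s + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        seas.append(gamma * y[t] / new_level + (1 - gamma) * s)
        level = new_level
    # h-step-ahead forecasts reuse the latest seasonal index for each phase
    return [(level + h * trend) * seas[len(y) - season + h - 1]
            for h in range(1, horizon + 1)]

# Toy monthly series with a linear trend and multiplicative seasonality
t = np.arange(96)
y = (50 + 0.3 * t) * (1 + 0.4 * np.sin(2 * np.pi * t / 12))
forecast = holt_winters_mult(y, season=12)
print("3-month-ahead forecasts:", [round(f, 1) for f in forecast])
```

The multiplicative seasonal factors are what let one parameter set serve diseases with very different amplitudes, which is why the method works as a general-purpose forecaster across the three time-series.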
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Optimizing a Military Supply Chain in the Presence of Random, Non-Stationary Demands
National Research Council Canada - National Science Library
Yew
2003-01-01
... logistics supply chain that satisfies uncertain, non-stationary demands, while taking into account the volatility and singularity of military operations. This research focuses on the development...
Diophantine approximation and badly approximable sets
DEFF Research Database (Denmark)
Kristensen, S.; Thorn, R.; Velani, S.
2006-01-01
. The classical set Bad of `badly approximable' numbers in the theory of Diophantine approximation falls within our framework as do the sets Bad(i,j) of simultaneously badly approximable numbers. Under various natural conditions we prove that the badly approximable subsets of Omega have full Hausdorff dimension...
Hoede, C.; Li, Z.
2001-01-01
In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
Autocalibration method for non-stationary CT bias correction.
Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José
2018-02-01
Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise, 2) a robust and accurate method to estimate the local variance, 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.
Theoretical analysis of radiographic images by nonstationary Poisson processes
International Nuclear Information System (INIS)
Tanaka, Kazuo; Uchida, Suguru; Yamada, Isao.
1980-01-01
This paper deals with the noise analysis of radiographic images obtained in the usual fluorescent screen-film system. The theory of nonstationary Poisson processes is applied to the analysis of the radiographic images containing the object information. The ensemble averages, the autocorrelation functions, and the Wiener spectrum densities of the light-energy distribution at the fluorescent screen and of the film optical-density distribution are obtained. The detection characteristics of the system are evaluated theoretically. Numerical examples of a one-dimensional image are shown and the results are compared with those obtained under the assumption that the object image is related to the background noise by the additive process. (author)
International Nuclear Information System (INIS)
Knuefer; Lindauer
1980-01-01
Besides that, a combination of component failure and human error is often found at spectacular events. Especially the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)
Cappell, M S; Spray, D C; Bennett, M V
1988-06-28
Protractor muscles in the gastropod mollusc Navanax inermis exhibit typical spontaneous miniature end plate potentials with mean amplitude 1.71 +/- 1.19 (standard deviation) mV. The evoked end plate potential is quantized, with a quantum equal to the miniature end plate potential amplitude. When their rate is stationary, occurrence of miniature end plate potentials is a random, Poisson process. When non-stationary, spontaneous miniature end plate potential occurrence is a non-stationary Poisson process, a Poisson process with the mean frequency changing with time. This extends the random Poisson model for miniature end plate potentials to the frequently observed non-stationary occurrence. Reported deviations from a Poisson process can sometimes be accounted for by the non-stationary Poisson process and more complex models, such as clustered release, are not always needed.
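The nonstationary Poisson model invoked above, a Poisson process whose mean frequency changes with time, can be simulated by Lewis-Shedler thinning. The rate function below is an illustrative stand-in for a time-varying miniature end plate potential frequency, not data from the paper:

```python
import math
import random

def thinning(rate, rate_max, t_end, rng):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process
    with intensity rate(t) bounded above by rate_max on [0, t_end]."""
    events, t = [], 0.0
    while True:
        # Candidate event from a homogeneous process at rate rate_max
        t += rng.expovariate(rate_max)
        if t > t_end:
            return events
        # Accept the candidate with probability rate(t)/rate_max
        if rng.random() < rate(t) / rate_max:
            events.append(t)

rng = random.Random(0)
# Illustrative time-varying event frequency (events/s): rises then falls
rate = lambda t: 5.0 + 4.0 * math.sin(math.pi * t / 60.0)
spikes = thinning(rate, rate_max=9.0, t_end=60.0, rng=rng)
# Expected count = integral of rate over [0, 60] = 300 + 480/pi, about 453
print(len(spikes))
```

Setting the rate function to a constant recovers the stationary Poisson case described in the abstract.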
Guo, L.; Van der Wegen, M.; Jay, D.A.; Matte, P.; Wang, Z.B.; Roelvink, J.A.; He, Q.
2015-01-01
River-tide dynamics remain poorly understood, in part because conventional harmonic analysis (HA) does not cope effectively with nonstationary signals. To explore nonstationary behavior of river tides and the modulation effects of river discharge, this work analyzes tidal signals in the Yangtze
Wavelet-Based Methodology for Evolutionary Spectra Estimation of Nonstationary Typhoon Processes
Directory of Open Access Journals (Sweden)
Guang-Dong Zhou
2015-01-01
Closed-form expressions are proposed to estimate the evolutionary power spectral density (EPSD of nonstationary typhoon processes by employing the wavelet transform. Relying on the definition of the EPSD and the concept of the wavelet transform, wavelet coefficients of a nonstationary typhoon process at a certain time instant are interpreted as the Fourier transform of a new nonstationary oscillatory process, whose modulating function is equal to the modulating function of the nonstationary typhoon process multiplied by the wavelet function in time domain. Then, the EPSD of nonstationary typhoon processes is deduced in a closed form and is formulated as a weighted sum of the squared moduli of time-dependent wavelet functions. The weighted coefficients are frequency-dependent functions defined by the wavelet coefficients of the nonstationary typhoon process and the overlapping area of two shifted wavelets. Compared with the EPSD, defined by a sum of the squared moduli of the wavelets in frequency domain in literature, this paper provides an EPSD estimation method in time domain. The theoretical results are verified by uniformly modulated nonstationary typhoon processes and non-uniformly modulated nonstationary typhoon processes.
International Nuclear Information System (INIS)
Chen, Shih-Hung; Chen, Liu
2013-01-01
The nonstationary oscillation of the gyrotron backward wave oscillator (gyro-BWO) with cylindrical interaction structure was studied utilizing both steady-state analyses and time-dependent simulations. Comparisons of the numerical results reveal that the gyro-BWO becomes nonstationary when the trailing field structure completely forms due to the dephasing energetic electrons. The backward propagation of radiated waves with a lower resonant frequency from the trailing field structure interferes with the main internal feedback loop, thereby inducing the nonstationary oscillation of the gyro-BWO. The nonstationary gyro-BWO exhibits the same spectral pattern of modulated oscillations with a constant frequency separation between the central frequency and sidebands throughout the whole system. The frequency separation is found to be scaled with the square root of the maximum field amplitude, thus further demonstrating that the nonstationary oscillation of the gyro-BWO is associated with the beam-wave resonance detuning
Incremental learning of concept drift in nonstationary environments.
Elwell, Ryan; Polikar, Robi
2011-10-01
We introduce an ensemble of classifiers-based approach for incremental learning of concept drift, characterized by nonstationary environments (NSEs), where the underlying data distributions change over time. The proposed algorithm, named Learn++.NSE, learns from consecutive batches of data without making any assumptions on the nature or rate of drift; it can learn from such environments that experience constant or variable rate of drift, addition or deletion of concept classes, as well as cyclical drift. The algorithm learns incrementally, as other members of the Learn++ family of algorithms, that is, without requiring access to previously seen data. Learn++.NSE trains one new classifier for each batch of data it receives, and combines these classifiers using a dynamically weighted majority voting. The novelty of the approach is in determining the voting weights, based on each classifier's time-adjusted accuracy on current and past environments. This approach allows the algorithm to recognize, and react accordingly to, changes in underlying data distributions, as well as to a possible reoccurrence of an earlier distribution. We evaluate the algorithm on several synthetic datasets designed to simulate a variety of nonstationary environments, as well as a real-world weather prediction dataset. Comparisons with several other approaches are also included. Results indicate that Learn++.NSE can track the changing environments very closely, regardless of the type of concept drift. To allow future use, comparison and benchmarking by interested researchers, we also release our data used in this paper. © 2011 IEEE
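The dynamically weighted majority voting at the core of this approach can be sketched in miniature. The actual Learn++.NSE weighting uses a sigmoid-averaged error history; this simplified toy version uses plain log-odds weights from recent-batch error rates, and the stump classifiers and error values are invented for illustration:

```python
import math

# Simplified sketch of dynamically weighted majority voting: each binary
# classifier is weighted by the log-odds of its recent error rate, so
# classifiers fitted to outdated concepts are voted down after drift.

def vote(classifiers, errors, x, eps=1e-6):
    """Combine binary classifiers with log-odds weights from recent errors."""
    weights = [math.log((1 - e + eps) / (e + eps)) for e in errors]
    score = sum(w * (1 if clf(x) == 1 else -1)
                for w, clf in zip(weights, classifiers))
    return 1 if score >= 0 else 0

# Three stump "classifiers"; the environment has drifted so that only the
# last one matches the current concept (x > 5 -> class 1).
classifiers = [lambda x: 1 if x > 1 else 0,
               lambda x: 1 if x > 3 else 0,
               lambda x: 1 if x > 5 else 0]
# Recent-batch error rates (illustrative): older concepts now err often
errors = [0.45, 0.30, 0.05]
print([vote(classifiers, errors, x) for x in (2, 4, 6)])  # -> [0, 0, 1]
```

The ensemble follows the current concept even though a plain unweighted majority of the three stumps would not.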
Woźniak, M.
2016-06-02
We study the features of a new mixed integration scheme dedicated to solving non-stationary variational problems. The scheme is composed of the FEM approximation with respect to the space variable coupled with a 3-leveled time integration scheme with a linearized right-hand side operator. It was applied in solving the Cahn-Hilliard parabolic equation with a nonlinear, fourth-order elliptic part. The second order of the approximation along the time variable was proven. Moreover, the good scalability of the software based on this scheme was confirmed during simulations. We verify the proposed time integration scheme by monitoring the Ginzburg-Landau free energy. The numerical simulations are performed by using a parallel multi-frontal direct solver executed over the STAMPEDE Linux cluster. Its scalability was compared to the results of three direct solvers, including MUMPS, SuperLU and PaSTiX.
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef; Nobile, Fabio; Tempone, Raul; Wolfers, Sören
2017-01-01
, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose
Directory of Open Access Journals (Sweden)
Ermuratschii V.V.
2014-04-01
The paper presents a method for the approximate calculation of the non-stationary temperature field inside thermal packed-bed energy storages with sensible and latent heat. Applying thermoelectric models and computational methods from electrical engineering, the task of computing non-stationary heat transfer is resolved with respect to boundary conditions of the third kind, without applying differential equations of heat transfer. For sub-volumes of the energy storage the method is executed iteratively in the spatiotemporal domain. Single-body heating is modeled for each sub-volume, and modeling conditions are assumed to be identical for the remaining bodies located in the same sub-volume. For each iteration step the boundary conditions are represented by results from the previous step. The fulfillment of the first law of thermodynamics for the system "energy storage - body" is obtained by an iterative search for the mean temperature of the energy storage. Under variable boundary conditions the proposed method may be applied to calculating the temperature field inside energy storages with packed beds consisting of solid material, liquid and phase-change material. The method may also be employed to compute the transient, power and performance characteristics of packed bed energy storages.
International Nuclear Information System (INIS)
Winterflood, A.H.
1980-01-01
In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)
Multiresolution approximation for volatility processes
E. Capobianco (Enrico)
2002-01-01
textabstractWe present an application of wavelet techniques to non-stationary time series with the aim of detecting the dependence structure which is typically found to characterize intraday stock index financial returns. It is particularly important to identify what components truly belong to the
Enhancement of Non-Stationary Speech using Harmonic Chirp Filters
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2015-01-01
In this paper, the issue of single channel speech enhancement of non-stationary voiced speech is addressed. The non-stationarity of speech is well known, but state of the art speech enhancement methods assume stationarity within frames of 20–30 ms. We derive optimal distortionless filters that take...... the non-stationarity nature of voiced speech into account via linear constraints. This is facilitated by imposing a harmonic chirp model on the speech signal. As an implicit part of the filter design, the noise statistics are also estimated based on the observed signal and parameters of the harmonic chirp...... model. Simulations on real speech show that the chirp based filters perform better than their harmonic counterparts. Further, it is seen that the gain of using the chirp model increases when the estimated chirp parameter is big corresponding to periods in the signal where the instantaneous fundamental...
Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals
Hedayatifar, L.; Vahabi, M.; Jafari, G. R.
2011-08-01
When many variables are coupled to each other, a single case study cannot give thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used. But in nonstationary cases, the multifractal detrended cross-correlation analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we have extended MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. Here, we have calculated the multifractal properties of the coupled time series, and by comparing CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method with selected examples from air pollution and foreign exchange rates.
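CDFA and MF-DXA build on ordinary detrended fluctuation analysis. A minimal DFA-1 in plain Python is sketched below; the white-noise input is an illustrative choice (its scaling exponent should come out near 0.5), not one of the paper's data sets:

```python
import math
import random

def dfa_fluctuation(x, s):
    """DFA-1 fluctuation function F(s): RMS of the linearly detrended
    profile over non-overlapping windows of length s."""
    mean = sum(x) / len(x)
    profile, c = [], 0.0
    for v in x:                      # integrate the mean-removed series
        c += v - mean
        profile.append(c)
    n_win = len(profile) // s
    sq = 0.0
    for w in range(n_win):
        seg = profile[w * s:(w + 1) * s]
        # Linear least-squares detrend within the window
        t = list(range(s))
        tm, ym = (s - 1) / 2, sum(seg) / s
        denom = sum((ti - tm) ** 2 for ti in t)
        slope = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, seg)) / denom
        sq += sum((yi - (ym + slope * (ti - tm))) ** 2
                  for ti, yi in zip(t, seg))
    return math.sqrt(sq / (n_win * s))

rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(8192)]
# Scaling exponent from two scales; approximately 0.5 for uncorrelated noise
f1, f2 = dfa_fluctuation(noise, 16), dfa_fluctuation(noise, 256)
alpha = math.log(f2 / f1) / math.log(256 / 16)
print(round(alpha, 2))
```

MF-DXA and CDFA generalize this by detrending cross-covariances of two or more profiles and by raising the window fluctuations to varying moments q.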
Nonstationary Transient Vibroacoustic Response of a Beam Structure
Caimi, R. E.; Margasahayam, R. N.; Nayfeh, Jamal F.
1997-01-01
This study consists of an investigation into the nonstationary transient response of the Verification Test Article (VETA) when subjected to random acoustic excitation. The goal is to assess excitation models that can be used in the design of structures and equipment when knowledge of the structure and the excitation is limited. The VETA is an instrumented cantilever beam that was exposed to acoustic loading during five Space Shuttle launches. The VETA analytical structural model response is estimated using the direct averaged power spectral density and the normalized pressure spectra methods. The estimated responses are compared to the measured response of the VETA. These comparisons are discussed with a focus on prediction conservatism and current design practice.
Martingales, nonstationary increments, and the efficient market hypothesis
McCauley, Joseph L.; Bassler, Kevin E.; Gunaratne, Gemunu H.
2008-06-01
We discuss the deep connection between nonstationary increments, martingales, and the efficient market hypothesis for stochastic processes x(t) with arbitrary diffusion coefficients D(x,t). We explain why a test for a martingale is generally a test for uncorrelated increments. We explain why martingales look Markovian at the level of both simple averages and 2-point correlations. But while a Markovian market has no memory to exploit and cannot be beaten systematically, a martingale admits memory that might be exploitable in higher order correlations. We also use the analysis of this paper to correct a misstatement of the ‘fair game’ condition in terms of serial correlations in Fama’s paper on the EMH. We emphasize that the use of the log increment as a variable in data analysis generates spurious fat tails and spurious Hurst exponents.
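The point that a test for a martingale is essentially a test for uncorrelated increments can be checked numerically. The two processes below are generic illustrations (Gaussian martingale increments versus MA(1) increments with memory), not the paper's market models:

```python
import random

def lag1_corr(z):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(z)
    m = sum(z) / n
    var = sum((v - m) ** 2 for v in z)
    cov = sum((z[i] - m) * (z[i + 1] - m) for i in range(n - 1))
    return cov / var

rng = random.Random(7)
eps = [rng.gauss(0, 1) for _ in range(20000)]

# Martingale increments: uncorrelated by construction
mart_inc = eps
# Increments with memory: each increment partly repeats the previous shock
mem_inc = [eps[0]] + [0.5 * eps[i - 1] + eps[i] for i in range(1, len(eps))]

# Lag-1 correlations: near 0 (martingale) vs roughly 0.4 (MA(1) memory)
print(round(lag1_corr(mart_inc), 2), round(lag1_corr(mem_inc), 2))
```

As the abstract stresses, a near-zero serial correlation of increments does not rule out exploitable structure in higher-order correlations; this check probes only the 2-point level.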
Gravitational entropy of nonstationary black holes and spherical shells
International Nuclear Information System (INIS)
Hiscock, W.A.
1989-01-01
The problem of defining the gravitational entropy of a nonstationary black hole is considered in a simple model consisting of a spherical shell which collapses into a preexisting black hole. The second law of black-hole mechanics strongly suggests identifying one-quarter of the area of the event horizon as the gravitational entropy of the system. It is, however, impossible to accurately locate the position of the global event horizon using only local measurements. In order to maintain a local thermodynamics, it is suggested that the entropy of the black hole be identified with one-quarter the area of the apparent horizon. The difference between the event-horizon entropy (to the extent it can be determined) and the apparent-horizon entropy may then be interpreted as the gravitational entropy of the collapsing shell. The total (event-horizon) gravitational entropy evolves in a smooth (C⁰) fashion, even in the presence of δ-functional shells of matter
Simulation of nonstationary phenomena in atmospheric-pressure glow discharge
Korolev, Yu. D.; Frants, O. B.; Nekhoroshev, V. O.; Suslov, A. I.; Kas'yanov, V. S.; Shemyakin, I. A.; Bolotov, A. V.
2016-06-01
Nonstationary processes in atmospheric-pressure glow discharge manifest themselves in spontaneous transitions from the normal glow discharge into a spark. In the experiments, both so-called completed transitions in which a highly conductive constricted channel arises and incomplete transitions accompanied by the formation of a diffuse channel are observed. A model of the positive column of a discharge in air is elaborated that allows one to interpret specific features of the discharge both in the stationary stage and during its transition into a spark and makes it possible to calculate the characteristic oscillatory current waveforms for completed transitions into a spark and aperiodic ones for incomplete transitions. The calculated parameters of the positive column in the glow discharge mode agree well with experiment. Data on the densities of the most abundant species generated in the discharge (such as atomic oxygen, metastable nitrogen molecules, ozone, nitrogen oxides, and negative oxygen ions) are presented.
Non-stationary vibrations of a thin viscoelastic orthotropic beam
Czech Academy of Sciences Publication Activity Database
Adámek, V.; Valeš, František; Tikal, B.
2009-01-01
Roč. 71, č. 12 (2009), e2569-e2576 ISSN 0362-546X R&D Projects: GA ČR(CZ) GA101/07/0946 Institutional research plan: CEZ:AV0Z20760514 Keywords: thin beam * non-stationary vibration * analytical solution Subject RIV: BI - Acoustics Impact factor: 1.487, year: 2009 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V0Y-4WB3N8S-4&_user=640952&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1156243286&_rerunOrigin=google&_acct=C000034318&_version=1&_urlVersion=0&_userid=640952&md5=ce096901a3382058455e822a20645820
Generalized Predictive Control for Non-Stationary Systems
DEFF Research Database (Denmark)
Palsson, Olafur Petur; Madsen, Henrik; Søgaard, Henning Tangen
1994-01-01
This paper shows how the generalized predictive control (GPC) can be extended to non-stationary (time-varying) systems. If the time-variation is slow, then the classical GPC can be used in context with an adaptive estimation procedure of a time-invariant ARIMAX model. However, in this paper prior...... knowledge concerning the nature of the parameter variations is assumed available. The GPC is based on the assumption that the prediction of the system output can be expressed as a linear combination of present and future controls. Since the Diophantine equation cannot be used due to the time......-variation of the parameters, the optimal prediction is found as the general conditional expectation of the system output. The underlying model is of an ARMAX-type instead of an ARIMAX-type as in the original version of the GPC (Clarke, D. W., C. Mohtadi and P. S. Tuffs (1987). Automatica, 23, 137-148) and almost all later...
A Phase Vocoder Based on Nonstationary Gabor Frames
DEFF Research Database (Denmark)
Ottosen, Emil Solsbæk; Dörfler, Monika
2017-01-01
We propose a new algorithm for time stretching music signals based on the theory of nonstationary Gabor frames (NSGFs). The algorithm extends the techniques of the classical phase vocoder (PV) by incorporating adaptive timefrequency (TF) representations and adaptive phase locking. The adaptive TF...... representations imply good time resolution for the onsets of attack transients and good frequency resolution for the sinusoidal components. We estimate the phase values only at peak channels and the remaining phases are then locked to the values of the peaks in an adaptive manner. During attack transients we keep...... that with just three times as many TF coefficients as signal samples, artifacts such as phasiness and transient smearing can be greatly reduced compared to the classical PV. The proposed algorithm is tested on both synthetic and real world signals and compared with state of the art algorithms in a reproducible...
Nonstationary signals phase-energy approach-theory and simulations
Klein, R; Braun, S; 10.1006/mssp.2001.1398
2001-01-01
Modern time-frequency methods are intended to deal with a variety of nonstationary signals. One specific class, prevalent in the area of rotating machines, is that of harmonic signals of varying frequencies and amplitude. This paper presents a new adaptive phase-energy (APE) approach for time-frequency representation of varying harmonic signals. It is based on the concept of phase (frequency) paths and the instantaneous power spectral density (PSD). It is this path which represents the dynamic behaviour of the system generating the observed signal. The proposed method utilises dynamic filters based on an extended Nyquist theorem, enabling extraction of signal components with optimal signal-to-noise ratio. The APE detects the most energetic harmonic components (frequency paths) in the analysed signal. Tests on simulated signals show the superiority of the APE in resolution and resolving power as compared to STFT and wavelet wave-packet decomposition. The dynamic filters also enable the reconstruction of the ...
Explicitly solvable complex Chebyshev approximation problems related to sine polynomials
Freund, Roland
1989-01-01
Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.
A Nonstationary Markov Model Detects Directional Evolution in Hymenopteran Morphology.
Klopfstein, Seraina; Vilhelmsen, Lars; Ronquist, Fredrik
2015-11-01
Directional evolution has played an important role in shaping the morphological, ecological, and molecular diversity of life. However, standard substitution models assume stationarity of the evolutionary process over the time scale examined, thus impeding the study of directionality. Here we explore a simple, nonstationary model of evolution for discrete data, which assumes that the state frequencies at the root differ from the equilibrium frequencies of the homogeneous evolutionary process along the rest of the tree (i.e., the process is nonstationary, nonreversible, but homogeneous). Within this framework, we develop a Bayesian approach for testing directional versus stationary evolution using a reversible-jump algorithm. Simulations show that when only data from extant taxa are available, the success in inferring directionality is strongly dependent on the evolutionary rate, the shape of the tree, the relative branch lengths, and the number of taxa. Given suitable evolutionary rates (0.1-0.5 expected substitutions between root and tips), accounting for directionality improves tree inference and often allows correct rooting of the tree without the use of an outgroup. As an empirical test, we apply our method to study directional evolution in hymenopteran morphology. We focus on three character systems: wing veins, muscles, and sclerites. We find strong support for a trend toward loss of wing veins and muscles, while stationarity cannot be ruled out for sclerites. Adding fossil and time information in a total-evidence dating approach, we show that accounting for directionality results in more precise estimates not only of the ancestral state at the root of the tree, but also of the divergence times. Our model relaxes the assumption of stationarity and reversibility by adding a minimum of additional parameters, and is thus well suited to studying the nature of the evolutionary process in data sets of limited size, such as morphology and ecology. © The Author
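The nonstationary-but-homogeneous setup described above, root frequencies differing from the equilibrium frequencies of the substitution process, can be illustrated with a toy 2-state chain. The transition probabilities below are invented for illustration, not estimates from the hymenopteran data:

```python
# Toy illustration of a homogeneous but nonstationary 2-state Markov model:
# the root distribution differs from the equilibrium of the process, so
# state frequencies drift directionally along the tree. Numbers are
# illustrative assumptions only.

def step(dist, P):
    """One substitution step: row vector times transition matrix."""
    return [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

# Transition matrix biased toward character loss (e.g. losing a wing vein)
P = [[0.95, 0.05],   # present -> present / absent
     [0.01, 0.99]]   # absent  -> present / absent

# Equilibrium pi satisfies pi = pi P; here pi = (1/6, 5/6)
pi = [1 / 6, 5 / 6]

# Nonstationary root: the ancestor is assumed to have the vein
dist = [1.0, 0.0]
for _ in range(100):
    dist = step(dist, P)
print([round(d, 3) for d in dist])  # decays toward equilibrium (0.167, 0.833)
```

Because the root frequencies are free parameters rather than pinned to pi, data simulated under such a chain carry information about the direction of change, which is what the reversible-jump test in the abstract exploits.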
Partitioning uncertainty in streamflow projections under nonstationary model conditions
Chawla, Ila; Mujumdar, P. P.
2018-02-01
Assessing the impacts of Land Use (LU) and climate change on future streamflow projections is necessary for efficient management of water resources. However, model projections are burdened with significant uncertainty arising from various sources. Most of the previous studies have considered climate models and scenarios as major sources of uncertainty, but uncertainties introduced by land use change and hydrologic model assumptions are rarely investigated. In this paper an attempt is made to segregate the contribution from (i) general circulation models (GCMs), (ii) emission scenarios, (iii) land use scenarios, (iv) stationarity assumption of the hydrologic model, and (v) internal variability of the processes, to overall uncertainty in streamflow projections using analysis of variance (ANOVA) approach. Generally, most of the impact assessment studies are carried out with unchanging hydrologic model parameters in future. It is, however, necessary to address the nonstationarity in model parameters with changing land use and climate. In this paper, a regression based methodology is presented to obtain the hydrologic model parameters with changing land use and climate scenarios in future. The Upper Ganga Basin (UGB) in India is used as a case study to demonstrate the methodology. The semi-distributed Variable Infiltration Capacity (VIC) model is set-up over the basin, under nonstationary conditions. Results indicate that model parameters vary with time, thereby invalidating the often-used assumption of model stationarity. The streamflow in UGB under the nonstationary model condition is found to reduce in future. The flows are also found to be sensitive to changes in land use. Segregation results suggest that model stationarity assumption and GCMs along with their interactions with emission scenarios, act as dominant sources of uncertainty. This paper provides a generalized framework for hydrologists to examine stationarity assumption of models before considering them
Non-Linear Approximation of Bayesian Update
Litvinenko, Alexander
2016-01-01
We develop a non-linear approximation of the expensive Bayesian update formula. This non-linear approximation is applied directly to the polynomial chaos coefficients. In this way, we avoid Monte Carlo sampling and sampling error. We can show that the famous Kalman update formula is a particular case of this update.
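The claim that the Kalman update is a particular case of the Bayesian update can be verified directly in the scalar Gaussian case. This is a generic identity, not the paper's polynomial chaos construction:

```python
# For scalar Gaussians, multiplying prior and likelihood (Bayes) reproduces
# the Kalman measurement update with gain K exactly.

def bayes_gaussian(mu0, var0, y, var_obs):
    """Posterior of prior N(mu0, var0) after observing y = x + noise,
    noise ~ N(0, var_obs): the product of two Gaussians."""
    var_post = 1.0 / (1.0 / var0 + 1.0 / var_obs)
    mu_post = var_post * (mu0 / var0 + y / var_obs)
    return mu_post, var_post

def kalman_update(mu0, var0, y, var_obs):
    """Classical Kalman measurement update with gain K."""
    K = var0 / (var0 + var_obs)
    return mu0 + K * (y - mu0), (1 - K) * var0

print(bayes_gaussian(0.0, 4.0, 2.0, 1.0))  # -> (1.6, 0.8)
print(kalman_update(0.0, 4.0, 2.0, 1.0))   # same result up to float rounding
```

The non-linear approximation in the abstract generalizes this linear-Gaussian special case to updates of polynomial chaos coefficients.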
The calculation of average error probability in a digital fibre optical communication system
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared with the characteristic function technique. The latter predicts a higher receiver sensitivity.
Bounding the error of a continuous approximation for linear systems
African Journals Online (AJOL)
DR S.E UWAMUSI
Across all branches of engineering and the sciences, computational methods provide the … Stephen Ehidiamhen Uwamusi, Department of Mathematics, University of Benin, … Table 5: Applied Rump's operation on the Gauss-Seidel method (3.8).
Practical error analysis of the quasi-steady-state approximation ...
African Journals Online (AJOL)
It has become associated with singular perturbation theory [1], which provides a means of assessing the accuracy and validity of the QSSA, but this involves rather complicated mathematics. In contrast, it is shown here how the necessary safeguards against misuse can be based on a simpler intuitive approach to singular ...
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay
2017-02-13
In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
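For context, the i.i.d. baseline that MLMC-ABC is compared against is plain ABC rejection sampling, which can be sketched as follows (all names here are illustrative, not from the paper):

```python
import numpy as np

def abc_rejection(y_obs, prior_draw, simulate, distance, eps, n_prop, rng):
    """Keep parameter draws whose simulated summary falls within eps of
    the observed summary; the kept draws approximate the ABC posterior."""
    kept = []
    for _ in range(n_prop):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng), y_obs) <= eps:
            kept.append(theta)
    return np.array(kept)
```

A toy usage: inferring the mean of a normal distribution from its sample mean, with a uniform prior. Shrinking eps makes the ABC posterior more accurate but the acceptance rate lower, which is exactly the cost trade-off that MLMC targets.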
Uniform analytic approximation of Wigner rotation matrices
Hoffmann, Scott E.
2018-02-01
We derive the leading asymptotic approximation, for small angle θ, of the Wigner rotation matrix elements d^j_{m1,m2}(θ), uniform in j, m1, and m2. The result is in terms of a Bessel function of integer order. We numerically investigate the error for a variety of cases and find that the approximation can be useful over a significant range of angles. This approximation has applications in the partial wave analysis of wavepacket scattering.
Some strange numerical solutions of the non-stationary Navier-Stokes equations in pipes
Energy Technology Data Exchange (ETDEWEB)
Rummler, B.
2001-07-01
A general class of boundary-pressure-driven flows of incompressible Newtonian fluids in three-dimensional pipes with known steady laminar realizations is investigated. Considering the laminar velocity as a 3D vector function of the cross-section-circle arguments, we fix the scale for the velocity by the L{sub 2}-norm of the laminar velocity. The usual new variables are introduced to obtain dimension-free Navier-Stokes equations. The characteristic physical and geometrical quantities are subsumed in the energetic Reynolds number Re and a parameter {psi}, which involves the energetic ratio and the directions of the boundary-driven part and the pressure-driven part of the laminar flow. The solution of the non-stationary dimension-free Navier-Stokes equations is sought in the form u=u{sub L}+u, where u{sub L} is the scaled laminar velocity, and periodic conditions in the center-line direction are prescribed for u. An autonomous system (S) of ordinary differential equations for the time-dependent coefficients of the spatial Stokes eigenfunctions is obtained by applying the Galerkin method to the dimension-free Navier-Stokes equations for u. The finite-dimensional approximations u{sub N({lambda})} of u are defined in the usual way. (orig.)
International Nuclear Information System (INIS)
Lin, S.; Li, Y.; Liu, C.; Wang, H.; Zhang, N.; Cui, W.; Neuber, A.
2015-01-01
This paper presents a statistical theory for the initial onset of multipactor breakdown in coaxial transmission lines, taking both the nonuniform electric field and the random electron emission velocity into account. A general numerical method is first developed to construct the joint probability density function based on the approximate equation of the electron trajectory. The nonstationary dynamics of the multipactor process on both surfaces of coaxial lines are modelled based on the probability of various impacts and their corresponding secondary emission. The resonance assumption of the classical theory, with its independent double-sided and single-sided impacts, is replaced by a treatment of their interaction. As a result, the time evolutions of the electron population for exponential growth and absorption on both the inner and outer conductor, in response to applied voltages above and below the multipactor breakdown level, are obtained to investigate the exact mechanism of multipactor discharge in coaxial lines. Furthermore, the multipactor threshold predictions of the presented model are compared with experimental results using the measured secondary emission yield of the tested samples, showing reasonable agreement. Finally, the detailed impact scenario reveals that single-surface multipactor is more likely to occur at a higher outer-to-inner conductor radius ratio.
DEFF Research Database (Denmark)
Harrod, Steven; Kelton, W. David
2006-01-01
Nonstationary Poisson processes are appropriate in many applications, including disease studies, transportation, finance, and social policy. The authors review the risks of ignoring nonstationarity in Poisson processes and demonstrate three algorithms for generation of Poisson processes...
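One standard algorithm for the generation task described above is Lewis-Shedler thinning, sketched below. This is the textbook method, offered as an illustration; it is not necessarily one of the three algorithms the authors demonstrate:

```python
import numpy as np

def thinning(rate, rate_max, horizon, rng):
    """Lewis-Shedler thinning: simulate a nonstationary Poisson process
    with intensity rate(t) <= rate_max on [0, horizon)."""
    t, events = 0.0, []
    while True:
        # candidate arrival from a homogeneous process at the bounding rate
        t += rng.exponential(1.0 / rate_max)
        if t >= horizon:
            return np.array(events)
        # accept the candidate with probability rate(t) / rate_max
        if rng.uniform() < rate(t) / rate_max:
            events.append(t)
```

Ignoring the nonstationarity here, i.e., simulating at the average rate instead, distorts the clustering of events; thinning preserves the time-varying intensity exactly as long as the bound rate_max is valid.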
Comparison of nonstationary generalized logistic models based on Monte Carlo simulation
Directory of Open Access Journals (Sweden)
S. Kim
2015-06-01
Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow records. The time-dependent behaviour of statistics in hydrologic data is widely referred to as nonstationarity. Various nonstationary GEV and generalized Pareto models have therefore been suggested for frequency analysis of nonstationary annual maximum and peak-over-threshold (POT) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposes a nonstationary generalized logistic model with time-dependent parameters. The parameters of the proposed model are estimated by the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared with existing models by Monte Carlo simulation to investigate the characteristics and applicability of the models.
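The time-dependent-parameter idea can be sketched with a simpler stand-in: a Gumbel distribution (the zero-shape limit of the GEV) with a linear trend in the location parameter, fitted by maximum likelihood with a generic optimizer rather than the paper's generalized logistic model and Newton-Raphson scheme. All names below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_trend_nll(params, t, x):
    """Negative log-likelihood of a Gumbel model with location mu0 + mu1 * t."""
    mu0, mu1, log_beta = params
    beta = np.exp(log_beta)                  # keeps the scale positive
    z = (x - mu0 - mu1 * t) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

def fit_gumbel_trend(t, x):
    """Fit (mu0, mu1, beta) by maximum likelihood."""
    res = minimize(gumbel_trend_nll, x0=[x.mean(), 0.0, 0.0],
                   args=(t, x), method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    mu0, mu1, log_beta = res.x
    return mu0, mu1, np.exp(log_beta)
```

A significantly nonzero mu1 indicates a trend in the location of the extremes, which is the nonstationary signal such models are designed to capture.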
Self-adaptive change detection in streaming data with non-stationary distribution
Zhang, Xiangliang; Wang, Wei
2010-01-01
Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting the changes in massive streaming data with a non
DEFF Research Database (Denmark)
Kock, Anders Bredahl
2016-01-01
We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
International Nuclear Information System (INIS)
Shintani, Masanori
1988-01-01
This paper shows that the average and variance of the accumulated damage caused by earthquakes in a piping system attached to a building are related to the seismic response factor λ. The earthquakes referred to in this paper are of a non-stationary random process kind. The average is proportional to λ^2 and the variance to λ^4. The analytical values of the average and variance for a single-degree-of-freedom system are compared with those obtained from computer simulations; here the building is modelled as a single-degree-of-freedom system. Both averages of accumulated damage are approximately equal. The variance obtained from the analysis does not coincide with that from the simulations. The reason is considered to be the forced vibration by sinusoidal waves, with random waves included among the sinusoidal waves. Taking account of the amplitude magnification factor, the values of the variance approach those obtained from the simulations. (author)
Directory of Open Access Journals (Sweden)
Orlov Alexey
2016-01-01
This article presents the results of developing a mathematical model of nonstationary separation processes occurring in gas centrifuge cascades for the separation of multicomponent isotope mixtures. The model was used to calculate the parameters of a gas centrifuge cascade for the separation of germanium isotopes. Comparison of the obtained values with the results of other authors showed that the developed mathematical model adequately describes nonstationary separation processes in gas centrifuge cascades for the separation of multicomponent isotope mixtures.
Orlov, Aleksey Alekseevich; Ushakov, Anton; Sovach, Victor
2017-01-01
The article presents the results of developing a mathematical model of nonstationary hydraulic processes in a gas centrifuge cascade for the separation of multicomponent isotope mixtures. The model was used to calculate the parameters of a gas centrifuge cascade for the separation of silicon isotopes. Comparison of the obtained values with the results of other authors showed that the developed mathematical model adequately describes nonstationary hydraulic processes in gas centrifuge cascades for separation...
Analyzing nonstationary financial time series via hilbert-huang transform (HHT)
Huang, Norden E. (Inventor)
2008-01-01
An apparatus, computer program product, and method for analyzing non-stationary time-varying phenomena. A representation of a non-stationary time-varying phenomenon is recursively sifted using Empirical Mode Decomposition (EMD) to extract intrinsic mode functions (IMFs). The representation is filtered to extract intrinsic trends by combining a number of IMFs. The intrinsic trend is inherent in the data and identifies an IMF indicating the variability of the phenomenon. The trend may also be used to detrend the data.
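The sifting loop at the heart of EMD can be sketched as follows. This is a bare-bones illustration (a fixed number of sifting passes, naive endpoint handling for the envelopes), not the patented implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One sifting pass: subtract the mean of the upper and lower
    cubic-spline envelopes through the local extrema."""
    up = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    lo = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(up) < 2 or len(lo) < 2:
        return None                       # too few extrema: treat as residue
    # crude edge handling: anchor both envelopes at the endpoints
    ui = np.r_[0, up, len(x) - 1]
    li = np.r_[0, lo, len(x) - 1]
    env_mean = 0.5 * (CubicSpline(t[ui], x[ui])(t) + CubicSpline(t[li], x[li])(t))
    return x - env_mean

def extract_imf(t, x, n_sift=10):
    """Extract one intrinsic mode function by repeated sifting."""
    h = x.copy()
    for _ in range(n_sift):
        h2 = sift_once(t, h)
        if h2 is None:
            break
        h = h2
    return h
```

Subtracting each extracted IMF from the signal and repeating on the residue yields the full decomposition; the final slowly-varying residue is the intrinsic trend mentioned in the abstract.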
Trend analysis using non-stationary time series clustering based on the finite element method
Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.
2014-01-01
In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...
Numerical optimization with computational errors
Zaslavski, Alexander J
2016-01-01
This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author shows that algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. The monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods, and Newton's method...
Error due to unresolved scales in estimation problems for atmospheric data assimilation
Janjic, Tijana
The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite-dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two-dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt-Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only
Teaching geographical hydrology in a non-stationary world
Hendriks, Martin R.; Karssenberg, Derek
2010-05-01
Understanding hydrological processes in a non-stationary world requires knowledge of hydrological processes and their interactions. Also, one needs to understand the (non-linear) relations between the hydrological system and other parts of our Earth system, such as the climate system, the socio-economic system, and the ecosystem. To provide this knowledge and understanding we think that three components are essential when teaching geographical hydrology. First of all, a student needs to acquire a thorough understanding of classical hydrology. For this, knowledge of the basic hydrological equations, such as the energy equation (Bernoulli), flow equation (Darcy), continuity (or water balance) equation is needed. This, however, is not sufficient to make a student fully understand the interactions between hydrological compartments, or between hydrological subsystems and other parts of the Earth system. Therefore, secondly, a student also needs to be knowledgeable of methods by which the different subsystems can be coupled; in general, numerical models are used for this. A major disadvantage of numerical models is their complexity. A solution may be to use simpler models, provided that a student really understands how hydrological processes function in our real, non-stationary world. The challenge for a student then lies in understanding the interactions between the subsystems, and to be able to answer questions such as: what is the effect of a change in vegetation or land use on runoff? Thirdly, knowledge of field hydrology is of utmost importance. For this a student needs to be trained in the field. Fieldwork is very important as a student is confronted in the field with spatial and temporal variability, as well as with real life uncertainties, rather than being lured into believing the world as presented in hydrological textbooks and models, e.g. the world under study is homogeneous, isotropic, or lumped (averaged). Also, students in the field learn to plan and
International Nuclear Information System (INIS)
Ginsburg, C.A.
1980-01-01
In many problems, a desired property A of a function f(x) is determined by the behaviour of f(x) ≈ g(x,A) as x → x*. In this letter, a method for resumming the power series in x of f(x) and approximating A (the modulated Pade approximant) is presented. This new approximant is an extension of a resummation method for f(x) in terms of rational functions. (author)
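For reference, the standard (unmodulated) Pade construction that such resummation methods extend can be sketched as a small linear solve over the Taylor coefficients:

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator p and denominator q coefficients, with q[0] = 1.
    The denominator solves sum_{j=1..n} q_j c_{m+i-j} = -c_{m+i}, i = 1..n."""
    A = np.array([[c[m + i - j] if m + i - j >= 0 else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    rhs = -np.array([c[m + i] for i in range(1, n + 1)])
    q = np.r_[1.0, np.linalg.solve(A, rhs)]
    # numerator coefficients follow from the Cauchy product of q and c
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q
```

For example, the [2/2] Pade approximant of exp(x) built from its first five Taylor coefficients is (1 + x/2 + x²/12)/(1 - x/2 + x²/12), which is far more accurate near x* than the truncated series of the same degree.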
Improved Dutch Roll Approximation for Hypersonic Vehicle
Directory of Open Access Journals (Sweden)
Liang-Liang Yin
2014-06-01
An improved Dutch roll approximation for hypersonic vehicles is presented. From the new approximation, the Dutch roll frequency is shown to be a function of the stability-axis yaw stability, and the Dutch roll damping is mainly affected by the roll damping ratio. In addition, an important parameter called the roll-to-yaw ratio is obtained to describe the Dutch roll mode. The solution shows that a large roll-to-yaw ratio is a general characteristic of hypersonic vehicles, which results in large errors for the usual practical approximation. Predictions from the literal approximations derived in this paper are compared with actual numerical values for an example hypersonic vehicle; the results show that the approximations work well, with errors below 10%.
Sparse approximation with bases
2015-01-01
This book systematically presents recent fundamental results on greedy approximation with respect to bases. Motivated by numerous applications, the last decade has seen great successes in studying nonlinear sparse approximation. Recent findings have established that greedy-type algorithms are suitable methods of nonlinear approximation in both sparse approximation with respect to bases and sparse approximation with respect to redundant systems. These insights, combined with some previous fundamental results, form the basis for constructing the theory of greedy approximation. Taking into account the theoretical and practical demand for this kind of theory, the book systematically elaborates a theoretical framework for greedy approximation and its applications. The book addresses the needs of researchers working in numerical mathematics, harmonic analysis, and functional analysis. It quickly takes the reader from classical results to the latest frontier, but is written at the level of a graduate course and do...
On the non-stationary generalized Langevin equation
Meyer, Hugues; Voigtmann, Thomas; Schilling, Tanja
2017-12-01
In molecular dynamics simulations and single molecule experiments, observables are usually measured along dynamic trajectories and then averaged over an ensemble ("bundle") of trajectories. Under stationary conditions, the time-evolution of such averages is described by the generalized Langevin equation. By contrast, if the dynamics is not stationary, it is not a priori clear which form the equation of motion for an averaged observable has. We employ the formalism of time-dependent projection operator techniques to derive the equation of motion for a non-equilibrium trajectory-averaged observable as well as for its non-stationary auto-correlation function. The equation is similar in structure to the generalized Langevin equation but exhibits a time-dependent memory kernel as well as a fluctuating force that implicitly depends on the initial conditions of the process. We also derive a relation between this memory kernel and the autocorrelation function of the fluctuating force that has a structure similar to a fluctuation-dissipation relation. In addition, we show how the choice of the projection operator allows us to relate the Taylor expansion of the memory kernel to data that are accessible in MD simulations and experiments, thus allowing us to construct the equation of motion. As a numerical example, the procedure is applied to Brownian motion initialized in non-equilibrium conditions and is shown to be consistent with direct measurements from simulations.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. We then prove an approximation error bound to measure the effect of approximating kernel matrices by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
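The computational virtue of circulant matrices that the authors exploit, shown here in the one-level case for illustration (the paper uses multilevel circulant matrices), is that matrix-vector products diagonalize under the FFT, reducing O(n²) work to O(n log n):

```python
import numpy as np

def circulant_matvec(c, v):
    """Multiply the circulant matrix with first column c by v in O(n log n),
    using the diagonalization C = F* diag(F c) F (circular convolution)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

def circulant_dense(c):
    """Dense circulant matrix for checking: column j is c rolled down by j,
    i.e. C[i, j] = c[(i - j) mod n]."""
    n = len(c)
    return np.column_stack([np.roll(c, j) for j in range(n)])
```

The same diagonalization gives eigenvalues of C for free as fft(c), which is what makes spectral quantities of circulant approximations to kernel matrices cheap to evaluate.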
A Non-Stationary 1981–2012 AVHRR NDVI3g Time Series
Directory of Open Access Journals (Sweden)
Jorge E. Pinzon
2014-07-01
The NDVI3g time series is an improved 8-km normalized difference vegetation index (NDVI) data set produced from Advanced Very High Resolution Radiometer (AVHRR) instruments that extends from 1981 to the present. The AVHRR instruments have flown or are flying on fourteen polar-orbiting meteorological satellites operated by the National Oceanic and Atmospheric Administration (NOAA) and are currently flying on two European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) polar-orbiting meteorological satellites, MetOp-A and MetOp-B. This long AVHRR record is comprised of data from two different sensors: the AVHRR/2 instrument that spans July 1981 to November 2000 and the AVHRR/3 instrument that continues these measurements from November 2000 to the present. The main difficulty in processing AVHRR NDVI data is to properly deal with limitations of the AVHRR instruments. Complicating among-instrument AVHRR inter-calibration of channels one and two is the dual gain introduced in late 2000 on the AVHRR/3 instruments for both these channels. We have processed NDVI data derived from the Sea-Viewing Wide Field-of-view Sensor (SeaWiFS) from 1997 to 2010 to overcome among-instrument AVHRR calibration difficulties. We use Bayesian methods with high quality well-calibrated SeaWiFS NDVI data for deriving AVHRR NDVI calibration parameters. Evaluation of the uncertainties of our resulting NDVI values gives an error of ±0.005 NDVI units for our 1981 to present data set that is independent of time within our AVHRR NDVI continuum and has resulted in a non-stationary climate data set.
Approximating distributions from moments
Pawula, R. F.
1987-11-01
A method based upon Pearson-type approximations from statistics is developed for approximating a symmetric probability density function from its moments. The extended Fokker-Planck equation for non-Markov processes is shown to be the underlying foundation for the approximations. The approximation is shown to be exact for the beta probability density function. The applicability of the general method is illustrated by numerous pithy examples from linear and nonlinear filtering of both Markov and non-Markov dichotomous noise. New approximations are given for the probability density function in two cases in which exact solutions are unavailable, those of (i) the filter-limiter-filter problem and (ii) second-order Butterworth filtering of the random telegraph signal. The approximate results are compared with previously published Monte Carlo simulations in these two cases.
Contributions to rational approximation
Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar's linear theorem, which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev approximation … Furthermore, a Weierstrass-type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)
Approximation techniques for engineers
Komzsik, Louis
2006-01-01
Presenting numerous examples, algorithms, and industrial applications, Approximation Techniques for Engineers is your complete guide to the major techniques used in modern engineering practice. Whether you need approximations for discrete data or continuous functions, or you're looking for approximate solutions to engineering problems, everything you need is nestled between the covers of this book. Now you can benefit from Louis Komzsik's years of industrial experience to gain a working knowledge of a vast array of approximation techniques through this complete and self-contained resource.
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Expectation Consistent Approximate Inference
DEFF Research Database (Denmark)
Opper, Manfred; Winther, Ole
2005-01-01
We propose a novel framework for approximations of intractable probabilistic models based on a free energy formulation. The approximation can be understood as replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...
Vinay BC; Nikhitha MK; Patel Sunil B
2015-01-01
In this review article, the definition of medication errors, the extent of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors are explained clearly, with easy-to-understand tables.
Ordered cones and approximation
Keimel, Klaus
1992-01-01
This book presents a unified approach to Korovkin-type approximation theorems. It includes classical material on the approximation of real-valued functions as well as recent and new results on set-valued functions and stochastic processes, and on weighted approximation. The results are not only of a qualitative nature, but include quantitative bounds on the order of approximation. The book is addressed to researchers in functional analysis and approximation theory as well as to those who want to apply these methods in other fields. It is largely self-contained, but the reader should have a solid background in abstract functional analysis. The unified approach is based on a new notion of locally convex ordered cones that are not embeddable in vector spaces but allow Hahn-Banach type separation and extension theorems. This concept seems to be of independent interest.
International Nuclear Information System (INIS)
Liu, Jie
2015-01-01
This Ph.D. work is motivated by the possibility of monitoring the condition of components of energy systems for their extended and safe use, under proper operating practice and adequate maintenance policies. The aim is to develop a Support Vector Regression (SVR)-based framework for predicting time series data under stationary/nonstationary environmental and operational conditions. Single-SVR and SVR-based ensemble approaches are developed to tackle the prediction problem based on both small and large datasets. Strategies are proposed for adaptively updating the single-SVR and SVR-based ensemble models in the presence of pattern drifts. Comparisons with other online learning approaches for kernel-based modelling are provided with reference to time series data from a critical component in Nuclear Power Plants (NPPs) provided by Electricite de France (EDF). The results show that the proposed approaches achieve comparable prediction results, in terms of Mean Squared Error (MSE) and Mean Relative Error (MRE), in much less computation time. Furthermore, by analyzing the geometrical meaning of the Feature Vector Selection (FVS) method proposed in the literature, a novel geometrically interpretable kernel method, named Reduced Rank Kernel Ridge Regression-II (RRKRR-II), is proposed to describe the linear relations between a predicted value and the predicted values of the Feature Vectors (FVs) selected by FVS. Comparisons with several kernel methods on a number of public datasets demonstrate the good prediction accuracy and the ease of tuning of the hyper-parameters of RRKRR-II. (author)
Assessing the extent of non-stationary biases in GCMs
Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish
2017-06-01
General circulation models (GCMs) are the main tools for estimating changes in the climate for the future. The imperfect representation of climate models introduces biases in the simulations that need to be corrected prior to their use for impact assessments. Bias correction methods generally assume that the bias calculated over the historical period does not change and can be applied to the future. This study investigates this assumption by considering the extent and nature of bias non-stationarity using 20th century precipitation and temperature simulations from six CMIP5 GCMs across Australia. Four statistics (mean, standard deviation, 10th and 90th quantiles) in monthly and seasonal biases are obtained for three different time window lengths (10, 25 and 33 years) to examine the properties of bias over time. This approach is repeated for two different phases of the Interdecadal Pacific Oscillation (IPO), which is known to have strong influences on the Australian climate. It is found that bias non-stationarity at decadal timescales is indeed an issue over parts of Australia for some GCMs. When considering interdecadal variability there are significant differences in the bias between positive and negative phases of the IPO. Regional analyses confirmed these findings, with the largest differences seen on the east coast of Australia, where IPO impacts tend to be the strongest. The nature of the bias non-stationarity found in this study suggests that it will be difficult to modify existing bias correction approaches to account for non-stationary biases. A more practical approach for impact assessments that use bias correction may be to use a selection of GCMs for which the assumption of bias stationarity holds.
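The windowed-bias diagnostic described above can be sketched in a few lines: compute the mean bias (model minus observations) in consecutive fixed-length windows and inspect its spread over time. The series, window length, and linear drift below are hypothetical; a drifting bias shows up as window means that change markedly from one window to the next.

```python
import random

def windowed_bias(model, obs, window):
    """Mean bias (model - obs) in consecutive non-overlapping windows."""
    n = min(len(model), len(obs))
    return [
        sum(model[i + k] - obs[i + k] for k in range(window)) / window
        for i in range(0, n - window + 1, window)
    ]

random.seed(0)
# hypothetical 100-year records: observed annual rainfall (mm) and a GCM
# whose bias drifts upward over time (30 mm offset plus 0.5 mm/yr drift)
obs = [800 + random.gauss(0, 50) for _ in range(100)]
model = [o + 30 + 0.5 * t for t, o in enumerate(obs)]

biases = windowed_bias(model, obs, window=25)
spread = max(biases) - min(biases)
print(biases)       # mean bias per 25-year window
print(spread)       # a large spread flags a non-stationary bias
```

A stationary-bias GCM would give window means that agree to within sampling noise; here the spread is large, so a single historical correction would misfit the future.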
Climate Informed Low Flow Frequency Analysis Using Nonstationary Modeling
Liu, D.; Guo, S.; Lian, Y.
2014-12-01
Stationarity is often assumed for frequency analysis of low flows in water resources management and planning. However, many studies have shown that flow characteristics, particularly the frequency spectrum of extreme hydrologic events, were modified by climate change and human activities, and that conventional frequency analysis without considering the non-stationary characteristics may lead to costly designs. The analysis presented in this paper was based on more than 100 years of daily flow data from the Yichang gaging station, 44 kilometers downstream of the Three Gorges Dam. The Mann-Kendall trend test under the scaling hypothesis showed that the annual low flows had a significant monotonic trend, whereas an abrupt change point was identified in 1936 by the Pettitt test. The climate-informed low flow frequency analysis and the divided and combined method are employed to account for the impacts from related climate variables and the nonstationarities in annual low flows. Without prior knowledge of the probability density function for the gaging station, six distribution functions, including the Generalized Extreme Value (GEV), Pearson Type III, Gumbel, Gamma, Lognormal, and Weibull distributions, have been tested to find the best fit, in which the local likelihood method is used to estimate the parameters. Analyses show that GEV had the best fit for the observed low flows. This study has also shown that the climate-informed low flow frequency analysis is able to exploit the link between climate indices and low flows, which would account for the dynamic features of reservoir management and provide more accurate and reliable designs for infrastructure and water supply.
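The Mann-Kendall test mentioned above is simple to state: the statistic S counts concordant minus discordant pairs, and a normalized Z score is compared against normal quantiles. A minimal sketch, without the tie correction or the scaling-hypothesis adjustment used in the study, applied to a synthetic declining low-flow series:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns (S, Z)."""
    n = len(x)
    # S = number of increasing pairs minus number of decreasing pairs
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18   # variance of S under no trend
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# hypothetical annual low flows (m^3/s): a -25/yr trend plus variability
flows = [4200 - 25 * t + 150 * math.sin(t) for t in range(40)]
s, z = mann_kendall(flows)
print(s, z)   # |Z| > 1.96 indicates a significant monotonic trend at the 5% level
```

Here Z comes out strongly negative, i.e. a significant declining trend, which is the situation in which stationary frequency analysis becomes misleading.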
Designing and operating infrastructure for nonstationary flood risk management
Doss-Gollin, J.; Farnham, D. J.; Lall, U.
2017-12-01
Climate exhibits organized low-frequency and regime-like variability at multiple time scales, causing the risk associated with climate extremes such as floods and droughts to vary in time. Despite broad recognition of this nonstationarity, there has been little theoretical development of ideas for the design and operation of infrastructure considering the regime structure of such changes and their potential predictability. We use paleo streamflow reconstructions to illustrate an approach to the design and operation of infrastructure to address nonstationary flood and drought risk. Specifically, we consider the tradeoff between flood control and conservation storage, and develop design and operation principles for allocating these storage volumes considering both an m-year project planning period and an n-year historical sampling record. As n increases, the potential uncertainty in probabilistic estimates of the return periods associated with the T-year extreme event decreases. As the duration m of the future operation period decreases, the uncertainty associated with the occurrence of the T-year event also increases. Finally, given the quasi-periodic nature of the system it may be possible to offer probabilistic predictions of the conditions in the m-year future period, especially if m is small. In the context of such predictions, one can consider that an m-year prediction may have lower bias, but higher variance, than would be associated with using a stationary estimate from the preceding n years. This bias-variance trade-off, and the potential for considering risk management for multiple values of m, provides an interesting system design challenge. We use wavelet-based simulation models in a Bayesian framework to estimate these biases and uncertainty distributions and devise a risk-optimized decision rule for the allocation of flood and conservation storage. The associated theoretical development also provides a methodology for the sizing of storage for new
Approximate and renormgroup symmetries
Energy Technology Data Exchange (ETDEWEB)
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximate and renormgroup symmetries
International Nuclear Information System (INIS)
Ibragimov, Nail H.; Kovalev, Vladimir F.
2009-01-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
Approximations of Fuzzy Systems
Directory of Open Access Journals (Sweden)
Vinai K. Singh
2013-03-01
A fuzzy system can uniformly approximate any real continuous function on a compact domain to any degree of accuracy. Such results can be viewed as establishing the existence of optimal fuzzy systems. Li-Xin Wang discussed a similar problem using Gaussian membership functions and the Stone-Weierstrass Theorem. He established that fuzzy systems with product inference, centroid defuzzification and Gaussian membership functions are capable of approximating any real continuous function on a compact set to arbitrary accuracy. In this paper we study a similar approximation problem by using exponential membership functions.
Potvin, Guy
2015-10-01
We examine how the Rytov approximation describing log-amplitude and phase fluctuations of a wave propagating through weak uniform turbulence can be generalized to the case of turbulence with a large-scale nonuniform component. We show how the large-scale refractive index field creates Fermat rays using the path integral formulation for paraxial propagation. We then show how the second-order derivatives of the Fermat ray action affect the Rytov approximation, and we discuss how a numerical algorithm would model the general Rytov approximation.
Energy Technology Data Exchange (ETDEWEB)
Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-10-04
We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B_0, where B is the transmitted backlighter (BL) signal and B_0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB_0/B_0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB_0/B_0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
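The fractional-error budget above translates directly into a few lines of code. This is an illustrative sketch of the reconstructed relations (magnitudes only, so |ln T| is used), with hypothetical input errors; note how the 1/|ln T| factor inflates the opacity error as T approaches 1:

```python
import math

def fractional_opacity_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Fractional opacity error magnitude from k = -ln(T)/(rho*L).

    Uses d(lnT) = dB/B + dB0/B0, so |dk/k| = d(lnT)/|ln T| + d(pL)/(pL).
    All input fractional errors below are hypothetical.
    """
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + dpL_over_pL

# e.g. T = 0.5, 2% error on each backlighter signal, 3% on rho*L
err = fractional_opacity_error(0.5, 0.02, 0.02, 0.03)
print(err)
# the same signal errors hurt much more at high transmission
print(fractional_opacity_error(0.9, 0.02, 0.02, 0.03))
```

This is why transmissions near 1 (thin samples) give poorly constrained opacities, consistent with measuring T in a restricted range.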
On low-frequency errors of uniformly modulated filtered white-noise models for ground motions
Safak, Erdal; Boore, David M.
1988-01-01
Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using the filtered shot-noise-type models (i.e., white noise, modulated by the envelope first, and then filtered).
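A uniformly modulated filtered white-noise motion of the kind analyzed above is easy to synthesize: a deterministic envelope multiplies filtered white noise. The one-pole low-pass filter and the rise-then-decay envelope below are hypothetical stand-ins for the ground-motion filter and modulating function:

```python
import math
import random

def modulated_filtered_noise(n, dt=0.01, alpha=0.5):
    """Uniformly modulated filtered white noise: a(t) = e(t) * [h * w](t).

    w is Gaussian white noise, h a simple one-pole low-pass filter, and
    e(t) = t*exp(-alpha*t) an envelope that rises then decays.
    All parameters are illustrative.
    """
    random.seed(1)
    y, out = 0.0, []
    for i in range(n):
        t = i * dt
        w = random.gauss(0.0, 1.0)
        y = 0.9 * y + 0.1 * w              # one-pole filter on the noise
        env = t * math.exp(-alpha * t)     # deterministic envelope
        out.append(env * y)
    return out

acc = modulated_filtered_noise(2000)
print(max(abs(a) for a in acc))   # peak of the synthetic motion
```

Because the envelope is applied *after* filtering, the model retains spurious low-frequency content; the shot-noise alternative in the abstract reverses the order (modulate first, then filter) to remove it.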
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
International Nuclear Information System (INIS)
Knobloch, A.F.
1980-01-01
A simplified cost approximation for INTOR parameter sets in a narrow parameter range is shown. Plausible constraints permit the evaluation of the consequences of parameter variations on overall cost. (orig.) [de
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
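The attenuation of AUC by measurement error is easy to reproduce by simulation. The sketch below uses the Mann-Whitney form of the empirical AUC on synthetic normally distributed biomarkers; all distributions, noise levels, and sample sizes are illustrative, not the paper's method:

```python
import random

def auc(cases, controls):
    """Empirical AUC = P(case marker > control marker), Mann-Whitney form."""
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

random.seed(2)
n = 500
cases = [random.gauss(1.0, 1.0) for _ in range(n)]      # diseased subjects
controls = [random.gauss(0.0, 1.0) for _ in range(n)]   # healthy subjects

# the same biomarkers observed with additive measurement error
noisy_cases = [x + random.gauss(0.0, 1.0) for x in cases]
noisy_controls = [x + random.gauss(0.0, 1.0) for x in controls]

a_true = auc(cases, controls)               # error-free discrimination
a_obs = auc(noisy_cases, noisy_controls)    # attenuated toward 0.5
print(a_true, a_obs)
```

The observed AUC is biased toward 0.5, which is exactly the attenuation that measurement-error correction methods aim to undo.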
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg
Detection of Partial Demagnetization Fault in PMSMs Operating under Nonstationary Conditions
DEFF Research Database (Denmark)
Wang, Chao; Delgado Prieto, Miguel; Romeral, Luis
2016-01-01
Demagnetization fault detection of in-service Permanent Magnet Synchronous Machines (PMSMs) is a challenging task because most PMSMs operate under nonstationary circumstances in industrial applications. A novel approach based on tracking characteristic orders of the stator current using a Vold-Kalman Filter is proposed to detect partial demagnetization faults in PMSMs running at nonstationary conditions. The amplitude of the envelope of the fault characteristic orders is used as the fault indicator. Experimental results verify the superiority of the proposed method for online detection of partial demagnetization in PMSMs under various speed and load conditions.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
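The subspace idea behind AKCL can be illustrated with a toy sketch: represent each point by its kernel values against m sampled landmarks (so the full n x n kernel matrix is never formed), then run ordinary winner-take-all competitive learning in that m-dimensional space. This is a simplified stand-in for the paper's method; the prototype initialization, parameters, and data are all illustrative:

```python
import math
import random

random.seed(3)

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two points."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def approx_kernel_competitive(data, m=20, epochs=5, lr=0.1):
    """Winner-take-all competitive learning in a sampled kernel subspace."""
    landmarks = random.sample(data, m)
    # each point is represented by m kernel evaluations, not n
    feats = [[rbf(x, l) for l in landmarks] for x in data]
    # two prototypes, seeded from opposite ends of the dataset (illustrative)
    protos = [list(feats[0]), list(feats[-1])]
    for _ in range(epochs):
        for f in feats:
            w = min(range(len(protos)),
                    key=lambda j: sum((p - a) ** 2 for p, a in zip(protos[j], f)))
            # move only the winning prototype toward the sample
            protos[w] = [p + lr * (a - p) for p, a in zip(protos[w], f)]
    return [min(range(len(protos)),
                key=lambda j: sum((p - a) ** 2 for p, a in zip(protos[j], f)))
            for f in feats]

# two well-separated synthetic 2-D blobs
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)]
        + [(random.gauss(4, 0.3), random.gauss(4, 0.3)) for _ in range(50)])
labels = approx_kernel_competitive(data)
print(labels[0], labels[-1])   # the two blobs land in different clusters
```

The memory cost is O(nm) rather than O(n^2), which is the scalability point the abstract makes, at the price of an approximation governed by the number of landmarks m.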
Multilevel Monte Carlo in Approximate Bayesian Computation
Jasra, Ajay; Jo, Seongil; Nott, David; Shoemaker, Christine; Tempone, Raul
2017-01-01
is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
Directory of Open Access Journals (Sweden)
Xiang Zeng
2016-06-01
We prove some almost sure central limit theorems for the maxima of strongly dependent nonstationary Gaussian vector sequences under some mild conditions. The results extend the ASCLT to nonstationary Gaussian vector sequences and give substantial improvements for the weight sequence obtained by Lin et al. (Comput. Math. Appl. 62(2):635-640, 2011).
Directory of Open Access Journals (Sweden)
Rehan Balqis M.
2016-01-01
Current practice in flood frequency analysis assumes that the stochastic properties of extreme floods follow those of stationary conditions. As human intervention and anthropogenic climate change influences on hydrometeorological variables are becoming evident in some places, there have been suggestions that nonstationary statistics would be better to represent the stochastic properties of the extreme floods. The probabilistic estimation of non-stationary models, however, is surrounded with uncertainty related to scarcity of observations and modelling complexities, hence the difficulty of projecting the future condition. In the face of an uncertain future and the subjectivity of model choices, this study attempts to demonstrate the practical implications of applying a nonstationary model and compares it with a stationary model in flood risk assessment. A fully integrated framework to simulate decision makers' behaviour in flood frequency analysis is thereby developed. The framework is applied to hypothetical flood risk management decisions and the outcomes are compared with those of known underlying future conditions. Uncertainty of the economic performance of the risk-based decisions is assessed through Monte Carlo simulations. Sensitivity of the results is also tested by varying the possible magnitude of future changes. The application provides quantitative and qualitative comparative results that satisfy a preliminary analysis of whether the nonstationary model complexity should be applied to improve the economic performance of decisions. Results obtained from the case study show that the relative differences of competing models for all considered possible future changes are small, suggesting that stationary assumptions are preferred to a shift to nonstationary statistics for practical application of flood risk management. Nevertheless, the nonstationary assumption should also be considered during a planning stage in addition to the stationary assumption.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbæk, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, including testing of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature, due to the impact of the estimated symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models
DEFF Research Database (Denmark)
Kristensen, Dennis; Rahbek, Anders
In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, including testing of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature, due to the impact of the estimated symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory, with good size and power properties for reasonable sample sizes.
Energy Technology Data Exchange (ETDEWEB)
Ruiz, Jordi-Roger Riba [EUETII, Dept. d' Enginyeria Electrica, Universitat Politecnica de Catalunya, Placa del Rei 15, 08700 Igualada, Barcelona (Spain); Garcia Espinosa, Antonio [Dept. d' Enginyeria Electrica, Universitat Politecnica de Catalunya C/Colom 1, 08222 Terrassa (Spain); Romeral, Luis; Cusido, Jordi [Dept. d' Enginyeria Electronica, Universitat Politecnica de Catalunya C/Colom 1, 08222 Terrassa (Spain)
2010-10-15
Permanent magnet synchronous motors (PMSMs) are applied in high performance positioning and variable speed applications because of their enhanced features with respect to other AC motor types. Fault detection and diagnosis of electrical motors for critical applications is an active field of research. However, much research remains to be done in the field of PMSM demagnetization faults, especially when running under non-stationary conditions. This paper presents a time-frequency method, based on the Hilbert-Huang transform, specifically focused on detecting and diagnosing demagnetization faults in PMSMs running under non-stationary speed conditions. The effectiveness of the proposed method is proven by means of experimental results. (author)
International Nuclear Information System (INIS)
Lobashev, A.A.; Mostepanenko, V.M.
1993-01-01
Heisenberg formalism is developed for creation-annihilation operators of quantum fields propagating in nonstationary external fields. Quantum fields with spin 0, 1/2, and 1 are considered in the presence of external fields such as electromagnetic and scalar fields, and the field of nonstationary dielectric properties of a nonlinear medium. An elliptic operator parametrically depending on time is constructed. In the Heisenberg representation, field variables are decomposed over the eigenfunctions of this operator. The relation between the Heisenberg creation-annihilation operators and the operators obtained in the framework of diagonalization of the Hamiltonian with Bogoliubov transformations is established.
International Nuclear Information System (INIS)
Tashchilova, Eh.M.; Sharovarov, G.A.
1985-01-01
The mathematical model of nonstationary processes in heat exchangers with dissociating coolant at supercritical parameters is given, and its dimensionless criteria are developed. The effect of NPP regenerator parameters on criteria variation is determined. The nonstationary processes are estimated qualitatively using the dimensionless parameters. Dynamics of the processes in heat exchangers is described by the energy, mass and moment-of-momentum equations for the heating and heated media, taking into account heat accumulation in the heat-transfer wall and the distribution of parameters along the length of the heat exchanger.
International Nuclear Information System (INIS)
Hartwig, J. T.; Stokman, J. V.
2013-01-01
We realize an extended version of the trigonometric Cherednik algebra as affine Dunkl operators involving Heaviside functions. We use the quadratic Casimir element of the extended trigonometric Cherednik algebra to define an explicit nonstationary Schrödinger equation with delta-potential. We use coordinate Bethe ansatz methods to construct solutions of the nonstationary Schrödinger equation in terms of generalized Bethe wave functions. It is shown that the generalized Bethe wave functions satisfy affine difference Knizhnik-Zamolodchikov equations as functions of the momenta. The relation to the vector valued root system analogs of the quantum Bose gas on the circle with delta-function interactions is indicated.
Non-stationary pre-envelope covariances of non-classically damped systems
Muscolino, G.
1991-08-01
A new formulation is given to evaluate the stationary and non-stationary response of linear non-classically damped systems subjected to multi-correlated non-separable Gaussian input processes. This formulation is based on a new and more suitable definition of the impulse response function matrix for such systems. It is shown that, when using this definition, the stochastic response of non-classically damped systems involves the evaluation of quantities similar to those of classically damped ones. Furthermore, considerations about non-stationary cross-covariances, spectral moments and pre-envelope cross-covariances are presented for a monocorrelated input process.
Cosmological applications of Padé approximant
International Nuclear Information System (INIS)
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
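The claim that a Padé approximant often beats a truncated Taylor series of comparable order is easy to check numerically. A small sketch for exp(x), comparing the standard [2/2] Padé approximant against the degree-4 Taylor polynomial (both use five coefficients):

```python
import math

def pade_exp_22(x):
    """[2/2] Pade approximant of exp(x): (1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)."""
    num = 1 + x / 2 + x * x / 12
    den = 1 - x / 2 + x * x / 12
    return num / den

def taylor_exp_4(x):
    """Degree-4 Taylor polynomial of exp(x) about 0."""
    return sum(x ** k / math.factorial(k) for k in range(5))

x = 1.0
err_pade = abs(pade_exp_22(x) - math.exp(x))
err_taylor = abs(taylor_exp_4(x) - math.exp(x))
print(err_pade, err_taylor)   # the Pade error is markedly smaller
```

The same mechanism, a rational form capturing behavior a polynomial cannot, is what makes Padé-based EoS parameterizations attractive over truncated expansions in redshift.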
Cosmological applications of Padé approximant
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two applications. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. In these applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
EDITORIAL: The nonstationary Casimir effect and quantum systems with moving boundaries
Barton, Gabriel; Dodonov, Victor V.; Man'ko, Vladimir I.
2005-03-01
radiation, vacuum friction, and so on. Solutions of some interesting problems of nonrelativistic quantum mechanics with time-dependent boundary conditions, including applications to Bose-Einstein condensates, can also be found here. Since nonstationary Casimir effects can exist not only for photons but for any other quanta (e.g., phonons in solids or in liquid helium), we believe the approaches and results presented in this collection will find interesting applications in other branches of physics too. One possible example might be the generation of squeezed and other 'nonclassical' states of different fields by time-dependent boundary conditions. Approximately half the contributed papers stem from talks at two recent conferences: the First International Workshop on Problems with Moving Boundaries, organized by Professor J Dittrich in Prague in October 2003, and the International Workshop on the Dynamical Casimir Effect, organized by Professor G Carugno in Padua in June 2004. We wish to thank all the authors and reviewers for their efforts in preparing high quality papers, which we hope will attract the attention of other researchers, and especially of young people, to the fascinating areas covered by this special issue.
On Covering Approximation Subspaces
Directory of Open Access Journals (Sweden)
Xun Ge
2009-06-01
Let (U′;C′) be a subspace of a covering approximation space (U;C) and let X ⊂ U′. In this paper, we show that B′(X) ⊂ B(X) ∩ U′, and that a further identity holds iff (U;C) has the Multiplication Property. Furthermore, some connections between outer (resp. inner) definable subsets in (U;C) and outer (resp. inner) definable subsets in (U′;C′) are established. These results answer a question on covering approximation subspaces posed by J. Li, and are helpful for obtaining further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.
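The covering lower and upper approximations underlying results of this kind can be sketched directly. The definitions below are the standard Pawlak-style ones lifted to a covering: the lower approximation unions the cover blocks contained in X, the upper approximation unions the blocks meeting X. Covering rough sets admit several variant definitions; this is one illustrative choice, and the universe and cover are toy data:

```python
def covering_approximations(cover, X):
    """Lower/upper approximations of X with respect to a covering.

    lower = union of cover blocks K with K subset of X,
    upper = union of cover blocks K with K intersecting X.
    """
    X = set(X)
    lower = set().union(*[K for K in cover if K <= X])
    upper = set().union(*[K for K in cover if K & X])
    return lower, upper

# toy covering of U = {1,...,5}; blocks may overlap, unlike a partition
C = [{1, 2}, {2, 3}, {4}, {4, 5}]
lo, up = covering_approximations(C, {1, 2, 4})
print(lo, up)   # lower approximation inside X, upper approximation containing X
```

The sandwich lower ⊆ X ⊆ upper always holds, and the boundary B(X) = upper \ lower is the object whose behavior under subspaces the paper studies.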
Plasma Physics Approximations in Ares
International Nuclear Information System (INIS)
Managan, R. A.
2015-01-01
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals F_n(μ/θ), the chemical potential μ (or ζ = ln(1+e^(μ/θ))), and the temperature θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ. The fits use ζ as the independent variable instead of μ/θ. New fits are provided for A_α(ζ), A_β(ζ), ζ, f(ζ) = (1 + e^(-μ/θ))F_(1/2)(μ/θ), F′_(1/2)/F_(1/2), F^c_α, and F^c_β. In each case the relative error of the fit is minimized, since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits, i.e., as ζ → 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
On Convex Quadratic Approximation
den Hertog, D.; de Klerk, E.; Roos, J.
2000-01-01
In this paper we prove the counterintuitive result that the quadratic least squares approximation of a multivariate convex function in a finite set of points is not necessarily convex, even though it is convex for a univariate convex function. This result has many consequences both for the field of
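A small numerical illustration of the univariate case mentioned above (the data points and function are chosen here for illustration): the least-squares quadratic fit of a convex univariate function has a nonnegative leading coefficient, whereas the paper's counterintuitive result is that the analogous statement can fail for multivariate convex functions.

```python
import numpy as np

# Univariate case: least-squares quadratic fit of the convex function |x|
# at a finite point set. The fitted leading coefficient is positive, i.e.
# the fit is itself convex (this is the part that FAILS in general for
# multivariate convex functions, per den Hertog, de Klerk and Roos).

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
ys = np.abs(xs)

a, b, c = np.polyfit(xs, ys, deg=2)
print(f"fitted quadratic: {a:.4f} x^2 + {b:.4f} x + {c:.4f}")
```

For this symmetric data set the normal equations give a = 3/7 and b = 0, so the fit is convex as the univariate theory predicts.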
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg
2005-01-01
The Dirichlet compound multinomial (DCM), which has recently been shown to be well suited for modeling word burstiness in documents, is investigated here. A number of conceptual explanations that account for these recent results are provided. An exponential family approximation of the DCM...
Approximation by Cylinder Surfaces
DEFF Research Database (Denmark)
Randrup, Thomas
1997-01-01
We present a new method for approximation of a given surface by a cylinder surface. It is a constructive geometric method, leading to a monorail representation of the cylinder surface. By use of a weighted Gaussian image of the given surface, we determine a projection plane. In the orthogonal...
Tracking of Nonstationary Noise Based on Data-Driven Recursive Noise Power Estimation
Erkelens, J.S.; Heusdens, R.
2008-01-01
This paper considers estimation of the noise spectral variance from speech signals contaminated by highly nonstationary noise sources. The method can accurately track fast changes in noise power level (up to about 10 dB/s). In each time frame, for each frequency bin, the noise variance estimate is
Staffing a call center with uncertain non-stationary arrival rate and flexibility
Liao, S.; van Delft, C.; Jouini, O.; Koole, G.M.
2012-01-01
We consider a multi-period staffing problem in a single-shift call center. The call center handles inbound calls, as well as some alternative back-office jobs. The call arrival process is assumed to follow a doubly non-stationary stochastic process with a random mean arrival rate. The inbound calls
Optimal inventory policies with non-stationary supply disruptions and advance supply information
Atasoy, B.; Güllü, R.; Tan, T.
2012-01-01
We consider the production/inventory problem of a manufacturer (or a retailer) under non-stationary and stochastic supply availability. Although supply availability is uncertain, the supplier would be able to predict her near future shortages – and hence supply disruption to (some of) her customers
Optimal inventory policies with non-stationary supply disruptions and advance supply information
Atasoy, B.; Güllü, R.; Tan, T.
2011-01-01
We consider the production/inventory problem of a manufacturer (or a retailer) under non-stationary and stochastic supply availability. Although supply availability is uncertain, the supplier would be able to predict her near future shortages – and hence supply disruption to (some of) her customers –
Production planning of a perishable product with lead time and non-stationary demand
Pauls-Worm, K.G.J.; Haijema, R.; Hendrix, E.M.T.; Rossi, R.; Vorst, van der J.G.A.J.
2012-01-01
We study a production planning problem for a perishable product with a fixed lifetime, under a service-level constraint. The product has a non-stationary stochastic demand. Food supply chains of fresh products like cheese and several crop products, are characterised by long lead times due to
Shi, Yingzhong; Chung, Fu-Lai; Wang, Shitong
2015-09-01
Recently, a time-adaptive support vector machine (TA-SVM) was proposed for handling nonstationary datasets. While attractive performance has been reported, and the new classifier is distinctive in simultaneously solving several SVM subclassifiers locally and globally by using an elegant SVM formulation in an alternative kernel space, the coupling of subclassifiers requires a matrix inversion, resulting in a high computational burden in large nonstationary dataset applications. To overcome this shortcoming, an improved TA-SVM (ITA-SVM) is proposed using a common vector shared by all the SVM subclassifiers involved. ITA-SVM not only keeps an SVM formulation, but also avoids the computation of matrix inversion. Thus, we can realize its fast version, the improved time-adaptive core vector machine (ITA-CVM), for large nonstationary datasets by using the CVM technique. ITA-CVM has the merit of asymptotic linear time complexity for large nonstationary datasets and inherits the advantages of TA-SVM. The effectiveness of the proposed classifiers ITA-SVM and ITA-CVM is also experimentally confirmed.
Photorespiration is a central component of photosynthesis; however to better understand its role it should be viewed in the context of an integrated metabolic network rather than a series of individual reactions that operate independently. Isotopically nonstationary 13C metabolic flux analysis (INST...
International Nuclear Information System (INIS)
Barry, J.M.; Pollard, J.P.
1986-11-01
A FORTRAN subroutine MLTGRD is provided to efficiently solve the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels.
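The multigrid idea behind MLTGRD, accelerating an iterative solver by correcting it from successively coarsened systems, can be sketched for the simplest model problem. Note that MLTGRD itself uses a *multiplicative* correction and implicit non-stationary iteration; the standard additive-correction V-cycle below (for 1-D Poisson, not the five-point 2-D stencil) only illustrates the coarse-grid acceleration principle, not the MLTGRD algorithm:

```python
import numpy as np

# Additive-correction multigrid V-cycle for -u'' = f, u(0) = u(1) = 0,
# with weighted-Jacobi smoothing, full-weighting restriction and linear
# interpolation. Illustrative sketch only; not the MLTGRD scheme.

def smooth(u, f, h, sweeps=3, omega=2.0/3.0):
    for _ in range(sweeps):  # weighted Jacobi: u += omega*(u_exact_local - u)
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2.0*u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    return r

def vcycle(u, f, h):
    if len(u) == 3:                       # coarsest grid: one unknown, exact solve
        u[1] = 0.5 * h*h*f[1]
        return u
    u = smooth(u, f, h)                   # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros((len(u) + 1) // 2)      # full-weighting restriction of residual
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = vcycle(np.zeros_like(rc), rc, 2.0*h)
    e = np.zeros_like(u)                  # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e                                # coarse-grid correction
    return smooth(u, f, h)                # post-smoothing

N = 129                                   # 2**7 + 1 grid points
h = 1.0 / (N - 1)
x = np.linspace(0.0, 1.0, N)
f = np.pi**2 * np.sin(np.pi * x)          # exact solution: u = sin(pi*x)
u = np.zeros(N)
for _ in range(15):
    u = vcycle(u, f, h)

res = np.max(np.abs(residual(u, f, h)))
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(f"max residual {res:.1e}, error vs exact solution {err:.1e}")
```

Each V-cycle reduces the algebraic residual by a roughly grid-independent factor, which is the property that makes multigrid attractive for the large discretised elliptic systems the abstract describes.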
A flag-up algorithm and test for nonstationary customer-specific product graphs
DEFF Research Database (Denmark)
Fenger, Morten H. J.; Scholderer, Joachim
period. The results show that the test is clearly able to identify customers with evolving behavior, and that it can easily be deployed as part of a CRM system. It enables companies with loyalty programs to focus on nonstationary customers, i.e. customers who may represent opportunities for cross...
Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.
2018-05-01
It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior for weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and also to diminish the nonstationary behavior occurring in weakly coupled neural networks under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology with the probability of adding a nonlocal connection equal to p = 0.001. Based on experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) an external stimulus (pulsed current); (ii) changes in biological parameters (neuron membrane conductance); and (iii) body temperature changes. Quantification of phase synchronization makes use of the Kuramoto order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling parameters without any effect on the individual neuron dynamics or on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
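The Kuramoto order parameter used above to quantify phase synchronization is a one-line computation, R = |⟨e^{iθ}⟩| over the neuron phases: R tends to 1 for fully synchronized phases and to 0 for phases spread uniformly on the circle. A minimal sketch (the phase vectors below are synthetic, not Hodgkin-Huxley output):

```python
import numpy as np

# Kuramoto order parameter: magnitude of the mean phase vector.
# R ~ 1: synchronized network; R ~ 0: incoherent phases.

def kuramoto_R(phases):
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

sync = np.full(2000, 0.7)                               # all phases identical
spread = np.linspace(0, 2*np.pi, 2000, endpoint=False)  # uniformly spread

print(kuramoto_R(sync))    # ~1
print(kuramoto_R(spread))  # ~0
```

In a simulation one would extract each neuron's phase from its spike times (or membrane potential) at a common observation time and feed those phases to `kuramoto_R`.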
A survey of techniques applied to non-stationary waveforms in electrical power systems
Rodrigues, R.P.; Silveira, P.M.; Ribeiro, P.F.
2010-01-01
The well-known and ever-present time-varying and non-stationary nature of waveforms in power systems requires a comprehensive and precise analytical basis that needs to be incorporated in the system studies and analyses. This time-varying behavior is due to continuous changes in system
Performance of a written radiation protection inspection of nonstationary gamma radiography users
International Nuclear Information System (INIS)
Hoehne, M.
1986-01-01
A questionnaire has been developed for controlling users of nonstationary gamma radiography devices. It is aimed at obtaining information about weak points with respect to radiation protection and at giving guidance for performing such controls by the respective radiation protection officers. The questionnaire is included.
Testing for Co-integration in Vector Autoregressions with Non-Stationary Volatility
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbæk, Anders; Taylor, A.M. Robert
Many key macro-economic and financial variables are characterised by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special...
Testing for Co-integration in Vector Autoregressions with Non-Stationary Volatility
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders Christian; Taylor, A. M. Robert
Many key macro-economic and financial variables are characterised by permanent changes in unconditional volatility. In this paper we analyse vector autoregressions with non-stationary (unconditional) volatility of a very general form, which includes single and multiple volatility breaks as special...
Magnetization of a warm plasma by the nonstationary ponderomotive force of an electromagnetic wave
International Nuclear Information System (INIS)
Shukla, Nitin; Shukla, P. K.; Stenflo, L.
2009-01-01
It is shown that magnetic fields can be generated in a warm plasma by the nonstationary ponderomotive force of a large-amplitude electromagnetic wave. In the present Brief Report, we derive simple and explicit results that can be useful for understanding the origin of the magnetic fields that are produced in intense laser-plasma interaction experiments.
Non-stationary dynamics of climate variability in synchronous influenza epidemics in Japan
Onozuka, Daisuke; Hagihara, Akihito
2015-09-01
Seasonal variation in the incidence of influenza is widely assumed. However, few studies have examined non-stationary relationships between global climate factors and influenza epidemics. We examined the monthly incidence of influenza in Fukuoka, Japan, from 2000 to 2012 using cross-wavelet coherency analysis to assess the patterns of associations between indices for the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). The monthly incidence of influenza showed cycles of 1 year with the IOD and 2 years with ENSO indices (Multivariate, Niño 4, and Niño 3.4). These associations were non-stationary and appeared to have major influences on the synchrony of influenza epidemics. Our study provides quantitative evidence that non-stationary associations have major influences on synchrony between the monthly incidence of influenza and the dynamics of the IOD and ENSO. Our results call for the consideration of non-stationary patterns of association between influenza cases and climatic factors in early warning systems.
Inventory control for a perishable product with non-stationary demand and service level constraints
Pauls-Worm, K.G.J.; Hendrix, E.M.T.; Haijema, R.; Vorst, van der J.G.A.J.
2013-01-01
We study the practical production planning problem of a food producer facing a non-stationary erratic demand for a perishable product with a fixed life time. In meeting the uncertain demand, the food producer uses a FIFO issuing policy. The food producer aims at meeting a certain service level at
On the Oracle Property of the Adaptive LASSO in Stationary and Nonstationary Autoregressions
DEFF Research Database (Denmark)
Kock, Anders Bredahl
We show that the Adaptive LASSO is oracle efficient in stationary and non-stationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
Double-Wavelet Approach to Studying the Modulation Properties of Nonstationary Multimode Dynamics
DEFF Research Database (Denmark)
Sosnovtseva, Olga; Mosekilde, Erik; Pavlov, A.N.
2005-01-01
On the basis of double-wavelet analysis, the paper proposes a method to study interactions in the form of frequency and amplitude modulation in nonstationary multimode data series. Special emphasis is given to the problem of quantifying the strength of modulation for a fast signal by a coexisting...
Measurement of Non-Stationary Characteristics of a Landfall Typhoon at the Jiangyin Bridge Site
Directory of Open Access Journals (Sweden)
Xuhui He
2017-09-01
The wind-sensitive long-span suspension bridge is a vital element in land transportation. Understanding the wind characteristics at the bridge site is thus of great significance for the wind-resistant analysis of such a flexible structure. In this study, a strong wind event from a landfall typhoon called Soudelor, recorded at the Jiangyin Bridge site with an anemometer, is taken as the research object. As inherent time-varying trends are frequently captured in typhoon events, the wind characteristics of Soudelor are analyzed from a non-stationary perspective. The time-varying mean is first extracted with the wavelet-based self-adaptive method. Then, the non-stationary turbulent wind characteristics, e.g., turbulence intensity, gust factor, turbulence integral scale, and power spectral density, are investigated and compared with the results from the stationary analysis. The comparison highlights the importance of non-stationary considerations for typhoon events, and a transition from stationarity to non-stationarity in the analysis of wind effects. The analytical results could help enrich the database of non-stationary wind characteristics, and are expected to provide references for the wind-resistant analysis of engineering structures in similar areas.
Measurement of Non-Stationary Characteristics of a Landfall Typhoon at the Jiangyin Bridge Site.
He, Xuhui; Qin, Hongxi; Tao, Tianyou; Liu, Wenshuo; Wang, Hao
2017-09-22
The wind-sensitive long-span suspension bridge is a vital element in land transportation. Understanding the wind characteristics at the bridge site is thus of great significance for the wind-resistant analysis of such a flexible structure. In this study, a strong wind event from a landfall typhoon called Soudelor, recorded at the Jiangyin Bridge site with an anemometer, is taken as the research object. As inherent time-varying trends are frequently captured in typhoon events, the wind characteristics of Soudelor are analyzed from a non-stationary perspective. The time-varying mean is first extracted with the wavelet-based self-adaptive method. Then, the non-stationary turbulent wind characteristics, e.g., turbulence intensity, gust factor, turbulence integral scale, and power spectral density, are investigated and compared with the results from the stationary analysis. The comparison highlights the importance of non-stationary considerations for typhoon events, and a transition from stationarity to non-stationarity in the analysis of wind effects. The analytical results could help enrich the database of non-stationary wind characteristics, and are expected to provide references for the wind-resistant analysis of engineering structures in similar areas.
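The non-stationary turbulence descriptors named above are straightforward once a time-varying mean has been extracted: fluctuations are measured about that mean rather than about a constant block mean. In the sketch below a simple moving average stands in for the wavelet-based self-adaptive mean extraction used by the authors, and the wind record is synthetic, not typhoon data; the window length and sampling rate are assumptions:

```python
import numpy as np

# Non-stationary turbulence intensity and gust factor sketch:
# fluctuation = u - (time-varying mean). Moving average stands in for
# the paper's wavelet-based mean; synthetic 10-min record at 1 Hz.

fs = 1.0
t = np.arange(0.0, 600.0, 1.0/fs)
u = 12.0 + 0.005*t + 1.5*np.sin(2*np.pi*t/30.0)   # slow trend + "turbulence"

win = 61                                           # moving-average window (samples)
mean_tv = np.convolve(u, np.ones(win)/win, mode="valid")
uc = u[win//2 : win//2 + mean_tv.size]             # trim record to valid region
fluct = uc - mean_tv                               # fluctuation about moving mean

U = uc.mean()
Iu = fluct.std() / U        # turbulence intensity
G = uc.max() / U            # gust factor (raw peak stands in for the 3-s gust)
print(f"turbulence intensity: {Iu:.3f}, gust factor: {G:.2f}")
```

With a constant mean instead of `mean_tv`, the slow trend would be counted as turbulence and inflate Iu, which is exactly the stationary-versus-non-stationary discrepancy the paper examines.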
Noise Diagnostics of Stationary and Non-Stationary Reactor Processes
Energy Technology Data Exchange (ETDEWEB)
Sunde, Carl
2007-04-15
This thesis concerns the application of noise diagnostics to different problems in the area of reactor physics involving both stationary and non-stationary core processes. Five different problems are treated, divided into three different parts. The first problem treated in the first part is the classification of two-phase flow regimes from neutron radiographic and visible light images with a neuro-wavelet algorithm. The algorithm consists of wavelet pre-processing and of an artificial neural network. The result indicates that the wavelet pre-processing improves the training of the neural network. Next, detector tubes which are suspected of impacting nearby fuel assemblies in a boiling water reactor (BWR) are identified by both a classical spectral method and wavelet-based methods. It was found that there is good agreement between the different methods as well as with visual inspections of detector tube and fuel assembly damage made during the outage at the plant. The third problem addresses the determination of the decay ratio of a BWR from the auto-correlation function (ACF). Here wavelets are used, with some success, both for de-trending and de-noising of the ACF and also for direct estimation of the decay ratio from the ACF. The second part deals with the analysis of beam-mode and shell-mode core-barrel vibrations in pressurised water reactors (PWRs). The beam-mode vibrations are analysed using parameters of the vibration peaks in spectra from ex-core detectors. A trend analysis of the peak amplitude shows that the peak amplitude changes during the fuel cycle. When it comes to the analysis of the shell-mode vibration, 1-D analytical and numerical calculations are performed in order to calculate the neutron noise induced in the core. The two calculations are in agreement and show that a large local noise component is present in the core which could be used to classify the shell-mode vibrations. However, a measurement made in the PWR Ringhals-3 shows
Noise Diagnostics of Stationary and Non-Stationary Reactor Processes
International Nuclear Information System (INIS)
Sunde, Carl
2007-01-01
This thesis concerns the application of noise diagnostics to different problems in the area of reactor physics involving both stationary and non-stationary core processes. Five different problems are treated, divided into three different parts. The first problem treated in the first part is the classification of two-phase flow regimes from neutron radiographic and visible light images with a neuro-wavelet algorithm. The algorithm consists of wavelet pre-processing and of an artificial neural network. The result indicates that the wavelet pre-processing improves the training of the neural network. Next, detector tubes which are suspected of impacting nearby fuel assemblies in a boiling water reactor (BWR) are identified by both a classical spectral method and wavelet-based methods. It was found that there is good agreement between the different methods as well as with visual inspections of detector tube and fuel assembly damage made during the outage at the plant. The third problem addresses the determination of the decay ratio of a BWR from the auto-correlation function (ACF). Here wavelets are used, with some success, both for de-trending and de-noising of the ACF and also for direct estimation of the decay ratio from the ACF. The second part deals with the analysis of beam-mode and shell-mode core-barrel vibrations in pressurised water reactors (PWRs). The beam-mode vibrations are analysed using parameters of the vibration peaks in spectra from ex-core detectors. A trend analysis of the peak amplitude shows that the peak amplitude changes during the fuel cycle. When it comes to the analysis of the shell-mode vibration, 1-D analytical and numerical calculations are performed in order to calculate the neutron noise induced in the core. The two calculations are in agreement and show that a large local noise component is present in the core which could be used to classify the shell-mode vibrations. However, a measurement made in the PWR Ringhals-3 shows
System identification through nonstationary data using Time-Frequency Blind Source Separation
Guo, Yanlin; Kareem, Ahsan
2016-06-01
Classical output-only system identification (SI) methods are based on the assumption of stationarity of the system response. However, the measured response of buildings and bridges is usually non-stationary due to strong winds (e.g. typhoons and thunderstorms), earthquakes and time-varying vehicle motions. Accordingly, the response data may have time-varying frequency contents and/or overlapping of modal frequencies due to non-stationary colored excitation. This renders traditional methods problematic for modal separation and identification. To address these challenges, a new SI technique based on Time-Frequency Blind Source Separation (TFBSS) is proposed. By selectively utilizing "effective" information in local regions of the time-frequency plane, where only one mode contributes to energy, the proposed technique can successfully identify mode shapes and recover modal responses from the non-stationary response where the traditional SI methods often encounter difficulties. This technique can also handle response with closely spaced modes, which is a well-known challenge for the identification of large-scale structures. Based on the separated modal responses, frequency and damping can be easily identified using SI methods based on a single degree of freedom (SDOF) system. In addition to the exclusive advantage of handling non-stationary data and closely spaced modes, the proposed technique also benefits from the absence of end effects and low sensitivity to noise in modal separation. The efficacy of the proposed technique is demonstrated using several simulation based studies, and compared to the popular Second-Order Blind Identification (SOBI) scheme. It is also noted that even some non-stationary response data can be analyzed by the stationary method SOBI. This paper also delineates non-stationary cases where SOBI and the proposed scheme perform comparably and highlights cases where the proposed approach is more advantageous. Finally, the performance of the
An improved saddlepoint approximation.
Gillespie, Colin S; Renshaw, Eric
2007-08-01
Given a set of third- or higher-order moments, not only is the saddlepoint approximation the only realistic 'family-free' technique available for constructing an associated probability distribution, but it is 'optimal' in the sense that it is based on the highly efficient numerical method of steepest descents. However, it suffers from the problem of not always yielding full support, and whilst the neat scaling approach of [S. Wang, General saddlepoint approximations in the bootstrap, Prob. Stat. Lett. 27 (1992) 61] provides a solution to this hurdle, it can lead to potentially inaccurate and aberrant results. We therefore propose several new ways of surmounting such difficulties, including: extending the inversion of the cumulant generating function to second order; selecting an appropriate probability structure for higher-order cumulants (the standard moment closure procedure takes them to be zero); and making subtle changes to the target cumulants and then optimising via the simplex algorithm.
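For readers unfamiliar with the baseline being improved here, the first-order saddlepoint density approximation is f(x) ≈ exp(K(ŝ) − ŝx) / √(2π K''(ŝ)), where K is the cumulant generating function and ŝ solves K'(ŝ) = x. A minimal illustration on the Gamma(α, 1) distribution, chosen because both K and the exact density are available in closed form (this is the textbook construction, not the paper's refinements):

```python
import math

# Saddlepoint density approximation for Gamma(alpha, 1):
# K(s) = -alpha*log(1-s) for s < 1, and K'(s) = x solves in closed form.

alpha = 5.0
K   = lambda s: -alpha * math.log(1.0 - s)
d2K = lambda s: alpha / (1.0 - s) ** 2

def saddlepoint_density(x):
    s = 1.0 - alpha / x                     # root of K'(s) = alpha/(1-s) = x
    return math.exp(K(s) - s * x) / math.sqrt(2.0 * math.pi * d2K(s))

def exact_density(x):
    return x ** (alpha - 1.0) * math.exp(-x) / math.gamma(alpha)

x = 5.0
print(saddlepoint_density(x), exact_density(x))
```

For the Gamma family the approximation is exact up to a constant (the Stirling error of Γ(α), under 2% here), which is why it is a standard sanity check for saddlepoint code.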
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2011-01-01
Most prestack traveltime relations we tend to work with are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multi-focusing or double square-root (DSR) and the common reflection stack (CRS) equations. Using the DSR equation, I analyze the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I derive expansion-based solutions of this eikonal based on polynomial expansions in terms of the reflection and dip angles in a generally inhomogeneous background medium. These approximate solutions are free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. A Marmousi example demonstrates the usefulness of the approach. © 2011 Society of Exploration Geophysicists.
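The double-square-root relation referred to above reduces, in a homogeneous medium of velocity v, to the sum of the one-way times from source down to an image point and back up to the receiver. The paper works with the general inhomogeneous eikonal form; the sketch below is only the textbook constant-velocity special case:

```python
import math

# DSR traveltime in a homogeneous medium: one-way time from source xs down
# to image point (xm, z), plus one-way time up to receiver xr, at velocity v.

def dsr_traveltime(xs, xr, xm, z, v):
    t_down = math.hypot(xm - xs, z) / v
    t_up   = math.hypot(xr - xm, z) / v
    return t_down + t_up

v, z = 2000.0, 1000.0   # m/s, m (illustrative values)
print(dsr_traveltime(0.0, 0.0, 0.0, z, v))      # zero offset: 2*z/v = 1.0 s
print(dsr_traveltime(-500.0, 500.0, 0.0, z, v)) # finite offset: longer path
```

The singularity the paper addresses appears in the eikonal form of this relation when waves travel horizontally, i.e. when one of the square-root terms degenerates.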
Topology, calculus and approximation
Komornik, Vilmos
2017-01-01
Presenting basic results of topology, calculus of several variables, and approximation theory which are rarely treated in a single volume, this textbook includes several beautiful, but almost forgotten, classical theorems of Descartes, Erdős, Fejér, Stieltjes, and Turán. The exposition style of Topology, Calculus and Approximation follows the Hungarian mathematical tradition of Paul Erdős and others. In the first part, the classical results of Alexandroff, Cantor, Hausdorff, Helly, Peano, Radon, Tietze and Urysohn illustrate the theories of metric, topological and normed spaces. Following this, the general framework of normed spaces and Carathéodory's definition of the derivative are shown to simplify the statement and proof of various theorems in calculus and ordinary differential equations. The third and final part is devoted to interpolation, orthogonal polynomials, numerical integration, asymptotic expansions and the numerical solution of algebraic and differential equations. Students of both pure an...
Directory of Open Access Journals (Sweden)
Yubo Wang
2017-06-01
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model into a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed method, Sparse-BMFLC, has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model into a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance, and results show that the proposed method, Sparse-BMFLC, has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous wavelet transform (CWT) and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
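The BMFLC signal model above is a truncated Fourier series over a fixed frequency band, i.e. a linear regression of the signal on a dictionary of sines and cosines at a grid of band-limited frequencies. The sketch below fits the combiner weights by ordinary least squares on a whole record; the paper instead estimates sparse weights by convex optimization and adapts them in time, and the signal, band and sampling rate here are illustrative:

```python
import numpy as np

# BMFLC-style model: signal ~ sum over band-limited frequencies of
# (a_f*sin + b_f*cos). Weights fitted here by plain least squares.

fs = 100.0
t = np.arange(0.0, 1.0, 1.0/fs)
signal = 0.8*np.sin(2*np.pi*5.0*t) + 0.3*np.cos(2*np.pi*7.5*t)

freqs = np.arange(2.0, 12.5, 0.5)                 # 2-12 Hz dictionary, 0.5 Hz step
A = np.hstack([np.sin(2*np.pi*np.outer(t, freqs)),
               np.cos(2*np.pi*np.outer(t, freqs))])
w, *_ = np.linalg.lstsq(A, signal, rcond=None)    # combiner weights
recon = A @ w

rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Because both true components lie on the frequency grid, the reconstruction is essentially exact; the sparsity-promoting estimator in the paper matters when the dictionary is large and only a few weights should be active.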
Melkonian, D; Korner, A; Meares, R; Bahramali, H
2012-10-01
A novel method for the time-frequency analysis of non-stationary heart rate variability (HRV) is developed, which introduces the fragmentary spectrum as a measure that brings together the frequency content, timing and duration of HRV segments. The fragmentary spectrum is calculated by the similar basis function algorithm. This numerical tool for the time-to-frequency and frequency-to-time Fourier transformations accepts both uniform and non-uniform sampling intervals, and is applicable to signal segments of arbitrary length. Once the fragmentary spectrum is calculated, the inverse transform recovers the original signal and reveals the accuracy of the spectral estimates. Numerical experiments show that discontinuities at the boundaries of the succession of inter-beat intervals can cause unacceptable distortions of the spectral estimates. We have developed a measure that we call the "RR deltagram" as a form of the HRV data that minimises spectral errors. The analysis of the experimental HRV data from real-life and controlled breathing conditions suggests transient oscillatory components as functionally meaningful elements of highly complex and irregular patterns of HRV.
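The similar basis function algorithm itself is not reproduced here; as a conceptual stand-in for the key capability named above, handling non-uniform sampling intervals such as inter-beat intervals, the sketch below evaluates a Fourier-type spectrum directly on non-uniformly sampled data by summing x_k·exp(−2πi·f·t_k)·Δt_k at arbitrary analysis frequencies (the sampling grid, signal and frequency band are synthetic):

```python
import numpy as np

# Direct Fourier-type spectrum on NON-uniform samples: no resampling to a
# uniform grid is needed, mirroring the flexibility the abstract describes.

t = np.sort(np.concatenate([np.arange(0.0, 20.0, 0.11),
                            np.arange(0.05, 20.0, 0.23)]))  # non-uniform times
x = np.sin(2*np.pi*0.25*t)                                  # 0.25 Hz component

freqs = np.linspace(0.05, 0.5, 10)        # analysis frequencies, Hz
dt = np.gradient(t)                       # local sampling intervals
spec = np.array([np.abs(np.sum(x * np.exp(-2j*np.pi*f*t) * dt))
                 for f in freqs])

print(freqs[np.argmax(spec)])             # peak at ~0.25 Hz
```

Resampling an RR series to a uniform grid before an FFT is the usual alternative, and is exactly the step that introduces interpolation artifacts in short or irregular segments.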
Approximate Bayesian recursive estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2014-01-01
Roč. 285, č. 1 (2014), s. 100-111 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Approximate parameter estimation * Bayesian recursive estimation * Kullback–Leibler divergence * Forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 4.038, year: 2014 http://library.utia.cas.cz/separaty/2014/AS/karny-0425539.pdf
Approximating Preemptive Stochastic Scheduling
Megow Nicole; Vredeveld Tjark
2009-01-01
We present approximation policies with constant performance guarantees for preemptive stochastic scheduling. We derive policies with a guaranteed performance ratio of 2 for scheduling jobs with release dates on identical parallel machines subject to minimizing the sum of weighted completion times. Our policies, as well as their analysis, also apply to the recently introduced more general model of stochastic online scheduling. The performance guarantee we give matches the best result known for the corresponding deterministic...
Optimization and approximation
Pedregal, Pablo
2017-01-01
This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.
Nuclear data processing, analysis, transformation and storage with Pade-approximants
International Nuclear Information System (INIS)
Badikov, S.A.; Gay, E.V.; Guseynov, M.A.; Rabotnov, N.S.
1992-01-01
A method is described to generate high-order rational approximants with applications to neutron data handling. The problems considered are: the approximation of neutron cross-sections in the resonance region, producing the parameters for Adler-Adler-type formulae; calculation of the resulting rational approximants' errors, given in analytical form, allowing the error to be computed at any energy point inside the interval of approximation; calculation of the correlation coefficient of error values at two arbitrary points, provided that the experimental errors are independent and normally distributed; a method for the simultaneous generation of several rational approximants with an identical set of poles; functionals other than LSM; and two-dimensional approximation. (orig.)
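The generic construction behind such rational approximants, building an [L/M] Padé approximant P(x)/Q(x) from Taylor coefficients by solving a linear system for the denominator, can be sketched compactly. This is the standard textbook construction illustrated on exp(x), not the Adler-Adler resonance fitting or the pole-sharing method described in the abstract:

```python
import math
import numpy as np

# [L/M] Pade approximant from Taylor coefficients c[0..L+M]:
# denominator b solves sum_{j=1..M} b_j*c[L+k-j] = -c[L+k], k = 1..M (b_0 = 1),
# then numerator a_i = sum_{j=0..min(i,M)} b_j*c[i-j].

def pade(c, L, M):
    c = np.asarray(c, dtype=float)
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    b = np.concatenate([[1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])])
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b            # coefficients in ascending powers of x

c = [1.0 / math.factorial(k) for k in range(5)]   # Taylor series of exp(x)
a, b = pade(c, 2, 2)

x = 0.3
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, math.exp(x))
```

For exp(x) this reproduces the classical [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12), whose poles are the analogue of the shared resonance poles the paper exploits.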
Yan, Meng; Yao, Minyu; Zhang, Hongming
2005-11-01
The performance of a spectral-phase-encoded (SPE) optical code-division multiple-access (OCDMA) system is analyzed. Regarding the incorrectly decoded signal (IDS) as a nonstationary random process, we derive a novel probability distribution for it. The probability distribution of the IDS is considered a chi-squared distribution with degrees of freedom r=1, which is more reasonable and accurate than in previous work. The bit error rate (BER) of an SPE OCDMA system under multiple-access interference is evaluated. Numerical results show that the system can sustain very low BER even when there are multiple simultaneous users, and as the code length becomes longer or the initial pulse becomes shorter, the system performs better.
Huang, Weilin; Wang, Runqiu; Chen, Yangkang
2018-05-01
The microseismic signal is typically weak compared with the strong background noise. To effectively detect weak signals in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. To detect the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator is obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. We provide detailed algorithmic descriptions and analysis: the algorithm framework, parameter selection, and computational issues of the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique on a synthetic data set. The technique is then applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average (STA/LTA) picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.
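The STA/LTA picker applied to the processed data is simple to sketch. The numpy version below uses illustrative window lengths and threshold (not the paper's settings) and triggers where short-term energy rises against the long-term background:

```python
import numpy as np

def sta_lta(x, nsta=20, nlta=200):
    """Ratio of short-term to long-term average of the signal energy.
    Entry j of the result corresponds to windows ending at sample j + nlta - 1."""
    e = np.concatenate(([0.0], np.cumsum(x ** 2)))
    i = np.arange(nlta, len(x) + 1)
    sta = (e[i] - e[i - nsta]) / nsta   # short-term average energy
    lta = (e[i] - e[i - nlta]) / nlta   # long-term average energy
    return sta / (lta + 1e-12)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.1, 1000)       # background noise
event = rng.normal(0.0, 1.0, 200)        # synthetic "arrival" starting at sample 1000
trace = np.concatenate([noise, event])

ratio = sta_lta(trace)
onset = np.argmax(ratio > 4.0) + 200 - 1  # ratio index -> sample index (nlta = 200)
print(onset)                              # close to the true onset at sample 1000
```

In practice the trigger threshold and window lengths are tuned to the sampling rate and noise level; here they are chosen only so the synthetic onset is obvious.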
Cyclic approximation to stasis
Directory of Open Access Journals (Sweden)
Stewart D. Johnson
2009-06-01
Neighborhoods of points in $\mathbb{R}^n$ where a positive linear combination of $C^1$ vector fields sums to zero contain, generically, cyclic trajectories that switch between the vector fields. Such points are called stasis points, and the approximating switching cycle can be chosen so that the timing of the switches exactly matches the positive linear weighting. In the case of two vector fields, the stasis points form one-dimensional $C^1$ manifolds containing nearby families of two-cycles. The generic case of two flows in $\mathbb{R}^3$ can be diffeomorphed to a standard form with cubic curves as trajectories.
International Nuclear Information System (INIS)
El Sawi, M.
1983-07-01
A simple approach employing properties of solutions of differential equations is adopted to derive an appropriate extension of the WKBJ method. Some of the earlier techniques in common use are unified, whereby the general approximate solution to a second-order homogeneous linear differential equation is presented in a standard form that is valid for all orders. In comparison with other methods, the present one is shown to lead in the order of iteration, and thus may accelerate the convergence of the solution. The method is also extended to the solution of inhomogeneous equations. (author)
The relaxation time approximation
International Nuclear Information System (INIS)
Gairola, R.P.; Indu, B.D.
1991-01-01
A plausible approximation has been made to estimate the relaxation time from a knowledge of the transition probability of phonons from one state (r vector, q vector) to another state (r' vector, q' vector) as a result of collision. The relaxation time thus obtained shows a strong dependence on temperature and a weak dependence on the wave vector. In view of this dependence, the relaxation time has been expressed as a temperature Taylor series in the first Brillouin zone. Consequently, a simple model for estimating the thermal conductivity is suggested; the calculations become much easier than with the Callaway model. (author). 14 refs
Polynomial approximation on polytopes
Totik, Vilmos
2014-01-01
Polynomial approximation on convex polytopes in \mathbf{R}^d is considered in uniform and L^p-norms. For an appropriate modulus of smoothness, matching direct and converse estimates are proven. In the L^p-case, so-called strong direct and converse results are also verified. The equivalence of the moduli of smoothness with an appropriate K-functional follows as a consequence. The results solve a problem that had been left open since the mid-1980s, when some of the present findings were established for special, so-called simple polytopes.
Finite elements and approximation
Zienkiewicz, O C
2006-01-01
A powerful tool for the approximate solution of differential equations, the finite element method is extensively used in industry and research. This book offers students of engineering and physics a comprehensive view of the principles involved, with numerous illustrative examples and exercises. Starting with continuum boundary value problems and the need for numerical discretization, the text examines finite difference methods, weighted residual methods in the context of continuous trial functions, and piecewise-defined trial functions and the finite element method. Additional topics include higher o
Multilevel weighted least squares polynomial approximation
Haji-Ali, Abdul-Lateef
2017-06-30
Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Thus, obtaining polynomial approximations with a single level method can become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations with reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations where such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for the sampling from optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
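A single-level instance of weighted least squares polynomial approximation can be sketched in a few lines. In this sketch the optimal sampling distribution of the paper is replaced, for simplicity, by uniform draws with unit weights (an assumption of this illustration, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.exp(x)   # smooth target function on [-1, 1]

# random sample locations and weights (uniform draws, unit weights here)
x = rng.uniform(-1.0, 1.0, 400)
w = np.ones_like(x)

deg = 5
V = np.polynomial.legendre.legvander(x, deg)   # Legendre design matrix
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(V * sw[:, None], f(x) * sw, rcond=None)

# uniform error of the degree-5 least squares approximant
xx = np.linspace(-1.0, 1.0, 200)
err = np.max(np.abs(np.polynomial.legendre.legval(xx, coef) - f(xx)))
print(err)   # small uniform error for this smooth target
```

The weighting machinery is kept in the code even though the weights are trivial here, so that a non-uniform sampling density and its compensating weights can be dropped in directly.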
Approximate Bayesian computation.
Directory of Open Access Journals (Sweden)
Mikael Sunnåker
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics. In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support the data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive, or the likelihood function might be computationally very costly to evaluate. ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection. ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences (e.g., in population genetics, ecology, epidemiology, and systems biology).
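The simplest member of this class, ABC rejection sampling, can be sketched as follows; the Gaussian toy model, flat prior range, tolerance, and summary statistic are all illustrative assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
observed = rng.normal(3.0, 1.0, 50)   # "data" whose mean we pretend not to know

def simulate(mu):
    """Forward model: draw a data set of the same size from Normal(mu, 1)."""
    return rng.normal(mu, 1.0, 50)

accepted = []
eps = 0.1                              # tolerance on the summary statistic
while len(accepted) < 500:
    mu = rng.uniform(-10.0, 10.0)      # draw a candidate from a flat prior
    # accept when the simulated summary (sample mean) is close to the observed one
    if abs(simulate(mu).mean() - observed.mean()) < eps:
        accepted.append(mu)

print(np.mean(accepted))   # approximate posterior mean, near the true value 3
```

No likelihood is ever evaluated: the comparison of summary statistics stands in for it, which is exactly the trade-off (and the source of approximation error) the abstract describes.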
Diffusive Wave Approximation to the Shallow Water Equations: Computational Approach
Collier, Nathan; Radwan, Hany; Dalcin, Lisandro; Calo, Victor M.
2011-01-01
We discuss the use of time adaptivity applied to the one-dimensional diffusive wave approximation to the shallow water equations. A simple and computationally economical error estimator is discussed which enables time-step size adaptivity.
Sequential reconstruction of driving-forces from nonlinear nonstationary dynamics
Güntürkün, Ulaş
2010-07-01
This paper describes a functional-analysis-based method for the estimation of driving forces from nonlinear dynamic systems. The driving forces account for the perturbation inputs induced by the external environment or the secular variations in the internal variables of the system. The proposed algorithm is applicable to problems for which there is too little or no prior knowledge to build a rigorous mathematical model of the unknown dynamics. We derive the estimator conditioned on the differentiability of the unknown system's mapping and the smoothness of the driving force. The proposed algorithm is an adaptive sequential realization of the blind prediction error method, where the basic idea is to predict the observables and retrieve the driving force from the prediction error. Our realization of this idea predicts the observables one step into the future using a bank of echo state networks (ESN) in an online fashion, then extracts the raw estimates from the prediction error and smooths these estimates in two adaptive filtering stages. The adaptive nature of the algorithm enables the accurate retrieval of both slowly and rapidly varying driving forces, as illustrated by simulations. Logistic and Moran-Ricker maps are studied in controlled experiments, exemplifying chaotic state and stochastic measurement models. The algorithm is also applied to the estimation of a driving force from another nonlinear dynamic system that is stochastic in both state and measurement equations. The results are judged by the posterior Cramer-Rao lower bounds. The method is finally put to the test on a real-world application: extracting the Sun's magnetic flux from the sunspot time series.
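The one-step prediction-error idea can be miniaturized to a single echo state network with a linear readout. Everything below (reservoir size, spectral radius, ridge parameter, the sine observable) is an illustrative assumption, far simpler than the paper's bank of ESNs and two adaptive smoothing stages:

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_train = 200, 1000
u = np.sin(0.1 * np.arange(n_train + 1))        # observable to predict one step ahead

Win = rng.uniform(-0.5, 0.5, n_res)             # input weights
W = rng.normal(0.0, 1.0, (n_res, n_res))        # random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W))) # rescale to spectral radius 0.9

# drive the reservoir and record its states
X = np.zeros((n_train, n_res))
x = np.zeros(n_res)
for t in range(n_train):
    x = np.tanh(W @ x + Win * u[t])
    X[t] = x

# ridge-regression readout: predict u[t+1] from the state after seeing u[t]
beta = 1e-6
Wout = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ u[1 : n_train + 1])
pred = X @ Wout

# prediction error after a washout period; a driving force would show up here
err = np.sqrt(np.mean((pred[200:] - u[201 : n_train + 1]) ** 2))
print(err)
```

For an unperturbed, predictable observable the residual is tiny; in the paper's setting it is precisely this residual, tracked online, that carries the signature of the unknown driving force.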
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas
2012-11-22
We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
Monte Carlo Euler approximations of HJM term structure financial models
Björk, Tomas; Szepessy, Anders; Tempone, Raul; Zouraris, Georgios E.
2012-01-01
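The Euler scheme underlying such Monte Carlo weak approximations is easy to illustrate on a scalar SDE with a known solution. The sketch below uses geometric Brownian motion with illustrative parameters (not an HJM forward-rate model) and estimates the weak error in E[X_T]:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, T, X0 = 0.05, 0.2, 1.0, 1.0
N, M = 64, 200_000          # time steps and Monte Carlo paths
dt = T / N

# Euler-Maruyama for dX = mu X dt + sigma X dW, vectorized over all paths
X = np.full(M, X0)
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), M)
    X = X + mu * X * dt + sigma * X * dW

# weak error in E[X_T]; the exact value is X0 * exp(mu * T)
print(abs(X.mean() - X0 * np.exp(mu * T)))
```

The observed error mixes the O(dt) time-discretization bias with the O(M^{-1/2}) statistical error, which is exactly the decomposition the paper's dual-based estimates are designed to separate.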
Validation of the measurement model concept for error structure identification
International Nuclear Information System (INIS)
Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.
2004-01-01
The development of different forms of measurement models for impedance has allowed examination of the key assumptions on which the use of such models to assess error structure is based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made it a more sensitive tool for the identification of inconsistencies with the Kramers-Kronig relations.
Approximations and Implementations of Nonlinear Filtering Schemes.
1988-02-01
The random phase approximation
International Nuclear Information System (INIS)
Schuck, P.
1985-01-01
RPA is the adequate theory to describe vibrations of the nucleus of very small amplitude. These vibrations can either be forced by an external electromagnetic field or can be eigenmodes of the nucleus. In a one-dimensional analogue, the potential corresponding to such eigenmodes of very small amplitude should be rather stiff; otherwise the motion risks becoming a large-amplitude one and entering a region where the approximation is not valid. This means that nuclei which are supposedly well described by RPA must have a very stable ground-state configuration (they must, e.g., be very stiff against deformation). This is usually the case for doubly magic nuclei or nuclei close to magic, and for nuclei in the middle of proton and neutron shells which develop a very stable ground-state deformation; we take deformation as an example, but there are many other possible degrees of freedom, for example compression modes, isovector degrees of freedom, spin degrees of freedom, and many more.
The quasilocalized charge approximation
International Nuclear Information System (INIS)
Kalman, G J; Golden, K I; Donko, Z; Hartmann, P
2005-01-01
The quasilocalized charge approximation (QLCA) has been used for some time as a formalism for the calculation of the dielectric response and for determining the collective mode dispersion in strongly coupled Coulomb and Yukawa liquids. The approach is based on a microscopic model in which the charges are quasilocalized on a short time scale in local potential fluctuations. We review the conceptual basis and theoretical structure of the QLC approach, together with recent results from molecular dynamics simulations that corroborate and quantify the theoretical concepts. We also summarize the major applications of the QLCA to various physical systems, combined with the corresponding results of the molecular dynamics simulations, and point out the general agreement and instances of disagreement between the two.
Green-Ampt approximations: A comprehensive analysis
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
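For reference, the implicit GA model that the explicit formulas approximate can itself be solved to machine precision with a few Newton iterations. The soil parameters below (K, psi, dtheta) are illustrative values, not data from the study:

```python
import math

def green_ampt_F(t, K=1.0, psi=10.0, dtheta=0.3, tol=1e-10):
    """Cumulative infiltration F(t) from the implicit Green-Ampt equation
    F - S ln(1 + F/S) = K t, with S = psi * dtheta, solved by Newton iteration."""
    S = psi * dtheta
    F = K * t + S                        # starting guess to the right of the root
    for _ in range(50):
        g = F - S * math.log(1.0 + F / S) - K * t
        dg = F / (S + F)                 # derivative of g with respect to F
        F_new = F - g / dg
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F

print(green_ampt_F(2.0))  # cumulative infiltration after t = 2 time units
```

The explicit models ranked above trade this short iteration for a closed-form expression, which is what makes their accuracy-versus-simplicity comparison worthwhile.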
Global sea surface temperature (SST) anomalies can affect terrestrial precipitation via ocean-atmosphere interaction known as climate teleconnection. Non-stationary and non-linear characteristics of the ocean-atmosphere system make the identification of the teleconnection signals...
ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals
International Nuclear Information System (INIS)
Vogel, J.E.
1983-01-01
1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the last region being x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
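The caution in point 5 is easy to demonstrate with the standard library's erf/erfc (used here in place of the package's own rational approximations):

```python
import math

x = 10.0
direct = math.erfc(x)          # complementary error function, computed directly
via_erf = 1.0 - math.erf(x)    # the identity the routine warns against

print(direct)    # ~2e-45: tiny but nonzero
print(via_erf)   # 0.0: total loss of significance in double precision
```

Because erf(10) rounds to exactly 1.0 in double precision, the subtraction destroys all information; computing erfc directly preserves the tiny tail value.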
Learning from prescribing errors
Dean, B
2002-01-01
The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this from occurring. Learning from errors can take place at an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...
Approximate quantum Markov chains
Sutter, David
2018-01-01
This book is an introduction to quantum Markov chains and explains how this concept is connected to the question of how well a lost quantum mechanical system can be recovered from a correlated subsystem. To achieve this goal, we strengthen the data-processing inequality such that it reveals a statement about the reconstruction of lost information. The main difficulty in order to understand the behavior of quantum Markov chains arises from the fact that quantum mechanical operators do not commute in general. As a result we start by explaining two techniques of how to deal with non-commuting matrices: the spectral pinching method and complex interpolation theory. Once the reader is familiar with these techniques a novel inequality is presented that extends the celebrated Golden-Thompson inequality to arbitrarily many matrices. This inequality is the key ingredient in understanding approximate quantum Markov chains and it answers a question from matrix analysis that was open since 1973, i.e., if Lieb's triple ma...
Prestack traveltime approximations
Alkhalifah, Tariq Ali
2012-05-01
Many of the explicit prestack traveltime relations used in practice are based on homogeneous (or semi-homogeneous, possibly effective) media approximations. This includes the multifocusing approach, based on the double square-root (DSR) equation, and the common reflection stack (CRS) approach. Using the DSR equation, I constructed the associated eikonal form in the general source-receiver domain. Like its wave-equation counterpart, it suffers from a critical singularity for horizontally traveling waves. As a result, I recast the eikonal in terms of the reflection angle and derived expansion-based solutions of this eikonal in terms of the difference between the source and receiver velocities in a generally inhomogeneous background medium. The zero-order term solution, corresponding to ignoring the lateral velocity variation in estimating the prestack part, is free of singularities and can be used to estimate traveltimes for small to moderate offsets (or reflection angles) in a generally inhomogeneous medium. The higher-order terms retain limitations for horizontally traveling waves; however, we can readily enforce stability constraints to avoid such singularities. In fact, another expansion over reflection angle can help us avoid these singularities by requiring the source and receiver velocities to be different. On the other hand, expansions in terms of reflection angles result in singularity-free equations. For a homogeneous background medium, as a test, the solutions are reasonably accurate to large reflection and dip angles. A Marmousi example demonstrated the usefulness and versatility of the formulation. © 2012 Society of Exploration Geophysicists.
Evaluation of the Methods for Response Analysis under Non-Stationary Excitation
Directory of Open Access Journals (Sweden)
R.S. Jangid
1999-01-01
The response of structures to non-stationary ground motion can be obtained either by evolutionary spectral analysis or by the Markov approach. Under certain conditions, a quasi-stationary analysis can also be performed. The first two methods of analysis are difficult to apply in complex situations such as problems involving soil-structure interaction, non-classical damping, and primary-secondary structure interaction. The quasi-stationary analysis, on the other hand, provides an easier solution procedure for such cases. Herein, the effectiveness of the quasi-stationary analysis is examined with the help of the analysis of a single-degree-of-freedom (SDOF) system under a set of parametric variations. For this purpose, responses of the SDOF system to uniformly modulated non-stationary random ground excitation are obtained by the three methods and compared. In addition, the relative computational efforts of the different methods are also investigated.
Detection of Unusual Events and Trends in Complex Non-Stationary Data Streams
International Nuclear Information System (INIS)
Perez, Rafael B.; Protopopescu, Vladimir A.; Worley, Brian Addison; Perez, Cristina
2006-01-01
The search for unusual events and trends hidden in multi-component, nonlinear, non-stationary, noisy signals is extremely important for a host of different applications, ranging from nuclear power plant and electric grid operation to internet traffic and implementation of non-proliferation protocols. In the context of this work, we define an unusual event as a local signal disturbance and a trend as a continuous carrier of information added to and different from the underlying baseline dynamics. The goal of this paper is to investigate the feasibility of detecting hidden intermittent events inside non-stationary signal data sets corrupted by high levels of noise, by using the Hilbert-Huang empirical mode decomposition method
Non-stationary dynamics in the bouncing ball: A wavelet perspective
Energy Technology Data Exchange (ETDEWEB)
Behera, Abhinna K., E-mail: abhinna@iiserkol.ac.in; Panigrahi, Prasanta K., E-mail: pprasanta@iiserkol.ac.in [Department of Physical Sciences, Indian Institute of Science Education and Research (IISER) Kolkata, Mohanpur 741246 (India); Sekar Iyengar, A. N., E-mail: ansekar.iyengar@saha.ac.in [Plasma Physics Division, Saha Institute of Nuclear Physics (SINP), Sector 1, Block-AF, Bidhannagar, Kolkata 700064 (India)
2014-12-01
The non-stationary dynamics of a bouncing ball, comprising both periodic and chaotic behavior, is studied through the wavelet transform. The multi-scale characterization of the time series displays clear signatures of self-similarity, complex scaling behavior, and periodicity. Self-similar behavior is quantified by the generalized Hurst exponent, obtained through both wavelet-based multi-fractal detrended fluctuation analysis and Fourier methods. The scale-dependent variable window size of the wavelets aptly captures both the transients and the non-stationary periodic behavior, including the phase synchronization of different modes. The optimal time-frequency localization of the continuous Morlet wavelet is found to delineate the scales corresponding to neutral turbulence, viscous dissipation regions, and different time-varying periodic modulations.
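The scale-dependent window exploited here can be sketched with a direct-convolution Morlet transform. This is a minimal numpy sketch, not the paper's analysis pipeline; w0 = 6 is a conventional choice for the Morlet center frequency:

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet,
    computed by direct convolution at each scale."""
    out = np.empty((len(scales), len(sig)), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)                  # +-4 envelope widths
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2) / np.sqrt(s)
        out[k] = np.convolve(sig, np.conj(psi)[::-1], mode="same")
    return out

# non-stationary test signal: the oscillation period halves at mid-record
n = 1024
t = np.arange(n)
sig = np.where(t < n // 2, np.sin(2 * np.pi * t / 64), np.sin(2 * np.pi * t / 32))

scales = np.array([8.0, 16.0, 31.0, 61.0])   # Morlet period ~ 2*pi*s / w0
power = np.abs(morlet_cwt(sig, scales)) ** 2
# power concentrates at scale ~61 before the switch and ~31 after it
```

Because each scale carries its own window length, the transform localizes the frequency switch in time, which is exactly the property that makes it suitable for the transients and mode changes described above.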
Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia
Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya
2014-09-01
The extreme annual temperatures of eighteen stations in Malaysia are fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station, and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three of the eighteen stations, namely Bayan Lepas, Labuan, and Subang, favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found, one of which consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
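A stationary version of such a fit can be sketched with `scipy.stats.genextreme`. The synthetic "annual maxima" below are illustrative (not Malaysian station data), and a trend in the location parameter would require a custom likelihood rather than the plain `fit` call:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# hypothetical annual-maximum temperature series, 80 "years"
annual_max = genextreme.rvs(c=0.1, loc=36.0, scale=1.5, size=80, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)           # stationary GEV fit (MLE)
rl100 = genextreme.isf(1.0 / 100.0, c, loc, scale)   # 100-year return level
print(rl100)
```

The T-year return level is simply the quantile exceeded with probability 1/T per year, which is what `isf(1/T)` returns for the fitted distribution.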
Elastic shells of revolution under nonstationary thermal loading using ring finite elements
International Nuclear Information System (INIS)
Yao Zhenhan
1986-01-01
The report deals with the analysis of elastic shells of revolution under nonstationary thermal loading using ring finite elements. First, a ring element for moderately thick shells is derived which should also be employed for thin shells when either higher Fourier components of the displacements, or deflection patterns with very steep gradients occur. Then, a ring element for the analysis of heat conduction in shells of revolution is derived, and algorithms for the numerical solution of linear stationary, nonlinear stationary, as well as linear nonstationary problems are presented. Finally, a ring element for the coupled thermoelastic analysis of shells of revolution is developed, and an algorithm for the solution of weakly coupled problems is given. (orig.) [de
A regional and nonstationary model for partial duration series of extreme rainfall
DEFF Research Database (Denmark)
Gregersen, Ida Bülow; Madsen, Henrik; Rosbjerg, Dan
2017-01-01
... of extreme rainfall. The framework is built on a partial duration series approach with a nonstationary, regional threshold value. The model is based on generalized linear regression solved by generalized estimation equations. It allows a spatial correlation between the stations in the network and accounts furthermore for variable observation periods at each station and in each year. Marginal regional and temporal regression models solved by generalized least squares are used to validate and discuss the results of the full spatiotemporal model. The model is applied on data from a large Danish rain gauge network ... as the explanatory variables in the regional and temporal domain, respectively. Further analysis of partial duration series with nonstationary and regional thresholds shows that the mean exceedances also exhibit a significant variation in space and time for some rainfall durations, while the shape parameter is found ...
Advantages of the non-stationary approach: test on eddy current signals
International Nuclear Information System (INIS)
Brunel, P.
1993-12-01
Conventional signal processing is often unsuitable for the interpretation of intrinsically non-stationary signals, such as surveillance or non-destructive testing signals. In these cases, "advanced" methods are required. This report presents two applications of non-stationary signal processing methods to the complex signals obtained in eddy current non-destructive testing of steam generator tubes. The first application consists in segmenting the absolute channel, which can be likened to a piecewise constant signal. The Page-Hinkley cumulative sum algorithm is used, enabling the detection of mean amplitude jumps of unknown size in a piecewise constant signal disturbed by white noise. Results are comparable to those obtained with the empirical method currently in use. As easy to implement as the latter, the Page-Hinkley algorithm has the added advantage of being well formalized and of identifying whether the jumps in the mean are positive or negative. The second application concerns assistance in detecting characteristic fault transients in the differential channels, using the continuous wavelet transform. The useful signal and noise spectra are fairly close, but not strictly identical; with the continuous wavelet transform, these frequency differences can be exploited. The method was tested on synthetic signals obtained by summing noise and real defect signals. Using the continuous wavelet transform reduces the minimum signal-to-noise ratio for detection of a transient by 5 dB compared with direct detection on the original signal. Finally, a summary of non-stationary methods applied to our data is presented. The two investigations described confirm that non-stationary methods may be considered interesting signal and image analysis tools and an efficient complement to conventional methods. (author). 24 figs., 13 refs
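The Page-Hinkley test used for the absolute channel is compact enough to sketch. The drift `delta` and threshold `lam` below are illustrative tuning values, and only positive jumps are monitored (the symmetric test adds a mirrored statistic for negative jumps):

```python
import numpy as np

def page_hinkley(x, delta=0.05, lam=8.0):
    """Detect a positive jump in the mean of a signal.
    Returns the index of the first alarm, or None if no alarm is raised."""
    mean = 0.0
    cum = 0.0      # Page-Hinkley cumulative statistic
    cum_min = 0.0  # running minimum of the statistic
    for t, xt in enumerate(x):
        mean += (xt - mean) / (t + 1)   # running mean of the signal so far
        cum += xt - mean - delta        # accumulate deviation minus drift
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:         # upward excursion exceeds threshold
            return t
    return None

rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0.0, 0.3, 200),    # baseline segment
                      rng.normal(2.0, 0.3, 200)])   # mean jumps by +2.0 at t=200
print(page_hinkley(sig))   # alarm typically a few samples after the jump
```

The drift `delta` suppresses false alarms from noise, while `lam` sets the detection delay: larger values tolerate more noise but react later to a genuine jump.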
International Nuclear Information System (INIS)
Kraus, B.; Tittel, W.; Gisin, N.; Nilsson, M.; Kroell, S.; Cirac, J. I.
2006-01-01
We propose a method for efficient storage and recall of arbitrary nonstationary light fields, such as, for instance, single photon time-bin qubits or intense fields, in optically dense atomic ensembles. Our approach to quantum memory is based on controlled, reversible, inhomogeneous broadening and relies on a hidden time-reversal symmetry of the optical Bloch equations describing the propagation of the light field. We briefly discuss experimental realizations of our proposal
On the dynamics of non-stationary binary stellar system with non-isotropic mass flow
International Nuclear Information System (INIS)
Bekov, A.A.; Bejsekov, A.N.; Aldibaeva, L.T.
2006-01-01
The motion of a test body in the external gravitational field of binary stellar systems, some physical parameters of whose radiating components vary slowly, is considered on the basis of the restricted non-stationary photo-gravitational three- and two-body problems with non-isotropic mass flow. Families of polar and coplanar solutions are obtained. The solutions open the possibility of a dynamical and structural interpretation of young evolving binary stars and galaxies. (author)
Nonstationary behavior in a delayed feedback traveling wave tube folded waveguide oscillator
International Nuclear Information System (INIS)
Ryskin, N.M.; Titov, V.N.; Han, S.T.; So, J.K.; Jang, K.H.; Kang, Y.B.; Park, G.S.
2004-01-01
Folded waveguide traveling-wave tubes (FW TWTs) are among the most promising candidates for compact high-power amplifiers and oscillators in the millimeter and submillimeter wave bands. In this paper, the nonstationary behavior of an FW TWT oscillator with delayed feedback is investigated. Starting conditions for the oscillations are derived analytically. Results of numerical simulation of single-frequency, self-modulation (multifrequency) and chaotic generation regimes are presented. Mode competition phenomena, multistability and hysteresis are discussed
Bayesian Inference of Nonstationary Precipitation Intensity-Duration-Frequency Curves
2016-03-01
ERDC/CHL CHETN-X-2, March 2016. Approved for public release; distribution is unlimited. ...probability-based means by which to develop local precipitation Intensity-Duration-Frequency (IDF) curves using historical rainfall time series data collected for... each IDF curve and subsequently used to force a calibrated and validated precipitation-runoff model. Probability-based, risk-informed hydrologic...
Effect of non-stationary climate on infectious gastroenteritis transmission in Japan
Onozuka, Daisuke
2014-01-01
Local weather factors are widely considered to influence the transmission of infectious gastroenteritis. Few studies, however, have examined the non-stationary relationships between global climatic factors and transmission of infectious gastroenteritis. We analyzed monthly data for cases of infectious gastroenteritis in Fukuoka, Japan from 2000 to 2012 using cross-wavelet coherency analysis to assess the pattern of associations between indices for the Indian Ocean Dipole (IOD) and El Niño Sou...
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
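The Monte Carlo construction described above can be sketched as follows. The GEV parameters here are hypothetical stand-ins (the study fits them per year to VIC-simulated annual extremes); only the mechanics of re-sampling a design lifespan with year-varying parameters are illustrated:

```python
import numpy as np
from scipy.stats import genextreme

# Sketch of the non-stationary Monte Carlo construction: each year of
# a design lifespan has its own GEV parameters (here an assumed upward
# drift in the location parameter), and the lifespan is re-sampled
# many times to estimate lifespan extremes.
rng = np.random.default_rng(42)
lifespan, n_real = 25, 10_000

locs = np.linspace(100.0, 120.0, lifespan)  # per-year location (assumed)
scale, c = 15.0, -0.1                       # scipy's shape c is -xi

# Non-stationary: year t of every realization uses year t's parameters.
annual_ns = genextreme.rvs(c, loc=locs, scale=scale,
                           size=(n_real, lifespan), random_state=rng)
# Stationary benchmark: all years use the first year's parameters.
annual_s = genextreme.rvs(c, loc=locs[0], scale=scale,
                          size=(n_real, lifespan), random_state=rng)

q99_ns = np.quantile(annual_ns.max(axis=1), 0.99)
q99_s = np.quantile(annual_s.max(axis=1), 0.99)
print(f"99th-percentile lifespan maximum: stationary {q99_s:.1f}, "
      f"non-stationary {q99_ns:.1f}")
```

With an upward-drifting location, the non-stationary percentile exceeds the stationary one, and the result depends explicitly on the assumed design lifespan, as the abstract argues.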
Bučar, Bojan
2007-01-01
The assumption that non-stationary sorption processes associated with wood can be evaluated by analysis of their transient system response to the disturbance developed is undoubtedly correct. In general it is, in fact, possible to obtain by time analysis of the transient phenomenon - involving the transition into an arbitrary new state of equilibrium - all data required for a credible evaluation of the observed system. Evaluation of moisture movement during drying or moistening requires determ...
Non-stationary Condition Monitoring of large diesel engines with the AEWATT toolbox
DEFF Research Database (Denmark)
Pontoppidan, Niels Henrik; Larsen, Jan; Sigurdsson, Sigurdur
2005-01-01
We are developing a specialized toolbox for non-stationary condition monitoring of large 2-stroke diesel engines based on acoustic emission measurements. The main contribution of this toolbox has so far been the utilization of adaptive linear models such as Principal and Independent Component Ana......, the inversion of those angular timing changes called “event alignment”, has allowed for condition monitoring across operation load settings, successfully enabling a single model to be used with realistic data under varying operational conditions-...
Energy Technology Data Exchange (ETDEWEB)
Todorov, N S [Low Temperature Department of the Institute of Solid State Physics of the Bulgarian Academy of Sciences, Sofia
1981-04-01
It is shown that the nonstationary Schroedinger equation does not satisfy a well-known adiabatic principle of thermodynamics. A ''renormalization procedure'', based on the possible existence of a time-irreversible basic evolution equation, is proposed, with the help of which agreement is reached in a variety of specific cases of adiabatic inclusion of a perturbing potential. The ideology of the present article rests essentially on that of the preceding articles, in particular article I.
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
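The mean-variance coupling the abstract describes can be seen directly in a Fano factor calculation. The sinusoidal rate below is an illustrative stand-in for a nonstationary chemiluminescence trend, not data from the paper:

```python
import numpy as np

def fano(counts):
    """Fano factor: variance-to-mean ratio; equals 1 for an ideal
    stationary Poisson process."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(1)
# Stationary Poisson photocounts vs. counts with a slow rate trend.
lam = 5.0 + 4.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 10_000))
stationary = rng.poisson(5.0, size=10_000)
nonstationary = rng.poisson(lam)

f_stat, f_ns = fano(stationary), fano(nonstationary)
print(f"Fano stationary: {f_stat:.2f}, with trend: {f_ns:.2f}")
```

The trend inflates the variance but not the variance-to-mean balance of the underlying Poisson process, so the Fano factor of the trended counts is well above 1; this is the artifact that a Poisson-preserving pre-processing step is designed to remove.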
Distinguishing Stationary/Nonstationary Scaling Processes Using Wavelet Tsallis q-Entropies
Directory of Open Access Journals (Sweden)
Julio Ramirez Pacheco
2012-01-01
Full Text Available Classification of processes as stationary or nonstationary has been recognized as an important and unresolved problem in the analysis of scaling signals. Stationarity or nonstationarity determines not only the form of autocorrelations and moments but also the selection of estimators. In this paper, a methodology for classifying scaling processes as stationary or nonstationary is proposed. The method is based on wavelet Tsallis q-entropies, and particularly on the behaviour of these entropies for scaling signals. It is demonstrated that the observed wavelet Tsallis q-entropies of 1/f signals can be modeled by sum-cosh apodizing functions, which allocate constant entropies to one set of scaling signals and varying entropies to the rest, and that this allocation is controlled by q. The proposed methodology therefore differentiates stationary signals from non-stationary ones based on the observed wavelet Tsallis entropies for 1/f signals. Experimental studies using synthesized signals confirm that the proposed method not only achieves satisfactory classification but also outperforms current methods proposed in the literature.
Nonstationary influence of El Niño on the synchronous dengue epidemics in Thailand.
Directory of Open Access Journals (Sweden)
Bernard Cazelles
2005-04-01
Full Text Available BACKGROUND: Several factors, including environmental and climatic factors, influence the transmission of vector-borne diseases. Nevertheless, the identification and relative importance of climatic factors for vector-borne diseases remain controversial. Dengue is the world's most important viral vector-borne disease, and the controversy about climatic effects also applies in this case. Here we address the role of climate variability in shaping the interannual pattern of dengue epidemics. METHODS AND FINDINGS: We have analysed monthly data for Thailand from 1983 to 1997 using wavelet approaches that can describe nonstationary phenomena and that also allow the quantification of nonstationary associations between time series. We report a strong association between monthly dengue incidence in Thailand and the dynamics of El Niño for the 2-3-y periodic mode. This association is nonstationary, seen only from 1986 to 1992, and appears to have a major influence on the synchrony of dengue epidemics in Thailand. CONCLUSION: The underlying mechanism for the synchronisation of dengue epidemics may resemble that of a pacemaker, in which intrinsic disease dynamics interact with climate variations driven by El Niño to propagate travelling waves of infection. When association with El Niño is strong in the 2-3-y periodic mode, one observes high synchrony of dengue epidemics over Thailand. When this association is absent, the seasonal dynamics become dominant and the synchrony initiated in Bangkok collapses.
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure is introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
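The core idea of representing a non-stationary series by a set of local linear models can be sketched with fixed segments. The SOMAR network additionally learns the segmentation and the mixture on-line; the fixed three-way split below is a deliberate simplification:

```python
import numpy as np

# Toy version of the "dynamic set of local linear models" idea: a
# regime-switching AR(1) series is split into fixed segments and a
# separate AR(1) coefficient is fitted to each by least squares.
rng = np.random.default_rng(3)
n, n_seg = 600, 3
phi_true = np.repeat([0.9, -0.5, 0.4], n // n_seg)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true[t] * x[t - 1] + rng.standard_normal()

phi_hats = []
seg_len = n // n_seg
for k in range(n_seg):
    idx = np.arange(max(k * seg_len, 1), (k + 1) * seg_len)
    y, ylag = x[idx], x[idx - 1]
    phi_hats.append((ylag @ y) / (ylag @ ylag))  # OLS AR(1) estimate
    print(f"segment {k}: true {phi_true[idx[0]]:+.1f}, "
          f"estimated {phi_hats[-1]:+.2f}")
```

Each segment recovers its own local dynamics even though no single AR(1) model fits the whole series, which is the motivation for mixture approaches of this kind.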
Directory of Open Access Journals (Sweden)
Yin Yanshu
2017-12-01
Full Text Available In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
The Fourier decomposition method for nonlinear and non-stationary time series analysis.
Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik
2017-03-01
For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of a zero-phase filter-bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain the cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparisons are made with empirical mode decomposition algorithms.
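The zero-phase filter-bank notion underlying the MFDM can be illustrated with simple FFT masking. This is a generic band decomposition, not the FDM algorithm itself, and the 10 Hz cut-off is an arbitrary choice for the synthetic signal:

```python
import numpy as np

def fft_band_decompose(x, cutoffs, fs=1.0):
    """Split x into band-limited, zero-phase components via FFT
    masking. cutoffs: increasing cut-off frequencies (Hz) delimiting
    the bands. The components sum back to x exactly.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = [0.0, *cutoffs, freqs[-1] + 1.0]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(X * mask, n=len(x)))
    return bands

fs = 100.0
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
bands = fft_band_decompose(x, cutoffs=[10.0], fs=fs)
print(np.allclose(sum(bands), x))  # perfect reconstruction
```

Because the masks partition the spectrum, the decomposition introduces no phase distortion and is exactly invertible; the adaptive part of the FDM/MFDM is choosing the cut-offs from the data rather than fixing them in advance.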
On a saddlepoint approximation to the Markov binomial distribution
DEFF Research Database (Denmark)
Jensen, Jens Ledet
A nonstandard saddlepoint approximation to the distribution of a sum of Markov dependent trials is introduced. The relative error of the approximation is studied, not only for the number of summands tending to infinity, but also for the parameter approaching the boundary of its definition range...
The log-linear return approximation, bubbles, and predictability
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expec...
The Log-Linear Return Approximation, Bubbles, and Predictability
DEFF Research Database (Denmark)
Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten
2012-01-01
We study in detail the log-linear return approximation introduced by Campbell and Shiller (1988a). First, we derive an upper bound for the mean approximation error, given stationarity of the log dividend-price ratio. Next, we simulate various rational bubbles which have explosive conditional expe...
Saddlepoint Approximations for Expectations and an Application to CDO Pricing
Huang, X.; Oosterlee, C.W.
2011-01-01
We derive two types of saddlepoint approximations for expectations in the form of E[(X - K)+], where X is the sum of n independent random variables and K is a known constant. We establish error convergence rates for both types of approximations in the independently and identically distributed case.
Essays on forecasting stationary and nonstationary economic time series
Bachmeier, Lance Joseph
This dissertation consists of three essays. Chapter II considers the question of whether M2 growth can be used to forecast inflation at horizons of up to ten years. A vector error correction (VEC) model serves as our benchmark model. We find that M2 growth does have marginal predictive content for inflation at horizons of more than two years, but only when allowing for cointegration and when the cointegrating rank and vector are specified a priori. When estimating the cointegration vector or failing to impose cointegration, there is no longer evidence of causality running from M2 growth to inflation at any forecast horizon. Finally, we present evidence that M2 needs to be redefined, as forecasts of the VEC model using data on M2 observed after 1993 are worse than the forecasts of an autoregressive model of inflation. Chapter III reconsiders the evidence for a "rockets and feathers" effect in gasoline markets. We estimate an error correction model of gasoline prices using daily data for the period 1985--1998 and fail to find any evidence of asymmetry. We show that previous work suffered from two problems. First, nonstationarity in some of the regressors was ignored, leading to invalid inference. Second, the weekly data used in previous work leads to a temporal aggregation problem, and thus biased estimates of impulse response functions. Chapter IV tests for a forecasting relationship between the volume of litigation and macroeconomic variables. We analyze annual data for the period 1960--2000 on the number of cases filed, real GDP, real consumption expenditures, inflation, unemployment, and interest rates. Bivariate Granger causality tests show that several of the macroeconomic variables can be used to forecast the volume of litigation, but show no evidence that the volume of litigation can be used to forecast any of the macroeconomic variables. The analysis is then extended to bivariate and multivariate regression models, and we find similar evidence to that of the
Analytical modeling for thermal errors of motorized spindle unit
Liu, Teng; Gao, Weiguo; Zhang, Dawei; Zhang, Yifan; Chang, Wenfen; Liang, Cunman; Tian, Yanling
2017-01-01
Modeling method investigation about spindle thermal errors is significant for spindle thermal optimization in design phase. To accurately analyze the thermal errors of motorized spindle unit, this paper assumes approximately that 1) spindle linear thermal error on axial direction is ascribed to shaft thermal elongation for its heat transfer from bearings, and 2) spindle linear thermal errors on radial directions and angular thermal errors are attributed to thermal variations of bearing relati...
International Nuclear Information System (INIS)
Anon.
1991-01-01
This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
International Nuclear Information System (INIS)
Picard, R.R.
1989-01-01
Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
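The single-value and materials-balance cases the chapter covers reduce to the familiar first-order formula. A minimal sketch with illustrative numbers (not the chapter's uranium hexafluoride example):

```python
import numpy as np

# First-order ("delta method") propagation of variance. For a
# materials balance MUF = B + R - E - S with independent measurement
# errors the sensitivities are +/-1, so the variances simply add.
# The numerical values below are illustrative.
var_B, var_R, var_E, var_S = 0.04, 0.09, 0.04, 0.01
var_muf = var_B + var_R + var_E + var_S
print(f"Var(MUF) = {var_muf:.2f}, sigma = {np.sqrt(var_muf):.3f}")

# General first-order rule for f = x*y with independent errors:
# sigma_f^2 ~ (y*sigma_x)^2 + (x*sigma_y)^2.
def propagate_product(x, sx, y, sy):
    return np.hypot(y * sx, x * sy)

print(propagate_product(10.0, 0.1, 5.0, 0.2))  # sqrt(0.25 + 4.0) ~ 2.06
```

For several correlated measured values the cross-covariance terms must be added as well, which is exactly the complication the chapter's several-measured-values section addresses.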
Martínez-Legaz, Juan Enrique; Soubeyran, Antoine
2003-01-01
We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.
Directory of Open Access Journals (Sweden)
M. I. Popov
2016-01-01
Full Text Available An approximate analytical solution is presented for the problem of non-stationary free convection of a Newtonian liquid, in the conductive and laminar regime, in a square cavity subject to an instantaneous change of temperature at one sidewall and to zero heat flux at the top and bottom boundaries. The free-convection equations in the Oberbeck-Boussinesq approximation are linearized by neglecting the convective terms. To reduce the number of hydrothermal parameters, the system is made dimensionless by introducing scales for the dependent and independent variables. Passing from the classical variables to the vorticity-stream function variables reduces the system to a non-stationary heat conduction equation and a non-stationary nonuniform biharmonic equation, the first being independent of the second. The solution for the stream function is obtained by applying finite sine Fourier transforms to the biharmonic equation, first in the variable x and then in the variable y. The stream function takes the form of a double Fourier sine series whose coefficients are integrals of unknown functions. On the basis of a hypothesis about the explicit form of these integrals, the coefficients are calculated from the linear equation system obtained from the boundary conditions on the partial derivatives of the function. The dependence of the flow structure on the Prandtl number is investigated. Maps of streamlines and isolines of the velocity components are obtained, describing the development of the flow from its onset to the transition to a steady state. Plots of the velocity vector field at various times illustrate the dynamics of the flow. The validity of the hypothesis about the explicit form of the integral coefficients is confirmed by its consistency with physical sense and by the agreement of the results with the numerical solution of the problem.
Self-similar factor approximants
International Nuclear Information System (INIS)
Gluzman, S.; Yukalov, V.I.; Sornette, D.
2003-01-01
The problem of reconstructing functions from their asymptotic expansions in powers of a small variable is addressed by deriving an improved type of approximants. The derivation is based on the self-similar approximation theory, which presents the passage from one approximant to another as the motion realized by a dynamical system with the property of group self-similarity. The derived approximants, because of their form, are called self-similar factor approximants. These complement the earlier self-similar exponential approximants and self-similar root approximants. The specific feature of self-similar factor approximants is that their control functions, providing convergence of the computational algorithm, are completely defined from the accuracy-through-order conditions. These approximants contain the Padé approximants as a particular case, and in some limit they can be reduced to the self-similar exponential approximants previously introduced by two of us. It is proved that the self-similar factor approximants are able to reproduce exactly a wide class of functions, which includes a variety of nonalgebraic functions. For other functions, not pertaining to this exactly reproducible class, the factor approximants provide very accurate approximations, whose accuracy surpasses significantly that of the most accurate Padé approximants. This is illustrated by a number of examples showing the generality and accuracy of the factor approximants even when conventional techniques meet serious difficulties
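Schematically, and as a hedged reconstruction from the related literature rather than a quotation of this paper, a factor approximant to a function with small-variable expansion f(x) = a_0 + a_1 x + a_2 x^2 + ... takes the product form

    f_k^*(x) = a_0 \prod_{i=1}^{N_k} \left( 1 + A_i x \right)^{n_i}

where the control parameters A_i and n_i are fixed by re-expanding the product in powers of x and matching the coefficients a_n order by order - the accuracy-through-order conditions mentioned in the abstract. Any function already of this product form is then reproduced exactly, consistent with the exact-reproducibility claim above.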
Generalized Gaussian Error Calculus
Grabe, Michael
2010-01-01
For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
Medication errors: prescribing faults and prescription errors.
Velo, Giampaolo P; Minuz, Pietro
2009-06-01
1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.
Perturbative corrections for approximate inference in gaussian latent variable models
DEFF Research Database (Denmark)
Opper, Manfred; Paquet, Ulrich; Winther, Ole
2013-01-01
Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....
Analysis of the dynamical cluster approximation for the Hubbard model
Aryanpour, K.; Hettler, M. H.; Jarrell, M.
2002-01-01
We examine a central approximation of the recently introduced Dynamical Cluster Approximation (DCA) by example of the Hubbard model. By both analytical and numerical means we study non-compact and compact contributions to the thermodynamic potential. We show that approximating non-compact diagrams by their cluster analogs results in a larger systematic error as compared to the compact diagrams. Consequently, only the compact contributions should be taken from the cluster, whereas non-compact ...
The transferability of hydrological models under nonstationary climatic conditions
Directory of Open Access Journals (Sweden)
C. Z. Li
2012-04-01
Full Text Available This paper investigates issues involved in calibrating hydrological models against observed data when the aim of the modelling is to predict future runoff under different climatic conditions. To achieve this objective, we tested two hydrological models, DWBM and SIMHYD, using data from 30 unimpaired catchments in Australia which had at least 60 yr of daily precipitation, potential evapotranspiration (PET), and streamflow data. Nash-Sutcliffe efficiency (NSE), modified index of agreement (d_{1}) and water balance error (WBE) were used as performance criteria. We used a differential split-sample test to split up the data into 120 sub-periods and 4 different climatic sub-periods in order to assess how well the calibrated model could be transferred to different periods. For each catchment, the models were calibrated for one sub-period and validated on the other three. Monte Carlo simulation was used to explore parameter stability compared to historic climatic variability. The chi-square test was used to measure the relationship between the distribution of the parameters and hydroclimatic variability. The results showed that the performance of the two hydrological models differed and depended on the model calibration. We found that if a hydrological model is set up to simulate runoff for a wet climate scenario then it should be calibrated on a wet segment of the historic record, and similarly a dry segment should be used for a dry climate scenario. The Monte Carlo simulation provides an effective and pragmatic approach to explore uncertainty and equifinality in hydrological model parameters. Some parameters of the hydrological models are shown to be significantly more sensitive to the choice of calibration periods. Our findings support the idea that when using conceptual hydrological models to assess future climate change impacts, a differential split-sample test and Monte Carlo simulation should be used to quantify uncertainties due to
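The two main skill scores named above are simple to state. A minimal sketch, with the water balance error given in one common percentage form that may differ from the paper's exact definition, and with illustrative flow values:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    model is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def wbe(obs, sim):
    """Water balance error as a percentage of the observed volume
    (one common convention; sign indicates over/under-estimation)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])   # illustrative flows
print(nse(obs, obs))                 # 1.0 (perfect)
print(nse(obs, np.full(5, 3.0)))     # 0.0 (mean benchmark)
print(wbe(obs, 1.1 * obs))           # +10% volume bias
```

Scores like these are computed separately on each calibration and validation sub-period in a differential split-sample test, which is how transferability across climatic conditions is quantified.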
Radio-Oxidation in Polyolefins: Non-Stationary Kinetic Conditions
International Nuclear Information System (INIS)
Dely, N.
2006-01-01
In the last fifty years, many authors have been interested in the radio-oxidation processes occurring in polymers. Polymer degradation under ionising radiation in the presence of dioxygen is well described by radical chemistry. The radio-oxidation process occurs in three steps: the first is the production of radicals P° by interaction between the polymer and the ionising radiation; the radicals P° then react spontaneously with the O2 dissolved in the polymer, giving a peroxy radical POO° which attacks the polymer, forming a hydroperoxide POOH and a new radical P° (propagation). The third step corresponds to termination, i.e. bimolecular reactions between radicals. It is generally assumed that the stationary state is reached rapidly and consequently that the oxidation induced during the build-up period of the radical concentration can be neglected. However, to the best of our knowledge, the temporal evolution of the radical concentrations before the steady-state regime is reached has never been studied in detail. We recently performed a complete study of oxygen consumption under electron irradiation for an EPDM elastomer. An analysis as a function of dose rate and oxygen pressure, assuming steady-state conditions, allowed all the kinetic constants to be extracted. Starting from these experimental data, we calculated the build-up of the radical concentration by solving the differential equations numerically with the help of the Minichem code. We conclude that the oxidation induced during the build-up period is in fact negligible. In this paper we show that [P°] can present a quasi-stationary plateau before reaching its stationary level. Consequently, the full radical time evolution is essentially determined by two characteristic times, for reaching the quasi-stationary and stationary levels, and three concentrations: [P°] and [POO°] at the stationary level and [P°] at the quasi-stationary plateau. We show that realistic approximations can
International Conference Approximation Theory XV
Schumaker, Larry
2017-01-01
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type. The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, a...
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
Energy Technology Data Exchange (ETDEWEB)
Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))
1990-01-01
The level of field errors in an FEL is an important determinant of its performance. We have computed the 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is the FELEX free-electron laser code, which now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast- and slow-scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, and displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of performance versus error level for cases with multiple seeds illustrates the variations attributable to the stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and these may be ameliorated by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.
Forms of Approximate Radiation Transport
Brunner, G
2002-01-01
Photon radiation transport is described by the Boltzmann equation. Because this equation is difficult to solve, many different approximate forms have been implemented in computer codes. Several of the most common approximations are reviewed, and test problems illustrate the characteristics of each of the approximations. This document is designed as a tutorial so that code users can make an educated choice about which form of approximate radiation transport to use for their particular simulation.
Approximation by planar elastic curves
DEFF Research Database (Denmark)
Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge
2016-01-01
We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.
Prescription Errors in Psychiatry
African Journals Online (AJOL)
Arun Kumar Agnihotri
The role of clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited ... participation in ward rounds and adverse drug.
The optimal XFEM approximation for fracture analysis
International Nuclear Information System (INIS)
Jiang Shouyan; Du Chengbin; Ying Zongquan
2010-01-01
The extended finite element method (XFEM) provides an effective tool for analyzing fracture mechanics problems. An XFEM approximation consists of standard finite elements, used in the major part of the domain, and enriched elements in the enriched sub-domain for capturing special solution properties such as discontinuities and singularities. However, two issues in the standard XFEM deserve special attention: efficient numerical integration methods and an appropriate construction of the blending elements. In this paper, an optimal XFEM approximation is proposed to overcome these disadvantages of the standard XFEM. Modified enrichment functions are presented that can be reproduced exactly everywhere in the domain. A corresponding FORTRAN program is developed for fracture analysis, and a classic problem of fracture mechanics is used to benchmark it. The results indicate that the optimal XFEM can alleviate the errors and improve numerical precision.
Exact constants in approximation theory
Korneichuk, N
1991-01-01
This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base
International Conference Approximation Theory XIV
Schumaker, Larry
2014-01-01
This volume developed from papers presented at the international conference Approximation Theory XIV, held April 7–10, 2013 in San Antonio, Texas. The proceedings contain surveys by invited speakers, covering topics such as splines on non-tensor-product meshes, Wachspress and mean value coordinates, curvelets and shearlets, barycentric interpolation, and polynomial approximation on spheres and balls. Other contributed papers address a variety of current topics in approximation theory, including eigenvalue sequences of positive integral operators, image registration, and support vector machines. This book will be of interest to mathematicians, engineers, and computer scientists working in approximation theory, computer-aided geometric design, numerical analysis, and related approximation areas.
Non-stationary discharge patterns in motor cortex under subthalamic nucleus deep brain stimulation.
Santaniello, Sabato; Montgomery, Erwin B; Gale, John T; Sarma, Sridevi V
2012-01-01
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) directly modulates the basal ganglia (BG), but how such stimulation impacts the cortex upstream is largely unknown. There is evidence of cortical activation in 6-hydroxydopamine (6-OHDA)-lesioned rodents and facilitation of motor evoked potentials in Parkinson's disease (PD) patients, but the impact of the DBS settings on the cortical activity in normal vs. Parkinsonian conditions is still debated. We use point process models to analyze non-stationary activation patterns and inter-neuronal dependencies in the motor and sensory cortices of two non-human primates during STN DBS. These features are enhanced after treatment with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), which causes a consistent PD-like motor impairment, while high-frequency (HF) DBS (i.e., ≥100 Hz) strongly reduces the short-term patterns (period: 3-7 ms) both before and after MPTP treatment, and elicits a short-latency post-stimulus activation. Low-frequency DBS (i.e., ≤50 Hz), instead, has negligible effects on the non-stationary features. Finally, by using tools from information theory [i.e., the receiver operating characteristic (ROC) curve and information rate (IR)], we show that the predictive power of these models is dependent on the DBS settings, i.e., the probability of spiking of the cortical neurons (which is captured by the point process models) is significantly conditioned on the timely delivery of the DBS input. This dependency increases with the DBS frequency and is significantly larger for high- vs. low-frequency DBS. Overall, the selective suppression of non-stationary features and the increased modulation of the spike probability suggest that HF STN DBS enhances the neuronal activation in motor and sensory cortices, presumably because of reinforcement mechanisms, which perhaps involve the overlap between feedback antidromic and feed-forward orthodromic responses along the BG-thalamo-cortical loop.
Valenza, Gaetano; Faes, Luca; Citi, Luca; Orini, Michele; Barbieri, Riccardo
2018-05-01
Measures of transfer entropy (TE) quantify the direction and strength of coupling between two complex systems. Standard approaches assume stationarity of the observations, and therefore are unable to track time-varying changes in nonlinear information transfer with high temporal resolution. In this study, we aim to define and validate novel instantaneous measures of TE to provide an improved assessment of complex nonstationary cardiorespiratory interactions. We here propose a novel instantaneous point-process TE (ipTE) and validate its assessment as applied to cardiovascular and cardiorespiratory dynamics. In particular, heartbeat and respiratory dynamics are characterized through discrete time series, and modeled with probability density functions predicting the time of the next physiological event as a function of the past history. Likewise, nonstationary interactions between heartbeat and blood pressure dynamics are characterized as well. Furthermore, we propose a new measure of information transfer, the instantaneous point-process information transfer (ipInfTr), which is directly derived from point-process-based definitions of the Kolmogorov-Smirnov distance. Analysis on synthetic data, as well as on experimental data gathered from healthy subjects undergoing postural changes confirms that ipTE, as well as ipInfTr measures are able to dynamically track changes in physiological systems coupling. This novel approach opens new avenues in the study of hidden, transient, nonstationary physiological states involving multivariate autonomic dynamics in cardiovascular health and disease. The proposed method can also be tailored for the study of complex multisystem physiology (e.g., brain-heart or, more in general, brain-body interactions).
Reduction of Non-stationary Noise using a Non-negative Latent Variable Decomposition
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Larsen, Jan
2008-01-01
We present a method for suppression of non-stationary noise in single-channel recordings of speech. The method is based on a non-negative latent variable decomposition model for the speech and noise signals, learned directly from a noisy mixture. In non-speech regions an overcomplete basis is learned for the noise, which is then used to jointly estimate the speech and the noise from the mixture. We compare the method to the classical spectral subtraction approach, where the noise spectrum is estimated as the average over non-speech frames. The proposed method significantly outperforms spectral subtraction.
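The classical spectral-subtraction baseline used for comparison above is straightforward to sketch. Note that this is the reference method, not the authors' non-negative decomposition; the frame and hop sizes and the demo signal are arbitrary choices.

```python
import numpy as np

def spectral_subtraction(noisy, noise_region, frame=256, hop=128):
    """Classic spectral subtraction: estimate the noise magnitude spectrum
    as the average over known non-speech frames, subtract it from each
    frame of the mixture, and resynthesize with the noisy phase."""
    win = np.hanning(frame)
    noise_mag = np.mean([np.abs(np.fft.rfft(noise_region[s:s + frame]*win))
                         for s in range(0, len(noise_region) - frame, hop)],
                        axis=0)
    out = np.zeros_like(noisy)
    for s in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[s:s + frame]*win)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)  # floor at zero
        out[s:s + frame] += np.fft.irfft(mag*np.exp(1j*np.angle(spec)), frame)
    return out

# demo: a tone buried in white noise, with a separate noise-only recording
rng = np.random.default_rng(0)
n = 8192
clean = np.sin(2*np.pi*0.05*np.arange(n))
noisy = clean + 0.5*rng.normal(size=n)
denoised = spectral_subtraction(noisy, 0.5*rng.normal(size=4096))
```

The Hann window at 50% overlap approximately satisfies the overlap-add condition, so the unmodified signal is reconstructed almost exactly and only the subtracted noise energy is removed.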
Energy Technology Data Exchange (ETDEWEB)
Lan, X.G. [Southwest Jiaotong University, Quantum Optoelectronics Laboratory, Chengdu (China); China West Normal University, Institute of Theoretical Physics, Nanchong (China); Jiang, Q.Q. [China West Normal University, Institute of Theoretical Physics, Nanchong (China); Wei, L.F. [Southwest Jiaotong University, Quantum Optoelectronics Laboratory, Chengdu (China); Sun Yat-Sen University, State Key Laboratory of Optoelectronic Materials and Technologies, School of Physics and Engineering, Guangzhou (China)
2012-04-15
We apply the Damour-Ruffini-Sannan method to study the Hawking radiation of scalar and Dirac particles in non-stationary Kerr black holes under different tortoise coordinate transformations. We find that all the relevant Hawking radiation spectra are still blackbody spectra, while the Hawking temperatures depend strongly on the tortoise coordinate transformation used. The properties of these dependences are discussed analytically and numerically. Our results imply that the proper selection of tortoise coordinate transformations is important in studies of Hawking radiation, and that the correct selection should ultimately be determined by future experimental observations. (orig.)
Heat transfer and hydrodynamics of nonstationary dispersed-film flow in complex shape channels
International Nuclear Information System (INIS)
Nigmatulin, B.I.; Klebanov, L.A.; Kroshilin, A.E.; Kroshilin, V.E.
1980-01-01
A mathematical model has been used to investigate the dispersed-film regime of liquid flow and the conditions for the onset of a heat transfer crisis. One-dimensional equations of motion are used for each component of the mixture. The model is used to describe the hydrodynamics and the heat transfer crisis in rod bundles and round tubes under stationary and nonstationary conditions. Accounting separately for the flow of the liquid film and the vapour-drop core permits the main regularities of dispersed-film flow to be described. Good agreement between calculated and experimental results is obtained [ru]
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function
Directory of Open Access Journals (Sweden)
Christofer Toumazou
2013-07-01
A novel noise-filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), a derivative of Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.
Goychuk, I
2001-08-01
Stochastic resonance in a simple model of information transfer is studied for sensory neurons and ensembles of ion channels. An exact expression for the information gain is obtained for the Poisson process with a signal-modulated spiking rate. This result allows one to generalize the conventional stochastic resonance (SR) problem (with periodic input signal) to arbitrary signals of finite duration (nonstationary SR). Moreover, in the case of a periodic signal, the rate of information gain is compared with the conventional signal-to-noise ratio. The paper establishes the general nonequivalence between both measures notwithstanding their apparent similarity in the limit of weak signals.
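A Poisson process with a signal-modulated spiking rate, the setting of the result above, can be simulated by standard thinning. The base rate, modulation depth, and signal form below are invented for illustration; the information-gain expression itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def thinned_poisson(rate, rate_max, T):
    """Simulate an inhomogeneous Poisson spike train on [0, T] by thinning
    a homogeneous process of intensity rate_max (rate(t) <= rate_max)."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0/rate_max)       # candidate event
        if t > T:
            return np.array(spikes)
        if rng.random() < rate(t)/rate_max:      # accept with prob rate/rate_max
            spikes.append(t)

# spiking rate weakly modulated by a periodic input signal
r0, eps, f = 20.0, 0.5, 2.0
rate = lambda t: r0*(1.0 + eps*np.sin(2*np.pi*f*t))
spikes = thinned_poisson(rate, r0*(1.0 + eps), T=100.0)
```

Since the sinusoid averages to zero over the observation window, the expected spike count is simply the base rate times the duration.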
Non-Stationary Modelling and Simulation of Near-Source Earthquake Ground Motion
DEFF Research Database (Denmark)
Skjærbæk, P. S.; Kirkegaard, Poul Henning; Fouskitakis, G. N.
1997-01-01
This paper is concerned with modelling and simulation of near-source earthquake ground motion. Recent studies have revealed that these motions show heavy non-stationary behaviour, with very low frequencies dominating parts of the earthquake sequence. Modelling and simulation of this behaviour ... by an epicentral distance of 16 km and measured during the 1979 Imperial Valley earthquake in California (U.S.A.). The results of the study indicate that while all three approaches can successfully predict near-source ground motions, the Neural Network based one gives somewhat poorer simulation results.
Kharkov, N. S.
2017-11-01
Results are presented of numerical modeling of the coupled nonstationary heat and mass transfer problem under conditions of convective flow in the facade system of a three-layer concrete panel, for two different constructions (with and without ventilation channels). The positive effect of ventilation channels on the energy and humidity regime over a period of 12 months is shown. A new method is used that replaces a solid zone (which requires specifying the porosity and structure of the material, complicating convergence of the solution) with a quasi-solid zone in the form of a multicomponent mixture (with restrictions on convection and mass fractions).
Distributed Nonstationary Heat Model of Two-Channel Solar Air Heater
International Nuclear Information System (INIS)
Klychev, Sh. I.; Bakhramov, S. A.; Ismanzhanov, A. I.; Tashiev, N.N.
2011-01-01
An algorithm for a distributed nonstationary heat model of a solar air heater (SAH) with two operating channels is presented. The model makes it possible to determine how the coolant temperature changes with time along the solar air heater channel by considering its main thermal and ambient parameters, as well as variations in efficiency. Examples of calculations are presented. It is shown that the time within which the daily-mean efficiency of the solar air heater becomes stable is significantly longer than the time within which the coolant temperature reaches stable values. The model can also be used to investigate the performance of solar water-heating collectors. (authors)
AUTOMATIC CONTROL OF PARAMETERS OF A NON-STATIONARY OBJECT WITH CROSS LINKS
Directory of Open Access Journals (Sweden)
A. Pavlov
2018-04-01
Many objects of automatic control are non-stationary: their parameters change over time, so the controller settings must be readjusted periodically. In practice this is done rarely, and regulators operate for long periods with settings that are no longer optimal; the consequence is the low quality of many industrial control systems. One solution to this problem is the use of robust controllers. Automatic control systems (ACS) with traditional PI and PID controllers have a very limited range of normal operating modes, because parametric disturbances arise from changes in the characteristics of the automated unit and in the load on it. The situation is different when artificial neural network controllers are used in the architecture. It is known that the adaptation procedure is often used when training a neural network, which makes it possible to greatly expand the range of normal operating modes of an ACS with neural automatic regulators in comparison with traditional linear regulators. It is also possible to significantly improve the quality of control (especially for a non-stationary multidimensional object), provided that, at the simulation stage of ACS design, the model of the controlled object includes an adequate simulation model of the actuator. Meeting all these requirements is especially relevant for electric actuators, and this article fully complies with them. This is what makes it possible to provide a guaranteed quality of control in non-stationary ACS with multidimensional objects and cross-links between control channels. The possibility is examined of using a known hybrid automatic regulator to stabilize the parameters of a two-channel non-stationary object with two cross-links.
Damage of first wall materials in fusion reactors under nonstationary thermal effects
International Nuclear Information System (INIS)
Maslaev, S.A.; Platonov, Yu.M.; Pimenov, V.N.
1991-01-01
The temperature distribution in the first wall of a fusion reactor was calculated for nonstationary thermal effects such as plasma disruption or runaway-electron flux, taking into account melting of the surface layer of the material. The thickness of the resulting damaged layer, in which thermal stresses exceed the tensile strength of the material, is estimated. Results were obtained for corrosion-resistant steel, aluminium and vanadium. Flow of the molten surface layer of the first-wall material is also calculated. (author)
Identification of the structure parameters using short-time non-stationary stochastic excitation
Jarczewska, Kamila; Koszela, Piotr; Śniady, PaweŁ; Korzec, Aleksandra
2011-07-01
In this paper, we propose an approach to identifying the flexural stiffness or eigenfrequencies of a linear structure using a non-stationary stochastic excitation process. The idea of the proposed approach lies within time-domain input-output methods. The method is based on transforming the dynamical problem into a static one by integrating the input and output signals. The output signal is the structural response, i.e. the structural displacements due to a short-time, irregular load of random type. Systems with single and multiple degrees of freedom, as well as continuous systems, are considered.
Analytic solution of boundary-value problems for nonstationary model kinetic equations
International Nuclear Information System (INIS)
Latyshev, A.V.; Yushkanov, A.A.
1993-01-01
A theory for constructing the solutions of boundary-value problems for non-stationary model kinetic equations is developed. After Laplace transformation of the equation, separation of variables is used, leading to a characteristic equation. Eigenfunctions are found in the space of generalized functions, and the eigenvalue spectrum is investigated. An existence and uniqueness theorem for the expansion of the Laplace transform of the solution with respect to the eigenfunctions is proved. The proof is constructive and gives explicit expressions for the expansion coefficients. An application to the Rayleigh problem is given, and the corresponding result of Cercignani is corrected.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. Parameter uncertainty is fully and systematically accounted for. The technique is generic and independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
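A toy version of the approach above — estimate a nonstationary parametric density by maximum likelihood, then extrapolate it forward — can be sketched with a Gaussian whose mean drifts linearly in time. The drift model and all numbers are invented, and the paper's tipping-point diagnostics are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(200.0)
x = 0.05*t + rng.normal(0.0, 1.0, t.size)   # synthetic drifting series

def negloglik(p):
    """Negative log-likelihood of N(a + b*t, exp(logs)^2), up to a constant."""
    a, b, logs = p
    z = (x - (a + b*t))/np.exp(logs)
    return 0.5*np.dot(z, z) + t.size*logs

res = minimize(negloglik, x0=[0.0, 0.0, 0.0])
a, b, sigma = res.x[0], res.x[1], np.exp(res.x[2])

# extrapolated future density at t = 300 is N(mu_forecast, sigma^2)
mu_forecast = a + b*300.0
```

In the paper's full scheme the extrapolated density would also carry the parameter uncertainty (e.g. via the likelihood's curvature) rather than a point estimate alone.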
The role of initial values in nonstationary fractional time series models
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Morten Ørregaard
We consider the nonstationary fractional model $\Delta^{d}X_{t}=\varepsilon_{t}$ with $\varepsilon_{t}$ i.i.d.$(0,\sigma^{2})$ and $d>1/2$. We derive an analytical expression for the main term of the asymptotic bias of the maximum likelihood estimator of $d$ conditional on initial values, and we discuss the role of the initial values for the bias. The results are partially extended to other fractional models, and three different applications of the theoretical results are given.
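A sample path of the model $\Delta^{d}X_{t}=\varepsilon_{t}$ with zero pre-sample (initial) values can be generated by truncated fractional integration, i.e. convolving the innovations with the expansion coefficients of $(1-L)^{-d}$; a minimal sketch:

```python
import numpy as np

def frac_integrate(eps, d):
    """X_t = Delta^{-d} eps_t with pre-sample values set to zero, using the
    recursion pi_k = pi_{k-1} * (k - 1 + d) / k for the coefficients
    of (1 - L)^{-d}."""
    n = eps.size
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k-1]*(k - 1 + d)/k
    return np.convolve(eps, pi)[:n]

rng = np.random.default_rng(2)
x = frac_integrate(rng.normal(size=500), d=0.8)   # nonstationary: d > 1/2
```

As sanity checks, $d=1$ reduces the operation to a cumulative sum (a random walk) and $d=0$ to the identity, which matches the coefficient recursion.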
Non-stationary ionization in the low ionosphere by gravitational wave action
International Nuclear Information System (INIS)
Nikitin, M.A.; Kashchenko, N.M.
1977-01-01
Non-stationary effects in the lower ionosphere caused by gravitational waves are analyzed. Time dependences are obtained for the extremum electron concentrations, which describe the dynamics of inhomogeneous layer formation from an initially homogeneous distribution under the action of gravitational waves. Diffusion of the plasma and its complex composition are not taken into account. The problem is solved for the two particular cases of low- and high-frequency gravitational waves acting on the ionosphere. Only in the former case does the electron concentration in the lower ionosphere deviate considerably from equilibrium.
Time-frequency representation of a highly nonstationary signal via the modified Wigner distribution
Zoladz, T. F.; Jones, J. H.; Jong, J.
1992-01-01
A new signal analysis technique called the modified Wigner distribution (MWD) is presented. The new signal processing tool has been very successful in determining time-frequency representations of highly non-stationary multicomponent signals, in both simulations and trials involving actual Space Shuttle Main Engine (SSME) high-frequency data. The MWD departs from the classic Wigner distribution (WD) in that it effectively eliminates the cross coupling among positive-frequency components in a multicomponent signal. This attribute of the MWD, which prevents the generation of 'phantom' spectral peaks, will undoubtedly increase the utility of the WD for real-world signal analysis applications, which more often than not involve multicomponent signals.
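For orientation, the classic discrete (pseudo-)Wigner distribution that the MWD modifies can be sketched as below. The MWD's cross-term suppression itself is not reproduced, and the factor-of-two frequency axis inherent to the discrete WD's double-lag kernel is left uncompensated.

```python
import numpy as np

def wigner(x):
    """Discrete pseudo-Wigner distribution of an analytic signal x.
    W[n, k] is the energy at time index n and frequency bin k; a single
    tone at bin k0 appears at bin 2*k0 (mod N) due to the double lag."""
    N = x.size
    W = np.zeros((N, N))
    for n in range(N):
        L = min(n, N - 1 - n)            # largest symmetric lag at time n
        m = np.arange(-L, L + 1)
        row = np.zeros(N, dtype=complex)
        row[m % N] = x[n + m]*np.conj(x[n - m])   # instantaneous correlation
        W[n] = np.fft.fft(row).real               # FFT over the lag variable
    return W

N = 64
n = np.arange(N)
x = np.exp(2j*np.pi*8*n/N)    # single complex tone at bin 8
W = wigner(x)
```

For a two-component signal the same code exhibits the cross-term midway between the components, which is precisely the 'phantom' peak the MWD is designed to remove.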
Detection of unusual events and trends in complex non-stationary data streams
International Nuclear Information System (INIS)
Charlton-Perez, C.; Perez, R.B.; Protopopescu, V.; Worley, B.A.
2011-01-01
The search for unusual events and trends hidden in multi-component, nonlinear, non-stationary, noisy signals is extremely important for diverse applications, ranging from power plant operation to homeland security. In the context of this work, we define an unusual event as a local signal disturbance and a trend as a continuous carrier of information added to and different from the underlying baseline dynamics. The goal of this paper is to investigate the feasibility of detecting hidden events inside intermittent signal data sets corrupted by high levels of noise, by using the Hilbert-Huang empirical mode decomposition method.
Kartush, J M
1996-11-01
Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.
Approximate Networking for Universal Internet Access
Directory of Open Access Journals (Sweden)
Junaid Qadir
2017-12-01
Despite the best efforts of networking researchers and practitioners, an ideal Internet experience is inaccessible to an overwhelming majority of people the world over, mainly due to the lack of cost-efficient ways of provisioning high-performance, global Internet. In this paper, we argue that instead of an exclusive focus on the utopian goal of universally accessible “ideal networking” (in which we have high throughput and quality of service as well as low latency and congestion), we should consider providing “approximate networking” through the adoption of context-appropriate trade-offs. In this regard, we propose to leverage advances in the emerging trend of “approximate computing”, which relaxes the bounds of precise/exact computing to provide new opportunities for improving the area, power, and performance efficiency of systems by orders of magnitude, by embracing output errors in resilient applications. Furthermore, we propose to extend the dimensions of approximate computing towards the various knobs available at the network layers. Approximate networking can be used to provision “Global Access to the Internet for All” (GAIA) in a pragmatically tiered fashion, in which different users around the world are provided a different context-appropriate (but still contextually functional) Internet experience.
Error estimation in plant growth analysis
Directory of Open Access Journals (Sweden)
Andrzej Gregorczyk
2014-01-01
A scheme is presented for calculating the errors in dry-matter values that occur when data are approximated with growth curves determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also given that describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth of oat and maize plants are given. A critical analysis of the obtained estimates is carried out. The usefulness of applying statistical methods together with error calculus in plant growth analysis is demonstrated.
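The analytical-method branch of such a scheme can be illustrated by a least-squares logistic fit whose covariance matrix yields the absolute parameter errors; the data and parameter values below are synthetic, and only GR/RGR are computed, not the full set of characteristics.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth curve W(t) = K / (1 + exp(-r (t - t0)))."""
    return K/(1.0 + np.exp(-r*(t - t0)))

# synthetic dry-matter observations (days vs. grams, invented values)
t = np.linspace(0.0, 60.0, 13)
rng = np.random.default_rng(3)
w = logistic(t, 20.0, 0.15, 30.0) + rng.normal(0.0, 0.2, t.size)

popt, pcov = curve_fit(logistic, t, w, p0=[15.0, 0.1, 25.0])
K, r, t0 = popt
perr = np.sqrt(np.diag(pcov))        # absolute errors of K, r, t0

# growth characteristics for the fitted logistic:
W = logistic(t, *popt)
gr = r*W*(1.0 - W/K)                 # GR  = dW/dt
rgr = r*(1.0 - W/K)                  # RGR = (1/W) dW/dt
```

Propagating `perr` through the GR and RGR formulae (e.g. by the delta method) then gives the absolute errors of the growth characteristics that the scheme describes.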
Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model
Narayanan, S.; Raju, G. V.
1990-09-01
An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model with variable velocity traverse over a rough road is investigated. The suspension is optimized with respect to ride comfort and road holding, using stochastic optimal control theory. The ground excitation is modelled as a spatial homogeneous random process, being the output of a linear shaping filter to white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with active suspension is compared with that of a passive system.
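One simplified reading of the optimization step above is a linear-quadratic regulator design for a single-degree-of-freedom model. The masses, stiffnesses, and weights below are invented, and the shaping filters for the road excitation (and the tyre rolling-contact filter) are omitted from this sketch.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative sdf quarter-car parameters: sprung mass, spring, damper
m, k, c = 250.0, 16000.0, 1000.0

A = np.array([[0.0, 1.0],
              [-k/m, -c/m]])         # states: [suspension deflection, velocity]
B = np.array([[0.0],
              [1.0/m]])              # active suspension force input
Q = np.diag([1.0e4, 1.0])           # penalize deflection (road holding) ...
R = np.array([[1.0e-4]])            # ... against control effort (comfort trade-off)

# solve the continuous-time algebraic Riccati equation for the LQR gain
P = solve_continuous_are(A, B, Q, R)
K_gain = np.linalg.solve(R, B.T @ P)   # optimal state feedback u = -K_gain @ x
A_cl = A - B @ K_gain                  # closed-loop dynamics
```

In the full problem the state vector is augmented with the shaping-filter states so that the non-stationary road input enters as white noise, but the Riccati-based gain computation is the same.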
Asymptotic Theory for the QMLE in GARCH-X Models with Stationary and Non-Stationary Covariates
DEFF Research Database (Denmark)
Han, Heejoon; Kristensen, Dennis
as captured by its long-memory parameter dx; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLEs of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples, independently of the degree of persistence. This implies that standard inferential tools, such as t-statistics, do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, which is akin to the results of Jensen and Rahbek (2004, Econometric...
Kozitskiy, Sergey
2018-05-01
Numerical simulation of nonstationary dissipative structures in 3D double-diffusive convection has been performed by using the previously derived system of complex Ginzburg-Landau type amplitude equations, valid in a neighborhood of Hopf bifurcation points. Simulation has shown that the state of spatiotemporal chaos develops in the system. It has the form of nonstationary structures that depend on the parameters of the system. The shape of structures does not depend on the initial conditions, and a limited number of spectral components participate in their formation.
Some results in Diophantine approximation
DEFF Research Database (Denmark)
Pedersen, Steffen Højris
This thesis consists of three papers in Diophantine approximation, a subbranch of number theory. Preceding these papers is an introduction to various aspects of Diophantine approximation and formal Laurent series over Fq, and a summary of each of the three papers. The introduction presents the basic concepts on which the papers build; among other things, it introduces metric Diophantine approximation, Mahler's approach to algebraic approximation, the Hausdorff measure, and properties of the formal Laurent series over Fq. The introduction ends with a discussion of Mahler's problem when considered...
Non-stationary and relaxation phenomena in cavity-assisted quantum memories
Veselkova, N. G.; Sokolov, I. V.
2017-12-01
We investigate the non-stationary and relaxation phenomena in cavity-assisted quantum memories for light. As a storage medium we consider an ensemble of cold atoms with a standard Lambda scheme of working levels. Some theoretical aspects of the problem have been treated previously by many authors, and recent experiments stimulate deeper insight into the ultimate abilities and limitations of the device. Since quantum memories can be used not only for the storage of quantum information, but also for substantial manipulation of ensembles of quantum states, the speed of such manipulation, and hence the ability to write and retrieve signals of relatively short duration, becomes important. In our research we do not apply the so-called bad-cavity limit, and we consider memory operation for signals whose duration is not much larger than the cavity field lifetime, accounting also for the finite lifetime of the atomic coherence. We present an effective approach that makes it possible to find the non-stationary amplitude and phase behavior of the strong classical control field that matches the desired time profile of both the envelope and the phase of the retrieved quantized signal. The phase properties of the retrieved quantized signals are important for the detection and manipulation of squeezing, entanglement, etc., by means of optical mixing and homodyning.
Quantum Radiation Properties of Dirac Particles in General Nonstationary Black Holes
Directory of Open Access Journals (Sweden)
Jia-Chen Hua
2014-01-01
Full Text Available Quantum radiation properties of Dirac particles in general nonstationary black holes are investigated by using the method of generalized tortoise coordinate transformation and by considering simultaneously the asymptotic behaviors of the first-order and second-order forms of the Dirac equation near the event horizon. It is shown that the temperature and the shape of the event horizon of this kind of black hole depend on both time and angle. Further, we give a general expression for the new extra coupling effect in the thermal radiation spectrum of Dirac particles, which is absent from the thermal radiation spectrum of scalar particles. We also reveal a previously overlooked relationship between thermal and nonthermal radiation in the case of scalar particles: the chemical potential in the thermal radiation spectrum is equal to the highest energy of the negative-energy states of scalar particles in nonthermal radiation for general nonstationary black holes.
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
International Nuclear Information System (INIS)
Li, Dong; Svensson, J.; Thomsen, H.; Werner, A.; Wolf, R.; Medina, F.
2013-01-01
In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
Bayesian soft X-ray tomography using non-stationary Gaussian Processes
Li, Dong; Svensson, J.; Thomsen, H.; Medina, F.; Werner, A.; Wolf, R.
2013-08-01
In this study, a Bayesian based non-stationary Gaussian Process (GP) method for the inference of soft X-ray emissivity distribution along with its associated uncertainties has been developed. For the investigation of equilibrium conditions and fast magnetohydrodynamic behaviors in nuclear fusion plasmas, it is important to infer, especially in the plasma center, spatially resolved soft X-ray profiles from a limited number of noisy line-integral measurements. For this ill-posed inversion problem, Bayesian probability theory can provide a posterior probability distribution over all possible solutions under given model assumptions. Specifically, the use of a non-stationary GP to model the emission allows the model to adapt to the varying length scales of the underlying diffusion process. In contrast to conventional methods, the prior regularization is realized in probabilistic form, which enhances the capability of uncertainty analysis; as a consequence, scientists concerned about the reliability of their results will benefit from it. Under the assumption of normally distributed noise, the posterior distribution evaluated at a discrete number of points becomes a multivariate normal distribution whose mean and covariance are analytically available, making inversions and the calculation of uncertainty fast. Additionally, the hyper-parameters embedded in the model assumption can be optimized through a Bayesian Occam's Razor formalism and thereby automatically adjust the model complexity. This method is shown to produce convincing reconstructions and good agreement with independently calculated results from the Maximum Entropy and Equilibrium-Based Iterative Tomography Algorithm methods.
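The analytically available posterior mentioned in this abstract can be sketched for a toy linear-Gaussian tomography problem. Everything below is a simplification invented for illustration: a 1D grid, random "chord" geometry, and a stationary squared-exponential prior (the paper's method uses a non-stationary covariance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "emissivity" profile on n grid points, seen through m line integrals
n, m, sigma = 50, 12, 0.05
x = np.linspace(0.0, 1.0, n)

# Squared-exponential GP prior; the length scale ell is an assumed value
ell = 0.15
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

# Each measurement row integrates the profile over a random segment ("chord")
A = np.zeros((m, n))
for i in range(m):
    a, b = sorted(rng.integers(0, n, size=2))
    A[i, a:b + 1] = 1.0 / n

truth = np.exp(-0.5 * (x - 0.5) ** 2 / 0.1 ** 2)      # centrally peaked profile
y = A @ truth + sigma * rng.normal(size=m)            # noisy line integrals

# Linear-Gaussian model: posterior mean and covariance in closed form
S = A @ K @ A.T + sigma ** 2 * np.eye(m)
mean = K @ A.T @ np.linalg.solve(S, y)
cov = K - K @ A.T @ np.linalg.solve(S, A @ K)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))       # pointwise uncertainty
```

The closed-form `mean` and `cov` are what make inversion and uncertainty quantification fast in the paper's framework; hyper-parameters such as `ell` and `sigma` would there be tuned by evidence maximization.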
Directory of Open Access Journals (Sweden)
A. K. Nekrasov
2006-03-01
Full Text Available A general nonlinear theory for low-frequency electromagnetic field generation due to high-frequency nonuniform and nonstationary electromagnetic radiations in cold, uniform, multicomponent, dusty magnetoplasmas is developed. This theory permits us to consider the nonlinear action of all waves that can exist in such plasmas. The equations are derived for the dust grain velocities in the low-frequency nonlinear electric fields arising due to the presence of electromagnetic cyclotron waves travelling along the background magnetic field. The dust grains are considered to be magnetized as well as unmagnetized. Different regimes for the dust particle dynamics, depending on the spatio-temporal change of the wave amplitudes and plasma parameters, are discussed. It is shown that induced nonlinear electric fields can have both an electrostatic and electromagnetic nature. Conditions for maximum dust acceleration are found. The results obtained may be useful for understanding the possible mechanisms of dust grain dynamics in astrophysical, cosmic and laboratory plasmas under the action of nonuniform and nonstationary electromagnetic waves.
Dynamics of Inhomogeneous Shell Systems Under Non-Stationary Loading (Survey)
Lugovoi, P. Z.; Meish, V. F.
2017-09-01
Experimental work on the dynamics of smooth and stiffened cylindrical shells in contact with a soil medium under various non-stationary loadings is reviewed. Results are presented for three-layer shells of revolution whose equations of motion are obtained within the framework of the hypotheses of the geometrically nonlinear Timoshenko theory. The numerical results for shells with a piecewise or discrete filler enable estimation of the influence of the geometrical and physical-mechanical parameters of the structures on their dynamics and reveal new mechanical effects. Based on the classical theory of shells and rods, the effect of the discrete arrangement of ribs and of the coefficients of a Winkler or Pasternak elastic foundation on the normal frequencies and modes of rectangular planar cylindrical and spherical shells is studied. The number and shape of the dispersion curves for longitudinal harmonic waves in a stiffened cylindrical shell are determined. The equations of vibration of ribbed shells of revolution on a Winkler or Pasternak elastic foundation are obtained using the geometrically nonlinear theory and the Timoshenko hypotheses. Applying the integral-interpolational method, numerical algorithms are developed and the corresponding non-stationary problems are solved. Special attention is paid to the statement and solution of coupled problems on the dynamic interaction of cylindrical or spherical shells with water-saturated soil media of different structure.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled using copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both the marginal probability distributions and the copula functions. A case study with 54 years of observed data illustrates the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probabilities of exceedance calculated from copula functions and from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities, in both copula functions and marginal distributions, on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
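The copula-based risk calculation described above can be illustrated with a minimal sketch. A Gumbel copula (one common choice for flood peak-volume dependence; the paper does not commit to it) links the two marginals, and the "OR" and "AND" joint exceedance probabilities follow from the copula value:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def or_exceedance(u, v, theta):
    """P(X > x or Y > y) = 1 - C(F(x), F(y)): the 'OR' design risk."""
    return 1.0 - gumbel_copula(u, v, theta)

def and_exceedance(u, v, theta):
    """P(X > x and Y > y) = 1 - u - v + C(u, v): the 'AND' design risk."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# In a non-stationary setting, the marginals u = F_t(x), v = G_t(y) and the
# dependence parameter theta would all drift with time t.
u = v = 0.99                      # marginal non-exceedance of peak and volume
for theta in (1.0, 2.0, 5.0):     # theta = 1 is independence
    print(round(or_exceedance(u, v, theta), 4),
          round(and_exceedance(u, v, theta), 4))
```

Stronger dependence (larger theta) raises the AND risk and lowers the OR risk, which is the kind of trade-off between copula-based and marginal-based exceedance probabilities the study reports.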
Directory of Open Access Journals (Sweden)
P. Ribereau
2008-12-01
Full Text Available Since the pioneering work of Landwehr et al. (1979), Hosking et al. (1985) and their collaborators, the Probability Weighted Moments (PWM) method has been a very popular, simple and efficient way to estimate the parameters of the Generalized Extreme Value (GEV) distribution when modeling the distribution of maxima (e.g., annual maxima of precipitation) in the Identically and Independently Distributed (IID) context. When the IID assumption is not satisfied, a flexible alternative, the Maximum Likelihood Estimation (MLE) approach, offers an elegant way to handle non-stationarities by letting the GEV parameters be time dependent. Despite its qualities, the MLE applied to the GEV distribution does not always provide accurate return level estimates, especially for small sample sizes or heavy tails. These drawbacks are particularly pronounced in some non-stationary situations. To reduce these negative effects, we propose to extend the PWM method to a more general framework that enables us to model temporal covariates and provide accurate GEV-based return levels. Theoretical properties of our estimators are discussed. Simulations with small and moderate sample sizes in a non-stationary context are analyzed, and two brief applications to annual maxima of CO_{2} and seasonal maxima of cumulated daily precipitation are presented.
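The classical (stationary, IID) PWM step that the paper generalizes can be sketched as follows, using the sample probability-weighted moments and Hosking et al.'s (1985) closed-form approximation for the GEV shape; the synthetic data and its parameter values are invented for the check:

```python
import math
import random

def pwm_gev(sample):
    """Hosking et al. (1985) PWM estimates (location xi, scale alpha, shape k)
    for the GEV parameterized as F(x) = exp(-(1 - k (x - xi)/alpha)**(1/k))."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n
    b2 = sum((i * (i - 1) / ((n - 1) * (n - 2))) * x[i] for i in range(n)) / n
    c = (2 * b1 - b0) / (3 * b2 - b0) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c           # Hosking's approximation for the shape
    g = math.gamma(1 + k)
    alpha = (2 * b1 - b0) * k / (g * (1 - 2 ** (-k)))
    xi = b0 + alpha * (g - 1) / k
    return xi, alpha, k

def gev_quantile(p, xi, alpha, k):
    """Return level with non-exceedance probability p (e.g. p = 1 - 1/T)."""
    return xi + alpha * (1 - (-math.log(p)) ** k) / k

# Check on synthetic IID maxima with known parameters (inverse-CDF sampling)
random.seed(1)
xi0, a0, k0 = 10.0, 2.0, 0.15
data = [gev_quantile(random.random(), xi0, a0, k0) for _ in range(20000)]
xi, alpha, k = pwm_gev(data)
```

The paper's contribution is to let such estimates accommodate temporal covariates, i.e. time-dependent GEV parameters, which this stationary sketch does not do.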
Identification of Non-Stationary Magnetic Field Sources Using the Matching Pursuit Method
Directory of Open Access Journals (Sweden)
Beata Palczynska
2017-05-01
Full Text Available Measurements of electromagnetic field emissions performed on board a vessel have shown that, in this specific environment, a high level of non-stationary magnetic fields (MFs) is observed. The adaptive time-frequency method can be used successfully to analyze this type of measured signal: it allows one to specify the time interval in which the individual frequency components of the signal occur. In this paper, a method for identifying non-stationary MF sources based on the matching pursuit (MP) algorithm is presented. It consists of the decomposition of an examined time waveform into a linear expansion of chirplet atoms and the analysis of the matrix of their parameters. The main feature of the proposed method is the modification of the chirplet matrix such that atoms whose normalized energies are lower than a certain threshold are rejected. On the time-frequency planes of the spectrograms, obtained separately for each remaining chirplet, the time-frequency structures appearing in the examined signal can be clearly identified. The choice of threshold determines the computing speed and precision of the performed analysis. The method was implemented in a virtual application and used for processing real data obtained from measurements of time-varying MF emissions on board a ship.
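The MP decomposition with energy thresholding described above can be sketched with plain Gabor atoms (a zero-chirp special case of the chirplets used in the paper); the signal, dictionary layout and threshold value are all invented for the illustration:

```python
import numpy as np

def gabor_atom(n, t0, f, s):
    """Unit-norm real Gabor atom: Gaussian envelope at t0, width s, frequency f."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - t0) / s) ** 2) * np.cos(2 * np.pi * f * t)
    return g / np.linalg.norm(g)

def matching_pursuit(x, atoms, n_iter=10, energy_threshold=0.01):
    """Greedy MP: repeatedly subtract the best-correlated atom.  Atoms whose
    normalized energy falls below the threshold are rejected, mirroring the
    thresholding step of the paper's method."""
    residual = x.copy()
    total = float(np.dot(x, x))
    picks = []                               # (atom index, coefficient)
    for _ in range(n_iter):
        coeffs = atoms @ residual            # correlations (atoms have unit norm)
        i = int(np.argmax(np.abs(coeffs)))
        if coeffs[i] ** 2 / total < energy_threshold:
            break                            # remaining atoms carry too little energy
        picks.append((i, float(coeffs[i])))
        residual = residual - coeffs[i] * atoms[i]
    return picks, residual

n = 256
dictionary = np.array([gabor_atom(n, t0, f, 16.0)
                       for t0 in range(0, n, 16)
                       for f in (0.05, 0.1, 0.2)])
signal = 3.0 * gabor_atom(n, 64, 0.1, 16.0) + 1.5 * gabor_atom(n, 192, 0.2, 16.0)
picks, residual = matching_pursuit(signal, dictionary, n_iter=5)
```

Each retained pick localizes one time-frequency structure; in the paper, a separate spectrogram is then drawn per retained chirplet to identify the MF sources.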
International Nuclear Information System (INIS)
Blinkov, V.N.
1993-01-01
This paper presents a mathematical model and a "fast" computer program for analyzing nonstationary thermohydrodynamic processes in distributed multi-element circuits containing a two-phase coolant. The author's approach is based on representing the distributed multi-element circuits with the two-phase coolant (such as the cooling circuits of the reactor of a nuclear power station) in the form of equivalent thermohydrodynamic chains composed of idealized elements with the intrinsic properties of the structural elements of real systems. The author has developed a nomenclature of such conceptual elements for the objects to be modelled; it encompasses the control volumes (with a single-phase or two-phase coolant, or a moving boiling/condensation boundary) and the branch lines (tubes and connections, classified by whether the inertia of the coolant is taken into account) for the hydrodynamic submodel, and the thermal components and lines for the thermal submodel. The mathematical models and the program based on them are intended for calculating slow thermohydrodynamic processes in multi-element coolant circuits of reactors and model test stands. The program facilitates calculation of the range of stable operation, detailed studies of stationary and nonstationary modes of operation, and forecasts of effective engineering measures to obtain stability with the aid of microcomputers.
Climate variability and nonstationary dynamics of Mycoplasma pneumoniae pneumonia in Japan.
Onozuka, Daisuke; Chaves, Luis Fernando
2014-01-01
A stationary association between climate factors and epidemics of Mycoplasma pneumoniae (M. pneumoniae) pneumonia has been widely assumed. However, it is unclear whether elements of the local climate that are relevant to M. pneumoniae pneumonia transmission have stationary signatures of climate factors on their dynamics over different time scales. We performed a cross-wavelet coherency analysis to assess the patterns of association between monthly M. pneumoniae cases in Fukuoka, Japan, from 2000 to 2012 and indices for the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). Monthly M. pneumoniae cases were strongly associated with the dynamics of both the IOD and ENSO for the 1-2-year periodic mode in 2005-2007 and 2010-2011. This association was non-stationary and appeared to have a major influence on the synchrony of M. pneumoniae epidemics. Our results call for the consideration of non-stationary, possibly non-linear, patterns of association between M. pneumoniae cases and climatic factors in early warning systems.
Climate variability and nonstationary dynamics of Mycoplasma pneumoniae pneumonia in Japan.
Directory of Open Access Journals (Sweden)
Daisuke Onozuka
Full Text Available BACKGROUND: A stationary association between climate factors and epidemics of Mycoplasma pneumoniae (M. pneumoniae) pneumonia has been widely assumed. However, it is unclear whether elements of the local climate that are relevant to M. pneumoniae pneumonia transmission have stationary signatures of climate factors on their dynamics over different time scales. METHODS: We performed a cross-wavelet coherency analysis to assess the patterns of association between monthly M. pneumoniae cases in Fukuoka, Japan, from 2000 to 2012 and indices for the Indian Ocean Dipole (IOD) and El Niño Southern Oscillation (ENSO). RESULTS: Monthly M. pneumoniae cases were strongly associated with the dynamics of both the IOD and ENSO for the 1-2-year periodic mode in 2005-2007 and 2010-2011. This association was non-stationary and appeared to have a major influence on the synchrony of M. pneumoniae epidemics. CONCLUSIONS: Our results call for the consideration of non-stationary, possibly non-linear, patterns of association between M. pneumoniae cases and climatic factors in early warning systems.
A review on prognostic techniques for non-stationary and non-linear rotating systems
Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph
2015-10-01
The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.
Probing Gamma-ray Emission of Geminga & Vela with Non-stationary Models
Directory of Open Access Journals (Sweden)
Yating Chai
2016-06-01
Full Text Available It is generally believed that the high-energy emissions from isolated pulsars are produced by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via a curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead, a sub-exponential cut-off is more appropriate. It is proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the Vela and Geminga pulsars into 19 (the off-pulse region of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to very fine phase bins.
Trend analysis using non-stationary time series clustering based on the finite element method
Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.
2014-05-01
In order to analyze the low-frequency variability of climate, it is useful to model climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it does not depend on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM-clustering method can be used to locate change points in the trend of temperature time series from in situ observations. The method is applied to temperature time series of North Carolina (NC), and the results represent region-specific climate variability despite the higher-frequency harmonics in the climatic time series. Next, we investigated the relationship between climatic indices and the clusters/trends detected with this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by the AMO and solar activity.
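The core idea of locating a change point between linear trends can be shown with a much simpler stand-in for the FEM-clustering machinery: an exhaustive search for the single breakpoint that minimizes the total least-squares error of two independent line fits. The synthetic series below is invented for the illustration:

```python
import numpy as np

def sse_linear(t, y):
    """Residual sum of squares of an ordinary least-squares line fit."""
    A = np.column_stack([t, np.ones_like(t)])
    _, res, _, _ = np.linalg.lstsq(A, y, rcond=None)
    return float(res[0]) if res.size else 0.0

def best_breakpoint(t, y, min_len=5):
    """Single change point minimizing the combined SSE of two linear trends
    (a toy stand-in for the paper's FEM-based clustering)."""
    best_b, best_sse = None, np.inf
    for b in range(min_len, len(t) - min_len):
        sse = sse_linear(t[:b], y[:b]) + sse_linear(t[b:], y[b:])
        if sse < best_sse:
            best_b, best_sse = b, sse
    return best_b, best_sse

# Synthetic temperature-like series whose trend reverses at t = 30
rng = np.random.default_rng(42)
t = np.arange(60.0)
y = np.where(t < 30, 0.02 * t, 0.6 - 0.03 * (t - 30)) + 0.01 * rng.normal(size=60)
b, sse = best_breakpoint(t, y)
```

FEM clustering generalizes this to many change points and multidimensional series without the exhaustive search, but the objective, piecewise trends with minimal residual, is the same.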
Around and about an application of the GAMLSS package to non-stationary flood frequency analysis
Debele, S. E.; Bogdanowicz, E.; Strupczewski, W. G.
2017-08-01
The non-stationarity of hydrologic processes due to climate change or human activities is challenging for researchers and practitioners. However, the practical requirements for taking non-stationarity into account as a support in decision-making procedures outpace the current development of both theory and software. Currently, the most popular and freely available software package that allows for non-stationary statistical analysis is the GAMLSS (generalized additive models for location, scale and shape) package. GAMLSS has been used in a variety of fields, and several papers recommend GAMLSS for hydrological problems; however, there are still important issues which have not previously been discussed, concerning mainly the applicability of GAMLSS not only for research and academic purposes, but also in design practice. In this paper, we present a summary of our experiences in applying GAMLSS to non-stationary flood frequency analysis, highlighting its advantages and pointing out weaknesses with regard to methodological and practical topics.
Online updating and uncertainty quantification using nonstationary output-only measurement
Yuen, Ka-Veng; Kuok, Sin-Chi
2016-01-01
Extended Kalman filter (EKF) is widely adopted for state estimation and parametric identification of dynamical systems. In this algorithm, it is required to specify the covariance matrices of the process noise and measurement noise based on prior knowledge. However, improper assignment of these noise covariance matrices leads to unreliable estimation and misleading uncertainty estimation on the system state and model parameters. Furthermore, it may induce diverging estimation. To resolve these problems, we propose a Bayesian probabilistic algorithm for online estimation of the noise parameters which are used to characterize the noise covariance matrices. There are three major appealing features of the proposed approach. First, it resolves the divergence problem in the conventional usage of EKF due to improper choice of the noise covariance matrices. Second, the proposed approach ensures the reliability of the uncertainty quantification. Finally, since the noise parameters are allowed to be time-varying, nonstationary process noise and/or measurement noise are explicitly taken into account. Examples using stationary/nonstationary response of linear/nonlinear time-varying dynamical systems are presented to demonstrate the efficacy of the proposed approach. Furthermore, comparison with the conventional usage of EKF will be provided to reveal the necessity of the proposed approach for reliable model updating and uncertainty quantification.
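The divergence problem described above stems from fixing the noise covariances a priori. A minimal scalar sketch of the remedy, re-estimating the measurement-noise variance online from the innovations, is given below; it uses simple exponential forgetting as a stand-in for the paper's Bayesian noise-parameter estimation, and all numeric values are assumptions:

```python
import random

def adaptive_kf(ys, q=1e-4, r0=1.0, forget=0.98):
    """Scalar Kalman filter for a random-walk state, with the measurement-noise
    variance R re-estimated online from the innovations (a simple stand-in for
    the Bayesian noise-parameter updating proposed in the paper)."""
    x, P, R = ys[0], 1.0, r0
    xs, Rs = [], []
    for y in ys[1:]:
        P = P + q                          # predict (state is a random walk)
        nu = y - x                         # innovation
        S = P + R                          # innovation variance
        K = P / S                          # Kalman gain
        x = x + K * nu                     # update state estimate
        P = (1.0 - K) * P                  # update state variance
        # exponential-forgetting update of R from the innovation statistics
        R = forget * R + (1.0 - forget) * max(nu * nu - P, 1e-8)
        xs.append(x)
        Rs.append(R)
    return xs, Rs

random.seed(0)
true_R = 0.25                              # actual measurement-noise variance
ys = [random.gauss(0.0, true_R ** 0.5) for _ in range(4000)]
xs, Rs = adaptive_kf(ys)                   # R drifts from 1.0 towards ~0.25
```

Starting from a badly mis-specified `r0 = 1.0`, the estimated `R` converges near the true value, which is the behavior that protects the filter's uncertainty quantification; the paper additionally allows time-varying (nonstationary) noise parameters.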
Self-adaptive change detection in streaming data with non-stationary distribution
Zhang, Xiangliang
2010-01-01
Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting changes in massive streaming data with a non-stationary distribution helps to flag anomalies, to clean noise, and to report new patterns. In this paper, we employ a novel approach for detecting changes in streaming data with the purpose of improving the quality of modeling the data streams. By observing outliers, this approach to change detection uses a weighted standard deviation to monitor the evolution of the distribution of data streams. A cumulative statistical test, the Page-Hinkley test, is employed to collect evidence of changes in distribution. The parameter used for reporting the changes is self-adaptively adjusted according to the distribution of the data streams, rather than set to a fixed empirical value. The self-adaptability of the novel approach enhances the effectiveness of modeling data streams by catching distribution changes in a timely manner. We validated the approach on an online clustering framework with the benchmark KDD Cup 1999 intrusion detection data set as well as with a real-world grid data set. The validation results demonstrate better performance in achieving higher accuracy and a lower percentage of outliers compared to other change-detection approaches. © 2010 Springer-Verlag.
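The Page-Hinkley test at the heart of the approach is short enough to sketch in full; the paper's contribution is making its detection parameter self-adaptive, whereas the `delta` and `threshold` values below are fixed and invented for the example:

```python
import random

def page_hinkley(stream, delta=0.05, threshold=2.0):
    """Page-Hinkley test for an upward shift in the running mean of a stream.
    delta is the tolerated drift, threshold the detection bar (here fixed;
    the paper adjusts it self-adaptively)."""
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t               # running mean
        cum += x - mean - delta              # cumulative deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > threshold:        # enough evidence accumulated
            return t                         # report detection time
    return None                              # no change detected

# A stream whose mean jumps from 0 to 1 at index 100
random.seed(7)
stream = [random.gauss(0.0, 0.1) for _ in range(100)] + \
         [random.gauss(1.0, 0.1) for _ in range(100)]
t_detect = page_hinkley(stream)              # fires shortly after the jump
```

The cumulative statistic only accumulates when observations persistently exceed the running mean by more than `delta`, so isolated outliers do not trigger a detection.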
The error in total error reduction.
Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R
2014-02-01
Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
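The TER/LER contrast in this abstract is easy to make concrete. Below, a Rescorla-Wagner-style rule (a standard TER instance) is compared with a local-error rule on a compound-conditioning schedule; the learning rate and trial counts are arbitrary choices for the illustration:

```python
def train_ter(trials, n_cues, lr=0.1):
    """Total-error reduction (Rescorla-Wagner style): each present cue's weight
    moves by the discrepancy between the outcome and the SUM over all cues."""
    w = [0.0] * n_cues
    for present, outcome in trials:
        error = outcome - sum(w[i] for i in present)   # shared (total) error
        for i in present:
            w[i] += lr * error
    return w

def train_ler(trials, n_cues, lr=0.1):
    """Local-error reduction: each cue learns from its OWN prediction error."""
    w = [0.0] * n_cues
    for present, outcome in trials:
        for i in present:
            w[i] += lr * (outcome - w[i])              # cue-specific error
    return w

# Compound conditioning: cues 0 and 1 always appear together with outcome 1
trials = [((0, 1), 1.0)] * 200
w_ter = train_ter(trials, 2)   # cues share the prediction: each → 0.5
w_ler = train_ler(trials, 2)   # each cue predicts alone: each → 1.0
```

The two rules diverge exactly where the paper's model comparison bites: under TER, compound-trained cues split the associative strength, while under LER each cue independently approaches the outcome.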
Approximative calculation of transient short-circuit currents in power-systems
Energy Technology Data Exchange (ETDEWEB)
Heuck, K; Rosenberger, R; Dettmann, K D; Kegel, R
1986-08-01
The paper shows that transient short-circuit currents for symmetrical and asymmetrical faults in power systems can be calculated approximately. For this purpose a simple equivalent network is derived whose approximation error is small. For the important maximum short-circuit current, error limits are given in comparison with VDE 0102.
Spherical Approximation on Unit Sphere
Directory of Open Access Journals (Sweden)
Eman Samir Bhaya
2018-01-01
Full Text Available In this paper we introduce a Jackson-type theorem for functions in L^p spaces on the sphere and study the best approximation of functions in spaces defined on the unit sphere. Our central problem is to describe the approximation behavior of functions in these spaces by the modulus of smoothness.
Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano
2013-01-01
Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results from the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
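The voter arrangement claimed above can be sketched functionally: as long as the approximate circuits err on disjoint input patterns, the majority always reproduces the reference. The 3-input AND reference and the particular error patterns below are invented for the illustration:

```python
from itertools import product

def majority_vote(bits):
    """Majority of an odd number of single-bit outputs."""
    return 1 if sum(bits) > len(bits) // 2 else 0

def voted_output(approx_circuits, inputs):
    """Evaluate each approximate circuit and take the majority of the outputs."""
    outputs = [circuit(inputs) for circuit in approx_circuits]
    return majority_vote(outputs)

# Reference circuit: 3-input AND.  Each approximation deviates from the
# reference on a DIFFERENT single input pattern, so on every input at most
# one of the three is wrong and the majority matches the reference.
reference = lambda x: int(all(x))
approx = [
    lambda x: 1 if x == (0, 0, 0) else reference(x),   # wrong on 000
    lambda x: 0 if x == (1, 1, 1) else reference(x),   # wrong on 111
    lambda x: 1 if x == (0, 1, 0) else reference(x),   # wrong on 010
]

ok = all(voted_output(approx, x) == reference(x) for x in product((0, 1), repeat=3))
print(ok)  # → True
```

The reliability gain comes precisely from this condition: the approximations may individually be cheaper or error-prone, provided their error sets never overlap on a majority of the copies.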
National Research Council Canada - National Science Library
Byrne, Michael D
2006-01-01
.... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...
International Nuclear Information System (INIS)
Wahlstroem, B.
1993-01-01
Human errors are a major contributor to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)
Efficient approximation of random fields for numerical applications
Harbrecht, Helmut; Peters, Michael; Siebenmorgen, Markus
2015-01-01
We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.
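The pivoted Cholesky decomposition with its trace-based a-posteriori error estimate is compact enough to sketch. The covariance kernel and point set below are illustrative assumptions, not taken from the paper; the trace of the remaining Schur-complement diagonal is the error estimate the abstract refers to.

```python
import math

def pivoted_cholesky(entry, n, tol):
    """Low-rank approximation A ~= L L^T of a symmetric positive semi-definite
    matrix given entrywise by entry(i, j).  Returns the columns of L together
    with the trace of the remaining Schur-complement diagonal after each step,
    which serves as the a-posteriori error estimate."""
    d = [entry(i, i) for i in range(n)]        # diagonal of the current remainder
    cols, trace_err = [], [sum(d)]
    while trace_err[-1] > tol and len(cols) < n:
        p = max(range(n), key=lambda i: d[i])  # pivot on the largest diagonal
        if d[p] <= 0.0:                        # guard against roundoff
            break
        lp = math.sqrt(d[p])
        col = [0.0] * n
        col[p] = lp
        for j in range(n):
            if j != p:
                col[j] = (entry(p, j) - sum(c[p] * c[j] for c in cols)) / lp
            d[j] -= col[j] ** 2
        cols.append(col)
        trace_err.append(sum(max(v, 0.0) for v in d))
    return cols, trace_err

# Illustrative random-field covariance: exponential kernel on 20 points in [0, 1].
pts = [i / 19 for i in range(20)]
kern = lambda i, j: math.exp(-abs(pts[i] - pts[j]))
cols, errs = pivoted_cholesky(kern, 20, 1e-8)
```

The trace estimate `errs` decreases monotonically, and the entrywise error of the rank-k approximation is bounded by it since the remainder stays positive semi-definite.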
Common approximations for density operators may lead to imaginary entropy
International Nuclear Information System (INIS)
Lendi, K.; Amaral Junior, M.R. do
1983-01-01
The meaning and validity of the usual second-order approximations for density operators are illustrated with the help of a simple, exactly soluble two-level model in which all relevant quantities can easily be controlled. This leads to exact upper-bound error estimates which help to select more precisely the permissible correlation times frequently introduced when stochastic potentials are present. A final consideration of information entropy clearly reveals the limitations of this kind of approximation procedure. (Author) [pt
Tau method approximation of the Hubbell rectangular source integral
International Nuclear Information System (INIS)
Kalla, S.L.; Khajah, H.G.
2000-01-01
The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫₀ᵇ (1/√(1+x²)) arctan(a/√(1+x²)) dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows
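For orientation, the integral itself is straightforward to evaluate by quadrature; the sketch below uses composite Simpson's rule rather than the paper's Chebyshev/Tau expansion, and serves only as a reference evaluation one could compare such expansions against.

```python
import math

def hubbell(a, b, n=2000):
    """Composite-Simpson evaluation of the Hubbell rectangular source integral
    I(a,b) = integral from 0 to b of arctan(a/sqrt(1+x^2)) / sqrt(1+x^2) dx.
    n must be even."""
    f = lambda x: math.atan(a / math.sqrt(1 + x * x)) / math.sqrt(1 + x * x)
    h = b / n
    s = f(0.0) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3
```

Since the integrand is bounded by arctan(a) and the interval has length b, I(a,b) < b·arctan(a), which gives a quick sanity check on the computed value.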
Efficient approximation of random fields for numerical applications
Harbrecht, Helmut
2015-01-07
We consider the rapid computation of separable expansions for the approximation of random fields. We compare approaches based on techniques from the approximation of non-local operators on the one hand and based on the pivoted Cholesky decomposition on the other hand. We provide an a-posteriori error estimate for the pivoted Cholesky decomposition in terms of the trace. Numerical examples validate and quantify the considered methods.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
Precise analytic approximations for the Bessel function J1 (x)
Maass, Fernando; Martin, Pablo
2018-03-01
Precise and straightforward analytic approximations for the Bessel function J1(x) have been found. Power series and asymptotic expansions have been used to determine the parameters of the approximation, which acts as a bridge between both expansions and is a combination of rational and trigonometric functions multiplied by fractional powers of x. Here, several improvements with respect to the so-called Multipoint Quasirational Approximation technique have been performed. Two procedures have been used to determine the parameters of the approximations. The maximum absolute errors are in both cases smaller than 0.01. The zeros of the approximation are also very precise, with an error of less than 0.04 per cent for the first one. A second approximation has also been determined using two more parameters, and in this way the accuracy has been increased to less than 0.001.
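The quasirational approximant itself is not reproduced in the abstract, but the quoted reference value for the first zero can be checked against the plain power series of J1, which converges quickly for moderate arguments:

```python
import math

def j1_series(x, terms=30):
    """Bessel J1 via its power series, adequate for |x| up to about 10:
    J1(x) = sum_k (-1)^k / (k! (k+1)!) * (x/2)^(2k+1)."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.factorial(k + 1)) * (x / 2) ** (2 * k + 1)
    return s

def first_zero(lo=3.0, hi=4.5):
    """Bisection for the first positive zero of J1, known to be ~3.8317."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if j1_series(lo) * j1_series(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

A 0.04 per cent error on the first zero, as quoted, corresponds to roughly ±0.0015 around 3.8317.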
Intensity-based hierarchical elastic registration using approximating splines.
Serifovic-Trbalic, Amira; Demirovic, Damir; Cattin, Philippe C
2014-01-01
We introduce a new hierarchical approach for elastic medical image registration using approximating splines. In order to obtain the dense deformation field, we employ Gaussian elastic body splines (GEBS) that incorporate anisotropic landmark errors and rotation information. Since the GEBS approach is based on a physical model in the form of analytical solutions of the Navier equation, it can cope very well with the local as well as global deformations present in the images by varying the standard deviation of the Gaussian forces. The proposed GEBS approximating model is integrated into the elastic hierarchical image registration framework, which decomposes a nonrigid registration problem into numerous local rigid transformations. The approximating GEBS registration scheme incorporates anisotropic landmark errors as well as rotation information. The anisotropic landmark localization uncertainties can be estimated directly from the image data, and in this case, they represent the minimal stochastic localization error, i.e., the Cramér-Rao bound. The rotation information of each landmark obtained from the hierarchical procedure is transposed in an additional angular landmark, doubling the number of landmarks in the GEBS model. The modified hierarchical registration using the approximating GEBS model is applied to register 161 image pairs from a digital mammogram database. The obtained results are very encouraging, and the proposed approach significantly improved all registrations, in terms of the mean-square error, relative to approximating TPS with rotation information. On artificially deformed breast images, the newly proposed method performed better than the state-of-the-art registration algorithm introduced by Rueckert et al. (IEEE Trans Med Imaging 18:712-721, 1999). The average error per breast tissue pixel was less than 2.23 pixels compared to 2.46 pixels for Rueckert's method. The proposed hierarchical elastic image registration approach incorporates the GEBS
The efficiency of Flory approximation
International Nuclear Information System (INIS)
Obukhov, S.P.
1984-01-01
The Flory approximation for the self-avoiding chain problem is compared with a conventional perturbation theory expansion. While in perturbation theory each term is averaged over the unperturbed set of configurations, the Flory approximation is equivalent to the perturbation theory with the averaging over the stretched set of configurations. This imposes restrictions on the integration domain in higher order terms and they can be treated self-consistently. The accuracy δν/ν of the Flory approximation for self-avoiding chain problems is estimated to be 2-5% for 1 < d < 4. (orig.)
Metcalfe, Janet
2017-01-01
Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…
International Nuclear Information System (INIS)
Pupko, V.Ya.
1978-01-01
The equation of nonstationary heat transfer caused by the appearance of a local pulse jump in the factor of heat transfer to a coolant is solved analytically for a cylindrical fuel element. The problem solution is generalized to a case of the periodically pulsating factor of heat transfer according to its value in an arbitrary point of the fuel element surface
Gray, A. B.
2017-12-01
Watersheds with sufficient monitoring data have been predominantly found to display nonstationary suspended sediment dynamics, whereby the relationship between suspended sediment concentration and discharge changes over time. Despite the importance of suspended sediment as a keystone of geophysical and biochemical processes, and as a primary mediator of water quality, stationary behavior remains largely assumed in the context of these applications. This study presents an investigation into the time dependent behavior of small mountainous rivers draining the coastal ranges of the western continental US over interannual to interdecadal time scales. Of the 250+ small coastal (drainage area systems. Temporal patterns of non-stationary behavior provided some evidence for spatial coherence, which may be related to synoptic hydro-meteorological patterns and regional scale changes in land use patterns. However, the results also highlight the complex, integrative nature of watershed scale fluvial suspended sediment dynamics. This underscores the need for in-depth, forensic approaches for initial processes identification, which require long term, high resolution monitoring efforts in order to adequately inform management. The societal implications of nonstationary sediment dynamics and their controls were further explored through the case of California, USA, where over 150 impairment listings have resulted in more than 50 sediment TMDLs, only 3 of which are flux based - none of which account for non-stationary behavior.
Directory of Open Access Journals (Sweden)
X. X. Cheng
2017-01-01
Full Text Available Wind effects on structures obtained by field measurements are often found to be nonstationary, but related researches shared by the wind-engineering community are still limited. In this paper, empirical mode decomposition (EMD) is applied to the nonstationary wind pressure time-history samples measured on an actual 167-meter high large cooling tower. It is found that the residue and some intrinsic mode functions (IMFs) of low frequencies produced by EMD are responsible for the samples' nonstationarity. Replacing the residue by the constant mean and subtracting the IMFs of low frequencies can help the nonstationary samples become stationary ones. A further step is taken to compare the loading characteristics extracted from the original nonstationary samples with those extracted from the processed stationary samples. Results indicate that nonstationarity effects on wind loads are notable in most cases. The passive wind tunnel simulation technique based on the assumption of stationarity is also examined, and it is found that the technique is basically conservative for use.
Demaria, E. M.; Goodrich, D. C.; Keefer, T.
2017-12-01
Observed sub-daily precipitation intensities from contrasting hydroclimatic environments in the USA are used to evaluate temporal trends and to develop Intensity-Duration Frequency (IDF) curves under stationary and nonstationary climatic conditions. Analyses are based on observations from two United States Department of Agriculture (USDA)-Agricultural Research Service (ARS) experimental watersheds located in a semi-arid and a temperate environment. We use an Annual Maximum Series (AMS) and a Partial Duration Series (PDS) approach to identify temporal trends in maximum intensities for durations ranging from 5- to 1440-minutes. A Bayesian approach with Monte Carlo techniques is used to incorporate the effect of non-stationary climatic assumptions in the IDF curves. The results show increasing trends in observed AMS sub-daily intensities in both watersheds whereas trends in the PDS observations are mostly positive in the semi-arid site and a mix of positive and negative in the temperate site. Stationary climate assumptions lead to much lower estimated sub-daily intensities than those under non-stationary assumptions with larger absolute differences found for shorter durations and smaller return periods. The risk of failure (R) of a hydraulic structure is increased for non-stationary effects over those of stationary effects, with absolute differences of 25% for a 100-year return period (T) and a project life (n) of 100 years. The study highlights the importance of considering non-stationarity, due to natural variability or to climate change, in storm design.
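The risk-of-failure figure quoted above follows from the standard stationary relation R = 1 − (1 − 1/T)^n for independent annual maxima; the paper's point is that nonstationarity pushes R beyond this value.

```python
def risk_of_failure(T, n):
    """Probability of at least one exceedance of the T-year event during an
    n-year project life, assuming stationary, independent annual maxima."""
    return 1.0 - (1.0 - 1.0 / T) ** n

# The abstract's example: T = 100 years, n = 100 years.
print(round(risk_of_failure(100, 100), 3))  # → 0.634
```

So even under stationarity, a structure designed to the 100-year event and operated for 100 years has a roughly 63% chance of seeing at least one exceedance; the nonstationary analysis in the paper raises this figure further.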
Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang
2018-06-01
The isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of the cell culture under an isotopic transient period. However, to the best of our knowledge, the issue of the structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, the local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only one single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in the work. Optimal measurements sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method. Copyright © 2018 Elsevier Inc. All rights reserved.
Action errors, error management, and learning in organizations.
Frese, Michael; Keith, Nina
2015-01-03
Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.
Approximate Implicitization Using Linear Algebra
Directory of Open Access Journals (Sweden)
Oliver J. D. Barrowclough
2012-01-01
Full Text Available We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition, and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We propose several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.
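A minimal sketch of least-squares approximate implicitization: sample a parametric curve and fit the coefficients of an implicit form. For brevity this solves the normal equations by Gaussian elimination instead of the singular value decomposition favoured in the paper, and the test curve (the unit circle, whose exact implicit form is x² + y² = 1) is an illustrative choice.

```python
import math

def solve(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Sample the parametric curve and fit the implicit conic
#   a·x² + b·xy + c·y² + d·x + e·y = 1
# by linear least squares via the normal equations.
ts = [2 * math.pi * i / 40 for i in range(40)]
rows = [[math.cos(t) ** 2, math.cos(t) * math.sin(t), math.sin(t) ** 2,
         math.cos(t), math.sin(t)] for t in ts]
M = [[sum(r[i] * r[j] for r in rows) for j in range(5)] for i in range(5)]
rhs = [sum(r[i] for r in rows) for i in range(5)]
a, b, c, d, e = solve(M, rhs)  # for the circle: expect (a,b,c,d,e) ≈ (1, 0, 1, 0, 0)
```

For noisy or genuinely approximate data the normal equations can be badly conditioned, which is exactly why the paper prefers SVD-based formulations.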
Rollout sampling approximate policy iteration
Dimitrakakis, C.; Lagoudakis, M.G.
2008-01-01
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
Approximate simulation of Hawkes processes
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
This article concerns a simulation algorithm for unmarked and marked Hawkes processes. The algorithm suffers from edge effects but is much faster than the perfect simulation algorithm introduced in our previous work. We derive various useful measures for the error committed when using the algorithm, and we discuss various empirical results for the algorithm compared with perfect simulations.
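One standard cluster-based construction that such an approximate algorithm can build on is sketched below for an exponential excitation kernel φ(t) = α·e^(−βt): immigrants arrive as a homogeneous Poisson process, each event spawns a Poisson number of offspring, and everything is truncated to [0, T]. Ignoring ancestors from before time 0 is one source of the edge effects the abstract mentions. All parameter values are made up for illustration.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, rng):
    """Approximate simulation of a Hawkes process with excitation kernel
    phi(t) = alpha * exp(-beta * t), via its cluster (branching)
    representation truncated to [0, T]."""
    def poisson(lam):
        # Knuth's multiplicative method; adequate for small means.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    # Immigrants: homogeneous Poisson process of rate mu on [0, T].
    generation, t = [], 0.0
    while True:
        t += rng.expovariate(mu)
        if t > T:
            break
        generation.append(t)

    events = []
    while generation:                      # breadth-first over offspring generations
        events.extend(generation)
        offspring = []
        for parent in generation:
            for _ in range(poisson(alpha / beta)):      # mean offspring = alpha/beta < 1
                child = parent + rng.expovariate(beta)  # Exp(beta) delay
                if child <= T:
                    offspring.append(child)
        generation = offspring
    return sorted(events)

events = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, T=50.0, rng=random.Random(1))
```

The branching ratio α/β must be below 1 for the process to be stable; here the stationary intensity is μ/(1 − α/β) = 2 events per unit time.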
Identification of QRS complex in non-stationary electrocardiogram of sick infants.
Kota, S; Swisher, C B; Al-Shargabi, T; Andescavage, N; du Plessis, A; Govindan, R B
2017-08-01
Due to the high-frequency of routine interventions in an intensive care setting, electrocardiogram (ECG) recordings from sick infants are highly non-stationary, with recurrent changes in the baseline, alterations in the morphology of the waveform, and attenuations of the signal strength. Current methods lack reliability in identifying QRS complexes (a marker of individual cardiac cycles) in the non-stationary ECG. In the current study we address this problem by proposing a novel approach to QRS complex identification. Our approach employs lowpass filtering, half-wave rectification, and the use of instantaneous Hilbert phase to identify QRS complexes in the ECG. We demonstrate the application of this method using ECG recordings from eight preterm infants undergoing intensive care, as well as from 18 normal adult volunteers available via a public database. We compared our approach to the commonly used approaches including Pan and Tompkins (PT), gqrs, wavedet, and wqrs for identifying QRS complexes and then compared each with manually identified QRS complexes. For preterm infants, a comparison between the QRS complexes identified by our approach and those identified through manual annotations yielded sensitivity and positive predictive values of 99% and 99.91%, respectively. The comparison metrics for each method are as follows: PT (sensitivity: 84.49%, positive predictive value: 99.88%), gqrs (85.25%, 99.49%), wavedet (95.24%, 99.86%), and wqrs (96.99%, 96.55%). Thus, the sensitivity values of the four methods previously described, are lower than the sensitivity of the method we propose; however, the positive predictive values of these other approaches is comparable to those of our method, with the exception of the wqrs approach, which yielded a slightly lower value. For adult ECG, our approach yielded a sensitivity of 99.78%, whereas PT yielded 99.79%. The positive predictive value was 99.42% for both our approach as well as for PT. We propose a novel method for
Flood frequency analysis of historical flood data under stationary and non-stationary modelling
Machado, M. J.; Botero, B. A.; López, J.; Francés, F.; Díez-Herrero, A.; Benito, G.
2015-06-01
Historical records are an important source of information on extreme and rare floods and fundamental to establish a reliable flood return frequency. The use of long historical records for flood frequency analysis brings in the question of flood stationarity, since climatic and land-use conditions can affect the relevance of past flooding as a predictor of future flooding. In this paper, a detailed 400 yr flood record from the Tagus River in Aranjuez (central Spain) was analysed under stationary and non-stationary flood frequency approaches, to assess their contribution within hazard studies. Historical flood records in Aranjuez were obtained from documents (Proceedings of the City Council, diaries, chronicles, memoirs, etc.), epigraphic marks, and indirect historical sources and reports. The water levels associated with different floods (derived from descriptions or epigraphic marks) were computed into discharge values using a one-dimensional hydraulic model. Secular variations in flood magnitude and frequency, found to respond to climate and environmental drivers, showed a good correlation between high values of historical flood discharges and a negative mode of the North Atlantic Oscillation (NAO) index. Over the systematic gauge record (1913-2008), an abrupt change on flood magnitude was produced in 1957 due to constructions of three major reservoirs in the Tagus headwaters (Bolarque, Entrepeñas and Buendia) controlling 80% of the watershed surface draining to Aranjuez. Two different models were used for the flood frequency analysis: (a) a stationary model estimating statistical distributions incorporating imprecise and categorical data based on maximum likelihood estimators, and (b) a time-varying model based on "generalized additive models for location, scale and shape" (GAMLSS) modelling, which incorporates external covariates related to climate variability (NAO index) and catchment hydrology factors (in this paper a reservoir index; RI). Flood frequency
Project Lifespan-based Nonstationary Hydrologic Design Methods for Changing Environment
Xiong, L.
2017-12-01
Under changing environment, we must associate design floods with the design life period of projects to ensure the hydrologic design is really relevant to the operation of the hydrologic projects, because the design value for a given exceedance probability over the project life period would be significantly different from that over other time periods of the same length due to the nonstationarity of probability distributions. Several hydrologic design methods that take the design life period of projects into account have been proposed in recent years, i.e. the expected number of exceedances (ENE), design life level (DLL), equivalent reliability (ER), and average design life level (ADLL). Among the four methods to be compared, both the ENE and ER methods are return period-based methods, while DLL and ADLL are risk/reliability-based methods which estimate design values for given probability values of risk or reliability. However, the four methods can be unified together under a general framework through a relationship transforming the so-called representative reliability (RRE) into the return period, i.e. m = 1/(1-RRE), in which we compute the return period m using the representative reliability RRE. The results of nonstationary design quantiles and associated confidence intervals calculated by ENE, ER and ADLL were very similar, since ENE or ER was a special case or had a similar expression form with respect to ADLL. In particular, the design quantiles calculated by ENE and ADLL were the same when the return period was equal to the length of the design life. In addition, DLL can yield similar design values if the relationship between DLL and ER/ADLL return periods is considered. Furthermore, ENE, ER and ADLL had good adaptability to either an increasing or decreasing situation, yielding not too large or too small design quantiles. This is important for applications of nonstationary hydrologic design methods in actual practice because of the concern of choosing the emerging
Aristizabal, F; Glavinovic, M I
2003-10-01
Tracking spectral changes of rapidly varying signals is a demanding task. In this study, we explore on Monte Carlo-simulated glutamate-activated AMPA patch and synaptic currents whether a wavelet analysis offers such a possibility. Unlike Fourier methods that determine only the frequency content of a signal, the wavelet analysis determines both the frequency and the time. This is owing to the nature of the basis functions, which are infinite for Fourier transforms (sines and cosines are infinite), but are finite for wavelet analysis (wavelets are localized waves). In agreement with previous reports, the frequency of the stationary patch current fluctuations is higher for larger currents, whereas the mean-variance plots are parabolic. The spectra of the current fluctuations and mean-variance plots are close to the theoretically predicted values. The median frequency of the synaptic and nonstationary patch currents is, however, time dependent, though at the peak of synaptic currents, the median frequency is insensitive to the number of glutamate molecules released. Such time dependence demonstrates that the "composite spectra" of the current fluctuations gathered over the whole duration of synaptic currents cannot be used to assess the mean open time or effective mean open time of AMPA channels. The current (patch or synaptic) versus median frequency plots show hysteresis. The median frequency is thus not a simple reflection of the overall receptor saturation levels and is greater during the rise phase for the same saturation level. The hysteresis is due to the higher occupancy of the doubly bound state during the rise phase and not due to the spatial spread of the saturation disk, which remains remarkably constant. Albeit time dependent, the variance of the synaptic and nonstationary patch currents can be accurately determined. Nevertheless the evaluation of the number of AMPA channels and their single current from the mean-variance plots of patch or synaptic
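The parabolic mean-variance relation mentioned above, σ² = iμ − μ²/N for N channels of unitary current i, can be illustrated with a deterministic sketch: exact binomial-model values are generated and the two parameters are recovered by a linear fit (real analyses fit noisy trial-to-trial estimates, of course). The channel count and unitary current below are hypothetical.

```python
# For N channels of unitary current i and open probability p(t):
#   mean:     mu    = N * p * i
#   variance: sigma2 = N * p * (1 - p) * i^2 = i*mu - mu^2/N
# Fitting sigma2 against the basis [mu, -mu^2] recovers i and 1/N.
N_true, i_true = 100, 2.5                      # hypothetical channel count, current (pA)
probs = [0.05 * k for k in range(1, 19)]       # open probability over time
mean = [N_true * p * i_true for p in probs]
var = [N_true * p * (1 - p) * i_true ** 2 for p in probs]

# 2x2 linear least squares (normal equations solved by Cramer's rule).
s11 = sum(m * m for m in mean)
s12 = sum(-m ** 3 for m in mean)
s22 = sum(m ** 4 for m in mean)
b1 = sum(v * m for v, m in zip(var, mean))
b2 = sum(-v * m * m for v, m in zip(var, mean))
det = s11 * s22 - s12 * s12
i_est = (b1 * s22 - b2 * s12) / det            # estimated unitary current
invN_est = (s11 * b2 - s12 * b1) / det         # estimated 1/N
```

With exact model values the fit recovers i and N exactly; the abstract's caution is that for nonstationary currents the mean and variance must be estimated at matched time points across trials, not pooled over the whole record.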
An approximation method for nonlinear integral equations of Hammerstein type
International Nuclear Information System (INIS)
Chidume, C.E.; Moore, C.
1989-05-01
The solution of a nonlinear integral equation of Hammerstein type in Hilbert spaces is approximated by means of a fixed point iteration method. Explicit error estimates are given and, in some cases, convergence is shown to be at least as fast as a geometric progression. (author). 25 refs
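A concrete instance of such a fixed-point iteration, on a discretized Hammerstein equation u(x) = f(x) + λ∫₀¹ k(x,y) u(y)² dy with a small λ so the map is a contraction (the kernel, data, and λ are toy choices, not from the paper):

```python
# Fixed-point iteration for a discretized Hammerstein-type equation
#   u(x) = f(x) + lam * integral_0^1 k(x,y) * u(y)^2 dy
# with toy choices f(x) = x, k(x,y) = x*y and a small lam so the map contracts.
n = 101
xs = [i / (n - 1) for i in range(n)]
lam = 0.1

def apply_map(u):
    out = []
    for x in xs:
        vals = [x * y * u[j] ** 2 for j, y in enumerate(xs)]
        integral = (sum(vals) - 0.5 * (vals[0] + vals[-1])) / (n - 1)  # trapezoid rule
        out.append(x + lam * integral)
    return out

u = [0.0] * n
errors = []                       # sup-norm change per iteration
for _ in range(10):
    u_next = apply_map(u)
    errors.append(max(abs(a - b) for a, b in zip(u_next, u)))
    u = u_next
```

The successive changes shrink by a roughly constant factor, which is the geometric-progression convergence the abstract refers to.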
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
DEFF Research Database (Denmark)
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
Minimax rational approximation of the Fermi-Dirac distribution
Moussa, Jonathan E.
2016-10-01
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ε⁻¹)) poles to achieve an error tolerance ε at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. This is particularly beneficial when Δ ≫ Δ_occ, such as in electronic structure calculations that use a large basis set.
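For contrast with the minimax construction, the naive pole approximation obtained by truncating the Matsubara expansion f(x) = 1/2 − 2x Σₙ 1/(x² + ((2n+1)π)²) converges only like O(|x|/N), which is why optimized pole placement pays off:

```python
import math

def fermi(x):
    """Fermi-Dirac distribution in reduced units, f(x) = 1/(e^x + 1)."""
    return 1.0 / (math.exp(x) + 1.0)

def fermi_poles(x, n_poles):
    """Truncated Matsubara pole expansion:
    f(x) ~= 1/2 - 2x * sum_{n=0}^{N-1} 1/(x^2 + ((2n+1)*pi)^2).
    The truncation error shrinks only like O(|x|/N)."""
    s = sum(1.0 / (x * x + ((2 * n + 1) * math.pi) ** 2) for n in range(n_poles))
    return 0.5 - 2.0 * x * s

# Even 500 poles leave an error around 1e-3 on a modest interval,
# whereas minimax pole placement needs only a handful of poles.
max_err = max(abs(fermi(x) - fermi_poles(x, 500))
              for x in [i / 10 - 5 for i in range(101)])
```

This illustrates the gap the paper targets: the pole count of the naive expansion grows linearly in the accuracy, while optimized (minimax-type) rational approximations need only logarithmically many poles.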
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Directory of Open Access Journals (Sweden)
Kovin S Naidoo
2012-01-01
Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Enhancement and Noise Statistics Estimation for Non-Stationary Voiced Speech
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2016-01-01
In this paper, single channel speech enhancement in the time domain is considered. We address the problem of modelling non-stationary speech by describing the voiced speech parts by a harmonic linear chirp model instead of using the traditional harmonic model. This means that the speech signal...... through simulations on synthetic and speech signals, that the chirp versions of the filters perform better than their harmonic counterparts in terms of output signal-to-noise ratio (SNR) and signal reduction factor. For synthetic signals, the output SNR for the harmonic chirp APES based filter...... is increased 3 dB compared to the harmonic APES based filter at an input SNR of 10 dB, and at the same time the signal reduction factor is decreased. For speech signals, the increase is 1.5 dB along with a decrease in the signal reduction factor of 0.7. As an implicit part of the APES filter, a noise...
Calculation of nonstationary two-dimensional temperature field in a tube wall in burnout
International Nuclear Information System (INIS)
Kashcheev, V.M.; Pykhtina, T.V.; Yur'ev, Yu.S.
1977-01-01
A nonstationary two-dimensional heat conduction equation is solved numerically for the tube wall of a fuel element simulator with arbitrary energy release. The tube is heat-insulated on the outside, while a vapour-liquid mixture flows inside it. Burnout occurs when the heat transfer coefficient corresponds to developed boiling in one part of the tube and to the deteriorated regime in the other part. Thermal losses at both ends of the tube are taken into account. The statement of the problem, the solution algorithm and the results of a test adjustment problem are given. Satisfactory agreement between the calculated and experimental steady-state temperatures is obtained.
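The kind of transient conduction calculation described in this abstract can be illustrated with a minimal explicit finite-difference scheme. The sketch below is not the authors' algorithm: it reduces the problem to one radial dimension, prescribes an illustrative constant heat flux at the inner wall surface with an insulated outer surface, and all material constants are invented for the example.

```python
import numpy as np

# Explicit FTCS sketch for 1-D transient heat conduction in a tube wall
# (radial direction only; all material values are illustrative).
alpha = 1e-5             # thermal diffusivity, m^2/s
dr, dt = 1e-4, 2e-4      # grid spacing (m) and time step (s)
assert alpha * dt / dr**2 <= 0.5   # explicit-scheme stability condition

r = np.linspace(5e-3, 6e-3, 11)    # inner to outer wall radius, m
T = np.full(r.size, 300.0)         # initial temperature, K
q_inner = 2e5                      # prescribed heat flux at inner surface, W/m^2
k = 20.0                           # thermal conductivity, W/(m K)

for _ in range(5000):              # advance 1 s of physical time
    # Cylindrical Laplacian: d2T/dr2 + (1/r) dT/dr on interior nodes
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2 \
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
    T[1:-1] += alpha * dt * lap
    T[0] = T[1] + q_inner * dr / k  # prescribed flux at the inner wall
    T[-1] = T[-2]                    # insulated (zero-gradient) outer wall

print(T.round(1))                    # temperature profile across the wall
```

After the initial transient the profile becomes quasi-steady, decreasing monotonically from the heated inner surface to the insulated outer surface while the whole wall heats up.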
Instantaneous Purified Orbit: A New Tool for Analysis of Nonstationary Vibration of Rotor System
Directory of Open Access Journals (Sweden)
Shi Dongfeng
2001-01-01
Full Text Available In some circumstances, vibration signals of large rotating machinery possess time-varying characteristics to some extent. Traditional diagnosis methods, such as the FFT spectrum and the orbit diagram, face a considerable challenge in dealing with this problem. This work examines the intrinsic drawbacks of conventional vibration signal processing methods and studies the instantaneous purified orbit (IPO), based on the improved Fourier spectrum (IFS), for analyzing nonstationary vibration. By integrating the benefits of the short period Fourier transform (SPFT) and the regular holospectrum, this method can intuitively reflect the vibration characteristics of a rotor system by means of parameter analysis of the corresponding frequency ellipses. Practical examples, such as transient vibration in run-up stages and the bistable condition of a rotor, show that IPO is a powerful tool for diagnosis and analysis of the vibration behavior of rotor systems.
Stochastic Geometric Models with Non-stationary Spatial Correlations in Lagrangian Fluid Flows
Gay-Balmaz, François; Holm, Darryl D.
2018-01-01
Inspired by spatiotemporal observations from satellites of the trajectories of objects drifting near the surface of the ocean in the National Oceanic and Atmospheric Administration's "Global Drifter Program", this paper develops data-driven stochastic models of geophysical fluid dynamics (GFD) with non-stationary spatial correlations representing the dynamical behaviour of oceanic currents. Three models are considered. Model 1 from Holm (Proc R Soc A 471:20140963, 2015) is reviewed, in which the spatial correlations are time independent. Two new models, called Model 2 and Model 3, introduce two different symmetry breaking mechanisms by which the spatial correlations may be advected by the flow. These models are derived using reduction by symmetry of stochastic variational principles, leading to stochastic Hamiltonian systems, whose momentum maps, conservation laws and Lie-Poisson bracket structures are used in developing the new stochastic Hamiltonian models of GFD.
3rd International Conference on Condition Monitoring of Machinery in Non-Stationary Operations
Rubini, Riccardo; D'Elia, Gianluca; Cocconcelli, Marco; Chaari, Fakher; Zimroz, Radoslaw; Bartelmus, Walter; Haddar, Mohamed
2014-01-01
This book presents the proceedings of the third edition of the Condition Monitoring of Machinery in Non-Stationary Operations conference (CMMNO13), which was held in Ferrara, Italy. This yearly event brings together an international community of researchers who met – in 2011 in Wroclaw (Poland) and in 2012 in Hammamet (Tunisia) – to discuss issues in the diagnostics of rotating machines operating under complex motion and/or load conditions. The growing interest of the industrial world in the topics covered by CMMNO13 involves the fields of packaging, automotive, agricultural, mining, processing and wind machines, in addition to systems for data acquisition. The participation of speakers and visitors from industry makes the event an opportunity for immediate assessment of the potential applications of advanced methodologies for signal analysis. Signals acquired from machines often contain contributions from several different components as well as noise. Therefore, the major challenge of condition monitoring is to po...
DEFF Research Database (Denmark)
Amado, Cristina; Teräsvirta, Timo
In this paper we investigate the effects of carefully modelling the long-run dynamics of the volatilities of stock market returns on the conditional correlation structure. To this end we allow the individual unconditional variances in Conditional Correlation GARCH models to change smoothly over time by incorporating a nonstationary component in the variance equations. The modelling technique to determine the parametric structure of this time-varying component is based on a sequence of specification Lagrange multiplier-type tests derived in Amado and Teräsvirta (2011). The variance equations combine the long-run and the short-run dynamic behaviour of the volatilities. The structure of the conditional correlation matrix is assumed to be either time independent or to vary over time. We apply our model to pairs of seven daily stock returns belonging to the S&P 500 composite index and traded at the New York Stock Exchange...
Unveiling non-stationary coupling between Amazon and ocean during recent extreme events
Ramos, Antônio M. de T.; Zou, Yong; de Oliveira, Gilvan Sampaio; Kurths, Jürgen; Macau, Elbert E. N.
2018-02-01
The interplay between extreme events in the Amazon's precipitation and anomalies in the temperature of the surrounding oceans is not fully understood, especially its causal relations. In this paper, we investigate the climatic interaction between these regions from 1999 until 2012 using modern tools of complex system science. We quantitatively identify the time scale of the coupling and unveil the non-stationary influence of the ocean's temperature. The findings consistently show the distinctions between the coupling in the recent major extreme events in Amazonia, such as the two droughts of 2005 and 2010 and the three floods of 1999, 2009 and 2012. Interestingly, the results also reveal that the influence on the anomalous precipitation of the Southwest Amazon has become increasingly lagged. The analysis can shed light on the underlying dynamics of the climate network system and consequently can improve predictions of extreme rainfall events.
Experimental data processing technique for nonstationary heat transfer on fuel rod simulators
International Nuclear Information System (INIS)
Nikonov, S.P.; Nikonov, A.P.; Belyukin, V.A.
1982-01-01
Non-stationary heat-transfer data processing is considered in connection with experimental studies of emergency cooling in which fuel rod simulators with both direct and indirect shell heating were used. The objective of the data processing was to obtain the temperature distribution within the simulator, the heat flux removed by the coolant, and the shell-coolant heat-transfer coefficient. Special attention was paid to the calculation of the temperature distribution when processing data from reflooding experiments. In this case two quantities are assumed to be known: the time dependence of the temperature at a certain point within the simulator cross-section, and the heat flux at some point of the same cross-section. The preparation of the initial data for these calculations, employing smoothing by cubic spline functions, is considered as well, applying an algorithm reported in the literature that is efficient for the given functional dependency when the deviation at each point is known.
International Nuclear Information System (INIS)
Yu-Dong, Chen; Li, Li; Yi, Zhang; Jian-Ming, Hu
2009-01-01
In the study of complex networks (systems), the scaling phenomenon of flow fluctuations refers to a power law between the mean flux (activity) ⟨F_i⟩ of the i-th node and its dispersion σ_i, of the form σ_i ∝ ⟨F_i⟩^α. Such scaling laws are found to be prevalent in both natural and man-made network systems, but the understanding of their origins still remains limited. This paper proposes a non-stationary Poisson process model to give an analytical explanation of the non-universal scaling phenomenon: the exponent α varies between 1/2 and 1 depending on the size of the sampling time window and the relative strength of the external/internal driving forces of the systems. The crossover behaviour and the relation of fluctuation scaling to pseudo long range dependence are also accounted for by the model. Numerical experiments show that the proposed model can recover the multi-scaling phenomenon. (general)
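The non-universal exponent described above can be reproduced in a few lines of simulation. The sketch below is illustrative only, not the paper's model: node rates, modulation strength and window length are invented. Each node emits Poisson counts; with purely internal (Poisson) noise the fitted exponent sits near 1/2, and a common external modulation of the rates pushes it toward 1.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_nodes = 2000, 50
base_rates = rng.uniform(1.0, 50.0, n_nodes)   # heterogeneous mean activities

def scaling_exponent(external_strength):
    # External driving: a common multiplicative modulation of all node rates
    modulation = 1.0 + external_strength * rng.uniform(-0.9, 0.9, T)
    counts = rng.poisson(np.outer(modulation, base_rates))  # shape (T, n_nodes)
    mean_flux = counts.mean(axis=0)
    sigma = counts.std(axis=0)
    # Fit sigma ~ mean^alpha on log-log axes across nodes
    alpha, _ = np.polyfit(np.log(mean_flux), np.log(sigma), 1)
    return alpha

print(scaling_exponent(0.0))   # internal Poisson noise only: alpha near 1/2
print(scaling_exponent(0.9))   # strong external driving: alpha shifts toward 1
```

For a pure Poisson process the standard deviation is the square root of the mean, which is exactly the α = 1/2 limit; the external common modulation adds a variance term proportional to the squared rate, moving the fitted slope upward.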
A nonstationary Markov transition model for computing the relative risk of dementia before death
Yu, Lei; Griffith, William S.; Tyas, Suzanne L.; Snowdon, David A.; Kryscio, Richard J.
2010-01-01
This paper investigates the long-term behavior of the k-step transition probability matrix for a nonstationary discrete time Markov chain in the context of modeling transitions from intact cognition to dementia with mild cognitive impairment (MCI) and global impairment (GI) as intervening cognitive states. The authors derive formulas for the following absorption statistics: (1) the relative risk of absorption between competing absorbing states, and (2) the mean and variance of the number of visits among the transient states before absorption. Since absorption is not guaranteed, sufficient conditions are discussed to ensure that the substochastic matrix associated with transitions among transient states converges to zero in the limit. Results are illustrated with an application to the Nun Study, a cohort of 678 participants, 75 to 107 years of age, followed longitudinally with up to ten cognitive assessments over a fifteen-year period. PMID:20087848
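The absorption statistics mentioned in this abstract have closed forms in the classical time-homogeneous special case (the paper treats the harder nonstationary chain). A minimal sketch, with entirely invented transition probabilities: the fundamental matrix gives expected visits among transient states, and from it the absorption probabilities and a relative risk of dementia versus death.

```python
import numpy as np

# Transient states: 0 = intact, 1 = MCI, 2 = global impairment (GI)
# Absorbing states: dementia, death. All probabilities are illustrative.
Q = np.array([[0.80, 0.10, 0.05],    # transitions among transient states
              [0.05, 0.70, 0.15],
              [0.00, 0.05, 0.70]])
R = np.array([[0.02, 0.03],          # transient -> (dementia, death)
              [0.06, 0.04],
              [0.15, 0.10]])

# Fundamental matrix: N[i, j] = expected number of visits to transient
# state j before absorption, starting from transient state i.
N = np.linalg.inv(np.eye(3) - Q)

# B[i, k] = probability of eventual absorption in absorbing state k.
B = N @ R

# Relative risk of dementia vs death, starting from intact cognition:
rr = B[0, 0] / B[0, 1]
print(N.round(2))
print(B.round(3))
print(round(rr, 2))
```

Because Q is substochastic with spectral radius below one, Q^k vanishes in the limit and every row of B sums to one, i.e. absorption is certain in this toy chain.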
Markov-switching model for nonstationary runoff conditioned on El Nino information
DEFF Research Database (Denmark)
Gelati, Emiliano; Madsen, H.; Rosbjerg, Dan
2010-01-01
We define a Markov-modulated autoregressive model with exogenous input (MARX) to generate runoff scenarios using climatic information. Runoff parameterization is assumed to be conditioned on a hidden climate state following a Markov chain, where state transition probabilities are functions of the climatic input. MARX allows stochastic modeling of nonstationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We apply MARX to inflow time series of the Daule Peripa reservoir (Ecuador). El Nino Southern Oscillation (ENSO) information is used to condition runoff parameterization. Among the investigated ENSO indexes, the NINO 1+2 sea surface temperature anomalies and the trans-Nino index perform best as predictors. In the perspective of reservoir optimization at various time scales, MARX produces realistic...
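The generative structure described in this abstract can be sketched compactly. This is a simplified stand-in, not the authors' MARX: the hidden-state transition probabilities are held fixed rather than made functions of the climatic input, the order is AR(1), and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hidden climate states (e.g. "neutral" and "El Nino"); values illustrative.
P = np.array([[0.9, 0.1],    # fixed state transition probabilities
              [0.3, 0.7]])   # (the paper makes these depend on the climate input)
phi   = [0.6, 0.3]           # AR(1) coefficient per state
beta  = [0.0, 0.8]           # weight of the exogenous climate input per state
sigma = [0.5, 1.2]           # noise level per state

def simulate_marx(x_exog, s0=0):
    """Markov-modulated AR(1) with exogenous input: a simplified MARX sketch."""
    s, y, out = s0, 0.0, []
    for x in x_exog:
        s = rng.choice(2, p=P[s])                  # hidden climate state evolves
        y = phi[s] * y + beta[s] * x + sigma[s] * rng.standard_normal()
        out.append(y)                              # runoff anomaly
    return np.array(out)

anomalies = simulate_marx(rng.standard_normal(500))
print(anomalies[:5].round(2))
```

Each hidden state selects its own autoregressive regime, so the marginal process is a mixture of AR models modulated by the Markov chain, which is what lets the model capture nonstationary runoff behaviour.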
Robust suppression of nonstationary power-line interference in electrocardiogram signals
International Nuclear Information System (INIS)
Li, Guojun; Zeng, Xiaopin; Zhou, Yu; Liu, Guojin; Zhou, Xichuan; Zhou, Xiaona
2012-01-01
It is a challenge to suppress time-varying power-line interference (PLI) with various levels in electrocardiogram (ECG) signals. Most previous attempts of tracking and suppressing the nonstationary PLI signal are based on the least-squares (LS) algorithm. This makes these methods susceptible to QRS complex in suppressing a low-level PLI signal which is frequently coupled in battery-operated ECG equipment. To address the limitation of LS-based methods, this study presents a robust PLI suppression system based on a robust extension of the Kalman filter. In addition, we used an improved version of empirical mode decomposition to further attenuate the QRS complex. Experiments show that our system could effectively suppress the PLI while preserving meaningful ECG components at various interference levels. (paper)
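The tracking idea behind this kind of PLI suppression can be sketched with an ordinary (non-robust) Kalman filter; the paper's contribution is a robust extension plus empirical mode decomposition, neither of which is shown here. All signal parameters below are invented, and the "ECG" is just band-limited noise standing in for the physiological signal.

```python
import numpy as np

fs, f0 = 500.0, 50.0                   # sampling rate and power-line frequency, Hz
w = 2 * np.pi * f0 / fs
n = 2000
t = np.arange(n) / fs
rng = np.random.default_rng(4)

# Toy "ECG" (band-limited noise) plus PLI with slowly drifting amplitude
ecg = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
pli = (0.5 + 0.3 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(w * np.arange(n))
y = ecg + pli

# Kalman filter tracking a sinusoid: state = [s_t, s_{t-1}], with the
# transition enforcing the oscillator recursion s_t = 2 cos(w) s_{t-1} - s_{t-2}.
A = np.array([[2 * np.cos(w), -1.0], [1.0, 0.0]])
H = np.array([[1.0, 0.0]])
Qn = 1e-4 * np.eye(2)                  # process noise: lets the amplitude drift
Rn = np.array([[1.0]])                 # the ECG itself acts as observation noise
x, P = np.zeros(2), np.eye(2)
est = np.empty(n)
for k in range(n):
    x = A @ x                          # predict
    P = A @ P @ A.T + Qn
    K = P @ H.T / (H @ P @ H.T + Rn)   # Kalman gain
    x = x + (K * (y[k] - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    est[k] = x[0]                      # tracked interference

cleaned = y - est                      # PLI-suppressed signal
```

The least-squares optimality of this filter is exactly what makes it susceptible to QRS complexes at low interference levels, which motivates the robust Kalman variant proposed in the paper.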
Is the Labour Force Participation Rate Non-Stationary in Romania?
Directory of Open Access Journals (Sweden)
Tiwari Aviral Kumar
2015-01-01
Full Text Available The purpose of this paper is to test hysteresis of the Romanian labour force participation rate, using time series data with quarterly frequency covering the period 1999Q1-2013Q4. The main results reveal that the Romanian labour force participation rate is a nonlinear process and has a partial unit root (i.e. it is stationary in the first regime and non-stationary in the second one), with the main breaking point registered around 2005. In this context, the value of using the unemployment rate as an indicator for capturing joblessness in this country is debatable. Starting from 2005, the participation rate has not followed long-term changes in the unemployment rate, the disturbances having permanent effects on the labour force participation rate.
Nonstationary modeling of a long record of rainfall and temperature over Rome
Villarini, Gabriele; Smith, James A.; Napolitano, Francesco
2010-10-01
A long record (1862-2004) of seasonal rainfall and temperature from the Rome observatory of Collegio Romano is modeled in a nonstationary framework by means of the Generalized Additive Models in Location, Scale and Shape (GAMLSS). Modeling analyses are used to characterize nonstationarities in rainfall and related climate variables. It is shown that the GAMLSS models are able to represent the magnitude and spread in the seasonal time series with parameters which are a smooth function of time. Covariate analyses highlight the role of seasonal and interannual variability of large-scale climate forcing, as reflected in three teleconnection indexes (Atlantic Multidecadal Oscillation, North Atlantic Oscillation, and Mediterranean Index), for modeling seasonal rainfall and temperature over Rome. In particular, the North Atlantic Oscillation is a significant predictor during the winter, while the Mediterranean Index is a significant predictor for almost all seasons.
4th International Conference on Condition Monitoring of Machinery in Non-Stationary Operations
Zimroz, Radoslaw; Bartelmus, Walter; Haddar, Mohamed
2016-01-01
The book provides readers with a snapshot of recent research and technological trends in the field of condition monitoring of machinery working under a broad range of operating conditions. Each chapter, accepted after a rigorous peer-review process, reports on an original piece of work presented and discussed at the 4th International Conference on Condition Monitoring of Machinery in Non-stationary Operations, CMMNO 2014, held on December 15-16, 2014, in Lyon, France. The contributions have been grouped into three different sections according to the main subfield (signal processing, data mining, or condition monitoring techniques) they are related to. The book includes both theoretical developments as well as a number of industrial case studies, in different areas including, but not limited to: noise and vibration; vibro-acoustic diagnosis; signal processing techniques; diagnostic data analysis; instantaneous speed identification; monitoring and diagnostic systems; and dynamic and fault modeling. This book no...
Mathematical modeling of non-stationary gas flow in gas pipeline
Fetisov, V. G.; Nikolaev, A. K.; Lykov, Y. V.; Duchnevich, L. N.
2018-03-01
An analysis of the operation of the gas transportation system shows that for a considerable part of the time pipelines operate in an unsteady regime of gas flow. Pressure and flow rate vary along the length of the pipeline and over time as a result of uneven consumption and offtake, the switching on and off of compressor units, the closing of stop valves, and the emergence of emergency leaks. The operational management of such regimes is complicated by the difficulty of reconciling the operating modes of individual sections of the gas pipeline with each other, as well as with compressor stations. Identifying the factors that cause changes in the operating mode of the pipeline system and revealing the patterns of these changes determine the choice of its parameters. Therefore, knowledge of the laws governing the main technological parameters of gas pumping through pipelines under non-stationary flow conditions is of great importance for practice.
Preventing Errors in Laterality
Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie
2014-01-01
An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
International Nuclear Information System (INIS)
Reason, J.
1988-01-01
This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
Cannon, A. J.
2009-12-01
Parameters in a Generalized Extreme Value (GEV) distribution are specified as a function of covariates using a conditional density network (CDN), which is a probabilistic extension of the multilayer perceptron neural network. If the covariate is time, or is dependent on time, then the GEV-CDN model can be used to perform nonlinear, nonstationary GEV analysis of hydrological or climatological time series. Due to the flexibility of the neural network architecture, the model is capable of representing a wide range of nonstationary relationships. Model parameters are estimated by generalized maximum likelihood, an approach that is tailored to the estimation of GEV parameters from geophysical time series. Model complexity is identified using the Bayesian information criterion and the Akaike information criterion with small sample size correction. Monte Carlo simulations are used to validate GEV-CDN performance on four simple synthetic problems. The model is then demonstrated on precipitation data from southern California, a series that exhibits nonstationarity due to interannual/interdecadal climatic variability. A hierarchy of models can be defined by adjusting three aspects of the GEV-CDN model architecture: (i) by specifying either a linear or a nonlinear hidden-layer activation function; (ii) by adjusting the number of hidden-layer nodes; or (iii) by disconnecting weights leading to output-layer nodes. To illustrate, five GEV-CDN models are shown here in order of increasing complexity for the case of a single covariate, which, in this case, is assumed to be time. The shape parameter is assumed to be constant in all models, although this is not a requirement of the GEV-CDN framework.
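The simplest member of the model hierarchy described above (linear covariate dependence, constant shape) can be sketched without a neural network at all. The example below is a stand-in, not the GEV-CDN implementation: it fits a GEV with a linear time trend in the location parameter by maximum likelihood on synthetic annual maxima, with all parameter values invented. Note SciPy's `genextreme` uses the sign convention c = -ξ.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 80)                    # normalized time covariate
# Synthetic annual maxima with a linear trend in the GEV location parameter
true = dict(mu0=10.0, mu1=4.0, sigma=2.0, xi=0.1)
y = stats.genextreme.rvs(c=-true["xi"],          # scipy's c = -xi
                         loc=true["mu0"] + true["mu1"] * t,
                         scale=true["sigma"], random_state=rng)

def nll(p):
    """Negative log-likelihood of a GEV with location mu0 + mu1 * t."""
    mu0, mu1, log_sigma, xi = p
    return -stats.genextreme.logpdf(y, c=-xi, loc=mu0 + mu1 * t,
                                    scale=np.exp(log_sigma)).sum()

res = optimize.minimize(nll, x0=[y.mean(), 0.0, np.log(y.std()), 0.05],
                        method="Nelder-Mead")
mu0_hat, mu1_hat, log_sigma_hat, xi_hat = res.x
print(round(mu1_hat, 2))   # fitted trend in the location parameter
```

Replacing the linear map from covariate to parameters with a hidden-layer network, as GEV-CDN does, generalizes this to nonlinear nonstationary relationships while the likelihood machinery stays the same.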
Real-time reservoir operation considering non-stationary inflow prediction
Zhao, J.; Xu, W.; Cai, X.; Wang, Z.
2011-12-01
Stationarity of inflow has been a basic assumption in reservoir operation rule design, an assumption now facing challenges due to climate change and human interference. This paper proposes a modeling framework to incorporate non-stationary inflow prediction in optimizing the hedging operation rule of large reservoirs with multiple-year flow regulation capacity. A multi-stage optimization model is formulated, and a solution algorithm based on the optimality conditions is developed to incorporate non-stationary annual inflow prediction through a rolling, dynamic framework that updates the prediction from period to period and adopts the updated prediction in the reservoir operation decision. The prediction model is ARIMA(4,1,0), in which 4 is the autoregressive order, 1 the order of differencing (which removes a linear trend), and 0 the moving-average order. The modeling framework and solution algorithm are applied to the Miyun reservoir in China, determining a yearly operating schedule for the period from 1996 to 2009, during which there was a significant declining trend in reservoir inflow. Different operation policy scenarios are modeled, including the standard operation policy (SOP, matching the current demand as much as possible), the hedging rule (i.e., leaving a certain amount of water for the future to avoid a large risk of water deficit) with forecasts from ARIMA (HR-1), and hedging with perfect forecasts (HR-2). Comparing the results of these scenarios to those of the actual reservoir operation (AO), the utility of reservoir operation under HR-1 is 3.0% lower than under HR-2, but 3.7% higher than under AO and 14.4% higher than under SOP. Note that the utility under AO is 10.3% higher than that under SOP, which shows that a certain level of hedging under some inflow prediction or forecast was used in real-world operation. Moreover, the impacts of the discount rate and the forecast uncertainty level on the operation are also discussed.
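Because ARIMA(4,1,0) has no moving-average term, it is simply an AR(4) fitted to the first differences of the series. The sketch below exploits that to forecast a synthetic declining inflow record with plain least squares; it is an illustration of the model class, not the authors' calibration, and the inflow numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def arima_p10_forecast(series, p=4, steps=3):
    """ARIMA(p,1,0) sketch: AR(p) fitted by least squares on first differences."""
    d = np.diff(series)                 # the "1": one order of differencing
    Y = d[p:]                           # regress each difference on its p lags
    X = np.column_stack([d[p - 1 - k: len(d) - 1 - k] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    hist, level, preds = list(d), series[-1], []
    for _ in range(steps):
        nxt = coef @ np.array(hist[-1:-p - 1:-1])   # most recent p differences
        hist.append(nxt)
        level += nxt                    # undo the differencing cumulatively
        preds.append(level)
    return np.array(preds)

# Synthetic annual inflow with a declining trend (illustrative values only)
years = np.arange(1970, 2010)
inflow = 120.0 - 1.5 * (years - 1970) + 8.0 * rng.standard_normal(years.size)
print(arima_p10_forecast(inflow).round(1))
```

In the paper's rolling framework this fit would be re-estimated each period as new inflow data arrive, so the forecast tracks the declining trend rather than assuming a stationary mean.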
Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series
Vicente, Raul; Díaz-Pernas, Francisco J.; Wibral, Michael
2014-01-01
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and
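The core idea of the ensemble method, pooling observations across trials instead of across time, can be shown with a crude plug-in estimator. This sketch is not the paper's GPU-based nearest-neighbour estimator: it uses a binary discretization and history length one, and the coupled toy processes are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits, history length 1.

    x, y have shape (n_trials, n_samples). Observations are pooled across the
    trial ensemble, as in the ensemble method, rather than relying on a single
    long stationary recording.
    """
    xd = (x > np.median(x)).astype(int)        # crude binary discretization
    yd = (y > np.median(y)).astype(int)
    yf, yp, xp = yd[:, 1:].ravel(), yd[:, :-1].ravel(), xd[:, :-1].ravel()
    joint = np.zeros((2, 2, 2))                # p(y_future, y_past, x_past)
    np.add.at(joint, (yf, yp, xp), 1.0)
    joint /= joint.sum()
    p_ypxp = joint.sum(axis=0)                 # p(y_past, x_past)
    p_yfyp = joint.sum(axis=2)                 # p(y_future, y_past)
    p_yp = joint.sum(axis=(0, 2))              # p(y_past)
    nz = joint > 0
    num = (joint * p_yp[None, :, None])[nz]
    den = (p_yfyp[:, :, None] * p_ypxp[None, :, :])[nz]
    return float(np.sum(joint[nz] * np.log2(num / den)))

# Toy directed coupling: y is driven by the previous sample of x
x = rng.standard_normal((40, 200))
y = np.empty_like(x)
y[:, 0] = rng.standard_normal(40)
y[:, 1:] = 0.8 * x[:, :-1] + 0.5 * rng.standard_normal((40, 199))
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

Since the coupling runs only from x to y, the estimate in the driving direction should clearly exceed the (bias-level) estimate in the reverse direction.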
International Nuclear Information System (INIS)
La Pointe, P.R.
1994-11-01
This report describes the comparison of stationary and non-stationary geostatistical models for the purpose of inferring block-scale hydraulic conductivity values from packer tests at Aespoe. The comparison between models is made through the evaluation of cross-validation statistics for three experimental designs. The first experiment consisted of a 'Delete-1' test previously used at Finnsjoen. The second test consisted of 'Delete-10%' and the third test was a 'Delete-50%' test. Preliminary data analysis showed that the 3 m and 30 m packer test data can be treated as a sample from a single population for the purposes of geostatistical analyses. Analysis of the 3 m data does not indicate that there are any systematic statistical changes with depth, rock type, fracture zone vs non-fracture zone or other mappable factor. Directional variograms are ambiguous to interpret due to the clustered nature of the data, but do not show any obvious anisotropy that should be accounted for in geostatistical analysis. Stationary analysis suggested that there exists a sizeable spatially uncorrelated component ('Nugget Effect') in the 3 m data, on the order of 60% of the observed variance for the various models fitted. Four different nested models were automatically fit to the data. Results for all models in terms of cross-validation statistics were very similar for the first set of validation tests. Non-stationary analysis established that both the order of drift and the order of the intrinsic random functions is low. This study also suggests that conventional cross-validation studies and automatic variogram fitting are not necessarily evaluating how well a model will infer block scale hydraulic conductivity values. 20 refs, 20 figs, 14 tabs
Flood frequency analysis for nonstationary annual peak records in an urban drainage basin
Villarini, G.; Smith, J.A.; Serinaldi, F.; Bales, J.; Bates, P.D.; Krajewski, W.F.
2009-01-01
Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km^2) show that the maximum flow with a 0.01 annual probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m^3 s^-1 km^-2 to a maximum of 5.1 m^3 s^-1 km^-2. An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m^3 s^-1 km^-2). Under nonstationary conditions, alternative definitions of return period should be adopted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m^3 s^-1 km^-2 ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades.
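The notion of a time-varying return period can be illustrated independently of GAMLSS. The sketch below substitutes a lognormal peak-flow distribution whose median drifts linearly (a stand-in for a fitted spline model): the annual exceedance probability of a fixed discharge then changes year by year, and so does its return period. The threshold loosely echoes the abstract's 3.2 m^3 s^-1 km^-2, but every number here is invented.

```python
import numpy as np
from scipy import stats

years = np.arange(1925, 2008)
# Illustrative nonstationary model: annual-peak distribution whose median
# grows with urbanization (a stand-in for a fitted GAMLSS model).
median = 1.5 + 0.03 * (years - years[0])     # unit discharge, m^3 s^-1 km^-2
sigma = 0.4                                   # log-scale spread, held constant

q = 3.2                                       # discharge fixed under stationarity
# Under nonstationarity, the "return period" of q changes year by year:
p_exceed = stats.lognorm.sf(q, s=sigma, scale=median)
return_period = 1.0 / p_exceed
print(round(return_period[0]))    # early in the record, q is a rare event
print(round(return_period[-1]))   # by the end of the record, q is common
```

This is why the abstract reports the same discharge swinging from a multi-thousand-year event to a roughly eight-year event: the exceedance probability is evaluated against a distribution that shifts through time.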
Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique
Directory of Open Access Journals (Sweden)
Daniel Maposa
2016-05-01
In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings in this study reveal strong evidence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation in this study also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the added complexity of the overall model does not yield a worthwhile improvement in fit over a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
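A minimal sketch of fitting a non-stationary GEV with a linear trend in the scale parameter, the model form reported for Combomune and Sicacate. It uses synthetic data and illustrative parameter values, not the Limpopo records; note that SciPy's `genextreme` uses `c = -ξ` relative to the usual GEV shape parameter ξ.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic annual-maximum series with a linear trend in the GEV scale
# parameter. All parameter values are illustrative, not fitted to real data.
n = 60
t = np.arange(n)
true_loc, true_shape = 10.0, 0.1
true_scale = 2.0 + 0.05 * t
x = genextreme.rvs(c=-true_shape, loc=true_loc, scale=true_scale,
                   random_state=rng)

def nll(theta):
    """Negative log-likelihood of a GEV with scale sigma(t) = s0 + s1*t."""
    mu, s0, s1, xi = theta
    sigma = s0 + s1 * t
    if np.any(sigma <= 0):
        return np.inf
    return -genextreme.logpdf(x, c=-xi, loc=mu, scale=sigma).sum()

fit = minimize(nll, x0=[np.mean(x), np.std(x), 0.0, 0.1],
               method="Nelder-Mead", options={"maxiter": 2000})
mu_hat, s0_hat, s1_hat, xi_hat = fit.x
print(f"estimated scale trend s1 = {s1_hat:.3f} (true 0.05)")
```

A likelihood-ratio test between this model and the time-homogeneous fit (s1 fixed at 0) is the usual way to judge, as the article does, whether the trend term is worth its added complexity.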
International Nuclear Information System (INIS)
Davis, A.; Wiscombe, W.; Cahalan, R.; Marshak, A.
1994-01-01
Geophysical data rarely show any smoothness at any scale, and this often makes comparison with theoretical model output difficult. However, highly fluctuating signals and fractal structures are typical of open dissipative systems with nonlinear dynamics, the focus of most geophysical research. High levels of variability are excited over a large range of scales by the combined actions of external forcing and internal instability. At very small scales we expect geophysical fields to be smooth, but these are rarely resolved with available instrumentation or simulation tools; nondifferentiable and even discontinuous models are therefore in order. We need methods of statistically analyzing geophysical data, whether measured in situ, remotely sensed or even generated by a computer model, that are adapted to these characteristics. An important preliminary task is to define statistically stationary features in generally nonstationary signals. We first discuss a simple criterion for stationarity in finite data streams that exhibit power law energy spectra and then, guided by developments in turbulence studies, we advocate the use of two ways of analyzing the scale dependence of statistical information: singular measures and qth order structure functions. In nonstationary situations, the approach based on singular measures seeks power law behavior in integrals over all possible scales of a nonnegative stationary field derived from the data, leading to a characterization of the intermittency in this field. In contrast, the approach based on structure functions uses the signal itself, seeking power laws for the statistical moments of absolute increments over arbitrarily large scales, leading to a characterization of the prevailing nonstationarity in both quantitative and qualitative terms. We explain graphically, step by step, both multifractal statistics, which are largely complementary to each other. 45 refs., 13 figs., 2 tabs
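The structure-function analysis advocated above can be illustrated on a synthetic monofractal signal. This sketch (illustrative, not from the paper) synthesizes a fractional-Brownian-motion-like signal spectrally and estimates the scaling exponents zeta(q) from power-law fits of S_q(r) = <|f(x+r) - f(x)|^q>; for a monofractal signal, zeta(q) = qH.

```python
import numpy as np

rng = np.random.default_rng(1)

# Spectral synthesis of a fractional-Brownian-motion-like signal with
# H = 1/3 (Kolmogorov-like scaling): energy spectrum E(k) ~ k^-(2H+1),
# so Fourier amplitudes ~ k^-(H+1/2) with random phases.
n, H = 2**14, 1.0 / 3.0
freqs = np.fft.rfftfreq(n)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-(H + 0.5))
phases = rng.uniform(0, 2 * np.pi, len(freqs))
f = np.fft.irfft(amp * np.exp(1j * phases), n=n)

# q-th order structure functions over dyadic lags, with zeta(q) estimated
# as the slope of log S_q(r) versus log r.
lags = 2 ** np.arange(1, 8)
zetas = {}
for q in (1, 2, 3):
    Sq = [np.mean(np.abs(f[r:] - f[:-r]) ** q) for r in lags]
    zetas[q] = np.polyfit(np.log(lags), np.log(Sq), 1)[0]
    print(f"q={q}: zeta(q) = {zetas[q]:.2f} (monofractal prediction {q*H:.2f})")
```

For multifractal (intermittent) data the estimated zeta(q) would bend below the straight line qH at large q, which is the deviation the singular-measure analysis then characterizes.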
Nuclear Hartree-Fock approximation testing and other related approximations
International Nuclear Information System (INIS)
Cohenca, J.M.
1970-01-01
Hartree-Fock and Tamm-Dancoff approximations are tested for the angular momentum of even-even nuclei. Wave functions, energy levels and momenta are comparatively evaluated. Quadrupole interactions are studied following the Elliott model. Results are applied to 20 Ne [pt
NLO error propagation exercise: statistical results
International Nuclear Information System (INIS)
Pack, D.J.; Downing, D.J.
1985-09-01
Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
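The core variance-approximation step, a first-order Taylor series expansion with cumulation over uncorrelated error sources, can be sketched for a single product-form measurement model. The model U = W·c·e and all numeric values below are hypothetical, not taken from the Fernald exercise.

```python
import numpy as np

# First-order Taylor-series variance approximation for U = W * c * e
# (weight x uranium concentration x 235U enrichment). For uncorrelated
# error sources, Var(U) is cumulated from the component variances weighted
# by the squared partial derivatives. Numbers are illustrative only.
w, c, e = 100.0, 0.90, 0.93              # measured means
var_w, var_c, var_e = 0.5**2, 0.01**2, 0.005**2

u = w * c * e
# dU/dW = c*e, dU/dc = W*e, dU/de = W*c
var_u = (c * e)**2 * var_w + (w * e)**2 * var_c + (w * c)**2 * var_e
leid = 2.0 * np.sqrt(var_u)              # ~95% limit of error, assuming normality

print(f"U = {u:.2f}, sigma(U) = {np.sqrt(var_u):.3f}, LEID = {leid:.3f}")
```

In the full exercise this per-item variance would be cumulated across all material balance area transactions before forming the LEID; the single-item case above shows only the Taylor-series kernel.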
Shearlets and Optimally Sparse Approximations
DEFF Research Database (Denmark)
Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q
2012-01-01
Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations … optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction … to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field.
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
Approximations to camera sensor noise
Jin, Xiaodan; Hirakawa, Keigo
2013-02-01
Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these types of models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
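The contrast between the two candidate models can be demonstrated in a few lines. This sketch (illustrative values, not the paper's measured sensor data) draws low-light pixel values under pure Poisson corruption and under SD-AWGN with matched variance; the variances agree by construction, but the skewness separates the models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two competing sensor-noise models: Poisson shot noise versus additive
# Gaussian noise with signal-dependent variance (SD-AWGN). For a mean photon
# count lam, both have variance ~= lam, but their higher moments differ,
# most visibly at low counts. Values are illustrative.
lam = 5.0                              # mean photon count per pixel (low light)
n = 200_000
poisson = rng.poisson(lam, n).astype(float)
sd_awgn = lam + rng.normal(0.0, np.sqrt(lam), n)

for name, x in (("Poisson", poisson), ("SD-AWGN", sd_awgn)):
    skew = np.mean((x - x.mean())**3) / x.std()**3
    print(f"{name}: var = {x.var():.2f}, skewness = {skew:.2f}")
```

The Poisson sample shows skewness near 1/sqrt(lam) while the Gaussian approximation is symmetric; at high photon counts the two converge, which is why the choice of model matters mainly in the low-signal regime.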
Rational approximations for tomographic reconstructions
International Nuclear Information System (INIS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-01-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp–Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image. (paper)
Approximation methods in probability theory
Čekanavičius, Vydas
2016-01-01
This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.
Help prevent hospital errors (MedlinePlus patient instructions: //medlineplus.gov/ency/patientinstructions/000618.htm)
2012-03-01
This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...
International Nuclear Information System (INIS)
Jaech, J.L.
1976-01-01
When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
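The interaction described above can be illustrated by simulation. This sketch (hypothetical values, not the MERDA program) shows that with fine grouping the rounding-error variance is close to the classical d²/12 and is nearly uncorrelated with weighing error, while coarse grouping makes the two strongly correlated, which is why the moment estimation method is needed in that regime.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scale readings are rounded to a grouping interval d. When d is small
# relative to the weighing-error std, the rounding error is ~uniform with
# variance d^2/12 and independent of the weighing error. When d is large,
# the rounding error becomes correlated with the weighing error.
true_weight = 10.0
sigma_w = 0.05                        # weighing (measurement) error std
n = 100_000
weigh_err = rng.normal(0.0, sigma_w, n)

for d in (0.01, 0.5):                 # fine vs coarse grouping interval
    reading = np.round((true_weight + weigh_err) / d) * d
    round_err = reading - (true_weight + weigh_err)
    corr = np.corrcoef(round_err, weigh_err)[0, 1]
    print(f"d={d}: Var(round) = {round_err.var():.2e} "
          f"(d^2/12 = {d*d/12:.2e}), corr with weighing error = {corr:.2f}")
```

In the coarse case nearly every reading snaps to the same cell, so the rounding error is almost exactly the negated weighing error, and the d²/12 approximation fails badly.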
Spotting software errors sooner
International Nuclear Information System (INIS)
Munro, D.
1989-01-01
Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)
International Nuclear Information System (INIS)
Kop, L.
2001-01-01
On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for its customers. For the year 2000, many small, but also some large, errors were discovered in the bills of 42 businesses
Medical Errors Reduction Initiative
National Research Council Canada - National Science Library
Mutter, Michael L
2005-01-01
The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
DEFF Research Database (Denmark)
Rasmussen, Jens
1983-01-01
An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.
Approximate reasoning in physical systems
International Nuclear Information System (INIS)
Mutihac, R.
1991-01-01
The theory of fuzzy sets provides excellent ground to deal with fuzzy observations (uncertain or imprecise signals, wavelengths, temperatures, etc.), fuzzy functions (spectra and depth profiles), and fuzzy logic and approximate reasoning. First, the basic ideas of fuzzy set theory are briefly presented. Secondly, stress is put on the application of simple fuzzy set operations for matching candidate reference spectra of a spectral library to an unknown sample spectrum (e.g. IR spectroscopy). Thirdly, approximate reasoning is applied to infer an unknown property from information available in a database (e.g. crystal systems). Finally, multi-dimensional fuzzy reasoning techniques are suggested. (Author)
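A sketch of the spectral-library matching step described above, using pointwise min/max as the fuzzy intersection and union operators. The Gaussian "spectra", the compound names, and the intersection-over-union similarity score are illustrative assumptions, not the author's actual procedure.

```python
import numpy as np

# Treat normalized spectra as fuzzy membership functions over wavelength
# channels; score a library spectrum against the unknown sample by the ratio
# of the fuzzy intersection (pointwise min) to the fuzzy union (pointwise max).
def gaussian_peaks(centers, n=200):
    """Synthetic spectrum: a sum of narrow Gaussian peaks (illustrative)."""
    x = np.linspace(0.0, 1.0, n)
    return sum(np.exp(-((x - c) / 0.03) ** 2) for c in centers)

def fuzzy_match(sample, reference):
    """Fuzzy similarity in [0, 1]: |A ∩ B| / |A ∪ B| with min/max operators."""
    return np.minimum(sample, reference).sum() / np.maximum(sample, reference).sum()

sample = gaussian_peaks([0.2, 0.5, 0.8])
library = {"compound A": gaussian_peaks([0.2, 0.5, 0.8]),
           "compound B": gaussian_peaks([0.2, 0.5]),
           "compound C": gaussian_peaks([0.1, 0.9])}

for name, ref in library.items():
    print(f"{name}: match = {fuzzy_match(sample, ref):.2f}")
```

The candidate whose score is highest (here the spectrum sharing all three peaks) would be reported as the best library match; richer schemes weight channels or use other t-norms in place of min/max.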