WorldWideScience

Sample records for model order estimation

  1. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment...

  2. Comparisons of Modeling and State of Charge Estimation for Lithium-Ion Battery Based on Fractional Order and Integral Order Methods

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2016-03-01

In order to properly manage lithium-ion batteries of electric vehicles (EVs), it is essential to build the battery model and estimate the state of charge (SOC). In this paper, the fractional order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, of which the model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
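
The abstract does not spell out a discretization, but fractional-order battery models of this kind are commonly simulated with the Grünwald-Letnikov (GL) definition of the fractional derivative. The sketch below (function names and the test signal are our own, for illustration only) approximates a GL derivative of order α from samples:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j * C(alpha, j), via the
    standard recursion c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1)/j)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def gl_derivative(x, alpha, dt):
    """Approximate the alpha-order GL derivative of a sampled signal x."""
    n = len(x)
    c = gl_coeffs(alpha, n - 1)
    d = np.zeros(n)
    for k in range(n):
        # truncated GL sum: D^alpha x(t_k) ~ dt^-alpha * sum_j c_j * x_{k-j}
        d[k] = np.dot(c[:k + 1], x[k::-1]) / dt**alpha
    return d

# Example: the half-order derivative of f(t) = t is analytically 2*sqrt(t/pi).
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
approx = gl_derivative(t, 0.5, dt)
exact = 2.0 * np.sqrt(t / np.pi)
print(np.max(np.abs(approx[10:] - exact[10:])))  # small discretization error
```

The weight recursion avoids overflow-prone factorials, which matters because every past sample contributes to each new step; this memory term is what gives fractional-order elements their long-memory behavior.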

  3. Disturbance estimation of nuclear power plant by using reduced-order model

    International Nuclear Information System (INIS)

    Tashima, Shin-ichi; Wakabayashi, Jiro

    1983-01-01

An estimation method is proposed for the multiplex disturbances which occur in a nuclear power plant. The method is composed of two parts: (i) the identification of a simplified multi-input, multi-output model to describe the related system response, and (ii) the design of a Kalman filter to estimate the multiplex disturbance. Concerning the simplified model, several observed signals are first selected as output variables which can well represent the system response caused by the disturbances. A reduced-order model is utilized for designing the disturbance estimator. This is based on the following two considerations. The first is that the disturbance is assumed to be of a quasistatic nature. The other is based on the intuition that there exist a few dominant modes between the disturbances and the selected observed signals and that most of the remaining non-dominant modes do not affect the accuracy of the disturbance estimator. The reduced-order model is further transformed to a single-output model using a linear combination of the output signals, whereby the standard procedure of structural identification is avoided. The parameters of the model thus transformed are calculated by the generalized least squares method. As for the multiplex disturbance estimator, the Kalman filtering method is applied by balancing the following three items: (a) quick response to disturbances, (b) reduction of estimation error in the presence of observation noise, and (c) elimination of cross-interference between the disturbances to the plant and the estimates from the Kalman filter. The effectiveness of the proposed method is verified through computer experiments using a BWR plant simulator. (author)

  4. SECOND ORDER LEAST SQUARE ESTIMATION ON ARCH(1) MODEL WITH BOX-COX TRANSFORMED DEPENDENT VARIABLE

    Directory of Open Access Journals (Sweden)

    Herni Utami

    2014-03-01

The Box-Cox transformation is often used to reduce heterogeneity and to achieve a symmetric distribution of the response variable. In this paper, we estimate the parameters of the Box-Cox transformed ARCH(1) model using the second-order least squares method, and we study the consistency and asymptotic normality of the second-order least squares (SLS) estimators. SLS estimation was introduced by Wang (2003, 2004) to estimate the parameters of nonlinear regression models with independent and identically distributed errors.

  5. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

In recent years, fractional order models have been employed for state of charge (SOC) estimation, with the non-integer differentiation order expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution; therefore, the order of the fractional order model varies with the SOC even under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results show that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results show that the proposed iterative method can quickly estimate the SOC within a few iterations while maintaining high estimation accuracy.

  6. Reduced order ARMA spectral estimation of ocean waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Witz, J.A.; Lyons, G.J.

... After selecting the initial model order based on the Akaike Information Criterion, a novel model order reduction technique is applied to obtain the final reduced order ARMA model. First estimates of the higher order autoregressive coefficients ... of the reduced order ARMA model are obtained. The moving average part is determined based on partial fraction and recursive methods. The above system identification models and model order reduction technique are shown here to be successfully applied...
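
As a rough, self-contained illustration of the first step (initial order selection by the Akaike Information Criterion), the following sketch fits AR models of increasing order by plain least squares and picks the AIC minimizer. It is our own toy, not the paper's identification scheme; measured wave records would replace the synthetic AR(2) data:

```python
import numpy as np

def fit_ar_ls(x, p):
    """Least-squares fit of an AR(p) model; returns coefficients and
    residual variance."""
    N = len(x)
    X = np.column_stack([x[p - i - 1:N - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    return a, np.mean(resid**2)

def aic_order(x, p_max):
    """Select the AR order by minimizing AIC = N*log(sigma^2) + 2p."""
    N = len(x)
    aics = [N * np.log(fit_ar_ls(x, p)[1]) + 2 * p
            for p in range(1, p_max + 1)]
    return int(np.argmin(aics)) + 1

# Synthetic AR(2) data; the selected order should typically be 2.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for k in range(2, 2000):
    x[k] = 1.5 * x[k - 1] - 0.7 * x[k - 2] + rng.standard_normal()
print(aic_order(x, p_max=10))
```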

  7. Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems

    KAUST Repository

    N'Doye, Ibrahima; Laleg-Kirati, Taous-Meriem

    2015-01-01

This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using a fractional-order adaptive fault observer. Sufficient conditions for the asymptotic convergence of the fractional-order state estimation error and of the conventional integer-order and fractional-order fault estimation errors are derived in terms of a linear matrix inequality (LMI) formulation, by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach, where the fractional order α belongs to 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.

  8. Parameters and Fractional Differentiation Orders Estimation for Linear Continuous-Time Non-Commensurate Fractional Order Systems

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions and the first-order Newton methods. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of smooth unknown input and fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.

  9. Concurrent hyperthermia estimation schemes based on extended Kalman filtering and reduced-order modelling.

    Science.gov (United States)

    Potocki, J K; Tharp, H S

    1993-01-01

    The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.

  10. Estimation of the order of an autoregressive time series: a Bayesian approach

    International Nuclear Information System (INIS)

    Robb, L.J.

    1980-01-01

Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the model can be estimated by the least-squares, maximum-likelihood, or Yule-Walker methods. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior in most cases.
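
As a hedged illustration of the idea (not the paper's exact derivation, which integrates out the parameters analytically), one can approximate the marginal likelihood of each candidate order with the BIC and normalize the result into a posterior over orders under a uniform prior:

```python
import numpy as np

def ar_loglik_profile(x, p):
    """Gaussian profile log-likelihood of an AR(p) model fitted by least
    squares, conditional on the first p values."""
    N = len(x)
    X = np.column_stack([x[p - i - 1:N - i - 1] for i in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    s2 = np.mean((y - X @ a) ** 2)
    return -0.5 * len(y) * (np.log(2 * np.pi * s2) + 1.0)

def order_posterior(x, p_max):
    """Approximate posterior over the AR order under a uniform prior,
    using the BIC approximation to the marginal likelihood."""
    n = len(x)
    log_ml = np.array([ar_loglik_profile(x, p) - 0.5 * (p + 1) * np.log(n)
                       for p in range(1, p_max + 1)])
    w = np.exp(log_ml - log_ml.max())
    return w / w.sum()

rng = np.random.default_rng(1)
x = np.zeros(1500)
for k in range(2, 1500):
    x[k] = 0.9 * x[k - 1] - 0.5 * x[k - 2] + rng.standard_normal()
print(order_posterior(x, p_max=8).round(3))  # mass concentrates near order 2
```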

  11. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to colored noise than previously published methods.

  12. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

Estimating human affective states from direct observations of facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal have origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel and efficient way of estimating human affective states.
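
A minimal sketch of the core idea, under our own assumptions (two invented skin-conductance features, a synthetic "valence" target, and a degree-3 expansion): build all monomials of the inputs up to a total degree and fit the coefficients by linear least squares.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of the input columns up to the given total degree,
    including the constant term."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

# Toy stand-in for the affect-estimation setting: a smooth nonlinear map
# from two features to a score, fitted by a degree-3 polynomial.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(500)

Phi = poly_features(X, degree=3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
r = np.corrcoef(y, Phi @ w)[0, 1]
print(round(r, 3))   # high correlation, analogous to the paper's 0.98/0.96
```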

  13. Joint fundamental frequency and order estimation using optimal filtering

    Directory of Open Access Journals (Sweden)

    Jakobsson Andreas

    2011-01-01

In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are, or can be interpreted as, filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some recently proposed methods based on optimal filtering, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.

  14. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

The Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to estimate the Sobol' index directly from available input-output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index when only input-output samples are available and the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index directly from input-output samples. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
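
A minimal sample-based sketch of the first-order index (our own binning approximation of the conditional mean; the paper's modularized estimator avoids an explicit user-defined partition):

```python
import numpy as np

def first_order_sobol(x, y, n_bins=30):
    """Estimate the first-order Sobol' index S_i for each column of x from
    plain input-output samples: S_i = Var(E[Y|X_i]) / Var(Y), with the
    conditional mean approximated by quantile bins of X_i."""
    y = np.asarray(y, dtype=float)
    var_y = y.var()
    s = np.empty(x.shape[1])
    for i in range(x.shape[1]):
        edges = np.quantile(x[:, i], np.linspace(0, 1, n_bins + 1))
        idx = np.clip(np.searchsorted(edges, x[:, i], side="right") - 1,
                      0, n_bins - 1)
        # variance of per-bin conditional means, weighted by bin counts
        means = np.array([y[idx == b].mean() for b in range(n_bins)])
        counts = np.bincount(idx, minlength=n_bins)
        grand = np.average(means, weights=counts)
        s[i] = np.average((means - grand) ** 2, weights=counts) / var_y
    return s

# Ishigami-like toy model: X2 matters most, X3 alone not at all.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(200_000, 3))
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 \
    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])
print(first_order_sobol(X, Y).round(2))   # roughly [0.31, 0.44, 0.00]
```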

  15. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    Science.gov (United States)

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  16. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    Science.gov (United States)

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
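
The vocal fold model itself is not given in the abstract, so the sketch below illustrates the generic augmented-state EKF idea on a forced scalar test system x' = -θx + sin t, with θ playing the role of a model parameter estimated jointly with the state; all names, gains, and noise levels are our assumptions:

```python
import numpy as np

# Minimal joint state/parameter EKF on a forced scalar test system.
# This is NOT the paper's vocal fold model; it only shows the mechanics.
rng = np.random.default_rng(0)
dt, n_steps = 0.01, 4000
theta_true = 1.5
x_true = 0.0
ts = dt * np.arange(n_steps)
ys = []
for t in ts:
    x_true += dt * (-theta_true * x_true + np.sin(t))
    ys.append(x_true + 0.02 * rng.standard_normal())   # noisy measurement

z = np.array([0.0, 0.5])                 # augmented state [x, theta]
P = np.diag([1.0, 1.0])
Q = np.diag([1e-8, 1e-8])                # tiny process noise / parameter drift
R = 0.02 ** 2
H = np.array([[1.0, 0.0]])
for t, y in zip(ts, ys):
    F = np.array([[1.0 - dt * z[1], -dt * z[0]],   # transition Jacobian
                  [0.0, 1.0]])
    z = np.array([z[0] + dt * (-z[1] * z[0] + np.sin(t)), z[1]])
    P = F @ P @ F.T + Q
    S = float(H @ P @ H.T) + R                     # innovation variance
    K = (P @ H.T).ravel() / S                      # Kalman gain, shape (2,)
    z = z + K * (y - z[0])
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P
print(round(z[1], 2))  # should be close to theta_true = 1.5
```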

  17. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method, in combination with a physical model parameter identification method, is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is specially designed for electric vehicle operating conditions. As the available battery energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for online electric vehicle applications.

  18. On nonlinear reduced order modeling

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.

    2011-01-01

When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much less than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art global sampling techniques. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to

  1. Low-order model of the Loss-of-Fluid Test (LOFT) reactor plant for use in Kalman filter-based optimal estimators

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1980-01-01

    A low-order, nonlinear model of the Loss-of-Fluid Test (LOFT) reactor plant, for use in Kalman filter estimators, is developed, described, and evaluated. This model consists of 31 differential equations and represents all major subsystems of both the primary and secondary sides of the LOFT plant. Comparisons between model calculations and available LOFT power range testing transients demonstrate the accuracy of the low-order model. The nonlinear model is numerically linearized for future implementation in Kalman filter and optimal control algorithms. The linearized model is shown to be an adequate representation of the nonlinear plant dynamics

  2. High-order computer-assisted estimates of topological entropy

    Science.gov (United States)

    Grote, Johannes

The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.

  3. Investigation of Effectiveness of Order Review and Release Models in Make to Order Supply Chain

    Directory of Open Access Journals (Sweden)

    Kundu Kaustav

    2016-01-01

Nowadays customisation is becoming more common owing to the wide-ranging requirements of customers, for which industries are trying to use a make-to-order (MTO) strategy. Due to high variation in the process, workload control models are extensively used in job-shop companies, which usually adopt the MTO strategy. Some authors have tried to implement workload control models, i.e., order review and release systems, in non-repetitive manufacturing companies where there is a dominant flow in production. Those models perform well on the shop floor, but their performance has never been investigated in high-variation situations such as an MTO supply chain. This paper starts with an introduction to the particular issues in MTO companies and a general overview of the order review and release systems widely used in industry. Two order review and release systems, the Limited and Balanced models, which are particularly suitable for flow shop systems, are applied to an MTO supply chain where the processing times are difficult to estimate due to high variation. Simulation results show that the Balanced model performs much better than the Limited model if the processing times can be estimated precisely.

  4. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co...

  5. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    Science.gov (United States)

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
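
To make the identifiability bound concrete, a tiny helper (our own) evaluates it for a few sensor counts; note that M^4 - 2M^3 + 7M^2 - 6M factors as M(M-1)(M^2 - M + 6), which is always divisible by 8, so integer division is exact:

```python
def max_identifiable_sources(m: int) -> int:
    """Upper bound on uniquely identifiable sources for an M-sensor array,
    K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8, as stated in the abstract."""
    return (m**4 - 2 * m**3 + 7 * m**2 - 6 * m) // 8

for m in (4, 6, 8, 10):
    print(m, max_identifiable_sources(m))
# 4 -> 27, 6 -> 135, 8 -> 434, 10 -> 1080
```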

  6. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

In this paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-degree controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system.

  7. Reduced Order Modeling in General Relativity

    Science.gov (United States)

    Tiglio, Manuel

    2014-03-01

Reduced Order Modeling is an emerging yet fast developing field in gravitational wave physics. The main goals are to enable fast modeling and parameter estimation of any detected signal, along with rapid matched-filter detection. I will focus on the first two. Some accomplishments include being able to replace, with essentially no loss of physical accuracy, the original models with surrogate ones (which are not effective models; that is, they do not simplify the physics but go on a very different track, exploiting the particulars of the waveform family under consideration and state-of-the-art dimensional reduction techniques) which are very fast to evaluate. For example, for EOB models they are at least around 3 orders of magnitude faster than solving the original equations, with physically equivalent results. For numerical simulations the speedup is at least 11 orders of magnitude. For parameter estimation, our current numbers are about bringing ~100 days for a single SPA inspiral binary neutron star Bayesian parameter estimation analysis to under a day. More recently, it has been shown that the full precessing problem for, say, 200 cycles can be represented, through some new ideas, by a remarkably compact set of carefully chosen reduced basis waveforms (~10-100, depending on the accuracy requirements). I will highlight what I personally believe are the challenges to face next in this subarea of GW physics and where efforts should be directed. This talk will summarize work in collaboration with: Harbir Antil (GMU), Jonathan Blackman (Caltech), Priscila Canizares (IoA, Cambridge, UK), Sarah Caudill (UWM), Jonathan Gair (IoA, Cambridge, UK), Scott Field (UMD), Chad R. Galley (Caltech), Frank Herrmann (Germany), Jan Hesthaven (EPFL, Switzerland), Jason Kaye (Brown, Stanford & Courant), Evan Ochsner (UWM), Ricardo Nochetto (UMD), Vivien Raymond (LIGO, Caltech), Rory Smith (LIGO, Caltech), Bela Szilagyi (Caltech) and MT (UMD & Caltech).

  8. Joint estimation of the fractional differentiation orders and the unknown input for linear fractional non-commensurate system

    KAUST Repository

    Belkhatir, Zehor

    2015-11-05

This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as the input of a fractional model of the neurovascular coupling, along with the fractional differentiation orders, are presented in both noise-free and noisy cases.

  9. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    Science.gov (United States)

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
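
The abstract is cut off, but the model it names is well defined: the BOD remaining decays as dL/dt = -kL^n with the order n left free. A hedged sketch (synthetic data and all names are ours) fits k and n by nonlinear least squares using the closed-form solution for n ≠ 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def bod_remaining(t, L0, k, n):
    """Closed-form solution of dL/dt = -k * L**n for n != 1:
    L(t) = (L0**(1-n) - (1-n)*k*t)**(1/(1-n)), floored at zero."""
    base = L0 ** (1 - n) - (1 - n) * k * t
    return np.clip(base, 0.0, None) ** (1.0 / (1.0 - n))

# Synthetic observations from a true order n = 1.4 process, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
L_obs = bod_remaining(t, L0=10.0, k=0.15, n=1.4) \
        + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(bod_remaining, t, L_obs, p0=[9.0, 0.1, 1.2])
print(popt.round(2))   # recovered [L0, k, n], close to [10, 0.15, 1.4]
```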

  10. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

This seminar presentation describes amplitude models and yield estimation that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; and a community effort on instrument calibration is needed.

  11. REGIONAL FIRST ORDER PERIODIC AUTOREGRESSIVE MODELS FOR MONTHLY FLOWS

    Directory of Open Access Journals (Sweden)

    Ceyhun ÖZÇELİK

    2008-01-01

First order periodic autoregressive models are among the models most used for describing the time dependency of hydrological flow processes. In these models, the periodicity of the correlogram is preserved as well as the time dependency of the process. However, the parameters of these models, namely the inter-monthly lag-1 autocorrelation coefficients, may often be estimated erroneously from short samples, since they are statistics of high order moments. Therefore, constituting a regional model may be a solution that can produce more reliable and decisive estimates, and can provide models and model parameters at any required point of the basin considered. In this study, definitions of a homogeneous region for lag-1 autocorrelation coefficients are made, and five parametric and non-parametric models are proposed to set up regional models of the lag-1 autocorrelation coefficients. The regional models are applied to 30 stream flow gauging stations in the Seyhan and Ceyhan basins, and tested by the criteria of relative absolute bias and simple and relative root mean square errors.

  12. On the parameter estimation of first order IMA model corrupted with ...

    African Journals Online (AJOL)

In this paper, we show how the autocovariance functions can be used to estimate the true parameters of IMA(1) models corrupted with white noise. We performed simulation studies to demonstrate our findings. The simulation studies showed that, under the presence of errors in not more than 30% of the total data points, our ...

  13. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetically generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetically generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation

  14. Parameter Estimates in Differential Equation Models for Chemical Kinetics

    Science.gov (United States)

    Winkel, Brian

    2011-01-01

    We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…

  15. Estimation of the convergence order of rigorous coupled-wave analysis for OCD metrology

    Science.gov (United States)

    Ma, Yuan; Liu, Shiyuan; Chen, Xiuguo; Zhang, Chuanwei

    2011-12-01

In most cases of optical critical dimension (OCD) metrology, when applying rigorous coupled-wave analysis (RCWA) to optical modeling, a high order of Fourier harmonics is usually set to guarantee the convergence of the final results. However, the total number of floating point operations grows dramatically as the truncation order increases. Therefore, it is critical to choose an appropriate order to obtain high computational efficiency without losing much accuracy. In this paper, the convergence order associated with the structural and optical parameters has been estimated through simulation. The results indicate that the convergence order is linear in the period of the sample when the other parameters are fixed, both for planar and for conical diffraction. The illuminating wavelength also affects the convergence of the final result. Further investigation concentrating on the ratio of illuminating wavelength to period reveals that the convergence order decreases as the ratio grows and, for a fixed ratio, jumps only slightly, especially within a specific range of wavelengths. This characteristic can be applied to estimate the optimum convergence order of given samples to obtain high computational efficiency.

  16. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes. Finite mixture models are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn statisticians' attention. The main reason is that maximum likelihood estimation is a powerful statistical method which provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationships among nonlinear economic data. In particular, a two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between the stock market price and the rubber price for the sampled countries. The results indicate a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines, and Indonesia.
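
For concreteness, here is a textbook EM sketch for fitting a two-component univariate normal mixture by maximum likelihood (our own toy data stand in for the economic series; the paper's implementation details are not given):

```python
import numpy as np

def em_two_normal(x, n_iter=200):
    """EM for a two-component univariate normal mixture; a textbook sketch,
    not the paper's exact implementation. Returns (weight, means, sds)."""
    # crude initialization from the sample quantiles
    mu = np.quantile(x, [0.25, 0.75])
    sd = np.array([x.std(), x.std()])
    w = 0.5
    for _ in range(n_iter):
        # E-step: responsibilities of component 1 (the 1/sqrt(2*pi) factor
        # cancels in the ratio, so it is omitted)
        p1 = w * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        p2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = p1 / (p1 + p2)
        # M-step: reweighted mixing weight, means, standard deviations
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                      np.average((x - mu[1]) ** 2, weights=1 - r)])
    return w, mu, sd

# Synthetic two-regime data standing in for the economic series.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1.0, 0.5, 3000), rng.normal(2.0, 1.0, 7000)])
print(em_two_normal(x))   # weight near 0.3, means near (-1, 2)
```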

  17. Connection between weighted LPC and higher-order statistics for AR model estimation

    NARCIS (Netherlands)

    Kamp, Y.; Ma, C.

    1993-01-01

This paper establishes the relationship between a weighted linear prediction method used for robust analysis of voiced speech and autoregressive modelling based on higher-order statistics, known as cumulants.

  18. A simplified parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.

  19. Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling

    Directory of Open Access Journals (Sweden)

    Ioana Cornel

    2005-01-01

The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by the error propagation effect, which drastically limits the order of the polynomial approximation. This phenomenon matters especially when a high-order polynomial model is needed, as in the representation of digital modulation signals or acoustic transient signals. The effect is caused by the order reduction technique common to the existing approaches: multiplying the signal by complex conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations according to each polynomial order. We prove that this method considerably reduces the effect of error propagation. Namely, with this order reduction method, the estimation error at a given order depends only on the performance of the estimation method.

  20. A tridiagonal parsimonious higher order multivariate Markov chain model

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.

  1. Optimizing lengths of confidence intervals: fourth-order efficiency in location models

    NARCIS (Netherlands)

    Klaassen, C.; Venetiaan, S.

    2010-01-01

    Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation

  2. A Reduced-Order Successive Linear Estimator for Geostatistical Inversion and its Application in Hydraulic Tomography

    Science.gov (United States)

    Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng

    2018-03-01

Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (n_d) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (n_y), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (n_y × n_y) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (n_y forward runs). Although employment of the adjoint method can reduce the cost to n_d forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated to order n_kl, and it calculates the directional sensitivities (in the directions of the n_kl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
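
A minimal sketch of the truncation step the estimator relies on (our own toy: an exponential covariance on a 1-D grid rather than the paper's 3-D field):

```python
import numpy as np

# Karhunen-Loeve truncation: eigendecompose an exponential covariance and
# keep the leading n_kl modes that capture most of the prior variance.
n, corr_len = 500, 0.2
s = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(s[:, None] - s[None, :]) / corr_len)   # n x n covariance

vals, vecs = np.linalg.eigh(C)            # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]    # sort descending

energy = np.cumsum(vals) / vals.sum()
n_kl = int(np.searchsorted(energy, 0.95)) + 1
print(n_kl, "modes capture 95% of the variance instead of", n)

# A random-field realization drawn from the truncated expansion:
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_kl)
field = vecs[:, :n_kl] @ (np.sqrt(vals[:n_kl]) * xi)
```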

  3. Estimation of uncertainties from missing higher orders in perturbative calculations

    International Nuclear Information System (INIS)

    Bagnaschi, E.

    2015-05-01

In this proceeding we present the results of our recent study (hep-ph/1409.5036) of the statistical performance of two different approaches to the estimation of Missing Higher-Order Uncertainties (MHOUs) (hep-ph/1307.1843) in perturbation theory: Scale Variation (SV) and the Bayesian model of Cacciari and Houdeau (CH) (hep-ph/1105.5152), which we also extend to observables with initial state hadrons. The behavior of the models is determined by analyzing, on a wide set of observables, how successfully the MHOU intervals they produce predict the next orders. We observe that the Bayesian model behaves consistently, producing intervals at 68% Degree of Belief (DoB) comparable with the scale variation intervals with a rescaling factor r larger than 2 and closer to 4. Concerning SV, our analysis allows the derivation of a heuristic Confidence Level (CL) for the intervals. We find that assigning a CL of 68% to the intervals obtained with the conventional choice of varying the scales within a factor of two with respect to the central scale could potentially lead to an underestimation of the uncertainties in the case of observables with initial state hadrons.

  4. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  5. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in the PSIM software. From that simulation, variables were recorded to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, yet the estimated model offers greater flexibility than the model programmed in the PSIM software.

  6. Health Parameter Estimation with Second-Order Sliding Mode Observer for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Xiaodong Chang

    2017-07-01

In this paper the problem of health parameter estimation in an aero-engine is investigated using an unknown input observer-based methodology, implemented by a second-order sliding mode observer (SOSMO). Unlike conventional state estimator-based schemes, such as Kalman filters (KF) and sliding mode observers (SMO), the proposed scheme uses a “reconstruction signal” to estimate health parameters modeled as artificial inputs; it is not only applicable to long-term health degradation, but also reacts much more quickly in handling abrupt fault cases. In view of the inevitable uncertainties in engine dynamics and modeling, a weighting matrix is created to minimize their effect on estimation by using linear matrix inequalities (LMIs). A big step toward uncertainty modeling is taken compared with our previous SMO-based work, in that uncertainties are considered in a more practical form. Moreover, to avoid chattering in the sliding modes, the super-twisting algorithm (STA) is employed in the observer design. Various simulations are carried out, comparing the KF-based scheme, the SMO-based scheme from our earlier research, and the proposed method. The results consistently demonstrate the capabilities and advantages of the proposed approach in health parameter estimation.
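
The engine model and gains are not given in the abstract; the following generic sketch (plant, gains, and disturbance are all our assumptions) shows the super-twisting mechanism on a scalar system, where the continuous integral term of the STA converges to the unknown input — the same "reconstruction signal" idea used for the health parameters:

```python
import numpy as np

# Generic super-twisting unknown-input observer on a scalar test system
# x' = a*x + d(t), with the full state x measured (illustrative only).
dt, n = 1e-4, 200_000
a, k1, k2 = -1.0, 3.0, 4.0
x, xh, z = 0.0, 0.5, 0.0          # plant state, observer state, STA integral
d_hat = np.zeros(n)
for i in range(n):
    t = i * dt
    d = np.sin(t)                  # unknown input to reconstruct
    x += dt * (a * x + d)          # plant
    e = xh - x                     # observation error
    xh += dt * (a * x - k1 * np.sqrt(abs(e)) * np.sign(e) + z)
    z += dt * (-k2 * np.sign(e))   # integral term -> converges to d
    d_hat[i] = z
# after the finite-time transient, z tracks d(t) = sin(t)
print(np.max(np.abs(d_hat[n//2:] - np.sin(dt * np.arange(n//2, n)))))
```

Because the injection is continuous (the discontinuity sits inside the integrator), the chattering that plagues first-order sliding mode observers is largely avoided, which is the motivation the abstract gives for using the STA.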

  7. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    Science.gov (United States)

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

Vehicle operating speed measured on roadways is a critical component for a host of analyses in the transportation field, including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate

  8. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    Science.gov (United States)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite fault inversion for megathrust earthquakes which rapidly generates good first-order estimates and uncertainties of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude, and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple time window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and the regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting from an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
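
A compact sketch of the regularization step (our own toy problem, not the seismic inversion): Tikhonov-damped least squares with the damping weight chosen by grid search according to the discrepancy principle.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_par, sigma = 200, 50, 0.05
G = rng.standard_normal((n_data, n_par)) / np.sqrt(n_par)  # toy forward operator
m_true = np.zeros(n_par)
m_true[10:20] = 1.0                                        # a "slip patch"
d = G @ m_true + sigma * rng.standard_normal(n_data)

def tikhonov(G, d, lam):
    """Solve min ||G m - d||^2 + lam^2 ||m||^2 via the normal equations."""
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(G.shape[1]), G.T @ d)

target = sigma * np.sqrt(n_data)          # expected noise norm
best = None
for lam in np.logspace(-4, 1, 60):        # grid search, weakest damping first
    m = tikhonov(G, d, lam)
    if np.linalg.norm(G @ m - d) >= target:
        best = (lam, m)                   # discrepancy principle satisfied
        break
lam, m = best
print(lam, np.linalg.norm(m - m_true))
```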

  9. Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery

    OpenAIRE

    King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.

    2018-01-01

Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (… m wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These locations were identified as locations where the river contracted under low flows, exposing a substa...

  10. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

Lellmann, Jan; Morel, Jean-Michel; Schönlieb, Carola-Bibiane

    2013-01-01

We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data.

  11. Marginal and Interaction Effects in Ordered Response Models

    OpenAIRE

    Debdulal Mallick

    2009-01-01

    In discrete choice models the marginal effect of a variable of interest that is interacted with another variable differs from the marginal effect of a variable that is not interacted with any variable. The magnitude of the interaction effect is also not equal to the marginal effect of the interaction term. I present consistent estimators of both marginal and interaction effects in ordered response models. This procedure is general and can easily be extended to other discrete choice models. I ...

  12. Aeroelastic simulation using CFD based reduced order models

    International Nuclear Information System (INIS)

    Zhang, W.; Ye, Z.; Li, H.; Yang, Q.

    2005-01-01

This paper aims at providing an accurate and efficient method for aeroelastic simulation. System identification is used to obtain reduced order models of the unsteady aerodynamics. Unsteady Euler codes are used to compute the output signals, while 3211 multistep input signals are utilized. The LS (least squares) method is used to estimate the coefficients of the input-output difference model. The reduced order models are then used in place of the unsteady CFD code for aeroelastic simulation. The aeroelastic equations are marched by an improved 4th-order Runge-Kutta method that needs to compute the aerodynamic loads only once at every time step. The computed results agree well with those of directly coupled CFD/CSD methods. The computational efficiency is improved by 1∼2 orders of magnitude while still retaining high accuracy. A standard aeroelastic example (the Isogai wing), with an S-type flutter boundary, is computed and analyzed. This behavior arises because the system has more than one neutral point in the Mach range of 0.875∼0.9. (author)
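
    To make the identification step concrete, here is a minimal Python sketch of least-squares estimation of an input-output difference (ARX) model of the kind described; the model orders, the synthetic 3211-style input, and the coefficient values are illustrative assumptions, not the paper's setup:

        import numpy as np

        def fit_arx(u, y, na=2, nb=2):
            """Estimate ARX coefficients from input u and output y by least squares:
            y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j]."""
            n = max(na, nb)
            rows, targets = [], []
            for k in range(n, len(y)):
                past_y = [y[k - i] for i in range(1, na + 1)]
                past_u = [u[k - j] for j in range(1, nb + 1)]
                rows.append(past_y + past_u)
                targets.append(y[k])
            theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
            return theta[:na], theta[na:]          # (a_i, b_j)

        # A 3-2-1-1 multistep input (step durations in the ratio 3:2:1:1)
        # driving a stable second-order system that stands in for the CFD output.
        u = np.concatenate([np.ones(30), -np.ones(20), np.ones(10), -np.ones(10), np.zeros(30)])
        y = np.zeros_like(u)
        for k in range(2, len(u)):
            y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.1 * u[k-2]
        a, b = fit_arx(u, y)
        print(a, b)   # recovers [1.5, -0.7] and [0.5, 0.1] in this noise-free case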

  13. Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints

    DEFF Research Database (Denmark)

    Lanne, Markku; Luoto, Jani

We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimodal posterior distributions caused by parameter redundancy, data-driven identification constraints are imposed. An empirical application illustrates the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model.

  14. A Novel Observer for Lithium-Ion Battery State of Charge Estimation in Electric Vehicles Based on a Second-Order Equivalent Circuit Model

    Directory of Open Access Journals (Sweden)

    Bizhong Xia

    2017-08-01

Full Text Available Accurate state of charge (SOC) estimation can prolong lithium-ion battery life and improve its performance in practice. This paper proposes a new method for SOC estimation. The second-order resistor-capacitor (2RC) equivalent circuit model (ECM) is applied to describe the dynamic behavior of the lithium-ion battery in deriving the state space equations. A novel method for SOC estimation is then presented. This method does not require any matrix calculation, so the computation cost can be very low, making it more suitable for hardware implementation. The Federal Urban Driving Schedule (FUDS), the New European Driving Cycle (NEDC), and the West Virginia Suburban Driving Schedule (WVUSUB) experiments are carried out to evaluate the performance of the proposed method. Experimental results show that the SOC estimation error can converge to the 3% error boundary within 30 seconds when the initial SOC estimation error is 20%, and the proposed method can maintain an estimation error less than 3% with 1% voltage noise and 5% current noise. Further, the proposed method has excellent robustness against parameter disturbance. Also, it has higher estimation accuracy than the extended Kalman filter (EKF), but with decreased hardware requirements and a faster convergence rate.
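
    For readers unfamiliar with the 2RC structure, the following minimal Python sketch simulates the terminal voltage of a second-order RC equivalent circuit under a constant current; the parameter values are invented for illustration and are not those identified in the paper:

        import numpy as np

        def simulate_2rc(current, dt, ocv, r0, r1, c1, r2, c2):
            """Terminal voltage of a 2RC ECM for a current profile (A, discharge > 0):
            two polarization branches plus an ohmic resistance, discretized with dt."""
            v1 = v2 = 0.0                   # polarization voltages of the RC pairs
            out = []
            for i in current:
                v1 = v1 * np.exp(-dt / (r1 * c1)) + r1 * (1 - np.exp(-dt / (r1 * c1))) * i
                v2 = v2 * np.exp(-dt / (r2 * c2)) + r2 * (1 - np.exp(-dt / (r2 * c2))) * i
                out.append(ocv - r0 * i - v1 - v2)
            return np.array(out)

        v = simulate_2rc(current=np.full(60, 2.0), dt=1.0, ocv=3.7,
                         r0=0.05, r1=0.02, c1=1000.0, r2=0.04, c2=20000.0)
        print(v[:3], v[-1])   # voltage sags toward steady state under constant load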

  15. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  16. Optimal heavy tail estimation – Part 1: Order selection

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2017-12-01

Full Text Available The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute-force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
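
    As background, the classical Hill estimator makes the order-selection problem tangible: the estimate of α changes with the order k, which is exactly the choice the paper's simulation-based selector automates. A minimal Python sketch follows; the scan below is not the paper's selector, only the quantity it optimizes over:

        import numpy as np

        def hill_alpha(x, k):
            """Hill estimate of the tail exponent alpha from the k largest values of x."""
            xs = np.sort(x)[::-1]                     # descending order statistics
            logs = np.log(xs[:k]) - np.log(xs[k])
            return 1.0 / np.mean(logs)

        rng = np.random.default_rng(1)
        x = rng.pareto(1.5, size=2000) + 1.0          # Pareto sample, true alpha = 1.5
        orders = np.arange(10, 500, 10)
        estimates = [hill_alpha(x, k) for k in orders]
        for k, a in list(zip(orders, estimates))[:5]:
            print(k, round(a, 3))                     # alpha estimate vs. order k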

  17. Group-ICA model order highlights patterns of functional brain connectivity

    Directory of Open Access Journals (Sweden)

    Ahmed eAbou Elseoud

    2011-06-01

Full Text Available Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both individual subject and group levels. Altering ICA dimensionality (model order) estimation can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared to normal healthy controls. The results showed that the detected disease-related differences in functional connectivity alter as a function of ICA model order. The volume of between-group differences altered significantly as a function of ICA model order, reaching a maximum at model order 70 (which seems to be an optimal point that conveys the largest between-group difference) and stabilizing afterwards. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders show an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.

  18. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency, and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by the Higher-order Statistics (HOS) (fourth-order cumulant matching) method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulating the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method can work well for short time series.

  19. Validity testing of third-order nonlinear models for synchronous generators

    Energy Technology Data Exchange (ETDEWEB)

    Arjona, M.A. [Division de Estudios de Posgrado e Investigacion, Instituto Tecnologico de La Laguna Torreon, Coah. (Mexico); Escarela-Perez, R. [Universidad Autonoma Metropolitana - Azcapotzalco, Departamento de Energia, Av. San Pablo 180, Col. Reynosa, C.P. 02200 (Mexico); Espinosa-Perez, G. [Division de Estudios Posgrado de la Facultad de Ingenieria Universidad Nacional Autonoma de Mexico (Mexico); Alvarez-Ramirez, J. [Universidad Autonoma Metropolitana -Iztapalapa, Division de Ciencias Basicas e Ingenieria (Mexico)

    2009-06-15

Third-order nonlinear models are commonly used in control theory for the analysis of the stability of both open-loop and closed-loop synchronous machines. However, the ability of these models to describe the electrical machine dynamics has not been tested experimentally. This work focuses on this issue by addressing the parameter identification problem for third-order models of synchronous generators. For a third-order model describing the dynamics of the power angle δ, rotor speed ω and quadrature-axis transient EMF E'q, it is shown that the parameters cannot be identified because of the effects of the unknown initial condition of E'q. To avoid this situation, a model that incorporates the measured electrical power dynamics is considered, showing that state measurements guarantee the identification of the model parameters. Data obtained from a 7 kVA lab-scale synchronous generator and from a 150 MVA finite-element simulation were used to show that, at least for the worked examples, the estimated parameters display only moderate variations over the operating region. This suggests that third-order models can suffice to describe the main dynamical features of synchronous generators, and that they can be used to design and tune power system stabilizers and voltage regulators. (author)

  20. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

The cost estimation of software development activities is increasingly critical for large-scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The performance of the NASA Software Cost Model is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.

  1. Global weighted estimates for second-order nondivergence elliptic ...

    Indian Academy of Sciences (India)

    Fengping Yao

    2018-03-21

One of the key a priori estimates in the theory of second-order elliptic … It is well known that the maximal functions satisfy strong p–p … Here we prove the following auxiliary result, which will be a crucial ingredient in the proof.

  2. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    Science.gov (United States)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, which results in the Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  3. Using Count Data and Ordered Models in National Forest Recreation Demand Analysis

    Science.gov (United States)

    Simões, Paula; Barata, Eduardo; Cruz, Luis

    2013-11-01

    This research addresses the need to improve our knowledge on the demand for national forests for recreation and offers an in-depth data analysis supported by the complementary use of count data and ordered models. From a policy-making perspective, while count data models enable the estimation of monetary welfare measures, ordered models allow for the wider use of the database and provide a more flexible analysis of data. The main purpose of this article is to analyse the individual forest recreation demand and to derive a measure of its current use value. To allow a more complete analysis of the forest recreation demand structure the econometric approach supplements the use of count data models with ordered category models using data obtained by means of an on-site survey in the Bussaco National Forest (Portugal). Overall, both models reveal that travel cost and substitute prices are important explanatory variables, visits are a normal good and demographic variables seem to have no influence on demand. In particular, estimated price and income elasticities of demand are quite low. Accordingly, it is possible to argue that travel cost (price) in isolation may be expected to have a low impact on visitation levels.

  4. Estimates for lower order eigenvalues of a clamped plate problem

    OpenAIRE

    Cheng, Qing-Ming; Huang, Guangyue; Wei, Guoxin

    2009-01-01

For a bounded domain $\Omega$ in a complete Riemannian manifold $M^n$, we study estimates for lower order eigenvalues of a clamped plate problem. We obtain universal inequalities for lower order eigenvalues. We would like to remark that our results are sharp.

  5. Identification of reduced-order model for an aeroelastic system from flutter test data

    Directory of Open Access Journals (Sweden)

    Wei Tang

    2017-02-01

Full Text Available Recently, flutter active control using the linear parameter varying (LPV) framework has attracted a lot of attention. LPV control synthesis usually generates controllers that are at least of the same order as the aeroelastic models. Therefore, a reduced-order model is required by the synthesis to avoid large computation cost and a high-order controller. This paper proposes a new procedure for generating accurate reduced-order linear time-invariant (LTI) models by using system identification from flutter testing data. The proposed approach is in two steps. The well-known poly-reference least squares complex frequency (p-LSCF) algorithm is first employed for modal parameter identification from frequency response measurements. After parameter identification, the dominant physical modes are determined by clear stabilization diagrams and a clustering technique. In the second step, with prior knowledge of the physical poles, an improved frequency-domain maximum likelihood (ML) estimator is presented for building an accurate reduced-order model. Before ML estimation, an improved subspace identification considering the pole constraints is also proposed for initializing the iterative procedure. Finally, the performance of the proposed procedure is validated by real flight flutter test data.

  6. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    Science.gov (United States)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transformation (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, this method can estimate the dispersion slope and further calculate the third-order dispersion. The simulation results demonstrate that the estimation error is less than 2% in 28-GBaud dual polarization quadrature phase-shift keying (DP-QPSK) and 28-GBaud dual polarization 16 quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is shown to be robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search with coarse and then fine granularity is used to find the optimal order of the FrFT. The third-order dispersion estimation method based on FrFT can be used to monitor the third-order dispersion in optical fiber systems.

  7. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains, 1990) and E. Gassiat and S. Boucheron (Optimal error exp...

  8. Asymptotic estimates and exponential stability for higher-order monotone difference equations

    Directory of Open Access Journals (Sweden)

    Pituk Mihály

    2005-01-01

    Full Text Available Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.

  10. IDC Reengineering Phase 2 & 3 Rough Order of Magnitude (ROM) Cost Estimate Summary (Leveraged NDC Case).

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James M.; Prescott, Ryan; Dawson, Jericah M.; Huelskamp, Robert M.

    2014-11-01

    Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.

  11. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL

    2015-03-15

This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. © 2014 Elsevier B.V. All rights reserved.
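
    For orientation, this family of estimators rests on the standard extended Kalman filter recursion. A minimal generic Python sketch follows; f, h and their Jacobians F, H are placeholders standing in for the reduced-order electrochemical model, not the authors' equations:

        import numpy as np

        def ekf_step(x, P, u, z, f, F, h, H, Q, R):
            """One predict/update cycle of an EKF with process f and measurement h."""
            x_pred = f(x, u)                       # state prediction
            Fk = F(x, u)
            P_pred = Fk @ P @ Fk.T + Q             # covariance prediction
            Hk = H(x_pred)
            y = z - h(x_pred)                      # innovation
            S = Hk @ P_pred @ Hk.T + R
            K = P_pred @ Hk.T @ np.linalg.inv(S)   # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
            return x_new, P_new

        # Trivial linear usage, where the EKF reduces to the Kalman filter.
        f = lambda x, u: 0.9 * x
        F = lambda x, u: np.array([[0.9]])
        h = lambda x: x
        H = lambda x: np.array([[1.0]])
        x, P = np.array([0.0]), np.eye(1)
        x, P = ekf_step(x, P, u=None, z=np.array([1.0]), f=f, F=F, h=h, H=H,
                        Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
        print(x, P)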

  12. Robust Estimation for a CSTR Using a High Order Sliding Mode Observer and an Observer-Based Estimator

    Directory of Open Access Journals (Sweden)

    Esteban Jiménez-Rodríguez

    2016-12-01

Full Text Available This paper presents an estimation structure for a continuous stirred-tank reactor, which comprises a sliding-mode observer-based estimator coupled with a high-order sliding-mode observer. The whole scheme allows the robust estimation of the state and some parameters, specifically the concentration of the reactive mass, the heat of reaction and the global coefficient of heat transfer, by measuring the temperature inside the reactor and the temperature inside the jacket. To verify the results, a convergence proof of the proposed structure is given, and numerical simulations are presented with noiseless and noisy measurements, suggesting the applicability of the posed approach.

  13. A novel Gaussian model based battery state estimation approach: State-of-Energy

    International Nuclear Information System (INIS)

    He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun

    2015-01-01

Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified by two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model. Moreover, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. Results from a comparative analysis show that the first-order hysteresis battery model is the best choice based on the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets
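
    The AIC-based order selection step can be illustrated with a minimal Python sketch; polynomial orders stand in for hysteresis orders, and the Gaussian-residual form of the AIC is an assumption of the sketch, not the paper's exact formulation:

        import numpy as np

        def aic_from_rss(rss, n, k):
            """AIC (up to an additive constant) for a least-squares fit with
            n points, k parameters, and residual sum of squares rss, assuming
            Gaussian residuals: AIC = n ln(RSS/n) + 2k."""
            return n * np.log(rss / n) + 2 * k

        # Example: fit increasing model orders and keep the smallest AIC.
        rng = np.random.default_rng(2)
        t = np.linspace(0, 1, 200)
        y = 1.0 + 0.5 * t - 2.0 * t**2 + rng.normal(0, 0.05, t.size)
        for order in range(1, 6):
            coef = np.polyfit(t, y, order)
            rss = np.sum((np.polyval(coef, t) - y) ** 2)
            print(order, round(aic_from_rss(rss, t.size, order + 1), 1))
        # The quadratic (true) model should attain the minimum AIC.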

  14. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    Science.gov (United States)

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

This paper proposes a novel frame rate up-conversion method through a high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. The key problem of our method is then to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation by its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  15. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating technical efficiency of decision making units are of great importance in applied production economic works. This paper estimates technical efficiency from the stochastic frontier model using Jondrow, and Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  16. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    Science.gov (United States)

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) simulation and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
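
    A minimal Python sketch of the emulator idea, with an analytic toy model in place of the mechanistic simulator and a purely linear (first-order) regression of the singular-vector weights on the parameters; all names and the toy model are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        params = rng.uniform(0, 1, size=(40, 2))            # 40 runs, 2 parameters
        grid = np.linspace(0, 10, 100)
        outputs = np.array([p[0] * np.sin(grid) + p[1] * grid for p in params])

        # Decompose the ensemble of outputs and keep the leading modes.
        U, s, Vt = np.linalg.svd(outputs, full_matrices=False)
        r = 2                                               # retained modes
        scores = U[:, :r] * s[:r]                           # run-specific weights

        # First-order (linear) model of the scores in terms of the parameters.
        X = np.column_stack([np.ones(len(params)), params])
        B, *_ = np.linalg.lstsq(X, scores, rcond=None)

        def emulate(p):
            """Predict the model output field for a new parameter vector p."""
            score = np.array([1.0, *p]) @ B
            return score @ Vt[:r]

        print(np.abs(emulate([0.3, 0.7]) - (0.3*np.sin(grid) + 0.7*grid)).max())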

  17. Reduced-Order Computational Model for Low-Frequency Dynamics of Automobiles

    Directory of Open Access Journals (Sweden)

    A. Arnoux

    2013-01-01

    Full Text Available A reduced-order model is constructed to predict, for the low-frequency range, the dynamical responses in the stiff parts of an automobile constituted of stiff and flexible parts. The vehicle has then many elastic modes in this range due to the presence of many flexible parts and equipment. A nonusual reduced-order model is introduced. The family of the elastic modes is not used and is replaced by an adapted vector basis of the admissible space of global displacements. Such a construction requires a decomposition of the domain of the structure in subdomains in order to control the spatial wave length of the global displacements. The fast marching method is used to carry out the subdomain decomposition. A probabilistic model of uncertainties is introduced. The parameters controlling the level of uncertainties are estimated solving a statistical inverse problem. The methodology is validated with a large computational model of an automobile.

  18. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

    Lellmann, Jan

    2013-01-01

    We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data. © 2013 Springer-Verlag.

  19. Are Low-order Covariance Estimates Useful in Error Analyses?

    Science.gov (United States)

    Baker, D. F.; Schimel, D.

    2005-12-01

Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb-Shanno method)…

  20. A reduced-order adaptive neuro-fuzzy inference system model as a software sensor for rapid estimation of five-day biochemical oxygen demand

    Science.gov (United States)

    Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin

    2013-07-01

The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models based on both the developed discrepancy ratio and threshold statistics revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best-fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, the uncertainty analysis of the developed models indicated that the selected ROANFIS had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the bracketed predictions by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.

  1. Kinetic parameter estimation model for anaerobic co-digestion of waste activated sludge and microalgae.

    Science.gov (United States)

    Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong

    2017-03-01

    Anaerobic co-digestion has a potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrate as indicators. The models were applied to the modified first-order kinetics and Monod model to determine the rate of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for the estimation of the first-order kinetic coefficients, while the model using inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters for not only microalgae-WAS co-digestion but also other substrates' co-digestion such as microalgae-swine manure and WAS-aquatic plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
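
    The regression idea can be sketched as follows in Python; the hyperbola and inverse-tangent blends, the shape coefficient a, and the single-substrate rate constants are illustrative assumptions, not the fitted forms or values from the paper:

        import numpy as np

        def k_hyperbola(ratio, k_algae, k_was, a=1.0):
            """Hyperbola-type blend of two single-substrate first-order constants,
            weighted by the co-substrate mixing ratio."""
            w = ratio / (a + ratio)            # saturating weight of the co-substrate
            return (1 - w) * k_was + w * k_algae

        def k_arctan(ratio, k_algae, k_was, a=1.0):
            """Inverse-tangent blend of the two single-substrate parameters."""
            w = (2 / np.pi) * np.arctan(a * ratio)
            return (1 - w) * k_was + w * k_algae

        for r in [0.0, 0.5, 1.0, 4.0]:
            print(r, round(k_hyperbola(r, 0.30, 0.15), 3),
                  round(k_arctan(r, 0.30, 0.15), 3))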

  2. Multivariable robust adaptive controller using reduced-order model

    Directory of Open Access Journals (Sweden)

    Wei Wang

    1990-04-01

Full Text Available In this paper a multivariable robust adaptive controller is presented for a plant with bounded disturbances and unmodeled dynamics due to plant-model order mismatches. The robust stability of the closed-loop system is achieved by using the normalization technique and a least squares parameter estimation scheme with dead zones. The weighting polynomial matrices are incorporated into the control law, so that open-loop unstable and/or nonminimum phase plants can be handled.

  3. Motion estimation by data assimilation in reduced dynamic models

    International Nuclear Information System (INIS)

    Drifi, Karim

    2013-01-01

Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows better analysis and forecasting of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion in image sequences. As one of the major drawbacks of applying these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method in order to significantly decrease both. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain.

  4. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

Full Text Available Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually leads to the minimization of the level of uncertainty while still producing "good" point estimates of budget and duration given the well type. In this paper, copula probability theory is used in order to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of the MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between the MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, where the research study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate-to-weak dependence on the MRI, which means that the variation in these wells can be related to the MRI, but the MRI is not its primary source.

  5. Towards the Development of a Second-Order Approximation in Activity Coefficient Models Based on Group Contributions

    DEFF Research Database (Denmark)

    Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul

    1996-01-01

A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups are in many instances found insufficient, since the first-order groups … The proposed second-order approximation offers improved correlation/prediction capabilities, distinction between isomers, and the ability to overcome proximity effects.

  6. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
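
    For reference, the equal-probability discrete gamma that the article uses as its baseline can be computed in a few lines of Python; representing each category by its quantile midpoint is one common approximation (some implementations use category means instead):

        import numpy as np
        from scipy.stats import gamma

        def discrete_gamma_rates(alpha, ncat):
            """Category rates for an equal-probability discrete gamma(alpha):
            ncat categories of probability 1/ncat, normalized to mean rate 1."""
            probs = (np.arange(ncat) + 0.5) / ncat          # quantile midpoints
            rates = gamma.ppf(probs, a=alpha, scale=1.0 / alpha)
            return rates / rates.mean()

        print(discrete_gamma_rates(alpha=0.5, ncat=4))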

  7. A Mathematical Modelling Approach to One-Day Cricket Batting Orders

    Science.gov (United States)

    Bukiet, Bruce; Ovens, Matthews

    2006-01-01

While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non-identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key Points: Batting order does affect the expected runs distribution in one-day cricket. One-day cricket has fewer data points than baseball; thus extreme values have a greater effect on estimated probabilities. Dismissals are rare and the probabilities very small by comparison to baseball. The probability distribution for lower order batsmen is potentially skewed due to increased risk taking. Full enumeration of all possible line-ups is impractical using a single average computer. PMID:24357943

  8. Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation

    International Nuclear Information System (INIS)

    Gao, Y; Guo, Y; Chi, Y L; Qin, S R

    2006-01-01

Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking method for rotating machinery is introduced, in which a peak search algorithm applied to the spectrogram of a time-frequency analysis is employed to obtain the IFE of the vibrations. An improvement to the peak search is proposed, which prevents strong non-order components or noise from disturbing the peak search. Compared with traditional methods of order tracking, IFE-based order tracking is simple to apply and depends only on software. Tests verify the validity of the method. This method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is conceivable.
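
    A minimal Python sketch of spectrogram peak-search IFE on a synthetic run-up signal; the simple band-limited search below is only a stand-in for the paper's improved, disturbance-robust peak search:

        import numpy as np
        from scipy.signal import spectrogram

        fs = 2000.0
        t = np.arange(0, 4, 1 / fs)
        x = np.sin(2 * np.pi * (20 * t + 10 * t**2))       # linear run-up: 20 -> 100 Hz

        f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
        ife = []
        prev = 20.0                                        # rough initial frequency
        for col in Sxx.T:
            band = np.abs(f - prev) < 30.0                 # search near previous peak
            idx = np.flatnonzero(band)[np.argmax(col[band])]
            prev = f[idx]
            ife.append(prev)
        print(ife[:3], ife[-3:])                           # tracks the rising order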

  9. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

In this paper, a second-order linear differential equation is considered, and an accurate estimation method for its characteristic exponents is presented. Finally, we give some examples to verify the feasibility of our result.

  10. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative…
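
    For readers new to FORM, a minimal Python sketch of the Hasofer-Lind-Rackwitz-Fiessler iteration in standard normal space follows; the linear limit state is an illustrative placeholder, not the pultrusion model's CDOCE or Tmax constraint:

        import numpy as np
        from scipy.stats import norm

        def form_beta(g, grad_g, u0, tol=1e-8, itmax=100):
            """Reliability index beta for the limit state g(u) = 0 in standard space."""
            u = np.asarray(u0, dtype=float)
            for _ in range(itmax):
                gv, dg = g(u), grad_g(u)
                u_new = (np.dot(dg, u) - gv) * dg / np.dot(dg, dg)   # HL-RF step
                if np.linalg.norm(u_new - u) < tol:
                    u = u_new
                    break
                u = u_new
            return np.linalg.norm(u)

        # Linear example: g(u) = 3 - u1 - u2, exact beta = 3 / sqrt(2).
        g = lambda u: 3.0 - u[0] - u[1]
        grad_g = lambda u: np.array([-1.0, -1.0])
        beta = form_beta(g, grad_g, u0=[0.0, 0.0])
        print(beta, norm.cdf(-beta))        # reliability index and failure probability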

  11. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components (ability, capacity, and availability), where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed, as are future directions and limitations.
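
    The three components can be made concrete with a minimal sketch of the Michaelis-Menten growth function; the parameter values are invented for illustration:

        def ability(t, capacity, half_time):
            """Michaelis-Menten growth: skill at time t, approaching `capacity`
            asymptotically; half_time is the time at which half of capacity
            is reached."""
            return capacity * t / (half_time + t)

        capacity = 100.0          # asymptotic maximum ability
        t = 4.0
        a = ability(t, capacity, half_time=2.0)
        availability = capacity - a
        print(a, availability)    # ability ~66.7, availability ~33.3 at t = 4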

  12. Robust estimation and moment selection in dynamic fixed-effects panel data models

    NARCIS (Netherlands)

    Cizek, Pavel; Aquaro, Michele

Considering linear dynamic panel data models with fixed effects, existing outlier-robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their…

  13. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills

    DEFF Research Database (Denmark)

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-01-01

Methane (CH4) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the (IPCC and Afvalzorg) models' required … categories. In general, the single-phase model, LandGEM, significantly overestimated CH4 generation, because it applied default values for key parameters that are too high for low-organic waste scenarios. The key parameters were the biochemical CH4 potential (BMP) and the CH4 generation rate constant (k… …landfills (from the start of disposal until 2020 and until 2100). Through a CH4 mass balance approach, fugitive CH4 emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. The aggregated results were in good agreement with field…

  14. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of the rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously along with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures was able to produce the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow superior to that simulated by a traditional calibration approach is a…
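
    A minimal Python sketch of the dimensionality-reduction step (requires the PyWavelets package; the wavelet choice, decomposition level, and toy series are assumptions of the sketch, not the study's settings):

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        rain = np.maximum(rng.normal(0, 1, 256).cumsum() * 0.05, 0.0)  # toy series

        # Represent the series by the approximation coefficients of a
        # low-order discrete wavelet decomposition.
        coeffs = pywt.wavedec(rain, 'db4', level=3)
        approx = coeffs[0]                              # low-dimensional description
        print(rain.size, approx.size)                   # e.g. 256 -> ~38 unknowns

        # Reconstruct with the detail coefficients zeroed: the reduced input.
        reduced = [approx] + [np.zeros_like(c) for c in coeffs[1:]]
        rain_hat = pywt.waverec(reduced, 'db4')
        print(np.corrcoef(rain, rain_hat[:rain.size])[0, 1])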

  15. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

Kammoun, Abla; Couillet, Romain; Pascal, Frédéric

    2015-01-01

… In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of the covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved…

  16. Ordering dynamics of microscopic models with nonconserved order parameter of continuous symmetry

    DEFF Research Database (Denmark)

    Zhang, Z.; Mouritsen, Ole G.; Zuckermann, Martin J.

    1993-01-01

Numerical Monte Carlo temperature-quenching experiments have been performed on two three-dimensional classical lattice models with continuous ordering symmetry: the Lebwohl-Lasher model [Phys. Rev. A 6, 426 (1972)] and the ferromagnetic isotropic Heisenberg model. Both models describe a transition from a disordered phase to an orientationally ordered phase of continuous symmetry. The Lebwohl-Lasher model accounts for the orientational ordering properties of the nematic-isotropic transition in liquid crystals and the Heisenberg model for the ferromagnetic-paramagnetic transition in magnetic crystals. For both models, which have a nonconserved order parameter, it is found that the linear scale, R(t), of the evolving order, following quenches to below the transition temperature, grows at late times in an effectively algebraic fashion, R(t) ∼ t^n, with exponent values which are strongly temperature dependent…

  17. A case study to estimate costs using Neural Networks and regression based models

    Directory of Open Access Journals (Sweden)

    Nadia Bhuiyan

    2012-07-01

Full Text Available Bombardier Aerospace's high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace was conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is carried out in order to determine the most accurate method for predicting the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to that of the parametric models for this case study.

  18. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its desirable asymptotic properties: the estimator is consistent and asymptotically unbiased as the sample size increases to infinity. Moreover, the parameter estimates obtained by maximum likelihood have the smallest variance among competing statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
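
    A minimal sketch of fitting a two-component mixture by maximum likelihood via the EM algorithm; the component family (Gaussian here), the initialisation and the data are assumptions, since the record does not specify them.

```python
# EM for a two-component Gaussian mixture: E-step computes responsibilities,
# M-step performs weighted maximum-likelihood updates of the parameters.
import numpy as np

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_component(x, iters=200):
    # crude initialisation from the sample quartiles
    w, mu = 0.5, np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        p1 = w * norm_pdf(x, mu[0], sd[0])
        p2 = (1 - w) * norm_pdf(x, mu[1], sd[1])
        r = p1 / (p1 + p2)
        # M-step: weighted ML updates
        w = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                      np.average((x - mu[1]) ** 2, weights=1 - r)])
    return w, mu, sd

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 0.5, 300)])
print(em_two_component(x))
```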

  19. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem

    2017-01-01

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations.

  20. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
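
    A hedged sketch of the tornado-diagram idea used as technique (1): vary one branch choice at a time over its range, hold the rest at baseline, and rank branches by the swing in the loss metric. The `loss_model` function and parameter ranges below are hypothetical stand-ins, not UCERF3 quantities.

```python
# Tornado-diagram sensitivity: swing in a scalar loss metric when each
# logic-tree choice varies over its range while the others stay at baseline.
import numpy as np

def loss_model(params):
    # hypothetical smooth response surface over three branch weights
    a, b, c = params
    return 100 * a + 40 * b ** 2 + 5 * c

baseline = {"a": 0.5, "b": 0.5, "c": 0.5}
ranges = {"a": (0.2, 0.8), "b": (0.0, 1.0), "c": (0.1, 0.9)}

swings = {}
for name, (lo, hi) in ranges.items():
    out = []
    for v in (lo, hi):
        p = dict(baseline, **{name: v})
        out.append(loss_model((p["a"], p["b"], p["c"])))
    swings[name] = abs(out[1] - out[0])

# Branches with small swing are candidates for fixing at a single value --
# the mechanism by which a reduced-order tree (60 of 57,600 leaves) arises.
for name, s in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: swing = {s:.1f}")
```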

  1. A simplified parsimonious higher order multivariate Markov chain model with new convergence condition

    Science.gov (United States)

    Wang, Chao; Yang, Chuan-sheng

    2017-09-01

    In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.

  2. Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model

    Directory of Open Access Journals (Sweden)

    Kese Pontes Freitas Alberton

    2015-01-01

    Full Text Available This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability in the Escherichia coli K-12 W3110 dynamic model was investigated; the model comprises 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since the model could be fitted to most of the measured metabolites even when important measurements of intracellular metabolites and good initial parameter estimates are not available.

  3. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

    Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer two questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? To answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class, with a minimal number of parts, to represent the process law. From the strategy, we derive a measure that establishes a metric on the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
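
    A small sketch of the partition idea: estimate each state's empirical transition row and group states whose rows (nearly) coincide. The distance measure and tolerance below are illustrative; the paper's consistent selection criterion is more refined.

```python
# States whose empirical transition rows are (nearly) identical are grouped,
# shrinking the number of free parameters of the Markov chain.
import numpy as np

def transition_rows(seq, alphabet):
    k = len(alphabet)
    idx = {s: i for i, s in enumerate(alphabet)}
    counts = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def partition(rows, tol=0.05):
    parts = []
    for i, r in enumerate(rows):
        for part in parts:
            if np.abs(rows[part[0]] - r).sum() < tol:  # L1 distance between rows
                part.append(i)
                break
        else:
            parts.append([i])
    return parts

rng = np.random.default_rng(2)
seq = rng.choice([0, 1, 2], size=5000, p=[0.5, 0.3, 0.2]).tolist()  # i.i.d. => one part
print(partition(transition_rows(seq, [0, 1, 2])))
```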

  4. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

    At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
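
    A minimal sketch of the approach, assuming scikit-learn: simulate noisy biexponential curves, train a small feed-forward network to map curves to parameters, then estimate with a single forward pass. The architecture, parameter ranges and noise level are assumptions, not the paper's settings.

```python
# Train an MLP to invert y(t) = A1*exp(-k1*t) + A2*exp(-k2*t) for (A1,k1,A2,k2).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)

def simulate(n):
    A1, A2 = rng.uniform(0.5, 2.0, (2, n))
    k1, k2 = rng.uniform(0.05, 0.3, n), rng.uniform(0.5, 2.0, n)
    y = A1[:, None] * np.exp(-k1[:, None] * t) + A2[:, None] * np.exp(-k2[:, None] * t)
    y += rng.normal(0, 0.02, y.shape)  # measurement noise
    return y, np.column_stack([A1, k1, A2, k2])

X_train, p_train = simulate(5000)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, p_train)          # one-off training cost...

X_test, p_test = simulate(100)
p_hat = net.predict(X_test)        # ...then estimation is a single forward pass
print(np.mean(np.abs(p_hat - p_test), axis=0))  # mean absolute error per parameter
```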

  5. Performance of a reduced-order FSI model for flow-induced vocal fold vibration

    Science.gov (United States)

    Luo, Haoxiang; Chang, Siyuan; Chen, Ye; Rousseau, Bernard; PhonoSim Team

    2017-11-01

    Vocal fold vibration during speech production involves a three-dimensional unsteady glottal jet flow and three-dimensional nonlinear tissue mechanics. A full 3D fluid-structure interaction (FSI) model is computationally expensive even though it provides the most accurate information about the system. On the other hand, an efficient reduced-order FSI model is useful for fast simulation and analysis of the vocal fold dynamics, which can be applied in procedures such as optimization and parameter estimation. In this work, we study the performance of a reduced-order model compared with the corresponding full 3D model in terms of its accuracy in predicting the vibration frequency and deformation mode. In the reduced-order model, we use a 1D flow model coupled with a 3D tissue model that is the same as in the full 3D model. Two different hyperelastic tissue behaviors are assumed. In addition, the vocal fold thickness and subglottal pressure are varied for systematic comparison. The results show that the reduced-order model provides predictions consistent with the full 3D model across different tissue material assumptions and subglottal pressures. However, the vocal fold thickness has the greatest effect on the model accuracy, especially when the vocal fold is thin.

  6. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Directory of Open Access Journals (Sweden)

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that one area offering significant reserves in procurement logistics is the rational management of raw material stocks. Currently, the main purpose of projects that increase the efficiency of inventory management is to rationalise all activities in this area while taking into account and minimising total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on an estimate of the optimal minimum emergency stock level of raw materials, which signals the need for an order to be placed immediately, together with the optimal size of the consignments ordered once the minimum emergency level has been reached. The second model is based on an estimate of the maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models is based on a prior selection of rational methods for forecasting the time series of deliveries of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasts of the mean size of a delivered batch and its standard deviation.
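
    A hedged sketch of the two control quantities described: a minimum emergency stock level (reorder point) for the first model, and an order-up-to level for the standing order cycle of the second. The demand statistics, lead times and service factor below are illustrative values, not the plant's.

```python
# Two classical stock-control quantities from forecast demand statistics.
import math

mean_daily_use = 120       # filters/day, from the demand forecast
sd_daily_use = 30          # forecast standard deviation
lead_time_days = 5
review_cycle_days = 14
z = 1.65                   # service factor, ~95% service level

# Model 1: order immediately when stock falls to this minimum emergency level
reorder_point = (mean_daily_use * lead_time_days
                 + z * sd_daily_use * math.sqrt(lead_time_days))

# Model 2: standing order cycle -- order up to the maximum level each cycle
cover = review_cycle_days + lead_time_days
max_level = mean_daily_use * cover + z * sd_daily_use * math.sqrt(cover)

print(f"minimum emergency stock: {reorder_point:.0f} units")
print(f"order-up-to level every {review_cycle_days} days: {max_level:.0f} units")
```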

  7. A Real-Time Joint Estimator for Model Parameters and State of Charge of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Jianping Gao

    2015-08-01

    Full Text Available Accurate state of charge (SoC) estimation of batteries plays an important role in promoting the commercialization of electric vehicles. The main work to be done in accurately determining battery SoC can be summarized in three parts. (1) In view of the model-based SoC estimation flow diagram, the n-order resistance-capacitance (RC) battery model is proposed and expected to accurately simulate the battery's major time-variable, nonlinear characteristics. Then, the mathematical equations for model parameter identification and SoC estimation of this model are constructed. (2) The Akaike information criterion is used to determine an optimal tradeoff between battery model complexity and prediction precision for the n-order RC battery model. Results from a comparative analysis show that the first-order RC battery model is best according to the Akaike information criterion (AIC) values. (3) The real-time joint estimator for the model parameters and SoC is constructed, and application to two battery types indicates that the proposed SoC estimator is a closed-loop identification system in which the model parameter identification and SoC estimation are corrected mutually, adaptively and simultaneously according to the observed values. The maximum SoC estimation error is less than 1% for both battery types, even with an inaccurate initial SoC.
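
    A minimal sketch of step (2), the AIC-based order selection: fit first- and second-order RC relaxation models to a voltage transient and compare AIC = n·ln(RSS/n) + 2k. The data are synthetic and the fitting details are assumptions, not the paper's procedure.

```python
# Compare 1st- and 2nd-order RC relaxation fits V(t) = V0 + sum_i a_i*exp(-t/tau_i)
# by the Akaike information criterion.
import numpy as np
from scipy.optimize import curve_fit

def rc1(t, v0, a1, tau1):
    return v0 + a1 * np.exp(-t / tau1)

def rc2(t, v0, a1, tau1, a2, tau2):
    return v0 + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(4)
t = np.linspace(0, 600, 300)  # seconds of post-pulse relaxation
v = rc1(t, 3.6, 0.08, 90.0) + rng.normal(0, 0.002, t.size)  # truth is 1st order

for model, p0 in [(rc1, (3.5, 0.1, 50)), (rc2, (3.5, 0.05, 30, 0.05, 200))]:
    p, _ = curve_fit(model, t, v, p0=p0, maxfev=20000)
    rss = np.sum((v - model(t, *p)) ** 2)
    aic = t.size * np.log(rss / t.size) + 2 * len(p)  # lower is better
    print(f"{model.__name__}: AIC = {aic:.1f}")
```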

  8. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a

  9. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariance matrices in order to limit the impact of transport model errors on estimated methane fluxes.

  10. DYNAMIC ESTIMATION FOR PARAMETERS OF INTERFERENCE SIGNALS BY THE SECOND ORDER EXTENDED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    P. A. Ermolaev

    2014-03-01

    Full Text Available Data processing in interferometer systems requires high-resolution and high-speed algorithms. Recurrence algorithms based on a parametric representation of signals process signal samples sequentially. In some cases recurrence algorithms make it possible to increase the speed and quality of data processing as compared with classical processing methods. The dependence of the measured interferometer signal on the parameters of its model is, in general, nonlinear, and noise formation in the system is stochastic, so the use of nonlinear stochastic filtering algorithms is expedient for processing such signals. The extended Kalman filter, which linearizes the state and output equations using first-order derivatives with respect to the parameter vector, is an example of such an algorithm. To decrease the approximation error of this method, second-order extended Kalman filtering is suggested, which additionally uses second-order derivatives of the model equations with respect to the parameter vector. Examples of algorithm implementation with different sets of estimated parameters are described. The proposed algorithm makes it possible to increase the quality of data processing in interferometer systems in which signals are formed according to the considered models. The standard deviation of the estimated amplitude envelope does not exceed 4% of its maximum. It is shown that the signal-to-noise ratio of the reconstructed signal is increased by 60%.
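
    For orientation, here is a first-order EKF sketch for an interference-style measurement y_k = a·cos(ωk + φ) + noise, estimating x = [a, φ]; the method in the record adds second-order (Hessian) correction terms to this linearisation, which are omitted here. Frequency, noise levels and initial guesses are assumptions.

```python
# First-order EKF for static parameters of a sinusoidal measurement model.
import numpy as np

rng = np.random.default_rng(5)
w = 0.3                        # known angular frequency (assumption)
a_true, phi_true = 1.0, 0.7
ks = np.arange(200)
y = a_true * np.cos(w * ks + phi_true) + rng.normal(0, 0.1, ks.size)

x = np.array([0.5, 0.0])       # initial guesses for [a, phi]
P = np.eye(2)
Q, R = 1e-8 * np.eye(2), 0.1 ** 2

for k, yk in zip(ks, y):
    P = P + Q                                      # static-parameter prediction
    h = x[0] * np.cos(w * k + x[1])                # predicted measurement
    H = np.array([np.cos(w * k + x[1]),            # dh/da
                  -x[0] * np.sin(w * k + x[1])])   # dh/dphi
    S = H @ P @ H + R                              # innovation variance
    K = P @ H / S                                  # Kalman gain
    x = x + K * (yk - h)
    P = P - np.outer(K, H @ P)

print(x)  # should approach [1.0, 0.7]
```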

  11. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
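
    A hedged sketch of the cheapest model in that hierarchy: a Young-Tsai-type algebraic pressure-drop law ΔP = aQ + bQ² across the stenosis, from which FFR follows directly. The coefficients and flow value below are illustrative, not the study's.

```python
# Reduced-order algebraic FFR estimate: viscous + expansion-loss pressure drop.
a = 0.9        # mmHg per (mL/s): viscous term, depends on geometry and viscosity
b = 0.05       # mmHg per (mL/s)^2: expansion-loss (inertial) term
Pa = 90.0      # mean aortic pressure, mmHg
Q_hyper = 8.0  # hyperaemic flow, mL/s

dP = a * Q_hyper + b * Q_hyper ** 2
FFR = (Pa - dP) / Pa
print(f"dP = {dP:.1f} mmHg, FFR ~= {FFR:.2f}")  # FFR < 0.80 suggests significant stenosis
```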

  12. Stochastic Parameter Estimation of Non-Linear Systems Using Only Higher Order Spectra of the Measured Response

    Science.gov (United States)

    Vasta, M.; Roberts, J. B.

    1998-06-01

    Methods for using fourth order spectral quantities to estimate the unknown parameters in non-linear, randomly excited dynamic systems are developed. Attention is focused on the case where only the response is measurable and the excitation is unmeasurable and known only in terms of a stochastic process model. The approach is illustrated through application to a non-linear oscillator with both non-linear damping and stiffness and with excitation modelled as a stationary Gaussian white noise process. The methods have applications in studies of the response of structures to random environmental loads, such as wind and ocean wave forces.

  13. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla

    2015-08-12

    This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.

  14. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  15. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Czech Academy of Sciences Publication Activity Database

    Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří

    2008-01-01

    Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf

  16. Fuel Burn Estimation Model

    Science.gov (United States)

    Chatterji, Gano

    2011-01-01

    Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  17. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
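
    A minimal p >> n sketch with the LASSO, assuming scikit-learn; the dimensions, sparsity pattern and regularisation strength are arbitrary choices for illustration.

```python
# Sparse recovery with the LASSO when variables far outnumber samples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 50, 1000                       # far more variables than samples
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 2.5, -1]      # only 5 active variables
y = X @ beta + rng.normal(0, 0.5, n)

fit = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(fit.coef_)  # indices with nonzero coefficients
print(selected)                       # ideally close to {0, 1, 2, 3, 4}
```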

  18. Partially ordered models

    NARCIS (Netherlands)

    Fernandez, R.; Deveaux, V.

    2010-01-01

    We provide a formal definition and study the basic properties of partially ordered chains (POC). These systems were proposed to model textures in image processing and to represent independence relations between random variables in statistics (in the latter case they are known as Bayesian networks).

  19. Source term boundary adaptive estimation in a first-order 1D hyperbolic PDE: Application to a one loop solar collector through

    KAUST Repository

    Mechhoud, Sarra; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term and only boundary measurements are available.

  20. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through hybrid pulse power characterization tests. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the model error and the SOC estimation error are studied in terms of the standard deviation and normalized RMSE; these quantities exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.

  1. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    Science.gov (United States)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
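
    A small sketch of the POD step, assuming the standard snapshot-SVD construction: collect model snapshots, take the SVD, and keep the leading modes that capture most of the energy, so that each GA fitness evaluation can work in the reduced space. The snapshot data here are synthetic, not groundwater-model output.

```python
# Proper orthogonal decomposition via the SVD of a snapshot matrix.
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_snaps = 2000, 60
grid = np.linspace(0, 1, n_nodes)[:, None]
# synthetic snapshots: two smooth spatial modes with random weights, plus noise
snapshots = (np.sin(np.pi * grid) @ rng.normal(size=(1, n_snaps))
             + np.sin(3 * np.pi * grid) @ rng.normal(size=(1, n_snaps))
             + 1e-3 * rng.normal(size=(n_nodes, n_snaps)))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999) + 1)   # modes capturing 99.9% of energy
basis = U[:, :r]
print(f"reduced dimension: {r} (from {n_nodes})")

# A full-space field h is then approximated as basis @ (basis.T @ h).
```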

  2. Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery

    Science.gov (United States)

    King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.

    2018-02-01

    Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of synthetic rating curves produced through sensitivity analyses show that reasonable ranges of parameter values result in mean percent errors in predicted discharges of 12%-27%.

  3. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.

  4. Gridded rainfall estimation for distributed modeling in western mountainous areas

    Science.gov (United States)

    Moreda, F.; Cong, S.; Schaake, J.; Smith, M.

    2006-05-01

    Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall

  5. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    International Nuclear Information System (INIS)

    Takaishi, Tetsuya

    2014-01-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimation of the RSV model
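
    For reference, a leapfrog-HMC sketch on a toy standard-normal target; per the record, the 2MNI replaces this standard integrator with a different arrangement of the position/momentum sub-steps inside `integrate`. Step size and trajectory length are arbitrary choices.

```python
# Hybrid (Hamiltonian) Monte Carlo with the conventional leapfrog integrator.
import numpy as np

rng = np.random.default_rng(8)

def grad_logp(x):            # toy target: standard normal, log p(x) = -x^2/2
    return -x

def integrate(x, p, eps, L):
    p = p + 0.5 * eps * grad_logp(x)    # half step in momentum
    for _ in range(L - 1):
        x = x + eps * p                 # full step in position
        p = p + eps * grad_logp(x)      # full step in momentum
    x = x + eps * p
    p = p + 0.5 * eps * grad_logp(x)    # final half step
    return x, p

def hmc(n, eps=0.2, L=10):
    x, out = 0.0, []
    for _ in range(n):
        p0 = rng.normal()
        x1, p1 = integrate(x, p0, eps, L)
        h0 = 0.5 * x ** 2 + 0.5 * p0 ** 2   # Hamiltonian before
        h1 = 0.5 * x1 ** 2 + 0.5 * p1 ** 2  # Hamiltonian after
        if rng.uniform() < np.exp(h0 - h1): # Metropolis accept/reject
            x = x1
        out.append(x)
    return np.array(out)

draws = hmc(5000)
print(draws.mean(), draws.std())  # ~0 and ~1 for the toy target
```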

  6. Modeling and estimation of a low degree geopotential model from terrestrial gravity data

    Science.gov (United States)

    Pavlis, Nikolaos K.

    1988-01-01

    The development of appropriate modeling and adjustment procedures for the estimation of harmonic coefficients of the geopotential from surface gravity data was studied, in order to provide an optimum way of utilizing the terrestrial gravity information in combination solutions currently developed at NASA/Goddard Space Flight Center for use in the TOPEX/POSEIDON mission. The mathematical modeling was based on the fundamental boundary condition of the linearized Molodensky boundary value problem. Atmospheric and ellipsoidal corrections were applied to the surface anomalies. Terrestrial gravity solutions were found to be in good agreement with the satellite ones over areas which are gravimetrically well surveyed, such as North America or Australia. However, systematic differences between the terrestrial-only models and GEMT1 were found over extended regions in Africa, the Soviet Union, and China. In Africa, gravity anomaly differences on the order of 20 mgals and undulation differences on the order of 15 meters occur over regions extending 2000 km in diameter. Comparisons of the GEMT1-implied undulations with 32 well-distributed Doppler-derived undulations gave an RMS difference of 2.6 m, while corresponding comparisons with undulations implied by the terrestrial solution gave an RMS difference on the order of 15 m, which implies that the terrestrial data in that region are substantially in error.

  7. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models, and to quantify the uncertainties in the estimated property values from a process design point of view. This includes: (i) parameter estimation using... The comparison of model prediction uncertainties with the reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column...) is also considered. The developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce the most critical uncertainties.

  8. New models for estimating the carbon sink capacity of Spanish softwood species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Rio, M. del; Montero, G.

    2011-07-01

    Quantifying the carbon balance in forests is one of the main challenges in forest management. Forest carbon stocks are usually estimated indirectly through biomass equations applied to forest inventories, frequently considering different tree biomass components. The aim of this study is to develop systems of equations for predicting tree biomass components for the main forest softwood species in Spain: Abies alba Mill., A. pinsapo Boiss., Juniperus thurifera L., Pinus canariensis Sweet ex Spreng., P. halepensis Mill., P. nigra Arn., P. pinaster Ait., P. pinea L., P. sylvestris L., P. uncinata Mill. For each species, a system of additive biomass models was fitted using seemingly unrelated regression. Diameter at breast height and total height were used as independent variables. Diameter appears in all component models, while tree height was included in the stem component model of all species and in some branch component equations. Total height was included in order to improve biomass estimates at different sites. These biomass models were compared with previously available equations in order to test their accuracy, and they yielded better fitting statistics in all cases. Moreover, the models fulfil the additivity property. We also developed root:shoot ratios in order to determine the partitioning into aboveground and belowground biomass. A number of differences were found between species, with a minimum of 0.183 for A. alba and a maximum of 0.385 for P. uncinata. The mean value for the softwood species studied was 0.265. Since the Spanish National Forest Inventory (NFI) records the species, diameter and height of sample trees, these biomass models and ratios can be used to accurately estimate carbon stocks from NFI data. (Author) 55 refs.

  9. New models for estimating the carbon sink capacity of Spanish softwood species

    International Nuclear Information System (INIS)

    Ruiz-Peinado, R.; Rio, M. del; Montero, G.

    2011-01-01

    Quantifying the carbon balance in forests is one of the main challenges in forest management. Forest carbon stocks are usually estimated indirectly through biomass equations applied to forest inventories, frequently considering different tree biomass components. The aim of this study is to develop systems of equations for predicting tree biomass components for the main forest softwood species in Spain: Abies alba Mill., A. pinsapo Boiss., Juniperus thurifera L., Pinus canariensis Sweet ex Spreng., P. halepensis Mill., P. nigra Arn., P. pinaster Ait., P. pinea L., P. sylvestris L., P. uncinata Mill. For each species, a system of additive biomass models was fitted using seemingly unrelated regression. Diameter at breast height and total height were used as independent variables. Diameter appears in all component models, while tree height was included in the stem component model of all species and in some branch component equations. Total height was included in order to improve biomass estimates at different sites. These biomass models were compared with previously available equations in order to test their accuracy, and they yielded better fitting statistics in all cases. Moreover, the models fulfil the additivity property. We also developed root:shoot ratios in order to determine the partitioning into aboveground and belowground biomass. A number of differences were found between species, with a minimum of 0.183 for A. alba and a maximum of 0.385 for P. uncinata. The mean value for the softwood species studied was 0.265. Since the Spanish National Forest Inventory (NFI) records the species, diameter and height of sample trees, these biomass models and ratios can be used to accurately estimate carbon stocks from NFI data. (Author) 55 refs.

  10. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    Science.gov (United States)

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns, as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for >10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of >4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P ...). Order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  11. Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model.

    Science.gov (United States)

    Bolster, Carl H; Vadas, Peter A

    2013-07-01

    Models are often used to predict phosphorus (P) loss from agricultural fields. Although it is commonly recognized that model predictions are inherently uncertain, few studies have addressed prediction uncertainties using P loss models. In this study we assessed the effect of model input error on predictions of annual P loss by the Annual P Loss Estimator (APLE) model. Our objectives were (i) to conduct a sensitivity analysis for all APLE input variables to determine which variables the model is most sensitive to, (ii) to determine whether the relatively easy-to-implement first-order approximation (FOA) method provides accurate estimates of model prediction uncertainties by comparing results with the more accurate Monte Carlo simulation (MCS) method, and (iii) to evaluate the performance of the APLE model against measured P loss data when uncertainties in model predictions and measured data are included. Our results showed that for low to moderate uncertainties in APLE input variables, the FOA method yields reasonable estimates of model prediction uncertainties, although for cases where manure solid content is between 14 and 17%, the FOA method may not be as accurate as the MCS method due to a discontinuity in the manure P loss component of APLE at a manure solid content of 15%. The estimated uncertainties in APLE predictions based on assumed errors in the input variables ranged from ±2 to 64% of the predicted value. Results from this study highlight the importance of including reasonable estimates of model uncertainty when using models to predict P loss. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
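
    A minimal sketch contrasting the two uncertainty-propagation routes named above, with a toy response function standing in for APLE: the FOA combines squared gradients with input variances, while MCS samples the inputs directly.

```python
# First-order approximation (FOA) vs Monte Carlo simulation (MCS) for
# propagating independent input errors through a nonlinear prediction.
import numpy as np

def f(x):                        # toy nonlinear P-loss-like response
    return 0.4 * x[0] ** 1.5 + 2.0 * x[0] * x[1]

mu = np.array([3.0, 0.5])        # input means
sd = np.array([0.3, 0.05])       # input standard deviations (independent)

# FOA: var(f) ~= sum_i (df/dx_i)^2 * var(x_i), gradients by finite differences
eps = 1e-6
grad = np.array([(f(mu + eps * e) - f(mu)) / eps for e in np.eye(2)])
sd_foa = np.sqrt(np.sum(grad ** 2 * sd ** 2))

# MCS reference: sample the inputs and evaluate the model directly
rng = np.random.default_rng(9)
samples = rng.normal(mu, sd, size=(100_000, 2))
sd_mcs = f(samples.T).std()

print(f"FOA sd = {sd_foa:.3f}, MCS sd = {sd_mcs:.3f}")  # close for mild nonlinearity
```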

  12. Source term boundary adaptive estimation in a first-order 1D hyperbolic PDE: Application to a one loop solar collector through

    KAUST Repository

    Mechhoud, Sarra

    2016-08-04

    In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term and only boundary measurements are available. Using boundary injection, the estimator is developed in the Lyapunov approach and consists of a combination of a state observer and a parameter adaptation law which guarantee the asymptotic convergence of the state and parameter estimation errors. Simulation results are provided to illustrate the performance of the proposed identifier.

  13. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation for elevation and azimuth angles, assuming noncoherent, mixed coherent and noncoherent, and coherent sources, using extended three parallel uniform linear arrays (ULAs) is proposed. Most of the existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and estimation of coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.

  14. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using the polynomial modulating functions method and a suitable change of variables, the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input's amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of a single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.

  15. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    Directory of Open Access Journals (Sweden)

    C. I. Cho

    2016-05-01

    Full Text Available The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3-L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials × 3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for the respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit best. In general, the BIC value of the HET15 model for a particular polynomial order was lower than that of the HET60 model in most cases. This implies that the order of LP and the type of residual variance affect the goodness of fit of the models, and that heterogeneity of residual variances should be considered in the test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of the first parity.

  16. Fractional-order in a macroeconomic dynamic model

    Science.gov (United States)

    David, S. A.; Quintino, D. D.; Soliani, J.

    2013-10-01

    In this paper, we applied the Riemann-Liouville approach in order to carry out numerical simulations of a set of equations that represent a fractional-order macroeconomic dynamic model. It is a generalization of a dynamic model recently reported in the literature. The aforementioned equations have been simulated for several cases involving integer- and non-integer-order analysis, with several different values of the fractional order. The time histories and the phase diagrams have been plotted to visualize the effect of the fractional-order approach. The new contribution of this work arises from the fact that the macroeconomic dynamic model proposed here involves the public sector deficit equation, which renders the model more realistic and complete when compared with the ones encountered in the literature. The results reveal that the fractional-order macroeconomic model can exhibit realistic behavior for macroeconomic systems and might offer greater insight towards the understanding of these complex dynamic systems.
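
    A hedged sketch of the basic numerical ingredient behind such simulations: a Grünwald-Letnikov discretisation (closely related to the Riemann-Liouville definition) of a scalar fractional relaxation D^α x = −λx. The order, rate and step size below are illustrative, not the model's values.

```python
# Gruenwald-Letnikov scheme: D^alpha x(t_k) ~ h^(-alpha) * sum_j w_j * x_{k-j},
# applied to the scalar test system D^alpha x = -lam * x.
import numpy as np

alpha, lam, h, N = 0.8, 1.0, 0.01, 2000

# GL binomial weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)
w = np.ones(N + 1)
for j in range(1, N + 1):
    w[j] = w[j - 1] * (1 - (alpha + 1) / j)

x = np.zeros(N + 1)
x[0] = 1.0
for k in range(1, N + 1):
    # solve h^(-alpha) * (x_k + sum_{j=1}^{k} w_j x_{k-j}) = -lam * x_k for x_k
    hist = np.dot(w[1:k + 1], x[k - 1::-1])
    x[k] = -hist / (1.0 + lam * h ** alpha)

print(x[-1])  # slow Mittag-Leffler-type decay, unlike exponential for alpha = 1
```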

  17. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract. Background: In epidemiological studies, it is often not possible to measure participants' exposures accurately, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are commonly assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Therefore, exposure is estimated at an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, together with complete response data collected with ascertainment. Methods: In workplaces, groups/jobs are naturally ordered, and this can be incorporated into the estimation procedure by constrained estimation methods together with the expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation and maximization (CEM). We illustrate the methods in an analysis of decline in lung function due to exposure to carbon black. Results: Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when within each exposure group at least a 'moderate' number of individuals have their

  18. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
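
    A minimal simulation sketch of the special case described, assuming statsmodels: in a randomized trial with a deliberately misspecified main-terms Poisson working model, the fitted treatment coefficient still recovers the marginal log rate ratio (0.3 in this toy setup).

```python
# Main-terms Poisson working model under misspecification in a randomized trial.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 5000
treat = rng.integers(0, 2, n)            # randomized assignment
baseline = rng.normal(size=n)            # pre-randomization covariate
# true outcome model is NOT log-linear in the covariate (misspecified on purpose)
rate = np.exp(0.3 * treat + np.abs(baseline))
y = rng.poisson(rate)

X = sm.add_constant(np.column_stack([treat, baseline]))  # main terms only
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params[1])  # ~0.3: the marginal log rate ratio, despite misspecification
```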

  19. Simple, Efficient Estimators of Treatment Effects in Randomized Trials Using Generalized Linear Models to Leverage Baseline Variables

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J.

    2010-01-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636

  20. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V

    2002-12-01

    The aim of this thesis is to build a statistical model of the size distribution of oil and gas fields in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection over piecewise polynomials with penalized maximum likelihood techniques, provides estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population with a Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one, which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. For an arbitrary partition of the size interval (called a model), the analytical solutions of likelihood maximization enable estimation of both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant over the partition. We add a monotonicity constraint on the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary within several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  1. Efficient estimation of feedback effects with application to climate models

    International Nuclear Information System (INIS)

    Cacugi, D.G.; Hall, M.C.G.

    1984-01-01

    This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models where extensive recalculation for each of a variety of different feedbacks is impractical

  2. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    International Nuclear Information System (INIS)

    Galanti, Eli; Kaspi, Yohai; Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano

    2017-01-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  3. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Galanti, Eli; Kaspi, Yohai [Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot (Israel); Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano, E-mail: eli.galanti@weizmann.ac.il [Dipartimento di Ingegneria Meccanica e Aerospaziale, Sapienza Universita di Roma, Rome (Italy)

    2017-07-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  4. Nitrogen Removal in a Horizontal Subsurface Flow Constructed Wetland Estimated Using the First-Order Kinetic Model

    Directory of Open Access Journals (Sweden)

    Lijuan Cui

    2016-11-01

    Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors, such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01), but the NH4+-N load was not correlated with the removal rate (R2 = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01). The NH4+-N area rate was correlated with DO (R2 = 0.69, p < 0.01), suggesting that the factors that influenced the N removal rate in this wetland met Liebig's law of the minimum.
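    The first-order areal model behind these constants is C_out = C_in·exp(−k/q) with the modified Arrhenius correction k_T = k_20·θ^(T−20). A minimal sketch of both steps follows (background concentration taken as zero, input values hypothetical):

```python
import numpy as np

def area_based_k(c_in, c_out, q):
    """First-order areal rate constant (m/yr) from the plug-flow solution
    C_out = C_in * exp(-k/q); q is the hydraulic loading rate (m/yr) and
    the background concentration is assumed to be zero."""
    return -q * np.log(c_out / c_in)

def k_at_20c(k_t, temp, theta):
    """Normalize a rate constant measured at temp (deg C) to 20 deg C via
    the modified Arrhenius relation k_T = k_20 * theta**(T - 20)."""
    return k_t / theta ** (temp - 20.0)

# Hypothetical monitoring values: influent/effluent NO3-N (mg/L),
# hydraulic loading (m/yr), water temperature (deg C).
c_in, c_out, q, temp = 8.4, 3.1, 35.0, 14.0
k_t = area_based_k(c_in, c_out, q)
print(k_t, k_at_20c(k_t, temp, theta=1.004))
```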

  5. Jacobian projection reduced-order models for dynamic systems with contact nonlinearities

    Science.gov (United States)

    Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.

    2018-02-01

    In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced order models, the interface (nonlinear) degrees of freedom must still be solved in full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian matrix of partial derivatives of the contact forces with respect to the nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.

  6. A posteriori model validation for the temporal order of directed functional connectivity maps.

    Science.gov (United States)

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).

  7. A posteriori model validation for the temporal order of directed functional connectivity maps

    Directory of Open Access Journals (Sweden)

    Adriene M. Beltz

    2015-08-01

    Full Text Available A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).

  8. Ordered random variables theory and applications

    CERN Document Server

    Shahbaz, Muhammad Qaiser; Hanif Shahbaz, Saman; Al-Zahrani, Bander M

    2016-01-01

    Ordered random variables have attracted the attention of several authors. The basic building block of ordered random variables is order statistics, which has several applications in extreme value theory and ordered estimation. The general model for ordered random variables, known as generalized order statistics, was introduced relatively recently by Kamps (1995).

  9. Perspectives on the application of order-statistics in best-estimate plus uncertainty nuclear safety analysis

    International Nuclear Information System (INIS)

    Martin, Robert P.; Nutt, William T.

    2011-01-01

    Research highlights: → Historical recitation on the application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. → Interpretation of regulatory language regarding the 10 CFR 50.46 reference to a 'high level of probability'. → Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. → Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in the regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability.' At the root of the second question is whether one should recognize a probability statement based on the tolerance region method of Wald and Guba, et al., for multi-variate problems, one explicitly based on the regulatory limits, best articulated in the Wallis-Nutt 'Testing Method', or something else entirely. This paper reviews the origins of the different positions, key assumptions, limitations, and their relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
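    As a concrete anchor for the uni-variate case, Wilks' order-statistics formula gives the minimum number of code runs needed so that the r-th largest output bounds a given quantile with a given confidence; the sketch below reproduces the familiar 95/95 run counts (59, 93 and 124 for the first, second and third order statistics). This is a generic illustration, not the paper's multi-variate derivation.

```python
from scipy.stats import binom

def min_runs(gamma=0.95, beta=0.95, r=1, n_max=1000):
    """Smallest number of runs N such that the r-th largest output bounds
    the gamma-quantile with confidence beta (one-sided Wilks criterion):
    P(at most N - r of N samples fall below the quantile) >= beta."""
    for n in range(r, n_max + 1):
        if binom.cdf(n - r, n, gamma) >= beta:
            return n
    raise ValueError("n_max too small")

# Classical 95/95 results: 59 runs for the maximum (r=1),
# 93 for the second-largest (r=2), 124 for the third-largest (r=3).
print([min_runs(r=r) for r in (1, 2, 3)])
```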

  10. A neural network model for estimating soil phosphorus using terrain analysis

    Directory of Open Access Journals (Sweden)

    Ali Keshavarzi

    2015-12-01

    Full Text Available An artificial neural network (ANN) model was developed and tested for estimating soil phosphorus (P) in the Kouhin watershed area (1000 ha), Qazvin province, Iran, using terrain analysis. Based on the soil distribution correlation and the vegetation growth pattern across the topographically heterogeneous landscape, topographic and vegetation attributes were used in addition to pedologic information to develop the ANN model for estimating soil phosphorus in the area. In total, 85 samples were collected and tested for phosphorus content, and the corresponding attributes were estimated from the digital elevation model (DEM). In order to develop the pedo-transfer functions, data linearity was checked and correlated; 80% of the collected data was used for modeling and the ANN was tested using the remaining 20%. Results indicate that 68% of the variation in soil phosphorus could be explained by elevation and Band 1 data, and a significant correlation was observed between the input variables and phosphorus content. There was a significant correlation between soil P and terrain attributes, which can be used to derive the pedo-transfer function for soil P estimation to manage nutrient deficiency. Results showed that P values can be calculated more accurately with the ANN-based pedo-transfer function using the input topographic variables along with Band 1.
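    A minimal scikit-learn sketch of such an ANN pedo-transfer function, with the 80/20 split described above but with synthetic stand-ins for the 85 samples, might look like this:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 85 samples: elevation (m) and a spectral
# band value as predictors, soil P (mg/kg) as the target.
X = np.column_stack([rng.uniform(1200, 1900, 85),    # elevation
                     rng.uniform(0.05, 0.25, 85)])   # Band 1 reflectance
y = 5 + 0.01 * X[:, 0] + 40 * X[:, 1] + rng.normal(0, 1, 85)

# 80/20 split as in the study; inputs are standardized before the ANN.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out 20%:", model.score(X_te, y_te))
```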

  11. Linear and nonlinear stability analysis in BWRs applying a reduced order model

    Energy Technology Data Exchange (ETDEWEB)

    Olvera G, O. A.; Espinosa P, G.; Prieto G, A., E-mail: omar_olverag@hotmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico)

    2016-09-15

    Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. Such a model represents qualitatively the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in the fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region, the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on the Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert-Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The current methodology was validated with simulated signals of the studied model. (Author)
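    For orientation, a minimal numpy sketch of an autocorrelation-based decay-ratio estimate is given below: it takes the ratio of the first two local maxima of the Acf, which is the classical Dr definition that the Emd-based parameter is compared against. The EMD/HHT stage itself is omitted, and the damped test signal is synthetic.

```python
import numpy as np

def decay_ratio(x):
    """Classical decay-ratio estimate: ratio of the second to the first
    local maximum of the signal's autocorrelation function."""
    x = np.asarray(x, float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    # indices of local maxima of the ACF (excluding lag 0)
    peaks = [i for i in range(1, acf.size - 1)
             if acf[i - 1] < acf[i] >= acf[i + 1]]
    if len(peaks) < 2:
        return np.nan
    return acf[peaks[1]] / acf[peaks[0]]

# Synthetic damped oscillation near the ~0.5 Hz BWR resonance band.
fs = 25.0
t = np.arange(0, 60, 1 / fs)
x = np.exp(-0.05 * 2 * np.pi * 0.5 * t) * np.cos(2 * np.pi * 0.5 * t)
x += 0.01 * np.random.default_rng(1).normal(size=t.size)
print(decay_ratio(x))
```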

  12. Linear and nonlinear stability analysis in BWRs applying a reduced order model

    International Nuclear Information System (INIS)

    Olvera G, O. A.; Espinosa P, G.; Prieto G, A.

    2016-09-01

    Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. Such a model represents qualitatively the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in the fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region, the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on the Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert-Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The current methodology was validated with simulated signals of the studied model. (Author)

  13. Variations in wave direction estimated using first and second order Fourier coefficients

    Digital Repository Service at National Institute of Oceanography (India)

    SanilKumar, V.; Anand, N.M.

    to the peak frequency are used in practice. In the present study, a comparison is made of wave directions estimated from first and second order Fourier coefficients, using data collected at four locations on the west and east coasts of India. The study shows...

  14. Multi-Criteria Model for Determining Order Size

    Directory of Open Access Journals (Sweden)

    Katarzyna Jakowska-Suwalska

    2013-01-01

    Full Text Available A multi-criteria model for determining the order size for materials used in production has been presented. It was assumed that the consumption rate of each material is a random variable with a known probability distribution. Using such a model, in which the purchase cost of the materials ordered is limited, three criteria were considered: order size, probability of a lack of materials in the production process, and deviations of the order size from the consumption rate in past periods. Based on an example, it has been shown how to use the model to determine the order sizes for polyurethane adhesive and wood in a hard-coal mine. (original abstract)

  15. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
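    The ERA step can be illustrated compactly: given a sequence of Markov parameters (here generated from a known two-state system rather than identified from a chirp response), Hankel matrices are formed and an SVD yields a balanced reduced realization. A minimal sketch:

```python
import numpy as np

def era(markov, n_states, p=20, q=20):
    """Eigensystem realization algorithm (SISO): build Hankel matrices
    from Markov parameters h[k] = C A^k B, take an SVD, and return a
    balanced reduced realization (A, B, C) of order n_states."""
    H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
    H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :n_states], Vt[:n_states]
    Sr = np.diag(np.sqrt(s[:n_states]))
    Sinv = np.diag(1.0 / np.sqrt(s[:n_states]))
    A = Sinv @ Ur.T @ H1 @ Vr.T @ Sinv
    B = (Sr @ Vr)[:, :1]
    C = (Ur @ Sr)[:1, :]
    return A, B, C

# Known two-state test system in place of chirp-identified Markov parameters.
A0 = np.array([[0.9, 0.2], [-0.2, 0.8]])
B0 = np.array([[1.0], [0.5]])
C0 = np.array([[1.0, 0.0]])
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(41)]

A, B, C = era(h, n_states=2)
h_era = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(10)]
print(np.allclose(h_era, h[:10]))   # True: the realization reproduces h
```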

  16. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.

  17. Second Order Kinetic Modeling of Headspace Solid Phase Microextraction of Flavors Released from Selected Food Model Systems

    Directory of Open Access Journals (Sweden)

    Jiyuan Zhang

    2014-09-01

    Full Text Available The application of headspace solid phase microextraction (HS-SPME) has been widely used in various fields as a simple and versatile method, yet challenging in quantification. In order to improve the reproducibility in quantification, a mathematical model with its roots in psychological modeling and chemical reactor modeling was developed, describing the kinetic behavior of aroma-active compounds extracted by SPME from two different food model systems, i.e., a semi-solid food and a liquid food. The model accounted for both adsorption and release of the analytes from the SPME fiber, which occurred simultaneously but were counter-directed. The model had four parameters and their estimated values were found to be more reproducible than the direct measurement of the compounds themselves by instrumental analysis. With the relative standard deviation (RSD) of each parameter less than 5% and root mean square error (RMSE) less than 0.15, the model proved to be robust in estimating the release of a wide range of low molecular weight acetates at three environmental temperatures, i.e., 30, 40 and 60 °C. More insights into SPME behavior regarding small-molecule analytes were also obtained through the kinetic parameters and the model itself.

  18. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using one AI model.
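    The BIC-based averaging itself reduces to a few lines: weights proportional to exp(−ΔBIC/2), a weighted mean, and a total variance that sums the within-model and between-model components. The sketch below uses hypothetical conductivity estimates, not the Tasuj plain values.

```python
import numpy as np

def bma_combine(means, variances, bics):
    """Combine per-model estimates by BIC-based Bayesian model averaging:
    weights w_i proportional to exp(-0.5 * (BIC_i - BIC_min)); the total
    variance adds the within-model and between-model components."""
    d = np.asarray(bics) - np.min(bics)
    w = np.exp(-0.5 * d)
    w /= w.sum()
    mean = np.sum(w * means)
    within = np.sum(w * variances)               # uncertainty inside each model
    between = np.sum(w * (means - mean) ** 2)    # model non-uniqueness
    return mean, within + between, w

# Hypothetical K estimates (m/day) from three AI models at one location.
mean, var, w = bma_combine(means=np.array([3.2, 4.1, 3.6]),
                           variances=np.array([0.4, 0.6, 0.5]),
                           bics=np.array([210.0, 212.5, 211.0]))
print(mean, var, w)
```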

  19. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  20. Review of uncertainty estimates associated with models for assessing the impact of breeder reactor radioactivity releases

    International Nuclear Information System (INIS)

    Miller, C.; Little, C.A.

    1982-08-01

    The purpose is to summarize estimates, based on currently available data, of the uncertainty associated with radiological assessment models. The models being examined herein are those recommended previously for use in breeder reactor assessments. Uncertainty estimates are presented for models of atmospheric and hydrologic transport, terrestrial and aquatic food-chain bioaccumulation, and internal and external dosimetry. Both long-term and short-term release conditions are discussed. The uncertainty estimates presented in this report indicate that, for many sites, generic models and representative parameter values may be used to calculate doses from annual average radionuclide releases when these calculated doses are on the order of one-tenth or less of a relevant dose limit. For short-term, accidental releases, especially those from breeder reactors located in sites dominated by complex terrain and/or coastal meteorology, the uncertainty in the dose calculations may be much larger than an order of magnitude. As a result, it may be necessary to incorporate site-specific information into the dose calculation under these circumstances to reduce this uncertainty. However, even using site-specific information, natural variability and the uncertainties in the dose conversion factor will likely result in an overall uncertainty of greater than an order of magnitude for predictions of dose or concentration in environmental media following short-term releases.

  1. Software Cost-Estimation Model

    Science.gov (United States)

    Tausworthe, R. C.

    1985-01-01

    Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.

  2. DETERMINANTS OF SOVEREIGN RATING: FACTOR BASED ORDERED PROBIT MODELS FOR PANEL DATA ANALYSIS MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Dilek Teker

    2013-01-01

    Full Text Available The aim of this research is to compose a new rating methodology and provide credit notches to 23 countries, of which 13 are developed and 10 are emerging. There is a large body of literature explaining the determinants of credit ratings. Following the literature, we select 11 variables for our model, of which 5 are eliminated by the factor analysis. We use specific dummies to investigate structural breaks in time and cross section, such as pre-crisis, post-crisis, BRIC membership, EU membership, OPEC membership, shipbuilder country and platinum-reserve country. Then we run an ordered probit model and assign credit notches to the countries. We use FITCH ratings as the benchmark. Thus, at the end we compare the notches of FITCH with the ones we derive from our estimated model.
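    For readers unfamiliar with the estimator, a minimal hand-rolled ordered probit can be written directly from its likelihood, P(y = j) = Φ(c_{j+1} − x′β) − Φ(c_j − x′β); the sketch below fits it by maximum likelihood on synthetic data and omits the paper's factor-analysis stage and panel structure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ordered_probit_nll(params, X, y):
    """Negative log-likelihood of an ordered probit:
    P(y = j) = Phi(c_{j+1} - x'beta) - Phi(c_j - x'beta)."""
    k = X.shape[1]
    beta, raw = params[:k], params[k:]
    # first cutpoint free, later ones increase by positive increments
    interior = raw[0] + np.concatenate(([0.0], np.cumsum(np.abs(raw[1:]) + 1e-6)))
    cuts = np.concatenate(([-np.inf], interior, [np.inf]))
    z = X @ beta
    p = norm.cdf(cuts[y + 1] - z) - norm.cdf(cuts[y] - z)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))            # e.g., two macro factors
latent = X @ np.array([1.0, -0.5]) + rng.normal(size=300)
y = np.digitize(latent, [-0.5, 0.7])     # three rating notches: 0, 1, 2

res = minimize(ordered_probit_nll, x0=np.array([0.0, 0.0, -1.0, 1.0]),
               args=(X, y), method="BFGS")
print("beta estimates:", res.x[:2])      # close to (1.0, -0.5)
```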

  3. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

    Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in system-on-chip (SoC) is becoming critical. Today, having the appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach that allows the consumption criterion to be taken into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model-driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach allows obtaining quite accurate power estimates early in the design flow thanks to the automation offered by the MDE approach.

  4. The estimation of geometry and motion of a surface from image sequences by means of linearisation of a parametric model

    NARCIS (Netherlands)

    Korsten, Maarten J.; Houkes, Z.

    1990-01-01

    A method is given to estimate the geometry and motion of a moving body surface from image sequences. To this aim, a parametric model of the surface is used in order to reformulate the problem as one of parameter estimation. After linearization of the model, standard linear estimation methods can be

  5. Model selection criteria : how to evaluate order restrictions

    NARCIS (Netherlands)

    Kuiper, R.M.

    2012-01-01

    Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might

  6. The model for estimation production cost of embroidery handicraft

    Science.gov (United States)

    Nofierni; Sriwana, IK; Septriani, Y.

    2017-12-01

    The embroidery industry is a type of micro industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery clothing such as scarves and clothes is produced that reflects the cultural value of a certain region. The owner of an enterprise must calculate the cost of production before making a decision on how many products are received from the customer. A calculation approach to production cost analysis is needed to consider the feasibility of each incoming order. This study proposes the design of an expert system (ES) in order to improve production management in the embroidery industry. The model uses a fuzzy inference system to estimate production cost. The research was conducted based on surveys and knowledge acquisition from stakeholders of the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The model takes fuzzy inputs, namely the quality, the complexity of the design, and the required working hours, and its results are useful for managing the production cost of embroidery production.

  7. Inadmissibility of Usual and Mixed Estimators of Two Ordered Gamma Scale Parameters Under Reflected Gamma Loss Function

    Directory of Open Access Journals (Sweden)

    Z. Meghnatisi

    2009-06-01

    Full Text Available Let Xi1, · · · , Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 6 β2. We consider the class of mixed estimators for estimation of β1 and β2 under reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of parameters are given, is inadmissible and dominated by a class of mixed estimators when it is known that the parameters are ordered. Also, the inadmissible estimators in the class of mixed estimators are derived. Finally the results are extended to some subclass of exponential family

  8. Estimating Reaction Rate Coefficients Within a Travel-Time Modeling Framework

    Energy Technology Data Exchange (ETDEWEB)

    Gong, R [Georgia Institute of Technology; Lu, C [Georgia Institute of Technology; Luo, Jian [Georgia Institute of Technology; Wu, Wei-min [Stanford University; Cheng, H. [Stanford University; Criddle, Craig [Stanford University; Kitanidis, Peter K. [Stanford University; Gu, Baohua [ORNL; Watson, David B [ORNL; Jardine, Philip M [ORNL; Brooks, Scott C [ORNL

    2011-03-01

    A generalized, efficient, and practical approach based on the travel-time modeling framework is developed to estimate in situ reaction rate coefficients for groundwater remediation in heterogeneous aquifers. The required information for this approach can be obtained by conducting tracer tests with injection of a mixture of conservative and reactive tracers and measurements of both breakthrough curves (BTCs). The conservative BTC is used to infer the travel-time distribution from the injection point to the observation point. For advection-dominant reactive transport with well-mixed reactive species and a constant travel-time distribution, the reactive BTC is obtained by integrating the solutions to advective-reactive transport over the entire travel-time distribution, and then is used in optimization to determine the in situ reaction rate coefficients. By directly working on the conservative and reactive BTCs, this approach avoids costly aquifer characterization and improves the estimation for transport in heterogeneous aquifers which may not be sufficiently described by traditional mechanistic transport models with constant transport parameters. Simplified schemes are proposed for reactive transport with zero-, first-, nth-order, and Michaelis-Menten reactions. The proposed approach is validated by a reactive transport case in a two-dimensional synthetic heterogeneous aquifer and a field-scale bioremediation experiment conducted at Oak Ridge, Tennessee. The field application indicates that ethanol degradation for U(VI)-bioremediation is better approximated by zero-order reaction kinetics than first-order reaction kinetics.
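    The key computational shortcut for first-order kinetics is that, in this travel-time framework, the reactive BTC equals the conservative BTC damped by exp(−kt), so k can be fit directly from the two curves. A sketch under that assumption, with a synthetic lognormal travel-time distribution:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_first_order_k(t, btc_cons, btc_react):
    """Fit the first-order rate coefficient k by least squares, using the
    travel-time result that the reactive BTC is the conservative BTC
    damped by exp(-k*t)."""
    sse = lambda k: np.sum((btc_cons * np.exp(-k * t) - btc_react) ** 2)
    return minimize_scalar(sse, bounds=(0.0, 5.0), method="bounded").x

# Synthetic conservative BTC: a lognormal travel-time distribution (days).
t = np.linspace(0.01, 30.0, 400)
g = np.exp(-(np.log(t) - np.log(8.0)) ** 2 / 0.5) / t
g /= g.sum() * (t[1] - t[0])              # normalize to unit area
btc_react = g * np.exp(-0.12 * t)         # "measured" reactive BTC, k = 0.12/day
print(fit_first_order_k(t, g, btc_react))  # ~0.12
```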

  9. A model predictive control approach combined unscented Kalman filter vehicle state estimation in intelligent vehicle trajectory tracking

    Directory of Open Access Journals (Sweden)

    Hongxiao Yu

    2015-05-01

    Full Text Available Trajectory tracking and state estimation are significant in motion planning and intelligent vehicle control. This article focuses on the model predictive control approach for trajectory tracking of intelligent vehicles and state estimation of the nonlinear vehicle system. The constraints on the system states are considered when applying the model predictive control method to the practical problem, while a 4-degree-of-freedom vehicle model and an unscented Kalman filter are proposed to estimate the vehicle states. The estimated states of the vehicle are used to provide model predictive control with real-time control and to judge vehicle stability. Furthermore, in order to decrease the cost of solving the nonlinear optimization, linear time-varying model predictive control is used at each time step. The effectiveness of the proposed vehicle state estimation and model predictive control method is tested on a driving simulator. The results of simulations and experiments show that good, robust performance is achieved for trajectory tracking and state estimation in different scenarios.

  10. Random regression models to estimate genetic parameters for milk production of Guzerat cows using orthogonal Legendre polynomials

    Directory of Open Access Journals (Sweden)

    Maria Gabriela Campolina Diniz Peixoto

    2014-05-01

    Full Text Available The objective of this work was to compare random regression models for the estimation of genetic parameters for Guzerat milk production, using orthogonal Legendre polynomials. Records (20,524) of test-day milk yield (TDMY) from 2,816 first-lactation Guzerat cows were used. TDMY grouped into 10 monthly classes were analyzed for the additive genetic effect and for the permanent environmental and residual effects (random effects), whereas the contemporary group, calving age (linear and quadratic effects), and the mean lactation curve were analyzed as fixed effects. Trajectories for the additive genetic and permanent environmental effects were modeled by means of a covariance function employing orthogonal Legendre polynomials ranging from the second to the fifth order. Residual variances were considered in one, four, six, or ten variance classes. The best model had six residual variance classes. The heritability estimates for the TDMY records varied from 0.19 to 0.32. The random regression model that used a second-order Legendre polynomial for the additive genetic effect and a fifth-order polynomial for the permanent environmental effect was considered adequate by the main criteria employed. The model with a second-order Legendre polynomial for the additive genetic effect and a fourth-order one for the permanent environmental effect could also be employed in these analyses.
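    The Legendre machinery amounts to evaluating P_0...P_k at days in milk standardized to [−1, 1]; numpy provides this directly. A small sketch (the DIM range is illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legvander

def legendre_basis(dim, dim_min=5, dim_max=305, order=2):
    """Design matrix of orthogonal Legendre polynomials P_0..P_order
    evaluated at days in milk standardized to [-1, 1]."""
    x = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0
    return legvander(x, order)

# Monthly test days of a 10-month lactation, as in the TDMY classes.
dim = np.linspace(20, 290, 10)
Phi = legendre_basis(dim, order=2)   # second order, as for the additive genetic effect
print(Phi.round(3))
```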

  11. Enzymatic Synthesis of Ampicillin: Nonlinear Modeling, Kinetics Estimation, and Adaptive Control

    Directory of Open Access Journals (Sweden)

    Monica Roman

    2012-01-01

    Full Text Available Nowadays, the use of advanced control strategies in biotechnology is quite low. A main reason is the lack of quality of the data, and the fact that more sophisticated control strategies must be based on a model of the dynamics of bioprocesses. The nonlinearity of the bioprocesses and the absence of cheap and reliable instrumentation require an enhanced modeling effort and identification strategies for the kinetics. The present work approaches modeling and control strategies for the enzymatic synthesis of ampicillin that is carried out inside a fed-batch bioreactor. First, a nonlinear dynamical model of this bioprocess is obtained by using a novel modeling procedure for biotechnology: the bond graph methodology. Second, a high gain observer is designed for the estimation of the imprecisely known kinetics of the synthesis process. Third, by combining an exact linearizing control law with the on-line estimation kinetics algorithm, a nonlinear adaptive control law is designed. The case study discussed shows that a nonlinear feedback control strategy applied to the ampicillin synthesis bioprocess can cope with disturbances, noisy measurements, and parametric uncertainties. Numerical simulations performed with MATLAB environment are included in order to test the behavior and the performances of the proposed estimation and control strategies.

  12. Knock probability estimation through an in-cylinder temperature model with exogenous noise

    Science.gov (United States)

    Bares, P.; Selmanaj, D.; Guardiola, C.; Onder, C.

    2018-01-01

    This paper presents a new knock model which combines a deterministic knock model based on the in-cylinder temperature and an exogenous noise disturbing this temperature. The autoignition of the end-gas is modelled by an Arrhenius-like function and the knock probability is estimated by propagating a virtual error probability distribution. Results show that the random nature of knock can be explained by uncertainties in the in-cylinder temperature estimation. The model has only one parameter for calibration and thus can be easily adapted online. In order to reduce the measurement uncertainties associated with the air mass flow sensor, the trapped mass is derived from the in-cylinder pressure resonance, which improves the knock probability estimation and reduces the number of sensors needed for the model. A four-stroke SI engine was used for model validation. By varying the intake temperature, the engine speed, the injected fuel mass, and the spark advance, specific tests were conducted, which furnished data with various knock intensities and probabilities. The new model is able to predict the knock probability within a sufficient range at various operating conditions. The trapped mass obtained by the acoustical model was compared in steady conditions by using a fuel balance and a lambda sensor, and differences below 1% were found.
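    A rough sketch of the idea, propagating exogenous temperature noise through an Arrhenius-like (Livengood-Wu) knock integral by Monte Carlo, is shown below; all constants are hypothetical and not calibrated to the engine in the paper.

```python
import numpy as np

def knock_probability(t, T_nominal, sigma_T=15.0, A=3.5e-5, B=3800.0, n_mc=2000):
    """Monte Carlo knock probability: perturb the end-gas temperature with
    exogenous noise, evaluate the Livengood-Wu integral KI = int dt / tau
    with tau = A * exp(B / T), and count cycles where KI >= 1."""
    rng = np.random.default_rng(0)
    dt = t[1] - t[0]
    knocks = 0
    for _ in range(n_mc):
        T = T_nominal + rng.normal(0.0, sigma_T)   # one noise draw per cycle
        knocks += np.sum(dt / (A * np.exp(B / T))) >= 1.0
    return knocks / n_mc

# Hypothetical end-gas temperature trace over 4 ms around compression TDC.
t = np.linspace(0.0, 4e-3, 200)
T_nom = 750.0 + 150.0 * np.sin(np.pi * t / 4e-3)   # K
print(knock_probability(t, T_nom))
```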

  13. Heterogeneous autoregressive model with structural break using nearest neighbor truncation volatility estimators for DAX.

    Science.gov (United States)

    Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching

    2016-01-01

    High frequency financial data modelling has become one of the important research areas in the field of financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural break heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model enhanced with jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. In order to further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural break improvements in both the models and the volatility estimators, the empirical findings show that the modified HAR model provides the best in-sample and out-of-sample forecast evaluations compared with the standard HAR models. Accurate volatility forecasts directly influence applications in risk management and investment portfolio analysis.
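    The core HAR regression is a simple OLS of next-day realized volatility on its daily, weekly (5-day) and monthly (22-day) averages; the paper's structural-break dummies and jump-robust estimators are omitted here. A sketch on a synthetic persistent series:

```python
import numpy as np

def har_design(rv):
    """HAR-RV regressors: yesterday's RV and the 5-day and 22-day
    trailing means, aligned to predict RV one day ahead."""
    rows, y = [], []
    for t in range(21, len(rv) - 1):
        rows.append([1.0,
                     rv[t],
                     rv[t - 4:t + 1].mean(),
                     rv[t - 21:t + 1].mean()])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

# Synthetic persistent volatility series standing in for DAX realized variance.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.95 * x[t - 1] + rng.normal(0, 0.2)
rv = np.exp(x - 4)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("const, daily, weekly, monthly:", beta.round(4))
```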

  14. Daily Discharge Estimation in Talar River Using Lazy Learning Model

    Directory of Open Access Journals (Sweden)

    Zahra Abdollahi

    2017-03-01

    Full Text Available Introduction: River discharge, as one of the most important hydrological factors, has a vital role in physical, ecological, social and economic processes. Accurate and reliable prediction and estimation of river discharge have therefore been widely considered by many researchers in different fields, such as surface water management, design of hydraulic structures, flood control and ecological studies, at spatial and temporal scales. Over the last decades, different techniques for short-term and long-term estimation of hourly, daily, monthly and annual discharge have been developed. However, short-term estimation models are less sophisticated and more accurate. Various global and local algorithms have been widely used to estimate hydrologic variables. The current study uses a lazy learning approach to evaluate the adequacy of the input data in order to follow the variation of discharge and also to simulate next-day discharge in the Talar River in the Kasilian Basin, which is located in the north of Iran and has an area of 66.75 km2. Lazy learning is a local linear modelling approach in which generalization beyond the training data is delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize from the training data before receiving queries. Materials and Methods: The current study was conducted in the Kasilian Basin, which is located in the north of Iran and has an area of 66.75 km2. The main river of this basin joins the Talar River near Valicbon village and then exits the watershed. The hydrometric station located near Valicbon village is equipped with a Parshall flume and a limnograph which can record river discharges of about 20 cubic meters per second. In this study, daily discharge data recorded at the Valicbon station from 2002 to 2012 were used to estimate the discharge of 19 September 2012. The mean annual discharge of the considered river, calculated from the available data, is about 0.441 cubic meters per second. To
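    A minimal sketch of the lazy-learning idea, deferring all model fitting to query time and building a distance-weighted local linear model on the k nearest neighbours, is given below; the discharge values are synthetic stand-ins, not the Valicbon record.

```python
import numpy as np

def lazy_predict(X_train, y_train, x_query, k=15):
    """Lazy-learning prediction: defer modelling until query time, then
    fit a local weighted linear model on the k nearest neighbours."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                    # inverse-distance weights
    A = np.column_stack([np.ones(k), X_train[idx]])
    W = np.diag(w)
    beta = np.linalg.lstsq(W @ A, W @ y_train[idx], rcond=None)[0]
    return np.concatenate(([1.0], x_query)) @ beta

# Hypothetical predictors: previous two days' discharge -> next-day discharge.
rng = np.random.default_rng(0)
q = np.abs(rng.normal(0.44, 0.2, 400))           # m^3/s, around the basin mean
X = np.column_stack([q[:-2], q[1:-1]])
y = q[2:]
print(lazy_predict(X, y, x_query=np.array([0.5, 0.47])))
```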

  15. Estimating true evolutionary distances under the DCJ model.

    Science.gov (United States)

    Lin, Yu; Moret, Bernard M E

    2008-07-01

    Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.

  16. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance and power spectral density) similar to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
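    A common way to realize such a signal generation model is to shape white noise in the frequency domain by the reference error's amplitude spectrum, then rescale to match its mean and variance. A sketch with a synthetic reference trace:

```python
import numpy as np

def noise_like(err, n_out, seed=0):
    """Generate noise with (approximately) the same mean, variance and
    power spectral density as a reference error signal: shape white noise
    in the frequency domain by the reference amplitude spectrum."""
    rng = np.random.default_rng(seed)
    amp = np.abs(np.fft.rfft(err - err.mean()))          # target spectrum shape
    amp = np.interp(np.linspace(0, 1, n_out // 2 + 1),
                    np.linspace(0, 1, amp.size), amp)
    white = np.fft.rfft(rng.normal(size=n_out))
    x = np.fft.irfft(white * amp, n=n_out)
    x *= err.std() / x.std()                             # match variance
    return x + err.mean()                                # match mean

# Synthetic stand-in for a recorded attitude estimation error trace.
t = np.linspace(0, 100, 2000)
ref_err = 0.01 * np.sin(0.3 * t) + 0.002 * np.random.default_rng(1).normal(size=t.size)
sim = noise_like(ref_err, n_out=2000)
print(sim.mean(), sim.std(), ref_err.mean(), ref_err.std())
```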

  17. XY model with higher-order exchange.

    Science.gov (United States)

    Žukovič, Milan; Kalagov, Georgii

    2017-08-01

    An XY model, generalized by inclusion of up to an infinite number of higher-order pairwise interactions with an exponentially decreasing strength, is studied by spin-wave theory and Monte Carlo simulations. At low temperatures the model displays a quasi-long-range-order phase characterized by an algebraically decaying correlation function with the exponent η=T/[2πJ(p,α)], nonlinearly dependent on the parameters p and α that control the number of the higher-order terms and the decay rate of their intensity, respectively. At higher temperatures the system shows a crossover from the continuous Berezinskii-Kosterlitz-Thouless to the first-order transition for the parameter values corresponding to a highly nonlinear shape of the potential well. The role of topological excitations (vortices) in changing the nature of the transition is discussed.

  18. Reserves' potential of sedimentary basin: modeling and estimation; Potentiel de reserves d'un bassin petrolier: modelisation et estimation

    Energy Technology Data Exchange (ETDEWEB)

    Lepez, V.

    2002-12-01

    The aim of this thesis is to build a statistical model of the size distribution of oil and gas fields in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials using penalized maximum likelihood techniques, makes it possible to provide estimates of the total number of fields which are yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample of an unknown population with a Levy-Pareto law with unknown parameter. The set of already discovered fields is a size-biased sub-sample without replacement from the previous one. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. Given an arbitrary partition of the size interval (called a model), the analytical solutions of likelihood maximization enable us to estimate both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant on the partition. We add a monotonicity constraint on the latter, reflecting the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risk of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)

  19. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    International Nuclear Information System (INIS)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-01-01

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  20. Frequency-domain reduced order models for gravitational waves from aligned-spin compact binaries

    International Nuclear Information System (INIS)

    Pürrer, Michael

    2014-01-01

    Black-hole binary coalescences are one of the most promising sources for the first detection of gravitational waves. Fast and accurate theoretical models of the gravitational radiation emitted from these coalescences are highly important for the detection and extraction of physical parameters. Spinning effective-one-body models for binaries with aligned spins have been shown to be highly faithful, but are slow to generate and thus have not yet been used for parameter estimation (PE) studies. I provide a frequency-domain singular value decomposition-based surrogate reduced order model that is thousands of times faster for typical system masses and has a faithfulness mismatch of better than ∼0.1% with the original SEOBNRv1 model for advanced LIGO detectors. This model enables PE studies up to signal-to-noise ratios (SNRs) of 20, and even up to 50 for total masses below 50 M⊙. This paper discusses various choices for approximations and interpolation over the parameter space that can be made for reduced order models of spinning compact binaries, provides a detailed discussion of errors arising in the construction and assesses the fidelity of such models. (paper)

  1. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite their significance, the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form

  2. A Ramp Cosine Cepstrum Model for the Parameter Estimation of Autoregressive Systems at Low SNR

    Directory of Open Access Journals (Sweden)

    Zhu Wei-Ping

    2010-01-01

    Full Text Available A new cosine cepstrum model-based scheme is presented for the parameter estimation of a minimum-phase autoregressive (AR) system under low levels of signal-to-noise ratio (SNR). A ramp cosine cepstrum (RCC) model for the one-sided autocorrelation function (OSACF) of an AR signal is first proposed by considering both white noise and periodic impulse-train excitations. Using the RCC model, a residue-based least-squares optimization technique that guarantees the stability of the system is then presented in order to estimate the AR parameters from noisy output observations. For the purpose of implementation, the discrete cosine transform, which can efficiently handle the phase unwrapping problem and offer computational advantages as compared to the discrete Fourier transform, is employed. From extensive experimentation on AR systems of different orders, it is shown that the proposed method is capable of estimating parameters accurately and consistently in comparison to some of the existing methods for SNR levels as low as −5 dB. As a practical application of the proposed technique, simulation results are also provided for the identification of a human vocal tract system using noise-corrupted natural speech signals, demonstrating superior estimation performance in terms of the power spectral density of the synthesized speech signals.

  3. A Note on the Large Sample Properties of Estimators Based on Generalized Linear Models for Correlated Pseudo-observations

    DEFF Research Database (Denmark)

    Jacobsen, Martin; Martinussen, Torben

    2016-01-01

    Pseudo-values have proven very useful in censored data analysis in complex settings such as multi-state models. They were originally suggested by Andersen et al., Biometrika, 90, 2003, 335, who also suggested estimating standard errors using classical generalized estimating equation results. These results were studied more formally in Graw et al., Lifetime Data Anal., 15, 2009, 241, which derived some key results based on a second-order von Mises expansion. However, results concerning large sample properties of estimates based on regression models for pseudo-values still seem unclear. In this paper, we study these large sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a U-statistic of second order, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error

  4. Comparison of regression models for estimation of isometric wrist joint torques using surface electromyography

    Directory of Open Access Journals (Sweden)

    Menon Carlo

    2011-09-01

    Full Text Available Abstract Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours after the completion of the first data-gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on the accuracy of the models. Acquired SEMG signals were filtered, rectified, normalized and then fed to the models for training. Results It was shown that mean adjusted coefficient of determination (Ra^2) values decrease by 20% to 35% for the different models after one hour, while altering arm posture decreased mean Ra^2 values by 64% to 74%. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture; model retraining is therefore crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to combine high isometric torque estimation accuracy with very short training times.

  5. A Polarimetric First-Order Model of Soil Moisture Effects on the DInSAR Coherence

    Directory of Open Access Journals (Sweden)

    Simon Zwieback

    2015-06-01

    Full Text Available Changes in soil moisture between two radar acquisitions can impact the observed coherence in differential interferometry: both the coherence magnitude |γ| and the phase Φ are affected. The influence on the latter potentially biases the estimation of deformations. These effects have been found to be variable in magnitude and sign, as well as dependent on polarization, contrary to the predictions of existing models. Such diversity can be explained when the soil is modelled as a half-space with spatially varying dielectric properties and a rough interface. The first-order perturbative solution achieves, upon calibration with airborne L-band data, median correlations ρ at HH polarization of 0.77 for the phase Φ, of 0.50 for |γ|, and of 0.56 for the phase triplets Ξ. The predictions are sensitive to the choice of dielectric mixing model, in particular the absorptive properties; the differences between the mixing models are found to be partially compensated by varying the relative importance of surface and volume scattering. However, for half of the agricultural fields the Hallikainen mixing model cannot reproduce the observed sensitivities of the phase to soil moisture. In addition, the first-order expansion does not predict any impact on the HV coherence, which is however empirically found to display sensitivities to soil moisture similar to those of the co-pol channels HH and VV. These results indicate that the first-order solution, while not able to reproduce all observed phenomena, can capture some of the more salient patterns of the effect of soil moisture changes on the HH and VV DInSAR signals. Hence it may prove useful in separating the deformations from the moisture signals, thus yielding improved displacement estimates or new ways of inferring soil moisture.

  6. Strategies for Reduced-Order Models in Uncertainty Quantification of Complex Turbulent Dynamical Systems

    Science.gov (United States)

    Qi, Di

    Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are

  7. A comparison of zero-order, first-order, and Monod biotransformation models

    International Nuclear Information System (INIS)

    Bekins, B.A.; Warren, E.; Godsy, E.M.

    1998-01-01

    Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when the substrate concentration, S, is much less than the half-saturation constant, K_S, this assumption is often made without verification of this condition. The authors present a formal error analysis showing that the relative error in the first-order approximation is S/K_S and in the zero-order approximation the error is K_S/S. They then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than K_S, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of K_S for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, the authors apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
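
    The regimes compared in this record can be reproduced with a few lines of numerical integration. The sketch below contrasts Monod kinetics with its zero-order and first-order limits; the rate constants and concentrations are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (hypothetical): maximum degradation rate vmax and
# half-saturation constant Ks, with S in mg/L and time in days.
vmax, Ks, S0 = 1.0, 2.0, 20.0

monod = lambda t, S: -vmax * S / (Ks + S)   # full Monod kinetics
first = lambda t, S: -(vmax / Ks) * S       # first-order limit, valid for S << Ks

t = np.linspace(0.0, 30.0, 301)
S_monod = solve_ivp(monod, (0, 30), [S0], t_eval=t).y[0]
S_first = solve_ivp(first, (0, 30), [S0], t_eval=t).y[0]
S_zero = np.maximum(S0 - vmax * t, 0.0)     # zero-order limit, valid for S >> Ks

# Per the paper's error analysis, the first-order approximation is off by
# ~S/Ks and the zero-order one by ~Ks/S, so each is only trustworthy in its
# own concentration regime.
for S, name in [(S_first, "first-order"), (S_zero, "zero-order")]:
    print(name, "max abs deviation from Monod:", np.abs(S - S_monod).max())
```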

  8. Parameter Estimation of Partial Differential Equation Models.

    Science.gov (United States)

    Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab

    2013-01-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown, and need to be estimated from the measurements of the dynamic system in the present of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE, and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.

  9. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under uncongested conditions. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model with a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper; they take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, increasing the accuracy of the estimation.

  10. Large deviation estimates for a Non-Markovian Lévy generator of big order

    International Nuclear Information System (INIS)

    Léandre, Rémi

    2015-01-01

    We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated with this semi-group. (paper)

  11. Comparison of two recent models for estimating actual evapotranspiration using only regularly recorded data

    Science.gov (United States)

    Ali, M. F.; Mawdsley, J. A.

    1987-09-01

    An advection-aridity model for estimating actual evapotranspiration (ET) is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf and rye-grass at three sites in the U.K. The performance of the model is also compared with that of the API model proposed by Mawdsley and Ali (1979). It is observed from the test that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, the two tendencies appear to cancel each other. On a daily basis the performance of this model is found to be of the same order as that of the API model: correlation coefficients of 0.62 and 0.68, respectively, were obtained between the model estimates and the lysimeter data. For periods greater than one day, the performance of both models generally improves.

  12. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  13. Estimates on the minimal period for periodic solutions of nonlinear second order Hamiltonian systems

    International Nuclear Information System (INIS)

    Yiming Long.

    1994-11-01

    In this paper, we prove a sharper estimate on the minimal period for periodic solutions of autonomous second order Hamiltonian systems under precisely Rabinowitz' superquadratic condition. (author). 20 refs, 1 fig

  14. Mixed-order phase transition in a one-dimensional model.

    Science.gov (United States)

    Bar, Amir; Mukamel, David

    2014-01-10

    We introduce and analyze an exactly soluble one-dimensional Ising model with long-range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous, as in first-order transitions, while the correlation length diverges, as in second-order transitions. Such transitions are known to appear in diverse classes of models that are seemingly unrelated. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.

  15. Best estimate modeling of fuel thermomechanical behaviour in WWER 1000 LB LOCA

    International Nuclear Information System (INIS)

    Valach, M.; Klouzal, J.; Zymak, J.; Dostal, M.

    2009-01-01

    The paper summarizes our calculations of the performance of WWER 1000 NPP fuel rods during a postulated LB LOCA. The thermomechanical modeling was performed with FRAPTRAN, using the FRACAS-I mechanical model and boundary conditions calculated by the ATHLET code. The results and their statistical evaluation are presented, and the process of generalizing the insight gained into best-estimate thermomechanical (BE TM) predictions, in order to define a generic BE TM methodology, is outlined (authors)

  16. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF-based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated: first, the impact of MMS noise on the parameter estimates is examined, and then estimators adjusted to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...

  17. Estimating Gravity Biases with Wavelets in Support of a 1-cm Accurate Geoid Model

    Science.gov (United States)

    Ahlgren, K.; Li, X.

    2017-12-01

    Systematic errors that reside in surface gravity datasets are one of the major hurdles in constructing a high-accuracy geoid model at high resolutions. The National Oceanic and Atmospheric Administration's (NOAA) National Geodetic Survey (NGS) has an extensive historical surface gravity dataset consisting of approximately 10 million gravity points that are known to have systematic biases at the mGal level (Saleh et al. 2013). As most relevant metadata is absent, estimating and removing these errors to be consistent with a global geopotential model and airborne data in the corresponding wavelength is quite a difficult endeavor. However, this is crucial to support a 1-cm accurate geoid model for the United States. With recently available independent gravity information from GRACE/GOCE and airborne gravity from the NGS Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, several different methods of bias estimation are investigated which utilize radial basis functions and wavelet decomposition. We estimate a surface gravity value by incorporating a satellite gravity model, airborne gravity data, and forward-modeled topography at wavelet levels according to each dataset's spatial wavelength. Considering the estimated gravity values over an entire gravity survey, an estimate of the bias and/or correction for the entire survey can be found and applied. In order to assess the accuracy of each bias estimation method, two techniques are used. First, each bias estimation method is used to predict the bias for two high-quality (unbiased and high accuracy) geoid slope validation surveys (GSVS) (Smith et al. 2013 & Wang et al. 2017). Since these surveys are unbiased, the various bias estimation methods should reflect that and provide an absolute accuracy metric for each of the bias estimation methods. Secondly, the corrected gravity datasets from each of the bias estimation methods are used to build a geoid model. The accuracy of each geoid model

  18. Adaptive order search and tangent-weighted trade-off for motion estimation in H.264

    Directory of Open Access Journals (Sweden)

    Srinivas Bachu

    2018-04-01

    Full Text Available Motion estimation and compensation play a major role in video compression by reducing the temporal redundancies of the input videos. A variety of block search patterns have been developed for matching the blocks with reduced computational complexity, without affecting the visual quality. In this paper, block motion estimation is achieved by integrating the square and hexagonal search patterns with adaptive order. The proposed algorithm, called AOSH (Adaptive Order Square Hexagonal) search, finds the best matching block with a reduced number of search points. The searching function is formulated as a trade-off criterion here, and a tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are effectively applied to the block estimation process to enhance the visual quality and the compression performance. The proposed method is validated using three videos, namely football, garden and tennis. The quantitative performance of the proposed method and the existing methods is analysed using the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR). The results prove that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding

  19. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with quasi-likelihood (QL) and asymptotic quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic quasi-likelihood (AQL), Martingale difference, Kernel estimator

  20. Flexible regression models for estimating postmortem interval (PMI) in forensic medicine.

    Science.gov (United States)

    Muñoz Barús, José Ignacio; Febrero-Bande, Manuel; Cadarso-Suárez, Carmen

    2008-10-30

    Correct determination of time of death is an important goal in forensic medicine. Numerous methods have been described for estimating postmortem interval (PMI), but most are imprecise, poorly reproducible and/or have not been validated with real data. In recent years, however, some progress in PMI estimation has been made, notably through the use of new biochemical methods for quantifying relevant indicator compounds in the vitreous humour. The best, but unverified, results have been obtained with [K+] and hypoxanthine [Hx], using simple linear regression (LR) models. The main aim of this paper is to offer more flexible alternatives to LR, such as generalized additive models (GAMs) and support vector machines (SVMs) in order to obtain improved PMI estimates. The present study, based on detailed analysis of [K+] and [Hx] in more than 200 vitreous humour samples from subjects with known PMI, compared classical LR methodology with GAM and SVM methodologies. Both proved better than LR for estimation of PMI. SVM showed somewhat greater precision than GAM, but GAM offers a readily interpretable graphical output, facilitating understanding of findings by legal professionals; there are thus arguments for using both types of models. R code for these methods is available from the authors, permitting accurate prediction of PMI from vitreous humour [K+], [Hx] and [U], with confidence intervals and graphical output provided. Copyright 2008 John Wiley & Sons, Ltd.

  1. Higher Order Improvements for Approximate Estimators

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Salanié, Bernard

    Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased, and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...

  2. Oracle estimation of parametric models under boundary constraints.

    Science.gov (United States)

    Wong, Kin Yau; Goldberg, Yair; Fine, Jason P

    2016-12-01

    In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference which adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if they lie in the interior, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.

  3. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    The aero-engine is a complex mechano-electronic system, and in the reliability analysis of such systems the Weibull distribution model plays an irreplaceable role. To date, only the two-parameter and three-parameter Weibull distribution models are widely used. Owing to the diversity of engine failure modes, a single Weibull distribution model incurs large errors; by contrast, a mixed Weibull distribution model can take a variety of engine failure modes into account, making it a good statistical analysis model. In addition to the concept of a dynamic weight coefficient, a three-parameter correlation-coefficient optimization method is applied to enhance the Weibull distribution model and make the reliability estimation results more accurate, greatly improving the precision of the mixed-distribution reliability model. All of this is advantageous for popularizing the Weibull distribution model in engineering applications.

  4. Reduced order modeling and parameter identification of a building energy system model through an optimization routine

    International Nuclear Information System (INIS)

    Harish, V.S.K.V.; Kumar, Arun

    2016-01-01

    Highlights: • A BES model based on 1st principles is developed and solved numerically. • Parameters of the lumped capacitance model are fitted using the proposed optimization routine. • Validations are shown for different types of building construction elements. • Step-response excitations for outdoor air temperature and relative humidity are analyzed. - Abstract: Different control techniques, together with intelligent building technology (Building Automation Systems), are used to improve the energy efficiency of buildings. In almost all control projects, it is crucial to have building energy models with high computational efficiency in order to design and tune the controllers and simulate their performance. In this paper, a set of partial differential equations is formulated accounting for energy flow within the building space. These equations are then solved as conventional finite difference equations using the Crank–Nicolson scheme. Such a higher-order model is regarded as the benchmark model. An optimization algorithm, depicted through a flowchart, has been developed which minimizes the sum squared error between the step responses of the numerical and the optimal model. The optimal model of a construction element is simply an RC-network model whose R and C values are estimated using the non-linear time-invariant constrained optimization routine. The model is validated by comparing its step responses with those of two other RC-network models whose parameter values are selected based on certain criteria. Validations are shown for different types of building construction elements, viz. low, medium and heavy thermal capacity elements. Simulation results show that the optimal model follows the step responses of the numerical model more closely than the other two models.
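
    A minimal sketch of the parameter-fitting step described above: the R and C values of a first-order RC network are estimated by minimizing the sum squared error against a benchmark step response. The synthetic benchmark and starting values are hypothetical, and with a fixed steady-state gain only the product R*C is identifiable in this simplified form:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "benchmark" step response of a wall element: indoor-surface
# temperature rise after a 10 K outdoor step (stand-in for the numerical model).
t = np.linspace(0, 48 * 3600, 500)                 # 48 h, in seconds
tau_true = 6.0 * 3600.0
bench = 10.0 * (1.0 - np.exp(-t / tau_true)) + np.random.normal(0, 0.05, t.size)

def step_response(params, t):
    # First-order RC network with time constant tau = R*C and unit gain.
    R, C = params
    return 10.0 * (1.0 - np.exp(-t / (R * C)))

def residual(params):
    return step_response(params, t) - bench

# Sum-squared-error minimization over (R, C), as in the optimization routine.
fit = least_squares(residual, x0=[1.0, 1e4], bounds=(1e-6, np.inf))
R_hat, C_hat = fit.x
print("fitted time constant [h]:", R_hat * C_hat / 3600.0)  # ~6 h expected
```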

  5. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and a high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three discretization methods of different order: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods with regard to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate their usefulness. © 2012, The International Biometric Society.
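
    The two-stage idea generalizes readily. The sketch below applies it to a toy logistic ODE (not the paper's HIV model): a smoothing spline estimates the state, and the trapezoidal-rule discretization turns the ODE into a linear regression for the parameters:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Toy example: estimate (a, b) in the logistic ODE x' = a*x - b*x^2 from
# noisy observations. The ODE and all values are illustrative.
rng = np.random.default_rng(0)
a_true, b_true, x0 = 1.0, 0.2, 0.5
t = np.linspace(0, 10, 60)
x_true = a_true / (b_true + (a_true / x0 - b_true) * np.exp(-a_true * t))
y = x_true + rng.normal(0, 0.02, t.size)

# Stage 1: penalized-spline smoothing of the state trajectory.
x_hat = UnivariateSpline(t, y, k=3, s=0.05)(t)

# Stage 2: the trapezoidal rule turns the ODE into a regression,
#   x_{i+1} - x_i = (h/2) * (f(x_i) + f(x_{i+1})),
# which is linear in (a, b) for the logistic right-hand side.
h = t[1] - t[0]
lhs = x_hat[1:] - x_hat[:-1]
col_a = 0.5 * h * (x_hat[:-1] + x_hat[1:])
col_b = -0.5 * h * (x_hat[:-1] ** 2 + x_hat[1:] ** 2)
theta, *_ = np.linalg.lstsq(np.column_stack([col_a, col_b]), lhs, rcond=None)
print("estimated (a, b):", theta)   # should land near (1.0, 0.2)
```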

  6. A termination criterion for parameter estimation in stochastic models in systems biology.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2015-11-01

    Parameter estimation procedures are a central aspect of modeling approaches in systems biology. They are often computationally expensive, especially when the models take stochasticity into account. Typically, parameter estimation involves the iterative optimization of an objective function that describes how well the model fits some measured data with a certain set of parameter values. In order to limit the computational expense, it is therefore important to apply an adequate stopping criterion for the optimization process, so that the optimization continues at least until a reasonable fit is obtained, but not much longer. In the case of stochastic modeling, at least some parameter estimation schemes involve an objective function that is itself a random variable. This means that plain convergence tests are not a priori suitable as stopping criteria. This article suggests a termination criterion suited to optimization problems in parameter estimation arising from stochastic models in systems biology. The termination criterion is developed for optimization algorithms that involve populations of parameter sets, such as particle swarm or evolutionary algorithms. It is based on comparing the variance of the objective function over the whole population of parameter sets with the variance of repeated evaluations of the objective function at the best parameter set. The performance is demonstrated for several different algorithms. To test the termination criterion we choose polynomial test functions as well as systems biology models such as an immigration-death model and a bistable genetic toggle switch. The genetic toggle switch is an especially challenging test case, as it shows stochastic switching between two steady states, which is qualitatively different from the behavior of the corresponding deterministic model. Copyright © 2015. Published by Elsevier Ireland Ltd.
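
    In the spirit of this criterion, a population-based optimizer can stop once the spread of objective values across the population is no larger than the spread produced by re-evaluating the stochastic objective at the best parameter set. The helper below is a sketch of that comparison, not the paper's exact statistic:

```python
import numpy as np

def should_stop(population_objectives, objective, best_params, n_repeat=20):
    """Stop when the objective's spread over the population is no larger
    than the spread caused by the stochastic objective itself at the best
    parameter set. (Sketch only; the paper's statistic may differ.)"""
    var_population = np.var(population_objectives)
    repeats = [objective(best_params) for _ in range(n_repeat)]
    var_noise = np.var(repeats)
    return var_population <= var_noise

# Usage inside a population-based optimizer (e.g. particle swarm):
#   objs = [noisy_objective(p) for p in swarm]
#   if should_stop(objs, noisy_objective, swarm[int(np.argmin(objs))]):
#       break
```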

  7. Synthesis of models for order-sorted first-order theories using linear algebra and constraint solving

    Directory of Open Access Journals (Sweden)

    Salvador Lucas

    2015-12-01

    Full Text Available Recent developments in termination analysis for declarative programs emphasize the use of appropriate models for the logical theory representing the program at stake as a generic approach to proving termination. In this setting, Order-Sorted First-Order Logic provides a powerful framework to represent declarative programs. It also provides a target logic in which to obtain models for other logics via transformations. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis, in particular in termination analysis of declarative programs. We use convex domains to give domains to the different sorts of an order-sorted signature; we interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra and arithmetic constraint solving to synthesize the models.

  8. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  9. A self-organizing state-space-model approach for parameter estimation in hodgkin-huxley-type models of single neurons.

    Directory of Open Access Journals (Sweden)

    Dimitrios V Vavoulis

    Full Text Available Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent), in order to minimize the value of a cost function which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which was applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function and it was straightforward to apply to compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a

  10. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of the comparison sets. In particular, we found that for unbiased comparison sets of a given cardinality, where in expectation each comparison set of that cardinality occurs the same number of times, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of the comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of the comparison sets. We report an empirical evaluation of some claims and key parameters revealed by the theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.

  11. Estimating methane emissions from landfills based on rainfall, ambient temperature, and waste composition: The CLEEN model.

    Science.gov (United States)

    Karanjekar, Richa V; Bhatt, Arpita; Altouqui, Said; Jangikhatoonabad, Neda; Durai, Vennila; Sattler, Melanie L; Hossain, M D Sahadat; Chen, Victoria

    2015-12-01

    Accurately estimating landfill methane emissions is important for quantifying a landfill's greenhouse gas emissions and power generation potential. Current models, including LandGEM and IPCC, often greatly simplify the treatment of factors like rainfall and ambient temperature, which can substantially impact gas production. The newly developed Capturing Landfill Emissions for Energy Needs (CLEEN) model aims to improve landfill methane generation estimates, yet requires only inputs that are fairly easy to obtain: waste composition, annual rainfall, and ambient temperature. To develop the model, methane generation was measured from 27 laboratory-scale landfill reactors with varying waste compositions (ranging from 0% to 100%); average rainfall rates of 2, 6, and 12 mm/day; and temperatures of 20, 30, and 37°C, according to a statistical experimental design. The refuse components considered were the major biodegradable wastes (food, paper, yard/wood, and textile) as well as inert inorganic waste. Based on the data collected, a multiple linear regression equation (R^2 = 0.75) was developed to predict first-order methane generation rate constants k as functions of waste composition, annual rainfall, and temperature. Because laboratory methane generation rates exceed field rates, a second scale-up regression equation for k was developed using actual gas-recovery data from 11 landfills in high-income countries with conventional operation. The CLEEN model was developed by incorporating both regression equations into the first-order decay based model for estimating methane generation rates from landfills. CLEEN model values were compared to actual field data from 6 US landfills, and to estimates from LandGEM and IPCC. In 4 of the 6 cases, the CLEEN model estimates were the closest to the actual values. Copyright © 2015 Elsevier Ltd. All rights reserved.
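
    The first-order decay backbone that CLEEN builds on is easy to sketch. In the code below each year's deposited waste contributes methane at a rate that decays exponentially with age; the rate constant k would come from the CLEEN regressions (whose coefficients are not reproduced here), and all input values are hypothetical:

```python
import numpy as np

def methane_generation(waste_by_year, k, L0, years_out):
    """First-order decay model of the LandGEM type: waste mass M (Mg)
    deposited in year t_dep contributes k * L0 * M * exp(-k * age) m^3 of
    CH4 per year. In CLEEN, k is predicted by regression from waste
    composition, rainfall and temperature."""
    q = np.zeros(len(years_out))
    for t_dep, mass in waste_by_year.items():
        age = years_out - t_dep
        q += np.where(age >= 0,
                      k * L0 * mass * np.exp(-k * np.clip(age, 0, None)),
                      0.0)
    return q

# Hypothetical inputs: 50,000 Mg/yr deposited for 10 years; k and L0 assumed.
waste = {year: 5e4 for year in range(2000, 2010)}
years = np.arange(2000, 2031)
q_ch4 = methane_generation(waste, k=0.05, L0=100.0, years_out=years)
print("peak annual generation [m^3/yr]:", q_ch4.max())
```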

  12. Estimating order-picking times for return heuristic - equations and simulations

    Directory of Open Access Journals (Sweden)

    Grzegorz Tarczyński

    2015-09-01

    Full Text Available Background: A key element of the evaluation of warehouse operation is the average order-picking time. In warehouses where the order-picking process is carried out according to the "picker-to-part" rule, the order-picking time is usually proportional to the distance covered by the picker while picking items. This distance can be estimated by simulations or by using mathematical equations. Only one-block rectangular warehouses, the type best described in the literature, are considered in the paper. Material and methods: For one-block rectangular warehouses there are five well-known routing heuristics. The paper considers the return heuristic in two variants. It presents the well-known equations of Hall and De Koster for the average distance traveled by the picker while completing the items on one pick list, and the author presents his own proposals for calculating the expected distance. Results: The results calculated with the mathematical equations (the formulas of Hall, De Koster and the author's own propositions) were compared with the average values obtained from computer simulations. In most cases the average error does not exceed 1% (except for Hall's equations). The simulations were carried out with the Warehouse Real-Time Simulator software. Conclusions: The order-picking time is a function of many variables and its optimization is not easy. It can be done in two stages: first, using mathematical equations, the set of potentially best variants is established; next, the results are verified using simulations. The results calculated with the equations are not precise, but can be obtained immediately. The simulations are more time-consuming, but allow the order-picking process to be analyzed more accurately.
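
    The simulation side of such a comparison is straightforward to reproduce. The sketch below estimates the average picking distance under a return-type heuristic by Monte Carlo; the warehouse geometry is deliberately simplified and all dimensions are invented, so it is a stand-in for the models compared in the paper rather than any of them:

```python
import numpy as np

rng = np.random.default_rng(1)

def return_heuristic_distance(aisles, depths, aisle_gap=3.0):
    """Tour length under the return heuristic in a one-block warehouse with
    the depot at the front of aisle 0: walk the front cross-aisle, enter each
    aisle that contains picks up to the deepest pick, and come back."""
    in_aisle = sum(2.0 * depths[aisles == a].max() for a in np.unique(aisles))
    cross = 2.0 * aisles.max() * aisle_gap   # out to the farthest aisle and home
    return in_aisle + cross

# Monte Carlo estimate of the average picking distance for 10-line orders
# drawn uniformly over 10 aisles of 30 m length.
n_aisles, aisle_len, picks = 10, 30.0, 10
dists = []
for _ in range(10_000):
    aisles = rng.integers(0, n_aisles, picks)
    depths = rng.uniform(0.0, aisle_len, picks)
    dists.append(return_heuristic_distance(aisles, depths))
print("simulated mean distance [m]:", np.mean(dists))
```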

  13. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  14. Estimation of a Model's Marginal Likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    Science.gov (United States)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through a model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computational burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: the marginal likelihoods repeatedly estimated by TIE show significantly less variability than those estimated by the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.

  15. Time and order estimation of paintings based on visual features and expert priors

    Science.gov (United States)

    Cabral, Ricardo S.; Costeira, João P.; de La Torre, Fernando; Bernardino, Alexandre; Carneiro, Gustavo

    2011-03-01

    Time and order are considered crucial information in the art domain and are the subject of many research efforts by historians. In this paper, we present a framework for estimating the ordering and date information of paintings and drawings. We formulate this problem as an embedding into a one-dimensional manifold, which aims to place paintings far from or close to each other according to a measure of similarity. Our formulation can be seen as a manifold learning algorithm, albeit properly adapted to deal with existing questions in the art community. To solve this problem, we propose an approach based on Laplacian Eigenmaps and a convex optimization formulation. Both methods are able to incorporate art expertise as priors on the estimation, in the form of constraints. Types of information include exact or approximate dating and partial orderings. We explore the use of soft penalty terms to allow for constraint violation, to account for the fact that prior knowledge may contain small errors. Our problem is tested within the scope of the PrintART project, which aims to assist art historians in tracing Portuguese tile art "Azulejos" back to the engravings that inspired them. Furthermore, we describe other possible applications where time information (and hence, this method) could be of use in art history, fake detection or curatorial treatment.

  16. Deterministic three-half-order kinetic model for microbial degradation of added carbon substrates in soil

    International Nuclear Information System (INIS)

    Brunner, W.; Focht, D.D.

    1984-01-01

    The kinetics of mineralization of carbonaceous substrates has been explained by a deterministic model which is applicable to either growth or non-growth conditions in soils. The mixed-order nature of the model does not require a priori decisions about reaction order, discontinuity periods of lag or stationary phase, or correction for endogenous mineralization rates. The integrated equation is simpler than the integrated form of the Monod equation for the following reasons: (i) only two, rather than four, interdependent constants have to be determined by nonlinear regression analysis, (ii) substrate or product formation can be expressed explicitly as a function of time, (iii) the biomass concentration does not have to be known, and (iv) the required initial estimate for the nonlinear regression analysis can be easily obtained from a linearized form rather than from an interval estimate of a differential equation. 14CO2 evolution data from soil have been fitted to the model equation. All data except those from irradiated soil gave better fits by residual sum of squares (RSS) when growth in soil was assumed to be linear (RSS = 0.71) rather than exponential (RSS = 2.87). The underlying reasons for growth (exponential versus linear), no growth, and relative degradation rates of substrates are consistent with the basic mechanisms from which the model is derived. 21 references

  17. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  18. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
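
    The reduction described in this record can be illustrated with the generic state-pair augmentation, in which a second-order HMM becomes a first-order HMM over composite states. This is the standard construction and is not claimed to be Hadar's exact transformation; all probability tables below are hypothetical inputs:

```python
import numpy as np

EPS = 1e-300  # avoid log(0) for forbidden transitions

def viterbi(pi, A, B, obs):
    """Standard Viterbi decoder for a first-order HMM, in log space."""
    S = len(pi)
    logd = np.log(pi + EPS) + np.log(B[:, obs[0]] + EPS)
    back = np.zeros((len(obs), S), dtype=int)
    for t in range(1, len(obs)):
        scores = logd[:, None] + np.log(A + EPS) + np.log(B[:, obs[t]] + EPS)[None, :]
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0)
    path = [int(logd.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def decode_second_order(pi2, A2, B, obs):
    """Decode a second-order HMM via state-pair augmentation: composite
    states (i, j) form a first-order chain with
    P((j, k) | (i, j)) = A2[i, j, k] and emission B[j].
    pi2[i, j] is the joint law of the first two states."""
    S = B.shape[0]
    A1 = np.zeros((S * S, S * S))
    for i in range(S):
        for j in range(S):
            for k in range(S):
                A1[i * S + j, j * S + k] = A2[i, j, k]
    B1 = np.array([B[p % S] for p in range(S * S)])    # pair (i, j) emits like j
    pi1 = (pi2 * B[:, obs[0]][:, None]).reshape(-1)    # fold obs[0] into the start
    # Decode over pairs from the second symbol on, then unfold single states.
    pairs = viterbi(pi1, A1, B1, obs[1:])
    return [pairs[0] // S] + [p % S for p in pairs]
```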

  19. On-board adaptive model for state of charge estimation of lithium-ion batteries based on Kalman filter with proportional integral-based error adjustment

    Science.gov (United States)

    Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai

    2017-10-01

    With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behavior. This paper therefore focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. First, a first-order equivalent circuit battery model is employed to describe the battery's dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, ensuring filter stability and reducing the convergence time. Third, an extended Kalman filter (EKF) is applied to estimate the battery SOC and model parameters on-line. Considering that the EKF is essentially a first-order Taylor approximation of the battery model, which contains inevitable model errors, a proportional integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment can provide a robust and accurate battery model and on-line parameter estimation.
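
    A minimal sketch of one EKF step for such a first-order (Thevenin) model is given below. The circuit parameters, noise covariances, and the linear OCV(SOC) curve are hypothetical stand-ins, and the paper's recursive-least-squares identification and PI correction loop are not reproduced:

```python
import numpy as np

# Hypothetical first-order RC (Thevenin) parameters; in the paper these are
# identified on-line and corrected by the PI-based error adjustment.
R0, R1, C1, Q_cap, dt = 0.01, 0.015, 2400.0, 2.0 * 3600.0, 1.0
a = np.exp(-dt / (R1 * C1))

ocv = lambda s: 3.2 + 0.7 * s            # toy linear OCV(SOC) curve
docv = lambda s: 0.7                     # its slope, used in the Jacobian

x = np.array([0.9, 0.0])                 # state: [SOC, RC-branch voltage V1]
P = np.diag([1e-2, 1e-3])
Qn = np.diag([1e-7, 1e-6])               # process noise covariance
Rn = 1e-4                                # measurement (voltage) noise variance

def ekf_step(x, P, current, v_meas):
    # Predict: coulomb counting for SOC, exponential decay for V1.
    x_pred = np.array([x[0] - dt * current / Q_cap,
                       a * x[1] + R1 * (1 - a) * current])
    F = np.diag([1.0, a])
    P_pred = F @ P @ F.T + Qn
    # Update against the terminal-voltage measurement.
    v_model = ocv(x_pred[0]) - x_pred[1] - R0 * current
    H = np.array([docv(x_pred[0]), -1.0])
    S = H @ P_pred @ H + Rn
    K = P_pred @ H / S
    x_new = x_pred + K * (v_meas - v_model)
    P_new = (np.eye(2) - np.outer(K, H)) @ P_pred
    return x_new, P_new

x, P = ekf_step(x, P, current=1.0, v_meas=3.78)
print("SOC estimate:", x[0])
```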

  20. Model-order reduction of lumped parameter systems via fractional calculus

    Science.gov (United States)

    Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio

    2018-04-01

    This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
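
    As a concrete example of the discrete fractional operators such reduced models are built on, the sketch below implements the Grünwald-Letnikov approximation of an order-alpha derivative and checks it against the known half-order derivative of f(t) = t. The implementation is illustrative, not the paper's:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j),
    computed by the standard stable recursion."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f, alpha, h):
    """D^alpha f(t_k) ~ h^(-alpha) * sum_{j<=k} w_j * f_{k-j}."""
    w = gl_weights(alpha, len(f))
    return np.array([w[: k + 1] @ f[k::-1] for k in range(len(f))]) / h ** alpha

t = np.linspace(0.0, 2.0, 201)
d_half = gl_derivative(t, alpha=0.5, h=t[1] - t[0])
# For f(t) = t the exact half-order derivative is 2*sqrt(t/pi).
print(d_half[-1], 2.0 * np.sqrt(t[-1] / np.pi))
```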

  1. Application of Higher Order Fission Matrix for Real Variance Estimation in McCARD Monte Carlo Eigenvalue Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-05-15

    In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally such as pin power differs considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method, in which the fission matrix (FM) and fission source density (FSD) are the operator and the solution. The FM is useful for estimating variances and covariances because it can be calculated from a few cycles, even inactive ones. Recently, S. Carney et al. implemented higher order fission matrix (HOFM) capabilities into the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability based on the Hotelling deflation method was implemented into McCARD and used to predict the behavior of the real-to-apparent standard deviation (SD) ratio. In simple 1D slab problems, Endo's theoretical model predicts the real-to-apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher-mode fission source solutions obtained by the HOFM yields a much better real-to-apparent SD ratio than that with the analytic solutions. In the near future, an application to a high dominance ratio problem such as the BEAVRS benchmark will be conducted.

  2. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of a static state estimator for the power transmission network are presented. The solution process for the mathematical model, including the treatment of measurement errors, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
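
    The core of most static state estimators is a weighted least squares fit of the state to redundant measurements. The sketch below shows the linearized (DC) version on a hypothetical three-bus example; the measurement matrix, values, and weights are invented for illustration and are not from the paper:

```python
import numpy as np

# Linearized (DC) state estimation: measurements z = H x + e, with weights
# from the measurement variances. States x are bus voltage angles.
H = np.array([[ 1.0, -1.0,  0.0],     # line flow 1-2
              [ 0.0,  1.0, -1.0],     # line flow 2-3
              [ 1.0,  0.0, -1.0],     # line flow 1-3
              [ 1.0,  0.0,  0.0]])    # angle pseudo-measurement at bus 1
z = np.array([0.55, 0.30, 0.86, 0.0])
W = np.diag(1.0 / np.array([1e-4, 1e-4, 1e-4, 1e-6]))  # weights = 1/sigma^2

# Weighted least squares via the normal equations: x = (H'WH)^{-1} H'W z.
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
residuals = z - H @ x_hat   # residual processing is what flags bad measurements
print("estimated bus angles:", x_hat)
```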

  3. Linear models of coregionalization for multivariate lattice data: Order-dependent and order-free cMCARs.

    Science.gov (United States)

    MacNab, Ying C

    2016-08-01

    This paper concerns multivariate conditional autoregressive models defined by linear combinations of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and across variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework that includes order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressives is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing the deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on a spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.

  4. Fast prediction and evaluation of eccentric inspirals using reduced-order models

    Science.gov (United States)

    Barta, Dániel; Vasúth, Mátyás

    2018-06-01

    A large number of theoretically predicted waveforms are required by matched-filtering searches for the gravitational-wave signals produced by compact binary coalescence. In order to substantially alleviate the computational burden in gravitational-wave searches and parameter estimation without degrading signal detectability, we propose a novel reduced-order-model (ROM) approach with applications to adiabatic 3PN-accurate inspiral waveforms of nonspinning sources that evolve on either highly or slightly eccentric orbits. We provide a singular-value decomposition-based reduced-basis method in the frequency domain to generate reduced-order approximations of any gravitational waves with acceptable accuracy and precision within the parameter range of the model. We construct efficient reduced bases comprised of a relatively small number of the most relevant waveforms over the three-dimensional parameter space covered by the template bank (total mass 2.15 M⊙ ≤ M ≤ 215 M⊙, mass ratio 0.01 ≤ q ≤ 1, and initial orbital eccentricity 0 ≤ e0 ≤ 0.95). The ROM is designed to predict signals in the frequency band from 10 Hz to 2 kHz for aLIGO and aVirgo design sensitivity. Besides moderating the data reduction, finer sampling of fiducial templates improves the accuracy of the surrogates. A considerable increase in speedup, from several hundreds to thousands, can be achieved by evaluating surrogates for low-mass systems, especially when combined with high eccentricity.
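
    The reduced-basis construction described above can be sketched in a few lines: an SVD of a matrix of training waveforms, truncated at a tolerance. The chirp-like "waveforms" below are toy stand-ins for the 3PN templates, which are not reproduced here.

    ```python
    import numpy as np

    # Toy stand-ins for the templates over (M, q, e0); a real training set
    # would hold physical waveforms sampled on a common frequency grid.
    rng = np.random.default_rng(0)
    freqs = np.linspace(10.0, 2000.0, 2000)
    params = rng.uniform([2.15, 0.01, 0.0], [215.0, 1.0, 0.95], size=(200, 3))
    training = np.array([np.exp(1j * (p[0] * 1e4 * freqs**(-5.0 / 3.0)
                                      + p[2] * 1e-2 * freqs))
                         for p in params]).T     # shape (n_freqs, n_waveforms)

    def build_reduced_basis(training, tol=1e-6):
        """Keep the left singular vectors needed to reach a truncation tolerance."""
        U, s, _ = np.linalg.svd(training, full_matrices=False)
        energy = 1.0 - np.cumsum(s**2) / np.sum(s**2)   # discarded relative energy
        k = int(np.argmax(energy <= tol)) + 1
        return U[:, :k]

    basis = build_reduced_basis(training)
    coeffs = basis.conj().T @ training[:, 0]     # compressed representation
    reconstruction = basis @ coeffs              # surrogate evaluation
    ```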

  5. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimates (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by the macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables forecasting of the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimates is a method allowing for a full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.

  6. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  7. A Physically-Based Geometry Model for Transport Distance Estimation of Rainfall-Eroded Soil Sediment

    Directory of Open Access Journals (Sweden)

    Qian-Gui Zhang

    2016-01-01

    Full Text Available Estimations of rainfall-induced soil erosion are mostly derived from the weight of sediment measured in natural runoff. The transport distance of eroded soil is important for evaluating landscape evolution but is difficult to estimate, mainly because it cannot be linked directly to the eroded sediment weight. The volume of eroded soil is easier to calculate visually using popular imaging tools, which can aid in estimating the transport distance of eroded soil through geometry relationships. In this study, we present a straightforward geometry model to predict the maximum sediment transport distance incurred by rainfall events of various intensity and duration. In order to verify our geometry prediction model, a series of experiments are reported in the form of sediment volumes. The results show that cumulative rainfall has a linear relationship with the total volume of eroded soil. The geometry model can accurately estimate the maximum transport distance of eroded soil from cumulative rainfall, with a low root-mean-square error (4.7–4.8) and a strong linear correlation (0.74–0.86).

  8. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or

  9. Robust estimation for ordinary differential equation models.

    Science.gov (United States)

    Cao, J; Wang, L; Xu, J

    2011-12-01

    Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
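
    A much-simplified stand-in for the robust estimation idea is sketched below: it keeps a robust (soft-L1) loss but integrates the ODE directly with scipy instead of using the paper's penalized-smoothing, two-level optimization with basis functions. The predator-prey system, parameter values, and noise/outlier model are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def lotka_volterra(t, y, a, b, c, d):
        x, z = y
        return [a * x - b * x * z, c * x * z - d * z]

    def residuals(theta, t_obs, y_obs):
        sol = solve_ivp(lotka_volterra, (t_obs[0], t_obs[-1]), y_obs[0],
                        t_eval=t_obs, args=tuple(theta))
        return (sol.y.T - y_obs).ravel()

    rng = np.random.default_rng(1)
    t_obs = np.linspace(0.0, 15.0, 60)
    true = (1.0, 0.1, 0.075, 1.5)
    clean = solve_ivp(lotka_volterra, (0.0, 15.0), [10.0, 5.0],
                      t_eval=t_obs, args=true).y.T
    y_obs = clean + rng.normal(0.0, 0.5, clean.shape)
    y_obs[::15] += 8.0                       # inject outliers

    # soft_l1 down-weights the outliers relative to plain least squares
    fit = least_squares(residuals, x0=[0.5, 0.05, 0.05, 1.0],
                        args=(t_obs, y_obs), loss='soft_l1', f_scale=1.0)
    print(fit.x)                             # robust estimates of (a, b, c, d)
    ```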

  10. Spiking and bursting patterns of fractional-order Izhikevich model

    Science.gov (United States)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed the different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phases of the burst increase as the fractional-order model deviates further from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
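
    The memory trace described above falls out of the L1 discretization of the Caputo derivative: every past voltage increment enters each new step. The sketch below applies that scheme to the voltage equation only, keeping the recovery variable integer-order for brevity, which is a simplification of the published model.

    ```python
    import numpy as np
    from math import gamma

    def fractional_izhikevich(alpha, T=500.0, h=0.1, I=10.0,
                              a=0.02, b=0.2, c=-65.0, d=8.0):
        """L1-scheme integration of the Izhikevich model with a Caputo
        derivative of order alpha on the voltage equation."""
        n = int(T / h)
        v = np.full(n, -65.0)
        u = np.full(n, b * -65.0)
        coeff = h**alpha * gamma(2.0 - alpha)
        for i in range(1, n):
            # memory trace: weighted sum of all past voltage increments
            j = np.arange(0, i - 1)
            w = (i - j)**(1.0 - alpha) - (i - 1 - j)**(1.0 - alpha)
            trace = np.dot(w, v[1:i] - v[:i - 1]) if i > 1 else 0.0
            dv = 0.04 * v[i - 1]**2 + 5.0 * v[i - 1] + 140.0 - u[i - 1] + I
            v[i] = v[i - 1] + coeff * dv - trace
            u[i] = u[i - 1] + h * a * (b * v[i - 1] - u[i - 1])
            if v[i] >= 30.0:              # spike: reset, memory kept in trace
                v[i] = c
                u[i] += d
        return v, u

    v, u = fractional_izhikevich(alpha=0.8)   # alpha=1 recovers the classic model
    ```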

  11. Offline estimation of decay time for an optical cavity with a low pass filter cavity model.

    Science.gov (United States)

    Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C

    2012-08-01

    This Letter presents offline estimation results for the decay-time constant for an experimental Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The cavity dynamics are modeled in terms of a low pass filter (LPF) with unity DC gain. This model is used by an extended Kalman filter (EKF) along with the recorded light intensity at the output of the cavity in order to estimate the decay-time constant. The estimation results using the LPF cavity model are compared to those obtained using the quadrature model for the cavity presented in previous work by Kallapur et al. The estimation process derived using the LPF model comprises two states as opposed to three states in the quadrature model. When considering the EKF, this means propagating two states and a (2×2) covariance matrix using the LPF model, as opposed to propagating three states and a (3×3) covariance matrix using the quadrature model. This gives the former model a computational advantage over the latter and leads to faster execution times for the corresponding EKF. It is shown in this Letter that the LPF model for the cavity with two filter states is computationally more efficient, converges faster, and is hence a more suitable method than the three-state quadrature model presented in previous work for real-time estimation of the decay-time constant for the cavity.

  12. Estimation of genetic parameters related to eggshell strength using random regression models.

    Science.gov (United States)

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic effect was fitted with second-order Legendre polynomials and the permanent environment effect with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (>0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRMs suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.
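
    The random regressions in such a model are built on Legendre covariates of the standardized test week; a minimal sketch of constructing those covariates follows, with a hypothetical week range standing in for the study's recording scheme.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_covariates(weeks, order):
        """Legendre covariates for a random-regression model: standardize
        the test week to [-1, 1], then evaluate polynomials 0..order."""
        t = 2.0 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1.0
        return np.column_stack([legendre.Legendre.basis(k)(t)
                                for k in range(order + 1)])

    weeks = np.arange(20, 73)                   # hypothetical test weeks of lay
    Z_genetic = legendre_covariates(weeks, 2)   # second-order genetic regression
    Z_pe = legendre_covariates(weeks, 3)        # third-order permanent environment
    # Genetic covariance between weeks t and t' is then phi(t)' K phi(t'),
    # with K the estimated covariance matrix of the regression coefficients.
    ```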

  13. The Channel Estimation and Modeling in High Altitude Platform Station Wireless Communication Dynamic Network

    Directory of Open Access Journals (Sweden)

    Xiaoyang Liu

    2017-01-01

    Full Text Available In order to analyze the channel estimation performance of a near space high altitude platform station (HAPS) in a wireless communication system, the structure and formation of HAPS are studied in this paper. The traditional least squares (LS) channel estimation method and the singular value decomposition-linear minimum mean square (SVD-LMMS) channel estimation method are compared and investigated, and a novel channel estimation method and model are proposed. The channel estimation performance of HAPS is studied in depth. The simulation and theoretical analysis results show that the performance of the proposed method is better than that of the traditional methods: a lower bit error rate (BER) for a given signal-to-noise ratio (SNR) is obtained by the proposed method compared with the LS and SVD-LMMS methods.

  14. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    International Nuclear Information System (INIS)

    Cui, Xiao; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo; Mei, Chunbo

    2017-01-01

    In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on large misalignment angles suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS) by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we apply a Taylor series expansion to the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles. (paper)

  15. A unified model for transfer alignment at random misalignment angles based on second-order EKF

    Science.gov (United States)

    Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo

    2017-04-01

    In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on large misalignment angles suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS) by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we apply a Taylor series expansion to the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.

  16. A reduced order model of a quadruped walking system

    International Nuclear Information System (INIS)

    Sano, Akihito; Furusho, Junji; Naganuma, Nobuyuki

    1990-01-01

    Trot walking has recently been studied by several groups because of its stability and realizability. In the trot, diagonally opposed legs form pairs: while one pair of legs provides support, the other pair swings forward in preparation for the next step. In this paper, we propose a reduced order model for trot walking. The reduced order model is derived by using the two dominant modes of the closed loop system in which local feedback at each joint is implemented. It is shown by numerical examples that the obtained reduced order model approximates the original higher order model well. (author)

  17. Modeling, estimation and identification methods for static shape determination of flexible structures. [for large space structure design

    Science.gov (United States)

    Rodriguez, G.; Scheid, R. E., Jr.

    1986-01-01

    This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.

  18. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    Science.gov (United States)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying with operating condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy; however, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
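
    A generic recursive-least-squares updater with a forgetting factor, of the kind used for the online model adaptation step described above, can be sketched as follows; the regressor layout for the battery model is left as a comment because the paper's exact parameterization is not reproduced here.

    ```python
    import numpy as np

    class RecursiveLeastSquares:
        """RLS with forgetting factor, for online adaptation of battery
        model parameters (e.g. a discretized RC equivalent circuit)."""
        def __init__(self, n_params, lam=0.99, p0=1e3):
            self.theta = np.zeros(n_params)     # parameter estimates
            self.P = np.eye(n_params) * p0      # inverse correlation matrix
            self.lam = lam                      # forgetting factor

        def update(self, phi, y):
            # phi: regressor vector, y: measured terminal voltage sample
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)  # gain
            err = y - phi @ self.theta
            self.theta = self.theta + k * err
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta

    # Hypothetical ARX regressor for a first-order RC cell:
    # V_k = a1*V_{k-1} + b0*I_k + b1*I_{k-1} + c  =>  phi = [V_{k-1}, I_k, I_{k-1}, 1]
    rls = RecursiveLeastSquares(n_params=4)
    ```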

  19. Model-Based Estimation of Ankle Joint Stiffness.

    Science.gov (United States)

    Misgeld, Berno J E; Zhang, Tony; Lüken, Markus J; Leonhardt, Steffen

    2017-03-29

    We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model's inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements.

  20. Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model

    KAUST Repository

    Mo, Qianxing

    2010-01-29

    ChIP-chip experiments are procedures that combine chromatin immunoprecipitation (ChIP) and DNA microarray (chip) technology to study a variety of biological problems, including protein-DNA interaction, histone modification, and DNA methylation. The most important feature of ChIP-chip data is that the intensity measurements of probes are spatially correlated because the DNA fragments are hybridized to neighboring probes in the experiments. We propose a simple, but powerful Bayesian hierarchical approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic resolutions. The model parameters are estimated using the Gibbs sampler. The proposed method is illustrated using two publicly available data sets from Affymetrix and Agilent platforms, and compared with three alternative Bayesian methods, namely, Bayesian hierarchical model, hierarchical gamma mixture model, and Tilemap hidden Markov model. The numerical results indicate that the proposed method performs as well as the other three methods for the data from Affymetrix tiling arrays, but significantly outperforms the other three methods for the data from Agilent promoter arrays. In addition, we find that the proposed method has better operating characteristics in terms of sensitivities and false discovery rates under various scenarios. © 2010, The International Biometric Society.
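
    To make the Gibbs step concrete, the sketch below samples a first-order 1-D Ising model over probe states; the published model uses higher-order interactions and a full intensity likelihood, so this is only the skeleton of the sampler, with an invented per-probe evidence term.

    ```python
    import numpy as np

    def gibbs_ising_1d(field, beta, n_sweeps=200, seed=0):
        """Gibbs sampler for a first-order 1-D Ising model over probe
        states x_i in {-1, +1} (enriched vs. background); `field` is the
        per-probe evidence (e.g. a log-likelihood ratio from the intensity
        model). Only nearest-neighbour coupling is kept here."""
        rng = np.random.default_rng(seed)
        n = len(field)
        x = rng.choice([-1, 1], size=n)
        for _ in range(n_sweeps):
            for i in range(n):
                nb = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0)
                # conditional log-odds of x_i = +1 given its neighbours
                log_odds = np.clip(2.0 * (beta * nb + field[i]), -30.0, 30.0)
                p = 1.0 / (1.0 + np.exp(-log_odds))
                x[i] = 1 if rng.random() < p else -1
        return x

    rng = np.random.default_rng(1)
    evidence = rng.normal(-0.5, 1.0, size=200)   # invented per-probe evidence
    states = gibbs_ising_1d(evidence, beta=0.8)  # smoothed enrichment calls
    ```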

  1. Parameters estimation for reactive transport: A way to test the validity of a reactive model

    Science.gov (United States)

    Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme

    The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.

  2. Uncertainty estimation and ensemble forecast with a chemistry-transport model - Application to air-quality modeling and simulation

    International Nuclear Information System (INIS)

    Mallet, Vivien

    2005-01-01

    The thesis deals with the evaluation of a chemistry-transport model, not primarily through classical comparisons to observations, but through the estimation of its a priori uncertainties due to input data, model formulation and numerical approximations. These three uncertainty sources are studied respectively on the basis of Monte Carlo simulations, multi-model simulations and inter-comparisons of numerical schemes. A high uncertainty is found in output ozone concentrations. In order to overcome the limitations due to this uncertainty, one solution is ensemble forecasting. Through combinations of several models (up to forty-eight models) on the basis of past observations, the forecast can be significantly improved. This work has also led to the development of the innovative modelling system Polyphemus. (author) [fr]

  3. Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation

    International Nuclear Information System (INIS)

    Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže

    2015-01-01

    We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming the choice of fiducial model is independent of the data). Next we apply the approach to the estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first-order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in the true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2

  4. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    Science.gov (United States)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing a systematic factor and a human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly proposed models work well.

  5. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.

    2017-07-25

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.

  6. A Consistent Methodology Based Parameter Estimation for a Lactic Acid Bacteria Fermentation Model

    DEFF Research Database (Denmark)

    Spann, Robert; Roca, Christophe; Kold, David

    2017-01-01

    Lactic acid bacteria are used in many industrial applications, e.g. as starter cultures in the dairy industry or as probiotics, and research on their cell production is in high demand. A first principles kinetic model was developed to describe and understand the biological, physical, and chemical... mechanisms in a lactic acid bacteria fermentation. We present here a consistent approach for a methodology-based parameter estimation for a lactic acid fermentation. In the beginning, just an initial knowledge-based guess of parameters was available and an initial parameter estimation of the complete set... of parameters was performed in order to get a good model fit to the data. However, not all parameters are identifiable with the given data set and model structure. Sensitivity, identifiability, and uncertainty analyses were completed and a relevant identifiable subset of parameters was determined for a new...

  7. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.
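
    A surrogate of the kind described, i.e., a polynomial regression from fog-relevant features to optical depth, can be sketched as follows; the features, data, and quadratic form are stand-ins, not the model fitted in the paper.

    ```python
    import numpy as np

    # Hypothetical feature matrix: columns stand in for per-image features
    # (dark-channel, saturation-value, chroma); y is a reference optical depth.
    rng = np.random.default_rng(2)
    X = rng.random((500, 3))
    y = (1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2]
         + rng.normal(0.0, 0.05, 500))

    def quadratic_design(X):
        """Second-order polynomial expansion: 1, x_i, x_i*x_j."""
        n, d = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(d)]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)

    A = quadratic_design(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # surrogate model for depth
    depth_hat = A @ coef
    # A defogging step could then use transmission t = exp(-beta * depth_hat).
    ```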

  8. Default Bayesian Estimation of the Fundamental Frequency

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    Joint fundamental frequency and model order estimation is an important problem in several applications. In this paper, a default estimation algorithm based on a minimum of prior information is presented. The algorithm is developed in a Bayesian framework, and it can be applied to both real... Moreover, several approximations of the posterior distributions on the fundamental frequency and the model order are derived, and one of the state-of-the-art joint fundamental frequency and model order estimators is demonstrated to be a special case of one of these approximations. The performance...

  9. Reduced order modeling, statistical analysis and system identification for a bladed rotor with geometric mistuning

    Science.gov (United States)

    Vishwakarma, Vinod

    Modified Modal Domain Analysis (MMDA) is a novel method for the development of a reduced-order model (ROM) of a bladed rotor. This method utilizes proper orthogonal decomposition (POD) of Coordinate Measurement Machine (CMM) data of blade geometries and sector analyses using ANSYS. For the first time, a ROM of a geometrically mistuned industrial scale rotor (Transonic rotor) with a large Finite Element (FE) model is generated using MMDA. Two methods for estimating the mass and stiffness mistuning matrices are used: a) exact computation from sector FE analysis, and b) estimates based on POD mistuning parameters. Modal characteristics such as mistuned natural frequencies, mode shapes and forced harmonic response are obtained from the ROM for various cases, and the results are compared with full rotor ANSYS analysis and other ROM methods such as Subset of Nominal Modes (SNM) and Fundamental Model of Mistuning (FMM). The accuracy of the MMDA ROM is demonstrated with variations in the number of POD features and geometric mistuning parameters. It is shown for the aforementioned case b) that the high accuracy of the ROM observed in previous work with the Academic rotor does not directly translate to the Transonic rotor. Reasons for this mismatch are investigated and attributed to the higher mistuning in the Transonic rotor. Alternate solutions, such as estimation of sensitivities via least squares and interpolation of mass and stiffness matrices on manifolds, are developed and their results discussed. Statistics such as the mean and standard deviation of the forced harmonic response peak amplitude are obtained from random permutations, and are shown to give results similar to those of Monte Carlo simulations. These statistics are obtained and compared for a 3 degree of freedom (DOF) lumped parameter model (LPM) of a rotor, the Academic rotor and the Transonic rotor. A state estimator based on the MMDA ROM and a Kalman filter is also developed for offline or online estimation of the harmonic forcing function from

  10. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    /estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  11. A Hybrid of Optical Remote Sensing and Hydrological Modeling Improves Water Balance Estimation

    Science.gov (United States)

    Gleason, Colin J.; Wada, Yoshihide; Wang, Jida

    2018-01-01

    Declining gauging infrastructure and fractious water politics have decreased available information about river flows globally. Remote sensing and water balance modeling are frequently cited as potential solutions, but these techniques largely rely on these same in-decline gauge data to make accurate discharge estimates. A different approach is therefore needed, and we here combine remotely sensed discharge estimates made via at-many-stations hydraulic geometry (AMHG) and the PCR-GLOBWB hydrological model to estimate discharge over the Lower Nile. Specifically, we first estimate initial discharges from 87 Landsat images and AMHG (1984-2015), and then use these flow estimates to tune the model, all without using gauge data. The resulting tuned modeled hydrograph shows a large improvement in flow magnitude: validation of the tuned monthly hydrograph against a historical gauge (1978-1984) yields an RMSE of 439 m3/s (40.8%). By contrast, the original simulation had an order-of-magnitude flow error. This improvement is substantial but not perfect: tuned flows have a 1-2 month wet season lag and a negative base flow bias. Accounting for this 2 month lag yields a hydrograph RMSE of 270 m3/s (25.7%). Thus, our results show that coupling physical models and remote sensing is a promising first step and proof of concept toward future modeling of ungauged flows, especially as developments in cloud computing for remote sensing make our method easily applicable to any basin. Finally, we purposefully do not offer prescriptive solutions for Nile management, and rather hope that the methods demonstrated herein can prove useful to river stakeholders in managing their own water.

  12. Dynamical models of happiness with fractional order

    Science.gov (United States)

    Song, Lei; Xu, Shiyun; Yang, Jianying

    2010-03-01

    The present study focuses on a dynamical model of happiness described through fractional-order differential equations. By categorizing people of different personalities and different impact factors of memory (IFM) with different sets of model parameters, it is demonstrated via numerical simulations that such fractional-order models can exhibit various behaviors with and without external circumstances. Moreover, control and synchronization problems of this model are discussed, which correspond to the control of emotion as well as emotion synchronization in real life. This study is an endeavor to combine psychological knowledge with control problems and system theories, and some implications for psychotherapy as well as hints of a personal approach to life are both proposed.

  13. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator allows one to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LES to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  14. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all... the parameter values are optimal for simulation or prediction. The data originate from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature, and one output data series

  15. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2015-12-15

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined descriptions of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  16. Hybrid reduced order modeling for assembly calculations

    International Nuclear Information System (INIS)

    Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur

    2015-01-01

    Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculations. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined descriptions of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.

  17. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN so that it can supply quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  18. Cognitive profiles and heritability estimates in the Old Order Amish.

    Science.gov (United States)

    Kuehner, Ryan M; Kochunov, Peter; Nugent, Katie L; Jurius, Deanna E; Savransky, Anya; Gaudiot, Christopher; Bruce, Heather A; Gold, James; Shuldiner, Alan R; Mitchell, Braxton D; Hong, L Elliot

    2016-08-01

    This study aimed to establish the applicability of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) in the Old Order Amish (OOA) and to assess the genetic contribution toward the RBANS total score and its cognitive domains using a large family-based sample of OOA. RBANS data were collected from 103 OOA individuals from Lancaster County, Pennsylvania, including 85 individuals without psychiatric illness and 18 individuals with current psychiatric diagnoses. The RBANS total score and all five cognitive domain scores in nonpsychiatric OOA were within half an SD of the normative data of the general population. The RBANS total score was highly heritable (h2=0.51, P=0.019). OOA with psychiatric diagnoses had numerically lower RBANS total and domain scores compared with the nonpsychiatric participants. The RBANS appears to be a suitable cognitive battery for the OOA population, as measurements obtained from the OOA are comparable with normative data for the US population. The heritability estimated from the OOA is in line with heritabilities of other cognitive batteries estimated in other populations. These results support the use of the RBANS in cognitive assessment, clinical care, and behavioral genetic studies of neuropsychological functioning in this population.

  19. Direction-of-Arrival Estimation Based on Sparse Recovery with Second-Order Statistics

    Directory of Open Access Journals (Sweden)

    H. Chen

    2015-04-01

    Full Text Available Traditional direction-of-arrival (DOA) estimation techniques perform Nyquist-rate sampling of the received signals and as a result require high storage. To reduce the sampling rate, we introduce level-crossing (LC) sampling, which captures samples whenever the signal crosses predetermined reference levels; the LC-based analog-to-digital converter (LC ADC) has been shown to efficiently sample certain classes of signals. In this paper, we focus on the DOA estimation problem using second-order statistics based on the LC samples recorded on one sensor, along with the synchronous samples of the other sensors; a sparse angle-space scenario can be found by solving an ℓ1 minimization problem, giving the number of sources and their DOAs. The experimental results show that our proposed method, when compared with some existing norm-based constrained optimization compressive sensing (CS) algorithms as well as a subspace method, improves the DOA estimation performance, while using fewer samples than Nyquist-rate sampling and reducing sensor activity, especially for signals with long silences.
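
    A minimal on-grid sparse-recovery sketch is given below; it works on a single snapshot with ISTA rather than the paper's second-order-statistics formulation with LC sampling, so the array geometry, angle grid, and regularization weight are illustrative assumptions.

    ```python
    import numpy as np

    def steering_matrix(grid_deg, n_sensors, d=0.5):
        """ULA steering matrix over a dense angle grid (half-wavelength spacing)."""
        k = np.arange(n_sensors)[:, None]
        return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(grid_deg))[None, :])

    def ista_lasso(A, y, lam, n_iter=500):
        """ISTA for the complex lasso: min_s 0.5*||As - y||^2 + lam*||s||_1."""
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
        s = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            g = s - (A.conj().T @ (A @ s - y)) / L
            mag = np.abs(g)
            s = g * np.maximum(mag - lam / L, 0.0) / np.maximum(mag, 1e-12)
        return s

    grid = np.linspace(-90.0, 90.0, 181)
    A = steering_matrix(grid, n_sensors=8)
    rng = np.random.default_rng(3)
    y = (A[:, [60, 120]] @ np.array([1.0, 0.8])
         + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8)))
    s_hat = ista_lasso(A, y, lam=0.1)
    print(grid[np.abs(s_hat) > 0.2])         # estimated DOAs, near -30 and 30 deg
    ```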

  20. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    Science.gov (United States)

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.
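
    The estimation step can be reduced to an HMM forward recursion in which segments without probe reports simply skip the measurement update, which is one simple way to read the sparse-data handling described above; the state set, transition matrix, and emission model below are invented for illustration.

    ```python
    import numpy as np

    def forward_filter(pi0, A, B, obs):
        """HMM forward recursion: posterior over hidden traffic states
        (e.g. free-flow / slow / congested) given discretized FCD speeds.
        obs holds observation indices, with None where no probe reported."""
        alpha = pi0.copy()
        for o in obs:
            alpha = alpha @ A                # predict through transitions
            if o is not None:                # update only when probes report
                alpha = alpha * B[:, o]
            alpha /= alpha.sum()
        return alpha

    pi0 = np.array([0.5, 0.3, 0.2])
    A = np.array([[0.80, 0.15, 0.05],
                  [0.20, 0.60, 0.20],
                  [0.05, 0.25, 0.70]])
    B = np.array([[0.70, 0.25, 0.05],
                  [0.20, 0.60, 0.20],
                  [0.05, 0.25, 0.70]])
    posterior = forward_filter(pi0, A, B, obs=[0, None, 1, None, None, 2])
    ```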

  1. A time series model: First-order integer-valued autoregressive (INAR(1))

    Science.gov (United States)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. A time series model, the first-order integer-valued autoregressive (INAR(1)) model, is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on one past period of the process. The parameter of the model can be estimated by conditional least squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses a median or Bayesian forecasting methodology. The median forecasting methodology finds the least integer s for which the cumulative distribution function (CDF) up to s is greater than or equal to 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s for which the CDF up to s is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
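
    A short simulation-plus-CLS sketch of INAR(1) follows; the parameter values are arbitrary, and the median/Bayesian forecasting steps described above are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_inar1(alpha, lam, n, x0=5):
        """X_t = alpha o X_{t-1} + eps_t, with binomial thinning and
        Poisson innovations."""
        x = np.empty(n, dtype=int)
        x[0] = x0
        for t in range(1, n):
            survivors = rng.binomial(x[t - 1], alpha)   # thinning operator
            x[t] = survivors + rng.poisson(lam)         # innovation term
        return x

    def cls_estimate(x):
        """Conditional least squares: regress X_t on X_{t-1}."""
        y, z = x[1:], x[:-1]
        a_hat = np.cov(y, z, ddof=0)[0, 1] / np.var(z)
        lam_hat = y.mean() - a_hat * z.mean()
        return a_hat, lam_hat

    x = simulate_inar1(alpha=0.6, lam=2.0, n=500)
    print(cls_estimate(x))    # should land near (0.6, 2.0)
    ```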

  2. Basic Investigations of Dynamic Travel Time Estimation Model for Traffic Signals Control Using Information from Optical Beacons

    Science.gov (United States)

    Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke

    In this paper, neuron-type models, i.e., a neural network model, a wavelet neuron model, and a three-layered wavelet neuron model (WV3), are put forward for estimating travel time between signalized intersections in order to facilitate adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that, compared to the other models, the WV3 model learns very fast and produces more accurate estimates of travel time. It is also shown that up-link information obtainable from optical beacons, i.e., the travel time observed during the previous cycle in this case, is a crucial input variable to the models: when up-link information is employed as input, there is no substantial difference between the changes of estimated and simulated travel time as green time or offset changes, while a large discrepancy appears between them when it is not employed.

  3. A guide for estimating dynamic panel models: the macroeconomics models specifiness

    International Nuclear Information System (INIS)

    Coletta, Gaetano

    2005-10-01

    The aim of this paper is to review estimators for dynamic panel data models, a topic in which interest has grown recently. As a consequence of this late interest, different estimation techniques have been proposed in the last few years and, given the recent development of the subject, there is still a lack of a comprehensive guide for panel data applications, and for macroeconomic panel data models in particular. Finally, we also provide some indications about the Stata software commands to estimate dynamic panel data models with the techniques illustrated in the paper. [it]

  4. Fractional Order Models of Industrial Pneumatic Controllers

    Directory of Open Access Journals (Sweden)

    Abolhassan Razminia

    2014-01-01

    Full Text Available This paper addresses a new approach for modeling versatile controllers in industrial automation and process control systems, such as pneumatic controllers. Some fractional order dynamical models are developed for pressure and pneumatic systems with a bellows-nozzle-flapper configuration. In the light of fractional calculus, a fractional order derivative-derivative (FrDD) controller and a fractional order integral-derivative (FrID) controller are remodeled. Numerical simulations illustrate the application of the obtained theoretical results in simple examples.

  5. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    International Nuclear Information System (INIS)

    Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun; Joy, Minu

    2015-01-01

    Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher order derivatives of the Hubble parameter source a constant difference between the spectral indices of scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbations. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r=0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be obtained in a joint analysis. As a result we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r<0.1 or r<0.01) for a scalar spectral index of n_s=0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff^2/H^2. The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff^2/H^2 at 5.7σ and 8.1σ, respectively, for r<0.1 and r<0.01, whereas with the BICEP-2 likelihood m_eff^2/H^2 = −0.0237 ± 0.0135, which is consistent with zero

  6. Roof planes detection via a second-order variational model

    Science.gov (United States)

    Benciolini, Battista; Ruggiero, Valeria; Vitti, Alfonso; Zanetti, Massimo

    2018-04-01

    The paper describes a unified automatic procedure for the detection of roof planes in gridded height data. The procedure exploits the Blake-Zisserman (BZ) model for segmentation in both 2D and 1D, and aims to detect, to model and to label roof planes. The BZ model relies on the minimization of a functional that depends on first- and second-order derivatives, free discontinuities and free gradient discontinuities. During the minimization, the relative strength of each competitor is controlled by a set of weight parameters. By finding the minimum of the approximated BZ functional, one obtains: (1) an approximation of the data that is smoothed solely within regions of homogeneous gradient, and (2) an explicit detection of the discontinuities and gradient discontinuities of the approximation. Firstly, the input data is segmented using the 2D BZ. The maps of data and gradient discontinuities are used to isolate building candidates and planar patches (i.e. regions with homogeneous gradient) that correspond to roof planes. Connected regions that cannot be considered buildings are filtered according to both patch dimension and the distribution of the directions of the normals to the boundary. The 1D BZ model is applied to the curvilinear coordinates of boundary points of building candidates in order to reduce the effect of data granularity when the normals are evaluated. In particular, corners are preserved and can be detected by means of gradient discontinuities. Lastly, a total least squares model is applied to estimate the parameters of the plane that best fits the points of each planar patch (orthogonal regression with planar model). Refinement of planar patches is performed by assigning those points that are close to the boundaries to the planar patch for which a given proximity measure assumes the smallest value. The proximity measure is defined to account for the variance of a fitting plane and a weighted distance of a point from the plane. The effectiveness of the

  7. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    Science.gov (United States)

    Perez, Hector Eduardo

    notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight into battery charging control using electrochemistry models. Directly using full-order, complex multi-partial-differential-equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced-order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum-time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and insight into battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro

  8. Adaptive Model Predictive Vibration Control of a Cantilever Beam with Real-Time Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Gergely Takács

    2014-01-01

    Full Text Available This paper presents an adaptive-predictive vibration control system using extended Kalman filtering for the joint estimation of system states and model parameters. A fixed-free cantilever beam equipped with piezoceramic actuators serves as a test platform to validate the proposed control strategy. Deflection readings taken at the end of the beam have been used to reconstruct the position and velocity information for a second-order state-space model. In addition to the states, the dynamic system has been augmented by the unknown model parameters: stiffness, damping constant, and a voltage/force conversion constant, characterizing the actuating effect of the piezoceramic transducers. The states and parameters of this augmented system have been estimated in real time, using the hybrid extended Kalman filter. The estimated model parameters have been applied to define the continuous state-space model of the vibrating system, which in turn is discretized for the predictive controller. The model predictive control algorithm generates state predictions and dual-mode quadratic cost prediction matrices based on the updated discrete state-space models. The resulting cost function is then minimized using quadratic programming to find the sequence of optimal but constrained control inputs. The proposed active vibration control system is implemented and evaluated experimentally to investigate the viability of the control method.
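
    A minimal sketch of the joint state/parameter idea, assuming a unit-mass spring-damper in place of the beam model and a simple Euler-discretized EKF in place of the paper's hybrid (continuous-discrete) extended Kalman filter; all values are illustrative.

    import numpy as np

    def f(s, u, dt):
        # Euler step of a unit-mass oscillator with augmented state
        # s = [x, v, k, c, b]; the parameters are modeled as random walks.
        x, v, k, c, b = s
        return np.array([x + dt * v, v + dt * (-k * x - c * v + b * u), k, c, b])

    def F_jac(s, u, dt):
        x, v, k, c, b = s
        F = np.eye(5)
        F[0, 1] = dt
        F[1, :] = [-dt * k, 1 - dt * c, -dt * x, -dt * v, dt * u]
        return F

    H = np.array([[1.0, 0, 0, 0, 0]])               # only deflection is measured
    dt, R = 1e-3, 1e-6
    Q = np.diag([0, 0, 1e-4, 1e-6, 1e-4])           # parameter random-walk noise

    truth = np.array([0.01, 0.0, 900.0, 1.5, 40.0]) # true x, v, k, c, b
    s = np.array([0.0, 0.0, 500.0, 1.0, 20.0])      # rough initial guess
    P = np.diag([1e-4, 1e-4, 1e5, 1.0, 1e2])

    rng = np.random.default_rng(1)
    for i in range(20000):
        u = np.sin(2 * np.pi * 5 * i * dt)          # actuator excitation
        truth = f(truth, u, dt)
        y = truth[0] + 1e-3 * rng.standard_normal() # noisy deflection reading
        Fk = F_jac(s, u, dt)                        # EKF predict
        s, P = f(s, u, dt), Fk @ P @ Fk.T + Q
        S = float(H @ P @ H.T) + R                  # EKF update
        K = (P @ H.T).ravel() / S
        s = s + K * (y - float(H @ s))
        P = P - np.outer(K, H @ P)
    print("estimated k, c, b:", s[2:])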

  9. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are obtained by modeling a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated by the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Requirements on resolution, sensor array size, and the number and location of sensor readings can then be derived from the accuracies of the parameter estimates.
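
    A toy version of the procedure, assuming a 2-D instantaneous-release Gaussian plume as a stand-in for the shear-diffusion model and scipy's nonlinear least squares in place of the batch processor; all parameter values and noise levels are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def plume(xy, M, u, Dx, Dy, t=3600.0):
        # 2-D instantaneous-release advection-diffusion solution,
        # evaluated at a fixed time t after release
        x, y = xy
        return (M / (4 * np.pi * t * np.sqrt(Dx * Dy))
                * np.exp(-((x - u * t) ** 2 / (4 * Dx * t) + y ** 2 / (4 * Dy * t))))

    rng = np.random.default_rng(0)
    gx, gy = np.meshgrid(np.linspace(0, 2000, 40), np.linspace(-500, 500, 20))
    xy = (gx.ravel(), gy.ravel())

    true = (1e6, 0.3, 5.0, 1.0)                                  # M, u, Dx, Dy
    c = plume(xy, *true)
    c_noisy = c + 0.05 * c.max() * rng.standard_normal(c.size)   # "sensor" noise

    popt, pcov = curve_fit(plume, xy, c_noisy, p0=(5e5, 0.25, 2.0, 2.0))
    print("estimates:", popt)
    print("standard errors:", np.sqrt(np.diag(pcov)))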

  10. Reduced order for nuclear reactor model in frequency and time domain

    International Nuclear Information System (INIS)

    Nugroho, D.H.

    1997-01-01

    In control system theory, a model can be represented in the frequency or time domain. In the frequency domain, the model is represented by a transfer function; in the time domain, by a state-space realization. For the sake of computational simplicity, it is often necessary to reduce the model order. The main aim of this research is to find the best reduced order for a nuclear reactor model. Model order reduction in the frequency domain can be done using the pole-zero cancellation method, while in the time domain the balanced aggregation method developed by Moore (1981) can be used. In this paper, the two methods were applied to reduce a nuclear reactor model constructed from neutron dynamics and heat transfer equations. To validate that the model characteristics do not change when model order reduction is applied, the responses of the full- and reduced-order models were compared. It was shown that the nuclear reactor model can be reduced from order 8 to order 2, and that order 2 is the best order for the nuclear reactor model
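
    Moore's balanced aggregation (balanced truncation) step is easy to sketch. The code below is a hedged illustration rather than the paper's reactor model: it reduces a random stable 8th-order system to order 2 using Gramians and Hankel singular values.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

    def balred(A, B, C, r):
        # Balanced truncation (Moore 1981): balance the controllability and
        # observability Gramians, then keep the r states with the largest
        # Hankel singular values.
        Wc = solve_continuous_lyapunov(A, -B @ B.T)
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
        Lc, Lo = cholesky(Wc, lower=True), cholesky(Wo, lower=True)
        U, s, Vt = svd(Lo.T @ Lc)                 # s = Hankel singular values
        T = Lc @ Vt.T @ np.diag(s ** -0.5)        # balancing transformation
        Ti = np.diag(s ** -0.5) @ U.T @ Lo.T
        Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
        return Ab[:r, :r], Bb[:r], Cb[:, :r], s

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8)) - 9 * np.eye(8)   # stable 8th-order system
    B, C = rng.standard_normal((8, 1)), rng.standard_normal((1, 8))

    Ar, Br, Cr, hsv = balred(A, B, C, r=2)
    print("Hankel singular values:", hsv)   # a sharp drop after 2 justifies order 2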

  11. Order-of-magnitude physics of neutron stars. Estimating their properties from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Reisenegger, Andreas; Zepeda, Felipe S. [Pontificia Universidad Catolica de Chile, Instituto de Astrofisica, Facultad de Fisica, Macul (Chile)

    2016-03-15

    We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of ''everyday'' matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties. (orig.)
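
    The flavour of such estimates can be reproduced in a few lines. The sketch below assumes the standard order-of-magnitude balance of relativistic degeneracy pressure against gravity, M_max ~ (hbar*c/G)^(3/2)/m_n^2 (the Planck mass cubed over the neutron mass squared), which the paper refines with O(1) factors.

    # Order-of-magnitude maximum mass of a neutron star
    hbar, c, G = 1.055e-34, 2.998e8, 6.674e-11   # SI units
    m_n, M_sun = 1.675e-27, 1.989e30             # neutron mass, solar mass [kg]

    M_max = (hbar * c / G) ** 1.5 / m_n ** 2
    print(f"M_max ~ {M_max / M_sun:.1f} solar masses")   # roughly 2

    # Sanity check: the Schwarzschild radius 2GM/c^2 (~5 km) is comparable
    # to the ~10 km stellar radius, so general relativity is essential.
    print(f"R_s ~ {2 * G * M_max / c**2 / 1e3:.0f} km")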

  12. Model-Based Estimation of Ankle Joint Stiffness

    Directory of Open Access Journals (Sweden)

    Berno J. E. Misgeld

    2017-03-01

    Full Text Available We address the estimation of biomechanical parameters with wearable measurement technologies. In particular, we focus on the estimation of sagittal plane ankle joint stiffness in dorsiflexion/plantar flexion. For this estimation, a novel nonlinear biomechanical model of the lower leg was formulated that is driven by electromyographic signals. The model incorporates a two-dimensional kinematic description in the sagittal plane for the calculation of muscle lever arms and torques. To reduce estimation errors due to model uncertainties, a filtering algorithm is necessary that employs segmental orientation sensor measurements. Because of the model’s inherent nonlinearities and nonsmooth dynamics, a square-root cubature Kalman filter was developed. The performance of the novel estimation approach was evaluated in silico and in an experimental procedure. The experimental study was conducted with body-worn sensors and a test-bench that was specifically designed to obtain reference angle and torque measurements for a single joint. Results show that the filter is able to reconstruct joint angle positions, velocities and torque, as well as joint stiffness, during experimental test bench movements.
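
    For readers unfamiliar with this filter class, here is a minimal (non-square-root) cubature Kalman filter step, with a toy pendulum standing in for the lower-leg model; the paper's square-root variant additionally propagates Cholesky factors for numerical robustness. All models and values here are hypothetical.

    import numpy as np

    def cubature_points(x, P):
        # 2n points of the third-degree spherical-radial cubature rule
        n = len(x)
        S = np.linalg.cholesky(P)
        return x[:, None] + np.sqrt(n) * np.hstack([S, -S])

    def ckf_step(x, P, z, f, h, Q, R):
        n = len(x)
        X = cubature_points(x, P)                        # --- predict ---
        Xp = np.column_stack([f(X[:, i]) for i in range(2 * n)])
        x_pred = Xp.mean(axis=1)
        P_pred = Xp @ Xp.T / (2 * n) - np.outer(x_pred, x_pred) + Q
        X = cubature_points(x_pred, P_pred)              # --- update ---
        Z = np.column_stack([h(X[:, i]) for i in range(2 * n)])
        z_pred = Z.mean(axis=1)
        Pzz = Z @ Z.T / (2 * n) - np.outer(z_pred, z_pred) + R
        Pxz = X @ Z.T / (2 * n) - np.outer(x_pred, z_pred)
        K = Pxz @ np.linalg.inv(Pzz)
        return x_pred + K @ (z - z_pred), P_pred - K @ Pzz @ K.T

    dt = 0.01                                            # toy pendulum "ankle"
    f = lambda s: np.array([s[0] + dt * s[1], s[1] - dt * 9.81 * np.sin(s[0])])
    h = lambda s: np.array([np.sin(s[0])])               # nonlinear orientation output
    x, P = np.array([0.3, 0.0]), 0.1 * np.eye(2)
    x, P = ckf_step(x, P, np.array([0.31]), f, h, Q=1e-5 * np.eye(2), R=1e-3 * np.eye(1))
    print("updated state:", x)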

  14. Short-term estimation of GNSS TEC using a neural network model in Brazil

    Science.gov (United States)

    Ferreira, Arthur Amaral; Borges, Renato Alves; Paparini, Claudia; Ciraolo, Luigi; Radicella, Sandro M.

    2017-10-01

    This work presents a novel Neural Network (NN) model to estimate Total Electron Content (TEC) from Global Navigation Satellite Systems (GNSS) measurements in three distinct sectors of Brazil. The purpose of this work is to start investigations into the development of a regional model that can be used to determine the vertical TEC over Brazil, aiming at future applications in near real-time estimation and short-term forecasting. The NN is used to estimate GNSS TEC values at void locations, where no dual-frequency GNSS receiver is available to serve as a data source for GNSS TEC estimation. This approach is particularly useful for GNSS single-frequency users who rely on TEC models to correct ionospheric range errors. GNSS data from the first GLONASS network for research and development (GLONASS R&D network) installed in Latin America, and from the Brazilian Network for Continuous Monitoring of the GNSS (RMBC), were used for TEC calibration. The input parameters of the NN model are based on features known to influence TEC values, such as the geographic location of the GNSS receiver, magnetic activity, seasonal and diurnal variations, and solar activity. Data from two ten-day periods (from DoY 154 to 163 and from 282 to 291) are used to train the network. Three distinct analyses have been carried out in order to assess the time-varying and spatial performance of the model. In the spatial performance analysis, for each region a set of stations is chosen to provide training data to the NN; after the training procedure, the NN is used to estimate the vTEC behaviour for a test station whose data were not presented to the NN in the training process. An analysis is done by comparing, for each testing station, the NN-estimated vTEC and the reference calibrated vTEC. As a second analysis, the network's ability to forecast one day after the time interval (DoY 292), based on information from the second period of investigation, is also assessed.
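
    The modelling idea can be sketched with a generic multilayer perceptron regressor; the feature vector, synthetic target and network size below are placeholders, not the paper's architecture or data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hypothetical features per sample: lat, lon, day-of-year (sin, cos),
    # local time (sin, cos), F10.7 solar flux, Kp index -> calibrated vTEC
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(5000, 8))
    y = 20 + 15 * np.sin(2 * np.pi * X[:, 2]) + 5 * X[:, 6] \
        + rng.standard_normal(5000)          # synthetic stand-in for vTEC

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=2000, random_state=0))
    model.fit(X[:4000], y[:4000])
    print("held-out R^2:", model.score(X[4000:], y[4000:]))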

  15. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  16. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
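
    The mixture/EM machinery is easy to demonstrate on a parametric toy version (exponential failure times, no covariates), whereas the paper's model is nonparametric with proportional hazards covariate effects. A hedged sketch:

    import numpy as np

    def cure_em(t, d, n_iter=200):
        # Toy mixture cure model: susceptible with probability pi and
        # Exp(lam) failure time, otherwise cured (never fails);
        # d = 1 marks observed events, d = 0 censoring.
        pi, lam = 0.5, 1.0 / np.mean(t)
        for _ in range(n_iter):
            # E-step: probability each censored subject is susceptible
            w = np.where(d == 1, 1.0,
                         pi * np.exp(-lam * t)
                         / (1 - pi + pi * np.exp(-lam * t)))
            # M-step: update susceptible fraction and event rate
            pi = w.mean()
            lam = d.sum() / (w * t).sum()
        return pi, lam

    rng = np.random.default_rng(0)
    n, true_pi, true_lam = 2000, 0.7, 0.5
    susceptible = rng.uniform(size=n) < true_pi
    event_time = rng.exponential(1 / true_lam, n)
    censor_time = rng.uniform(0, 10, n)
    t = np.where(susceptible, np.minimum(event_time, censor_time), censor_time)
    d = (susceptible & (event_time <= censor_time)).astype(int)

    print(cure_em(t, d))   # should be near (0.7, 0.5)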

  17. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a statistical model of face shape and represents the pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses; shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using only a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  18. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.

  19. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    Science.gov (United States)

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
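
    The "modified Poisson" estimator itself is compact with statsmodels: a Poisson GLM fitted to the binary outcome, combined with a robust sandwich covariance. A minimal sketch on synthetic data with a true risk ratio of 1.8:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    x = rng.binomial(1, 0.5, n)                 # binary exposure
    p = 0.1 * np.exp(np.log(1.8) * x)           # true RR = 1.8
    y = rng.binomial(1, p)                      # common binary outcome

    X = sm.add_constant(x)
    # Robust (modified) Poisson: Poisson GLM + HC0 sandwich standard errors
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
    rr = np.exp(fit.params[1])
    ci = np.exp(fit.conf_int()[1])
    print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")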

  20. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  1. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    Science.gov (United States)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurement and of an unknown parameter in a Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers, which play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.

  2. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  3. Model-based estimation for dynamic cardiac studies using ECT.

    Science.gov (United States)

    Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O

    1994-01-01

    The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.

  4. Nonlinear Parameter Estimation in Microbiological Degradation Systems and Statistic Test for Common Estimation

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Holst, Helle; Spliid, Henrik

    1995-01-01

    Three identical microbiological experiments were carried out and analysed in order to examine the variability of the parameter estimates. The microbiological system consisted of a substrate (toluene) and a biomass (pure culture) mixed together in an aquifer medium. The degradation of the substrate...... and the growth of the biomass are described by the Monod model consisting of two nonlinear coupled first-order differential equations. The objective of this study was to estimate the kinetic parameters in the Monod model and to test whether the parameters from the three identical experiments have the same values....... Estimation of the parameters was obtained using an iterative maximum likelihood method and the test used was an approximative likelihood ratio test. The test showed that the three sets of parameters were identical only on a 4% alpha level....
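
    The estimation problem can be sketched as a nonlinear least squares fit of the coupled Monod ODEs; the code below uses synthetic data and invented parameter values, and ordinary least squares rather than the paper's iterative maximum likelihood method.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def monod_rhs(t, z, mu_max, Ks, Y):
        # Coupled Monod model: substrate S consumed, biomass X grows
        S, X = z
        mu = mu_max * S / (Ks + S)
        return [-mu * X / Y, mu * X]

    def residuals(theta, t_obs, S_obs, X_obs, z0):
        sol = solve_ivp(monod_rhs, (0, t_obs[-1]), z0, t_eval=t_obs,
                        args=tuple(theta), rtol=1e-8)
        return np.concatenate([sol.y[0] - S_obs, sol.y[1] - X_obs])

    true = (0.3, 2.0, 0.5)                    # mu_max [1/h], Ks [mg/L], Y [-]
    z0, t_obs = [10.0, 0.1], np.linspace(0, 30, 15)
    sol = solve_ivp(monod_rhs, (0, 30), z0, t_eval=t_obs, args=true, rtol=1e-8)
    rng = np.random.default_rng(0)
    S_obs = sol.y[0] + 0.1 * rng.standard_normal(15)
    X_obs = sol.y[1] + 0.1 * rng.standard_normal(15)

    fit = least_squares(residuals, x0=(0.1, 1.0, 0.3),
                        args=(t_obs, S_obs, X_obs, z0), bounds=(1e-6, 10))
    print("estimated (mu_max, Ks, Y):", fit.x)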

  5. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. For the estimation of such a model we compare two adaptive estimators with the ordinary least squares estimator and show the attractive performance of the adaptive estimators, namely the nonparametric kernel estimator and the nearest neighbour regression estimator. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of the regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the other, nonparametric kernel, estimator.

  6. Modified Dual Second-order Generalized Integrator FLL for Frequency Estimation Under Various Grid Abnormalities

    Directory of Open Access Journals (Sweden)

    Kalpeshkumar Rohitbhai Patil

    2016-10-01

    Full Text Available Proper synchronization of a Distributed Generator with the grid and its performance in grid-connected mode rely on fast and precise estimation of the phase and amplitude of the fundamental component of the grid voltage. However, the accuracy with which the frequency is estimated depends on the type of grid voltage abnormality and the structure of the phase-locked loop or frequency-locked loop control scheme. Among various control schemes, the second-order generalized integrator based frequency-locked loop (SOGI-FLL) is reported to have the most promising performance. It tracks the frequency of the grid voltage accurately even when the grid voltage is characterized by sag, swell, harmonics, imbalance, frequency variations, etc. However, the estimated frequency contains low-frequency oscillations in the case when the sensed grid voltage has a dc offset. This paper presents a modified dual second-order generalized integrator frequency-locked loop (MDSOGI-FLL) for three-phase systems to cope with non-ideal three-phase grid voltages having all types of abnormalities, including a dc offset. The complexity of the control scheme is almost the same as the standard dual SOGI-FLL, but the performance is enhanced. Simulation results show that the proposed MDSOGI-FLL is effective under all abnormal grid voltage conditions. The results are validated experimentally to justify the superior performance of the MDSOGI-FLL under adverse conditions.
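
    A minimal simulation of a basic single SOGI-FLL (not the proposed MDSOGI) conveys the mechanism; the gains, forward-Euler discretization and test signal below are illustrative choices that may need tuning in practice.

    import numpy as np

    dt, k, gamma = 1e-4, 1.41, 100.0
    w = 2 * np.pi * 45.0                  # deliberately wrong initial frequency
    x1 = x2 = 0.0                         # in-phase and quadrature outputs

    t = np.arange(0.0, 1.0, dt)
    v = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 250 * t)  # 5th harmonic

    for vi in v:
        e = vi - x1                       # SOGI tracking error
        dx1 = w * (k * e - x2)            # standard SOGI quadrature generator
        dx2 = w * x1
        dw = -gamma * e * x2              # FLL: drives the frequency error to zero
        x1, x2, w = x1 + dt * dx1, x2 + dt * dx2, w + dt * dw

    print("estimated frequency [Hz]:", w / (2 * np.pi))   # close to 50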

  7. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrate the advantages of the cokriging model

  8. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + θ/(λ+θ)·exp[−(1/λ + 1/θ)t] for t > 0. Also, the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
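
    By the invariance property of maximum likelihood, the MLE of A(t) is obtained by plugging the sample means of the failure and repair times into the formulas above. A minimal sketch with simulated cycles (all values illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    mttf, mttr, n = 1000.0, 24.0, 50          # hours, n failure-repair cycles
    X = rng.exponential(mttf, n)              # observed times to failure
    Y = rng.exponential(mttr, n)              # observed times to repair

    lam, th = X.mean(), Y.mean()              # MLEs of the exponential means
    A = lambda t: lam / (lam + th) + th / (lam + th) * np.exp(-(1/lam + 1/th) * t)
    print("A(10 h) =", A(10.0), "  A(inf) =", lam / (lam + th))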

  9. Hybrid reduced order modeling for assembly calculations

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2013-07-01

    While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated executions on small computing environment, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of the reduced order modeling for a single physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclides transmutation/depletion models representing the components of the coupled code system. (authors)

  10. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow lacking lane discipline is one of the emerging research areas of the past few years. The two main challenges in modelling are capturing the effect of the varying size of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section of road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure for heterogeneous traffic lacking lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy instead of density for traffic concentration. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.

  11. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
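
    The two ingredients can be sketched together: Wilks-style order statistics give a one-sided 95%/95% bound from 59 code runs (since 0.95^59 < 0.05, the sample maximum exceeds the 95th percentile with at least 95% confidence), and the bootstrap quantifies the variability of the resulting margin. All numbers below are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    limit = 1200.0                            # hypothetical cladding limit [K]
    T = rng.normal(1050, 40, size=59)         # stand-in for 59 code outputs
    margin_95_95 = limit - T.max()            # Wilks 95/95 margin estimate
    print("95/95 safety margin estimate:", margin_95_95)

    # Bootstrap the sampling variability of that margin estimate
    boot = np.array([limit - rng.choice(T, size=T.size, replace=True).max()
                     for _ in range(2000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"bootstrap 95% CI on the margin: ({lo:.1f}, {hi:.1f})")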

  12. Re-evaluating neonatal-age models for ungulates: does model choice affect survival estimates?

    Directory of Open Access Journals (Sweden)

    Troy W Grovenburg

    Full Text Available New-hoof growth is regarded as the most reliable metric for predicting the age of newborn ungulates, but variation in estimated age among the hoof-growth equations that have been developed may affect estimates of survival in staggered-entry models. We used known-age newborns to evaluate variation in age estimates among existing hoof-growth equations and to determine the consequences of that variation for survival estimates. During 2001-2009, we captured and radiocollared 174 newborn (≤24 hrs old) ungulates: 76 white-tailed deer (Odocoileus virginianus) in Minnesota and South Dakota, 61 mule deer (O. hemionus) in California, and 37 pronghorn (Antilocapra americana) in South Dakota. Estimated age of known-age newborns differed among hoof-growth models and varied by >15 days for white-tailed deer, >20 days for mule deer, and >10 days for pronghorn. Accuracy (i.e., the proportion of neonates assigned to the correct age) in aging newborns using published equations ranged from 0.0% to 39.4% in white-tailed deer, from 0.0% to 3.3% in mule deer, and was 0.0% for pronghorns. Results of survival modeling indicated that variability in estimates of age-at-capture affected short-term estimates of survival (i.e., 30 days) for white-tailed deer and mule deer, and survival estimates over a longer time frame (i.e., 120 days) for mule deer. Conversely, survival estimates for pronghorn were not affected by estimates of age. Our analyses indicate that modeling survival in daily intervals is too fine a temporal scale when age-at-capture is unknown, given the potential inaccuracies among equations used to estimate the age of neonates. Instead, weekly survival intervals are more appropriate because most models accurately predicted ages within 1 week of the known age. Variation among results of neonatal-age models on short- and long-term estimates of survival for known-age young emphasizes the importance of selecting an appropriate hoof-growth equation and appropriately defining intervals (i

  13. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable, in terms of both accuracy and computation, to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to them. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion of the MOG and k-means techniques is the Akaike Information Criterion (AIC).
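
    The MOG/AIC baseline mentioned above is straightforward to reproduce with scikit-learn; a hedged sketch on synthetic two-mode data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Two well-separated clusters; AIC should bottom out near n_components = 2
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, size=(300, 2)),
                   rng.normal(5, 1, size=(300, 2))])

    aic = [GaussianMixture(n_components=kk, random_state=0).fit(X).aic(X)
           for kk in range(1, 7)]
    best = int(np.argmin(aic)) + 1
    print("AIC by k:", np.round(aic, 1), "-> estimated number of modes:", best)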

  14. Low-order models of wave interactions in the transition to baroclinic chaos

    Directory of Open Access Journals (Sweden)

    W.-G. Früh

    1996-01-01

    Full Text Available A hierarchy of low-order models, based on the quasi-geostrophic two-layer model, is used to investigate complex multi-mode flows. The different models were used to study distinct types of nonlinear interactions, namely wave-wave interactions through resonant triads and zonal flow-wave interactions. The coupling strength of individual triads is estimated using a phase-locking probability density function. The flow of primary interest is a strongly modulated amplitude vacillation, whose modulation is coupled to intermittent bursts of weaker wave modes. This flow was found to emerge in a discontinuous bifurcation directly from a steady wave solution. Two mechanisms were found to result in this flow, one involving resonant triads, and the other involving zonal flow-wave interactions together with a strong β-effect. The results are compared with recent laboratory experiments on multi-mode baroclinic waves in a rotating annulus of fluid subjected to a horizontal temperature gradient.

  15. Model-based estimation for dynamic cardiac studies using ECT

    International Nuclear Information System (INIS)

    Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.

    1994-01-01

    In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed

  16. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

  17. NanoSafer vs. 1.1 - Nanomaterial risk assessment using first order modeling

    DEFF Research Database (Denmark)

    Jensen, Keld A.; Saber, Anne T.; Kristensen, Henrik V.

    2013-01-01

    for safe use of MN based on first order modeling. The hazard and case specific exposure assessments are combined for an integrated risk evaluation and final control banding. Requested material data are typically available from the producers’ technical information sheets. The hazard data are given...... using the work room dimensions, ventilation rate, powder use rate, duration, and calculated or given emission rates. The hazard scaling is based on direct assessment. The exposure band is derived from estimated acute and work day exposure levels divided by a nano OEL calculated from the OEL...... to construct user specific work scenarios for exposure assessment is considered a highly versatile approach.

  18. Fractional-order mathematical model of an irrigation main canal pool

    Directory of Open Access Journals (Sweden)

    Shlomi N. Calderon-Valdez

    2015-09-01

    Full Text Available In this paper a fractional-order model of an irrigation main canal is proposed. It is based on experiments carried out in a laboratory prototype of a hydraulic canal and the application of a direct system identification methodology. The hydraulic processes that take place in this canal are equivalent to those that occur in real main irrigation canals, and the results obtained here can therefore be easily extended to real canals. The accuracy of the proposed fractional-order model is assessed by comparison with two integer-order models of the canal of similar complexity. The parameters of these three mathematical models were identified by minimizing the Integral Square Error (ISE) performance index between the models and real-time experimental data obtained from the canal prototype. A comparison of the performances of the three models shows that the fractional-order model has the lowest error and therefore the highest accuracy. Experiments showed that the fractional-order model outperformed the accuracy of the integer-order models by about 25%, which is a significant improvement as regards capturing the canal dynamics.

  19. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    This work studies the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions are used as models for the lifetimes of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate the distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity. We also study a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the method of matching moments and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function, and we use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model, an application of marked point processes, for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method

  20. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  1. Direct Importance Estimation with Gaussian Mixture Models

    Science.gov (United States)

    Yamada, Makoto; Sugiyama, Masashi

    The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention recently since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.

  2. Interaction forces model on a bubble growing for nuclear best estimate computer codes

    International Nuclear Information System (INIS)

    Espinosa-Paredes, Gilberto; Nunez-Carrera, Alejandro; Martinez-Mendez, Elizabeth J.

    2005-01-01

    This paper presents a mathematical model that takes into account the bubble radius variation that takes place in a boiling water nuclear reactor during transients involving changes in the vessel pressure, changes in the inlet core mass flow rate, density-wave phenomena or flow regime instability. The model with expansion effects was developed considering the interaction force between a dilute dispersion of gas bubbles and a continuous liquid phase. The closure relationships were formulated as an associated problem for the spatial deviation around averaged variables as a function of known variables. In order to solve the closure problem, a geometric model given by an eccentric unit cell was applied as an approximation of the heterogeneous structure of the two-phase flow. The closure relationship includes additional terms that represent combined effects between translation and pulsation due to displacement and size variation of the bubbles, respectively. This result can be implemented straightforwardly in best-estimate thermal-hydraulic models. As an example, the implementation of the closure relationships in the TRAC best-estimate computer code is presented

  3. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.

  4. 2nd-order optical model of the isotopic dependence of heavy ion absorption cross sections for radiation transport studies

    Science.gov (United States)

    Cucinotta, Francis A.; Yan, Congchong; Saganti, Premkumar B.

    2018-01-01

    Heavy ion absorption cross sections play an important role in the radiation transport codes used in risk assessment and in shielding studies of galactic cosmic ray (GCR) exposures. Due to the composition of the GCR primary nuclei and the nuclear fragmentation leading to secondary nuclei, heavy ions with charge numbers 3 ≤ Z ≤ 28 and mass numbers 6 ≤ A ≤ 60, representing about 190 isotopes, occur in GCR transport calculations. In this report we describe methods for developing a database of isotope-dependent heavy ion absorption cross sections. Calculations of a 2nd-order optical model solution to coupled-channel solutions of the Eikonal form of the nucleus-nucleus scattering amplitude are compared to 1st-order optical model solutions. The 2nd-order model takes into account two-body correlations in the projectile and target ground states, which are ignored in the 1st-order optical model. Parameter-free predictions are described using one-body and two-body ground-state form factors for the isotopes considered and the free nucleon-nucleon scattering amplitude. Root mean square (RMS) matter radii for protons and neutrons are taken from electron and muon scattering data and nuclear structure models. We report extensive comparisons to experimental data for energy-dependent absorption cross sections for over 100 isotopes of elements from Li to Fe interacting with carbon and aluminum targets. Agreement between model and experiment is generally within 10% for the 1st-order optical model and improves to less than 5% for the 2nd-order optical model in the majority of comparisons. Overall, the 2nd-order optical model leads to a reduction in absorption compared to the 1st-order optical model for heavy ion interactions, which influences estimates of nuclear matter radii.

  5. A simplified model for the estimation of energy production of PV systems

    International Nuclear Information System (INIS)

    Aste, Niccolò; Del Pero, Claudio; Leonforte, Fabrizio; Manfren, Massimiliano

    2013-01-01

    The potential of solar energy is far higher than that of any other renewable source, although several limits exist. In particular, the fundamental factors that must be analyzed by investors and policy makers are the cost-effectiveness and the energy production of PV power plants, for the assessment of investment schemes and energy policy strategies respectively. Tools suitable to be used even by non-specialists are therefore becoming increasingly important, and much research and development effort has been devoted to this goal in recent years. In this study, a simplified model for PV annual production estimation is presented that can provide results with a level of accuracy comparable with that of the more sophisticated simulation tools from which it derives its fundamental data. The main advantage of the presented model is that it can be used by virtually anyone, without requiring specific field expertise. The inherent limits of the model are related to its empirical basis, but the methodology presented can be reproduced in the future with a different spectrum of data in order to assess, for example, the effect of technological evolution on the overall performance of PV power generation, or to establish performance benchmarks for a much larger variety of PV plants and technologies. - Highlights: • We analyze the main methods for estimating the electricity production of photovoltaic systems. • We simulate the same system with two different software tools in different European locations and estimate the electricity production. • We study the main losses of a PV plant. • We provide a simplified model to estimate the electrical production of any well-designed PV system. • We validate the data obtained by the proposed model against experimental data from three PV systems
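
    Simplified models of this kind typically reduce to a single first-order yield equation. A hedged sketch with illustrative coefficient values (not the paper's calibration):

    # First-order annual yield estimate: E = P_stc * H_poa * PR, where H_poa
    # is the yearly in-plane irradiation in kWh/m^2 (numerically, "peak sun
    # hours") and PR lumps the loss chain into one performance ratio.
    p_stc = 3.0          # kWp, rated array power at STC
    h_poa = 1450.0       # kWh/m^2/year in the array plane
    pr = 0.80            # performance ratio (temperature, wiring, inverter...)

    energy = p_stc * h_poa * pr
    print(f"estimated annual production: {energy:.0f} kWh")   # ~3480 kWh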

  6. Fractional-Order Nonlinear Systems Modeling, Analysis and Simulation

    CERN Document Server

    Petráš, Ivo

    2011-01-01

    "Fractional-Order Nonlinear Systems: Modeling, Analysis and Simulation" presents a study of fractional-order chaotic systems accompanied by Matlab programs for simulating their state space trajectories, which are shown in the illustrations in the book. Description of the chaotic systems is clearly presented and their analysis and numerical solution are done in an easy-to-follow manner. Simulink models for the selected fractional-order systems are also presented. The readers will understand the fundamentals of the fractional calculus, how real dynamical systems can be described using fractional derivatives and fractional differential equations, how such equations can be solved, and how to simulate and explore chaotic systems of fractional order. The book addresses to mathematicians, physicists, engineers, and other scientists interested in chaos phenomena or in fractional-order systems. It can be used in courses on dynamical systems, control theory, and applied mathematics at graduate or postgraduate level. ...

  7. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behaviour and heavy-tail phenomena, is also offered.

  8. A novel multi-model probability battery state of charge estimation approach for electric vehicles using H-infinity algorithm

    International Nuclear Information System (INIS)

    Lin, Cheng; Mu, Hao; Xiong, Rui; Shen, Weixiang

    2016-01-01

    Highlights: • A novel multi-model probability battery SOC fusion estimation approach was proposed. • The linear matrix inequality-based H∞ technique is employed to estimate the SOC. • The Bayes theorem has been employed to realize the optimal weights for the fusion. • The robustness of the proposed approach is verified with different batteries. • The results show that the proposed method can improve global estimation accuracy. - Abstract: Due to the strong nonlinearity and complex time-variant properties of batteries, existing state of charge (SOC) estimation approaches based on a single equivalent circuit model (ECM) cannot provide an accurate SOC for the entire discharge period. This paper presents a novel SOC estimation approach based on the fusion of multiple ECMs for improving practical application performance. In the proposed approach, three battery ECMs, namely the Thevenin model, the double polarization model and the 3rd-order RC model, are selected to describe the dynamic voltage of lithium-ion batteries, and a genetic algorithm is then used to determine the model parameters. The linear matrix inequality-based H-infinity technique is employed to estimate the SOC from the three models, and a Bayes theorem-based probability method is employed to determine the optimal weights for synthesizing the SOCs estimated from the three models (see the sketch below). Two types of lithium-ion batteries are used to verify the feasibility and robustness of the proposed approach. The results indicate that the proposed approach can improve the accuracy and reliability of SOC estimation against uncertain battery materials and inaccurate initial states.
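
    The Bayes-weighted fusion step can be sketched independently of the H-infinity filters: each model's posterior probability is proportional to its prior times the Gaussian likelihood of its voltage residual. All numbers below are hypothetical.

    import numpy as np

    def bayes_fuse(soc_est, innov, innov_var, prior):
        # Posterior model probability ~ prior * Gaussian likelihood of the
        # voltage innovation; the fused SOC is the probability-weighted mean.
        like = np.exp(-0.5 * innov**2 / innov_var) / np.sqrt(2*np.pi*innov_var)
        post = prior * like
        post /= post.sum()
        return float(np.dot(post, soc_est)), post

    # Hypothetical snapshot: SOC estimates from the Thevenin, DP and
    # 3rd-order RC models with their voltage residuals
    soc_est   = np.array([0.62, 0.60, 0.61])
    innov     = np.array([0.015, 0.004, 0.006])   # volts
    innov_var = np.array([1e-4, 1e-4, 1e-4])
    prior     = np.array([1/3, 1/3, 1/3])

    soc, weights = bayes_fuse(soc_est, innov, innov_var, prior)
    print("fused SOC:", soc, "model probabilities:", weights)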

  9. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty occurs when a model is selected based on one data set and subsequently applied for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking the selection procedure into account could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.

  10. Parameter Estimation of Nonlinear Models in Forestry.

    OpenAIRE

    Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.

    1999-01-01

    Partial derivatives of the negative exponential, monomolecular, Mitcherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
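
    As a concrete illustration of the Marquardt-type fitting described in the abstract, the sketch below fits the Chapman-Richards model H(t) = a(1 - exp(-kt))^p to made-up height-age data with SciPy's Levenberg-Marquardt solver; the data values and starting guesses are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(t, a, k, p):
    # H(t) = a * (1 - exp(-k t))^p : asymptote a, rate k, shape p
    return a * (1.0 - np.exp(-k * t)) ** p

# Hypothetical top height (m) vs. age (years) observations
age = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
height = np.array([4.1, 10.2, 15.3, 19.0, 21.4, 22.9])

# method='lm' selects SciPy's Levenberg-Marquardt implementation
params, cov = curve_fit(chapman_richards, age, height,
                        p0=[25.0, 0.05, 1.5], method='lm')
a, k, p = params  # fitted asymptote, rate and shape parameters
```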

  11. APPLYING TEACHING-LEARNING TO ARTIFICIAL BEE COLONY FOR PARAMETER OPTIMIZATION OF SOFTWARE EFFORT ESTIMATION MODEL

    Directory of Open Access Journals (Sweden)

    THANH TUNG KHUAT

    2017-05-01

    Full Text Available Artificial Bee Colony, inspired by the foraging behaviour of honey bees, is a novel meta-heuristic optimization algorithm in the community of swarm intelligence algorithms. Nevertheless, it still suffers from slow convergence and limited solution quality. This paper proposes an approach to tackle these downsides by combining the positive aspects of Teaching-Learning-based optimization and Artificial Bee Colony. The performance of the proposed method is assessed on the software effort estimation problem, a complex and important issue in project management. Software developers often carry out effort estimation in the early stages of the software development life cycle to derive the required cost and schedule for a project. Among the large number of methods for effort estimation, COCOMO II is one of the most widely used models. However, this model has some restrictions because its parameters have not been optimized. In this work, therefore, we present an approach to overcome this limitation of the COCOMO II model. The experiments were conducted on a NASA software project dataset, and the obtained results indicate that the optimized parameters provide better estimation capability than the original COCOMO II model.
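
    For orientation, the COCOMO II post-architecture effort equation that such optimization targets is PM = A · Size^E · ∏EM_i with E = B + 0.01 · ΣSF_j. A minimal sketch follows; A = 2.94 and B = 0.91 are the commonly cited nominal calibrations (the quantities the paper's hybrid search would re-estimate), and the sample inputs are hypothetical.

```python
import math

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers,
                     A=2.94, B=0.91):
    """COCOMO II post-architecture effort in person-months.

    A and B default to the commonly cited nominal calibrations; the
    paper's point is that such parameters can be re-optimized (there via
    a Teaching-Learning / Artificial Bee Colony hybrid) on project data.
    """
    E = B + 0.01 * sum(scale_factors)   # scale exponent
    em = math.prod(effort_multipliers)  # product of the 17 cost drivers
    return A * ksloc ** E * em

# Hypothetical project: 40 KSLOC, nominal scale factors and cost drivers
pm = cocomo_ii_effort(40.0, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0] * 17)
```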

  12. Efficient Estimation of Non-Linear Dynamic Panel Data Models with Application to Smooth Transition Models

    DEFF Research Database (Denmark)

    Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan

    This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...

  13. Parameters estimation of the single and double diode photovoltaic models using a Gauss–Seidel algorithm and analytical method: A comparative study

    International Nuclear Information System (INIS)

    Et-torabi, K.; Nassar-eddine, I.; Obbadi, A.; Errami, Y.; Rmaily, R.; Sahnoun, S.; El fajri, A.; Agunaou, M.

    2017-01-01

    Highlights: • Comparative study of two methods: a Gauss-Seidel method and an analytical method. • Five models are implemented to estimate the five parameters of the single diode model. • Two models are used to estimate the seven parameters of the double diode model. • The parameters are estimated under changing environmental conditions. • The aim is to choose the method/model combination most adequate for each PV module technology. - Abstract: In the photovoltaic (PV) panel modeling field, this paper presents a comparative study of two parameter estimation methods: the iterative Gauss-Seidel method, applied to the single diode model, and the analytical method, used on the double diode model. These parameter estimation methods are based on the manufacturer's datasheets. They are tested on three PV modules of different technologies: multicrystalline (Kyocera KC200GT), monocrystalline (Shell SQ80), and thin film (Shell ST40). For the iterative method, five existing mathematical models, numbered 1 to 5, are used to estimate the parameters of these PV modules under varying environmental conditions. Only two of these models are used for the analytical method. Each model is based on a combination of expressions for the photocurrent and the reverse saturation current in terms of temperature and irradiance. In addition, the simulation results of the models are compared with the experimental data obtained from the PV modules' datasheets in order to evaluate the accuracy of the models. The simulation shows that the I-V characteristics obtained match the experimental data. In order to validate the reliability of the two methods, both the Absolute Error (AE) and the Root Mean Square Error (RMSE) were calculated. The results suggest that the analytical method can be very useful for monocrystalline and multicrystalline modules, but for the thin film module, the iterative method is the most suitable.
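
    The iterative method revolves around the implicit single-diode equation, which can be sketched as a fixed-point (Gauss-Seidel-style) update as below; the module parameter values shown are illustrative placeholders, not values identified from the datasheets in the paper.

```python
import numpy as np

def single_diode_current(V, Iph, I0, Rs, Rsh, a, Vt, tol=1e-9, max_iter=200):
    """Solve the implicit single-diode equation for the current I by
    fixed-point iteration:

        I = Iph - I0*(exp((V + I*Rs)/(a*Vt)) - 1) - (V + I*Rs)/Rsh
    """
    I = Iph                            # start from the photocurrent
    for _ in range(max_iter):
        I_new = (Iph - I0 * (np.exp((V + I * Rs) / (a * Vt)) - 1.0)
                 - (V + I * Rs) / Rsh)
        if abs(I_new - I) < tol:
            break
        I = I_new
    return I_new

# Illustrative placeholders, loosely in the range of a 54-cell module;
# Vt is the module-level thermal voltage (cell count * kT/q at 25 degC)
I = single_diode_current(V=26.0, Iph=8.2, I0=2e-9, Rs=0.32,
                         Rsh=160.0, a=1.1, Vt=54 * 0.0257)
```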

  14. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  15. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
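
    Both estimators have a simple generic form once per-respondent score vectors and the observed information are available. The sketch below shows only that generic form, not the CDM-specific machinery of the paper; the helper name and input shapes are assumptions.

```python
import numpy as np

def sandwich_covariance(score_contribs, observed_info):
    """Sandwich-type covariance A^{-1} B A^{-1} (generic sketch).

    score_contribs : (n, p) per-respondent score vectors over both item
                     and structural parameters, evaluated at the MMLE
    observed_info  : (p, p) observed information matrix (negative Hessian
                     of the marginal log-likelihood), assumed precomputed
    """
    S = np.asarray(score_contribs, dtype=float)
    B = S.T @ S                        # empirical cross-product matrix
    A_inv = np.linalg.inv(np.asarray(observed_info, dtype=float))
    return A_inv @ B @ A_inv

# The non-robust alternative is np.linalg.inv(observed_info); the two
# agree asymptotically only when the CDM and Q-matrix are correct.
```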

  16. Sums and products of sets and estimates of rational trigonometric sums in fields of prime order

    Energy Technology Data Exchange (ETDEWEB)

    Garaev, Mubaris Z [National Autonomous University of Mexico, Institute of Mathematics (Mexico)

    2010-11-16

    This paper is a survey of main results on the problem of sums and products of sets in fields of prime order and their applications to estimates of rational trigonometric sums. Bibliography: 85 titles.

  17. SUN-RAH: a nucleoelectric BWR university simulator based in reduced order models

    International Nuclear Information System (INIS)

    Morales S, J.B.; Lopez R, A.; Sanchez B, A.; Sanchez S, R.; Hernandez S, A.

    2003-01-01

    The development of a simulator that represents the dynamics of a nuclear power plant with a boiling water reactor (BWR), using reduced order models, is presented. These models retain the characteristics defined by the dominant poles of the system (1), and most early operational transients in a power station can be reproduced with considerable fidelity if the models are identified with plant data or with references from a best-estimate code such as RAMONA, TRAC (2) or RELAP. The simulator models are in-house developments or simplifications derived from physical laws, retaining the main terms. This work describes the objective of the project and the general specifications of the university nucleoelectric BWR simulator (SUN-RAH), as well as the completed parts, which are essentially the nuclear reactor, the nuclear steam supply system (NSSS), the balance of plant (BOP), the main plant controllers and the implemented graphical interfaces. The pending goals, as well as future developments and applications of SUN-RAH, are described. (Author)

  18. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  19. Perspectives on Modelling BIM-enabled Estimating Practices

    Directory of Open Access Journals (Sweden)

    Willy Sher

    2014-12-01

    Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes.  It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management.  Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data are protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality.  Areas for future research are also identified in the paper.

  20. Reference Evapotranspiration Retrievals from a Mesoscale Model Based Weather Variables for Soil Moisture Deficit Estimation

    Directory of Open Access Journals (Sweden)

    Prashant K. Srivastava

    2017-10-01

    Full Text Available Reference evapotranspiration (ETo) and soil moisture deficit (SMD) are vital for understanding hydrological processes, particularly in the context of sustainable water use efficiency around the globe. Precise estimates of ETo and SMD are required for developing appropriate forecasting systems, for hydrological modeling, and for precision agriculture. In this study, the surface temperature downscaled from the Weather Research and Forecasting (WRF) model is used to estimate ETo, with boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). To assess performance, Hamon's method is employed to estimate ETo using both the temperature from a meteorological station and the WRF-derived variables. After estimating ETo, a range of linear and non-linear models is utilized to retrieve SMD. The performance statistics RMSE, %Bias, and Nash-Sutcliffe Efficiency (NSE) indicate that the exponential model (RMSE = 0.226; %Bias = −0.077; NSE = 0.616) is efficient for SMD estimation when using the observed ETo, in comparison to the other linear and non-linear models (RMSE range = 0.019–0.667; %Bias range = 2.821–6.894; NSE = 0.013–0.419) used in this study. On the other hand, when SMD is estimated using ETo based on the WRF-downscaled meteorological variables, the linear model is found promising (RMSE = 0.017; %Bias = 5.280; NSE = 0.448) compared to the non-linear models (RMSE range = 0.022–0.707; %Bias range = −0.207 to −6.088; NSE range = 0.013–0.149). Our findings also suggest that all the models perform better during the growing season (RMSE range = 0.024–0.025; %Bias range = −4.982 to −3.431; r = 0.245–0.281) than the non-growing season (RMSE range = 0.011–0.12; %Bias range = 33.073–32.701; r = 0.161–0.244) for SMD estimation.
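
    Hamon's method itself is compact enough to sketch. One common formulation is shown below (coefficients follow Hamon, 1961; published implementations differ slightly, so treat the constants as assumptions):

```python
import math

def hamon_eto(t_mean_c, daylight_hours):
    """Hamon reference evapotranspiration (mm/day) -- one common
    formulation of the method named in the abstract."""
    # Saturation vapour pressure (mb) at mean daily air temperature
    esat = 6.108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))
    # Saturated absolute humidity (g / m^3)
    rho_sat = 216.7 * esat / (t_mean_c + 273.3)
    ld = daylight_hours / 12.0         # daylength in units of 12 h
    return 0.1651 * ld * rho_sat

# e.g. 18 degC mean temperature and 14 h of daylight
eto = hamon_eto(18.0, 14.0)   # roughly 3 mm/day
```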

  1. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
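
    The Dk statistic is straightforward to compute from any relationship matrix over the chosen reference population; a minimal sketch (the example matrix is invented):

```python
import numpy as np

def dk_statistic(K):
    """Dk = average self-relationship minus the average (self- and
    across-) relationship, over the reference set whose matrix is K."""
    K = np.asarray(K, dtype=float)
    return float(np.mean(np.diag(K)) - np.mean(K))

# e.g. a small pedigree-style relationship matrix for three individuals
K = np.array([[1.00, 0.50, 0.25],
              [0.50, 1.00, 0.25],
              [0.25, 0.25, 1.00]])
Dk = dk_statistic(K)   # ~0.44 here; close to 1 for typical models

# Genetic variance referred to the chosen base population:
# sigma2_ref = Dk * sigma2_hat, with sigma2_hat from the mixed model.
```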

  2. Advanced fuel cycle cost estimation model and its cost estimation results for three nuclear fuel cycles using a dynamic model in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sungki, E-mail: sgkim1@kaeri.re.kr [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Ko, Wonil [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Youn, Saerom; Gao, Ruxing [University of Science and Technology, 217 Gajungro, Yuseong-gu, Daejeon 305-350 (Korea, Republic of); Bang, Sungsig, E-mail: ssbang@kaist.ac.kr [Korea Advanced Institute of Science and Technology, Department of Business and Technology Management, 291 Deahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2015-11-15

    Highlights: • The nuclear fuel cycle cost using a new cost estimation model was analyzed. • The material flows of three nuclear fuel cycle options were calculated. • The generation cost of once-through was estimated to be 66.88 mills/kW h. • The generation cost of pyro-SFR recycling was estimated to be 78.06 mills/kW h. • The reactor cost was identified as the main cost driver of pyro-SFR recycling. - Abstract: The present study analyzes advanced nuclear fuel cycle cost estimation models, such as the different discount rate model, and their cost estimation results. To do so, an economic analysis of the nuclear fuel cycle cost of three options (direct disposal (once-through), PWR–MOX (Mixed OXide fuel), and Pyro-SFR (Sodium-cooled Fast Reactor)), focusing on the cost estimation model, was conducted using a dynamic model. From the fuel cycle cost estimation results, it was found that a cost gap exists between the traditional same discount rate model and the advanced different discount rate model. However, this gap does not change the priority of the nuclear fuel cycle options from the viewpoint of economics. In addition, the fuel cycle costs of OT (Once-Through) and Pyro-SFR recycling, based on the most likely values from a probabilistic cost estimation excluding reactor costs, were calculated to be 8.75 mills/kW h and 8.30 mills/kW h, respectively; that is, the Pyro-SFR recycling option was more economical than the direct disposal option. However, if the reactor cost is considered, the ranking of the two options (direct disposal vs. Pyro-SFR recycling) in generation cost can change because of the high reactor cost of an SFR.

  3. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; among them, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
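
    The core of such an approach is the construction of an orthogonal Krylov basis. The sketch below shows a generic Arnoldi iteration; in the paper's setting the matrix A would be the discretized operator derived from the velocity model, which is an assumption here (a random matrix stands in).

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis Q of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} via Arnoldi with modified Gram-Schmidt."""
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:        # happy breakdown: subspace exhausted
            return Q[:, : j + 1], H[: j + 1, : j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# Stand-in operator: in the paper A would come from the velocity model
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
Q, H = arnoldi(A, rng.standard_normal(200), m=20)
# Reduced-order dynamics then evolve in the 20-dimensional basis Q
# instead of the full 200-dimensional space.
```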

  4. Reducing uncertainty for estimating forest carbon stocks and dynamics using integrated remote sensing, forest inventory and process-based modeling

    Science.gov (United States)

    Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.

    2015-12-01

    Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.

  5. A method for state of energy estimation of lithium-ion batteries based on neural network model

    International Nuclear Information System (INIS)

    Dong, Guangzhong; Zhang, Xu; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    The state-of-energy is an important evaluation index for energy optimization and management of power battery systems in electric vehicles. Unlike the state-of-charge, which represents the residual capacity of the battery in traditional applications, the state-of-energy is the integral of battery power, the product of current and terminal voltage. On the other hand, like the state-of-charge, the state-of-energy has an effect on terminal voltage. This nonlinear relationship between state-of-energy and terminal voltage complicates the estimation of a battery's state-of-energy. To address this issue, a method based on a wavelet-neural-network battery model and a particle filter estimator is presented for state-of-energy estimation. The wavelet-neural-network based battery model is used to simulate the entire dynamic electrical characteristics of batteries. The temperature and discharge rate are also taken into account to improve model accuracy. In addition, to suppress the measurement noises of current and voltage, a particle filter estimator is applied to estimate the cell state-of-energy. Experimental results on LiFePO_4 batteries indicate that the wavelet-neural-network based battery model simulates battery dynamics robustly with high accuracy and that the estimate from the particle filter converges to the real state-of-energy within an error of ±4%. - Highlights: • State-of-charge is replaced by state-of-energy to determine the cells' residual energy. • The battery state-space model is established based on a neural network. • Temperature and current influence are considered to improve the model accuracy. • The particle filter is used for state-of-energy estimation to improve accuracy. • The robustness of the new method is validated under dynamic experimental conditions.

  6. Understanding the limit order book: Conditioning on trade informativeness

    OpenAIRE

    Beltran, Héléna; Grammig, Joachim; Menkveld, Albert J.

    2005-01-01

    Electronic limit order books are ubiquitous in markets today. However, theoretical models for limit order markets fail to explain the real world data well. Sandas (2001) tests the classic Glosten (1994) model for order book equilibrium and rejects it. We reconfirm this result for one of the largest European stock markets. We then relax one of the model's assumptions and allow the informational content of trades to change over time. Adapting Hasbrouck's (1991a,b) methodology to estimate time v...

  7. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  8. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  9. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  10. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models is available to ecologists; however, it is not always clear which method is appropriate to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance

  11. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  12. Algebraic Specifications, Higher-order Types and Set-theoretic Models

    DEFF Research Database (Denmark)

    Kirchner, Hélène; Mosses, Peter David

    2001-01-01

    In most algebraic specification frameworks, the type system is restricted to sorts, subsorts, and first-order function types. This is in marked contrast to the so-called model-oriented frameworks, which provide higher-order types, interpreted set-theoretically as Cartesian products, function spaces, and power-sets. This paper presents a simple framework for algebraic specifications with higher-order types and set-theoretic models. It may be regarded as the basis for a Horn-clause approximation to the Z framework, and has the advantage of being amenable to prototyping and automated reasoning. Standard set-theoretic models are considered, and conditions are given for the existence of initial reducts of such models. Algebraic specifications for various set-theoretic concepts are considered.

  13. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, development is limited in the context of linear mixed effect models and, in particular, of small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors and propose a method of estimating them. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  14. Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model is proposed to overcome the shortcomings of the classical assumptions of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model and studies its three basic problems, including probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on the rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, the proposed model has richer statistical characteristics in its structure and can therefore describe some practical problems more accurately.

  15. Reduced order modeling of flashing two-phase jets

    Energy Technology Data Exchange (ETDEWEB)

    Gurecky, William, E-mail: william.gurecky@utexas.edu; Schneider, Erich, E-mail: eschneider@mail.utexas.edu; Ballew, Davis, E-mail: davisballew@utexas.edu

    2015-12-01

    Highlights: • Accident simulation requires the ability to quickly predict a two-phase flashing jet's damage potential. • A reduced order modeling methodology informed by experimental or computational data is described. • Zone of influence volumes are calculated for jets of various upstream thermodynamic conditions. - Abstract: In the event of a Loss of Coolant Accident (LOCA) in a pressurized water reactor, the escaping coolant produces a highly energetic flashing jet with the potential to damage surrounding structures. In LOCA analysis, the goal is often to evaluate many break scenarios in a Monte Carlo style simulation to evaluate the resilience of a reactor design. Therefore, in order to quickly predict the damage potential of flashing jets, it is of interest to develop a reduced order model that relates the damage potential of a jet to the pressure and temperature upstream of the break and the distance from the break to a given object upon which the jet is impinging. This work presents a framework for producing a Reduced Order Model (ROM) that may be informed by measured data, Computational Fluid Dynamics (CFD) simulations, or a combination of both. The model is constructed by performing regression analysis on the pressure field data, allowing the impingement pressure to be quickly reconstructed for any given upstream thermodynamic condition within the range of input data. The model is applicable to both free and fully impinging two-phase flashing jets.
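
    A minimal stand-in for the regression step might look as follows: fit a quadratic response surface mapping upstream pressure, upstream temperature and standoff distance to peak impingement pressure. The training data below are synthetic, generated from an invented decay law purely for illustration; the paper's ROM is built from measured or CFD data instead.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic training set: upstream pressure (MPa), temperature (K),
# standoff distance (m) -> peak impingement pressure (MPa).
P = rng.uniform(10.0, 16.0, 60)
T = rng.uniform(540.0, 600.0, 60)
D = rng.uniform(0.2, 2.0, 60)
y = 0.9 * P * np.exp(-1.5 * D) * (1.0 + 0.001 * (T - 560.0)) \
    + rng.normal(0.0, 0.05, 60)        # invented decay law + noise

def features(p, t, d):
    """Quadratic response-surface features in the three inputs."""
    return np.column_stack([np.ones_like(p), p, t, d,
                            p * t, p * d, t * d, p**2, t**2, d**2])

coef, *_ = np.linalg.lstsq(features(P, T, D), y, rcond=None)

def impingement_pressure(p_up, t_up, dist):
    """Reconstruct peak impingement pressure from the fitted surface."""
    x = features(np.atleast_1d(p_up), np.atleast_1d(t_up),
                 np.atleast_1d(dist))
    return float(x @ coef)

p_hat = impingement_pressure(14.0, 580.0, 1.0)
```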

  16. Partial-Order Reduction for GPU Model Checking

    NARCIS (Netherlands)

    Neele, T.; Wijs, A.; Bosnacki, D.; van de Pol, Jan Cornelis; Artho, C; Legay, A.; Peled, D.

    2016-01-01

    Model checking using GPUs has seen increased popularity over the last years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose

  17. CLEAR (Calculates Logical Evacuation And Response): A generic transportation network model for the calculation of evacuation time estimates

    International Nuclear Information System (INIS)

    Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)

  19. Lagrangian generic second order traffic flow models for node

    Directory of Open Access Journals (Sweden)

    Asma Khelifi

    2018-02-01

    Full Text Available This study sheds light on higher order macroscopic traffic flow modeling on road networks, based on the generic second order models (GSOM) family, which embeds a myriad of traffic models. It has been demonstrated that such higher order models are easily solved in Lagrangian coordinates, which are compatible with both microscopic and macroscopic descriptions. The generalized GSOM model is reformulated in the Lagrangian coordinate system to develop a more efficient numerical method. The difficulty in applying this approach on networks resides mainly in dealing with node dynamics. Traffic flow characteristics at a node differ from those on homogeneous links, and different geometric features can lead to different critical research issues: for instance, discontinuity in the traffic stream can be an important issue for traffic signal operations, while capacity drop may be crucial for lane merges. The current paper aims to establish and analyze a new adapted node model for macroscopic traffic flow models by applying upstream and downstream boundary conditions in Lagrangian coordinates, in order to perform simulations on road networks, together with an accompanying numerical method. The internal node dynamics between upstream and downstream links are taken into account in the node model. A numerical example is provided to underscore the efficiency of this approach. Simulations show that the discretized node model yields accurate results. Additional kinematic waves and contact discontinuities are induced by the variation of the driver attribute.

  20. Multi-skyrmion solutions of a sixth order Skyrme model

    International Nuclear Information System (INIS)

    Floratos, I.

    2001-08-01

    In this thesis, we study some of the classical properties of an extension of the Skyrme model defined by adding a sixth order derivative term to the Lagrangian. In chapter 1, we review the physical as well as the mathematical motivation behind the study of the Skyrme model, and in chapter 2, we give a brief summary of the various extended Skyrme models that have been proposed over the last few years. We then define a new sixth order Skyrme model by introducing a dimensionless parameter λ that denotes the mixing between the two higher order terms, the Skyrme term and the sixth order term. In chapter 3 we compute numerically the multi-skyrmion solutions of this extended model and show that they have the same symmetries as the usual skyrmion solutions. In addition, we analyse the dependence of the energy and radius of these classical solutions on the coupling constant λ. We compare our results with experimental data and determine whether this modified model can provide us with better theoretical predictions than the original one. In chapter 4, we use the rational map ansatz, introduced by Houghton, Manton and Sutcliffe, to approximate minimum energy multi-skyrmion solutions with B ≤ 9 of the SU(2) model and with B ≤ 6 of the SU(3) model. We compare our results with the ones obtained numerically and show that the rational map ansatz works just as well for the generalised model as for the pure Skyrme model, at least for B ≤ 5. In chapter 5, we use a generalisation of the rational map ansatz, introduced by Ioannidou, Piette and Zakrzewski, to construct analytically some topologically non-trivial solutions of the extended model in SU(3). These solutions are spherically symmetric and some of them can be interpreted as bound states of skyrmions. Finally, we use the same ansatz to construct low energy configurations of the SU(N) sixth order Skyrme model. (author)

  1. Partial Orders and Fully Abstract Models for Concurrency

    DEFF Research Database (Denmark)

    Engberg, Uffe Henrik

    1990-01-01

    In this thesis sets of labelled partial orders are employed as fundamental mathematical entities for modelling nondeterministic and concurrent processes thereby obtaining so-called noninterleaving semantics. Based on different closures of sets of labelled partial orders, simple algebraic language...

  2. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
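
    For point estimation under model uncertainty, BIC-based posterior model probabilities give the usual averaging weights. A minimal sketch follows (equal model priors assumed; the BIC values are invented):

```python
import numpy as np

def bma_weights(bics):
    """Posterior model probabilities from BIC values under equal priors,
    w_k proportional to exp(-BIC_k / 2) -- the standard BIC approximation
    to Bayes factors used when averaging point estimates."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))   # subtract the min for stability
    return w / w.sum()

# Averaged estimate across candidate specifications:
#   theta_bma = sum_k w_k * theta_hat_k
w = bma_weights([102.3, 100.1, 104.8])
```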

  3. Higher-order RANS turbulence models for separated flows

    Data.gov (United States)

    National Aeronautics and Space Administration — Higher-order Reynolds-averaged Navier-Stokes (RANS) models are developed to overcome the shortcomings of second-moment RANS models in predicting separated flows....

  4. Inadmissibility of Usual and Mixed Estimators of Two Ordered Gamma Scale Parameters Under Reflected Gamma Loss Function

    OpenAIRE

    Z. Meghnatisi; N. Nematollahi

    2009-01-01

    Let Xi1, · · · , Xini be a random sample from a gamma distribution with known shape parameter νi > 0 and unknown scale parameter βi > 0, i = 1, 2, satisfying 0 < β1 ≤ β2. We consider the class of mixed estimators for estimating β1 and β2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of βi, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a cla...

  5. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and Miner's rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed to be linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
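
    The damage accumulation step is easy to sketch: given Rainflow-counted stress ranges and cycle counts, apply Miner's rule with a Basquin-type S-N curve. The S-N constants and cycle data below are illustrative placeholders, not the solder-joint values used in the paper.

```python
import numpy as np

def miner_damage(stress_ranges, cycle_counts, C=1e12, m=3.0):
    """Accumulated fatigue damage via Miner's rule with a Basquin-type
    S-N curve N(S) = C * S**(-m); C and m are placeholder constants."""
    S = np.asarray(stress_ranges, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    N = C * S ** (-m)            # cycles to failure at each stress range
    return float(np.sum(n / N))  # failure is predicted when damage >= 1

# Stress ranges (MPa) and counted cycles, e.g. from Rainflow counting
D = miner_damage([40.0, 60.0, 80.0], [2e5, 5e4, 1e4])
```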

  6. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  7. Estimating varying coefficients for partial differential equation models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2017-09-01

    Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.

  8. Composite symmetry-protected topological order and effective models

    Science.gov (United States)

    Nietner, A.; Krumnow, C.; Bergholtz, E. J.; Eisert, J.

    2017-12-01

    Strongly correlated quantum many-body systems at low dimension exhibit a wealth of phenomena, ranging from features of geometric frustration to signatures of symmetry-protected topological order. In suitable descriptions of such systems, it can be helpful to resort to effective models, which focus on the essential degrees of freedom of the given model. In this work, we analyze how to determine the validity of an effective model by demanding it to be in the same phase as the original model. We focus our study on one-dimensional spin-1 /2 systems and explain how nontrivial symmetry-protected topologically ordered (SPT) phases of an effective spin-1 model can arise depending on the couplings in the original Hamiltonian. In this analysis, tensor network methods feature in two ways: on the one hand, we make use of recent techniques for the classification of SPT phases using matrix product states in order to identify the phases in the effective model with those in the underlying physical system, employing Künneth's theorem for cohomology. As an intuitive paradigmatic model we exemplify the developed methodology by investigating the bilayered Δ chain. For strong ferromagnetic interlayer couplings, we find the system to transit into exactly the same phase as an effective spin-1 model. However, for weak but finite coupling strength, we identify a symmetry broken phase differing from this effective spin-1 description. On the other hand, we underpin our argument with a numerical analysis making use of matrix product states.

  9. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  10. Random balance designs for the estimation of first order global sensitivity indices

    International Nuclear Information System (INIS)

    Tarantola, S.; Gatelli, D.; Mara, T.A.

    2006-01-01

    We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for the model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and they significantly reduce the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by the OECD/NEA.
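
    A compact sketch of a random balance design estimator follows: every factor traverses [0, 1] along its own random permutation of a periodic design, the model output is reordered per factor, and the main effect is read off the low harmonics of the reordered output's spectrum. The number of harmonics retained and the test function are choices made for this example, not prescriptions from the paper.

```python
import numpy as np

def rbd_first_order(model, k, N=1024, n_harmonics=6, seed=0):
    """First-order sensitivity indices by random balance design (sketch)."""
    rng = np.random.default_rng(seed)
    s0 = np.linspace(-np.pi, np.pi, N, endpoint=False)
    perms = [rng.permutation(N) for _ in range(k)]
    # Each factor traverses [0, 1] along its own random permutation of s0
    X = np.column_stack([0.5 + np.arcsin(np.sin(s0[p])) / np.pi
                         for p in perms])
    y = np.apply_along_axis(model, 1, X)
    V = np.var(y)
    S = np.empty(k)
    for i, p in enumerate(perms):
        y_ord = y[np.argsort(s0[p])]   # reorder output along factor i
        spec = 2.0 * np.abs(np.fft.rfft(y_ord - y_ord.mean()))**2 / N**2
        S[i] = spec[1:1 + n_harmonics].sum() / V  # power at low harmonics
    return S

# Additive test function on [0, 1]^3: the first factor should dominate
S = rbd_first_order(lambda x: np.sin(2 * np.pi * x[0])
                    + 0.5 * x[1]**2 + 0.1 * x[2], k=3)
```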

  11. Optimal inventory management and order book modeling

    KAUST Repository

    Baradel, Nicolas

    2018-02-16

    We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic of the order book, similar to the one considered in the Queue-Reactive models [14, 20, 21], the MM and the HFT define their trading strategy by optimizing the expected utility of terminal wealth, while the IB has a prescheduled task to sell or buy many shares of the considered asset. We derive the variational partial differential equations that characterize the value functions of the MM and HFT and explain how almost optimal control can be deduced from them. We then provide a first illustration of the interactions that can take place between these different market participants by simulating the dynamic of an order book in which each of them plays his own (optimal) strategy.

  12. Kalman filtering state of charge estimation for battery management system based on a stochastic fuzzy neural network battery model

    International Nuclear Information System (INIS)

    Xu Long; Wang Junping; Chen Quanshi

    2012-01-01

    Highlights: ► A novel extended Kalman filtering SOC estimation method based on a stochastic fuzzy neural network (SFNN) battery model is proposed. ► The SFNN, which has a filtering effect on noisy input, can model the battery's nonlinear dynamics with high accuracy. ► A robust parameter learning algorithm for the SFNN is studied so that the parameters converge to their true values despite noisy data. ► The maximum SOC estimation error with the proposed method is 0.6%. - Abstract: Extended Kalman filtering is an intelligent and optimal means of estimating the state of a dynamic system. In order to use extended Kalman filtering to estimate the state of charge (SOC), we require a mathematical model that can accurately capture the dynamics of the battery pack. In this paper, we propose a stochastic fuzzy neural network (SFNN), which has a filtering effect on noisy input, instead of a traditional neural network to model the battery's nonlinear dynamics. The paper then studies the extended Kalman filtering SOC estimation method based on the SFNN model. The modeling test is carried out on an 80 Ah Ni/MH battery pack, and the Federal Urban Driving Schedule (FUDS) cycle is used to verify the SOC estimation method. The maximum SOC estimation error is 0.6% compared with the real SOC obtained from the discharging test.
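
    The EKF recursion itself is independent of the battery model used inside it. The sketch below substitutes a deliberately simple linear OCV-plus-resistance model for the paper's SFNN, so it illustrates only the predict/update structure; every parameter value is invented.

```python
import numpy as np

# Minimal EKF for SOC with a linear OCV curve and ohmic resistance --
# a toy stand-in for the paper's stochastic fuzzy neural network model.
CAP_AS = 80.0 * 3600.0      # 80 Ah pack capacity in ampere-seconds
R0 = 0.002                  # ohmic resistance (illustrative)
OCV0, OCV1 = 11.8, 2.4      # OCV(soc) = OCV0 + OCV1 * soc (illustrative)

def ekf_soc_step(soc, P, i_k, v_meas, dt, Q=1e-7, R=1e-3):
    """One predict/update cycle: state is SOC, measurement is voltage."""
    # Predict: coulomb counting (discharge current taken as positive)
    soc_pred = soc - i_k * dt / CAP_AS
    P_pred = P + Q
    # Update: v = OCV(soc) - R0 * i, so H = d v / d soc = OCV1
    v_pred = OCV0 + OCV1 * soc_pred - R0 * i_k
    H = OCV1
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain (scalar state)
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new

soc, P = 0.9, 1e-4
soc, P = ekf_soc_step(soc, P, i_k=20.0, v_meas=13.9, dt=1.0)
```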

  13. Biochemical transport modeling, estimation, and detection in realistic environments

    Science.gov (United States)

    Ortner, Mathias; Nehorai, Arye

    2006-05-01

    Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion, using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and for estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detection of whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g., release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.

  14. Customized Steady-State Constraints for Parameter Estimation in Non-Linear Ordinary Differential Equation Models.

    Science.gov (United States)

    Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel

    2016-01-01

    Ordinary differential equation models have become a widespread approach to analyzing dynamical systems and understanding underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated into the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
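
    To see why solving the steady state for kinetic parameters yields a linear, single-valued system, consider a hypothetical two-species chain 0 -> A -> B -> 0; a sketch with sympy:

    ```python
    import sympy as sp

    # Hypothetical two-species chain: 0 -k0-> A -k1-> B -k2-> 0
    k0, k1, k2, A, B = sp.symbols('k0 k1 k2 A B', positive=True)
    dA = k0 - k1 * A          # steady state: dA = dB = 0
    dB = k1 * A - k2 * B
    # Solving for kinetic parameters (not for A, B) is linear and single-valued:
    print(sp.solve([dA, dB], [k0, k2], dict=True)[0])   # {k0: A*k1, k2: A*k1/B}
    ```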

  15. ON TESTING OF CRYPTOGRAPHIC GENERATORS OUTPUT SEQUENCES USING MARKOV CHAINS OF CONDITIONAL ORDER

    Directory of Open Access Journals (Sweden)

    M. V. Maltsev

    2013-01-01

    The paper deals with the Markov chain of conditional order, which is used for statistical testing of cryptographic generators. Statistical estimates of the model parameters are given, the consistency of the order estimator is proved, and results of computer experiments are presented.
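
    For orientation only, a sketch of classical fixed-order Markov chain order selection by BIC on a binary output sequence; the conditional-order model of the paper generalizes this construction:

    ```python
    import numpy as np
    from collections import Counter

    def loglik(seq, r):
        # maximized log-likelihood of a binary sequence under an order-r chain
        ctx, joint = Counter(), Counter()
        for t in range(r, len(seq)):
            c = tuple(seq[t - r:t])
            ctx[c] += 1
            joint[c + (seq[t],)] += 1
        return sum(n * np.log(n / ctx[k[:-1]]) for k, n in joint.items())

    def bic_order(seq, rmax=4):
        # an order-r binary chain has 2**r free parameters
        n = len(seq)
        scores = [-2 * loglik(seq, r) + 2**r * np.log(n) for r in range(rmax + 1)]
        return int(np.argmin(scores))
    ```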

  16. Flocking of Second-Order Multiagent Systems With Connectivity Preservation Based on Algebraic Connectivity Estimation.

    Science.gov (United States)

    Fang, Hao; Wei, Yue; Chen, Jie; Xin, Bin

    2017-04-01

    The problem of flocking of second-order multiagent systems with connectivity preservation is investigated in this paper. First, for estimating the algebraic connectivity as well as the corresponding eigenvector, a new decentralized inverse power iteration scheme is formulated. Then, based on the estimation of the algebraic connectivity, a set of distributed gradient-based flocking control protocols is built with a new class of generalized hybrid potential fields which guarantee collision avoidance, desired distance stabilization, and connectivity of the underlying communication network simultaneously. Importantly, the proposed control scheme allows existing edges to be broken without violating the connectivity constraints, and thus yields more flexible motions and reduces the communication cost of the multiagent system. Finally, nontrivial comparative simulations and experiments are performed to demonstrate the effectiveness of the theoretical results and to highlight the advantages of the proposed estimation scheme and control algorithm.
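
    A centralized sketch of the inverse power iteration that underlies the scheme (the paper's contribution, a decentralized variant, is not reproduced here):

    ```python
    import numpy as np

    def fiedler(L, iters=200):
        """Inverse power iteration for the algebraic connectivity (second-smallest
        Laplacian eigenvalue) and the corresponding eigenvector."""
        n = L.shape[0]
        v = np.random.default_rng(1).standard_normal(n)
        v -= v.mean()                                    # project out the all-ones nullspace
        M = L + np.outer(np.ones(n), np.ones(n)) / n     # invertible on the 1-orthogonal space
        for _ in range(iters):
            v = np.linalg.solve(M, v)
            v -= v.mean()
            v /= np.linalg.norm(v)
        return v @ L @ v, v                              # Rayleigh quotient ~ lambda_2
    ```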

  17. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Science.gov (United States)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), which merges the advantages of the LM and BC methods, is then proposed. In conjunction with the first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods better handle the heteroscedasticity and hence improve the predictive performance, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
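
    A minimal sketch contrasting the explicit (LM) and implicit (BC) treatments via standardized residuals; the function names and parameterizations are illustrative, not the paper's:

    ```python
    import numpy as np

    def standardize_lm(res, pred, a, b):
        # explicit linear model of the error standard deviation: sigma_t = a + b*pred_t
        return res / (a + b * pred)

    def standardize_bc(obs, pred, lam):
        # implicit treatment: residuals computed on the Box-Cox transformed scale
        bc = (lambda y: (y**lam - 1) / lam) if lam != 0 else np.log
        return bc(obs) - bc(pred)
    ```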

  18. Applicability of models to estimate traffic noise for urban roads.

    Science.gov (United States)

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several of the available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of lane width and the distance from the noise meter to the lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimation on urban roads under continuous-flow conditions. Key issues in applying such models to urban roads are highlighted.

  19. Advanced Fluid Reduced Order Models for Compressible Flow.

    Energy Technology Data Exchange (ETDEWEB)

    Tezaur, Irina Kalashnikova [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maddix, Danielle [Stanford Univ., CA (United States); Mussoni, Erin E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Balajewicz, Maciej [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2017-09-01

    This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
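
    A minimal sketch of the POD stage, assuming snapshots are stored column-wise; the LSPG step, which minimizes the full-order residual over the reduced coordinates instead of projecting the governing equations as in plain Galerkin, is indicated in comments only:

    ```python
    import numpy as np

    def pod_basis(snapshots, energy=0.999):
        """POD via thin SVD of a snapshot matrix X (n_dof x n_snapshots)."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
        return U[:, :k]       # reduced basis Phi

    # LSPG then seeks generalized coordinates q minimizing the full-order
    # residual ||r(Phi @ q)||_2 at each time step (e.g. by Gauss-Newton),
    # rather than projecting the governing equations as in plain Galerkin.
    ```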

  20. Reduced-order modelling of wind turbines

    NARCIS (Netherlands)

    Elkington, K.; Slootweg, J.G.; Ghandhari, M.; Kling, W.L.; Ackermann, T.

    2012-01-01

    In this chapter, power system dynamics simulation (PSDS) is used to study the dynamics of large-scale power systems. It is necessary to incorporate models of wind turbine generating systems into PSDS software packages in order to analyse the impact of high wind power penetration on electrical power systems.

  1. A low-order coupled chemistry meteorology model for testing online and offline data assimilation schemes

    Science.gov (United States)

    Haussaire, J.-M.; Bocquet, M.

    2015-08-01

    Bocquet and Sakov (2013) have introduced a low-order model based on the coupling of the chaotic Lorenz-95 model, which simulates winds along a mid-latitude circle, with the transport of a tracer species advected by this zonal wind field. This model, named L95-T, can serve as a playground for testing data assimilation schemes with an online model. Here, the tracer part of the model is extended to a reduced photochemistry module. This coupled chemistry meteorology model (CCMM), the L95-GRS model, mimics continental and transcontinental transport and the photochemistry of ozone, volatile organic compounds and nitrogen oxides. Its numerical implementation is described. The model is shown to reproduce the major physical and chemical processes being considered. L95-T and L95-GRS are specifically designed and useful for testing advanced data assimilation schemes, such as the iterative ensemble Kalman smoother (IEnKS), which combines the best of ensemble and variational methods. These models provide useful insights prior to the implementation of data assimilation methods on larger models. We illustrate their use with data assimilation schemes in preliminary, yet instructive numerical experiments. In particular, online and offline data assimilation strategies can be conveniently tested and discussed with this low-order CCMM. The impact of observed chemical species concentrations on the wind field can be quantitatively estimated. The impacts of the wind's chaotic dynamics and of the chemical species' non-chaotic but highly nonlinear dynamics on the data assimilation strategies are illustrated.
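
    For reference, a short sketch of the Lorenz-95/96 "wind" component that the tracer and chemistry modules are coupled to (the RK4 step size and forcing F = 8 are the conventional choices, assumed here):

    ```python
    import numpy as np

    def l95_rhs(x, F=8.0):
        """Lorenz-95/96 tendencies on a cyclic mid-latitude circle:
        dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def step_rk4(x, dt=0.05, F=8.0):
        k1 = l95_rhs(x, F); k2 = l95_rhs(x + dt/2*k1, F)
        k3 = l95_rhs(x + dt/2*k2, F); k4 = l95_rhs(x + dt*k3, F)
        return x + dt/6*(k1 + 2*k2 + 2*k3 + k4)

    x = 8.0 + 0.1 * np.random.default_rng(0).standard_normal(40)
    for _ in range(1000):
        x = step_rk4(x)           # chaotic zonal wind field advecting the tracers
    ```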

  2. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
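
    A generic sketch of the GLUE loop with a Nash-Sutcliffe likelihood measure and a behavioural threshold; simulator, prior_sampler and the 0.7 cut-off are placeholders, not the study's settings:

    ```python
    import numpy as np

    def glue(simulator, obs, prior_sampler, n=10000, threshold=0.7):
        """Monte Carlo GLUE: keep 'behavioural' parameter sets and weight
        their predictions by the likelihood measure."""
        keep, weights = [], []
        for _ in range(n):
            theta = prior_sampler()
            sim = simulator(theta)
            ns = 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)
            if ns > threshold:              # behavioural cut-off
                keep.append(theta)
                weights.append(ns)
        w = np.array(weights)
        return keep, w / w.sum()            # weighted sample for uncertainty bands
    ```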

  3. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: •SOC and capacity are dually estimated with an online adapted battery model. •Model identification and state dual estimation are fully decoupled. •Multiple timescales are used to improve estimation accuracy and stability. •The proposed method is verified with lab-scale experiments. •The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve the model accuracy and stability. Specifically, the model parameters are adapted online with vector-type recursive least squares (VRLS) to address their different variation rates. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and the RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both a lithium-ion battery and a vanadium redox flow battery (VRB) verify the generality of the proposed method across battery chemistries. The proposed method is also compared with other existing methods in terms of computational cost to reveal its superiority for practical application.
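
    A sketch of the plain exponentially weighted RLS update at the core of such online model identification; the paper's vector-type VRLS and its multi-timescale scheduling are not reproduced here:

    ```python
    import numpy as np

    def rls(phi_stream, y_stream, n, lam=0.995):
        """Exponentially weighted recursive least squares for online parameter
        identification (forgetting factor lam); yields the running estimate."""
        theta = np.zeros(n)
        P = 1e3 * np.eye(n)
        for phi, y in zip(phi_stream, y_stream):
            k = P @ phi / (lam + phi @ P @ phi)      # gain
            theta = theta + k * (y - phi @ theta)    # parameter update
            P = (P - np.outer(k, phi @ P)) / lam     # covariance update
            yield theta
    ```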

  4. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite-sample properties of the estimators, namely: sum of biases, sum of absolute biases, sum of variances, and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  5. Estimation of unemployment rates using small area estimation model by combining time series and cross-sectional data

    Science.gov (United States)

    Muchlisoh, Siti; Kurnia, Anang; Notodiputro, Khairil Anwar; Mangku, I. Wayan

    2016-02-01

    Labor force surveys conducted over time under a rotating panel design have been carried out in many countries, including Indonesia. The labor force survey in Indonesia is regularly conducted by Statistics Indonesia (Badan Pusat Statistik-BPS) and is known as the National Labor Force Survey (Sakernas). The main purpose of Sakernas is to obtain information about unemployment rates and their changes over time. Sakernas is a quarterly survey, designed only for estimating parameters at the provincial level. The quarterly unemployment rate published by BPS (official statistics) is calculated using cross-sectional methods only, despite the fact that the data are collected under a rotating panel design. The purpose of this study was to estimate quarterly unemployment rates at the district level using a small area estimation (SAE) model that combines time series and cross-sectional data. The study focused on the application of, and comparison between, the Rao-Yu model and the dynamic model in estimating the unemployment rate based on a rotating panel survey. The goodness of fit of the two models was almost identical. Both models produced similar estimates that were better than the direct estimates, but the dynamic model was more capable than the Rao-Yu model of capturing heterogeneity across areas, although this capability was reduced over time.

  6. Parameter and State Estimation of an Anaerobic Digestion of Organic Wastes Model with Addition of Stimulating Substances

    Directory of Open Access Journals (Sweden)

    Velislava Lubenova

    2009-03-01

    New control inputs are introduced into the 5th-order mass-balance non-linear model of anaerobic digestion, reflecting the addition of stimulating substances (acetate and glucose). Laboratory experiments have been carried out with step-wise and pulse changes of these new inputs. On the basis of the step responses of the measured variables (biogas flow rate and acetate concentration in the bioreactor) and an iterative methodology involving non-linear optimisation and simulations, the model coefficients have been estimated. The model validity has been proved with another set of experiments. The observation part is built on a two-step structure: one estimator and two observers are designed on the basis of this process model. Their stability has been proved and their performance has been investigated with experimental data and simulations.

  8. From models to measurements: comparing downed dead wood carbon stock estimates in the U.S. forest inventory.

    Science.gov (United States)

    Domke, Grant M; Woodall, Christopher W; Walters, Brian F; Smith, James E

    2013-01-01

    The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.'s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events.

  9. Diffuse solar radiation estimation models for Turkey's big cities

    International Nuclear Information System (INIS)

    Ulgen, Koray; Hepbasli, Arif

    2009-01-01

    A reasonably accurate knowledge of the availability of the solar resource at any place is required by solar engineers, architects, agriculturists, and hydrologists in many applications of solar energy such as solar furnaces, concentrating collectors, and interior illumination of buildings. For this purpose, various empirical models (or correlations) have been developed in order to estimate the solar radiation around the world. This study deals with diffuse solar radiation estimation models along with the statistical test methods used to evaluate their performance. Models used to predict monthly average daily values of diffuse solar radiation are classified in four groups as follows: (i) from the diffuse fraction or cloudiness index, as a function of the clearness index; (ii) from the diffuse fraction or cloudiness index, as a function of the relative sunshine duration or sunshine fraction; (iii) from the diffuse coefficient, as a function of the clearness index; and (iv) from the diffuse coefficient, as a function of the relative sunshine duration or sunshine fraction. Empirical correlations are also developed to establish a relationship between the monthly average daily diffuse fraction or cloudiness index (K_d) and the monthly average daily diffuse coefficient (K_dd) with the monthly average daily clearness index (K_T) and the monthly average daily sunshine fraction (S/S_o) for the three biggest cities by population in Turkey (Istanbul, Ankara and Izmir). Although global solar radiation on a horizontal surface and sunshine duration have been measured by the Turkish State Meteorological Service (STMS) throughout the country since 1964, diffuse solar radiation has not been measured. The eight new models for estimating the monthly average daily diffuse solar radiation on a horizontal surface in the three big cities are validated, and thus the most accurate model is selected for guiding future projects. The new models are then compared with the 32 models available in the literature.

  10. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
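
    A minimal sketch of the two CROM ingredients, clustering of snapshots and the cluster-transition matrix that discretizes the Perron-Frobenius operator (the sparsity-enabled sensing itself is not shown):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def crom(snapshots, k=10):
        """Cluster-based ROM: k-means on snapshots (n_time x n_dof), then a
        row-stochastic matrix of transitions between cluster labels over time."""
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(snapshots)
        P = np.zeros((k, k))
        for a, b in zip(labels[:-1], labels[1:]):
            P[a, b] += 1
        P /= np.maximum(P.sum(axis=1, keepdims=True), 1)   # avoid empty rows
        return labels, P
    ```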

  11. Hydrological model uncertainty due to spatial evapotranspiration estimation methods

    Science.gov (United States)

    Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub

    2016-05-01

    Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03′N, 12°40′E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.

  12. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
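
    In its common one-dimensional form, with trait density p(x,t), macroevolutionary potential V(x) and diffusion rate σ² (notation assumed here, not taken from the paper), the equation and its stationary density read:

    ```latex
    % 1-D Fokker-Planck (Kolmogorov forward) equation and its stationary density
    \frac{\partial p(x,t)}{\partial t}
      = \frac{\partial}{\partial x}\!\left[V'(x)\,p(x,t)\right]
      + \frac{\sigma^{2}}{2}\,\frac{\partial^{2} p(x,t)}{\partial x^{2}},
    \qquad
    p^{*}(x) \propto \exp\!\left(-\frac{2V(x)}{\sigma^{2}}\right)
    ```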

  13. Reduced-Order Modeling of 3D Rayleigh-Benard Turbulent Convection

    Science.gov (United States)

    Hassanzadeh, Pedram; Grover, Piyush; Nabi, Saleh

    2017-11-01

    Accurate Reduced-Order Models (ROMs) of turbulent geophysical flows have broad applications in science and engineering; for example, to study the climate system or to perform real-time flow control/optimization in energy systems. Here we focus on 3D Rayleigh-Benard turbulent convection at the Rayleigh number of 10^6 as a prototype for turbulent geophysical flows, which are dominantly buoyancy driven. The purpose of the study is to evaluate and improve the performance of different model reduction techniques using this setting. One-dimensional ROMs for horizontally averaged temperature are calculated using several methods. Specifically, the Linear Response Function (LRF) of the system is calculated from a large DNS dataset using Dynamic Mode Decomposition (DMD) and Fluctuation-Dissipation Theorem (FDT). The LRF is also calculated using the Green's function method of Hassanzadeh and Kuang (2016, J. Atmos. Sci.), which is based on using numerous forced DNS runs. The performance of these LRFs in estimating the system's response to weak external forcings or controlling the time-mean flow are compared and contrasted. The spectral properties of the LRFs and the scaling of the accuracy with the length of the dataset (for the data-driven methods) are also discussed.
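
    A compact sketch of exact DMD from snapshot pairs (X, Y), one of the routes to the linear response function mentioned above:

    ```python
    import numpy as np

    def dmd(X, Y, r):
        """Exact DMD: fit a linear operator A with Y ~ A X from snapshot pairs,
        truncated to rank r; eigenpairs give modes and growth rates."""
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        Atilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)   # reduced operator
        evals, W = np.linalg.eig(Atilde)
        modes = Y @ Vh.T @ np.diag(1.0 / s) @ W      # exact DMD modes
        return evals, modes
    ```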

  14. Full Field and Anomaly Initialisation using a low order climate model: a comparison, and proposals for advanced formulations

    Science.gov (United States)

    Weber, Robin; Carrassi, Alberto; Guemas, Virginie; Doblas-Reyes, Francisco; Volpi, Danila

    2014-05-01

    Full Field (FFI) and Anomaly Initialisation (AI) are two schemes used to initialise seasonal-to-decadal (s2d) predictions. FFI initialises the model on the best estimate of the actual climate state and minimises the initial error. However, due to inevitable model deficiencies, the trajectories drift away from the observations towards the model's own attractor, inducing a bias in the forecast. AI has been devised to tackle the impact of drift through the addition of this bias onto the observations, in the hope of gaining an initial state closer to the model attractor. Its goal is to forecast climate anomalies. The large variety of experimental setups, global coupled models, and observational networks adopted world-wide have led to varying results with regard to the relative performance of AI and FFI. Our research is motivated first by a comparison of these two initialisation approaches under varying circumstances of observational errors, observational distributions, and model errors. We also propose and compare two advanced schemes for s2d prediction. Least Square Initialisation (LSI) intends to propagate observational information from partially initialised systems to the whole model domain, based on standard practices in data assimilation and using the covariance of the model anomalies. Exploring the Parameters Uncertainty (EPU) is an online drift correction technique applied during the forecast run after initialisation. It is designed to estimate, and subtract, the bias in the forecast related to parametric error. Experiments are carried out using an idealized coupled dynamics in order to facilitate better control and robust statistical inference. Results show that an improvement of FFI will necessitate refinements in the observations, whereas improvements in AI are subject to model advances. A successful approximation of the model attractor using AI is guaranteed only when the differences between model and nature probability distribution functions (PDFs) are

  15. Semivariogram models for estimating fig fly population density throughout the year

    Directory of Open Access Journals (Sweden)

    Mauricio Paulo Batistella Pasini

    2014-07-01

    The objective of this work was to select semivariogram models to estimate the population density of the fig fly (Zaprionus indianus; Diptera: Drosophilidae) throughout the year, using ordinary kriging. Nineteen monitoring sites were demarcated in an area of 8,200 m², cropped with six fruit tree species: persimmon, citrus, fig, guava, apple, and peach. During a 24-month period, 106 weekly evaluations were done at these sites. The average number of adult fig flies captured weekly per trap, during each month, was subjected to the circular, spherical, pentaspherical, exponential, Gaussian, rational quadratic, hole effect, K-Bessel, J-Bessel, and stable semivariogram models, using ordinary kriging interpolation. The models with the best fit were selected by cross-validation. Each data set (month) has a particular spatial dependence structure, which makes it necessary to define specific semivariogram models in order to enhance the fit to the experimental semivariogram. Therefore, it was not possible to determine a standard semivariogram model; instead, six theoretical models were selected: circular, Gaussian, hole effect, K-Bessel, J-Bessel, and stable.
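
    As an example of one candidate model, a sketch fitting a spherical semivariogram to experimental semivariances with scipy (the lag distances and semivariances are hypothetical values, not the study's data):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def spherical(h, nugget, sill, a):
        """Spherical semivariogram gamma(h) with range a."""
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a)**3)
        return np.where(h < a, g, sill)

    h = np.array([5.0, 10.0, 20.0, 30.0, 40.0])        # lag distances (m)
    gamma = np.array([0.2, 0.45, 0.8, 0.95, 1.0])      # experimental semivariances
    params, _ = curve_fit(spherical, h, gamma, p0=[0.1, 1.0, 30.0])
    # cross-validation would then compare this fit against the other models
    ```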

  16. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    Science.gov (United States)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
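
    A hedged sketch of the general GPR workflow with scikit-learn defaults; the paper's modified covariance function and similarity measure are not reproduced, and the four charging-curve features here are synthetic stand-ins:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X = rng.random((50, 4))                              # four charging-curve features
    y = 1.0 - 0.3 * X[:, 0] + 0.02 * rng.standard_normal(50)   # measured SOH

    gpr = GaussianProcessRegressor(
        kernel=RBF(length_scale=np.ones(4)) + WhiteKernel(),
        normalize_y=True,
    ).fit(X, y)
    soh_mean, soh_std = gpr.predict(X[:5], return_std=True)    # estimate + uncertainty
    ```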

  17. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  18. Using satellite-based rainfall estimates for streamflow modelling: Bagmati Basin

    Science.gov (United States)

    Shrestha, M.S.; Artan, Guleid A.; Bajracharya, S.R.; Sharma, R. R.

    2008-01-01

    In this study, we have described a hydrologic modelling system that uses satellite-based rainfall estimates and weather forecast data for the Bagmati River Basin of Nepal. The hydrologic model described is the US Geological Survey (USGS) Geospatial Stream Flow Model (GeoSFM). The GeoSFM is a spatially semidistributed, physically based hydrologic model. We have used the GeoSFM to estimate the streamflow of the Bagmati Basin at Pandhera Dovan hydrometric station. To determine the hydrologic connectivity, we have used the USGS Hydro1k DEM dataset. The model was forced by daily estimates of rainfall and evapotranspiration derived from weather model data. The rainfall estimates used for the modelling are those produced by the National Oceanic and Atmospheric Administration Climate Prediction Centre and observed at ground rain gauge stations. The model parameters were estimated from globally available soil and land cover datasets – the Digital Soil Map of the World by FAO and the USGS Global Land Cover dataset. The model predicted the daily streamflow at Pandhera Dovan gauging station. The comparison of the simulated and observed flows at Pandhera Dovan showed that the GeoSFM model performed well in simulating the flows of the Bagmati Basin.

  19. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is therefore crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the metastable features of the observed process during posterior inference. All algorithms are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.

  20. A MATHEMATICAL MODELLING APPROACH TO ONE-DAY CRICKET BATTING ORDERS

    Directory of Open Access Journals (Sweden)

    Matthews Ovens

    2006-12-01

    While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order in One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non-identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties, and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One-Day cricket using available real-world data for current players.

  1. Random balance designs for the estimation of first order global sensitivity indices

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola, S. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy); Mara, T.A. [Laboratory of Industrial Engineering, University of Reunion Island, BP 7151, 15 avenue Rene Cassin, 97 715 Saint-Denis (France)

    2006-06-15

    We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems, and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and reduce significantly the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by OECD/NEA.

  2. Radiative transfer model for estimation of global solar radiation; Modelo de transferencia radiativa para la estimacion de la radiacion solar global

    Energy Technology Data Exchange (ETDEWEB)

    Pettazzi, A.; Sabon, C. S.; Souto, G. J. A.

    2004-07-01

    In this work, the performance of a radiative transfer model in estimating annual global solar radiation has been evaluated over different locations in Galicia, Spain, during clear-sky periods. Owing to its quantitative significance, special attention has been paid to the influence of visibility on global radiation. By comparing estimated and measured global solar radiation throughout 2002, a typical annual visibility series was obtained for each location. These visibility values were analysed to identify patterns and typical values to be used for estimating global solar radiation in a different year. Validation was carried out for 2003, with the annual estimate differing from the measured value by less than 10%. (Author)

  3. The problem of multicollinearity in horizontal solar radiation estimation models and a new model for Turkey

    International Nuclear Information System (INIS)

    Demirhan, Haydar

    2014-01-01

    Highlights: • Impacts of multicollinearity on solar radiation estimation models are discussed. • Accuracy of existing empirical models for Turkey is evaluated. • A new non-linear model for the estimation of average daily horizontal global solar radiation is proposed. • Estimation and prediction performance of the proposed and existing models are compared. - Abstract: Due to the considerable decrease in energy resources and increasing energy demand, solar energy is an appealing field of investment and research. There are various modelling strategies and particular models for estimating the amount of solar radiation reaching a particular point on Earth. In this article, global solar radiation estimation models are considered. To emphasize the severity of the multicollinearity problem in solar radiation estimation models, some of the models developed for Turkey are revisited. It is observed that these models have been identified as accurate under certain multicollinearity structures, and when the multicollinearity is eliminated, the accuracy of these models is controversial. Thus, a reliable model that does not suffer from multicollinearity and gives precise estimates of global solar radiation for the whole region of Turkey is necessary. A new nonlinear model for the estimation of average daily horizontal solar radiation is proposed making use of the genetic programming technique. There is no multicollinearity problem in the new model, and its estimation accuracy is better than that of the revisited models in terms of numerous statistical performance measures. According to the proposed model, temperature, precipitation, altitude, longitude, and monthly average daily extraterrestrial horizontal solar radiation have a significant effect on the average daily global horizontal solar radiation. Relative humidity and soil temperature are not included in the model due to their high correlation with precipitation and temperature, respectively. While altitude has
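
    The multicollinearity the highlights refer to can be diagnosed with variance inflation factors; a self-contained sketch:

    ```python
    import numpy as np

    def vif(X):
        """Variance inflation factors: VIF_j = 1 / (1 - R_j^2), where R_j^2 is
        the R-squared of regressing predictor j on the remaining predictors;
        values much larger than 10 flag harmful multicollinearity."""
        X = (X - X.mean(axis=0)) / X.std(axis=0)
        out = []
        for j in range(X.shape[1]):
            others = np.delete(X, j, axis=1)
            beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ beta
            r2 = 1.0 - resid @ resid / (X[:, j] @ X[:, j])
            out.append(1.0 / (1.0 - r2))
        return np.array(out)
    ```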

  4. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  6. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order models...

  7. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

    To estimate prevalence ratios using a log-binomial model with or without continuous covariates, prevalence ratios for individuals' attitudes towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitudes towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seems to measure the association better than the odds ratio when the prevalence is high. SAS programs are provided to calculate prevalence ratios with or without continuous covariates in log-binomial regression analysis.
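
    A sketch of the same kind of log-binomial fit in Python rather than SAS, on hypothetical data; as the abstract notes, the log link may fail to converge, which is what the COPY method works around:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    smoke = rng.integers(0, 2, 500)              # hypothetical smoking indicator
    age = rng.uniform(20, 70, 500)               # hypothetical continuous covariate
    y = (rng.random(500) < 0.2).astype(int)      # hypothetical binary attitude
    X = sm.add_constant(np.column_stack([smoke, age]))

    # A log link on the binomial mean makes exp(coefficient) a prevalence ratio
    # rather than an odds ratio; convergence can fail, cf. the COPY method.
    fit = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()
    print(np.exp(fit.params[1]))                 # prevalence ratio for smoking
    ```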

  8. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In this paper, the uncertainty analysis of component reliability models for independent failures is presented, together with the current approach to parameter estimation of component reliability models in the NPP Krsko PSA. Mathematical approaches for different types of uncertainty analysis are introduced and applied in accordance with predefined requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical evaluation of the posterior, which can be approximated by an appropriate probability distribution (here, the lognormal), proved to be the most suitable uncertainty analysis. (author)

  9. The independent loss model with ordered insertions for the evolution of CRISPR spacers.

    Science.gov (United States)

    Baumdicker, F; Huebner, A M I; Pfaffelhuber, P

    2018-02-01

    Today, the CRISPR (clustered regularly interspaced short palindromic repeats) region within bacterial and archaeal genomes is known to encode an adaptive immune system. We rely on previous results on the evolution of CRISPR arrays, which led to the ordered independent loss model introduced by Kupczok and Bollback (2013). Focusing on the spacers (between the repeats), new elements enter a CRISPR array at rate θ at the leader end of the array, while each spacer present is lost independently at rate ρ along the phylogeny relating the sample. Within this model, we compute the distribution of distances of spacers which are present in all arrays in a sample of size n. We use these results to estimate the loss rate ρ from spacer array data for n=2 and n=3. Copyright © 2017 Elsevier Inc. All rights reserved.
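
    A small Gillespie-style simulation of the spacer dynamics along a single lineage (the rates theta, rho and horizon T are arbitrary illustrative inputs):

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    def simulate_array(theta, rho, T):
        """One lineage: spacers enter at rate theta at the leader end,
        each present spacer is lost independently at rate rho."""
        t, arr, next_id = 0.0, [], 0
        while True:
            rate = theta + rho * len(arr)
            t += rng.exponential(1.0 / rate)
            if t > T:
                return arr
            if rng.random() < theta / rate:
                arr.insert(0, next_id)            # insertion at the leader end
                next_id += 1
            else:
                arr.pop(rng.integers(len(arr)))   # uniform independent loss
    ```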

  10. The second-order decomposition model of nonlinear irregular waves

    DEFF Research Database (Denmark)

    Yang, Zhi Wen; Bingham, Harry B.; Li, Jin Xuan

    2013-01-01

    into the first- and the second-order super-harmonic as well as the second-order sub-harmonic components by transferring them into an identical Fourier frequency-space and using a Newton-Raphson iteration method. In order to evaluate the present model, a variety of monochromatic waves and the second...

  11. Modeling of magnetic fields on a cylindrical surface and associated parameter estimation for development of a size sensor

    International Nuclear Information System (INIS)

    Zhang, Song; Rajamani, Rajesh

    2016-01-01

    This paper develops analytical sensing principles for estimation of circumferential size of a cylindrical surface using magnetic sensors. An electromagnet and magnetic sensors are used on a wearable band for measurement of leg size. In order to enable robust size estimation during rough real-world use of the wearable band, three estimation algorithms are developed based on models of the magnetic field variation over a cylindrical surface. The magnetic field models developed include those for a dipole and for a uniformly magnetized cylinder. The estimation algorithms used include a linear regression equation, an extended Kalman filter and an unscented Kalman filter. Experimental laboratory tests show that the size sensor in general performs accurately, yielding sub-millimeter estimation errors. The unscented Kalman filter yields the best performance that is robust to bias and misalignment errors. The size sensor developed herein can be used for monitoring swelling due to fluid accumulation in the lower leg and a number of other biomedical applications. (paper)

  12. Transport coefficient computation based on input/output reduced order models

    Science.gov (United States)

    Hurst, Joshua L.

    The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by an input/output relationship, and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular, Singular Value Decomposition (SVD) based methods are investigated. First, simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high-order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research, local linear approximations are rigorously derived, and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description, LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full-order LTI models can be well approximated by reduced-order LTI models. For the Lees-Edwards SLLOD-type model, nonlinear ODEs will be approximated by a Linear Time Varying (LTV) model about some nominal trajectory, and both balanced truncation and Proper Orthogonal Decomposition (POD) will be used to assess the plausibility of reduced-order models for this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator

  13. A parametric model order reduction technique for poroelastic finite element models.

    Science.gov (United States)

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced-order basis generated with the proper orthogonal decomposition instead of standard modal approaches, which has proven better suited to describing the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results from the literature; in the second, the reduced-order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  14. Basic problems and solution methods for two-dimensional continuous 3 × 3 order hidden Markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Tang, Gui-jin; Gan, Zong-liang; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

A novel model, referred to as the two-dimensional continuous 3 × 3 order hidden Markov model, is put forward to avoid the disadvantages of the classical hypotheses of the two-dimensional continuous hidden Markov model. This paper presents three equivalent definitions of the model, in which the state transition probability relies not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and in which the probability density of the observation relies not only on the current state but also on the immediate horizontal and vertical states. The paper focuses on the three basic problems of the model, namely probability density calculation, parameter estimation and path backtracking. Algorithms solving these problems are theoretically derived by exploiting the idea that the sequences of states on rows or columns of the model can be viewed as states of a one-dimensional continuous 1 × 2 order hidden Markov model. Simulation results further demonstrate the performance of the algorithms. Because there are more statistical characteristics in the structure of the proposed new model, it can describe some practical problems more accurately than the two-dimensional continuous hidden Markov model.
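
    To make the probability-density-calculation problem concrete, here is a minimal forward-pass sketch for an ordinary one-dimensional continuous HMM with Gaussian emissions, the building block that the row/column decomposition above reduces to. The parameters are illustrative and not from the paper; the 3 × 3 order recursion itself is more involved.

```python
# Forward algorithm for a 1-D continuous HMM with Gaussian emissions.
import numpy as np
from scipy.stats import norm

pi = np.array([0.6, 0.4])                 # initial state probabilities
A = np.array([[0.7, 0.3], [0.2, 0.8]])    # state transition matrix
mu, sigma = np.array([0.0, 3.0]), np.array([1.0, 1.5])

obs = np.array([0.2, 2.8, 3.1, -0.5])     # observation sequence

alpha = pi * norm.pdf(obs[0], mu, sigma)  # initialization
for o in obs[1:]:
    alpha = (alpha @ A) * norm.pdf(o, mu, sigma)  # induction step
print("observation likelihood:", alpha.sum())
```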

  15. Low-order aeroelastic models of wind turbines for controller design

    DEFF Research Database (Denmark)

    Sønderby, Ivan Bergquist

Wind turbine controllers are used to optimize the performance of wind turbines such as to reduce power variations and fatigue and extreme loads on wind turbine components. Accurate tuning and design of modern controllers must be done using low-order models that accurately capture the aeroelastic...... response of the wind turbine. The purpose of this thesis is to investigate the necessary model complexity required in aeroelastic models used for controller design and to analyze and propose methods to design low-order aeroelastic wind turbine models that are suited for model-based control design....... The thesis contains a characterization of the dynamics that influence the open-loop aeroelastic frequency response of a modern wind turbine, based on a high-order aeroelastic wind turbine model. One main finding is that the transfer function from collective pitch to generator speed is affected by two low...

  16. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  17. Fitting model-based psychometric functions to simultaneity and temporal-order judgment data: MATLAB and R routines.

    Science.gov (United States)

    Alcalá-Quintana, Rocío; García-Pérez, Miguel A

    2013-12-01

    Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.

  18. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem

    2017-01-01

    In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
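
    The core trick of the modulating functions method is to shift derivatives off the noisy data by integration by parts. The sketch below applies it to a simple ODE analogue (not the paper's hyperbolic PDE): estimating a constant source c in x' = -a*x + c without differentiating x. The setting, names, and values are illustrative assumptions.

```python
# Modulating-function estimation of a constant source term in x' = -a*x + c.
# phi vanishes at both endpoints, so integrating by parts eliminates x'.
import numpy as np

a, c_true, T, N = 2.0, 5.0, 1.0, 500
t = np.linspace(0.0, T, N)
x = (c_true / a) * (1 - np.exp(-a * t))          # exact trajectory, x(0) = 0

phi = np.sin(np.pi * t / T) ** 2                 # vanishes at t = 0 and t = T
dphi = (np.pi / T) * np.sin(2 * np.pi * t / T)   # its exact derivative

# Integration by parts gives:  -∫ phi' x dt = -a ∫ phi x dt + c ∫ phi dt
c_hat = (a * np.trapz(phi * x, t) - np.trapz(dphi * x, t)) / np.trapz(phi, t)
print("estimated source:", c_hat)                # ≈ 5.0
```

    A moving-horizon version, as in the paper, would repeat this computation over a sliding window to track a time-varying source online.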

  20. SOLVING FRACTIONAL-ORDER COMPETITIVE LOTKA-VOLTERRA MODEL BY NSFD SCHEMES

    Directory of Open Access Journals (Sweden)

    S.ZIBAEI

    2016-12-01

In this paper, we introduce fractional order into a competitive Lotka-Volterra prey-predator model. We discuss the stability analysis of this fractional system. The non-standard finite difference (NSFD) scheme is implemented to study the dynamic behaviors of the fractional-order Lotka-Volterra system. The proposed non-standard numerical scheme is compared with the forward Euler and fourth-order Runge-Kutta methods. Numerical results show that the NSFD approach is easy and accurate to implement when applied to the fractional-order Lotka-Volterra model.
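
    A minimal NSFD sketch for the classical (integer-order) Lotka-Volterra system follows, using Mickens-type rules: a denominator function phi(h) in place of h and nonlocal approximations of the products so that positivity is preserved for any step size. Parameters are illustrative, and the paper's fractional-order extension is not reproduced here.

```python
# Positivity-preserving NSFD scheme for x' = a*x - b*x*y, y' = -c*y + d*x*y.
import numpy as np

a, b, c, d = 1.0, 0.5, 0.75, 0.25
h = 0.05
phi = (np.exp(a * h) - 1.0) / a        # denominator function, phi(h) ~ h

x, y = 2.0, 1.0
for _ in range(2000):
    # loss terms evaluated implicitly keep x, y positive for any h > 0:
    # (x_{n+1}-x_n)/phi = a*x_n - b*x_{n+1}*y_n
    x_new = x * (1.0 + phi * a) / (1.0 + phi * b * y)
    # (y_{n+1}-y_n)/phi = -c*y_{n+1} + d*x_{n+1}*y_n
    y_new = y * (1.0 + phi * d * x_new) / (1.0 + phi * c)
    x, y = x_new, y_new
print(f"x = {x:.4f}, y = {y:.4f}")
```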

  1. Remaining lifetime modeling using State-of-Health estimation

    Science.gov (United States)

    Beganovic, Nejra; Söffker, Dirk

    2017-08-01

Technical systems and system components undergo gradual degradation over time. Continuous degradation of a system is reflected in decreased reliability and unavoidably leads to system failure. Therefore, continuous evaluation of State-of-Health (SoH) is essential to ensure at least the predefined lifetime of the system specified by the manufacturer, or better, to extend that lifetime. However, a precondition for lifetime extension is accurate estimation of SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, the related system degradation, and RUL. Two approaches with accompanying advantages and disadvantages are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneous adaptation of RUL models to the current SoH. The first approach requires a priori knowledge about aging processes in the system and accurate estimation of SoH. Estimation of SoH here is conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori known assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate estimation of SoH but includes a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as particular model parameters are defined by a multi-objective optimization procedure. The prediction accuracy of this model does not depend strongly on the estimated SoH. This model

  2. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  3. Improving spatio-temporal model estimation of satellite-derived PM2.5 concentrations: Implications for public health

    Science.gov (United States)

    Barik, M. G.; Al-Hamdan, M. Z.; Crosson, W. L.; Yang, C. A.; Coffield, S. R.

    2017-12-01

Satellite-derived environmental data, available in a range of spatio-temporal scales, are contributing to the growing use of health impact assessments of air pollution in the public health sector. Models developed using correlation of Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth (AOD) with ground measurements of fine particulate matter smaller than 2.5 microns (PM2.5) are widely applied to measure PM2.5 spatial and temporal variability. In the public health sector, associations of PM2.5 with respiratory and cardiovascular diseases are often investigated to quantify air quality impacts on these health concerns. In order to improve the predictability of PM2.5 estimation using correlation models, we have included meteorological variables, higher-resolution AOD products and instantaneous PM2.5 observations in the statistical estimation models. Our results showed that incorporation of high-resolution (1-km) Multi-Angle Implementation of Atmospheric Correction (MAIAC)-generated MODIS AOD, meteorological variables and instantaneous PM2.5 observations improved model performance in various parts of California (CA), USA, where single-variable AOD-based models showed relatively weak performance. In this study, we further asked whether these improved models would actually be more successful for exploring associations of public health outcomes with estimated PM2.5. To answer this question, we geospatially investigated the relationship of model-estimated PM2.5 with respiratory and cardiovascular diseases such as asthma, high blood pressure, coronary heart disease, heart attack and stroke in CA, using health data from the Centers for Disease Control and Prevention (CDC)'s Wide-ranging Online Data for Epidemiologic Research (WONDER) and the Behavioral Risk Factor Surveillance System (BRFSS). PM2.5 estimates from these improved models have the potential to improve our understanding of associations between public health concerns and air quality.
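
    The modeling idea — adding meteorological covariates to an AOD-only regression and checking the gain in fit — can be illustrated with a toy sketch. The data are synthetic and the variable names are illustrative only.

```python
# Compare an AOD-only PM2.5 regression against AOD + meteorology on toy data.
import numpy as np

rng = np.random.default_rng(1)
n = 500
aod = rng.uniform(0.05, 1.0, n)
temp = rng.uniform(5.0, 35.0, n)          # air temperature
rh = rng.uniform(20.0, 90.0, n)           # relative humidity
pm25 = 30 * aod + 0.4 * temp - 0.1 * rh + rng.normal(0, 2, n)

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

X1 = np.column_stack([np.ones(n), aod])            # AOD-only model
X2 = np.column_stack([np.ones(n), aod, temp, rh])  # AOD + meteorology
print("R2 AOD only:", r2(X1, pm25), "R2 with meteorology:", r2(X2, pm25))
```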

  4. A computer model of the biosphere, to estimate stochastic and non-stochastic effects of radionuclides on humans

    International Nuclear Information System (INIS)

    Laurens, J.M.

    1985-01-01

A computer code was written to model food chains in order to estimate the internal and external doses, for stochastic and non-stochastic effects, on humans (adults and infants). Results are given for 67 radionuclides, for unit concentration in water (1 Bq/L) and in atmosphere (1 Bq/m³)

  5. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

The performances of five estimators of linear models with autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite are quite similar to their properties when the sample size is infinite, although ...

  6. Censored rainfall modelling for estimation of fine-scale extremes

    Science.gov (United States)

    Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro

    2018-01-01

Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.

  7. Resource-estimation models and predicted discovery

    International Nuclear Information System (INIS)

    Hill, G.W.

    1982-01-01

Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion, that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty, excludes 'estimates' based solely on expert opinion. This is illustrated by development of error measures for several persuasive models of discovery and production of oil and gas in USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U₃O₈; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)

  8. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    Science.gov (United States)

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  9. A new geometric-based model to accurately estimate arm and leg inertial estimates.

    Science.gov (United States)

    Wicke, Jason; Dumas, Geneviève A

    2014-06-03

Segment estimates of mass, center of mass and moment of inertia are required input parameters to analyze the forces and moments acting across the joints. The objectives of this study were to propose a new geometric model for limb segments, to evaluate it against criterion values obtained from DXA, and to compare its performance to five other popular models. Twenty-five female and 24 male college students participated in the study. For the criterion measures, the participants underwent a whole body DXA scan, and estimates for segment mass, center of mass location, and moment of inertia (frontal plane) were directly computed from the DXA mass units. For the new model, the volume was determined from two standing frontal and sagittal photographs. Each segment was modeled as a stack of slices, the sections of which were ellipses if they did not adjoin another segment and sectioned ellipses if they did (e.g. upper arm and trunk). The lengths of the axes of the ellipses were obtained from the photographs. In addition, a sex-specific, non-uniform density function was developed for each segment. A series of anthropometric measurements were also taken by directly following the definitions provided for the different body segment models tested, and the same parameters determined for each model. Comparison of models showed that estimates from the new model were consistently closer to the DXA criterion than those from the other models, with an error of less than 5% for mass and moment of inertia and less than about 6% for center of mass location. Copyright © 2014. Published by Elsevier Ltd.
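
    The stacked-ellipse idea translates directly into a numerical integral. The sketch below computes segment mass, center of mass and moment of inertia from elliptical slices with a non-uniform density profile; all values are illustrative assumptions, not the authors' calibrated data.

```python
# Segment inertial properties from stacked elliptical slices.
import numpy as np

heights = np.linspace(0.0, 0.30, 31)      # slice boundaries along segment (m)
a = np.linspace(0.05, 0.03, 30)           # frontal-plane semi-axes (m)
b = np.linspace(0.045, 0.025, 30)         # sagittal-plane semi-axes (m)
rho = np.linspace(1050, 1100, 30)         # non-uniform density (kg/m^3)

dz = np.diff(heights)
slice_vol = np.pi * a * b * dz            # elliptical-cylinder slice volumes
mass = np.sum(rho * slice_vol)
z_mid = heights[:-1] + dz / 2
com = np.sum(rho * slice_vol * z_mid) / mass        # center of mass along axis
# frontal-plane moment of inertia about the COM (each slice's own-axis
# contribution is neglected in this simplified sketch)
I = np.sum(rho * slice_vol * (z_mid - com) ** 2)
print(f"mass={mass:.2f} kg, com={com:.3f} m, I={I:.4f} kg m^2")
```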

  10. Estimating the numerical diapycnal mixing in an eddy-permitting ocean model

    Science.gov (United States)

    Megann, Alex

    2018-01-01

    Constant-depth (or "z-coordinate") ocean models such as MOM4 and NEMO have become the de facto workhorse in climate applications, having attained a mature stage in their development and are well understood. A generic shortcoming of this model type, however, is a tendency for the advection scheme to produce unphysical numerical diapycnal mixing, which in some cases may exceed the explicitly parameterised mixing based on observed physical processes, and this is likely to have effects on the long-timescale evolution of the simulated climate system. Despite this, few quantitative estimates have been made of the typical magnitude of the effective diapycnal diffusivity due to numerical mixing in these models. GO5.0 is a recent ocean model configuration developed jointly by the UK Met Office and the National Oceanography Centre. It forms the ocean component of the GC2 climate model, and is closely related to the ocean component of the UKESM1 Earth System Model, the UK's contribution to the CMIP6 model intercomparison. GO5.0 uses version 3.4 of the NEMO model, on the ORCA025 global tripolar grid. An approach to quantifying the numerical diapycnal mixing in this model, based on the isopycnal watermass analysis of Lee et al. (2002), is described, and the estimates thereby obtained of the effective diapycnal diffusivity in GO5.0 are compared with the values of the explicit diffusivity used by the model. It is shown that the effective mixing in this model configuration is up to an order of magnitude higher than the explicit mixing in much of the ocean interior, implying that mixing in the model below the mixed layer is largely dominated by numerical mixing. This is likely to have adverse consequences for the representation of heat uptake in climate models intended for decadal climate projections, and in particular is highly relevant to the interpretation of the CMIP6 class of climate models, many of which use constant-depth ocean models at ¼° resolution

  11. Estimation of exhaust gas aerodynamic force on the variable geometry turbocharger actuator: 1D flow model approach

    International Nuclear Information System (INIS)

    Ahmed, Fayez Shakil; Laghrouche, Salah; Mehmood, Adeel; El Bagdouri, Mohammed

    2014-01-01

    Highlights: • Estimation of aerodynamic force on variable turbine geometry vanes and actuator. • Method based on exhaust gas flow modeling. • Simulation tool for integration of aerodynamic force in automotive simulation software. - Abstract: This paper provides a reliable tool for simulating the effects of exhaust gas flow through the variable turbine geometry section of a variable geometry turbocharger (VGT), on flow control mechanism. The main objective is to estimate the resistive aerodynamic force exerted by the flow upon the variable geometry vanes and the controlling actuator, in order to improve the control of vane angles. To achieve this, a 1D model of the exhaust flow is developed using Navier–Stokes equations. As the flow characteristics depend upon the volute geometry, impeller blade force and the existing viscous friction, the related source terms (losses) are also included in the model. In order to guarantee stability, an implicit numerical solver has been developed for the resolution of the Navier–Stokes problem. The resulting simulation tool has been validated through comparison with experimentally obtained values of turbine inlet pressure and the aerodynamic force as measured at the actuator shaft. The simulator shows good compliance with experimental results

  12. Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    Science.gov (United States)

    Fink, P. W.; Wilton, D. R.; Dobbins, J. A.

    2002-01-01

In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown in an EFIE formulation, applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ0, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required

  13. Ordering the Preference Hierarchies for Internal Finance, Bank Loans, Bond and Share Issues

    OpenAIRE

    Leo de Haan; Jeroen Hinloopen

    2002-01-01

We estimate the incremental financing decision for a sample of some 150 Dutch companies for the years 1984 through 1997, thereby distinguishing internal finance and three types of external finance: bank borrowing, bond issues and share issues. First, we estimate a multinomial logit model which confirms several predictions of both the static trade-off theory and the pecking-order theory as to the determinants of financing choices. Next, we use ordered probit models to determine which financing hiera...

  14. Low order physical models of vertical axis wind turbines

    Science.gov (United States)

    Craig, Anna; Dabiri, John; Koseff, Jeffrey

    2016-11-01

    In order to examine the ability of low-order physical models of vertical axis wind turbines to accurately reproduce key flow characteristics, experiments were conducted on rotating turbine models, rotating solid cylinders, and stationary porous flat plates (of both uniform and non-uniform porosities). From examination of the patterns of mean flow, the wake turbulence spectra, and several quantitative metrics, it was concluded that the rotating cylinders represent a reasonably accurate analog for the rotating turbines. In contrast, from examination of the patterns of mean flow, it was found that the porous flat plates represent only a limited analog for rotating turbines (for the parameters examined). These findings have implications for both laboratory experiments and numerical simulations, which have previously used analogous low order models in order to reduce experimental/computational costs. NSF GRF and SGF to A.C; ONR N000141211047 and the Gordon and Betty Moore Foundation Grant GBMF2645 to J.D.; and the Bob and Norma Street Environmental Fluid Mechanics Laboratory at Stanford University.

  15. Genomic breeding value estimation using nonparametric additive regression models

    Directory of Open Access Journals (Sweden)

    Solberg Trygve

    2009-01-01

Genomic selection refers to the use of genome-wide dense markers for breeding value estimation and subsequently for selection. The main challenge of genomic breeding value estimation is the estimation of many effects from a limited number of observations. Bayesian methods have been proposed to successfully cope with these challenges. As an alternative class of models, non- and semiparametric models were recently introduced. The present study investigated the ability of nonparametric additive regression models to predict genomic breeding values. The genotypes were modelled for each marker or pair of flanking markers (i.e. the predictors) separately. The nonparametric functions for the predictors were estimated simultaneously using additive model theory, applying a binomial kernel. The optimal degree of smoothing was determined by bootstrapping. A mutation-drift-balance simulation was carried out. The breeding values of the last generation (genotyped) were predicted using data from the next-to-last generation (genotyped and phenotyped). The results show moderate to high accuracies of the predicted breeding values. A determination of predictor-specific degrees of smoothing increased the accuracy.

  16. Estimation of landfill emission lifespan using process oriented modeling

    International Nuclear Information System (INIS)

    Ustohalova, Veronika; Ricken, Tim; Widmann, Renatus

    2006-01-01

Depending on the particular pollutants emitted, landfills may require service activities lasting from hundreds to thousands of years. Flexible tools allowing long-term predictions of emissions are of key importance for determining the nature and expected duration of maintenance and post-closure activities. A highly capable option is prediction based on models verified by experiments, which is fast, flexible and allows the comparison of various possible operation scenarios in order to find the most appropriate one. The intention of the presented work was to develop an experimentally verified multi-dimensional predictive model capable of quantifying and estimating the processes taking place in landfill sites, where a coupled process description allows precise time and space resolution. This constitutive two-dimensional model is based on the macromechanical theory of porous media (TPM) for a saturated thermo-elastic porous body. The model was used to simulate simultaneously occurring processes: organic phase transition, gas emissions, heat transport, and settlement behavior on a long time scale for municipal solid waste deposited in a landfill. The relationships between the properties (composition, pore structure) of a landfill and the conversion and multi-phase transport phenomena inside it were experimentally determined. In this paper, we present both the theoretical background of the model and the results of the simulations at one single point as well as in a vertical landfill cross-section.

  17. Reduced Order Modeling Methods for Turbomachinery Design

    Science.gov (United States)

    2009-03-01


  18. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of on-demand-type software defects, which directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
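
    As a minimal sketch of SRGM fitting under the NHPP assumption, the following fits the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to a set of failure times. The paper's Bayesian treatment is replaced here by plain maximum likelihood, and the failure times are synthetic.

```python
# Maximum-likelihood fit of a Goel-Okumoto NHPP software reliability model.
import numpy as np
from scipy.optimize import minimize

t_fail = np.array([5., 12., 20., 33., 45., 62., 80., 105., 140., 190.])
T = 200.0                                   # end of the observation window

def neg_loglik(theta):
    a, b = np.exp(theta)                    # enforce positivity
    lam = a * b * np.exp(-b * t_fail)       # NHPP intensity at failure times
    m_T = a * (1.0 - np.exp(-b * T))        # expected failures up to T
    # NHPP log-likelihood: sum(log lambda(t_i)) - m(T)
    return -(np.sum(np.log(lam)) - m_T)

res = minimize(neg_loglik, x0=np.log([10.0, 0.01]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
remaining = a_hat * np.exp(-b_hat * T)      # expected residual defects
print(f"a={a_hat:.1f}, b={b_hat:.4f}, expected remaining defects={remaining:.2f}")
```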

  19. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely, and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, by using the well-known cell transmission model (CTM), the urban freeway network is modeled as a distributed system. Secondly, based on this model, a decentralized observer is designed. With the help of a Lyapunov function and the S-procedure theory, the observer gains are computed by using the linear matrix inequality (LMI) technique. Thus, the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing's Second Ring Road, and experimental results demonstrate the effectiveness and applicability of the proposed approach.
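
    The CTM dynamics on which such an observer is built amount to a conservation update with demand/supply-limited interface flows. A minimal simulation sketch for one freeway stretch follows; all parameters are illustrative, and the observer design itself is not reproduced.

```python
# Cell transmission model (CTM) density update for a single freeway stretch.
import numpy as np

n_cells = 10
dx, dt = 0.5, 1 / 360              # cell length (km), time step (h); CFL ok
v_f, w = 100.0, 20.0               # free-flow and congestion wave speeds (km/h)
rho_jam, q_max = 150.0, 2000.0     # jam density (veh/km), capacity (veh/h)

rho = np.full(n_cells, 30.0)       # initial densities (veh/km)
inflow = 1500.0                    # upstream demand (veh/h)

for _ in range(3600):
    demand = np.minimum(v_f * rho, q_max)             # sending functions
    supply = np.minimum(w * (rho_jam - rho), q_max)   # receiving functions
    # interface flows: upstream demand limited by downstream supply
    q = np.minimum(np.append(inflow, demand[:-1]), supply)
    q_out = np.append(q[1:], demand[-1])              # free outflow at exit
    rho += (dt / dx) * (q - q_out)                    # conservation update
print("steady-state densities:", np.round(rho, 1))
```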

  20. Estimating Function Approaches for Spatial Point Processes

    Science.gov (United States)

    Deng, Chong

second-order intensity function of spatial point processes. However, the original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also merits in relaxing the constraint of the tuning parameter H. Third, we studied the quasi-likelihood-type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to a more general setup than the original quasi-likelihood method.

  1. Bayes estimation of the general hazard rate model

    International Nuclear Information System (INIS)

    Sarhan, A.

    1999-01-01

In reliability theory and life testing models, the lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where a, b, c are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type-II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes and regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b > 0, can be obtained as a special case by letting c = 2
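
    The survival quantities implied by this hazard follow directly from the standard definitions; they are stated here for reference (not taken from the paper):

```latex
% Cumulative hazard, survival and density for h(t) = a + b t^{c-1}, a,b,c > 0
H(t) = \int_0^t h(u)\,du = a t + \frac{b}{c}\,t^{c},
\qquad
S(t) = e^{-H(t)} = \exp\!\left(-a t - \frac{b}{c}\,t^{c}\right),
\qquad
f(t) = h(t)\,S(t).
```

    For c = 2 this reduces to the linearly increasing hazard case, with H(t) = at + bt²/2.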

  2. Development of a Greek solar map based on solar model estimations

    Science.gov (United States)

    Kambezidis, H. D.; Psiloglou, B. E.; Kavadias, K. A.; Paliatsos, A. G.; Bartzokas, A.

    2016-05-01

The realization of Renewable Energy Sources (RES) for power generation as the only environmentally friendly solution moved solar systems to the forefront of the energy market in the last decade. The capacity of solar power doubles almost every two years in many European countries, including Greece. This rise has brought the need for reliable predictions of meteorological data that can easily be utilized for proper RES site allocation. The absence of solar measurements has therefore raised the demand for deploying a suitable model in order to create a solar map. The generation of a solar map for Greece could provide solid foundations for predicting the energy production of a solar power plant installed in the area, by providing an estimate of the solar energy acquired at each longitude and latitude of the map. In the present work, the well-known Meteorological Radiation Model (MRM), a broadband solar radiation model, is employed. This model utilizes common meteorological data, such as air temperature, relative humidity, barometric pressure and sunshine duration, in order to calculate solar radiation for areas where such data are not available. Hourly values of the above meteorological parameters are acquired from 39 meteorological stations evenly dispersed around Greece; hourly values of solar radiation are calculated with MRM. Then, by using an integrated spatial interpolation method, a Greek solar energy map is generated, providing annual solar energy values all over Greece.

  3. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  4. Parameter Estimation in Stochastic Grey-Box Models

    DEFF Research Database (Denmark)

    Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay

    2004-01-01

    An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....

  5. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    Science.gov (United States)

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  6. Optimal inventory management and order book modeling

    KAUST Repository

    Baradel, Nicolas; Bouchard, Bruno; Evangelista, David; Mounjid, Othmane

    2018-01-01

    We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic

  7. Very high order lattice perturbation theory for Wilson loops

    International Nuclear Information System (INIS)

    Horsley, R.

    2010-10-01

We calculate perturbative Wilson loops of various sizes up to loop order n=20 at different lattice sizes for pure plaquette and tree-level improved Symanzik gauge theories using the technique of Numerical Stochastic Perturbation Theory. This allows us to investigate the behavior of the perturbative series at high orders. We observe differences in the behavior of perturbative coefficients as a function of the loop order. Up to n=20 we do not see evidence for the often assumed factorial growth of the coefficients. Based on the observed behavior we sum this series in a model with hypergeometric functions. Alternatively we estimate the series in boosted perturbation theory. Subtracting the estimated perturbative series for the average plaquette from the non-perturbative Monte Carlo result we estimate the gluon condensate. (orig.)

  8. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

A model reduction technique based on optimization theory is presented, where a possibly higher-order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying, possibly higher-order, system. Numerous examples, both linear and nonlinear models, are used to illustrate the theory. The Pareto Optimal controller is used as a reference controller.

  9. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

A regression model represents the relationship between independent variables and a dependent variable. When the dependent variable is categorical, the logistic regression model is used to calculate the odds. The logistic regression model for a dependent variable with ordered levels is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the values for a population based on a sample. The purpose of this research is the parameter estimation of the GWOLR model using the R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units used are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probabilities of the dengue fever patient count categories.
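
    A minimal sketch of the non-geographically-weighted building block — an ordinal logistic regression — is shown below; the GW extension would refit this model at each location with distance-based observation weights. The data are synthetic, and statsmodels' OrderedModel is used here in place of the authors' R routines.

```python
# Ordinal logistic regression on synthetic data with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)                     # e.g. a village-level covariate
latent = 1.2 * x + rng.logistic(size=n)    # latent-variable formulation
y = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 1, np.inf],
                     labels=["low", "medium", "high"], ordered=True))

model = OrderedModel(y, x[:, None], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)                       # slope and threshold estimates
```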

  10. The Ising model coupled to 2d orders

    Science.gov (United States)

    Glaser, Lisa

    2018-04-01

    In this article we make first steps in coupling matter to causal set theory in the path integral. We explore the case of the Ising model coupled to the 2d discrete Einstein Hilbert action, restricted to the 2d orders. We probe the phase diagram in terms of the Wick rotation parameter β and the Ising coupling j and find that the matter and the causal sets together give rise to an interesting phase structure. The couplings give rise to five different phases. The causal sets take on random or crystalline characteristics as described in Surya (2012 Class. Quantum Grav. 29 132001) and the Ising model can be correlated or uncorrelated on the random orders and correlated, uncorrelated or anti-correlated on the crystalline orders. We find that at least one new phase transition arises, in which the Ising spins push the causal set into the crystalline phase.

  11. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration was forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are obviously important and difficult problems remaining to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3-D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on normal ambient MT as well as CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
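
    The robust M-estimation idea can be illustrated on a stripped-down, real-valued transfer-function problem: estimate Z in E = Z·H + noise by iteratively reweighted least squares with Huber weights. Real MT/CSAMT processing works on complex multichannel spectra, so this is only the core mechanism, with illustrative data.

```python
# Robust (Huber) IRLS estimate of a single-channel transfer function Z.
import numpy as np

rng = np.random.default_rng(3)
n = 200
H = rng.standard_normal(n)
E = 2.5 * H + 0.1 * rng.standard_normal(n)
E[::20] += 5.0                                  # inject outliers

Z, k = np.sum(E * H) / np.sum(H * H), 1.345     # OLS start, Huber constant
for _ in range(20):
    r = E - Z * H
    s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
    w = np.minimum(1.0, k * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
    Z = np.sum(w * E * H) / np.sum(w * H * H)   # reweighted LS update
print("robust Z estimate:", Z)                  # ≈ 2.5 despite outliers
```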

  12. Global parameter estimation for thermodynamic models of transcriptional regulation.

    Science.gov (United States)

    Suleimenov, Yerzhan; Ay, Ahmet; Samee, Md Abul Hassan; Dresch, Jacqueline M; Sinha, Saurabh; Arnosti, David N

    2013-07-15

Deciphering the mechanisms involved in gene regulation holds the key to understanding the control of central biological processes, including human disease, population variation, and the evolution of morphological innovations. New experimental techniques including whole genome sequencing and transcriptome analysis have enabled comprehensive modeling approaches to study gene regulation. In many cases, it is useful to be able to assign biological significance to the inferred model parameters, but such interpretation should take into account features that affect these parameters, including model construction and sensitivity, the type of fitness calculation, and the effectiveness of parameter estimation. This last point is often neglected, as estimation methods are often selected for historical reasons or for computational ease. Here, we compare the performance of two parameter estimation techniques broadly representative of local and global approaches, namely, a quasi-Newton/Nelder-Mead simplex (QN/NMS) method and a covariance matrix adaptation-evolutionary strategy (CMA-ES) method. The estimation methods were applied to a set of thermodynamic models of gene transcription applied to regulatory elements active in the Drosophila embryo. Measuring overall fit, the global CMA-ES method performed significantly better than the local QN/NMS method on high quality data sets, but this difference was negligible on lower quality data sets with increased noise or on data sets simplified by stringent thresholding. Our results suggest that the choice of parameter estimation technique for evaluation of gene expression models depends on the quality of the data, the nature of the models and the aims of the modeling effort. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Combining Empirical and Stochastic Models for Extreme Floods Estimation

    Science.gov (United States)

    Zemzami, M.; Benaabidate, L.

    2013-12-01

Hydrological models can be defined as physical, mathematical or empirical. The latter class uses mathematical equations independent of the physical processes involved in the hydrological system. Linear regression and Gradex (Gradient of Extreme values) are classic examples of empirical models. However, conventional empirical models are still used as tools for hydrological analysis by probabilistic approaches. In many regions of the world, watersheds are not gauged. This is true even in developed countries, where the gauging network has continued to decline as a result of the lack of human and financial resources. Indeed, the obvious lack of data in these watersheds makes it impossible to apply some basic empirical models for daily forecasting. So we had to find a combination of rainfall-runoff models with which it would be possible to create our own data and use them to estimate the flow. The estimated design floods are a good illustration of the difficulties facing the hydrologist in constructing a standard empirical model in basins where hydrological information is rare. The construction of the climate-hydrological model, which is based on frequency analysis, was established to estimate the design flood in the Anseghmir catchments, Morocco. The choice of this complex model lies in its ability to be applied in watersheds where hydrological information is not sufficient. It was found that this method is a powerful tool for estimating the design flood of the watershed as well as other hydrological elements (runoff, water volumes...). The hydrographic characteristics and climatic parameters were used to estimate the runoff, water volumes and design flood for different return periods.

  14. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    Science.gov (United States)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.

  15. Generalized modeling of the fractional-order memcapacitor and its character analysis

    Science.gov (United States)

    Guo, Zhang; Si, Gangquan; Diao, Lijie; Jia, Lixin; Zhang, Yanbin

    2018-06-01

Memcapacitor is a new type of memory device generalized from the memristor. This paper proposes a generalized fractional-order memcapacitor model by introducing fractional calculus into the model. The generalized formulas are studied, and two fractional-order parameters α and β are introduced, where α mostly affects the fractional calculus value of the charge q within the generalized Ohm's law, and β generalizes the state equation, which simulates the physical mechanism of a memcapacitor, into the fractional sense. This model reduces to the conventional memcapacitor for α = 1, β = 0 and to the conventional memristor for α = 0, β = 1. Then the numerical analysis of the fractional-order memcapacitor is studied, and the characteristics and output behaviors of the fractional-order memcapacitor driven by a sinusoidal charge are derived. The analysis results show that there are four basic v-q and v-i curve patterns when the fractional orders α and β respectively equal 0 or 1; moreover, all v-q and v-i curves of the other fractional-order models are transition curves between these four basic patterns.
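
    Numerical work with fractional-order elements typically rests on a discretization of the fractional operator. The sketch below implements the Grünwald-Letnikov approximation of a fractional derivative on sampled data; it only illustrates the operator, not the memcapacitor model itself, and is verified against a known closed form.

```python
# Grünwald-Letnikov approximation of D^alpha f on a uniform grid.
import numpy as np

def gl_fractional_derivative(f, h, alpha):
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):                   # binomial-coefficient recursion
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for i in range(n):                      # convolution with the GL weights
        out[i] = np.dot(w[: i + 1], f[i::-1]) / h**alpha
    return out

t = np.linspace(0.0, 2.0, 400)
h = t[1] - t[0]
d_half = gl_fractional_derivative(t, h, 0.5)   # D^0.5 of f(t) = t
# exact result: Gamma(2)/Gamma(1.5) * t^0.5 = 2*sqrt(t/pi)
print(d_half[-1], 2.0 * np.sqrt(t[-1] / np.pi))
```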

  16. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  17. Surface Runoff Estimation Using SMOS Observations, Rain-gauge Measurements and Satellite Precipitation Estimations. Comparison with Model Predictions

    Science.gov (United States)

    Garcia Leal, Julio A.; Lopez-Baeza, Ernesto; Khodayar, Samiro; Estrela, Teodoro; Fidalgo, Arancha; Gabaldo, Onofre; Kuligowski, Robert; Herrera, Eddy

    Surface runoff is defined as the amount of water that originates from precipitation, does not infiltrate due to soil saturation and therefore circulates over the surface. A good estimation of runoff is useful for the design of drainage systems, structures for flood control and soil utilisation. For runoff estimation there exist different methods such as (i) the rational method, (ii) the isochrone method, (iii) the triangular hydrograph, (iv) the non-dimensional SCS hydrograph, (v) the Temez hydrograph, (vi) the kinematic wave model, represented by the dynamic and kinematic equations for a uniform precipitation regime, and (vii) the SCS-CN (Soil Conservation Service Curve Number) model. This work presents a way of estimating precipitation runoff through the SCS-CN model, using SMOS (Soil Moisture and Ocean Salinity) mission soil moisture observations and rain-gauge measurements, as well as satellite precipitation estimations. The area of application is the Jucar River Basin Authority area, where one of the objectives is to develop the SCS-CN model in a spatially distributed way. The results were compared to simulations performed with the 7-km COSMO-CLM (COnsortium for Small-scale MOdelling, COSMO model in CLimate Mode) model. The use of SMOS soil moisture as input to the COSMO-CLM model will certainly improve model simulations.
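
    For reference, the SCS-CN relation at the heart of the abstract fits in a few lines. A sketch under the textbook assumptions (retention S in mm from the curve number, initial abstraction Ia = 0.2S); in the spatial application described above, the curve number would be conditioned on SMOS soil moisture rather than fixed:

```python
def scs_cn_runoff(p_mm, cn):
    """Direct runoff Q (mm) from event rainfall P (mm) via SCS-CN."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = 0.2 * s               # initial abstraction (standard assumption)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(60.0, 75))  # e.g. a 60 mm storm with CN = 75
```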

  18. Input-output model for MACCS nuclear accident impacts estimation

    Energy Technology Data Exchange (ETDEWEB)

    Outkin, Alexander V. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bixler, Nathan E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vargas, Vanessa N [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-27

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
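
    The Input-Output accounting underlying tools like REAcct is the standard Leontief calculation, in which indirect losses propagate through the inter-industry coefficient matrix. A minimal sketch with an invented two-sector economy (REAcct's actual sectoring and data are not given in the abstract):

```python
import numpy as np

# Illustrative Leontief input-output loss calculation; the technical
# coefficients and demand drop below are made-up numbers.
A = np.array([[0.20, 0.10],
              [0.30, 0.25]])           # technical coefficients matrix
demand_drop = np.array([50.0, 10.0])   # lost final demand by sector ($M)

L = np.linalg.inv(np.eye(2) - A)       # Leontief inverse (I - A)^-1
output_loss = L @ demand_drop          # direct + indirect output loss
print(output_loss)
```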

  19. The Meaning of Higher-Order Factors in Reflective-Measurement Models

    Science.gov (United States)

    Eid, Michael; Koch, Tobias

    2014-01-01

    Higher-order factor analysis is a widely used approach for analyzing the structure of a multidimensional test. Whenever first-order factors are correlated researchers are tempted to apply a higher-order factor model. But is this reasonable? What do the higher-order factors measure? What is their meaning? Willoughby, Holochwost, Blanton, and Blair…

  20. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  1. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
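
    The essence of the single-model idea, as the abstract describes it, is to extend each segment polynomial over the whole height domain and estimate all parameters in one least-squares step. A hedged numerical sketch (segment boundaries, polynomial degree, and data are invented; run-to-run effects are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.linspace(0.0, 300.0, 120)                  # liquid heights (cm)
v = 5.0 * h + 0.01 * h ** 2 + rng.normal(0.0, 2.0, h.size)  # volumes (L)
knots = [0.0, 100.0, 200.0, 300.0]                # segment boundaries

cols = []
for lo, hi in zip(knots[:-1], knots[1:]):
    s = np.clip(h, lo, hi) - lo     # segment-local height, extended as a
    cols += [s, s ** 2]             # constant beyond the segment's range
X = np.column_stack([np.ones_like(h)] + cols)     # one joint design matrix
beta, *_ = np.linalg.lstsq(X, v, rcond=None)      # all segments in one fit
```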

  2. Model estimation of claim risk and premium for motor vehicle insurance by using Bayesian method

    Science.gov (United States)

    Sukono; Riaman; Lesmana, E.; Wulandari, R.; Napitupulu, H.; Supian, S.

    2018-01-01

    Risk models need to be estimated by the insurance company in order to predict the magnitude of claims and determine the premiums charged to the insured. This is intended to prevent losses in the future. In this paper, we discuss the estimation of claim risk models and motor vehicle insurance premiums using a Bayesian approach. It is assumed that the claim frequency follows a Poisson distribution, while the claim amounts are assumed to follow a Gamma distribution. The parameters of the frequency and claim-amount distributions are estimated using Bayesian methods. Furthermore, the estimated distributions of claim frequency and claim amount are used to estimate the aggregate risk model as well as its mean and variance. The estimated mean and variance of the aggregate risk are then used to predict the premium to be charged to the insured. Based on the analysis results, the claim frequency follows a Poisson distribution with parameter λ = 5.827, while the claim amounts follow a Gamma distribution with parameters p = 7.922 and θ = 1.414. The resulting mean and variance of the aggregate claims are IDR 32,667,489.88 and IDR 38,453,900,000,000.00, respectively. The predicted pure premium to be charged to the insured is IDR 2,722,290.82. The predicted aggregate claims and premiums can be used as a reference for the insurance company's decision-making in the management of reserves and premiums for motor vehicle insurance.
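
    The aggregate model described is a compound Poisson-Gamma: with claim count N ~ Poisson(λ) and i.i.d. claim amounts X ~ Gamma(p, θ), E[S] = λpθ and Var[S] = λpθ²(1 + p). A sketch using the abstract's fitted parameters (the monetary scaling of the severity and the premium loading factor are assumptions not stated in the abstract):

```python
import math

lam, p, theta = 5.827, 7.922, 1.414   # fitted Poisson / Gamma parameters

mean_claim = p * theta                # E[X] of a single claim
ex2 = p * theta ** 2 * (1 + p)        # E[X^2] = Var[X] + E[X]^2
mean_s = lam * mean_claim             # E[S] of the aggregate loss
var_s = lam * ex2                     # Var[S] (compound Poisson identity)

# One common premium principle: expected value plus a safety loading
# proportional to the standard deviation (loading factor assumed).
premium = mean_s + 0.1 * math.sqrt(var_s)
```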

  3. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  4. Comparing higher order models for the EORTC QLQ-C30

    DEFF Research Database (Denmark)

    Gundy, Chad M; Fayers, Peter M; Grønvold, Mogens

    2012-01-01

    To investigate the statistical fit of alternative higher order models for summarizing the health-related quality of life profile generated by the EORTC QLQ-C30 questionnaire...

  5. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer

    2015-12-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  6. Modulating Functions Based Algorithm for the Estimation of the Coefficients and Differentiation Order for a Space-Fractional Advection-Dispersion Equation

    KAUST Repository

    Aldoghaither, Abeer; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem

    2015-01-01

    In this paper, a new method, based on the so-called modulating functions, is proposed to estimate average velocity, dispersion coefficient, and differentiation order in a space-fractional advection-dispersion equation, where the average velocity and the dispersion coefficient are space-varying. First, the average velocity and the dispersion coefficient are estimated by applying the modulating functions method, where the problem is transformed into a linear system of algebraic equations. Then, the modulating functions method combined with a Newton's iteration algorithm is applied to estimate the coefficients and the differentiation order simultaneously. The local convergence of the proposed method is proved. Numerical results are presented with noisy measurements to show the effectiveness and robustness of the proposed method. It is worth mentioning that this method can be extended to general fractional partial differential equations.

  7. Evaluating LMA and CLAMP: Using information criteria to choose a model for estimating elevation

    Science.gov (United States)

    Miller, I.; Green, W.; Zaitchik, B.; Brandon, M.; Hickey, L.

    2005-12-01

    The morphology of leaves and the composition of the flora respond strongly to the moisture and temperature of their environment. Elevation and latitude correlate, to first order, with these atmospheric parameters. An obvious modern example of this relationship between leaf morphology and environment is the tree line, where boreal forests give way to arctic (high latitude) or alpine (high elevation) tundra. Several quantitative methods, all of which rely on uniformitarianism, have been developed to estimate paleoelevation using fossil leaf morphology. These include (1) univariate leaf-margin analysis (LMA), which estimates mean annual temperature (MAT) by the positive linear correlation between MAT and P, the proportion of entire- (smooth-) margined to non-entire (toothed-) margined woody dicot angiosperm leaves within a flora, and (2) the Climate Leaf Analysis Multivariate Program (CLAMP), which uses Canonical Correspondence Analysis (CCA) to estimate MAT, moist enthalpy, and other atmospheric parameters from 31 explanatory leaf characters of woody dicot angiosperms. Given a difference in leaf-estimated MAT or moist enthalpy between contemporaneous, synlatitudinal fossil floras (one at sea level, the other at an unknown paleoelevation), the paleoelevation may be estimated. These methods have been widely applied to orogenic settings and are concentrated particularly in the western US. We introduce the use of information criteria to compare different models for estimating elevation and show that the additional complexity of the CLAMP analytical methodology does not necessarily improve on the elevation estimates produced by simpler regression models. In addition, we discuss the signal-to-noise ratio in the data, give confidence intervals for the estimated elevations, and address the problem of spatial autocorrelation and irregular sampling in the data.
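
    The information-criterion comparison proposed here amounts to penalizing each model's residual fit by its parameter count. A toy sketch on synthetic data (the LMA-style coefficients and the stand-in "CLAMP" predictors are invented; AIC is the Gaussian least-squares form up to an additive constant):

```python
import numpy as np

rng = np.random.default_rng(0)

def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k  # Gaussian AIC up to a constant

n = 40
P = rng.uniform(0.2, 0.9, n)                 # proportion entire-margined leaves
mat = 1.14 + 30.6 * P + rng.normal(0, 2, n)  # synthetic MAT signal

X1 = np.column_stack([np.ones(n), P])        # univariate LMA-style model
rss1 = np.linalg.lstsq(X1, mat, rcond=None)[1][0]

X2 = np.column_stack([X1, rng.normal(size=(n, 5))])  # extra leaf traits
rss2 = np.linalg.lstsq(X2, mat, rcond=None)[1][0]

print(aic(rss1, n, 2), aic(rss2, n, 7))      # smaller AIC = preferred model
```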

  8. Estimation of global daily irradiation in complex topography zones using digital elevation models and meteosat images: Comparison of the results

    International Nuclear Information System (INIS)

    Martinez-Durban, M.; Zarzalejo, L.F.; Bosch, J.L.; Rosiek, S.; Polo, J.; Batlles, F.J.

    2009-01-01

    The knowledge of the solar irradiation in a certain place is fundamental for the suitable location of solar systems, both thermal and photovoltaic. On the local scale, the topography is the most important modulating factor of the solar irradiation on the surface. In this work the global daily irradiation is estimated concerning various sky conditions, in zones of complex topography. In order to estimate the global daily irradiation we use a methodology based on a Digital Terrain Model (DTM), on one hand making use of pyranometer measurements and on the other hand utilizing satellite images. We underline that DTM application employing pyranometer measurements produces better results than estimation using satellite images, though accuracy of the same order is obtained in both cases for Root Mean Square Error (RMSE) and Mean Bias Error (MBE).

  9. Estimation of global daily irradiation in complex topography zones using digital elevation models and meteosat images: Comparison of the results

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Durban, M. [Dpto. de Lenguajes y Computacion, Universidad de Almeria, 04120 Almeria (Spain); Zarzalejo, L.F.; Polo, J. [Dpto. de Energia, CIEMAT, 28040 Madrid (Spain); Bosch, J.L.; Rosiek, S.; Batlles, F.J. [Dpto. Fisica Aplicada, Universidad de Almeria, 04120 Almeria (Spain)

    2009-09-15

    The knowledge of the solar irradiation in a certain place is fundamental for the suitable location of solar systems, both thermal and photovoltaic. On the local scale, the topography is the most important modulating factor of the solar irradiation on the surface. In this work the global daily irradiation is estimated concerning various sky conditions, in zones of complex topography. In order to estimate the global daily irradiation we use a methodology based on a Digital Terrain Model (DTM), on one hand making use of pyranometer measurements and on the other hand utilizing satellite images. We underline that DTM application employing pyranometer measurements produces better results than estimation using satellite images, though accuracy of the same order is obtained in both cases for Root Mean Square Error (RMSE) and Mean Bias Error (MBE). (author)

  10. Differences among estimates of critical power and anaerobic work capacity derived from five mathematical models and the three-minute all-out test.

    Science.gov (United States)

    Bergstrom, Haley C; Housh, Terry J; Zuniga, Jorge M; Traylor, Daniel A; Lewis, Robert W; Camic, Clayton L; Schmidt, Richard J; Johnson, Glen O

    2014-03-01

    Estimates of critical power (CP) and anaerobic work capacity (AWC) from the power output vs. time relationship have been derived from various mathematical models. The purpose of this study was to examine estimates of CP and AWC from the multiple work bout, 2- and 3-parameter models, and those from the 3-minute all-out CP (CP3min) test. Nine college-aged subjects performed a maximal incremental test to determine the peak oxygen consumption rate and the gas exchange threshold. On separate days, each subject completed 4 randomly ordered constant power output rides to exhaustion to estimate CP and AWC from 5 regression models (2 linear, 2 nonlinear, and 1 exponential). During the final visit, CP and AWC were estimated from the CP3min test. The nonlinear 3-parameter (Nonlinear-3) model produced the lowest estimate of CP. The exponential (EXP) model and the CP3min test were not statistically different and produced the highest estimates of CP. Critical power estimated from the Nonlinear-3 model was 14% less than those from the EXP model and the CP3min test and 4-6% less than those from the linear models. Furthermore, the Nonlinear-3 and nonlinear 2-parameter (Nonlinear-2) models produced significantly greater estimates of AWC than did the linear models and CP3min. The current findings suggested that the Nonlinear-3 model may provide estimates of CP and AWC that more accurately reflect the asymptote of the power output vs. time relationship, the demarcation of the heavy and severe exercise intensity domains, and anaerobic capabilities than will the linear models and CP3min test.
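
    As a concrete example of one of the five formulations, the 2-parameter linear work-time model treats total work W = P·t as linear in time, W = AWC + CP·t, so CP is the slope and AWC the intercept. A sketch fitting it to invented exhaustion trials:

```python
import numpy as np

# Hypothetical constant-power rides to exhaustion: power (W), time (s).
power = np.array([350.0, 320.0, 300.0, 280.0])
t_lim = np.array([150.0, 240.0, 330.0, 520.0])

work = power * t_lim                  # total work per trial (J)
cp, awc = np.polyfit(t_lim, work, 1)  # W = CP*t + AWC (slope, intercept)
print(f"CP ~ {cp:.0f} W, AWC ~ {awc / 1000:.1f} kJ")
```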

  11. Development of collision dynamics models to estimate the results of full-scale rail vehicle impact tests : Tufts University Master's Thesis

    Science.gov (United States)

    2000-11-01

    In an effort to study occupant survivability in train collisions, analyses and tests were conducted to understand and improve the crashworthiness of rail vehicles. A collision dynamics model was developed in order to estimate the rigid body motion of...

  12. [Evaluation of estimation of prevalence ratio using bayesian log-binomial regression model].

    Science.gov (United States)

    Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L

    2017-03-10

    To evaluate the estimation of the prevalence ratio (PR) using a Bayesian log-binomial regression model and its application, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants by using a Bayesian log-binomial regression model in the OpenBUGS software. The results showed that caregivers' recognition of an infant's risk signs of diarrhea was significantly associated with a 13% increase in medical care-seeking. Meanwhile, we compared the differences in the point and interval estimates of the PR and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for the duration of caregivers' education; model 3: adjusting for the distance between village and township and child month-age, based on model 2) between the Bayesian log-binomial regression model and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, and the estimated PRs were 1.130 (95%CI: 1.005-1.265), 1.128 (95%CI: 1.001-1.264) and 1.132 (95%CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95%CI: 1.055-1.206) and 1.126 (95%CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate the PR, which was 1.125 (95%CI: 1.051-1.200). In addition, the point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed slightly from those of the conventional log-binomial regression model, but they showed good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can effectively estimate the PR with less misconvergence and has advantages in application compared with the conventional log-binomial regression model.
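
    In a log-binomial model, log p = β₀ + β₁x and the PR is exp(β₁), subject to the constraint that fitted probabilities stay below 1. The minimal random-walk Metropolis sketch below, on simulated data, is a stand-in for the OpenBUGS fit; the data, flat priors, and tuning are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: exposure x (caregiver recognizes risk signs) and
# outcome y (sought medical care); true PR = exp(0.12) ~ 1.13.
n = 500
x = rng.integers(0, 2, n)
p = np.exp(-0.5 + 0.12 * x)        # log link: log p = b0 + b1*x
y = rng.random(n) < p

def loglik(b):
    eta = b[0] + b[1] * x
    if np.any(eta >= 0):           # keep probabilities in (0, 1)
        return -np.inf
    pi = np.exp(eta)
    return np.sum(y * np.log(pi) + (1 - y) * np.log1p(-pi))

# Random-walk Metropolis with flat priors.
b = np.array([-0.5, 0.0])
ll = loglik(b)
samples = []
for _ in range(20000):
    prop = b + rng.normal(0, 0.05, 2)
    llp = loglik(prop)
    if np.log(rng.random()) < llp - ll:
        b, ll = prop, llp
    samples.append(b[1])

pr = np.exp(np.array(samples[5000:]))   # posterior draws of the PR
print(pr.mean(), np.percentile(pr, [2.5, 97.5]))
```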

  13. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  14. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  15. First-order estimate of the planktic foraminifer biomass in the modern ocean

    Directory of Open Access Journals (Sweden)

    R. Schiebel

    2012-09-01

    Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different compared to other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. With a new non-destructive protocol developed from the bicinchoninic acid (BCA) method and nano-photospectrometry, we have analysed the protein-biomass, along with test size and weight, of 754 individual planktic foraminifers from 21 different species and morphotypes. From additional CHN analysis, it can be assumed that protein-biomass equals carbon-biomass. Accordingly, the average individual planktic foraminifer protein- and carbon-biomass amounts to 0.845 μg. Samples include symbiont-bearing and symbiont-barren species from the sea surface down to 2500 m water depth. Conversion factors between individual biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum test diameter). Assemblage-biomass data presented here include 1128 sites and water depth intervals. The regional coverage of data includes the North Atlantic, Arabian Sea, Red Sea, and Caribbean as well as literature data from the eastern and western North Pacific, and covers a wide range of oligotrophic to eutrophic waters over six orders of magnitude of planktic-foraminifer assemblage-biomass (PFAB). A first-order estimate of the average global planktic foraminifer biomass production (>125 μm) ranges from 8.2-32.7 Tg C yr⁻¹ (i.e. 0.008-0.033 Gt C yr⁻¹), and might be more than three times as high including neanic and juvenile individuals, adding up to 25-100 Tg C yr⁻¹. However, this is a first

  16. Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States

    Science.gov (United States)

    Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.

    2017-01-01

    This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.

  17. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs

  18. Declarative Modeling for Production Order Portfolio Scheduling

    Directory of Open Access Journals (Sweden)

    Banaszak Zbigniew

    2014-12-01

    A declarative framework is discussed that makes it possible to determine conditions as well as to develop decision-making software supporting small- and medium-sized enterprises engaged in unique, multi-project-like and mass-customization-oriented production. A set of unique production orders grouped into order portfolios is considered. Operations executed along different production orders share the available resources following a mutual exclusion protocol. A unique product or production batch is completed while following a given activity network order. The problem concerns scheduling a newly inserted project portfolio subject to constraints imposed by a multi-project environment. The answers sought are: Can a given project portfolio, specified by its cost and completion time, be completed within the assumed time period in the manufacturing system at hand? Which manufacturing system capability guarantees the completion of a given project portfolio ordered under the assumed cost and time constraints? The considered problems concern finding a computationally effective approach aimed at simultaneous routing and allocation as well as batching and scheduling of a newly ordered project portfolio subject to constraints imposed by a multi-project environment. The main objective is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at multi-project-like and mass-customization-oriented production scheduling. Multiple illustrative examples are discussed.

  19. Spatially ordered structures in storm clouds and fogs

    International Nuclear Information System (INIS)

    Shavlov, A.V.; Dzhumandzhi, V.A.

    2010-01-01

    The article shows that charged water drops can form spatially ordered structures in both storm clouds and fogs. To predict the existence of these structures, a model of the interaction mechanism among the charged particles is proposed. We also estimate the influence of drop ordering on the surface tension and the shear viscosity in clouds.

  20. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  1. Estimates of solutions of certain classes of second-order differential equations in a Hilbert space

    International Nuclear Information System (INIS)

    Artamonov, N V

    2003-01-01

    Linear second-order differential equations of the form u''(t)+(B+iD)u'(t)+(T+iS)u(t)=0 in a Hilbert space are studied. Under certain conditions on the (generally speaking, unbounded) operators T, S, B and D the correct solubility of the equation in the 'energy' space is proved and best possible (in the general case) estimates of the solutions on the half-axis are obtained

  2. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  3. Improving Frozen Precipitation Density Estimation in Land Surface Modeling

    Science.gov (United States)

    Sparrow, K.; Fall, G. M.

    2017-12-01

    The Office of Water Prediction (OWP) produces high-value water supply and flood risk planning information through the use of operational land surface modeling. Improvements in diagnosing frozen precipitation density will benefit the NWS's meteorological and hydrological services by refining estimates of a significant and vital input into land surface models. A current common practice for handling the density of snow accumulation in a land surface model is to use a standard 10:1 snow-to-liquid-equivalent ratio (SLR). Our research findings suggest the possibility of a more skillful approach for assessing the spatial variability of precipitation density. We developed a 30-year SLR climatology for the coterminous US from version 3.22 of the Global Historical Climatology Network-Daily (GHCN-D) dataset. Our methods followed the approach described by Baxter (2005) to estimate mean climatological SLR values at GHCN-D sites in the US, Canada, and Mexico for the years 1986-2015. In addition to the Baxter criteria, the following refinements were made: tests were performed to eliminate SLR outliers and frequent reports of SLR = 10, a linear SLR vs. elevation trend was fitted to station SLR mean values to remove the elevation trend from the data, and detrended SLR residuals were interpolated using ordinary kriging with a spherical semivariogram model. The elevation values of each station were based on the GMTED 2010 digital elevation model and the elevation trend in the data was established via linear least squares approximation. The ordinary kriging procedure was used to interpolate the data into gridded climatological SLR estimates for each calendar month at a 0.125 degree resolution. To assess the skill of this climatology, we compared estimates from our SLR climatology with observations from the GHCN-D dataset to consider the potential use of this climatology as a first guess of frozen precipitation density in an operational land surface model. The difference in
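
    A compact sketch of the detrend-then-krige step described above, using ordinary kriging with a spherical variogram from the pykrige package (assumed installed); the station data are synthetic and the climatology's quality-control filters are omitted:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # assumes pykrige is available

rng = np.random.default_rng(0)
lon = rng.uniform(-110, -90, 200)            # synthetic station longitudes
lat = rng.uniform(35, 45, 200)               # synthetic station latitudes
elev = rng.uniform(200, 2500, 200)           # synthetic elevations (m)
slr = 10 + 0.003 * elev + rng.normal(0, 1.5, 200)  # mean SLR values

slope, intercept = np.polyfit(elev, slr, 1)  # linear SLR-elevation trend
resid = slr - (intercept + slope * elev)     # detrended residuals

ok = OrdinaryKriging(lon, lat, resid, variogram_model="spherical")
gx = np.arange(-110, -90, 0.125)             # 0.125 degree grid
gy = np.arange(35, 45, 0.125)
z, ss = ok.execute("grid", gx, gy)           # kriged residual field
# Adding back intercept + slope * DEM on a co-registered elevation grid
# would recover the gridded SLR climatology.
```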

  4. Estimating national landfill methane emissions: an application of the 2006 Intergovernmental Panel on Climate Change Waste Model in Panama.

    Science.gov (United States)

    Weitz, Melissa; Coburn, Jeffrey B; Salinas, Edgar

    2008-05-01

    This paper estimates national methane emissions from solid waste disposal sites in Panama over the time period 1990-2020 using both the 2006 Intergovernmental Panel on Climate Change (IPCC) Waste Model spreadsheet and the default emissions estimate approach presented in the 1996 IPCC Good Practice Guidelines. The IPCC Waste Model has the ability to calculate emissions from a variety of solid waste disposal site types, taking into account country- or region-specific waste composition and climate information, and can be used with a limited amount of data. Countries with detailed data can also run the model with country-specific values. The paper discusses methane emissions from solid waste disposal; explains the differences between the two methodologies in terms of data needs, assumptions, and results; describes solid waste disposal circumstances in Panama; and presents the results of this analysis. It also demonstrates the Waste Model's ability to incorporate landfill gas recovery data and to make projections. Methane emission estimates from the former default method are 25 Gg in 1994, ranging from 23.1 Gg in 1990 to a projected 37.5 Gg in 2020. The Waste Model estimates are 26.7 Gg in 1994, ranging from 24.6 Gg in 1990 to 41.6 Gg in 2020. Emissions estimates for Panama produced by the new model were, on average, 8% higher than estimates produced by the former default methodology. The increased estimate can be attributed to the inclusion of all solid waste disposal in Panama (as opposed to only disposal in managed landfills), but the increase was offset somewhat by the different default factors and regional waste values between the 1996 and 2006 IPCC guidelines, and the use of the first-order decay model with a time delay for waste degradation in the IPCC Waste Model.
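
    At its core, the 2006 IPCC Waste Model is a first-order decay (FOD) calculation: each year's deposited decomposable carbon decays exponentially at rate k, and the annual methane release is the year-over-year difference of the remaining stock. A hedged sketch (all factors below are generic placeholders, not Panama's country-specific values):

```python
import numpy as np

years = np.arange(1990, 2021)
deposited = np.full(len(years), 300.0)   # Gg waste landfilled per year
DOC, DOC_f, MCF, F, k = 0.15, 0.5, 0.8, 0.5, 0.05  # placeholder factors

ch4 = np.zeros(len(years))
for i, y0 in enumerate(years):           # cohort deposited in year y0
    ddocm = deposited[i] * DOC * DOC_f * MCF   # decomposable carbon mass
    age = years - y0
    # One-year delay, then exponential first-order decay per year.
    decay = np.where(age >= 1, np.exp(-k * (age - 1)) - np.exp(-k * age), 0.0)
    ch4 += ddocm * decay * F * 16.0 / 12.0     # Gg CH4 released per year
```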

  5. State-of-charge inconsistency estimation of lithium-ion battery pack using mean-difference model and extended Kalman filter

    Science.gov (United States)

    Zheng, Yuejiu; Gao, Wenkai; Ouyang, Minggao; Lu, Languang; Zhou, Long; Han, Xuebing

    2018-04-01

    State-of-charge (SOC) inconsistency impacts the power, durability and safety of the battery pack. Therefore, it is necessary to measure the SOC inconsistency of the battery pack with good accuracy. We explore a novel method for modeling and estimating the SOC inconsistency of lithium-ion (Li-ion) battery pack with low computation effort. In this method, a second-order RC model is selected as the cell mean model (CMM) to represent the overall performance of the battery pack. A hypothetical Rint model is employed as the cell difference model (CDM) to evaluate the SOC difference. The parameters of mean-difference model (MDM) are identified with particle swarm optimization (PSO). Subsequently, the mean SOC and the cell SOC differences are estimated by using extended Kalman filter (EKF). Finally, we conduct an experiment on a small Li-ion battery pack with twelve cells connected in series. The results show that the evaluated SOC difference is capable of tracking the changing of actual value after a quick convergence.
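
    To make the machinery concrete, the sketch below runs one EKF predict/correct step on a second-order RC cell-mean model. The OCV curve, RC parameters, and noise covariances are illustrative stand-ins for the paper's PSO-identified values, and the Rint cell-difference layer is omitted:

```python
import numpy as np

dt, Q = 1.0, 3600.0                       # time step (s), capacity (As)
R0, R1, C1, R2, C2 = 0.01, 0.015, 2000.0, 0.02, 60000.0  # assumed values
a1, a2 = np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))

ocv = lambda s: 3.2 + 0.7 * s             # toy open-circuit-voltage curve
docv = lambda s: 0.7                      # its slope dOCV/dSOC

A = np.diag([1.0, a1, a2])                # state x = [SOC, U1, U2]
P = np.eye(3) * 1e-3
Qn, Rn = np.eye(3) * 1e-7, 1e-4           # process / measurement noise

def ekf_step(x, P, i_k, v_meas):
    # Predict: coulomb counting plus RC polarization dynamics.
    x = A @ x + np.array([-i_k * dt / Q, R1 * (1 - a1) * i_k,
                          R2 * (1 - a2) * i_k])
    P = A @ P @ A.T + Qn
    # Correct with the terminal-voltage measurement.
    H = np.array([docv(x[0]), -1.0, -1.0])
    v_pred = ocv(x[0]) - x[1] - x[2] - R0 * i_k
    K = P @ H / (H @ P @ H + Rn)
    x = x + K * (v_meas - v_pred)
    P = (np.eye(3) - np.outer(K, H)) @ P
    return x, P

x, P = ekf_step(np.array([0.9, 0.0, 0.0]), P, 1.0, 3.80)
```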

  6. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

    The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is being discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  7. Electroencephalography in ellipsoidal geometry with fourth-order harmonics.

    Science.gov (United States)

    Alcocer-Sosa, M; Gutierrez, D

    2016-08-01

    We present a solution to the electroencephalography (EEG) forward problem of computing the scalp electric potentials for the case when the head's geometry is modeled using a four-shell ellipsoidal geometry and the brain sources are modeled with an equivalent current dipole (ECD). The proposed solution includes terms up to the fourth-order ellipsoidal harmonics, and we compare this new approximation against those that consider only up to second- and third-order harmonics. Our comparisons use as reference a solution in which a tessellated volume approximates the head and the forward problem is solved through the boundary element method (BEM). We also assess the solution to the inverse problem of estimating the magnitude of an ECD through the different harmonic approximations. Our results show that the fourth-order solution provides a better estimate of the ECD in comparison to lower-order ones.

  8. Lattice Boltzmann model for high-order nonlinear partial differential equations.

    Science.gov (United States)

    Chai, Zhenhua; He, Nanzhong; Guo, Zhaoli; Shi, Baochang

    2018-01-01

    In this paper, a general lattice Boltzmann (LB) model is proposed for the high-order nonlinear partial differential equation with the form ∂_{t}ϕ + ∑_{k=1}^{m} α_{k} ∂_{x}^{k} Π_{k}(ϕ) = 0 (1 ≤ k ≤ m ≤ 6), where the α_{k} are constant coefficients and the Π_{k}(ϕ) are some known differential functions of ϕ. As some special cases of the high-order nonlinear partial differential equation, the classical (m)KdV equation, KdV-Burgers equation, K(n,n)-Burgers equation, Kuramoto-Sivashinsky equation, and Kawahara equation can be solved by the present LB model. Compared to the available LB models, the most distinct characteristic of the present model is the introduction of some suitable auxiliary moments such that the correct moments of the equilibrium distribution function can be achieved. In addition, we also conducted a detailed Chapman-Enskog analysis, and found that the high-order nonlinear partial differential equation can be correctly recovered from the proposed LB model. Finally, a large number of simulations are performed, and it is found that the numerical results agree with the analytical solutions, and usually the present model is also more accurate than the existing LB models [H. Lai and C. Ma, Sci. China Ser. G 52, 1053 (2009), 10.1007/s11433-009-0149-3; H. Lai and C. Ma, Phys. A (Amsterdam) 388, 1405 (2009), 10.1016/j.physa.2009.01.005] for high-order nonlinear partial differential equations.

  9. Lattice Boltzmann model for high-order nonlinear partial differential equations

    Science.gov (United States)

    Chai, Zhenhua; He, Nanzhong; Guo, Zhaoli; Shi, Baochang

    2018-01-01

    In this paper, a general lattice Boltzmann (LB) model is proposed for the high-order nonlinear partial differential equation with the form ∂_{t}ϕ + ∑_{k=1}^{m} α_{k} ∂_{x}^{k} Π_{k}(ϕ) = 0 (1 ≤ k ≤ m ≤ 6), where the α_{k} are constant coefficients and the Π_{k}(ϕ) are some known differential functions of ϕ. As some special cases of the high-order nonlinear partial differential equation, the classical (m)KdV equation, KdV-Burgers equation, K(n,n)-Burgers equation, Kuramoto-Sivashinsky equation, and Kawahara equation can be solved by the present LB model. Compared to the available LB models, the most distinct characteristic of the present model is the introduction of some suitable auxiliary moments such that the correct moments of the equilibrium distribution function can be achieved. In addition, we also conducted a detailed Chapman-Enskog analysis, and found that the high-order nonlinear partial differential equation can be correctly recovered from the proposed LB model. Finally, a large number of simulations are performed, and it is found that the numerical results agree with the analytical solutions, and usually the present model is also more accurate than the existing LB models [H. Lai and C. Ma, Sci. China Ser. G 52, 1053 (2009), 10.1007/s11433-009-0149-3; H. Lai and C. Ma, Phys. A (Amsterdam) 388, 1405 (2009), 10.1016/j.physa.2009.01.005] for high-order nonlinear partial differential equations.

  10. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages

  11. Covariant quantization of infinite spin particle models, and higher order gauge theories

    International Nuclear Information System (INIS)

    Edgren, Ludde; Marnelius, Robert

    2006-01-01

    Further properties of a recently proposed higher order infinite spin particle model are derived. Infinitely many classically equivalent but different Hamiltonian formulations are shown to exist. This leads to a condition of uniqueness in the quantization process. A consistent covariant quantization is shown to exist. Also a recently proposed supersymmetric version for half-odd integer spins is quantized. A general algorithm to derive gauge invariances of higher order Lagrangians is given and applied to the infinite spin particle model, and to a new higher order model for a spinning particle which is proposed here, as well as to a previously given higher order rigid particle model. The latter two models are also covariantly quantized

  12. NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Roman L. Leibov

    2017-09-01

    This paper presents a technique for estimating the matrix parameters of a nonlinear plant piecewise-continuous model using nonlinear model time responses and a random search method. One application area of piecewise-continuous models is identified. The results of applying the proposed approach to the formation of a piecewise-continuous model of an aircraft turbofan engine are presented.

  13. Quantitative estimation of renal function with dynamic contrast-enhanced MRI using a modified two-compartment model.

    Directory of Open Access Journals (Sweden)

    Bin Chen

    Full Text Available To establish a simple two-compartment model for glomerular filtration rate (GFR and renal plasma flow (RPF estimations by dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI.A total of eight New Zealand white rabbits were included in DCE-MRI. The two-compartment model was modified with the impulse residue function in this study. First, the reliability of GFR measurement of the proposed model was compared with other published models in Monte Carlo simulation at different noise levels. Then, functional parameters were estimated in six healthy rabbits to test the feasibility of the new model. Moreover, in order to investigate its validity of GFR estimation, two rabbits underwent acute ischemia surgical procedure in unilateral kidney before DCE-MRI, and pixel-wise measurements were implemented to detect the cortical GFR alterations between normal and abnormal kidneys.The lowest variability of GFR and RPF measurements were found in the proposed model in the comparison. Mean GFR was 3.03±1.1 ml/min and mean RPF was 2.64±0.5 ml/g/min in normal animals, which were in good agreement with the published values. Moreover, large GFR decline was found in dysfunction kidneys comparing to the contralateral control group.Results in our study demonstrate that measurement of renal kinetic parameters based on the proposed model is feasible and it has the ability to discriminate GFR changes in healthy and diseased kidneys.

  14. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces...

  15. Reduced Order Modeling of Combustion Instability in a Gas Turbine Model Combustor

    Science.gov (United States)

    Arnold-Medabalimi, Nicholas; Huang, Cheng; Duraisamy, Karthik

    2017-11-01

    Hydrocarbon fuel based propulsion systems are expected to remain relevant in aerospace vehicles for the foreseeable future. Design of these devices is complicated by combustion instabilities. The capability to model and predict these effects at reduced computational cost is a requirement for both design and control of these devices. This work focuses on computational studies on a dual swirl model gas turbine combustor in the context of reduced order model development. Full fidelity simulations are performed utilizing URANS and Hybrid RANS-LES with finite rate chemistry. Following this, data decomposition techniques are used to extract a reduced basis representation of the unsteady flow field. These bases are first used to identify sensor locations to guide experimental interrogations and controller feedback. Following this, initial results on developing a control-oriented reduced order model (ROM) will be presented. The capability of the ROM will be further assessed based on different operating conditions and geometric configurations.

  16. A matlab framework for estimation of NLME models using stochastic differential equations: applications for estimation of insulin secretion rates.

    Science.gov (United States)

    Mortensen, Stig B; Klim, Søren; Dammann, Bernd; Kristensen, Niels R; Madsen, Henrik; Overgaard, Rune V

    2007-10-01

    The non-linear mixed-effects model based on stochastic differential equations (SDEs) provides an attractive residual error model that is able to handle serially correlated residuals typically arising from structural mis-specification of the true underlying model. The use of SDEs also opens up new tools for model development and easily allows for tracking of unknown inputs and parameters over time. An algorithm for maximum likelihood estimation of the model has earlier been proposed, and the present paper presents the first general implementation of this algorithm. The implementation is done in Matlab and also demonstrates the use of parallel computing for improved estimation times. The use of the implementation is illustrated by two examples of application which focus on the ability of the model to estimate unknown inputs facilitated by the extension to SDEs. The first application is a deconvolution-type estimation of the insulin secretion rate based on a linear two-compartment model for C-peptide measurements. In the second application the model is extended to also give an estimate of the time-varying liver extraction based on both C-peptide and insulin measurements.

  17. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  18. Modeling the self-assembly of ordered nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Monson, Peter [Univ. of Massachusetts, Amherst, MA (United States); Auerbach, Scott [Univ. of Massachusetts, Amherst, MA (United States)

    2017-11-13

    This report describes progress on a collaborative project on the multiscale modeling of the assembly processes in the synthesis of nanoporous materials. Such materials are of enormous importance in modern technology, with applications in the chemical process industries, biomedicine and biotechnology as well as microelectronics. The project focuses on two important classes of materials: (i) microporous crystalline materials, such as zeolites, and (ii) ordered mesoporous materials. In the first case the pores are part of the crystalline structure, while in the second the structures are amorphous on the atomistic length scale but surfactant templating gives rise to order on the length scale of 2-20 nm. We have developed a modeling framework that encompasses both these kinds of materials. Our models focus on the assembly of corner-sharing silica tetrahedra in the presence of structure-directing agents. We emphasize a balance between sufficient realism in the models and computational tractability given the complex many-body phenomena. We use both on-lattice and off-lattice models, and the primary computational tools are Monte Carlo simulations with sampling techniques and ensembles appropriate to specific situations. Our modeling approach is the first to capture silica polymerization, nanopore crystallization, and mesopore formation through computer-simulated self-assembly.

  19. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    With advances in connected vehicle technology, dynamic vehicle route guidance models gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  20. Modelling stock order flows with non-homogeneous intensities from high-frequency data

    Science.gov (United States)

    Gorshenin, Andrey K.; Korolev, Victor Yu.; Zeifman, Alexander I.; Shorgin, Sergey Ya.; Chertok, Andrey V.; Evstafyev, Artem I.; Korchagin, Alexander Yu.

    2013-10-01

    A micro-scale model is proposed for the evolution of such an information system as the limit order book in financial markets. Within this model, the flows of orders (claims) are described by doubly stochastic Poisson processes taking account of the stochastic character of the intensities of buy and sell orders that determine the price discovery mechanism. The proposed multiplicative model of stochastic intensities makes it possible to analyze the characteristics of the order flows as well as the instantaneous proportion of the forces of buyers and sellers, that is, the imbalance process, without modelling the external information background. The proposed model gives the opportunity to link the micro-scale (high-frequency) dynamics of the limit order book with macro-scale models of stock price processes of the form of subordinated Wiener processes by means of limit theorems of probability theory and hence, to use the normal variance-mean mixture models of the corresponding heavy-tailed distributions. The approach can be useful in different areas with similar properties (e.g., in plasma physics).
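
    A doubly stochastic Poisson (Cox) order flow of the kind described can be simulated by first drawing a random intensity path and then drawing counts conditionally on it. A toy sketch with a mean-reverting intensity (the paper's multiplicative intensity structure is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

T, dt = 1.0, 1e-3
n = int(T / dt)
lam = np.empty(n)
lam[0] = 100.0                      # initial buy-order intensity (1/s)
for k in range(1, n):
    # Mean-reverting random intensity, floored to stay positive.
    lam[k] = max(lam[k - 1] + 5.0 * (100.0 - lam[k - 1]) * dt
                 + 30.0 * np.sqrt(dt) * rng.normal(), 1.0)

orders = rng.poisson(lam * dt)      # order counts per time step
print(orders.sum(), "orders in", T, "s")
```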

  1. Mechanical model for filament buckling and growth by phase ordering.

    Science.gov (United States)

    Rey, Alejandro D; Abukhdeir, Nasser M

    2008-02-05

    A mechanical model of open filament shape and growth driven by phase ordering is formulated. For a given phase-ordering driving force, the model output is the filament shape evolution and the filament end-point kinematics. The linearized model for the slope of the filament is the Cahn-Hilliard model of spinodal decomposition, where the buckling corresponds to concentration fluctuations. Two modes are predicted: (i) sequential growth and buckling and (ii) simultaneous buckling and growth. The relation among the maximum buckling rate, filament tension, and matrix viscosity is given. These results contribute to ongoing work in smectic A filament buckling.

  2. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  3. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches to the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation, and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  4. Study on Hyperspectral Characteristics and Estimation Model of Soil Mercury Content

    Science.gov (United States)

    Liu, Jinbao; Dong, Zhenyu; Sun, Zenghui; Ma, Hongchao; Shi, Lei

    2017-12-01

    In this study, the mercury content of 44 soil samples from the Guan Zhong area of Shaanxi Province was used as the data source, and soil reflectance spectra were obtained with an ASD FieldSpec HR spectrometer (350-2500 nm). The reflection characteristics of samples with different mercury contents were compared, together with the effect of different pre-treatment methods on the resulting soil heavy-metal spectral inversion model. First-order differential, second-order differential and logarithmic transformations of the reflectance were carried out after NOR, MSC and SNV pre-treatment, and the bands sensitive to mercury content under the different mathematical transformations were selected. A hyperspectral estimation model was then established by regression. Chemical analysis shows serious Hg pollution in the study area. The results show that: (1) reflectivity decreases with increasing mercury content, and the sensitive regions for mercury are located at 392-455 nm, 923-1040 nm and 1806-1969 nm; (2) combining the NOR, MSC and SNV transformations with differential transformations can enhance the heavy-metal information in the soil spectra, and combining highly correlated bands improves the stability and predictive ability of the model; (3) the partial least squares regression model based on the logarithm of the original reflectance performs best and is the most precise, with Rc² = 0.9912, RMSEC = 0.665, Rv² = 0.9506 and RMSEP = 1.93, enabling rapid prediction of mercury content in this region.
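
    A brief sketch of the best-performing approach reported, partial least squares regression on the logarithm of reflectance, using synthetic spectra as placeholders for the paper's 44 field samples. The spectral shapes, the mercury dependence and all parameters below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)

# Synthetic reflectance spectra on a 350-2500 nm grid whose overall level
# depends weakly on a simulated "mercury" content.
wavelengths = np.arange(350, 2501, 10)
n_samples = 44
hg = rng.uniform(0.1, 5.0, n_samples)                  # mercury content
base = np.exp(-((wavelengths - 1400) / 800.0) ** 2)
spectra = (np.outer(1 - 0.05 * hg, base)
           + 0.01 * rng.standard_normal((n_samples, wavelengths.size)))

X = np.log(np.clip(spectra, 1e-6, None))               # log of reflectance
X_tr, X_te, y_tr, y_te = train_test_split(X, hg, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
print(f"Rv^2 = {r2_score(y_te, pred):.3f}, RMSEP = {mean_squared_error(y_te, pred) ** 0.5:.3f}")
```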

  5. Estimation of Genetic Parameters for First Lactation Monthly Test-day Milk Yields using Random Regression Test Day Model in Karan Fries Cattle

    Directory of Open Access Journals (Sweden)

    Ajay Singh

    2016-06-01

    A single-trait linear mixed random regression test-day model was applied for the first time to the analysis of first-lactation monthly test-day milk yield records in Karan Fries cattle. The test-day milk yield data were modeled using a random regression model (RRM) considering different orders of Legendre polynomial for the additive genetic effect (4th order) and the permanent environmental effect (5th order). Data pertaining to 1,583 lactation records spread over a period of 30 years were recorded and analyzed in the study. The variance components, heritability and genetic correlations among test-day milk yields were estimated using the RRM. RRM heritability estimates of test-day milk yield varied from 0.11 to 0.22 across test-day records. The estimates of genetic correlations between different test-day milk yields ranged from 0.01 (between test-day 1 [TD-1] and TD-11) to 0.99 (between TD-4 and TD-5). The magnitude of the genetic correlations between test-day milk yields decreased as the interval between test-days increased, and adjacent test-days had higher correlations. Additive genetic and permanent environment variances were higher for test-day milk yields at both ends of lactation. The residual variance was observed to be lower than the permanent environment variance for all test-day milk yields.
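
    A small sketch of how the Legendre-polynomial covariates behind such a random regression model are typically built: days in milk are rescaled to [-1, 1] and each polynomial order contributes one column of the design matrix. The day range and the orders follow the abstract; the helper name and the scaling convention are assumptions (some implementations also normalize the polynomials).

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(dim, dim_min=5, dim_max=305, order=4):
    # Rescale days in milk to [-1, 1]; column j holds P_j evaluated there.
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    return np.column_stack([
        legendre.legval(t, np.eye(order + 1)[j]) for j in range(order + 1)
    ])

test_days = np.arange(5, 306, 30)                      # 11 monthly test days
Z_additive = legendre_covariates(test_days, order=4)   # 4th order (genetic)
Z_pe = legendre_covariates(test_days, order=5)         # 5th order (perm. env.)
print(Z_additive.shape, Z_pe.shape)                    # (11, 5) (11, 6)
```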

  6. Numerical modelling of ultrasonic waves in a bubbly Newtonian liquid using a high-order acoustic cavitation model.

    Science.gov (United States)

    Lebon, G S Bruno; Tzanakis, I; Djambazov, G; Pericleous, K; Eskin, D G

    2017-07-01

    To address difficulties in treating large volumes of liquid metal with ultrasound, a fundamental study of acoustic cavitation in liquid aluminium, expressed in an experimentally validated numerical model, is presented in this paper. To improve the understanding of the cavitation process, a non-linear acoustic model is validated against reference water pressure measurements from acoustic waves produced by an immersed horn. A high-order method is used to discretize the wave equation in both space and time. These discretized equations are coupled to the Rayleigh-Plesset equation using two different time scales to couple the bubble and flow scales, resulting in a stable, fast, and reasonably accurate method for the prediction of acoustic pressures in cavitating liquids. This method is then applied to the context of treatment of liquid aluminium, where it predicts that the most intense cavitation activity is localised below the vibrating horn and estimates the acoustic decay below the sonotrode with reasonable qualitative agreement with experimental data.
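
    The bubble-scale half of the coupling described above is the Rayleigh-Plesset equation. A minimal single-bubble integration is sketched below, with water-like constants and a sinusoidal forcing standing in for the computed acoustic field; the coupling to the high-order wave solver and the two-time-scale scheme are not reproduced, and all values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, sigma = 1000.0, 1e-3, 0.072   # density, viscosity, surface tension
p0, R0, kappa = 101325.0, 5e-6, 1.4    # ambient pressure, rest radius, polytropic index
pa, f = 0.8e5, 20e3                    # acoustic amplitude (Pa) and frequency (Hz)

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = (p0 + 2 * sigma / R0) * (R0 / R) ** (3 * kappa)   # gas pressure in bubble
    p_inf = p0 + pa * np.sin(2 * np.pi * f * t)               # far-field driving pressure
    Rddot = ((p_gas - p_inf - 2 * sigma / R - 4 * mu * Rdot / R) / rho
             - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 5 / f), [R0, 0.0],
                rtol=1e-9, atol=1e-12, max_step=1e-8)
print(f"max R/R0 over 5 cycles: {sol.y[0].max() / R0:.2f}")
```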

  7. Nonparametric volatility density estimation for discrete time models

    NARCIS (Netherlands)

    van Es, Bert; Spreij, P.J.C.; van Zanten, J.H.

    2005-01-01

    We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed.

  8. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  9. Data-Driven Model Order Reduction for Bayesian Inverse Problems

    KAUST Repository

    Cui, Tiangang; Youssef, Marzouk; Willcox, Karen

    2014-01-01

    One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce
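
    The record is truncated, but the named technique can be sketched: projection-based model order reduction builds a low-dimensional basis from snapshots of the full model (here via POD/SVD) and projects the expensive operator onto it. The toy linear system below is an assumption; the paper's data-driven construction tailored to the Bayesian inverse problem is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m, r = 500, 60, 8                                   # full dim, snapshots, reduced dim
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))    # stable-ish full operator

# Collect snapshots from an implicit-Euler trajectory of dx/dt = A x.
step = np.linalg.inv(np.eye(n) - 0.05 * A)
x = rng.standard_normal(n)
snapshots = np.empty((n, m))
for j in range(m):
    x = step @ x
    snapshots[:, j] = x

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                           # POD basis: dominant left singular vectors

A_r = V.T @ A @ V                      # reduced r x r operator for cheap evaluations
print("reduced operator:", A_r.shape,
      "snapshot energy captured:", round(float((s[:r] ** 2).sum() / (s ** 2).sum()), 4))
```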

  10. Efficient and robust estimation for longitudinal mixed models for binary data

    DEFF Research Database (Denmark)

    Holst, René

    2009-01-01

    This paper proposes a longitudinal mixed model for binary data. The model extends the classical Poisson trick, in which a binomial regression is fitted by switching to a Poisson framework. A recent estimating equations method for generalized linear longitudinal mixed models, called GEEP, is used... as a vehicle for fitting the conditional Poisson regressions, given a latent process of serial correlated Tweedie variables. The regression parameters are estimated using a quasi-score method, whereas the dispersion and correlation parameters are estimated by use of bias-corrected Pearson-type estimating... equations, using second moments only. Random effects are predicted by BLUPs. The method provides a computationally efficient and robust approach to the estimation of longitudinal clustered binary data and accommodates linear and non-linear models. A simulation study is used for validation and finally...

  11. Intelligent Models Performance Improvement Based on Wavelet Algorithm and Logarithmic Transformations in Suspended Sediment Estimation

    Directory of Open Access Journals (Sweden)

    R. Hajiabadi

    2016-10-01

    Introduction: One reason hydrological phenomena, and time series in particular, are difficult to predict is the presence of features such as trend, noise and high-frequency oscillations. These complex features, especially noise, can be detected or removed by preprocessing, and appropriate preprocessing makes the estimation of these phenomena easier. Preprocessing is particularly effective in data-driven models such as artificial neural networks, gene expression programming and support vector machines, because the quality of the data matters in these models. The present study tries to improve the results of intelligent models by considering denoising and data transformation as two different kinds of preprocessing. Two intelligent models, an Artificial Neural Network and Gene Expression Programming, are applied to the estimation of daily suspended sediment load. Wavelet transforms and logarithmic transformation are used for denoising and data transformation, respectively. Finally, the impacts of preprocessing on the results of the intelligent models are evaluated. Materials and Methods: Gene Expression Programming and an Artificial Neural Network are used as intelligent models for suspended sediment load estimation, and the contributions of the denoising and logarithmic-transformation preprocessors to the improvement of the results are evaluated and compared. Two different logarithmic transforms, LN and LOG, are considered. Wavelet transformation is used for time-series denoising: the time series is first decomposed at one level into an approximation part and a detail part, and the high-frequency detail part is then removed as noise. Given the ability of gene expression programming and artificial neural networks to analyse nonlinear systems, daily values of the suspended sediment load of the Skunk River in the USA over a 5-year period are investigated and then estimated. 4 years of ...
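
    A compact sketch of the two preprocessing steps just described: one-level wavelet decomposition with the detail part zeroed out, followed by a log transform, using PyWavelets on a synthetic series. The wavelet choice (db4) and the series itself are illustrative assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)

t = np.arange(1826)                                   # ~5 years of daily data
sediment = (np.exp(2 + np.sin(2 * np.pi * t / 365))
            * (1 + 0.3 * np.abs(rng.standard_normal(t.size))))

# One-level decomposition; discard the high-frequency detail as noise.
approx, detail = pywt.dwt(sediment, "db4")
denoised = pywt.idwt(approx, np.zeros_like(detail), "db4")[: t.size]

log_transformed = np.log(denoised)                    # LN transform (LOG would use log10)
print(denoised.shape, np.round(log_transformed[:3], 3))
```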

  12. Order Quantity Distributions: Estimating an Adequate Aggregation Horizon

    Directory of Open Access Journals (Sweden)

    Eriksen Poul Svante

    2016-09-01

    In this paper, the demand faced by a company in the form of customer orders is investigated from both an exploratory numerical and an analytical perspective. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods assume that demand in the form of customer orders is independently and identically distributed and stems from symmetric distributions. To investigate and illustrate the aggregation of demand needed to live up to these assumptions, a simple methodological framework for checking the validity of the assumptions and analyzing the behavior of orders is developed. The paper also presents an analytical approach to identifying the aggregation horizon needed to achieve a stable demand. Furthermore, a case-study application of the presented framework is presented and conclusions are drawn.
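
    A numerical illustration of the idea: individual order quantities are heavily skewed, but sums over a growing aggregation horizon approach a symmetric, near-normal distribution, so one can search for the shortest horizon at which symmetry or normality is acceptable. The lognormal order sizes and the chosen horizons are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Daily demand = sum of a few heavily skewed (lognormal) order quantities.
daily = np.array([rng.lognormal(3.0, 1.0, 8).sum() for _ in range(2000)])

# Aggregate over growing horizons and watch skewness shrink toward 0.
for horizon in (1, 5, 10, 20, 40):
    agg = np.add.reduceat(daily, np.arange(0, daily.size, horizon))
    skew = stats.skew(agg)
    p_norm = stats.shapiro(agg[:500])[1]   # normality test (subsample for stability)
    print(f"horizon {horizon:3d} days: skewness {skew:5.2f}, Shapiro-Wilk p {p_norm:.3f}")
```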

  13. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step for the field. First, such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, estimating the predictive variance makes it possible to evaluate model quality in terms of the data likelihood. This offers a solution to the problem of ground-truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  14. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Multistate models, that is, mixture models with more than one component distribution, are preferred over single-state probability models in modeling the distribution of travel time. A literature review indicated that finite multistate modeling of travel time using the lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of the travel time distribution to an unbounded number of lognormal components. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with a stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. The Markov Chain Monte Carlo (MCMC) sampling technique was then employed to estimate the posterior distribution of the parameters. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in accounting for complex mixture distributions of travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of the onset and offset of congestion periods.
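
    A quick sketch of the Dirichlet-process mixture idea on synthetic travel times, using scikit-learn's truncated stick-breaking implementation (variational inference rather than the paper's MCMC) capped at six components as in the abstract. The two-state lognormal data are an assumption.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)

# Synthetic travel times: a free-flow state and a congested state.
tt = np.concatenate([rng.lognormal(1.6, 0.1, 700),
                     rng.lognormal(2.1, 0.2, 300)])
X = np.log(tt).reshape(-1, 1)          # lognormal mixture -> normal mixture

dpmm = BayesianGaussianMixture(
    n_components=6,                    # truncation level for the stick-breaking prior
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0,
).fit(X)

active = dpmm.weights_ > 0.01          # components the data actually supports
print("effective number of states:", active.sum())
print("weights:", np.round(dpmm.weights_[active], 3))
```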

  15. E-Model MOS Estimate Precision Improvement and Modelling of Jitter Effects

    Directory of Open Access Journals (Sweden)

    Adrian Kovac

    2012-01-01

    This paper deals with the ITU-T E-model, which is used for non-intrusive MOS VoIP call quality estimation on IP networks. The pros of the E-model are its computational simplicity and its usability on real-time traffic. The cons, as shown in our previous work, are the inability of the E-model to reflect the effects of network jitter present in real traffic flows and of jitter-buffer behavior on end-user devices. These effects are visible mostly in traffic over WAN, internet and radio networks and cause the E-model MOS call quality estimate to be noticeably too optimistic. In this paper, we propose a modification to the E-model using the previously proposed Pplef (effective packet loss) based on jitter and a jitter-buffer model built on a Pareto/D/1/K system. We subsequently optimize the newly added parameters reflecting jitter effects in the E-model, using the PESQ intrusive measurement method as a reference for selected audio codecs. Function fitting and parameter optimization are performed under varying delay, packet loss, jitter and different jitter-buffer sizes for both correlated and uncorrelated long-tailed network traffic.
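
    For orientation, a sketch of the standard (unmodified) E-model computation the paper builds on: a transmission rating R is reduced by delay and packet-loss impairments and mapped to MOS via the ITU-T G.107 formula. The simplified delay term and the G.711-style loss parameters are common defaults, used here as assumptions; the paper's jitter-aware Pplef extension is not reproduced.

```python
def mos_from_r(r):
    # ITU-T G.107 mapping from transmission rating R to MOS.
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def e_model_mos(delay_ms, ppl, ie=0.0, bpl=4.3, burst_r=1.0):
    """Simplified E-model: R = 93.2 - Id - Ie_eff (defaults for other terms).

    ppl is packet loss in percent; ie/bpl are codec-specific (G.711 shown).
    The paper replaces ppl with an effective loss Pplef that also accounts
    for jitter-buffer discards; that extension is not shown here.
    """
    i_d = 0.024 * delay_ms + max(0.0, 0.11 * (delay_ms - 177.3))  # delay impairment
    ie_eff = ie + (95 - ie) * ppl / (ppl / burst_r + bpl)         # loss impairment
    return mos_from_r(93.2 - i_d - ie_eff)

print(f"MOS at 150 ms / 1% loss: {e_model_mos(150, 1.0):.2f}")
```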

  16. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
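
    The distinction drawn above, in runnable form: for a logistic model, the prediction at the mean covariate differs from the mean of the predictions over the covariate distribution, and only the latter matches the population group mean. The simulated data are an assumption, and the paper's proposed estimator and its variance formula are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Simulate one treatment group: binary outcome with a strong covariate effect.
n = 2000
x = rng.normal(0.0, 2.0, n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * x))))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

at_mean_covariate = fit.predict(np.array([[1.0, x.mean()]]))[0]   # typical software default
mean_of_predictions = fit.predict(X).mean()                       # consistent group mean
print(f"response at mean covariate:  {at_mean_covariate:.3f}")
print(f"mean of predicted responses: {mean_of_predictions:.3f}  (observed: {y.mean():.3f})")
```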

  17. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
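
    The two sampling schemes named above can be illustrated in a few lines: a stratified grid whose model count grows exponentially with the parameter dimension, versus a Latin hypercube whose sample count stays fixed. SciPy's qmc module supplies the LHS; the bounds and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

# Two-dimensional parameter space with illustrative bounds.
lo, hi = np.array([0.0, 10.0]), np.array([1.0, 50.0])

# SGBS-style: equally spaced strata on a grid (model count grows as g**d).
g = 4
grid = np.stack(
    np.meshgrid(*[np.linspace(l, h, g) for l, h in zip(lo, hi)]), axis=-1
).reshape(-1, 2)

# LHS: the number of parallel filter models stays fixed as dimensions grow.
sampler = qmc.LatinHypercube(d=2, seed=0)
lhs = qmc.scale(sampler.random(n=8), lo, hi)

print("grid models:", len(grid), " LHS models:", len(lhs))
```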

  18. Relative risk estimation of Chikungunya disease in Malaysia: An analysis based on Poisson-gamma model

    Science.gov (United States)

    Samat, N. A.; Ma'arof, S. H. Mohd Imam

    2015-05-01

    Disease mapping is a method to display the geographical distribution of disease occurrence, which generally involves the use and interpretation of a map to show the incidence of certain diseases. Relative risk (RR) estimation is one of the most important issues in disease mapping. This paper begins by providing a brief overview of Chikungunya disease. This is followed by a review of the classical model used in disease mapping, based on the standardized morbidity ratio (SMR), which we then apply to our Chikungunya data. We then fit an extension of the classical model, which we refer to as a Poisson-Gamma model, in which prior distributions for the relative risks are assumed known. Both results are displayed and compared using maps, and the extended model yields a smoother map with fewer extreme values of estimated relative risk. Extensions of this work will consider other methods that are relevant to overcoming the drawbacks of the existing methods, in order to inform and direct government strategy for monitoring and controlling Chikungunya disease.
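
    The smoothing effect described above follows from Poisson-Gamma conjugacy: with O_i ~ Poisson(E_i θ_i) and θ_i ~ Gamma(a, b), the posterior mean relative risk is (a + O_i)/(b + E_i), a shrunken version of the raw SMR = O_i/E_i. The district counts and prior values below are illustrative assumptions.

```python
import numpy as np

observed = np.array([0, 3, 12, 1, 7])          # hypothetical case counts per district
expected = np.array([2.1, 2.8, 9.5, 0.4, 6.3]) # expected counts under uniform risk

smr = observed / expected                      # classical estimate, unstable for small E
a, b = 1.0, 1.0                                # Gamma prior; in practice estimated from data
rr_pg = (a + observed) / (b + expected)        # Poisson-Gamma posterior mean

for o, e, s, r in zip(observed, expected, smr, rr_pg):
    print(f"O={o:2d} E={e:4.1f}  SMR={s:5.2f}  Poisson-Gamma RR={r:5.2f}")
```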

  19. Computable error estimates of a finite difference scheme for option pricing in exponential Lévy models

    KAUST Repository

    Kiessling, Jonas

    2014-05-06

    Option prices in exponential Lévy models solve certain partial integro-differential equations. This work focuses on developing novel, computable error approximations for a finite difference scheme that is suitable for solving such PIDEs. The scheme was introduced in (Cont and Voltchkova, SIAM J. Numer. Anal. 43(4):1596-1626, 2005). The main results of this work are new estimates of the dominating error terms, namely the time and space discretisation errors. In addition, the leading order terms of the error estimates are determined in a form that is more amenable to computations. The payoff is only assumed to satisfy an exponential growth condition; it is not assumed to be Lipschitz continuous as in previous works. If the underlying Lévy process has infinite jump activity, then the jumps smaller than some cut-off size ε are approximated by diffusion. The resulting diffusion approximation error is also estimated, with its leading order term in computable form, as well as the dependence of the time and space discretisation errors on this approximation. Consequently, it is possible to determine how to jointly choose the space and time grid sizes and the cut-off parameter ε. © 2014 Springer Science+Business Media Dordrecht.

  20. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated.
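
    Two of the compared estimators are simple enough to sketch from their standard single-list forms: the Chao lower bound n + f1²/(2 f2) and the Zelterman estimator n / (1 - exp(-2 f2/f1)), where n is the number of distinct individuals observed and f1, f2 are the counts of individuals seen exactly once and twice. The simulated sighting data below are an assumption.

```python
import numpy as np

def chao_estimate(counts):
    # counts[i] = number of times observed individual i appeared (all >= 1).
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    return counts.size + f1 ** 2 / (2 * f2)          # lower bound for population size

def zelterman_estimate(counts):
    f1 = np.sum(counts == 1)
    f2 = np.sum(counts == 2)
    lam = 2 * f2 / f1                                # Poisson rate from f1, f2 only
    return counts.size / (1 - np.exp(-lam))

rng = np.random.default_rng(8)
true_n, lam = 1000, 0.8
sightings = rng.poisson(lam, true_n)
observed = sightings[sightings > 0]                  # unseen individuals never enter the list
print(f"observed {observed.size} of {true_n}")
print(f"Chao: {chao_estimate(observed):.0f}  Zelterman: {zelterman_estimate(observed):.0f}")
```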