WorldWideScience

Sample records for model order estimation

  1. ICA Model Order Estimation Using Clustering Method

    Directory of Open Access Journals (Sweden)

    P. Sovka

    2007-12-01

Full Text Available In this paper, a novel approach to independent component analysis (ICA) model order estimation for movement electroencephalogram (EEG) signals is described. The application is targeted at brain-computer interface (BCI) EEG preprocessing. Previous work has shown that it is possible to decompose EEG into movement-related and non-movement-related independent components (ICs). Selecting only the movement-related ICs may increase the BCI EEG classification score. The true number of independent sources in the brain is an important parameter of the preprocessing step. Previously, we used principal component analysis (PCA) to estimate the number of independent sources. However, PCA estimates only the number of uncorrelated, not independent, components, ignoring higher-order signal statistics. In this work, we use another approach: selecting highly correlated ICs across several ICA runs. The ICA model order estimation is performed at significance level α = 0.05, and the resulting model order depends, to a greater or lesser extent, on the ICA algorithm and its parameters.
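The run-across-runs selection idea can be sketched as follows: run ICA several times with different random initializations and count the components that reappear (high absolute correlation) in every run. This is a minimal sketch with synthetic sources, assuming scikit-learn's FastICA; the 0.9 correlation threshold is an illustrative assumption, not the paper's α = 0.05 significance test.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG: 3 independent sources mixed into 6 channels + noise.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
X = S @ rng.normal(size=(3, 6)) + 0.05 * rng.normal(size=(t.size, 6))

# Run ICA several times with different random initializations and keep the
# zero-mean, unit-norm component time courses from each run.
runs = []
for seed in range(3):
    ics = FastICA(n_components=6, random_state=seed, max_iter=2000).fit_transform(X)
    ics -= ics.mean(axis=0)
    runs.append(ics / np.linalg.norm(ics, axis=0))

# A component counts toward the model order if a highly correlated partner
# (|corr| > 0.9, an assumed threshold) exists in every other run.
ref = runs[0]
order = sum(
    all(np.max(np.abs(other.T @ ref[:, j])) > 0.9 for other in runs[1:])
    for j in range(ref.shape[1])
)
print("estimated model order:", order)
```

The three genuine sources are recovered consistently across runs, while directions dominated by Gaussian noise are not pinned down by ICA and tend to vary with the initialization.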

  2. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and the number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment…

  3. Accelerated gravitational wave parameter estimation with reduced order modeling.

    Science.gov (United States)

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  4. Improved variance estimation of maximum likelihood estimators in stable first-order dynamic regression models

    NARCIS (Netherlands)

    Kiviet, J.F.; Phillips, G.D.A.

    2014-01-01

    In dynamic regression models conditional maximum likelihood (least-squares) coefficient and variance estimators are biased. Using expansion techniques an approximation is obtained to the bias in variance estimation yielding a bias corrected variance estimator. This is achieved for both the standard

  5. A Modified Weighted Symmetric Estimator for a Gaussian First-order Autoregressive Model with Additive Outliers

    Directory of Open Access Journals (Sweden)

    Wararit PANICHKITKOSOLKUL

    2012-09-01

Full Text Available Guttman and Tiao [1] and Chang [2] showed that outliers may cause serious bias in estimating autocorrelations, partial correlations, and autoregressive moving average parameters (cited in Chang et al. [3]). This paper presents a modified weighted symmetric estimator for a Gaussian first-order autoregressive AR(1) model with additive outliers. We apply a recursive median adjustment based on an exponentially weighted moving average (EWMA) to the weighted symmetric estimator of Park and Fuller [4]. We consider the following estimators: the weighted symmetric estimator, the recursive mean adjusted weighted symmetric estimator proposed by Niwitpong [5], the recursive median adjusted weighted symmetric estimator proposed by Panichkitkosolkul [6], and the weighted symmetric estimator using an adjusted recursive median based on the EWMA. Using Monte Carlo simulations, we compare the mean square error (MSE) of the estimators. The simulation results show that the proposed estimator provides a lower MSE than the other three estimators in almost all situations.
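The mechanics can be sketched as follows, assuming the standard weighted symmetric AR(1) estimator of Park and Fuller and one plausible reading of the recursive-median/EWMA level adjustment; the smoothing constant λ = 0.1, the sample size, and the outlier pattern are all invented for illustration, and the paper's exact adjustment differs in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a Gaussian AR(1) series (rho = 0.6) with a few additive outliers.
n, rho = 400, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho * y[t - 1] + rng.normal()
y[[50, 150, 300]] += 8.0                  # additive outliers

def weighted_symmetric(z):
    """Weighted symmetric AR(1) estimator (Park & Fuller form)."""
    num = np.sum(z[1:] * z[:-1])
    den = np.sum(z[1:-1] ** 2) + np.sum(z ** 2) / len(z)
    return num / den

# Level adjustment: recursive medians smoothed by an EWMA, then subtracted.
lam, level = 0.1, y[0]
adj = np.empty(n)
for t in range(n):
    level = lam * np.median(y[: t + 1]) + (1 - lam) * level
    adj[t] = y[t] - level

rho_plain = weighted_symmetric(y - y.mean())
rho_adj = weighted_symmetric(adj)
print(f"WS estimate: {rho_plain:.3f}, median/EWMA-adjusted: {rho_adj:.3f}")
```

The recursive median is less sensitive to the additive outliers than the running mean, which is the motivation for using it as the centering sequence.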

  6. Research on SOC estimation based on second-order RC model

    Directory of Open Access Journals (Sweden)

    Xieyang Wang

    2012-11-01

Full Text Available The estimation accuracy of a battery's State of Charge (SOC) plays an important role in the development of hybrid electric vehicles (HEVs). Accurate SOC estimation prevents the battery from being overcharged or over-discharged, which increases battery lifetime. Although the Kalman filter algorithm offers good estimation accuracy for HEV applications, in which the current changes quickly, it relies heavily on the battery model; in other words, accurate SOC estimation requires a precise battery model. Moreover, while the HEV is running, the statistical characteristics of the noise in the data collected by the battery management system are unknown, which can degrade the performance of the Kalman filter or even cause it to diverge. To solve this problem, an adaptive Kalman filter algorithm is adopted in this paper to estimate battery SOC based on a second-order RC battery model. MATLAB simulation analysis shows that the SOC estimation accuracy is improved to some extent.
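A minimal sketch of SOC estimation with an (ordinary, non-adaptive) EKF on a second-order RC equivalent circuit: state [SOC, v1, v2], coulomb-counting dynamics, and terminal voltage as the measurement. All parameter values, the linear OCV curve, and the noise covariances below are invented for illustration; the paper's adaptive filter additionally estimates the noise statistics online.

```python
import numpy as np

# Illustrative second-order RC battery parameters (made up, not from the paper).
Q = 2.0 * 3600          # capacity [As]
R0, R1, C1, R2, C2 = 0.05, 0.015, 2000.0, 0.02, 60000.0
dt = 1.0

def ocv(soc):           # toy linear OCV curve; real OCV-SOC maps are nonlinear
    return 3.2 + 0.9 * soc

def f(x, i):
    """State transition: x = [SOC, v1, v2], i = current (discharge positive)."""
    soc, v1, v2 = x
    a1, a2 = np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))
    return np.array([soc - dt * i / Q,
                     a1 * v1 + R1 * (1 - a1) * i,
                     a2 * v2 + R2 * (1 - a2) * i])

def h(x, i):            # terminal voltage measurement model
    return ocv(x[0]) - x[1] - x[2] - R0 * i

# Jacobians; H uses d(ocv)/d(soc) = 0.9 for the toy OCV curve.
A = np.diag([1.0, np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))])
H = np.array([[0.9, -1.0, -1.0]])
Qn, Rn = np.diag([1e-8, 1e-6, 1e-6]), np.array([[1e-4]])

x_true = np.array([0.9, 0.0, 0.0])
x_est, P = np.array([0.5, 0.0, 0.0]), np.eye(3) * 0.1   # deliberately wrong SOC
rng = np.random.default_rng(1)

for k in range(1800):
    i = 1.0 + 0.5 * np.sin(k / 50)          # time-varying load current [A]
    x_true = f(x_true, i)
    z = h(x_true, i) + rng.normal(0, 0.01)  # noisy voltage measurement
    # EKF predict / update
    x_est = f(x_est, i)
    P = A @ P @ A.T + Qn
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    x_est = x_est + K.ravel() * (z - h(x_est, i))
    P = (np.eye(3) - K @ H) @ P

print(f"true SOC {x_true[0]:.3f}, estimated {x_est[0]:.3f}")
```

Despite the 0.4 initial SOC error, the voltage feedback pulls the estimate back toward the truth because the OCV slope makes SOC observable from the terminal voltage.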

  7. Higher Order Mean Squared Error of Generalized Method of Moments Estimators for Nonlinear Models

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

Full Text Available The generalized method of moments (GMM) has been widely applied to the estimation of nonlinear models in economics and finance. Although GMM has good asymptotic properties under fairly mild regularity conditions, its finite-sample performance is often poor. To improve the finite-sample performance of GMM estimators, this paper studies the higher-order mean squared error of two-step efficient GMM estimators for nonlinear models. Specifically, we consider a general nonlinear regression model with endogeneity and derive the higher-order asymptotic mean square error of the two-step efficient GMM estimator for this model using iterative techniques and higher-order asymptotic theory. Our theoretical results allow the number of moments to grow with the sample size and are suitable for general moment restriction models, which contain conditional moment restriction models as special cases. The higher-order mean square error can be used to compare different estimators and to construct selection criteria for improving an estimator's finite-sample performance.

  8. Mixed Lp Estimators Variety for Model Order Reduction in Control Oriented System Identification

    Directory of Open Access Journals (Sweden)

    Christophe Corbier

    2015-01-01

Full Text Available A new family of MLE-type Lp estimators for model order reduction in dynamical system identification is presented in this paper. The family of Lp distributions proposed in this work combines Lp2 (1 < p < 2) distributions to define the estimation criterion and reduce the estimated model complexity. Convergence and consistency properties of the estimator are analysed, and the model order reduction is established. Experimental results are presented and discussed for a real, complex vibrating dynamical system, and pseudo-linear models are considered.

  9. SOC EKF Estimation based on a Second-order LiFePO4 Battery Model

    Directory of Open Access Journals (Sweden)

    Zheng Zhu

    2013-08-01

Full Text Available Accurate battery State of Charge (SOC) estimation is of great significance for improving battery life and vehicle performance. An improved second-order battery model is proposed in this paper based on extensive LiFePO4 battery experiments. The model parameters were acquired from HPPC composite pulse tests at different temperatures, charging and discharging rates, and SOC levels. Based on the model, battery SOC is estimated with an Extended Kalman Filter (EKF). A comparison across three different pulse conditions shows that the average SOC estimation error of this algorithm is about 4.2%. The improved model reflects the dynamic performance of the battery well, and the SOC estimation algorithm offers high accuracy and good dynamic adaptability.

  10. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    Science.gov (United States)

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  11. A fractional order model for lead-acid battery crankability estimation

    Science.gov (United States)

    Sabatier, J.; Cugnet, M.; Laruelle, S.; Grugeon, S.; Sahut, B.; Oustaloup, A.; Tarascon, J. M.

    2010-05-01

With EV and HEV developments, battery monitoring systems have to meet the new requirements of the car industry. This paper deals with one of them: the battery's ability to start a vehicle, also called battery crankability. A fractional order model obtained by system identification is used to estimate the crankability of lead-acid batteries. Fractional order modelling permits an accurate simulation of the battery's electrical behaviour with a small number of parameters. It is demonstrated that the available battery power is correlated with the battery's crankability and its resistance. Moreover, the high-frequency gain of the fractional model can be used to evaluate the battery resistance. A battery crankability estimator using the battery resistance is then proposed. Finally, this technique is validated with various battery experimental data measured on test rigs and vehicles.

  12. Geometrical order-of-magnitude estimates for spatial curvature in realistic models of the Universe

    CERN Document Server

Buchert, Thomas; van Elst, Henk (DOI: 10.1007/s10714-009-0828-4)

    2009-01-01

The thoughts expressed in this article are based on remarks made by Jürgen Ehlers at the Albert-Einstein-Institut, Golm, Germany in July 2007. The main objective of this article is to demonstrate, in terms of plausible order-of-magnitude estimates for geometrical scalars, the relevance of spatial curvature in realistic models of the Universe that describe the dynamics of structure formation since the epoch of matter-radiation decoupling. We introduce these estimates with a commentary on the use of a quasi-Newtonian metric form in this context.

  13. Comparisons of Modeling and State of Charge Estimation for Lithium-Ion Battery Based on Fractional Order and Integral Order Methods

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2016-03-01

Full Text Available In order to properly manage the lithium-ion batteries of electric vehicles (EVs), it is essential to build a battery model and estimate the state of charge (SOC). In this paper, fractional order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, whose model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between the different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs simulate the output voltage more accurately and that the fractional order EKF (FOEKF) estimates the SOC more precisely under dynamic conditions.

  14. BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)

    Science.gov (United States)

    We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
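For n ≠ 1, the mixed-order rate law dL/dt = −kLⁿ integrates to the closed form L(t) = [L0^(1−n) − (1−n)kt]^(1/(1−n)), with exerted BOD y(t) = L0 − L(t). The sketch below fits that closed form to synthetic data by least squares; the paper itself uses Bayesian parameter estimation, and all numbers here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def bod_exerted(t, L0, k, n):
    """Mixed-order BOD model: dL/dt = -k L^n (n != 1), y(t) = L0 - L(t)."""
    L = (L0 ** (1 - n) - (1 - n) * k * t) ** (1.0 / (1 - n))
    return L0 - L

# Synthetic BOD time series generated with n = 1.5 plus observation noise.
t = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(2)
y = bod_exerted(t, 10.0, 0.3, 1.5) + rng.normal(0, 0.05, t.size)

# Bounds keep the optimizer in the region where the closed form is valid
# (n > 1 guarantees the bracketed term stays positive for these data).
popt, _ = curve_fit(bod_exerted, t, y, p0=[8.0, 0.2, 1.2],
                    bounds=([1.0, 0.01, 1.01], [50.0, 2.0, 3.0]))
L0_hat, k_hat, n_hat = popt
print(f"L0={L0_hat:.2f}, k={k_hat:.2f}, n={n_hat:.2f}")
```

Note that the units of k depend on the fitted order n, which is one reason the reaction order is usually treated as a free parameter rather than fixed in advance.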

  15. An Order Allocation Model based on the Competitive and Rational Pre-Estimate Behavior in Logistics Service Supply Chain

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2013-07-01

Full Text Available In the actual order allocation process of a Logistics Service Supply Chain (LSSC), Functional Logistics Service Providers (FLSPs) are strategic: they pre-estimate the order allocation results to decide whether or not to participate in the allocation. Considering a two-echelon LSSC consisting of one Logistics Service Integrator (LSI) and several competing FLSPs, we establish an order allocation optimization model of the LSSC based on the pre-estimation and competitive behavior of the FLSPs. The model considers three objectives: minimizing the cost of the LSI, maximizing the order satisfaction of the FLSPs, and matching the different logistics capacities of the FLSPs as closely as possible. Numerical analysis is performed to discuss the effects of competition among FLSPs on the order allocation results. The results show that, under the rational expectations equilibrium, competition among FLSPs helps improve the comprehensive performance of the LSSC.
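The cost-minimization objective alone already makes a small worked example possible: the LSI splits a total order among providers subject to their capacities. This is a deliberately simplified single-objective slice with invented costs and capacities; the paper's model is multi-objective and also scores FLSP satisfaction and capacity matching.

```python
from scipy.optimize import linprog

# Toy allocation: split a 100-unit order among three FLSPs to minimize cost.
cost = [8.0, 10.0, 9.0]          # unit cost charged by each FLSP (invented)
cap = [40.0, 60.0, 50.0]         # capacity of each FLSP (invented)
total = 100.0

res = linprog(c=cost,
              A_eq=[[1, 1, 1]], b_eq=[total],   # the whole order is allocated
              bounds=[(0, c) for c in cap])     # respect each capacity
print("allocation:", res.x.round(1), "total cost:", res.fun)
```

The solver fills the cheapest providers first (40 units at cost 8, 50 at cost 9) and gives the remainder to the most expensive one, for a total cost of 870.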

  16. Estimation and asymptotic inference in the first order AR-ARCH model

    DEFF Research Database (Denmark)

    Lange, Theis; Rahbek, Anders; Jensen, Søren Tolver

    2011-01-01

This article studies asymptotic properties of the quasi-maximum likelihood estimator (QMLE) for the parameters in the autoregressive (AR) model with autoregressive conditional heteroskedastic (ARCH) errors. A modified QMLE (MQMLE) is also studied. This estimator is based on truncation of individual terms of the likelihood function and is related to the recent so-called self-weighted QMLE in Ling (2007b). We show that the MQMLE is asymptotically normal irrespective of the existence of finite moments, as geometric ergodicity alone suffices. Moreover, our included simulations show that the MQMLE… for the QMLE to be asymptotically normal. Finally, geometric ergodicity for AR-ARCH processes is shown to hold under mild and classic conditions on the AR and ARCH processes.

  17. Estimating developmental states of tumors and normal tissues using a linear time-ordered model

    Directory of Open Access Journals (Sweden)

    Xuan Zhenyu

    2011-02-01

Full Text Available Abstract Background Tumor cells are considered to have an aberrant cell state, and some evidence indicates that different developmental states appear during tumorigenesis. Embryonic development and stem cell differentiation are ordered processes in which the sequence of events over time is highly conserved. The "cancer attractor" concept integrates normal developmental processes and tumorigenesis into a high-dimensional "cell state space" and provides a reasonable theoretical explanation of the relationship between these two biological processes. However, it is hard to describe such a relationship using existing experimental data; moreover, measuring the different developmental states is also difficult. Results Here, by applying a novel time-ordered linear model based on a co-bisector, which represents the joint direction of a series of vectors, we describe the trajectory of the development process as a line and show the different developmental states of tumor cells from a developmental-timescale perspective in a cell state space. This model was used to transform time-course developmental expression profiles of human ESCs and of normal mouse liver, ovary, and lung tissue into "cell developmental state lines". These cell state lines were then applied to observe the developmental states of different tumors and their corresponding normal samples. Mouse liver and ovarian tumors showed different degrees of similarity to early developmental stages. Similarly, human glioma cells and ovarian tumors became developmentally "younger". Conclusions The time-ordered linear model captures linearly projected development trajectories in a cell state space. It also reflects the tendency of gene expression to change over time from the developmental-timescale perspective, and our findings indicate different developmental states during tumorigenesis in different tissues.

  18. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is a consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co…

  19. Iterative Procedures for Exact Maximum Likelihood Estimation in the First-Order Gaussian Moving Average Model

    Science.gov (United States)

    1990-11-01

findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or… "Marquardt methods" to perform linear and nonlinear estimations. One idea in this area by Box and Jenkins (1976) was the "backcasting" procedure to evaluate

  20. The application of the reduced order model Kalman filter to motion estimation of degraded image sequences. M.S. Thesis

    Science.gov (United States)

    Simpson, Elizabeth C.

    1989-01-01

    Motion estimation is a field of great interest because of its many applications in areas such as robotics and image coding. The optic flow method is one such scheme which, although fairly accurate, is prone to error in the presence of noise. This thesis describes the use of the reduced order model Kalman filter (ROMKF) in reducing errors in displacement estimation due to degradation of the sequence. The implementation of filtering and motion estimation algorithms on the SUN workstation is also discussed. Results from preliminary testing were used to determine the degrees of freedom available for the ROMKF in the SUN software. The tests indicated that increasing the state to the left leads to slight improvement over the minimum state case. Therefore, the software uses the minimum model, with the option of adding states to the left only. The ROMKF was then used in conjunction with a hierarchical pel recursive motion estimation algorithm. Applying the ROMKF to the degraded displacements themselves generally yielded slight improvements in cases with noise degradation and noise plus blur. Filtering the images of the degraded sequence prior to motion estimation was less effective in these cases. Both methods performed badly in the case of blur alone, resulting in increased displacement errors. This is thought to be due in part to filter artifacts. Some improvements were obtained by varying the filter parameters when filtering the displacements directly. This result suggests that further study in varying filter parameters may lead to better results. The results of this thesis indicate that the ROMKF can play a part in reducing motion estimation errors from degraded sequences. However, more work needs to be done before the use of the ROMKF can be a practical solution.

  1. Binary Logistic Regression Modeling of Idle CO Emissions in Order to Estimate Predictors Influences in Old Vehicle Park

    Directory of Open Access Journals (Sweden)

    Branimir Milosavljević

    2015-01-01

Full Text Available This paper determines, by experiment, the idle CO emissions of 1,785 vehicles powered by spark-ignition engines, in order to verify the correctness of emission values for a representative sample of vehicles in Serbia. The permissible emission limits were considered for three (3) fitted binary logistic regression (BLR) models; the key reason for such an analysis is to find the predictors that can have a crucial influence on the accuracy of estimating whether such vehicles have correct emissions or not. Summarizing the research results, we found that vehicles produced in Serbia (hereinafter referred to as "domestic vehicles") cause more pollution than imported cars (hereinafter referred to as "foreign vehicles"), although domestic vehicles have a lower average age and mileage. Another trend was observed: low-power vehicles and vehicles produced before 1992 are potentially more serious polluters.
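A binary logistic regression of a pass/fail emission outcome on vehicle predictors can be sketched as follows. The predictors, coefficients, and data are entirely synthetic, merely constructed to mirror the reported trends (domestic, low-power, and pre-1992 vehicles failing more often); this is not the paper's dataset or fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1785
# Hypothetical predictors standing in for the paper's (origin, power, year...).
domestic = rng.integers(0, 2, n)              # 1 = produced in Serbia
power_kw = rng.normal(55, 15, n).clip(20, 120)
pre_1992 = rng.integers(0, 2, n)

# Synthetic "fails CO limit" outcome consistent with the reported trends.
logit = -1.5 + 0.8 * domestic - 0.02 * (power_kw - 55) + 0.9 * pre_1992
fail = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([domestic, power_kw, pre_1992])
model = LogisticRegression(max_iter=1000).fit(X, fail)
print("odds ratios:", np.exp(model.coef_).round(2))
```

Exponentiated coefficients are odds ratios: values above 1 (domestic origin, pre-1992 production) raise the odds of failing the limit, values below 1 (higher engine power) lower them.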

  2. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

Full Text Available We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known order estimation methods. These show that the proposed method outperforms the other previously published subspace methods and is more robust to colored noise.
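The geometric idea can be illustrated with a toy computation of principal angles: for the correct order, the candidate signal subspace is nearly orthogonal to the empirical noise subspace, while an overestimated order puts a model vector inside it. Candidate frequencies are assumed given (in practice they would come from a periodogram or a parametric search); here they are hardcoded as the true frequencies plus deliberately wrong extras, and the 0.5 cosine threshold is an illustrative choice, not the paper's statistic.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 400, 50
true_freqs = [0.10, 0.20, 0.32]          # normalized frequencies
amps = [2.0, 1.5, 1.0]

n = np.arange(N)
x = sum(a * np.exp(2j * np.pi * (f * n + rng.random()))
        for a, f in zip(amps, true_freqs))
x += 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# Hankel snapshot matrix; its left singular vectors give the empirical subspaces.
X = np.array([x[i:i + m] for i in range(N - m + 1)]).T
U = np.linalg.svd(X)[0]

# Candidate frequency list: the true ones plus two wrong extras.
candidates = [0.10, 0.20, 0.32, 0.05, 0.45]

def steering(f):
    return np.exp(2j * np.pi * f * np.arange(m))

order = 0
for k in range(1, len(candidates) + 1):
    A = np.column_stack([steering(f) for f in candidates[:k]])
    Qa = np.linalg.qr(A)[0]                  # orthonormal model subspace basis
    G = U[:, k:]                             # candidate noise subspace
    # cos of the smallest principal angle between model and noise subspaces:
    cos_min = np.linalg.svd(G.conj().T @ Qa, compute_uv=False)[0]
    if cos_min < 0.5:                        # model ~ orthogonal to noise space
        order = k
print("estimated number of sinusoids:", order)
```

For k up to 3 the model vectors live in the signal subspace and the smallest principal angle stays near 90°; at k = 4 the spurious steering vector falls almost entirely inside the noise subspace, so the criterion rejects it.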

  3. Nitrogen Removal in a Horizontal Subsurface Flow Constructed Wetland Estimated Using the First-Order Kinetic Model

    Directory of Open Access Journals (Sweden)

    Lijuan Cui

    2016-11-01

Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface flow constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based rate constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01), but the NH4+-N load was not (R2 = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01). The NH4+-N area-based rate was correlated with DO (R2 = 0.69, p < 0.01), suggesting that the factors influencing the N removal rate in this wetland follow Liebig's law of the minimum.
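The first-order areal model behind these numbers relates outlet to inlet concentration through C_out = C_in · exp(−k/q), with k the area-based rate constant (m/yr) and q the hydraulic loading rate (m/yr), and normalizes k to 20 °C via k_T = k_20 · θ^(T−20). A small worked example with invented concentrations and an assumed θ (the paper's fitted θ for NO3−-N was 1.004):

```python
import numpy as np

def area_based_k(c_in, c_out, q):
    """First-order areal rate constant k [m/yr] from inlet/outlet
    concentrations and hydraulic loading rate q [m/yr]:
    C_out = C_in * exp(-k / q)."""
    return q * np.log(c_in / c_out)

def k_at_20(k_t, temp, theta):
    """Normalize a rate constant to 20 degC via k_T = k_20 * theta**(T - 20)."""
    return k_t / theta ** (temp - 20.0)

# Illustrative numbers, not the paper's data: nitrate falling from 12 to
# 4 mg/L at a hydraulic loading rate of 30 m/yr, measured at 14 degC with
# an assumed theta of 1.05.
k_meas = area_based_k(12.0, 4.0, 30.0)
k20 = k_at_20(k_meas, 14.0, 1.05)
print(f"k at 14 degC: {k_meas:.2f} m/yr, normalized k20: {k20:.2f} m/yr")
```

With θ > 1 the correction raises a rate constant measured below 20 °C, which is how seasonally varying measurements are put on a common footing before comparison.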

  4. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills

    DEFF Research Database (Denmark)

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-01-01

Methane (CH4) generated from low-organic waste degradation at four Danish landfills was estimated with three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the (IPCC and Afvalzorg) model-required waste… categories. In general, the single-phase model, LandGEM, significantly overestimated CH4 generation because it applies default values that are too high for the key parameters to handle low-organic waste scenarios. The key parameters were the biochemical CH4 potential (BMP) and the CH4 generation rate constant (k). Implications: Landfill operators use first-order decay (FOD) models to estimate methane (CH4) generation. A single-phase model (LandGEM) and a traditional model (IPCC) can overestimate generation when handling a low-organic waste scenario. Site-specific data were important and capable of calibrating key parameters…
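The sensitivity to the two key parameters is easy to see in an annualized FOD calculation, where each year's deposited waste M contributes k · L0 · M · e^(−k·age) to the generation rate. The sketch below compares LandGEM's high CAA defaults (k = 0.05 yr⁻¹, L0 = 170 m³ CH4/Mg) against lower, hypothetical site-specific values; the waste masses and the low-organic parameter values are invented for illustration, not the study's calibrated numbers.

```python
import numpy as np

def fod_methane(masses, k, L0, years):
    """Annualized first-order decay CH4 generation (LandGEM-style): waste
    deposited in year i contributes k * L0 * M_i * exp(-k * age) at year t."""
    masses = np.asarray(masses, float)
    out = []
    for t in years:
        age = t - np.arange(len(masses))      # years since each deposit
        active = age > 0
        out.append(np.sum(k * L0 * masses[active] * np.exp(-k * age[active])))
    return np.array(out)

masses = [50_000] * 10                        # Mg landfilled in years 0..9
q_high = fod_methane(masses, 0.05, 170.0, years=range(1, 31))  # CAA defaults
q_low = fod_methane(masses, 0.02, 40.0, years=range(1, 31))    # hypothetical
print(f"peak CH4 [m^3/yr]: default {q_high.max():.0f}, site-specific {q_low.max():.0f}")
```

Because generation scales linearly with L0 and roughly linearly with k near the peak, overstated defaults translate almost directly into overstated generation, which is the overestimation pattern the study reports for low-organic Danish waste.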

  5. Estimation of basis line-integrals in a spectral distortion-modeled photon counting detector using low-order polynomial approximation of x-ray transmittance.

    Science.gov (United States)

    Lee, Okkyun; Kappler, Steffen; Polster, Christoph; Taguchi, Katsuyuki

    2016-10-26

Photon counting detector (PCD)-based computed tomography exploits spectral information from a transmitted x-ray spectrum to estimate basis line-integrals. The recorded spectrum, however, is distorted and deviates from the transmitted spectrum due to the spectral response effect (SRE). Therefore, the SRE needs to be compensated for when estimating basis line-integrals. One approach is to incorporate the SRE model, together with an incident spectrum, into the PCD measurement model; the other is to perform a calibration process that inherently includes both the SRE and the incident spectrum. A maximum likelihood estimator can be used for the former approach, which guarantees asymptotic optimality; however, its heavy computational burden is a concern. Calibration-based estimators are a form of the latter approach. They can be very efficient; however, a heuristic calibration process needs to be addressed. In this paper, we propose a computationally efficient three-step estimator for the former approach using a low-order polynomial approximation of the x-ray transmittance. The low-order polynomial approximation changes the original nonlinear estimation method into a two-step linearized approach followed by an iterative bias correction step. We show that the calibration process is required only for the bias correction step and prove that it converges to the unbiased solution under practical assumptions. Extensive simulation studies validate the proposed method and show that its estimation results are comparable to those of the ML estimator while the computational time is reduced substantially.

  6. Higher Order Spreading Models

    CERN Document Server

    Argyros, S A; Tyros, K

    2012-01-01

We introduce the higher order spreading models associated to a Banach space $X$. Their definition is based on $\ff$-sequences $(x_s)_{s\in\ff}$, with $\ff$ a regular thin family, and on plegma families. We show that the higher order spreading models of a Banach space $X$ form an increasing transfinite hierarchy $(\mathcal{SM}_\xi(X))_{\xi<\omega_1}$. Each $\mathcal{SM}_\xi(X)$ contains all spreading models generated by $\ff$-sequences $(x_s)_{s\in\ff}$ with the order of $\ff$ equal to $\xi$. We also provide a study of the fundamental properties of the hierarchy.

  7. Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems

    KAUST Repository

    N'Doye, Ibrahima

    2015-07-01

This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using a fractional-order adaptive fault observer. Sufficient conditions for the asymptotic convergence of the fractional-order state estimation error and of the conventional integer-order and fractional-order fault estimation errors are derived in terms of linear matrix inequality (LMI) formulations by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach, where the fractional order α satisfies 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.

  8. Estimation of Methane Production with Ordinary First Order Models%普通一级模型产甲烷量估算

    Institute of Scientific and Technical Information of China (English)

    王登玉

    2015-01-01

Household garbage from a landfill in Guangzhou is used as the research object. The methane production of the landfilled household garbage from the start of operation to the end of 2010 is estimated with ordinary first-order models; the models' year-by-year methane production estimates are compared with each other, and the estimated value for 2010 is compared with the measured value. The results show that the model with the largest year-by-year estimates is the SWANA first-order model and the one with the smallest is LandGEM (EPA, 2005); the LandGEM (EPA, 2005) estimate is closest to the measured value, making it the most appropriate of the ordinary first-order models for estimating methane production at this landfill.

  9. Estimating the accuracy of a reduced-order model for the calculation of fractional flow reserve (FFR).

    Science.gov (United States)

    Boileau, Etienne; Pant, Sanjay; Roobottom, Carl; Sazonov, Igor; Deng, Jingjing; Xie, Xianghua; Nithiarasu, Perumal

    2017-06-09

Image-based noninvasive fractional flow reserve (FFR) is an emergent approach to determine the functional relevance of coronary stenoses. The present work aimed to determine the feasibility of using a method based on coronary computed tomography angiography (CCTA) and reduced-order models (0D-1D) for the evaluation of coronary stenoses. The reduced-order methodology (cFFRRO) was kept as simple as possible and did not include pressure drop or stenosis models. The geometry definition was incorporated into the physical model used to solve coronary flow and pressure. cFFRRO was assessed on a virtual cohort of 30 coronary artery stenoses in 25 vessels and compared with a standard approach based on 3D computational fluid dynamics (cFFR3D). In this proof-of-concept study, we sought to investigate the influence of geometry and boundary conditions on the agreement between both methods. Performance on a per-vessel level showed a good correlation between both methods (Pearson's product-moment R = 0.885, P < 0.01), when using cFFR3D as the reference standard. The 95% limits of agreement were -0.116 and 0.08, and the mean bias was -0.018 (SD = 0.05). Our results suggest no appreciable difference between cFFRRO and cFFR3D with respect to lesion length and/or aspect ratio. At a fixed aspect ratio, however, stenosis severity and shape appeared to be the most critical factors accounting for differences in both methods. Despite the assumptions inherent to the 1D formulation, asymmetry did not seem to affect the agreement. The choice of boundary conditions is critical in obtaining a functionally significant drop in pressure. Our initial data suggest that this approach may be part of a broader risk assessment strategy aimed at increasing the diagnostic yield of cardiac catheterisation for in-hospital evaluation of haemodynamically significant stenoses. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Second order Standard Model

    Directory of Open Access Journals (Sweden)

    Johnny Espin

    2015-06-01

    It is known, though not widely appreciated, that one can describe fermions using a Lagrangian that is second order in derivatives instead of the first order Dirac one. In this description the propagator is scalar, and the complexity is shifted to the vertex, which contains a derivative operator. In this paper we rewrite the Lagrangian of the fermionic sector of the Standard Model in such second order form. The new Lagrangian is extremely compact, and is obtained from the usual first order Lagrangian by integrating out all primed (or dotted) 2-component spinors. It thus contains just half of the 2-component spinors that appear in the usual Lagrangian, which suggests a new perspective on unification. We sketch an SU(2)×SU(4)⊂SO(9) unified theory that is natural in this framework.

  11. Reduced Order Podolsky Model

    CERN Document Server

    Thibes, Ronaldo

    2016-01-01

    We perform the canonical and path integral quantizations of a lower-order derivatives model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivatives order permeating the equations of motion, Dirac brackets and effective action.

  12. A Novel Observer for Lithium-Ion Battery State of Charge Estimation in Electric Vehicles Based on a Second-Order Equivalent Circuit Model

    Directory of Open Access Journals (Sweden)

    Bizhong Xia

    2017-08-01

    Accurate state of charge (SOC) estimation can prolong lithium-ion battery life and improve its performance in practice. This paper proposes a new method for SOC estimation. The second-order resistor-capacitor (2RC) equivalent circuit model (ECM) is applied to describe the dynamic behavior of the lithium-ion battery when deriving the state space equations. A novel method for SOC estimation is then presented. This method does not require any matrix calculation, so the computational cost can be very low, making it more suitable for hardware implementation. The Federal Urban Driving Schedule (FUDS), the New European Driving Cycle (NEDC), and the West Virginia Suburban Driving Schedule (WVUSUB) experiments are carried out to evaluate the performance of the proposed method. Experimental results show that the SOC estimation error can converge to the 3% error boundary within 30 seconds when the initial SOC estimation error is 20%, and the proposed method can maintain an estimation error of less than 3% with 1% voltage noise and 5% current noise. Further, the proposed method has excellent robustness against parameter disturbance. It also has higher estimation accuracy than the extended Kalman filter (EKF), but with decreased hardware requirements and a faster convergence rate.
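    The 2RC model structure described in this abstract can be sketched in a few lines. The parameter values, OCV curve, and sample time below are illustrative assumptions, not those identified in the paper:

```python
import math

# Hypothetical 2RC equivalent-circuit parameters (illustrative only)
R0, R1, C1, R2, C2 = 0.05, 0.015, 2000.0, 0.02, 10000.0
Q_AH = 2.5          # assumed cell capacity in Ah
DT = 1.0            # sample time in s

def ocv(soc):
    """Toy open-circuit-voltage curve; a real cell needs a fitted OCV table."""
    return 3.0 + 1.2 * soc

def step(state, current):
    """Propagate (soc, v1, v2) one sample under a discharge current in A."""
    soc, v1, v2 = state
    a1, a2 = math.exp(-DT / (R1 * C1)), math.exp(-DT / (R2 * C2))
    v1 = a1 * v1 + R1 * (1 - a1) * current       # RC branch 1 relaxation
    v2 = a2 * v2 + R2 * (1 - a2) * current       # RC branch 2 relaxation
    soc = soc - DT * current / (3600.0 * Q_AH)   # coulomb counting
    v_term = ocv(soc) - v1 - v2 - R0 * current   # terminal voltage
    return (soc, v1, v2), v_term

state = (1.0, 0.0, 0.0)
for _ in range(60):              # one minute of 1 A discharge
    state, v = step(state, 1.0)
print(round(state[0], 4), round(v, 3))
```

    A SOC estimator then corrects the coulomb-counted state using the mismatch between measured and predicted terminal voltage; the paper's contribution is doing this correction without matrix operations.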

  13. Efficient particle-based estimation of marginal costs in a first-order macroscopic traffic flow model

    NARCIS (Netherlands)

    Zuurbier, F.S.; Hegyi, A.; Hoogendoorn, S.P.

    2010-01-01

    Marginal costs in traffic networks are the extra costs incurred to the system as the result of extra traffic. Marginal costs are required frequently e.g. when considering system optimal traffic assignment or tolling problems. When explicitly considering spillback in a traffic flow model, one can use

  14. Electrochemical state and internal variables estimation using a reduced-order physics-based model of a lithium-ion cell and an extended Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL

    2015-03-15

    This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
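    As a minimal illustration of the estimation principle (an extended Kalman filter correcting a model prediction with voltage measurements), the following scalar sketch uses a toy one-state model with an assumed OCV curve, not the paper's physics-based cell model:

```python
import math, random

random.seed(0)
Q_AH, DT = 2.5, 1.0                 # assumed capacity (Ah) and sample time (s)

def ocv(soc):                       # toy measurement function (assumption)
    return 3.0 + 1.2 * soc

def docv(soc):                      # its Jacobian d(OCV)/d(SOC)
    return 1.2

x_true, x_hat, P = 0.9, 0.7, 0.1    # true SOC, initial estimate, covariance
q, r = 1e-7, 1e-4                   # process / measurement noise variances
for k in range(600):
    i_meas = 1.0 + 0.02             # measured current with a +20 mA sensor bias
    x_true -= DT * 1.0 / (3600 * Q_AH)
    y = ocv(x_true) + random.gauss(0.0, 0.01)
    # EKF predict: propagate state and covariance
    x_hat -= DT * i_meas / (3600 * Q_AH)
    P += q
    # EKF update: correct with the voltage innovation
    H = docv(x_hat)
    K = P * H / (H * P * H + r)
    x_hat += K * (y - ocv(x_hat))
    P *= (1 - K * H)
print(round(abs(x_hat - x_true), 4))
```

    Despite the deliberately wrong initial estimate and the biased current sensor, the voltage feedback pulls the estimate close to the true state.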

  15. Continuous Time Model Estimation

    OpenAIRE

    Carl Chiarella; Shenhuai Gao

    2004-01-01

    This paper introduces an easy to follow method for continuous time model estimation. It serves as an introduction on how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...

  16. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. 
The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation

  17. Regularized Reduced Order Models

    CERN Document Server

    Wells, David; Xie, Xuping; Iliescu, Traian

    2015-01-01

    This paper puts forth a regularization approach for the stabilization of proper orthogonal decomposition (POD) reduced order models (ROMs) for the numerical simulation of realistic flows. Two regularized ROMs (Reg-ROMs) are proposed: the Leray ROM (L-ROM) and the evolve-then-filter ROM (EF-ROM). These new Reg-ROMs use spatial filtering to smooth (regularize) various terms in the ROMs. Two spatial filters are used: a POD projection onto a POD subspace (Proj) and a new POD differential filter (DF). The four Reg-ROM/filter combinations are tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient and the three-dimensional flow past a circular cylinder at a low Reynolds number (Re = 100). Overall, the most accurate Reg-ROM/filter combination is EF-ROM-DF. Furthermore, the DF generally yields better results than Proj. Finally, the four Reg-ROM/filter combinations are computationally efficient and generally more accurate than the standard Galerkin ROM.
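    The POD basis underlying these ROMs is obtained from the left singular vectors of a snapshot matrix. A minimal sketch on synthetic snapshot data (the two-mode test field below is an assumption for illustration):

```python
import numpy as np

# Toy snapshot matrix: a 1D field built from exactly two spatial modes
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
snapshots = np.column_stack([
    np.sin(np.pi * x) * np.cos(2 * np.pi * tk)
    + 0.3 * np.sin(3 * np.pi * x) * np.sin(4 * np.pi * tk)
    for tk in t
])                                    # shape (n_space, n_snapshots)

# POD basis = left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

def project(r):
    """Project the snapshots onto the rank-r POD subspace."""
    Ur = U[:, :r]
    return Ur @ (Ur.T @ snapshots)

errors = [np.linalg.norm(snapshots - project(r)) for r in (1, 2)]
print(errors)
```

    Because the synthetic field contains exactly two modes, the rank-2 projection error drops to machine precision; the Reg-ROM spatial filters discussed above act on a POD basis of this kind.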

  18. Comparing Fit and Reliability Estimates of a Psychological Instrument Using Second-Order CFA, Bifactor, and Essentially Tau-Equivalent (Coefficient Alpha) Models via AMOS 22

    Science.gov (United States)

    Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey

    2015-01-01

    Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…

  20. A-posteriori error estimation for second order mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard

    2012-01-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  1. Optical method of atomic ordering estimation

    Energy Technology Data Exchange (ETDEWEB)

    Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)

    2013-12-04

    It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  2. Zero Order Estimates for Analytic Functions

    CERN Document Server

    Zorin, Evgeniy

    2011-01-01

    The primary goal of this paper is to provide a general multiplicity estimate. Our main theorem allows one to reduce a proof of a multiplicity lemma to the study of ideals stable under some appropriate transformation of a polynomial ring. In particular, this result leads to a new link between the theory of polarized algebraic dynamical systems and transcendental number theory. On the other hand, it allows one to establish an improvement of Nesterenko's conditional result on solutions of systems of differential equations. We also deduce, under some condition on stable varieties, the optimal multiplicity estimate in the case of generalized Mahler's functional equations, previously studied by Mahler, Nishioka, Topfer and others. Further, analyzing stable ideals we prove the unconditional optimal result in the case of linear functional systems of generalized Mahler's type. The latter result generalizes a famous theorem of Nishioka (1986) previously conjectured by Mahler (1969), and simultaneously it gives a counterpart in t...

  3. Comparison between estimated methane production values of zero order models and the measured value

    Institute of Scientific and Technical Information of China (English)

    Wang Dengyu; Jia Yueran; Wang Shuna

    2015-01-01

    The dynamic methane production estimation models for landfills can be divided into zero order, first order and second order models. Zero order models include the SWANA zero order model, a COD-based zero order model, a generalized molecular zero order model and a biodegradability-based zero order model. The yearly methane production of one of Guangzhou's domestic solid waste sanitary landfills was estimated with these zero order models, and the estimates for 2010 were compared with the value measured that year. The results show that the COD-based zero order model deviates most from the measured value, while the SWANA zero order model deviates least. The SWANA zero order model is therefore the most suitable zero order model for estimating landfill methane production.
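    A generic zero-order model of the kind compared above assumes a constant methane generation rate per tonne of waste over a fixed active period. The rate constant and period below are hypothetical, not the study's fitted values:

```python
# Hypothetical zero-order kinetics (illustrative values, not the cited study's):
# each tonne of waste releases methane at a constant specific rate K for T years
# after placement, then stops.
K = 5.0      # m^3 CH4 per tonne per year
T = 20       # active generation period in years

def methane_in_year(placements, year):
    """Sum contributions of all waste cohorts still generating in `year`.

    placements: dict {placement_year: tonnes placed that year}"""
    return sum(K * tonnes
               for placed, tonnes in placements.items()
               if placed <= year < placed + T)

placements = {2000: 100_000, 2001: 120_000}
print(methane_in_year(placements, 2005))   # both cohorts still active
```

    First and second order models differ in that the specific rate decays with waste age instead of staying constant.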

  4. Higher order moments of lensing convergence - I. Estimate from simulations

    CERN Document Server

    Vicinanza, M; Maoli, R; Scaramella, R; Er, X

    2016-01-01

    Large area lensing surveys are expected to make it possible to use cosmic shear tomography as a tool to severely constrain cosmological parameters. To this end, one typically relies on second order statistics such as the two-point correlation function and its Fourier counterpart, the power spectrum. Moving a step forward, we wonder whether and to what extent higher order statistics can improve the lensing Figure of Merit (FoM). In this first paper of a series, we investigate how second, third and fourth order lensing convergence moments can be measured and used as probes of the underlying cosmological model. We use simulated data and investigate the impact on moment estimates of the map reconstruction procedure, the cosmic variance, and the intrinsic ellipticity noise. We demonstrate that, under realistic assumptions, it is indeed possible to use higher order moments as a further lensing probe.

  5. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required for the evaluation of each model, where the Meta-Model requires the least computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to exhaustive sampling for the majority of methods.
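    Latin Hypercube sampling, used above for the reduced simulation, stratifies each input dimension so that every stratum is sampled exactly once. A minimal sketch:

```python
import random

def latin_hypercube(n, d, seed=0):
    """Draw n points in [0, 1]^d with one point per 1/n stratum per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                       # assign strata in random order
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

pts = latin_hypercube(10, 2)
print(pts[:2])
```

    Compared with plain Monte-Carlo, this guarantees coverage of each marginal distribution with far fewer samples.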

  6. Bayesian Estimation for the Order of INAR(q) Model

    Institute of Scientific and Technical Information of China (English)

    Miao Guan-hong; Wang De-hui

    2016-01-01

    In this paper, we consider the problem of determining the order of the INAR(q) model on the basis of Bayesian estimation theory. The Bayesian estimator for the order is given with respect to a squared-error loss function. The consistency of the estimator is discussed. The results of a simulation study for the estimation method are presented.

  7. Estimating nonlinear models

    Science.gov (United States)

    Billings, S. A.

    1988-03-01

    Time and frequency domain identification methods for nonlinear systems are reviewed. Parametric methods, prediction error methods, structure detection, model validation, and experiment design are discussed. Identification of a liquid level system, a heat exchanger, and a turbocharge automotive diesel engine are illustrated. Rational models are introduced. Spectral analysis for nonlinear systems is treated. Recursive estimation is mentioned.

  8. Using integral transforms to estimate higher order derivatives

    OpenAIRE

    Bradley, David M.

    2007-01-01

    Integral transformations are used to estimate high order derivatives of various special functions. Applications are given to numerical integration, where estimates of high order derivatives of the integrand are needed to achieve bounds on the error. The main idea is to find a suitable integral representation of the function whose derivatives are to be estimated, differentiate repeatedly under the integral sign, and estimate the resulting integral.
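    One standard instance of this idea (used here purely as an illustration, not necessarily the representation chosen in the paper) is the Cauchy integral formula, which turns an n-th derivative into an integral over a circle that can be evaluated numerically:

```python
import cmath, math

def nth_derivative_at_zero(f, n, radius=1.0, m=256):
    """Estimate f^(n)(0) for an analytic f via the Cauchy integral formula
        f^(n)(0) = n!/(2*pi) * integral of f(R e^{i t}) e^{-i n t} / R^n dt
    evaluated with the trapezoidal rule on m equispaced points of the circle."""
    total = 0.0 + 0.0j
    for k in range(m):
        theta = 2 * math.pi * k / m
        z = radius * cmath.exp(1j * theta)
        total += f(z) * cmath.exp(-1j * n * theta)
    return (math.factorial(n) / (m * radius ** n)) * total

# 10th derivative of exp at 0 is exactly 1
est = nth_derivative_at_zero(cmath.exp, 10)
print(abs(est - 1.0) < 1e-6)
```

    The trapezoidal rule is spectrally accurate on periodic integrands, which is why a few hundred points suffice even for a 10th derivative.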

  9. Parameters and Fractional Differentiation Orders Estimation for Linear Continuous-Time Non-Commensurate Fractional Order Systems

    KAUST Repository

    Belkhatir, Zehor

    2017-05-31

    This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions method and the first-order Newton method. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of a smooth unknown input and the fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed, which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.

  10. Markov chain order estimation with conditional mutual information

    Science.gov (United States)

    Papapetrou, M.; Kugiumtzis, D.

    2013-04-01

    We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of K symbols, we define CMI of order m, Ic(m), as the mutual information of two variables in the chain being m time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of Ic(m), where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing m and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We present the appropriateness of CMI-testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator) and a likelihood ratio test for increasing orders using ϕ-divergence. The order criterion of CMI-testing turns out to be superior for orders larger than one, but its effectiveness for large orders depends on data availability. In view of the results from the simulations, we interpret the estimated orders by the CMI-testing and the other criteria on genes and intergenic regions of DNA chains.
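    A plug-in version of this procedure is straightforward to sketch: estimate Ic(m) from empirical entropies and compare it against a permutation null. The test chain and all tuning values below are illustrative assumptions:

```python
import math, random
from collections import Counter

def entropy(counts, n):
    """Plug-in Shannon entropy (nats) from a Counter of observed tuples."""
    return -sum(c / n * math.log(c / n) for c in counts.values())

def cmi(seq, m):
    """I(x_t ; x_{t-m} | x_{t-1}, ..., x_{t-m+1}) from plug-in entropies."""
    trips = [tuple(seq[i - m:i + 1]) for i in range(m, len(seq))]
    n = len(trips)
    hxz = entropy(Counter(t[1:] for t in trips), n)    # (intermediate, x_t)
    hyz = entropy(Counter(t[:-1] for t in trips), n)   # (x_{t-m}, intermediate)
    hz = entropy(Counter(t[1:-1] for t in trips), n)   # intermediate block
    hxyz = entropy(Counter(trips), n)
    return hxz + hyz - hz - hxyz

def order_test(seq, m, n_perm=100, alpha=0.05):
    """Permutation test: is Ic(m) significantly above its shuffled null?"""
    obs = cmi(seq, m)
    null = [cmi(random.sample(seq, len(seq)), m) for _ in range(n_perm)]
    p = sum(v >= obs for v in null) / n_perm
    return p < alpha

random.seed(1)
# first-order binary chain: next symbol strongly depends on the previous one
seq = [0]
for _ in range(3000):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])
print(order_test(seq, 1), order_test(seq, 2))
```

    Estimating the order as the last m for which the test rejects, as in the abstract, reduces to running this test for increasing m.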

  11. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  12. Approximate Deconvolution Reduced Order Modeling

    CERN Document Server

    Xie, Xuping; Wang, Zhu; Iliescu, Traian

    2015-01-01

    This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3})

  13. Markov Chain Order estimation with Conditional Mutual Information

    CERN Document Server

    Papapetrou, Maria; 10.1016/j.physa.2012.12.017.

    2013-01-01

    We introduce the Conditional Mutual Information (CMI) for the estimation of the Markov chain order. For a Markov chain of $K$ symbols, we define CMI of order $m$, $I_c(m)$, as the mutual information of two variables in the chain being $m$ time steps apart, conditioning on the intermediate variables of the chain. We find approximate analytic significance limits based on the estimation bias of CMI and develop a randomization significance test of $I_c(m)$, where the randomized symbol sequences are formed by random permutation of the components of the original symbol sequence. The significance test is applied for increasing $m$ and the Markov chain order is estimated by the last order for which the null hypothesis is rejected. We present the appropriateness of CMI-testing on Monte Carlo simulations and compare it to the Akaike and Bayesian information criteria, the maximal fluctuation method (Peres-Shields estimator) and a likelihood ratio test for increasing orders using $\\phi$-divergence. The order criterion of...

  14. Joint input-response estimation for structural systems based on reduced-order models and vibration data from a limited number of sensors

    Science.gov (United States)

    Lourens, E.; Papadimitriou, C.; Gillijns, S.; Reynders, E.; De Roeck, G.; Lombaert, G.

    2012-05-01

    An algorithm is presented for jointly estimating the input and state of a structure from a limited number of acceleration measurements. The algorithm extends an existing joint input-state estimation filter, derived using linear minimum-variance unbiased estimation, to applications in structural dynamics. The filter has the structure of a Kalman filter, except that the true value of the input is replaced by an optimal estimate. No prior information on the dynamic evolution of the input forces is assumed and no regularization is required, permitting online application. The effectiveness and accuracy of the proposed algorithm are demonstrated using data from a numerical cantilever beam example as well as a laboratory experiment on an instrumented steel beam and an in situ experiment on a footbridge.

  15. A Non-parametric Approach to Estimating Location Parameters under Simple Order

    Institute of Scientific and Technical Information of China (English)

    孙旭

    2005-01-01

    This paper deals with estimating parameters under simple order when samples come from location models. Based on the idea of the Hodges-Lehmann estimator (H-L estimator), a new approach to estimating the parameters is proposed, which differs from the classical L1 isotonic regression and L2 isotonic regression. An algorithm to compute the estimators is given. Simulations by the Monte-Carlo method are used to compare the likelihood functions with respect to L1 estimators and weighted isotonic H-L estimators.
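    The one-sample Hodges-Lehmann estimator on which this approach builds is the median of all pairwise (Walsh) averages; the weighted isotonic variant of the paper is not reproduced here:

```python
import statistics
from itertools import combinations_with_replacement

def hodges_lehmann(xs):
    """Classic one-sample Hodges-Lehmann location estimate:
    the median of all Walsh averages (x_i + x_j)/2 with i <= j."""
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(xs, 2)]
    return statistics.median(walsh)

sample = [1.1, 0.9, 1.0, 1.2, 0.8, 9.0]   # one gross outlier
print(hodges_lehmann(sample), statistics.mean(sample))
```

    Unlike the sample mean, the estimate is barely moved by the single outlier, which is why H-L-type estimators are attractive under ordering constraints as well.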

  16. Accurate estimates of solutions of second order recursions

    NARCIS (Netherlands)

    Mattheij, R.M.M.

    1975-01-01

    Two important types of two dimensional matrix-vector and second order scalar recursions are studied. Both types possess two kinds of solutions (to be called forward and backward dominant solutions). For the directions of these solutions sharp estimates are derived, from which the solutions themselves

  17. Estimation of the fourth-order dispersion coefficient β4

    Institute of Scientific and Technical Information of China (English)

    Jing Huang; Jianquan Yao

    2012-01-01

    The fourth-order dispersion coefficients of fibers are estimated by iterations around the third-order dispersion and the high-order nonlinear terms in the nonlinear Schrödinger equation, solved by a Green's function approach. Our theoretical evaluation demonstrates that the fourth-order dispersion coefficient varies only slightly with distance. The fibers record β4 values of about 0.002, 0.003, and 0.00032 ps4/km for SMF, NZDSF and DCF, respectively. In the zero-dispersion regime, the high-order nonlinear effect (higher than self-steepening) has a strong impact on the transmitted short pulse. This red-shift accelerates the symmetrical split of the pulse, although the effect degrades rapidly as β2 increases. Thus, the contributions to β4 of SMF, NZDSF, and DCF can be neglected.

  18. Minimax estimates of parameters in a semiparametric regression model based on higher order differences

    Institute of Scientific and Technical Information of China (English)

    Wang Hui; Zuo Guoxin

    2013-01-01

    This paper gives conditions under which difference-based linear estimators of the regression parameter β in a semiparametric model are minimax, and shows that the ordinary least squares estimator β̂diff based on higher order differences of the observations is a minimax linear estimator of β. In addition, for the case where the difference terms exhibit multicollinearity, conditions under which the difference-based ridge regression estimator β̂diff(k) retains the minimax optimality property are given.

  19. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results for multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.
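    A matrix-based baseline of the kind these multi-dimensional schemes are compared against can be sketched as an eigenvalue-gap detector on the sample covariance (the synthetic mixing model below is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 3 sources mixed into 8 channels plus weak noise
n, m, d = 8, 2000, 3
A = rng.standard_normal((n, d))        # mixing matrix
S = rng.standard_normal((d, m))        # source signals
X = A @ S + 0.05 * rng.standard_normal((n, m))

# Eigenvalues of the sample covariance, sorted in decreasing order
eig = np.sort(np.linalg.eigvalsh(X @ X.T / m))[::-1]

# Crude matrix-based MOS: model order = position of the largest
# log-eigenvalue drop between signal and noise subspaces
gaps = np.diff(np.log(eig))
order = int(np.argmin(gaps)) + 1
print(order)
```

    Information-theoretic criteria such as AIC or MDL refine this idea; the tensor-based schemes of the article exploit the multi-dimensional structure that this matrix view discards.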

  1. Costate estimation for dynamic systems of the second order

    Institute of Scientific and Technical Information of China (English)

    WEN Hao; JIN DongPing; HU HaiYan

    2009-01-01

    The dynamics of a mechanical system in the Lagrange space yields a set of differential equations of the second order and involves far fewer variables and constraints than that described in the state space. This paper presents a so-called Legendre pseudo-spectral (PS) approach for directly estimating the costates of the Bolza problem of optimal control of a set of dynamic equations of the second order. Under a set of closure conditions, it is proved that the Karush-Kuhn-Tucker (KKT) multipliers satisfy the same conditions as those determined by collocating the costate equations of the second order. Hence, the KKT multipliers can be used to estimate the costates of the Bolza problem via a simple linear mapping. The proposed approach can be used to check the optimality of the direct solution for a trajectory optimization problem involving the dynamic equations of the second order and to remove any conversion of the dynamic system from the second order to the first order. The new approach is demonstrated via two classical benchmark problems.

  2. Parameter estimation of stable distribution based on zero-order statistics

    Science.gov (United States)

    Chen, Jian; Chen, Hong; Cai, Xiaoxia; Weng, Pengfei; Nie, Hao

    2017-08-01

    With the increasing complexity of the channel, many impulsive noise signals appear in the real channel. The statistical properties of such processes deviate significantly from the Gaussian distribution, and the alpha-stable distribution provides a very useful theoretical tool for modeling them. This paper focuses on parameter estimation methods for the alpha-stable distribution. First, the basic theory of the alpha-stable distribution is introduced. Then, the concepts of the logarithmic moment and the geometric power are presented. Finally, parameter estimation of the alpha-stable distribution is realized based on zero-order statistics (ZOS). This method offers better robustness and precision.
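
The log-moment idea behind zero-order statistics can be sketched as follows. The sketch assumes a symmetric alpha-stable process and uses the classical relation Var(log|X|) = (pi^2/6)(1/alpha^2 + 1/2) from the log-moment literature, together with Chambers-Mallows-Stuck sampling; it is a minimal illustration, not the paper's exact estimator.

```python
import math, random

def sas_sample(alpha, n, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variates."""
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2, math.pi / 2)
        w = rng.expovariate(1.0)
        out.append(math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
                   * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))
    return out

def estimate_alpha(samples):
    """Log-moment estimator: Var(log|X|) = (pi^2 / 6) (1/alpha^2 + 1/2)."""
    y = [math.log(abs(v)) for v in samples if v != 0.0]
    m = sum(y) / len(y)
    var = sum((v - m) ** 2 for v in y) / (len(y) - 1)
    return (6.0 * var / math.pi ** 2 - 0.5) ** -0.5

rng = random.Random(1)
x = sas_sample(1.5, 100_000, rng)
print(round(estimate_alpha(x), 2))
```

Because log|X| always has finite moments, the estimator stays well-behaved even when the second moment of X itself is infinite, which is the practical appeal of zero-order statistics.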

  3. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla

    2015-08-12

    This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.

  4. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and to provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method.

  5. A Parametric Procedure for Ultrametric Tree Estimation from Conditional Rank Order Proximity Data.

    Science.gov (United States)

    Young, Martin R.; DeSarbo, Wayne S.

    1995-01-01

    A new parametric maximum likelihood procedure is proposed for estimating ultrametric trees for the analysis of conditional rank order proximity data. Technical aspects of the model and the estimation algorithm are discussed, and Monte Carlo results illustrate its application. A consumer psychology application is also examined. (SLD)

  6. High-order ionospheric effects on electron density estimation from Fengyun-3C GPS radio occultation

    Science.gov (United States)

    Li, Junhai; Jin, Shuanggen

    2017-03-01

    GPS radio occultation can estimate ionospheric electron density and total electron content (TEC) with high spatial resolution, e.g., China's recent Fengyun-3C GPS radio occultation. However, high-order ionospheric delays are normally ignored. In this paper, the high-order ionospheric effects on electron density estimation from the Fengyun-3C GPS radio occultation data are estimated and investigated using the NeQuick2 ionosphere model and the IGRF12 (International Geomagnetic Reference Field, 12th generation) geomagnetic model. Results show that the high-order ionospheric delays have large effects on electron density estimation, up to 800 el cm^-3, which should be corrected in high-precision ionospheric density estimation and applications. The second-order ionospheric effects are more significant, particularly at 250-300 km, while third-order ionospheric effects are much smaller. Furthermore, the high-order ionospheric effects are related to the location, the local time, the radio occultation azimuth and the solar activity. The large high-order ionospheric effects are found in the low-latitude area and in the daytime as well as during strong solar activities. The second-order ionospheric effects have a maximum positive value when the radio occultation azimuth is around 0-20°, and a maximum negative value when the radio occultation azimuth is around -180 to -160°. Moreover, the geomagnetic storm also affects the high-order ionospheric delay, which should be carefully corrected.

  7. Accurate estimation of third-order moments from turbulence measurements

    Directory of Open Access Journals (Sweden)

    J. J. Podesta

    2009-02-01

    Politano and Pouquet's law, a generalization of Kolmogorov's four-fifths law to incompressible MHD, makes it possible to measure the energy cascade rate in incompressible MHD turbulence by means of third-order moments. In hydrodynamics, accurate measurement of third-order moments requires large amounts of data because the probability distributions of velocity differences are nearly symmetric and the third-order moments are relatively small. Measurements of the energy cascade rate in solar wind turbulence have recently been performed for the first time, but without careful consideration of the accuracy or statistical uncertainty of the required third-order moments. This paper investigates the statistical convergence of third-order moments as a function of the sample size N. It is shown that the accuracy of the third moment ⟨(δv∥)³⟩ depends on the number of correlation lengths spanned by the data set, and a method of estimating the statistical uncertainty of the third moment is developed. The technique is illustrated using both wind tunnel data and solar wind data.
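
One simple way to attach an uncertainty to a third moment is the batch-means device sketched below; the batch count and the mildly skewed synthetic increments are illustrative assumptions, not the paper's procedure.

```python
import math, random

def third_moment_se(data, n_batches=20):
    """Third moment of the data and its standard error via batch means."""
    n = len(data) // n_batches
    means = []
    for b in range(n_batches):
        chunk = data[b * n:(b + 1) * n]
        means.append(sum(v ** 3 for v in chunk) / n)
    m = sum(means) / n_batches
    var = sum((v - m) ** 2 for v in means) / (n_batches - 1)
    return m, math.sqrt(var / n_batches)

rng = random.Random(0)
# Mildly skewed synthetic "increments": the third moment is small relative
# to the spread, which is exactly why convergence is slow.
deltas = [rng.gauss(0, 1) + 0.1 * rng.gauss(0, 1) ** 2 for _ in range(200_000)]
m3, se = third_moment_se(deltas)
print(f"third moment = {m3:.3f} +/- {se:.3f}")
```

Since the standard error shrinks only as 1/sqrt(N), halving the relative error takes roughly four times the data, which is why the point estimate should always be quoted together with its uncertainty.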

  8. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  9. Estimating Functions and Semiparametric Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    1996-01-01

    The thesis is divided in two parts. The first part treats some topics of the estimation theory for semiparametric models in general. There the classic optimality theory is reviewed and presented in a form suitable for the further developments given later. Further, the theory of estimating functions... contained in this part of the thesis constitutes an original contribution. There can be found a detailed characterization of the class of regular estimating functions, a calculation of efficient regular asymptotic linear estimating sequences (i.e. the classical optimality theory) and a discussion... of the attainability of the bounds for the concentration of regular asymptotic linear estimating sequences by estimators derived from estimating functions. The main class of models considered in the second part of the thesis (chapter 5) are constructed by assuming that the expectation of a number of given square...

  10. Estimation of Wind Turbulence Using Spectral Models

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Knudsen, Torben; Bak, Thomas

    2011-01-01

    The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind... In this paper, a method is presented for estimation of the turbulence. The spectral model of the wind is used in order to provide the estimations. The suggested estimation approach is applied to a case study in which the objective is to estimate wind turbulence at desired points using measurements of wind... speed outside the wind field. The results show that the method is able to provide estimations which explain more than 50% of the wind turbulence from a distance of about 300 meters...

  11. Reduced order ARMA spectral estimation of ocean waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Witz, J.A.; Lyons, G.J.

    An Autoregressive Moving Average (ARMA) algorithm is presented which is based on the calculation of modal energies. We apply this system identification technique to reduce time series data or target spectra to a few parameters, namely the coefficients of the ARMA model... are determined using the modified Yule-Walker equations, and first and second order real modes are then obtained from the autoregressive polynomial. The energy in each mode is then determined. Considering only the higher energy modes, the autoregressive part...
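
The Yule-Walker step can be sketched for a plain AR(2) process as follows; this is a generic illustration of the equations, not the authors' modal-energy reduction or the modified (higher-lag) variant they use.

```python
import random

def acov(x, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve the order-2 Yule-Walker equations for the AR coefficients."""
    r0, r1, r2 = acov(x, 0), acov(x, 1), acov(x, 2)
    # [[r0, r1], [r1, r0]] [a1, a2]^T = [r1, r2]^T, solved by Cramer's rule.
    det = r0 * r0 - r1 * r1
    return (r0 * r1 - r1 * r2) / det, (r0 * r2 - r1 * r1) / det

# Simulate x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t and recover (0.75, -0.5).
rng = random.Random(2)
x = [0.0, 0.0]
for _ in range(50_000):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + rng.gauss(0, 1))
a1, a2 = yule_walker_ar2(x[2:])
print(round(a1, 2), round(a2, 2))
```

The roots of the fitted autoregressive polynomial then give the (real or complex) modes whose energies can be ranked for model reduction.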

  12. Mode choice model parameters estimation

    OpenAIRE

    Strnad, Irena

    2010-01-01

    The present work focuses on parameter estimation for two mode choice models: the multinomial logit and the EVA 2 model, where four different modes and five different trip purposes are taken into account. A mode choice model describes the behavioral aspect of mode choice making and enables its application in a traffic model. The mode choice model includes the trip factors affecting mode choice for each mode and their relative importance to the choice made. When trip factor values are known, it...

  13. A POSTERIORI ESTIMATOR OF NONCONFORMING FINITE ELEMENT METHOD FOR FOURTH ORDER ELLIPTIC PERTURBATION PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Shuo Zhang; Ming Wang

    2008-01-01

    In this paper, we consider the nonconforming finite element approximations of fourth order elliptic perturbation problems in two dimensions. We present an a posteriori error estimator under certain conditions, and give an h-version adaptive algorithm based on the error estimation. The local behavior of the estimator is analyzed as well. This estimator works for several nonconforming methods, such as the modified Morley method and the modified Zienkiewicz method, and under some assumptions it is an optimal one. Numerical examples with a linear stationary Cahn-Hilliard-type equation as a model problem are reported.

  14. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  15. Regional fuzzy chain model for evapotranspiration estimation

    Science.gov (United States)

    Güçlü, Yavuz Selim; Subyani, Ali M.; Şen, Zekai

    2017-01-01

    Evapotranspiration (ET) is one of the main hydrological cycle components and is of extreme importance for water resources management and agriculture, especially in arid and semi-arid regions. In this study, regional ET estimation models based on fuzzy logic (FL) principles are suggested, where the first stage includes the ET calculation via the Penman-Monteith equation, which produces reliable results. In the second phase, ET estimates are produced according to a conventional FL inference system model. In this paper, the regional fuzzy model (RFM) and the regional fuzzy chain model (RFCM) are proposed, using adjacent stations' data to fill in missing records. The application of the two models produces reliable and satisfactory results for mountainous and sea region locations in the Kingdom of Saudi Arabia, but comparatively the RFCM estimates are more accurate. In general, the mean absolute percentage error is less than 10%, which is acceptable in practical applications.

  16. Diagonal Kernel Point Estimation of nth-Order Discrete Volterra-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Pirani Massimiliano

    2004-01-01

    The estimation of the diagonal elements of a Wiener model kernel is a well-known problem. The new operators and notations proposed here aim at the implementation of efficient and accurate nonparametric algorithms for the identification of diagonal points. The formulas presented here allow a direct implementation of Wiener kernel identification up to the nth order. Their efficiency is demonstrated by simulations conducted on discrete Volterra systems up to fifth order.

  17. Consistent Estimation of Order for Regression in the Presence of Serial Correlation and Heteroscedasticity

    Institute of Scientific and Technical Information of China (English)

    CHEN Min; WU Guo-fu; QI Quan-yue

    2001-01-01

    In this paper, we consider a multiple regression model in the presence of serial correlation and heteroscedasticity. We establish the convergence rate of an efficient estimator of the autoregressive coefficients suggested by Harvey and Robinson (1988). We propose a method to identify the order of the serial correlation and prove that it is strongly consistent. Simulation results show that the order identification method is effective.

  18. Model error estimation in ensemble data assimilation

    Directory of Open Access Journals (Sweden)

    S. Gillijns

    2007-01-01

    A new methodology is proposed to estimate and account for systematic model error in linear filtering as well as in nonlinear ensemble based filtering. Our results extend the work of Dee and Todling (2000) on constant bias errors to time-varying model errors. In contrast to existing methodologies, the new filter can also deal with the case where no dynamical model for the systematic error is available. In the latter case, the applicability is limited by a matrix rank condition which has to be satisfied in order for the filter to exist. The performance of the filter developed in this paper is limited by the availability and the accuracy of observations and by the variance of the stochastic model error component. The effect of these aspects on the estimation accuracy is investigated in several numerical experiments using the Lorenz (1996) model. Experimental results indicate that the availability of a dynamical model for the systematic error significantly reduces the variance of the model error estimates, but has only minor effect on the estimates of the system state. The filter is able to estimate additive model error of any type, provided that the rank condition is satisfied and that the stochastic errors and measurement errors are significantly smaller than the systematic errors. The results of this study are encouraging. However, it remains to be seen how the filter performs in more realistic applications.
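
The constant-bias case can be illustrated by state augmentation: a scalar system with an unmodeled constant offset, where the Kalman filter estimates the bias alongside the state. The scalar dynamics, noise levels, and step count below are illustrative assumptions, not the paper's ensemble filter.

```python
import random

# Scalar truth: x_{k+1} = a x_k + b + w_k, but the model omits the constant
# bias b.  Augmenting the state to [x, b] with F = [[a, 1], [0, 1]] and
# H = [1, 0] lets a standard Kalman filter estimate the bias.
a, b, q, r = 0.9, 0.5, 0.01, 0.1
rng = random.Random(3)

xh, bh = 0.0, 0.0                      # state and bias estimates
P = [[1.0, 0.0], [0.0, 1.0]]           # error covariance of [x, b]
x = 0.0
for _ in range(2000):
    x = a * x + b + rng.gauss(0, q ** 0.5)      # truth
    z = x + rng.gauss(0, r ** 0.5)              # measurement
    # Predict (process noise only on x; the bias is modeled as constant).
    xh, bh = a * xh + bh, bh
    P = [[a * a * P[0][0] + 2 * a * P[0][1] + P[1][1] + q,
          a * P[0][1] + P[1][1]],
         [a * P[0][1] + P[1][1], P[1][1]]]
    # Update with the innovation z - xh.
    s = P[0][0] + r
    k0, k1 = P[0][0] / s, P[1][0] / s
    innov = z - xh
    xh, bh = xh + k0 * innov, bh + k1 * innov
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
print(round(bh, 2))
```

Because the bias is observable through the innovations, its estimate converges even though it is never measured directly, which is the mechanism the paper generalizes to time-varying errors.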

  19. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    NARCIS (Netherlands)

    Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with correspond
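
FORM itself can be sketched with the standard Hasofer-Lind-Rackwitz-Fiessler iteration on a toy linear limit state whose reliability index is known in closed form; the limit-state function below is an assumption for illustration, not the pultrusion model.

```python
import math

def form_beta(g, grad, n, tol=1e-10, max_iter=100):
    """Hasofer-Lind-Rackwitz-Fiessler iteration in standard normal space."""
    u = [0.0] * n
    for _ in range(max_iter):
        gu, dg = g(u), grad(u)
        norm2 = sum(d * d for d in dg)
        # Project onto the linearization g(u) + dg . (u_new - u) = 0.
        lam = (sum(d * ui for d, ui in zip(dg, u)) - gu) / norm2
        u_new = [lam * d for d in dg]
        done = max(abs(p - q) for p, q in zip(u_new, u)) < tol
        u = u_new
        if done:
            break
    return math.sqrt(sum(ui * ui for ui in u))

# Linear limit state g(u) = 3 - u1 - 2 u2 with standard normal u:
# the exact reliability index is 3 / sqrt(5).
beta = form_beta(lambda u: 3 - u[0] - 2 * u[1], lambda u: [-1.0, -2.0], 2)
print(round(beta, 4))
```

For a nonlinear limit state (such as a process model), the same iteration finds the most probable failure point, and the failure probability is approximated as Phi(-beta).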

  20. On-line adaptive battery impedance parameter and state estimation considering physical principles in reduced order equivalent circuit battery models. Part 1. Requirements, critical review of methods and modeling

    Science.gov (United States)

    Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe

    2014-08-01

    Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms, based on a weighted recursive least quadratic squares parameter estimator, that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. This first paper investigates the general requirements on a BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis is placed on real-time capability and memory-optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
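
The recursive least squares core of such estimators can be sketched on a deliberately simplified battery model v = OCV - R*i; the model form, the OCV and R values, the forgetting factor, and the noise levels are illustrative assumptions, not the authors' full impedance model.

```python
import random

def rls_step(theta, P, phi, y, lam=0.99):
    """One step of recursive least squares with forgetting factor lam."""
    # Gain k = P phi / (lam + phi^T P phi).
    Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(2))
    k = [v / denom for v in Pphi]
    err = y - sum(t * p for t, p in zip(theta, phi))
    theta = [t + ki * err for t, ki in zip(theta, k)]
    P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

rng = random.Random(4)
ocv, R = 3.7, 0.05          # "true" open-circuit voltage and ohmic resistance
theta = [0.0, 0.0]          # running estimates of [ocv, R]
P = [[1000.0, 0.0], [0.0, 1000.0]]
for _ in range(500):
    i = rng.uniform(-2.0, 2.0)                  # load current
    v = ocv - R * i + rng.gauss(0, 0.002)       # terminal voltage measurement
    theta, P = rls_step(theta, P, [1.0, -i], v)
print([round(t, 3) for t in theta])
```

The forgetting factor trades tracking speed against noise sensitivity, which matters on the low-cost microcontrollers targeted by the paper since the whole update is a handful of multiply-adds per sample.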

  1. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    From direct observations and facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain's motivational circuit. Thus, the proposed method can serve as a novel, efficient way of estimating human affective states.
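
Higher-order multivariable polynomial regression reduces to ordinary least squares on expanded monomial features, as in the sketch below; the synthetic "arousal" surface over two skin-conductance features is purely illustrative, not the authors' fitted model.

```python
import random

def poly_features(x1, x2, degree=2):
    """All monomials x1^i * x2^j with i + j <= degree."""
    return [x1 ** i * x2 ** j
            for i in range(degree + 1) for j in range(degree + 1 - i)]

def lstsq(X, y):
    """Solve the normal equations X^T X w = X^T y by Gauss-Jordan elimination."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[i][p] / A[i][i] for i in range(p)]

rng = random.Random(5)
# Hypothetical nonlinear "arousal" surface over two skin-conductance features.
truth = lambda a, b: 1.0 + 2.0 * a - 0.5 * b + 0.3 * a * b + 0.8 * b * b
pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(400)]
X = [poly_features(a, b) for a, b in pts]
y = [truth(a, b) + rng.gauss(0, 0.01) for a, b in pts]
w = lstsq(X, y)
pred = sum(wi * fi for wi, fi in zip(w, poly_features(0.5, -0.5)))
print(round(pred, 3))   # truth(0.5, -0.5) = 2.375
```

At higher degrees the normal equations become ill-conditioned, so a QR or SVD solve is usually preferred in practice; the point here is only that the model stays linear in its coefficients.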

  3. IDC Reengineering Phase 2 & 3 Rough Order of Magnitude (ROM) Cost Estimate Summary (Leveraged NDC Case).

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James M.; Prescott, Ryan; Dawson, Jericah M.; Huelskamp, Robert M.

    2014-11-01

    Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia-executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate.

    ROM Cost Estimate Disclaimer: Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.

  4. Fractional Order Models of Industrial Pneumatic Controllers

    Directory of Open Access Journals (Sweden)

    Abolhassan Razminia

    2014-01-01

    This paper addresses a new approach for modeling versatile controllers in industrial automation and process control systems, such as pneumatic controllers. Some fractional order dynamical models are developed for pressure and pneumatic systems with a bellows-nozzle-flapper configuration. In the light of fractional calculus, a fractional order derivative-derivative (FrDD) controller and an integral-derivative (FrID) controller are remodeled. Numerical simulations illustrate the application of the obtained theoretical results in simple examples.

  5. Partial order of frustrated Potts model

    Energy Technology Data Exchange (ETDEWEB)

    Igarashi, Ryo [CCSE, Japan Atomic Energy Agency, Higashi-Ueno, Taito, Tokyo 110-0015 (Japan); Ogata, Masao, E-mail: igarashi.ryo@jaea.go.j [Department of Physics, University of Tokyo, Hongo, Bunkyo, Tokyo 133-0033 (Japan)

    2010-01-01

    We investigate a 4-state ferromagnetic Potts model with a special type of geometrical frustration on a three-dimensional diamond lattice. We find that the model undergoes an unconventional phase transition: half of the spins in the system order in a two-dimensional hexagonal-sheet-like structure while the remaining half of the spins stay disordered. The ordered sheets and the disordered sheets stack one after another. We obtain a fairly large residual entropy using the Wang-Landau Monte Carlo simulation.

  6. Investigation of Effectiveness of Order Review and Release Models in Make to Order Supply Chain

    Directory of Open Access Journals (Sweden)

    Kundu Kaustav

    2016-01-01

    Nowadays customisation has become more common due to the vast requirements from customers, for which industries are trying to use the make-to-order (MTO) strategy. Due to the high variation in the process, workload control models are extensively used in job-shop companies, which usually adopt the MTO strategy. Some authors have tried to implement workload control models, i.e. order review and release systems, in non-repetitive manufacturing companies where there is a dominant flow in production. Those models perform well on the shop floor, but their performance has never been investigated in high-variation situations like the MTO supply chain. This paper starts with an introduction to the particular issues in MTO companies and a general overview of the order review and release systems widely used in industry. Two order review and release systems, the Limited and Balanced models, which are particularly suitable for flow shop systems, are applied to an MTO supply chain, where the processing times are difficult to estimate due to high variation. Simulation results show that the Balanced model performs much better than the Limited model if the processing times can be estimated precisely.

  7. Mathematical modelling of fractional order circuits

    CERN Document Server

    Moreles, Miguel Angel

    2016-01-01

    In this work a classical derivation of fractional order circuit models is presented. Generalized constitutive equations in terms of fractional Riemann-Liouville derivatives are introduced into Maxwell's equations. Next the Kirchhoff voltage law is applied in an RCL circuit configuration. A fractional differential equation model is obtained with Caputo derivatives. Thus standard initial conditions apply.
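
Fractional derivatives of the kind appearing in such circuit models can be evaluated numerically with the Grünwald-Letnikov scheme; the sketch below checks it against the known half-derivative of f(t) = t and is a generic illustration, not the paper's derivation.

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-3):
    """Grunwald-Letnikov approximation of the order-alpha derivative at t."""
    n = int(t / h)
    coeff, total = 1.0, 0.0
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        # Recurrence for the weights (-1)^k C(alpha, k).
        coeff *= (k - alpha) / (k + 1)
    return total / h ** alpha

# For f(t) = t (with f(0) = 0) the half-derivative is 2 * sqrt(t / pi).
approx = gl_fractional_derivative(lambda t: t, 0.5, 1.0)
exact = 2 * math.sqrt(1.0 / math.pi)
print(round(approx, 3), round(exact, 3))
```

The scheme makes the memory property of fractional operators explicit: the value at t depends on the whole history f(t - kh), which is why fractional circuit models capture hereditary effects that integer-order RCL models cannot.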

  8. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    Energy Technology Data Exchange (ETDEWEB)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au [Department of Mechanical Engineering, The University of Melbourne, Melbourne, Victoria 3010 (Australia); Tinney, Charles E. [Center for Aeromechanics Research, The University of Texas at Austin, Austin, Texas 78712 (United States)

    2014-05-15

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
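
The POD step feeding the HOSE analysis can be sketched with the snapshot method via an SVD; the synthetic two-structure "flow field" below is an illustrative assumption, not the jet data of the paper.

```python
import numpy as np

# Snapshot POD: columns of X are flow-field snapshots; the POD modes are the
# left singular vectors, and the coefficients a_k(t) = U_r^T x(t) give the
# low-dimensional time series fed to the (single-input) spectral analysis.
rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 200)
space = np.linspace(0, 1, 64)
# Two coherent structures plus weak noise.
X = (np.outer(np.sin(2 * np.pi * space), np.cos(5 * t))
     + 0.3 * np.outer(np.cos(4 * np.pi * space), np.sin(9 * t))
     + 0.01 * rng.standard_normal((64, 200)))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s ** 2 / np.sum(s ** 2)
r = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1   # 99% energy rank
coeffs = U[:, :r].T @ X          # r time series of POD coefficients
print(r, coeffs.shape)
```

Truncating at a modest energy threshold is what reduces the multi-degree-of-freedom field to the single-input setting that the higher-order (Volterra-type) kernels require.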

  9. Methods in Model Order Reduction (MOR) field

    Institute of Scientific and Technical Information of China (English)

    刘志超

    2014-01-01

    Nowadays, models of systems may be quite large, even up to tens of thousands of orders. In spite of increasing computational power, direct simulation of these large-scale systems may be impractical. Thus, to meet industry requirements, analytically tractable and computationally cheap models must be designed. This is the essential task of Model Order Reduction (MOR). This article describes the basics of MOR optimization and various ways of designing MOR, and draws conclusions about existing methods. In addition, it proposes some heuristic directions for future work.

  10. Higher-Order Item Response Models for Hierarchical Latent Traits

    Science.gov (United States)

    Huang, Hung-Yu; Wang, Wen-Chung; Chen, Po-Hsi; Su, Chi-Ming

    2013-01-01

    Many latent traits in the human sciences have a hierarchical structure. This study aimed to develop a new class of higher order item response theory models for hierarchical latent traits that are flexible in accommodating both dichotomous and polytomous items, to estimate both item and person parameters jointly, to allow users to specify…

  11. Low Order Empirical Galerkin Models for Feedback Flow Control

    Science.gov (United States)

    Tadmor, Gilead; Noack, Bernd

    2005-11-01

    In model-based feedback control, restrictions on model order and complexity stem from several generic considerations: real-time computation, the ability to either measure or reliably estimate the state in real time, and the avoidance of sensitivity to noise, uncertainty and numerical ill-conditioning are high on that list. Empirical POD Galerkin models are attractive in the sense that they are simple and (optimally) efficient, but are notoriously fragile, and commonly fail to capture transients and control effects. In this talk we review recent efforts to enhance empirical Galerkin models and make them suitable for feedback design. Enablers include 'subgrid' estimation of turbulence and pressure representations, tunable models using modes from multiple operating points, and actuation models. An invariant manifold defines the model's dynamic envelope. It must be respected and can be exploited in observer and control design. These ideas are benchmarked in the cylinder wake system and validated by a systematic DNS investigation of a 3-dimensional Galerkin model of the controlled wake.

  12. Joint estimation of the fractional differentiation orders and the unknown input for linear fractional non-commensurate system

    KAUST Repository

    Belkhatir, Zehor

    2015-11-05

    This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as the input of a fractional model of the neurovascular coupling, along with the fractional differentiation orders are presented in both noise-free and noisy cases.

  13. Algebraic Lens Distortion Model Estimation

    Directory of Open Access Journals (Sweden)

    Luis Alvarez

    2010-07-01

    A very important property of the usual pinhole model for camera projection is that 3D lines in the scene are projected to 2D lines. Unfortunately, wide-angle lenses (especially low-cost lenses) may introduce a strong barrel distortion, which makes the usual pinhole model fail. Lens distortion models try to correct such distortion. We propose an algebraic approach to the estimation of the lens distortion parameters based on the rectification of lines in the image. Using the proposed method, the lens distortion parameters are obtained by minimizing a polynomial of total degree four in several variables. We perform numerical experiments using calibration patterns and real scenes to show the performance of the proposed method.
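
For intuition, a common one-parameter radial distortion model and its inversion can be sketched as follows; the model form, the k1 value, and the fixed-point undistortion are illustrative assumptions, not the paper's algebraic line-rectification method.

```python
# One-parameter radial model: an undistorted point at radius r_u maps to
# r_d = r_u * (1 + k1 * r_u^2); k1 < 0 gives barrel distortion.
def distort(x, y, k1):
    f = 1 + k1 * (x * x + y * y)
    return x * f, y * f

def undistort(xd, yd, k1, iters=20):
    """Invert the radial model by fixed-point iteration."""
    x, y = xd, yd
    for _ in range(iters):
        f = 1 + k1 * (x * x + y * y)
        x, y = xd / f, yd / f
    return x, y

xd, yd = distort(0.4, -0.3, k1=-0.2)
xu, yu = undistort(xd, yd, k1=-0.2)
print(round(xu, 6), round(yu, 6))
```

Estimating k1 so that imaged curves of known straight edges become straight after undistortion is exactly the line-rectification criterion the paper formulates algebraically.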

  14. Order Quantity Distributions: Estimating an Adequate Aggregation Horizon

    Directory of Open Access Journals (Sweden)

    Eriksen Poul Svante

    2016-09-01

    Full Text Available In this paper an investigation into the demand faced by a company in the form of customer orders is performed from both an explorative numerical and an analytical perspective. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods are based on the assumption that demand in the form of customer orders is independently and identically distributed and stems from symmetrical distributions. To investigate and illustrate the need to aggregate demand in order to live up to these assumptions, a simple methodological framework for investigating the validity of the assumptions and for analyzing the behavior of orders is developed. The paper also presents an analytical approach to identifying the aggregation horizon needed to achieve stable demand. Furthermore, a case study application of the presented framework is presented and conclusions are drawn.

  15. Dynamical models of happiness with fractional order

    Science.gov (United States)

    Song, Lei; Xu, Shiyun; Yang, Jianying

    2010-03-01

    The present study focuses on a dynamical model of happiness described through fractional-order differential equations. By categorizing people of different personalities and different impact factors of memory (IFM) with different sets of model parameters, it is demonstrated via numerical simulations that such fractional-order models can exhibit various behaviors with and without external circumstances. Moreover, control and synchronization problems of this model are discussed, which correspond to the control of emotion as well as emotion synchronization in real life. This study is an endeavor to combine psychological knowledge with control problems and system theories, and some implications for psychotherapy as well as hints toward a personal approach to life are proposed.
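
Fractional-order dynamics of this kind are commonly simulated with the Grünwald-Letnikov discretization. The sketch below is a generic example, not the paper's happiness model: it integrates the scalar system D^α x = -x with α = 0.9 and shows the characteristic slow, power-law-like decay that distinguishes fractional from integer-order relaxation.

```python
import numpy as np

def gl_simulate(alpha, f, x0, h, n):
    """Simulate D^alpha x = f(x) with the Grunwald-Letnikov scheme.

    The binomial weights (-1)^j * C(alpha, j) are built with the
    recursion c_j = c_{j-1} * (1 - (alpha + 1)/j).
    """
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        hist = np.dot(c[1:k + 1], x[k - 1::-1])   # memory term over the whole past
        x[k] = h ** alpha * f(x[k - 1]) - hist
    return x

# relaxation of a fractional system: decays slower than exp(-t)
x = gl_simulate(alpha=0.9, f=lambda v: -v, x0=1.0, h=0.01, n=1000)
```

The full-history memory term is what makes fractional models attractive for modeling memory effects (the IFM of the abstract), at the cost of O(n²) simulation work in this naive form.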

  16. A high-order electromagnetic gyrokinetic model

    CERN Document Server

    Miyato, N

    2013-01-01

    A high-order extension is presented for the electromagnetic gyrokinetic formulation in which the parallel canonical momentum is taken as one of the phase space coordinates. The high-order displacement vector associated with the guiding-center transformation should be considered in the long wavelength regime. This yields additional terms in the gyrokinetic Hamiltonian which lead to modifications of the gyrokinetic Poisson and Amp\`ere equations. In addition, the high-order piece of the guiding-center transformation for the parallel canonical momentum should also be kept in the electromagnetic model. The high-order piece contains the Ba\~nos drift effect and further modifies the gyrokinetic Amp\`ere equation.

  17. Health Parameter Estimation with Second-Order Sliding Mode Observer for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Xiaodong Chang

    2017-07-01

    Full Text Available In this paper the problem of health parameter estimation in an aero-engine is investigated using an unknown input observer-based methodology, implemented by a second-order sliding mode observer (SOSMO). Unlike conventional state estimator-based schemes, such as Kalman filters (KF) and sliding mode observers (SMO), the proposed scheme uses a “reconstruction signal” to estimate health parameters modeled as artificial inputs, and is not only applicable to long-time health degradation but also reacts much more quickly to abrupt fault cases. In view of the inevitable uncertainties in engine dynamics and modeling, a weighting matrix is created to minimize their effect on estimation using linear matrix inequalities (LMI). A big step toward uncertainty modeling is taken compared with our previous SMO-based work, in that uncertainties are considered in a more practical form. Moreover, to avoid chattering in the sliding modes, the super-twisting algorithm (STA) is employed in the observer design. Various simulations are carried out comparing the KF-based scheme, the SMO-based scheme from our earlier research, and the proposed method. The results consistently demonstrate the capabilities and advantages of the proposed approach in health parameter estimation.

  18. Order Parameters of the Dilute A Models

    CERN Document Server

    Warnaar, S O; Seaton, K A; Nienhuis, B

    1993-01-01

    The free energy and local height probabilities of the dilute A models with broken $\\Integer_2$ symmetry are calculated analytically using inversion and corner transfer matrix methods. These models possess four critical branches. The first two branches provide new realisations of the unitary minimal series and the other two branches give a direct product of this series with an Ising model. We identify the integrable perturbations which move the dilute A models away from the critical limit. Generalised order parameters are defined and their critical exponents extracted. The associated conformal weights are found to occur on the diagonal of the relevant Kac table. In an appropriate regime the dilute A$_3$ model lies in the universality class of the Ising model in a magnetic field. In this case we obtain the magnetic exponent $\\delta=15$ directly, without the use of scaling relations.

  19. The improved 10th order QED expression for a_{\mu}: new results and related estimates

    CERN Document Server

    Kataev, A L

    2006-01-01

    New estimates of the 10th order QED corrections to the muon anomalous magnetic moment are presented. The estimates include the information on definite improved 10th order QED contributions to $a_{\mu}$, calculated by Kinoshita and Nio. The final estimates are in good agreement with the ones given recently by Kinoshita.

  20. FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW

    Institute of Scientific and Technical Information of China (English)

    Haiying WANG; Xinyu ZHANG; Guohua ZOU

    2009-01-01

    In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without consideration of the uncertainty from the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, which incorporates model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.

  1. Third-Order Doppler Parameter Estimation of Bistatic Forward-Looking SAR Based on Modified Cubic Phase Function

    Science.gov (United States)

    Li, Wenchao; Yang, Jianyu; Huang, Yulin; Kong, Lingjiang

    For Doppler parameter estimation of forward-looking SAR, the third-order Doppler parameter cannot be neglected. In this paper, the azimuth signal of the transmitter-fixed bistatic forward-looking SAR is modeled as a cubic polynomial phase signal (CPPS) and multiple time-overlapped CPPSs, and the modified cubic phase function is presented to estimate the third-order Doppler parameter. By combining the cubic phase function (CPF) with the Radon transform, the method gives an accurate estimation of the third-order Doppler parameter. Simulations validate the effectiveness of the algorithm.
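
The core of the cubic phase function can be sketched as follows. This is a plain CPF on a noiseless synthetic CPPS (coefficient values are assumed for illustration) without the Radon-transform extension proposed in the paper: for phase a1*t + a2*t² + a3*t³, the CPF at time t peaks at Ω = 2*a2 + 6*a3*t, so peak locations at two instants give the third-order coefficient.

```python
import numpy as np

def cpf(s, n, omegas):
    """Cubic phase function |CPF(n, w)| = |sum_m s[n+m] s[n-m] exp(-j w m^2)|."""
    mmax = min(n, len(s) - 1 - n)
    m = np.arange(mmax + 1)
    prod = s[n + m] * s[n - m]
    return np.abs(np.exp(-1j * np.outer(omegas, m ** 2)) @ prod)

# synthetic cubic polynomial phase signal (assumed coefficients, noiseless)
a1, a2, a3 = 0.1, 0.01, 1e-5
N = 257
t = np.arange(N) - N // 2              # centred time axis
s = np.exp(1j * (a1 * t + a2 * t ** 2 + a3 * t ** 3))

omegas = np.linspace(0.0, 0.05, 4001)  # search grid for the peak
n1, n2 = N // 2 - 60, N // 2 + 60      # evaluation instants t = -60 and t = +60
w1 = omegas[np.argmax(cpf(s, n1, omegas))]   # peak near 2*a2 + 6*a3*(-60)
w2 = omegas[np.argmax(cpf(s, n2, omegas))]   # peak near 2*a2 + 6*a3*(+60)
a3_hat = (w2 - w1) / (6.0 * (t[n2] - t[n1]))
```

In noise, the peak search is the fragile step; the Radon-transform combination of the paper integrates CPF slices to stabilize exactly this search.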

  2. Regularized Positive-Definite Fourth Order Tensor Field Estimation from DW-MRI★

    Science.gov (United States)

    Barmpoutis, Angelos; Vemuri, Baba C.; Howland, Dena; Forder, John R.

    2009-01-01

    In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these tensors are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data guaranteeing the SPD property. Several articles have been reported in the literature on higher order tensor approximations of the diffusivity function but none of them guarantee the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order tensors as ternary quartics and then apply Hilbert’s theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order tensor approximation from the DW-MRI data. The performance of this model is depicted on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations. PMID:19063978

  3. Regularized positive-definite fourth order tensor field estimation from DW-MRI.

    Science.gov (United States)

    Barmpoutis, Angelos; Hwang, Min Sig; Howland, Dena; Forder, John R; Vemuri, Baba C

    2009-03-01

    In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex local tissue structures, e.g. crossing fibers, and as a result, the scalar quantities derived from these tensors are grossly inaccurate at such locations. In this paper we employ a 4th order symmetric positive-definite (SPD) tensor approximation to represent the diffusivity function and present a novel technique to estimate these tensors from the DW-MRI data guaranteeing the SPD property. Several articles have been reported in the literature on higher order tensor approximations of the diffusivity function but none of them guarantee the positivity of the estimates, which is a fundamental constraint since negative values of the diffusivity are not meaningful. In this paper we represent the 4th-order tensors as ternary quartics and then apply Hilbert's theorem on ternary quartics along with the Iwasawa parametrization to guarantee an SPD 4th-order tensor approximation from the DW-MRI data. The performance of this model is depicted on synthetic data as well as real DW-MRIs from a set of excised control and injured rat spinal cords, showing accurate estimation of scalar quantities such as generalized anisotropy and trace as well as fiber orientations.

  4. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with finite dimension. These models provide a natural representation of heterogeneity across a finite number of latent classes; finite mixture models are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn statisticians' attention, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent findings as the sample size increases to infinity. Thus, maximum likelihood estimation is applied in the present paper to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
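
Maximum likelihood fitting of a two-component normal mixture is typically done with the EM algorithm. The sketch below uses synthetic data with assumed parameter values, not the paper's economic series, and recovers the component means and weights.

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic data: mixture of N(0,1) and N(5,1) with weights 0.4 / 0.6
x = np.concatenate([rng.normal(0.0, 1.0, 800), rng.normal(5.0, 1.0, 1200)])

# EM for a two-component normal mixture, from a rough initial guess
w, mu, sig = np.array([0.5, 0.5]), np.array([1.0, 4.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted maximum likelihood updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```

Each iteration increases the observed-data likelihood, which is the property that makes EM the standard tool for mixture MLE.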

  5. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from…

  6. Parameter estimation, model reduction and quantum filtering

    Science.gov (United States)

    Chase, Bradley A.

    This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. 
The technique involves double-passing a probe laser field through the atomic system, giving

  7. Design of reduced-order state estimators for linear time-varying multivariable systems

    Science.gov (United States)

    Nguyen, Charles C.

    1987-01-01

    The design of reduced-order state estimators for linear time-varying multivariable systems is considered. Employing the concepts of matrix operators and the method of canonical transformations, this paper shows that there exists a reduced-order state estimator for linear time-varying systems that are 'lexicography-fixedly observable'. In addition, the eigenvalues of the estimator can be arbitrarily assigned. A simple algorithm is proposed for the design of the state estimator.
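
For a time-invariant special case, the construction reduces to the classical reduced-order Luenberger observer. The sketch below uses a hypothetical two-state plant with the gain chosen by hand (not the paper's canonical-transformation algorithm) and estimates the unmeasured state from the measured output.

```python
import numpy as np

# Second-order plant x' = A x with only x1 measured: y = x1.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
a11, a12, a21, a22 = A[0, 0], A[0, 1], A[1, 0], A[1, 1]

L = 7.0                      # places the observer pole at a22 - L*a12 = -10
dt, steps = 1e-3, 5000
x = np.array([1.0, 0.0])     # true state (unknown to the observer)
z = 5.0                      # observer state; the estimate is w_hat = z + L*y

for _ in range(steps):
    y = x[0]
    w_hat = z + L * y
    # reduced-order observer dynamics: only the unmeasured part is estimated
    zdot = (a22 - L * a12) * w_hat + (a21 - L * a11) * y
    z += dt * zdot
    x = x + dt * (A @ x)     # Euler step of the true plant

w_hat = z + L * x[0]         # final estimate of the unmeasured state x2
```

The estimation error obeys e' = (a22 - L*a12) e, so the eigenvalue of the reduced observer is assigned directly through L, mirroring the arbitrary eigenvalue assignment claimed in the abstract.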

  8. Identification of slow molecular order parameters for Markov model construction

    CERN Document Server

    Perez-Hernandez, Guillermo; Giorgino, Toni; de Fabritiis, Gianni; Noé, Frank

    2013-01-01

    A goal in the kinetic characterization of a macromolecular system is the description of its slow relaxation processes, involving (i) identification of the structural changes involved in these processes, and (ii) estimation of the rates or timescales at which these slow processes occur. Most of the approaches to this task, including Markov models, Master-equation models, and kinetic network models, start by discretizing the high-dimensional state space and then characterize relaxation processes in terms of the eigenvectors and eigenvalues of a discrete transition matrix. The practical success of such an approach depends very much on the ability to finely discretize the slow order parameters. How can this task be achieved in a high-dimensional configuration space without relying on subjective guesses of the slow order parameters? In this paper, we use the variational principle of conformation dynamics to derive an optimal way of identifying the "slow subspace" of a large set of prior order parameters - either g...

  9. Model Selection Through Sparse Maximum Likelihood Estimation

    CERN Document Server

    Banerjee, Onureena; D'Aspremont, Alexandre

    2007-01-01

    We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...

  10. On fractional order composite model reference adaptive control

    Science.gov (United States)

    Wei, Yiheng; Sun, Zhenyuan; Hu, Yangsheng; Wang, Yong

    2016-08-01

    This paper presents a novel composite model reference adaptive control approach for a class of fractional order linear systems with unknown constant parameters. The method extends the standard model reference adaptive control. The parameter estimation error of our method depends on both the tracking error and the prediction error, whereas the existing method depends only on the tracking error; this gives our method better transient performance in the sense of generating a smooth system output. With the aid of the continuous frequency distributed model, stability of the proposed approach is established in the Lyapunov sense. Furthermore, the convergence property of the model parameter estimation is presented, on the premise that the closed-loop control system is stable. Finally, numerical simulation examples are given to demonstrate the effectiveness of the proposed schemes.

  11. Nearly best linear estimates of logistic parameters based on complete ordered statistics

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper deals with the determination of nearly best linear estimates of the location and scale parameters of a logistic population when both parameters are unknown, by introducing Blom's semi-empirical α, β-correction into the asymptotic mean and covariance formulae. Complete ordered samples are considered and various nearly best linear estimates are established. The high efficiency of these estimators relative to the best linear unbiased estimators (BLUEs) and other linear estimators makes them useful in practice.

  12. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

    Lellmann, Jan

    2013-01-01

    We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data. © 2013 Springer-Verlag.

  13. Kalman filter estimation model in flood forecasting

    Science.gov (United States)

    Husain, Tahir

    Elementary precipitation and runoff estimation problems associated with hydrologic data collection networks are formulated in conjunction with the Kalman filter estimation model. Examples involve the estimation of runoff using data from a single precipitation station and also from a number of precipitation stations. The formulations demonstrate the role of the state-space, measurement, and estimation equations of the Kalman filter model in flood forecasting. To facilitate the formulation, the unit hydrograph concept and the antecedent precipitation index are adopted in the estimation model. The methodology is then applied to estimate various flood events in Carnation Creek, British Columbia.
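
The predict/update cycle underlying such a model can be sketched with a minimal scalar Kalman filter (synthetic random-walk state and noisy measurements; the unit-hydrograph and precipitation-index structure of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, r = 500, 0.01, 1.0           # steps, process and measurement noise variances

# simulate a random-walk state and noisy measurements of it
truth = np.cumsum(rng.normal(0, np.sqrt(q), n))
z = truth + rng.normal(0, np.sqrt(r), n)

xhat, p = 0.0, 1.0                 # initial estimate and its variance
est = np.empty(n)
for k in range(n):
    p += q                         # predict (state transition is the identity)
    kgain = p / (p + r)            # Kalman gain
    xhat += kgain * (z[k] - xhat)  # update with the measurement residual
    p *= (1 - kgain)               # posterior variance
    est[k] = xhat

rmse_est = np.sqrt(np.mean((est - truth) ** 2))
rmse_meas = np.sqrt(np.mean((z - truth) ** 2))
```

The filtered estimate tracks the state with markedly lower error than the raw measurements, which is the property the flood-forecasting application relies on.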

  14. Discrete Choice Models - Estimation of Passenger Traffic

    DEFF Research Database (Denmark)

    Sørensen, Majken Vildrik

    2003-01-01

    …), which simultaneously finds optimal coefficient values (utility elements) and parameter values (distributed terms) in the utility function. The shape of the distributed terms is specified prior to the estimation; hence, its validity is not tested during the estimation. The proposed method assesses the shape of the distribution from data by means of repetitive model estimation: one model was estimated for each sub-sample of data, and the shape of the distributions is assessed from between-model comparisons. This is not to be regarded as an alternative to MSL estimation, rather… … for data, a literature review follows. Models applied for estimation of discrete choice models are described by properties and limitations, and relations between these are established. Model types are grouped into three classes: Hybrid choice models, Tree models and Latent class models.

  15. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    Directory of Open Access Journals (Sweden)

    Kim Hyang-Mi

    2012-09-01

    Full Text Available Abstract Background In epidemiological studies it is often not possible to measure the exposures of participants accurately, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are all assigned the sample mean of the exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated at an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated in the estimation procedure by constrained estimation methods together with expectation and maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared in a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation and maximization (CEM). We illustrate the methods in an analysis of the decline in lung function due to exposure to carbon black. Results The naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when, within each exposure group, at least a ’moderate’ number of individuals have their
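
The group-based strategy itself is easy to sketch. The simulation below uses assumed parameter values: it shows the classical attenuation of the naive slope under measurement error and how assigning group-mean exposures largely removes it when groups are reasonably large.

```python
import numpy as np

rng = np.random.default_rng(7)
beta, n_groups, n_per = 2.0, 20, 50

group = np.repeat(np.arange(n_groups), n_per)
mu = rng.normal(0, 1, n_groups)                  # true group mean exposures
x = mu[group] + rng.normal(0, 0.5, group.size)   # true individual exposures
w = x + rng.normal(0, 1.0, group.size)           # error-prone measurements
y = beta * x + rng.normal(0, 0.5, group.size)    # response, measured without error

def slope(a, b):
    """Simple-regression slope of b on a."""
    return np.cov(a, b)[0, 1] / np.var(a, ddof=1)

# naive: regress y on the individual mismeasured exposure (attenuated slope)
b_naive = slope(w, y)
# GBS: assign every subject the mean measurement of their group
w_group = np.array([w[group == g].mean() for g in range(n_groups)])[group]
b_gbs = slope(w_group, y)
```

With 50 measurements per group the group means are nearly error-free, so the GBS slope is close to the true effect while the naive slope is biased toward zero; the paper's point is that this breaks down when the per-group sample is small.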

  16. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  17. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  18. Modeling ability differentiation in the second-order factor model

    NARCIS (Netherlands)

    Molenaar, D.; Dolan, C.V.; van der Maas, H.L.J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model

  19. Markov chain order estimation with parametric significance tests of conditional mutual information

    CERN Document Server

    Papapetrou, Maria

    2015-01-01

    Besides the different approaches suggested in the literature, accurate estimation of the order of a Markov chain from a given symbol sequence is an open issue, especially when the order is moderately large. Here, parametric significance tests of conditional mutual information (CMI) of increasing order $m$, $I_c(m)$, are conducted on a symbol sequence in order to estimate the true order $L$ of the underlying Markov chain. The CMI of order $m$ is the mutual information of two variables in the Markov chain that are $m$ time steps apart, conditioned on the intermediate variables of the chain. The null distribution of CMI is approximated with a normal and a gamma distribution, deriving analytic expressions for their parameters, and with a gamma distribution deriving its parameters from the mean and variance of the normal distribution. The accuracy of order estimation is assessed with the three parametric tests, and the parametric tests are compared to the randomization significance test and other known ...
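
The scheme can be sketched with plug-in block entropies and the standard chi-square approximation to the G-statistic 2N·I_c(m) (degrees of freedom (K-1)²·K^(m-1) for K symbols); this is a simplified stand-in for the paper's normal/gamma null approximations, scanning orders until the first non-significant CMI. The 0.999 chi-square quantiles below are hard-coded.

```python
import numpy as np
from collections import Counter

def block_entropy(seq, k):
    """Plug-in Shannon entropy (nats) of length-k blocks."""
    if k == 0:
        return 0.0
    counts = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def cmi(seq, m):
    """I_c(m) = I(x_t ; x_{t-m} | x_{t-m+1}, ..., x_{t-1}) via block entropies."""
    return 2 * block_entropy(seq, m) - block_entropy(seq, m - 1) - block_entropy(seq, m + 1)

# simulate a first-order binary Markov chain with P(stay) = 0.9
rng = np.random.default_rng(0)
N, x = 20000, [0]
for _ in range(N - 1):
    x.append(x[-1] if rng.random() < 0.9 else 1 - x[-1])

# scan increasing orders; stop at the first non-significant CMI
K, chi2_999 = 2, {1: 10.828, 2: 13.816, 4: 18.467}  # chi-square 0.999 quantiles
order = 0
for m in (1, 2, 3):
    df = (K - 1) ** 2 * K ** (m - 1)
    if 2 * N * cmi(x, m) <= chi2_999[df]:
        break
    order = m
```

For the simulated chain, I_c(1) is far above the critical value while I_c(2) collapses to estimation bias of order df/(2N), so the scan stops at order 1.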

  20. Real Order and Logarithmic Moment Estimation Method of P-norm Distribution

    Directory of Open Access Journals (Sweden)

    PAN Xiong

    2016-03-01

    Full Text Available The estimation methods for the P-norm distribution are improved in this paper from the perspective of parameter estimation precision and algorithm complexity. Real order and logarithmic moment estimation are introduced, and the real order moment estimation method of the P-norm distribution is established based on the actual error distribution. First, the relation between the shape parameter p and the real order value r is derived using real order moment estimation, and corresponding suggestions are provided for the selection of the shape parameter. Then, the nonlinear estimation formulae for the shape parameter, expectation and mean square error are derived via logarithmic moment estimation; the function truncation error in the calculation of the parameter estimates is eliminated, and the solving method for the corresponding parameters and the calculation process are given, improving the theory. Finally, examples are presented to analyze the stability and precision of the three methods: real order moment, logarithmic moment and maximum likelihood estimation. The results show that the stability, precision and convergence speed of the proposed method are better than those of maximum likelihood estimation, generalizing the existing error theory.
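
The moment-based idea can be illustrated with the classical absolute-moment ratio of the P-norm (generalized Gaussian) family: for density proportional to exp(-|x/a|^p), the ratio (E|x|)²/E[x²] equals Γ(2/p)² / (Γ(1/p)·Γ(3/p)) and is monotone in p, so p can be recovered by bisection. This is a simplified illustration, not the paper's real order/logarithmic formulae.

```python
import numpy as np
from math import gamma

def moment_ratio(p):
    """(E|x|)^2 / E[x^2] for the generalized Gaussian with shape p."""
    return gamma(2.0 / p) ** 2 / (gamma(1.0 / p) * gamma(3.0 / p))

def estimate_shape(x, lo=0.3, hi=5.0, iters=60):
    """Estimate p by bisection on the monotone moment ratio."""
    target = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if moment_ratio(mid) < target:
            lo = mid           # ratio too small -> shape must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
x = rng.laplace(0.0, 1.0, 200000)   # Laplace = P-norm distribution with p = 1
p_hat = estimate_shape(x)
```

As sanity checks, the ratio equals 1/2 at p = 1 (Laplace) and 2/π at p = 2 (Gaussian).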

  1. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

    Full Text Available The modeling and parameter estimation of a small wind generation system are presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were registered at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables registered from that simulation were used to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, and the estimated model offers higher flexibility than the model programmed in PSIM software.

  2. Projection-type estimation for varying coefficient regression models

    CERN Document Server

    Lee, Young K; Park, Byeong U; 10.3150/10-BEJ331

    2012-01-01

    In this paper we introduce new estimators of the coefficient functions in the varying coefficient regression model. The proposed estimators are obtained by projecting the vector of the full-dimensional kernel-weighted local polynomial estimators of the coefficient functions onto a Hilbert space with a suitable norm. We provide a backfitting algorithm to compute the estimators. We show that the algorithm converges at a geometric rate under weak conditions. We derive the asymptotic distributions of the estimators and show that the estimators have the oracle properties. This is done for the general order of local polynomial fitting and for the estimation of the derivatives of the coefficient functions, as well as the coefficient functions themselves. The estimators turn out to have several theoretical and numerical advantages over the marginal integration estimators studied by Yang, Park, Xue and H\\"{a}rdle [J. Amer. Statist. Assoc. 101 (2006) 1212--1227].

  3. Outlier Rejecting Multirate Model for State Estimation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Wavelet transform was introduced to detect and eliminate outliers in the time-frequency domain. By incorporating outlier rejection and multirate information extraction through the wavelet transform, a new outlier-rejecting multirate model for state estimation is proposed. The model is applied to state estimation with the interacting multiple model; because outliers are eliminated and more reasonable multirate information is extracted, the estimation accuracy is greatly enhanced. The simulation results show that the new model is robust to outliers and that the estimation performance is significantly improved.

  4. Efficient Estimation in Heteroscedastic Varying Coefficient Models

    Directory of Open Access Journals (Sweden)

    Chuanhua Wei

    2015-07-01

    Full Text Available This paper considers statistical inference for the heteroscedastic varying coefficient model. We propose an estimator for the coefficient functions that is more efficient than the conventional local-linear estimator. We establish asymptotic normality for the proposed estimator and conduct simulations to illustrate the performance of the proposed method.

  5. Estimating Canopy Dark Respiration for Crop Models

    Science.gov (United States)

    Monje Mejia, Oscar Alberto

    2014-01-01

    Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.

  6. DYNAMIC ESTIMATION FOR PARAMETERS OF INTERFERENCE SIGNALS BY THE SECOND ORDER EXTENDED KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    P. A. Ermolaev

    2014-03-01

    Full Text Available Data processing in interferometer systems requires high-resolution and high-speed algorithms. Recurrence algorithms based on parametric representation of signals execute consequent processing of signal samples. In some cases recurrence algorithms make it possible to increase the speed and quality of data processing as compared with classic processing methods. The dependence of the measured interferometer signal on the parameters of its model is, in general, nonlinear, and noise formation in the system is stochastic, so the use of nonlinear stochastic filtering algorithms is expedient for processing such signals. The extended Kalman filter, with linearization of the state and output equations using the first derivatives with respect to the parameter vector, is an example of these algorithms. To decrease the approximation error of this method, second-order extended Kalman filtering is suggested, which additionally uses the second derivatives of the model equations with respect to the parameter vector. Examples of algorithm implementation with different sets of estimated parameters are described. The proposed algorithm makes it possible to increase the quality of data processing in interferometer systems in which signals are formed according to the considered models. The obtained standard deviation of the estimated amplitude envelope does not exceed 4% of its maximum. It is shown that the signal-to-noise ratio of the reconstructed signal is increased by 60%.
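    The basic idea behind such filters can be sketched with a plain first-order EKF tracking the amplitude and phase of a noisy sinusoid. This is a hypothetical illustration, not the paper's second-order algorithm; the signal model, noise levels, and tuning constants are invented:

```python
import numpy as np

def ekf_sinusoid(y, omega, q=1e-5, r=1e-2):
    """First-order EKF tracking amplitude A and phase phi of
    y_k = A*cos(phi_k) + noise, with phi_{k+1} = phi_k + omega."""
    x = np.array([1.0, 0.0])   # state: [A, phi]
    P = np.eye(2)
    F = np.eye(2)              # Jacobian of the (linear) state transition
    Q = q * np.eye(2)
    est = []
    for yk in y:
        # predict: amplitude constant, phase advances by omega
        x = np.array([x[0], x[1] + omega])
        P = F @ P @ F.T + Q
        # linearize h(x) = A*cos(phi) around the prediction
        H = np.array([[np.cos(x[1]), -x[0] * np.sin(x[1])]])
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (yk - x[0] * np.cos(x[1]))).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(0)
omega, A_true = 0.3, 2.0
n = np.arange(500)
y = A_true * np.cos(omega * n) + 0.1 * rng.standard_normal(n.size)
A_hat = ekf_sinusoid(y, omega)
print(A_hat[-1])
```

A second-order variant would additionally use the second derivatives of h(x) in the measurement update to reduce the linearization error.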

  7. Reduced order modeling of fluid/structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
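    The POD/Galerkin projection at the core of such ROMs can be sketched in a few lines. This is a generic illustration on a random stable linear system, not the report's stability-preserving compressible-flow formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full-order linear system dx/dt = A x, dimension 200
n = 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))

# Collect snapshots from a forward-Euler run
dt, steps = 1e-3, 300
x = rng.standard_normal(n)
snaps = []
for _ in range(steps):
    x = x + dt * (A @ x)
    snaps.append(x.copy())
X = np.column_stack(snaps)

# POD basis: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 10
V = U[:, :r]

# Galerkin projection gives the reduced operator A_r = V^T A V
A_r = V.T @ A @ V
print(A_r.shape)   # (10, 10)
```

The reduced system evolves only r = 10 modal coordinates instead of 200 states; the stability and error-estimate machinery described in the report addresses exactly when such a projection remains trustworthy.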

  8. PARAMETER ESTIMATION OF ENGINEERING TURBULENCE MODEL

    Institute of Scientific and Technical Information of China (English)

    钱炜祺; 蔡金狮

    2001-01-01

    A parameter estimation algorithm is introduced and used to determine the parameters in the standard k-ε two equation turbulence model (SKE). It can be found from the estimation results that although the parameter estimation method is an effective method to determine model parameters, it is difficult to obtain a set of parameters for SKE to suit all kinds of separated flow and a modification of the turbulence model structure should be considered. So, a new nonlinear k-ε two-equation model (NNKE) is put forward in this paper and the corresponding parameter estimation technique is applied to determine the model parameters. By implementing the NNKE to solve some engineering turbulent flows, it is shown that NNKE is more accurate and versatile than SKE. Thus, the success of NNKE implies that the parameter estimation technique may have a bright prospect in engineering turbulence model research.

  9. Analysis of Empirical Software Effort Estimation Models

    CERN Document Server

    Basha, Saleem

    2010-01-01

    Reliable effort estimation remains an ongoing challenge for software engineers. Accurate effort estimation is the state of the art of software engineering; effort estimation of software is the preliminary phase between the client and the business enterprise. The relationship between the client and the business enterprise begins with the estimation of the software, and the credibility of the client to the business enterprise increases with accurate estimation. Effort estimation often requires generalizing from a small number of historical projects, and generalization from such limited experience is an inherently underconstrained problem. Accurate estimation is a complex process because it can be visualized as software effort prediction, and as the term indicates, a prediction never becomes an actual. This work follows the basics of the empirical software effort estimation models. The goal of this paper is to study empirical software effort estimation. The primary conclusion is that no single technique is best for all sit...

  10. Order of Dirichlet Series in the Whole Plane and Remainder Estimation

    Institute of Scientific and Technical Information of China (English)

    HUANG Hui-jun; NING Ju-hong

    2015-01-01

    In this paper, firstly, the ρ order and ρβ order of Dirichlet series which converge in the whole plane are studied. Secondly, the equivalence relations between the remainder logarithms ln En−1(f,α), ln Rn(f,α) and the coefficient logarithm ln|an| are discussed respectively. Finally, a theory for estimating the ρ order and ρβ order from the remainder is obtained by using these equivalence relations.
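    For reference, one common convention for the (Ritt) order of a Dirichlet series convergent in the whole plane is the following; the paper's ρ and ρβ orders may be normalized differently:

```latex
% f(s) = \sum_{n} a_n e^{-\lambda_n s}, \quad 0 \le \lambda_n \uparrow \infty,
% convergent in the whole plane, with maximum modulus
% M(\sigma, f) = \sup_{t \in \mathbb{R}} |f(\sigma + it)|.
% Its (Ritt) order is
\rho(f) \;=\; \limsup_{\sigma \to -\infty}
    \frac{\ln^{+} \ln^{+} M(\sigma, f)}{-\sigma}.
```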

  11. Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction

    CERN Document Server

    Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R

    2011-01-01

    Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.

  12. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian...... method is based on a modified version of the EM algorithm. Experimental results for a deformable template used for textile inspection are presented...

  13. A reduced-order method for estimating the stability region of power systems with saturated controls

    Institute of Scientific and Technical Information of China (English)

    GAN DeQiang; XIN HuanHai; QIU JiaJu; HAN ZhenXiang

    2007-01-01

    In a modern power system, there are often large differences in the decay speeds of transients. This can lead to numerical problems, such as a heavy simulation burden and singularity, when traditional methods are used to estimate the stability region of such a dynamic system with saturation nonlinearities. To overcome these problems, a reduced-order method based on singular perturbation theory is suggested to estimate the stability region of a singular system with saturation nonlinearities. In the reduced-order method, a low-order linear dynamic system with saturation nonlinearities is constructed to estimate the stability region of the original high-order system, so that the singularity is eliminated and the estimation process is simplified. In addition, the analytical foundation of the reduction method is proven and the method is validated using a test power system with 3 buses and 5 machines.

  14. Low-order models of biogenic ocean mixing

    Science.gov (United States)

    Dabiri, J. O.; Rosinelli, D.; Koumoutsakos, P.

    2009-12-01

    Biogenic ocean mixing, the process whereby swimming animals may affect ocean circulation, has primarily been studied using order-of-magnitude theoretical estimates and a small number of field observations. We describe numerical simulations of arrays of simplified animal shapes migrating in inviscid fluid and at finite Reynolds numbers. The effect of density stratification is modeled in the fluid dynamic equations of motion by a buoyancy acceleration term, which arises due to perturbations to the density field by the migrating bodies. The effects of fluid viscosity, body spacing, and array configuration are investigated to identify scenarios in which a meaningful contribution to ocean mixing by swimming animals is plausible.

  15. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimension system as well as frequency-domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.

  16. Second order pseudo-maximum likelihood estimation and conditional variance misspecification

    OpenAIRE

    Lejeune, Bernard

    1997-01-01

    In this paper, we study the behavior of second order pseudo-maximum likelihood estimators under conditional variance misspecification. We determine sufficient and essentially necessary conditions for such an estimator to be, regardless of the conditional variance (mis)specification, consistent for the mean parameters when the conditional mean is correctly specified. These conditions imply that, even if mean and variance parameters vary independently, standard PML2 estimators are generally not...

  17. The Optimal Economic Order: the simplest model

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1992-01-01

    In the last five years humanity has become faced with the problem of the optimal socioeconomic order more clearly than ever. After the confrontation of capitalism and socialism, which was the core of the Marxist thesis, the fact transpired that capitalism was not the optimal order. It wa

  18. Multi-order Arnoldi-based model order reduction of second-order time-delay systems

    Science.gov (United States)

    Xiao, Zhi-Hua; Jiang, Yao-Lin

    2016-09-01

    In this paper, we discuss Krylov subspace-based model order reduction methods for second-order systems with time delays, and present two structure-preserving methods for model order reduction of these second-order systems, which avoid converting the second-order systems into first-order ones. One method is based on a Krylov subspace obtained using the Taylor series expansion; the other method is based on the Laguerre series expansion. These two methods are used in the multi-order Arnoldi algorithm to construct the projection matrices. The resulting reduced models not only preserve the structure of the original systems, but also match a certain number of approximate moments or Laguerre expansion coefficients. The effectiveness of the proposed methods is demonstrated by two numerical examples.
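    The basic Krylov-projection idea behind such methods can be sketched for a plain first-order system. This toy example is not the structure-preserving second-order or Laguerre variant proposed in the paper; the diagonal test matrix is chosen only to keep the illustration well conditioned:

```python
import numpy as np

def arnoldi(M, b, m):
    """Orthonormal basis for the Krylov subspace span{b, Mb, ..., M^{m-1}b}
    via modified Gram-Schmidt."""
    V = np.zeros((b.size, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = M @ V[:, j - 1]
        for i in range(j):
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(2)
n, m = 100, 8
A = -np.diag(np.arange(1.0, n + 1))       # stable full-order system matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Moments of H(s) = c^T (sI - A)^{-1} b about s = 0 involve powers of A^{-1},
# so the projection basis is built from the Krylov space of A^{-1}
Ainv = np.diag(-1.0 / np.arange(1.0, n + 1))
V = arnoldi(Ainv, Ainv @ b, m)

# Projected (reduced) system matrices
A_r, b_r, c_r = V.T @ A @ V, V.T @ b, V.T @ c

# The reduced model reproduces the transfer function near the expansion point
for s in (0.0, 0.05):
    H = c @ np.linalg.solve(s * np.eye(n) - A, b)
    H_r = c_r @ np.linalg.solve(s * np.eye(m) - A_r, b_r)
    print(f"s={s}: |H - H_r| = {abs(H - H_r):.2e}")
```

One-sided projection of this kind matches the first m moments of the transfer function about the expansion point, which is why the reduced response agrees closely near s = 0.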

  19. Pose and Motion Estimation from Vision Based on the First-Order Interpolation Filter

    Institute of Scientific and Technical Information of China (English)

    WU Xuedong; WANG Yaonan

    2004-01-01

    Determination of relative three-dimensional (3D) position, orientation, and relative motion between two reference frames is an important problem in robotic guidance, manipulation, and assembly, as well as in other fields such as photogrammetry. A solution to this problem that uses two-dimensional (2D) intensity images from a single camera is desirable for real-time applications. The difficulty in performing this measurement is that the process of projecting 3D object features onto 2D images is a nonlinear transformation. Modeling the 3D transformation as a nonlinear stochastic system, and using as estimator a new set of filters based on first-order interpolation approximations of the nonlinear transformations, this paper presents solutions to the remote measurement problem given a sequence of 2D intensity images of an object. The method has been implemented with simulated data, and the simulation results show that the proposed method has good convergence.

  20. Parameter Estimation, Model Reduction and Quantum Filtering

    CERN Document Server

    Chase, Bradley A

    2009-01-01

    This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.

  1. DOA estimation in multipath based on third-order cyclic moment

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A new direction of arrival (DOA) estimation algorithm in multipath based on the third-order cyclic moment is proposed in this paper. The algorithm can resolve each DOA in multipath with high resolution, and the use of the third-order cyclic moment suppresses all stationary measurement noise. The simulation shows the advantages of this algorithm.

  2. Mineral resources estimation based on block modeling

    Science.gov (United States)

    Bargawa, Waterman Sulistyana; Amri, Nur Ali

    2016-02-01

    The estimation in this paper uses three kinds of block models: nearest-neighbor polygon, inverse distance squared, and ordinary kriging. These techniques are weighting schemes based on the principle that the block grade is a linear combination of the grade data from the samples around the block being estimated. The case study, in the Pongkor area, is a gold-silver resource model of quartz veins presumably formed by a hydrothermal process of epithermal type. Resource modeling includes data entry, statistical and variography analysis, topographic and geological modeling, block model construction, estimation parameters, model presentation, and tabulation of mineral resources. The skewed distribution is handled here by a robust semivariogram. The mineral resource classification generated in this model is based on an analysis of the kriging standard deviation and the number of samples used in the estimation of each block. The results are used to evaluate the performance of the OK and IDS estimators. Based on visual and statistical analysis, it is concluded that the OK model gives estimates closer to the data used for modeling.
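    Of the three estimators named, inverse distance squared is the simplest to sketch. The 2D sample coordinates and grades below are made up for illustration:

```python
import numpy as np

def idw2(xy_samples, grades, xy_block, eps=1e-12):
    """Inverse-distance-squared estimate of a block grade from nearby samples."""
    d2 = np.sum((xy_samples - xy_block) ** 2, axis=1) + eps  # squared distances
    w = 1.0 / d2                                             # IDW^2 weights
    return np.sum(w * grades) / np.sum(w)                    # weighted average

samples = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
grades = np.array([2.0, 4.0, 6.0])
print(round(idw2(samples, grades, np.array([1.0, 1.0])), 2))   # prints 2.14
```

The estimate is pulled strongly toward the nearest sample (grade 2.0), which is the characteristic behavior of inverse-distance weighting; kriging instead derives the weights from the fitted semivariogram.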

  3. Estimating hybrid choice models with the new version of Biogeme

    OpenAIRE

    Bierlaire, Michel

    2010-01-01

    Hybrid choice models integrate many types of discrete choice modeling methods, including latent classes and latent variables, in order to capture concepts such as perceptions, attitudes, preferences, and motivation (Ben-Akiva et al., 2002). Although they provide an excellent framework to capture complex behavior patterns, their use in applications remains rare in the literature due to the difficulty of estimating the models. In this talk, we provide a short introduction to hybrid choice model...

  4. An Evolutionary Approach for Joint Blind Multichannel Estimation and Order Detection

    Directory of Open Access Journals (Sweden)

    Chen Fangjiong

    2003-07-01

    Full Text Available A joint blind order-detection and parameter-estimation algorithm for a single-input multiple-output (SIMO) channel is presented. Based on the subspace decomposition of the channel output, an objective function including the channel order and channel parameters is proposed. The problem is solved by using a specifically designed genetic algorithm (GA). In the proposed GA, we encode both the channel order and the parameters into a single chromosome, so they can be estimated simultaneously. Novel GA operators and convergence criteria are used to guarantee correct convergence and a high convergence speed. Simulation results show that the proposed GA achieves satisfactory convergence speed and performance.

  5. Dynamical numerical model for nematic order reconstruction

    Science.gov (United States)

    Lombardo, G.; Ayeb, H.; Barberi, R.

    2008-05-01

    In highly frustrated calamitic nematic liquid crystals, a strong elastic distortion can be confined on a few nanometers. The classical elastic theory fails to describe such systems and a more complete description based on the tensor order parameter Q is required. A finite element method is used to implement the Q dynamics by a variational principle and it is shown that a uniaxial nematic configuration can evolve passing through transient biaxial states. This solution, which connects two competing uniaxial nematic textures, is known as “nematic order reconstruction.”

  6. Mean-value second-order uncertainty analysis method: application to water quality modelling

    Science.gov (United States)

    Mailhot, Alain; Villeneuve, Jean-Pierre

    Uncertainty analysis in hydrology and water quality modelling is an important issue. Various methods have been proposed to estimate uncertainties on model results based on given uncertainties on model parameters. Among these methods, the mean-value first-order second-moment (MFOSM) method and the advanced mean-value first-order second-moment (AFOSM) method are the most common ones. This paper presents a method based on a second-order approximation of a model output function. The application of this method requires the estimation of first- and second-order derivatives at a mean-value point in the parameter space. Application to a Streeter-Phelps prototype model is presented. Uncertainties on two and six parameters are considered. Exceedance probabilities (EP) of dissolved oxygen concentrations are obtained and compared with EP computed using Monte Carlo, AFOSM and MFOSM methods. These results show that the mean-value second-order method leads to better estimates of EP.
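    The mean-value second-order idea (correct the mean of the model output using the Hessian of the output function, weighted by the parameter variances) can be sketched on a simplified Streeter-Phelps deficit formula. The parameter means and standard deviations below are invented for illustration, and the derivatives are taken by finite differences:

```python
import numpy as np

def deficit(theta, t=2.0, L0=10.0):
    """Simplified Streeter-Phelps oxygen deficit, theta = (kd, ka)."""
    kd, ka = theta
    return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))

mu = np.array([0.3, 0.7])     # mean parameter values (hypothetical, 1/day)
sig = np.array([0.02, 0.04])  # independent parameter standard deviations

# Finite-difference gradient and Hessian diagonal at the mean-value point
# (with independent parameters, only the Hessian diagonal enters the mean)
h = 1e-3
g0 = deficit(mu)
grad, hess = np.zeros(2), np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = h
    gp, gm = deficit(mu + e), deficit(mu - e)
    grad[i] = (gp - gm) / (2 * h)
    hess[i] = (gp - 2 * g0 + gm) / h**2

mean2 = g0 + 0.5 * np.sum(hess * sig**2)   # second-order mean estimate
var1 = np.sum((grad * sig) ** 2)           # first-order variance estimate

# Monte Carlo check of the mean estimate
rng = np.random.default_rng(3)
mc = deficit(rng.normal(mu, sig, size=(100000, 2)).T)
print(round(float(mean2), 3), round(float(mc.mean()), 3))
```

The second-order term shifts the mean away from the plain plug-in value g(μ), which is exactly the correction the mean-value second-order method supplies over MFOSM.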

  7. Estimating parameters for generalized mass action models with connectivity information

    Directory of Open Access Journals (Sweden)

    Voit Eberhard O

    2009-05-01

    Full Text Available Abstract Background Determining the parameters of a mathematical model from quantitative measurements is the main bottleneck of modelling biological systems. Parameter values can be estimated from steady-state data or from dynamic data. The nature of suitable data for these two types of estimation is rather different. For instance, estimations of parameter values in pathway models, such as kinetic orders, rate constants, flux control coefficients or elasticities, from steady-state data are generally based on experiments that measure how a biochemical system responds to small perturbations around the steady state. In contrast, parameter estimation from dynamic data requires time series measurements for all dependent variables. Almost no literature has so far discussed the combined use of both steady-state and transient data for estimating parameter values of biochemical systems. Results In this study we introduce a constrained optimization method for estimating parameter values of biochemical pathway models using steady-state information and transient measurements. The constraints are derived from the flux connectivity relationships of the system at the steady state. Two case studies demonstrate the estimation results with and without flux connectivity constraints. The unconstrained optimal estimates from dynamic data may fit the experiments well, but they do not necessarily maintain the connectivity relationships. As a consequence, individual fluxes may be misrepresented, which may cause problems in later extrapolations. By contrast, the constrained estimation accounting for flux connectivity information reduces this misrepresentation and thereby yields improved model parameters. Conclusion The method combines transient metabolic profiles and steady-state information and leads to the formulation of an inverse parameter estimation task as a constrained optimization problem. Parameter estimation and model selection are simultaneously carried out.
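    A linear toy version of such constrained estimation, with an equality constraint standing in for a flux-connectivity relation and all matrices synthetic, can be solved directly through the KKT system:

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via the KKT linear system."""
    n, p = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((p, p))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]   # drop the Lagrange multipliers

rng = np.random.default_rng(4)
A = rng.standard_normal((50, 3))   # synthetic sensitivity matrix
b = rng.standard_normal(50)        # synthetic measurements
C = np.array([[1.0, 1.0, 1.0]])    # e.g. fluxes must balance at steady state
d = np.array([1.0])

x = constrained_lsq(A, b, C, d)
print(round((C @ x).item(), 6))    # prints 1.0 -- constraint is satisfied
```

The unconstrained least-squares fit may violate C x = d; the constrained solution satisfies it exactly while fitting the data as well as possible, mirroring the paper's use of flux connectivity constraints.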

  8. Bayesian estimation of the network autocorrelation model

    NARCIS (Netherlands)

    Dittrich, D.; Leenders, R.T.A.J.; Mulder, J.

    2017-01-01

    The network autocorrelation model has been extensively used by researchers interested in modeling social influence effects in social networks. The most common inferential method for the model is classical maximum likelihood estimation. This approach, however, has known problems such as negative bias of

  9. Ordering dynamics of microscopic models with nonconserved order parameter of continuous symmetry

    DEFF Research Database (Denmark)

    Zhang, Z.; Mouritsen, Ole G.; Zuckermann, Martin J.

    1993-01-01

    Numerical Monte Carlo temperature-quenching experiments have been performed on two three-dimensional classical lattice models with continuous ordering symmetry: the Lebwohl-Lasher model [Phys. Rev. A 6, 426 (1972)] and the ferromagnetic isotropic Heisenberg model. Both models describe a transition...... from a disordered phase to an orientationally ordered phase of continuous symmetry. The Lebwohl-Lasher model accounts for the orientational ordering properties of the nematic-isotropic transition in liquid crystals and the Heisenberg model for the ferromagnetic-paramagnetic transition in magnetic...

  10. Order and Disorder in Product Innovation Models

    NARCIS (Netherlands)

    Pina e Cunha, Miguel; Gomes, Jorge F.S.

    2003-01-01

    This article argues that the conceptual development of product innovation models goes hand in hand with paradigmatic changes in the field of organization science. Remarkable similarities in the change of organizational perspectives and product innovation models are noticeable. To illustrate how chan

  11. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... and uncertainty estimation. Essential issues relating to calibration are discussed. The classical regression methods are described; however, the main focus is on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. The next two chapters describe case studies in which the GLUE methodology...

  12. INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS

    Science.gov (United States)

    Hong, Sungjoon; Oguchi, Takashi

    In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressways under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.

  13. INTRUSION DETECTION BASED ON THE SECOND-ORDER STOCHASTIC MODEL

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a new method based on a second-order stochastic model for computer intrusion detection. The results show that the performance of the second-order stochastic model is better than that of a first-order stochastic model. In this study, different window sizes are also used to test the performance of the model. The detection results show that the second-order stochastic model is not so sensitive to the window size, compared with the first-order stochastic model and other previous research. The detection results for window sizes 6 and 10 are the same.

  14. Simultaneous confidence bands for Yule-Walker estimators and order selection

    CERN Document Server

    Jirak, Moritz

    2012-01-01

    Let $\{X_k, k\in\mathbb{Z}\}$ be an autoregressive process of order $q$. Various estimators for the order $q$ and the parameters $\Theta_q=(\theta_1,\ldots,\theta_q)^T$ are known; the order is usually determined with Akaike's criterion or related modifications, whereas Yule-Walker, Burg or maximum likelihood estimators are used for the parameters $\Theta_q$. In this paper, we establish simultaneous confidence bands for the Yule-Walker estimators $\hat{\theta}_i$; more precisely, it is shown that the limiting distribution of $\max_{1\leq i\leq d_n}|\hat{\theta}_i-\theta_i|$ is the Gumbel-type distribution $e^{-e^{-z}}$, where $q\in\{0,\ldots,d_n\}$ and $d_n=\mathcal{O}(n^{\delta})$, $\delta>0$. This allows one to modify some of the currently used criteria (AIC, BIC, HQC, SIC), but also yields a new class of consistent estimators for the order $q$. These estimators seem to have some potential, since they outperform most of the previously mentioned criteria in a small simulation study. In particul...
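    The Yule-Walker estimator referred to above is easy to sketch: compute the sample autocovariances, form the Toeplitz system, and solve for the AR coefficients. A minimal illustration on simulated AR(2) data:

```python
import numpy as np

def yule_walker(x, q):
    """Yule-Walker estimates of AR(q) coefficients from sample autocovariances."""
    x = x - x.mean()
    n = x.size
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(q + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(q)] for i in range(q)])
    return np.linalg.solve(R, gamma[1:])

# Simulate an AR(2) process x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(5)
theta = np.array([0.5, -0.3])
x = np.zeros(20000)
for t in range(2, x.size):
    x[t] = theta @ x[t - 2:t][::-1] + rng.standard_normal()

print(np.round(yule_walker(x, 2), 2))
```

The confidence bands studied in the record quantify how far these $\hat{\theta}_i$ can simultaneously stray from the true coefficients, which is what makes them usable for order selection.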

  15. Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation

    OpenAIRE

    2015-01-01

    This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measuring the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values dep...

  16. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  17. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Science.gov (United States)

    Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng

    2015-01-01

    Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
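
    The core loop of any PSO-based parameter estimation is the same with or without the quantum-parallel extension: particles explore the parameter space and are pulled toward their personal best and the global best. The sketch below is a minimal classical PSO (not the QPPSO of the abstract) recovering the growth parameter of a logistic map from an observed trajectory; the map, search bounds, and PSO coefficients are illustrative assumptions, and a non-chaotic regime is used so the fit is well conditioned.

```python
import random

def logistic_traj(r, x0=0.2, n=30):
    # Iterate the logistic map x <- r * x * (1 - x)
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def fitness(r, target):
    # Sum of squared deviations from the observed trajectory
    return sum((a - b) ** 2 for a, b in zip(logistic_traj(r), target))

def pso(target, lo=2.0, hi=3.4, n_particles=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                # per-particle best positions
    pbest_f = [fitness(p, target) for p in pos]
    g = pbest[pbest_f.index(min(pbest_f))]        # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (g - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            f = fitness(pos[i], target)
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f < fitness(g, target):
                    g = pos[i]
    return g

observed = logistic_traj(2.9)   # "measured" data; true parameter r = 2.9
r_hat = pso(observed)
```

    For a fractional-order system the only change is that `logistic_traj` would be replaced by a numerical solver for the fractional dynamics, which is where the bulk of the computational cost (and the motivation for parallelism) comes from.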

  18. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.

  19. Synchronization-based parameter estimation of fractional-order neural networks

    Science.gov (United States)

    Gu, Yajuan; Yu, Yongguang; Wang, Hu

    2017-10-01

    This paper focuses on the parameter estimation problem of fractional-order neural networks. By combining adaptive control with a parameter update law, we generalize the synchronization-based identification method that has been reported in several studies on identifying unknown parameters of integer-order systems. With this method, parameter identification and synchronization can be achieved simultaneously. Finally, a numerical example is given to illustrate the effectiveness of the theoretical results.

  20. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...

  1. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that combines a statistical model of face shape with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing the face shapes of different people under varying poses; shape alignment is vital in the process of building this statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, a mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  2. Hydrograph estimation with fuzzy chain model

    Science.gov (United States)

    Güçlü, Yavuz Selim; Şen, Zekai

    2016-07-01

    Hydrograph peak discharge estimation is gaining more significance with unprecedented urbanization developments. Most of the existing models do not yield reliable peak discharge estimations for small basins although they provide acceptable results for medium and large ones. In this study, a fuzzy chain model (FCM) is suggested by considering the necessary adjustments based on measurements over a small basin, the Ayamama basin, within Istanbul City, Turkey. FCM is based on the Mamdani and Adaptive Neuro Fuzzy Inference System (ANFIS) methodologies, which yield peak discharge estimation. The suggested model is compared with two well-known approaches, namely, the Soil Conservation Service (SCS)-Snyder and SCS-Clark methodologies. In all the methods, the hydrographs are obtained through the use of the dimensionless unit hydrograph concept. After the necessary modeling, computation, verification and adaptation stages, comparatively better hydrographs are obtained by FCM. The mean square error for the FCM is many times smaller than that of the other methodologies, which demonstrates the outperformance of the suggested approach.

  3. Parameter estimation of hydrologic models using data assimilation

    Science.gov (United States)

    Kaheil, Y. H.

    2005-12-01

    The uncertainties associated with the modeling of hydrologic systems sometimes demand that data should be incorporated in an on-line fashion in order to understand the behavior of the system. This paper presents a Bayesian strategy to estimate parameters for hydrologic models in an iterative mode. The paper presents a modified technique called localized Bayesian recursive estimation (LoBaRE) that efficiently identifies the optimum parameter region, avoiding convergence to a single best parameter set. The LoBaRE methodology is tested for parameter estimation for two different types of models: a support vector machine (SVM) model for predicting soil moisture, and the Sacramento Soil Moisture Accounting (SAC-SMA) model for estimating streamflow. The SAC-SMA model has 13 parameters that must be determined. The SVM model has three parameters. Bayesian inference is used to estimate the best parameter set in an iterative fashion. This is done by narrowing the sampling space by imposing uncertainty bounds on the posterior best parameter set and/or updating the "parent" bounds based on their fitness. The new approach results in fast convergence towards the optimal parameter set using minimum training/calibration data and evaluation of fewer parameter sets. The efficacy of the localized methodology is also compared with the previously used Bayesian recursive estimation (BaRE) algorithm.
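
    The narrowing idea can be illustrated without the full Bayesian machinery: sample candidate parameter sets inside the current bounds, rank them by fit, and shrink the bounds to the region occupied by the best performers. The toy below is a hypothetical fitness-based simplification (no posterior weighting, unlike LoBaRE proper) applied to a two-parameter linear model.

```python
import random

random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.05) for x in xs]   # true a=2, b=1

def sse(a, b):
    # Sum of squared errors of the candidate parameter set
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

bounds = {"a": (0.0, 5.0), "b": (-3.0, 3.0)}
for _ in range(8):                       # iterative narrowing rounds
    samples = [(random.uniform(*bounds["a"]), random.uniform(*bounds["b"]))
               for _ in range(200)]
    samples.sort(key=lambda p: sse(*p))
    top = samples[:40]                   # keep best 20% as the new "parent" region
    bounds["a"] = (min(a for a, _ in top), max(a for a, _ in top))
    bounds["b"] = (min(b for _, b in top), max(b for _, b in top))

a_hat, b_hat = samples[0]                # best candidate from the final round
```

    Each round evaluates only 200 parameter sets, and the bounds contract around the optimum region rather than a single point, which is the qualitative behavior the abstract describes.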

  4. Estimation of growth parameters using a nonlinear mixed Gompertz model.

    Science.gov (United States)

    Wang, Z; Zuidhof, M J

    2004-06-01

    In order to maximize the utility of simulation models for decision making, accurate estimation of growth parameters and associated variances is crucial. A mixed Gompertz growth model was used to account for between-bird variation and heterogeneous variance. The mixed model had several advantages over the fixed effects model. The mixed model partitioned BW variation into between- and within-bird variation, and the covariance structure assumed with the random effect accounted for part of the BW correlation across ages in the same individual. The amount of residual variance decreased by over 55% with the mixed model. The mixed model reduced estimation biases that resulted from selective sampling. For analysis of longitudinal growth data, the mixed effects growth model is recommended.
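
    A fixed-effects starting point (ignoring the between-bird random effect that the mixed model adds) is a nonlinear least-squares fit of the Gompertz curve W(t) = A·exp(-b·exp(-k·t)). The parameter values and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    # A: asymptotic BW, b: displacement, k: maturation-rate constant
    return A * np.exp(-b * np.exp(-k * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 30)                      # age in days
true_A, true_b, true_k = 4000.0, 4.0, 0.08      # illustrative values
w = gompertz(t, true_A, true_b, true_k) + rng.normal(0.0, 30.0, t.size)

popt, pcov = curve_fit(gompertz, t, w, p0=(3000.0, 3.0, 0.05))
se = np.sqrt(np.diag(pcov))                     # asymptotic standard errors
```

    A mixed-effects version would add a per-bird random deviation on A (or k) and estimate its variance component, which is what partitions BW variation into between- and within-bird parts.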

  5. Model Order Reduction for Electronic Circuits:

    DEFF Research Database (Denmark)

    Hjorth, Poul G.; Shontz, Suzanne

    Electronic circuits are ubiquitous; they are used in numerous industries including: the semiconductor, communication, robotics, auto, and music industries (among many others). As products become more and more complicated, their electronic circuits also grow in size and complexity. This increased ...... in the semiconductor industry. Circuit simulation proceeds by using Maxwell’s equations to create a mathematical model of the circuit. The boundary element method is then used to discretize the equations, and the variational form of the equations are then solved on the graph network....

  6. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Directory of Open Access Journals (Sweden)

    Jiri Kadlec

    2008-08-01

    Full Text Available A high performance RLS lattice filter with the estimation of an unknown order and forgetting factor of the identified system was developed and implemented as a PCORE coprocessor for Xilinx EDK. The coprocessor implemented in FPGA hardware can fully exploit parallelism in the algorithm and remove load from a microprocessor. The EDK integration allows effective programming and debugging of hardware accelerated DSP applications. The RLS lattice core extended by the order and forgetting factor estimation was implemented using the logarithmic number system (LNS) arithmetic. An optimal mapping of the RLS lattice onto the LNS arithmetic units found by cyclic scheduling was used. The schedule allows us to run four independent filters in parallel on one arithmetic macro set. The coprocessor containing the RLS lattice core is highly configurable. It allows one to exploit the modular structure of the RLS lattice filter and construct a pipelined serial connection of filters for even higher performance. It also allows running independent parallel filters on the same input with different forgetting factors, in order to estimate which order and exponential forgetting factor best describe the observed data. The FPGA coprocessor implementation presented in the paper is able to evaluate the RLS lattice filter of order 504 at 12 kHz input data sampling rate. For filters of order up to 20, the probability of order and forgetting factor hypotheses can be continually estimated. It has been demonstrated that the implemented coprocessor accelerates the Microblaze solution up to 20 times. It has also been shown that the coprocessor performs up to 2.5 times faster than a highly optimized solution using a 50 MIPS SHARC DSP processor, while the Microblaze is capable of performing other tasks concurrently.
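
    For readers unfamiliar with RLS, the direct (transversal) form with an exponential forgetting factor shows the recursion that the lattice implementation computes order-recursively in hardware. This sketch is a generic textbook RLS identifying a noiseless FIR system, not the implemented lattice coprocessor; the filter order and forgetting factor are illustrative.

```python
import numpy as np

def rls(x, d, order=4, lam=0.98, delta=100.0):
    # Transversal RLS with exponential forgetting factor lam.
    w = np.zeros(order)
    P = np.eye(order) * delta             # inverse-correlation-matrix estimate
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]  # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)     # gain vector
        e = d[n] - w @ u                  # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
true_w = np.array([0.5, -0.3, 0.2, 0.1])   # unknown FIR system to identify
d = np.convolve(x, true_w)[:len(x)]        # noiseless system output
w_hat = rls(x, d)
```

    Running several such filters with different orders and forgetting factors on the same input, and comparing their prediction errors, is the hypothesis-estimation scheme the coprocessor parallelizes.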

  7. Reconsidered estimates of the 10th order QED contributions to the muon anomaly

    CERN Document Server

    Kataev, A L

    2006-01-01

    The problem of estimating the 10th order QED corrections to the muon anomalous magnetic moment is reconsidered. The incorporation of the recently improved contributions to the $\alpha^4$ and $\alpha^5$-corrections to $a_{\mu}$ within the renormalization-group inspired scheme-invariant approach leads to the estimate $a_{\mu}^{(10)}\approx 643 (\alpha/\pi)^5$. It is in good agreement with the estimate $a_{\mu}^{(10)}= 663(20) (\alpha/\pi)^5$, obtained by Kinoshita and Nio from the numerical calculations of 2958 10th-order diagrams, which are considered to be more important than the still uncalculated 6122 10th-order $m_{\mu}/m_e$-dependent vertex graphs, and 12672 5-loop diagrams, responsible for the mass-independent constant contribution both to $a_{\mu}$ and $a_e$. This confirms Kinoshita and Nio's conjecture about the dominance of the 10th-order diagrams calculated by them. Comparisons with other estimates of the $\alpha^5$-contributions to $a_{\mu}$, which exist in the literature, are presented.

  8. A POSTERIORI ERROR ESTIMATE OF THE DSD METHOD FOR FIRST-ORDER HYPERBOLIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    康彤; 余德浩

    2002-01-01

    A posteriori error estimate of the discontinuous-streamline diffusion method for first-order hyperbolic equations was presented, which can be used to adjust space mesh reasonably. A numerical example is given to illustrate the accuracy and feasibility of this method.

  9. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.

  10. Bayesian mixture models for spectral density estimation

    OpenAIRE

    Cadonna, Annalisa

    2017-01-01

    We introduce a novel Bayesian modeling approach to spectral density estimation for multiple time series. Considering first the case of non-stationary time series, the log-periodogram of each series is modeled as a mixture of Gaussian distributions with frequency-dependent weights and mean functions. The implied model for the log-spectral density is a mixture of linear mean functions with frequency-dependent weights. The mixture weights are built through successive differences of a logit-normal di...

  11. Estimation and uncertainty of reversible Markov models

    CERN Document Server

    Trendelkamp-Schroer, Benjamin; Paul, Fabian; Noé, Frank

    2015-01-01

    Reversibility is a key concept in the theory of Markov models, simplified kinetic models for the conformation dynamics of molecules. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model relies heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is therefore crucial to the successful application of the previously developed theory. In this work we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the estimation when reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without a given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference.
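
    A quick way to see what reversibility buys is the naive symmetrized estimator below. It is not the iterative maximum likelihood estimator discussed in the abstract, but it yields a transition matrix that satisfies detailed balance by construction; the count matrix is fabricated for illustration.

```python
import numpy as np

def reversible_estimate(C):
    # Symmetrize the transition counts; the resulting T is reversible
    # w.r.t. pi proportional to the row sums of (C + C^T).
    S = C + C.T
    T = S / S.sum(axis=1, keepdims=True)
    pi = S.sum(axis=1) / S.sum()
    return T, pi

C = np.array([[80., 15.,  5.],    # hypothetical observed transition counts
              [10., 60., 30.],
              [ 8., 25., 70.]])
T, pi = reversible_estimate(C)
flux = pi[:, None] * T            # detailed balance: pi_i T_ij == pi_j T_ji
```

    The symmetry of `flux` is exactly the detailed-balance condition that the full reversible MLE enforces while also maximizing the likelihood of the observed counts.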

  12. Developing Physician Migration Estimates for Workforce Models.

    Science.gov (United States)

    Holmes, George M; Fraher, Erin P

    2017-02-01

    To understand factors affecting specialty heterogeneity in physician migration. Physicians in the 2009 American Medical Association Masterfile data were matched to those in the 2013 file. Office locations were geocoded in both years to one of 293 areas of the country. Estimated utilization, calculated for each specialty, was used as the primary predictor of migration. Physician characteristics (e.g., specialty, age, sex) were obtained from the 2009 file. Area characteristics and other factors influencing physician migration (e.g., rurality, presence of teaching hospital) were obtained from various sources. We modeled physician location decisions as a two-part process: First, the physician decides whether to move. Second, conditional on moving, a conditional logit model estimates the probability a physician moved to a particular area. Separate models were estimated by specialty and whether the physician was a resident. Results differed between specialties and according to whether the physician was a resident in 2009, indicating heterogeneity in responsiveness to policies. Physician migration was higher between geographically proximate states with higher utilization for that specialty. Models can be used to estimate specialty-specific migration patterns for more accurate workforce modeling, including simulations to model the effect of policy changes. © Health Research and Educational Trust.

  13. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    Science.gov (United States)

    Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.

    2013-08-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase in the probability of Tmax being greater than the resin degradation temperature. A similar trend is also seen for the probability of the CDOCE being less than 0.8.
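
    For a linear limit state with independent normal variables, FORM is exact and reduces to a reliability index beta with failure probability Phi(-beta). The sketch below compares that closed form against crude Monte Carlo; the margin/load numbers are illustrative, not the pultrusion quantities of the study.

```python
import math, random

mu_R, s_R = 10.0, 1.0     # capacity (e.g., a critical temperature margin)
mu_S, s_S = 7.0, 1.5      # demand

# FORM: for g = R - S with independent normal R, S the index is exact.
beta = (mu_R - mu_S) / math.sqrt(s_R**2 + s_S**2)
pf_form = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta)

# Crude Monte Carlo check of the same failure probability
rng = random.Random(0)
n = 200_000
fails = sum(rng.gauss(mu_R, s_R) - rng.gauss(mu_S, s_S) < 0 for _ in range(n))
pf_mc = fails / n
```

    For the nonlinear LSFs of the pultrusion model, FORM instead iterates to the most probable failure point in standard normal space, which is why the study cross-checks it against Monte Carlo simulations.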

  14. Parameter estimation for the Pearson type 3 distribution using order statistics

    Science.gov (United States)

    Rocky Durrans, S.

    1992-05-01

    The Pearson type 3 distribution and its relatives, the log Pearson type 3 and gamma family of distributions, are among the most widely applied in the field of hydrology. Parameter estimation for these distributions has been accomplished using the method of moments, the methods of mixed moments and generalized moments, and the methods of maximum likelihood and maximum entropy. This study evaluates yet another estimation approach, which is based on the use of the properties of an extreme-order statistic. Based on the hypothesis that the population is distributed as Pearson type 3, this estimation approach yields both parameter and 100-year quantile estimators that have lower biases and variances than those of the method of moments approach as recommended by the US Water Resources Council.
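
    The method-of-moments baseline that the order-statistics approach is compared against can be written down directly from the first three sample moments. The shifted-gamma parameterization below is one standard form of the Pearson type 3 distribution, fitted here to an illustrative synthetic sample.

```python
import math, random

def pearson3_mom(sample):
    # Method-of-moments fit of a Pearson type 3 (shifted gamma):
    # shape a, scale c (sign follows the skew), location m.
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in sample) / (n * std ** 3)
    a = (2.0 / skew) ** 2        # shape, from skew = 2 / sqrt(a)
    c = std * skew / 2.0         # scale, from var = a * c**2
    m = mean - a * c             # location, from mean = m + a * c
    return a, c, m

random.seed(4)
# Synthetic sample: gamma(shape=4, scale=2) shifted by 5, so skew = 1
sample = [5.0 + random.gammavariate(4.0, 2.0) for _ in range(100_000)]
a_hat, c_hat, m_hat = pearson3_mom(sample)
```

    The sensitivity of all three estimates to the sample skew is the weakness that motivates the alternative estimators (mixed moments, maximum likelihood, and the extreme-order-statistic approach of this study).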

  15. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In largest parts of the domain, a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and provides information where to use more accurate models. The error is measured in terms of output functionals. Therefore, one has to consider adjoint problems which carry sensitivity information. This concept is demonstrated by means of ozone formation and pollution emission.

  16. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality (i.e., when in expectation each comparison set of that cardinality occurs the same number of times), for a broad class of Thurstone choice models the mean squared error decreases with the cardinality of comparison sets, but only marginally, according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
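
    For the Bradley-Terry special case (pair comparisons), the maximum likelihood strengths can be computed with Zermelo's MM fixed-point iteration, sketched below on a hypothetical win matrix. This is the pairwise building block that rank-breaking methods reduce larger comparison sets to.

```python
# wins[i][j] = number of times item i beat item j (hypothetical data;
# each pair plays 10 games)
wins = [[0, 7, 8, 9],
        [3, 0, 6, 8],
        [2, 4, 0, 7],
        [1, 2, 3, 0]]
n = len(wins)
w = [1.0] * n                        # strength parameters, defined up to scale

for _ in range(200):                 # MM (Zermelo) fixed-point updates
    new = []
    for i in range(n):
        num = sum(wins[i][j] for j in range(n) if j != i)
        den = sum((wins[i][j] + wins[j][i]) / (w[i] + w[j])
                  for j in range(n) if j != i)
        new.append(num / den)
    s = sum(new)
    w = [x / s for x in new]         # normalize for identifiability

p01 = w[0] / (w[0] + w[1])           # predicted P(item 0 beats item 1)
```

    The mean squared error characterizations in the paper describe how accurate such strength estimates are as a function of the number and cardinality of the comparison sets.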

  17. Numerical simulation for SI model with variable-order fractional

    Directory of Open Access Journals (Sweden)

    mohamed mohamed

    2016-04-01

    Full Text Available In this paper numerical studies for the variable-order fractional delay differential equations are presented. Adams-Bashforth-Moulton algorithm has been extended to study this problem, where the derivative is defined in the Caputo variable-order fractional sense. Special attention is given to prove the error estimate of the proposed method. Numerical test examples are presented to demonstrate utility of the method. Chaotic behaviors are observed in variable-order one dimensional delayed systems.

  18. A Model-Based Approach for Visualizing the Dimensional Structure of Ordered Successive Categories Preference Data

    Science.gov (United States)

    DeSarbo, Wayne S.; Park, Joonwook; Scott, Crystal J.

    2008-01-01

    A cyclical conditional maximum likelihood estimation procedure is developed for the multidimensional unfolding of two- or three-way dominance data (e.g., preference, choice, consideration) measured on ordered successive category rating scales. The technical description of the proposed model and estimation procedure are discussed, as well as the…

  19. Partial Orders and Fully Abstract Models for Concurrency

    DEFF Research Database (Denmark)

    Engberg, Uffe Henrik

    1990-01-01

    In this thesis sets of labelled partial orders are employed as fundamental mathematical entities for modelling nondeterministic and concurrent processes thereby obtaining so-called noninterleaving semantics. Based on different closures of sets of labelled partial orders, simple algebraic languages...

  20. Crosstalk Model and Estimation Formula for VLSI Interconnect Wires

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    We develop an interconnect crosstalk estimation model on the assumption of linearity for the CMOS device. First, we analyze the terminal response of the RC model under the worst-case condition, from the S domain to the time domain. The exact third-order coefficients in the S domain are obtained from the interconnect tree model. Based on this, a crosstalk peak estimation formula is presented. Unlike other crosstalk equations in the literature, this formula uses only the coupling capacitance and ground capacitance as parameters. Experimental results show that, compared with SPICE results, the estimation formulas are simple and accurate. The model is therefore expected to be useful in such fields as layout-driven logic and high-level synthesis, performance-driven floorplanning and interconnect planning.

  1. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    Science.gov (United States)

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for >10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of >4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P < 10^-20) by using probabilistic topic models to summarize clinical data into up to 32 topics. Many of these latent topics yield natural clinical interpretations (e.g., "critical care," "pneumonia," "neurologic evaluation"). Existing order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support.
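
    The orders-as-words analogy can be sketched with an off-the-shelf LDA implementation. The tiny admission-by-orderable count matrix below is fabricated for illustration (column names are hypothetical) and is far smaller than the >10 K admissions used in the study.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy patient-by-order count matrix: rows are admissions, columns are
# hypothetical orderables [chest_xray, abx, cultures, ekg, troponin, asa].
X = np.array([
    [3, 4, 2, 0, 0, 0],   # pneumonia-like admissions
    [2, 5, 1, 0, 1, 0],
    [4, 3, 2, 0, 0, 1],
    [0, 0, 0, 4, 3, 3],   # cardiac-like admissions
    [1, 0, 0, 3, 4, 2],
    [0, 1, 0, 5, 2, 4],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0, max_iter=50)
theta = lda.fit_transform(X)           # per-admission topic mixtures
dominant = theta.argmax(axis=1)        # each admission's dominant latent topic
```

    In the paper's use case, the fitted topic-word (topic-order) distributions are then used to rank candidate orders for a new admission, which is what is scored against human-authored order sets by AUC, precision, and recall.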

  2. Sliding Mode Control Design via Reduced Order Model Approach

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a design of continuous-time sliding mode control for the higher order systems via reduced order model. It is shown that a continuous-time sliding mode control designed for the reduced order model gives similar performance for the higher order system. The method is illustrated by numerical examples. The paper also introduces a technique for design of a sliding surface such that the system satisfies a cost-optimality condition when on the sliding surface.

  3. A hybrid finite mixture model for exploring heterogeneous ordering patterns of driver injury severity.

    Science.gov (United States)

    Ma, Lu; Wang, Guan; Yan, Xuedong; Weng, Jinxian

    2016-04-01

    Debates on the ordering patterns of crash injury severity are ongoing in the literature. Models without proper econometrical structures for accommodating the complex ordering patterns of injury severity could result in biased estimations and misinterpretations of factors. This study proposes a hybrid finite mixture (HFM) model aiming to capture heterogeneous ordering patterns of driver injury severity while enhancing modeling flexibility. It attempts to probabilistically partition samples into two groups in which one group represents an unordered/nominal data-generating process while the other represents an ordered data-generating process. Conceptually, the newly developed model offers flexible coefficient settings for mining additional information from crash data, and more importantly it allows the coexistence of multiple ordering patterns for the dependent variable. A thorough modeling performance comparison is conducted between the HFM model, and the multinomial logit (MNL), ordered logit (OL), finite mixture multinomial logit (FMMNL) and finite mixture ordered logit (FMOL) models. According to the empirical results, the HFM model presents a strong ability to extract information from the data, and more importantly to uncover heterogeneous ordering relationships between factors and driver injury severity. In addition, the estimated weight parameter associated with the MNL component in the HFM model is greater than the one associated with the OL component, which indicates a larger likelihood of the unordered pattern than the ordered pattern for driver injury severity.

  4. Model for Estimation Urban Transportation Supply-Demand Ratio

    Directory of Open Access Journals (Sweden)

    Chaoqun Wu

    2015-01-01

    Full Text Available The paper establishes an estimation model of the urban transportation supply-demand ratio (TSDR) to quantitatively describe the conditions of an urban transport system and to support a theoretical basis for transport policy-making. This TSDR estimation model is supported by the system dynamics principle and VENSIM (an application that simulates the real system). It was accomplished by long-term observation of eight cities’ transport conditions and by analyzing the estimated results of TSDR from fifteen sets of refined data. The estimated results indicate that an urban TSDR can be classified into four grades representing four transport conditions: “scarce supply,” “short supply,” “supply-demand balance,” and “excess supply.” These results imply that transport policies or measures can be quantified to facilitate the process of ordering and screening them.

  5. Estimating Model Evidence Using Data Assimilation

    Science.gov (United States)

    Carrassi, Alberto; Bocquet, Marc; Hannart, Alexis; Ghil, Michael

    2017-04-01

    We review the field of data assimilation (DA) from a Bayesian perspective and show that, in addition to its by now common application to state estimation, DA may be used for model selection. An important special case of the latter is the discrimination between a factual model - which corresponds, to the best of the modeller's knowledge, to the situation in the actual world in which a sequence of events has occurred - and a counterfactual model, in which a particular forcing or process might be absent or just quantitatively different from the actual world. Three different ensemble-DA methods are reviewed for this purpose: the ensemble Kalman filter (EnKF), the ensemble four-dimensional variational smoother (En-4D-Var), and the iterative ensemble Kalman smoother (IEnKS). An original contextual formulation of model evidence (CME) is introduced. It is shown how to apply these three methods to compute CME, using the approximated time-dependent probability distribution functions (pdfs) each of them provides in the process of state estimation. The theoretical formulae so derived are applied to two simplified nonlinear and chaotic models: (i) the Lorenz three-variable convection model (L63), and (ii) the Lorenz 40-variable midlatitude atmospheric dynamics model (L95). The numerical results of these three DA-based methods and those of an integration based on importance sampling are compared. It is found that better CME estimates are obtained by using DA, and the IEnKS method appears to be best among the DA methods. Differences among the performance of the three DA-based methods are discussed as a function of model properties. Finally, the methodology is implemented for parameter estimation and for event attribution.

  6. Stochastic reduced order models for inverse problems under uncertainty.

    Science.gov (United States)

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
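As a rough illustration of the SROM idea, the sketch below builds a discrete approximation to a standard normal random variable and measures how its step CDF tracks the true CDF. All names and the quantile-midpoint construction with uniform probabilities are our own simplification; the paper optimizes both the samples and the probabilities.

```python
import numpy as np
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bisect_icdf(p, lo=-10.0, hi=10.0):
    """Inverse CDF of the standard normal via bisection (stdlib-only)."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def build_srom(m):
    """Crude m-point SROM for a standard normal: samples at quantile
    midpoints with equal probabilities (a simplification; the paper
    optimizes samples and probabilities jointly)."""
    probs = np.full(m, 1.0 / m)
    targets = (np.arange(m) + 0.5) / m
    samples = np.array([bisect_icdf(t) for t in targets])
    return samples, probs

def cdf_error(samples, probs, grid):
    """Max deviation between the SROM step CDF and the true CDF."""
    srom_cdf = np.array([probs[samples <= x].sum() for x in grid])
    true_cdf = np.array([normal_cdf(x) for x in grid])
    return np.abs(srom_cdf - true_cdf).max()
```

Increasing the model size m shrinks the CDF mismatch, which is the sense in which the discrete SROM approximates the continuous random element.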

  7. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in econometric research recently, due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise when modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take the presence of cross-sectional dependence into consideration, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  8. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    OpenAIRE

    Hadiyanto Hadiyanto; AJB van Boxtel

    2012-01-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure, i.e. first the heat and mass transfer related parameters, then the parameters related to product transformations and finally pro...

  9. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  10. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P -G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

  11. Estimating an Activity Driven Hidden Markov Model

    OpenAIRE

    Meyer, David A.; Shakeel, Asif

    2015-01-01

    We define a Hidden Markov Model (HMM) in which each hidden state has time-dependent $\\textit{activity levels}$ that drive transitions and emissions, and show how to estimate its parameters. Our construction is motivated by the problem of inferring human mobility on sub-daily time scales from, for example, mobile phone records.

  12. First Versus Second Order Latent Growth Curve Models: Some Insights From Latent State-Trait Theory

    OpenAIRE

    Geiser, Christian; Keller, Brian; Lockhart, Ginger

    2013-01-01

    First order latent growth curve models (FGMs) estimate change based on a single observed variable and are widely used in longitudinal research. Despite significant advantages, second order latent growth curve models (SGMs), which use multiple indicators, are rarely used in practice, and not all aspects of these models are widely understood. In this article, our goal is to contribute to a deeper understanding of theoretical and practical differences between FGMs and SGMs. We define the latent ...

  13. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.

    2017-07-25

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.

  14. Fractional-order in a macroeconomic dynamic model

    Science.gov (United States)

    David, S. A.; Quintino, D. D.; Soliani, J.

    2013-10-01

    In this paper, we applied the Riemann-Liouville approach to carry out numerical simulations of a set of equations that represent a fractional-order macroeconomic dynamic model. It is a generalization of a dynamic model recently reported in the literature. The aforementioned equations have been simulated for several cases involving integer and non-integer order analysis, with different values of the fractional order. The time histories and the phase diagrams have been plotted to visualize the effect of the fractional-order approach. The new contribution of this work arises from the fact that the macroeconomic dynamic model proposed here involves the public sector deficit equation, which renders the model more realistic and complete when compared with the ones encountered in the literature. The results reveal that the fractional-order macroeconomic model can exhibit reasonable, realistic behavior for macroeconomic systems and might offer greater insights towards the understanding of these complex dynamic systems.
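Riemann-Liouville derivatives in such fractional-order simulations are commonly discretized with the Grünwald-Letnikov scheme. The sketch below is a generic numerical scheme with our own function names, not the paper's exact implementation; it approximates the fractional derivative of order alpha on a uniform grid using the recursive binomial weights.

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f_vals, alpha, h):
    """Grunwald-Letnikov approximation of the Riemann-Liouville
    fractional derivative on a uniform grid with step h (sketch)."""
    n = len(f_vals)
    # Recursive binomial weights: w_0 = 1, w_k = w_{k-1} * (k-1-alpha)/k
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    out = np.empty(n)
    for i in range(n):
        # Sum over the signal history from t_i back to t_0
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / h**alpha
    return out
```

For f(t) = t the exact Riemann-Liouville derivative is t^(1-alpha)/Gamma(2-alpha), which the scheme reproduces to first order in h.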

  15. Solar energy estimation using REST2 model

    Directory of Open Access Journals (Sweden)

    M. Rizwan, Majid Jamil, D. P. Kothari

    2010-03-01

    Full Text Available The network of solar energy measuring stations is relatively sparse throughout the world. In India, only the IMD (India Meteorological Department), Pune, provides data for a few stations, which are considered the base data for research purposes. However, hourly data of measured energy are not available, even for those stations where measurements have already been made. Due to the lack of hourly measured data, estimation of solar energy at the earth’s surface is required. In the proposed study, hourly solar energy is estimated at four important Indian stations, namely New Delhi, Mumbai, Pune and Jaipur, keeping in mind their different climatic conditions. For this study, REST2 (Reference Evaluation of Solar Transmittance, 2 bands), a high-performance parametric model for the estimation of solar energy, is used. The REST2 derivation uses the same two-band scheme as CPCR2 (Code for Physical Computation of Radiation, 2 bands), but CPCR2 does not include NO2 absorption, which is an important parameter for estimating solar energy. In this study, using ground measurements during 1986-2000 as reference, a MATLAB program was written to evaluate the performance of the REST2 model at the four proposed stations. The solar energy at the four stations throughout the year is estimated and compared with CPCR2. The results obtained from the REST2 model show good agreement with the measured data on a horizontal surface. The study reveals that the REST2 model performs better than the other existing models under cloudless skies for Indian climatic conditions.

  16. ESTIMATION OF CHAIN REACTION BANKRUPTCY STRUCTURE BY CHANCE DISCOVERY METHOD- WITH TIME ORDER METHOD AND DIRECTED KEYGRAPH

    Institute of Scientific and Technical Information of China (English)

    Shinichi GODA; Yukio OHSAWA

    2007-01-01

    Chain reaction bankruptcy is regarded as a common phenomenon, and its effect is to be taken into account when a credit risk portfolio is analyzed. But consideration and modeling of its effect leave much room for improvement, mainly because methods for grasping relations among companies with limited data are underdeveloped. In this article, the chance discovery method is applied to estimate industrial relations that include companies' relations that transmit chain reactions of bankruptcy. The time order method and directed KeyGraph are newly introduced to distinguish and express the time order among defaults, which is essential information for the analysis of chain reaction bankruptcy. The steps for the data analysis are introduced and the result of an example analysis with default data in Kyushu, Japan, 2005 is presented. The structure estimated by the new method is compared with the structure of actual account receivable holders of bankrupted companies for evaluation.

  17. Filtering Error Estimates and Order of Accuracy via the Peano Kernel Theorem

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2011-02-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise. The concept of the order of accuracy of a filter is introduced and used as an organizing principle to compare the accuracy of different filters.
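The order-of-accuracy notion can be illustrated with a simple moment test: a linear filter reproduces polynomials of degree less than d exactly if and only if its zeroth moment is 1 and its moments 1 through d-1 vanish. The sketch below uses this moment criterion as a proxy for the paper's Peano-kernel analysis; the function name and tolerance are our own.

```python
import numpy as np

def filter_accuracy_order(coeffs, offsets, tol=1e-9):
    """Order of accuracy of a linear FIR filter: the smallest d such
    that the d-th moment of the coefficients does not vanish (with the
    zeroth moment normalized to 1). Polynomials of degree < d pass
    through the filter unchanged at the output point."""
    c = np.asarray(coeffs, float)
    j = np.asarray(offsets, float)
    if abs(c.sum() - 1.0) > tol:
        return 0  # does not even reproduce constants
    d = 1
    while abs(np.dot(c, j**d)) < tol:
        d += 1
    return d
```

For example, a 5-point moving average has order 2, while the 5-point quadratic Savitzky-Golay smoother (-3, 12, 17, 12, -3)/35 has order 4, so the latter distorts smooth signals far less for the same amount of noise reduction.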

  18. Joint frequency, 2D AOA and polarization estimation using fourth-order cumulants

    Institute of Scientific and Technical Information of China (English)

    王建英; 陈天琪

    2000-01-01

    Based on the fourth-order cumulant and the ESPRIT algorithm, a novel method for the joint estimation of frequency, two-dimensional angle of arrival (2D AOA) and polarization of multiple incoming independent spatial narrow-band non-Gaussian signals in an arbitrary Gaussian noise environment is proposed. The array is composed of crossed dipoles parallel to the coordinate axes. The crossed dipole positions are arbitrarily distributed. Computer simulation confirms its feasibility.

  20. On the estimation of the structure parameter of a normal distribution of order p

    Directory of Open Access Journals (Sweden)

    Angelo M. Mineo

    2007-10-01

    Full Text Available In this paper we compare four different approaches to estimating the structure parameter of a normal distribution of order p (often called the exponential power distribution). In particular, we have considered the maximization of the log-likelihood, of the profile log-likelihood, of the conditional profile log-likelihood, and a method based on an index of kurtosis. The results of a simulation study suggest that the latter approach is the best.

  1. First-order derivative spectrophotometric estimation of nabumetone and paracetamol in tablet dosage form

    OpenAIRE

    Rote, Ambadas R.; Bhalerao, Swapnil R.

    2011-01-01

    Aim: To develop and validate a simple, precise and accurate spectrophotometric method for the simultaneous estimation of nabumetone and paracetamol in their combined tablet dosage form. This method is based on first-order derivative spectroscopy. Materials and Methods: For determination of sampling wavelengths, each of nabumetone and paracetamol were scanned in the wavelength range of 200–400 nm in the spectrum mode and sampling wavelengths were selected at 261 nm (zero crossing of nabumetone...

  2. THE ESTIMATION OF ORDERING DEGREE OF CORONA-POLED NONLINEAR OPTICAL POLYMER FILMS

    Institute of Scientific and Technical Information of China (English)

    YE Cheng; DONG Haiou; WANG Jiafu

    1992-01-01

    The investigation of the electrochromic effect of corona-poled nonlinear optical polymer films is an effective method for the estimation of poling level and the selection of poling conditions. The poling electric field Ep and the orientational order parameter φ, which are important parameters for predicting d33 of poled films, can be calculated by a simple operation from the magnitude of the red shift of the charge transfer absorption band. The calculated results are in good agreement with the experimental data.

  3. MAXIMUM LIKELIHOOD ESTIMATION FOR PERIODIC AUTOREGRESSIVE MOVING AVERAGE MODELS.

    Science.gov (United States)

    Vecchia, A.V.

    1985-01-01

    A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.

  4. Flocking of Second-Order Multiagent Systems With Connectivity Preservation Based on Algebraic Connectivity Estimation.

    Science.gov (United States)

    Fang, Hao; Wei, Yue; Chen, Jie; Xin, Bin

    2017-04-01

    The problem of flocking of second-order multiagent systems with connectivity preservation is investigated in this paper. First, for estimating the algebraic connectivity as well as the corresponding eigenvector, a new decentralized inverse power iteration scheme is formulated. Then, based on the estimation of the algebraic connectivity, a set of distributed gradient-based flocking control protocols is built with a new class of generalized hybrid potential fields which could guarantee collision avoidance, desired distance stabilization, and the connectivity of the underlying communication network simultaneously. What is important is that the proposed control scheme allows the existing edges to be broken without violation of connectivity constraints, and thus yields more flexibility of motion and reduces the communication cost for the multiagent system. In the end, nontrivial comparative simulations and experimental results are performed to demonstrate the effectiveness of the theoretical results and highlight the advantages of the proposed estimation scheme and control algorithm.
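As a centralized sketch of the underlying idea (the paper's scheme is decentralized; the names and details here are ours), inverse power iteration on the graph Laplacian, after deflating the trivial all-ones eigenvector, converges to the Fiedler vector, and the Rayleigh quotient then gives the algebraic connectivity lambda_2:

```python
import numpy as np

def algebraic_connectivity(L, iters=200, seed=0):
    """Centralized inverse power iteration for the Fiedler value
    lambda_2 of a graph Laplacian L (a sketch; the paper formulates
    a decentralized variant of this computation)."""
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    ones = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        x -= ones * (ones @ x)                     # deflate the zero eigenvector
        x = np.linalg.lstsq(L, x, rcond=None)[0]   # one inverse-power step
        x /= np.linalg.norm(x)
    x -= ones * (ones @ x)
    x /= np.linalg.norm(x)
    return x @ (L @ x)                             # Rayleigh quotient ~ lambda_2
```

For the path graph on four nodes the exact value is 2 - sqrt(2), which the iteration recovers; lambda_2 > 0 certifies that the communication graph is connected.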

  5. A Refinement of the Classical Order Point Model

    OpenAIRE

    Farhad Moeeni; Stephen Replogle; Zariff Chaudhury; Ahmad Syamil

    2012-01-01

    Factors such as demand volume and replenishment lead time that influence production and inventory control systems are random variables. Existing inventory models incorporate the parameters (e.g., mean and standard deviation) of these statistical quantities to formulate inventory policies. In practice, only sample estimates of these parameters are available. The estimates are subject to sampling variation and hence are random variables. Whereas the effect of sampling variability on estimates o...

  6. Innovative first order elimination kinetics working model for easy learning

    Directory of Open Access Journals (Sweden)

    Navin Budania

    2016-06-01

    Conclusions: First-order elimination kinetics is easily understood with the help of the above working model. More and more working models could be developed for teaching difficult topics. [Int J Basic Clin Pharmacol 2016; 5(3): 862-864]
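The principle the model demonstrates is compact: under first-order elimination a constant fraction of the drug is removed per unit time, so concentration decays exponentially with a fixed half-life ln(2)/k. A minimal sketch (function names are ours):

```python
import math

def concentration(c0, k, t):
    """First-order elimination: a constant fraction is removed per
    unit time, so C(t) = C0 * exp(-k * t)."""
    return c0 * math.exp(-k * t)

def half_life(k):
    """Time for the concentration to fall to half its value; note it
    does not depend on the starting concentration."""
    return math.log(2) / k
```

For example, with an elimination rate constant k = 0.1 per hour, the concentration halves roughly every 6.93 hours regardless of the dose.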

  7. The second-order decomposition model of nonlinear irregular waves

    DEFF Research Database (Denmark)

    Yang, Zhi Wen; Bingham, Harry B.; Li, Jin Xuan;

    2013-01-01

    into the first- and the second-order super-harmonic as well as the second-order sub-harmonic components by transferring them into an identical Fourier frequency-space and using a Newton-Raphson iteration method. In order to evaluate the present model, a variety of monochromatic waves and the second...

  8. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing...... cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss......Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent...

  9. Joint Channel Estimation and Data Detection for Multihop OFDM Relaying System under Unknown Channel Orders and Doppler Frequencies

    CERN Document Server

    Min, Rui

    2012-01-01

    In this paper, channel estimation and data detection for a multihop relaying orthogonal frequency division multiplexing (OFDM) system are investigated under time-varying channels. Different from previous works, which depend highly on the statistical information of the doubly-selective channel (DSC) and noise to deliver accurate channel estimation and data detection results, we focus on more practical scenarios with unknown channel orders and Doppler frequencies. Firstly, we integrate the multilink, multihop channel matrices into one composite channel matrix. Then, we formulate the unknown channel using the generalized complex exponential basis expansion model (GCE-BEM) with a large oversampling factor to introduce channel sparsity in the delay-Doppler domain. To enable the identification of nonzero entries, sparsity-enhancing Gaussian distributions with Gamma hyperpriors are adopted. An iterative algorithm is developed under the variational inference (VI) framework. The proposed algorithm iteratively estimates the channel, re...

  10. Research on Modeling of Hydropneumatic Suspension Based on Fractional Order

    OpenAIRE

    Junwei Zhang; Sizhong Chen; Yuzhuang Zhao; Jianbo Feng; Chang Liu; Ying Fan

    2015-01-01

    With such excellent performance as nonlinear stiffness, adjustable vehicle height, and good vibration resistance, hydropneumatic suspension (HS) has been more and more widely applied to heavy vehicles and engineering vehicles. Traditional modeling methods are still confined to simple models that do not take many factors into consideration. A hydropneumatic suspension model based on fractional order (HSM-FO) is built with the advantage of fractional order (FO) in viscoelastic material modeling considerin...

  11. Exponential order statistic models of software reliability growth

    Science.gov (United States)

    Miller, D. R.

    1986-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
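As a concrete instance of this class, the sketch below samples failure times under the Jelinski-Moranda special case, where after i-1 faults have been fixed the remaining n-i+1 faults each fail at rate phi, so the i-th interfailure time is exponential with rate (n-i+1)*phi. Function names are ours.

```python
import random

def jelinski_moranda_failures(n_faults, phi, rng):
    """Sample one realization of cumulative failure times under the
    Jelinski-Moranda model, a special case of the exponential order
    statistic class: the hazard before the i-th failure is
    (n_faults - i + 1) * phi."""
    t, times = 0.0, []
    for i in range(1, n_faults + 1):
        rate = (n_faults - i + 1) * phi
        t += rng.expovariate(rate)  # exponential interfailure time
        times.append(t)
    return times
```

The expected time to the first failure is 1/(n_faults * phi); interfailure times lengthen as faults are removed, which is the reliability-growth behavior the class captures.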

  12. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
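A minimal sketch of the LASSO in the p >> n setting, solved with proximal gradient descent (ISTA): a gradient step on the squared loss followed by soft-thresholding, which is what produces exact zeros in the estimate. The implementation details and names are ours, not from the talk.

```python
import numpy as np

def lasso_ista(X, y, lam, iters=3000):
    """Sparse linear regression min_b 0.5*||y - X b||^2 + lam*||b||_1
    via proximal gradient (ISTA): gradient step, then soft-threshold."""
    n, p = X.shape
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        z = beta - step * grad
        # Soft-thresholding: the proximal operator of the l1 penalty
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta
```

With n = 50 samples and p = 100 variables, a 2-sparse signal is recovered with the bulk of the coefficients set exactly to zero, illustrating why such estimators are usable when p greatly exceeds n.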

  13. Modeling Reflective Higher-Order Constructs using Three Approaches with PLS Path Modeling: A Monte Carlo Comparison

    NARCIS (Netherlands)

    Wilson, B.; Henseler, J.

    2007-01-01

    Many studies in the social sciences are increasingly modeling higher-order constructs. PLS can be used to investigate models at a higher level of abstraction (Lohmöller, 1989). It is often chosen due to its ability to estimate complex models (Chin, 1998). The primary goal of this paper is to demons

  14. On the Economic Order Quantity Model With Transportation Costs

    NARCIS (Netherlands)

    S.I. Birbil (Ilker); K. Bulbul; J.B.G. Frenk (Hans); H.M. Mulder (Henry)

    2009-01-01

    We consider an economic order quantity type model with unit out-of-pocket holding costs, unit opportunity costs of holding, fixed ordering costs and general transportation costs. For these models, we analyze the associated optimization problem and derive an easy procedure for determining
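For reference, the classical economic order quantity that this model generalizes balances the fixed cost per order against the holding cost of stock, giving Q* = sqrt(2DK/h) for demand rate D, fixed ordering cost K, and unit holding cost h. A minimal sketch (names ours; the paper's transportation costs are omitted):

```python
import math

def eoq(demand_rate, fixed_order_cost, unit_holding_cost):
    """Classical economic order quantity Q* = sqrt(2*D*K / h),
    the minimizer of annual cost D/Q * K + h * Q / 2."""
    return math.sqrt(2.0 * demand_rate * fixed_order_cost / unit_holding_cost)

def annual_cost(q, demand_rate, fixed_order_cost, unit_holding_cost):
    """Ordering cost plus average holding cost for lot size q."""
    return demand_rate / q * fixed_order_cost + unit_holding_cost * q / 2.0
```

At Q* the two cost components are equal, and perturbing the lot size in either direction raises total cost, which is easy to check numerically.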

  15. Model-order reduction of biochemical reaction networks

    NARCIS (Netherlands)

    Rao, Shodhan; Schaft, Arjan van der; Eunen, Karen van; Bakker, Barbara M.; Jayawardhana, Bayu

    2013-01-01

    In this paper we propose a model-order reduction method for chemical reaction networks governed by general enzyme kinetics, including the mass-action and Michaelis-Menten kinetics. The model-order reduction method is based on the Kron reduction of the weighted Laplacian matrix which describes the gr

  16. Reduced order modeling of steady flows subject to aerodynamic constraints

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Vendl, Alexander; Goertz, Stefan

    2014-01-01

    A novel reduced-order modeling method based on proper orthogonal decomposition for predicting steady, turbulent flows subject to aerodynamic constraints is introduced. Model-order reduction is achieved by replacing the governing equations of computational fluid dynamics with a nonlinear weighted ...

  17. Second-order model selection in mixture experiments

    Energy Technology Data Exchange (ETDEWEB)

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
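The q(q+1)/2 term count comes from the Scheffé form of the full quadratic mixture model: q linear terms plus q(q-1)/2 pairwise cross products, with no intercept and no pure quadratic terms because the components sum to one. A quick sketch (names ours):

```python
from itertools import combinations

def scheffe_quadratic_terms(components):
    """Terms of the full second-order Scheffe mixture model: one
    linear term per component plus all pairwise cross products,
    q*(q+1)/2 terms in total."""
    linear = list(components)
    cross = [f"{a}*{b}" for a, b in combinations(components, 2)]
    return linear + cross
```

For the 10-component waste glass study this already gives 55 terms, which is why ill-conditioning and overfitting become concerns as q grows.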

  18. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-04-01

    Full Text Available A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute significant uncertainties to the methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of methane estimated fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane

  19. On Local Homogeneity and Stochastically Ordered Mixed Rasch Models

    Science.gov (United States)

    Kreiner, Svend; Hansen, Mogens; Hansen, Carsten Rosenberg

    2006-01-01

    Mixed Rasch models add latent classes to conventional Rasch models, assuming that the Rasch model applies within each class and that relative difficulties of items are different in two or more latent classes. This article considers a family of stochastically ordered mixed Rasch models, with ordinal latent classes characterized by increasing total…

  20. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    through turbulent eddies. This process is modeled using the mesoscale Weather Research and Forecasting (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been......Currently, the existing estimation of the extreme gust wind, e.g. the 50-year winds of 3 s values, in the IEC standard, is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical...... done for Denmark and two areas in South Africa. For South Africa, the extreme gust atlases were created from the output of the mesoscale modelling using Climate Forecasting System Reanalysis (CFSR) forcing for the period 1998 – 2010. The extensive measurements including turbulence...

  1. Four Order Electrostatic Discharge Circuit Model and its Simulation

    Directory of Open Access Journals (Sweden)

    Xiaodong Wang

    2012-12-01

    Full Text Available According to the IEC 61000-4-2 test standard issued by the International Electrotechnical Commission, a new ESD current expression is constructed through analysis of the electrostatic discharge current waveform characteristics and a numerical experiment method. Using the Laplace transform, a mathematical model of the ESD system is established. According to the mathematical model, a passive four-order ESD system circuit model and an active four-order ESD system circuit model are constructed and simulated. The simulation results meet the IEC 61000-4-2 standard and verify the consistency of the ESD current expression, the mathematical model and the circuit model.

  2. Order reduction of large-scale linear oscillatory system models

    Energy Technology Data Exchange (ETDEWEB)

    Trudnowski, D.J. (Pacific Northwest Lab., Richland, WA (United States))

    1994-02-01

    Eigenanalysis and signal analysis techniques for deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
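The Akaike-criterion selection step can be illustrated on a simple autoregressive family (a generic sketch with our own names, not the paper's Prony/eigenanalysis setting): fit models of increasing order by least squares and keep the order minimizing AIC = n*log(sigma^2) + 2k, which trades fit quality against model complexity.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR(order) fit; returns the residual variance."""
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return np.mean(resid**2)

def select_order_aic(x, max_order):
    """Pick the AR order minimizing AIC = n*log(sigma2) + 2k; the
    2k penalty stops the order from growing just to chase noise."""
    n = len(x)
    aics = []
    for k in range(1, max_order + 1):
        sigma2 = fit_ar(x, k)
        aics.append(n * np.log(sigma2) + 2 * k)
    return int(np.argmin(aics)) + 1
```

On data simulated from an AR(2) process the criterion will not underfit: the order-1 model leaves clearly larger residual variance, so the selected order is at least 2.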

  3. Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling

    Directory of Open Access Journals (Sweden)

    Ioana Cornel

    2005-01-01

    Full Text Available The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by an error propagation effect which drastically limits the order of the polynomial approximation. This phenomenon appears especially when high-order polynomial modeling is needed: representation of digital modulation signals or acoustic transient signals. The effect is caused by the technique used for polynomial order reduction, common to existing approaches: multiplication of the signal with the complex-conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations according to each polynomial order. We prove that this method considerably reduces the effect of error propagation: with this order reduction method, the estimation error at a given order depends only on the performance of the estimation method.

  4. First and Higher Order Effects on Zero Order Radiative Transfer Model

    Science.gov (United States)

    Neelam, M.; Mohanty, B.

    2014-12-01

    Microwave radiative transfer models are valuable tools for understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities and interactions always exist, and it is important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis (GSA) to address variable uncertainty, higher-order interactions, factor prioritization and factor fixing for the zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use a GSA technique to quantify first-order and higher-order effects on brightness temperature from the ZRT model. Our analyses reflect conditions observed during the growing agricultural season for corn and soybeans in two different regions: Iowa, U.S.A. and Winnipeg, Canada. We found that for corn fields in Iowa, there exist significant second-order interactions between soil moisture, surface roughness parameters (RMS height and correlation length) and vegetation parameters (vegetation water content, structure and scattering albedo), whereas in Winnipeg, second-order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, however, we found significant interactions only between soil moisture and surface roughness parameters.
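
    The first-order effects mentioned above are typically quantified as Sobol sensitivity indices. The sketch below estimates them by Monte Carlo on the standard Ishigami benchmark rather than the ZRT model (an illustrative stand-in, not the authors' analysis), using the Saltelli-style estimator S_i = E[f(B)(f(AB_i) − f(A))] / Var(Y).

    ```python
    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        """Ishigami test function: a benchmark with known Sobol indices."""
        return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 \
             + b * X[:, 2] ** 4 * np.sin(X[:, 0])

    def first_order_sobol(model, d, n, rng):
        """Monte Carlo estimate of first-order Sobol indices."""
        A = rng.uniform(-np.pi, np.pi, size=(n, d))
        B = rng.uniform(-np.pi, np.pi, size=(n, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # resample only the i-th input
            S[i] = np.mean(fB * (model(ABi) - fA)) / var
        return S

    rng = np.random.default_rng(1)
    S = first_order_sobol(ishigami, d=3, n=20000, rng=rng)
    print(np.round(S, 2))  # analytic values are roughly [0.31, 0.44, 0.00]
    ```
    
    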

  5. Robust Head Pose Estimation Using a 3D Morphable Model

    Directory of Open Access Journals (Sweden)

    Ying Cai

    2015-01-01

    Full Text Available Head pose estimation from single 2D images has been considered an important and challenging research task in computer vision. This paper presents a novel head pose estimation method which utilizes the shape model of the Basel face model and five fiducial points in faces. It adjusts shape deformation according to a Laplace distribution to accommodate shape variation across different persons. A new matching method based on the PSO (particle swarm optimization) algorithm is applied both to reduce the time cost of shape reconstruction and to achieve higher accuracy than traditional optimization methods. In order to objectively evaluate accuracy, we propose a new way to compute pose estimation errors. Experiments on the BFM-synthetic database, the BU-3DFE database, the CUbiC FacePix database, the CMU PIE face database, and the CAS-PEAL-R1 database show that the proposed method is robust, accurate, and computationally efficient.

  6. Stochastic Models for Phylogenetic Trees on Higher-order Taxa

    CERN Document Server

    Aldous, David; Popovic, Lea

    2007-01-01

    Simple stochastic models for phylogenetic trees on species have been well studied. But much paleontology data concerns time series or trees on higher-order taxa, and any broad picture of relationships between extant groups requires use of higher-order taxa. A coherent model for trees on (say) genera should involve both a species-level model and a model for the classification scheme by which species are assigned to genera. We present a general framework for such models, and describe three alternate classification schemes. Combining with the species-level model of Aldous-Popovic (2005), one gets models for higher-order trees, and we initiate analytic study of such models. In particular we derive formulas for the lifetime of genera, for the distribution of number of species per genus, and for the offspring structure of the tree on genera.

  7. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  8. A first-order seismotectonic regionalization of Mexico for seismic hazard and risk estimation

    Science.gov (United States)

    Zúñiga, F. Ramón; Suárez, Gerardo; Figueroa-Soto, Ángel; Mendoza, Avith

    2017-06-01

    The purpose of this work is to define a seismic regionalization of Mexico for seismic hazard and risk analyses. This seismic regionalization is based on seismic, geologic, and tectonic characteristics. To this end, a seismic catalog was compiled using the most reliable sources available. The catalog was homogenized in magnitude in order to avoid the differences in the way this parameter is reported by various agencies. Instead of using a linear regression to convert from mb and Md to Ms or Mw, using only events for which estimates of both magnitudes are available (i.e., paired data), we used the frequency-magnitude relations relying on the a and b values of the Gutenberg-Richter relation. The seismic regions are divided into three main categories: seismicity associated with the subduction process along the Pacific coast of Mexico, in-slab events within the down-going Cocos (COC) and Rivera (RIV) plates, and crustal seismicity associated with various geologic and tectonic regions. In total, 18 seismic regions were identified and delimited. For each, the a and b values of the Gutenberg-Richter relation were determined using maximum likelihood estimation. The a and b parameters were repeatedly estimated as a function of time for each region, in order to confirm their reliability and stability. The recurrence times predicted by the resulting Gutenberg-Richter relations are compared with the observed recurrence times of the larger historical and instrumental earthquakes in each region.
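
    The maximum-likelihood b-value estimation mentioned above is commonly done with Aki's (1965) closed form, sketched here on a synthetic catalog (illustrative only; the paper's catalog and completeness treatment are more involved):

    ```python
    import numpy as np

    def b_value_mle(mags, m_c, dm=0.0):
        """Aki (1965) maximum-likelihood b-value above completeness magnitude m_c.
        For catalogs binned to resolution dm, Utsu's correction shifts m_c by dm/2."""
        mags = np.asarray(mags)
        mags = mags[mags >= m_c]
        return np.log10(np.e) / (mags.mean() - (m_c - dm / 2.0))

    # Synthetic catalog: above m_c, the Gutenberg-Richter law implies
    # exponentially distributed magnitudes with mean m_c + log10(e)/b
    rng = np.random.default_rng(42)
    true_b, m_c = 1.0, 3.0
    mags = m_c + rng.exponential(scale=np.log10(np.e) / true_b, size=50000)
    b_hat = b_value_mle(mags, m_c)
    print(round(b_hat, 2))  # close to the true value of 1.0
    ```
    
    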

  9. Entropy Based Modelling for Estimating Demographic Trends.

    Directory of Open Access Journals (Sweden)

    Guoqi Li

    Full Text Available In this paper, an entropy-based method is proposed to forecast the demographic changes of countries. We formulate the estimation of future demographic profiles as a constrained optimization problem, anchored on the empirically validated assumption that the entropy of the age distribution increases in time. The procedure of the proposed method involves three stages, namely: (1) prediction of the age distribution of a country's population based on an "age-structured population model"; (2) estimation of the age distribution of each individual household size with an entropy-based formulation based on an "individual household size model"; and (3) estimation of the number of each household size based on a "total household size model". The last stage is achieved by projecting the age distribution of the country's population (obtained in stage 1) onto the age distributions of individual household sizes (obtained in stage 2). The effectiveness of the proposed method is demonstrated using real-world data, and it is general and versatile enough to be extended to other time-dependent demographic variables.

  10. Model-based estimation of individual fitness

    Science.gov (United States)

    Link, W.A.; Cooch, E.G.; Cam, E.

    2002-01-01

    Fitness is the currency of natural selection, a measure of the propagation rate of genotypes into future generations. Its various definitions have the common feature that they are functions of survival and fertility rates. At the individual level, the operative level for natural selection, these rates must be understood as latent features, genetically determined propensities existing at birth. This conception of rates requires that individual fitness be defined and estimated by consideration of the individual in a modelled relation to a group of similar individuals; the only alternative is to consider a sample of size one, unless a clone of identical individuals is available. We present hierarchical models describing individual heterogeneity in survival and fertility rates and allowing for associations between these rates at the individual level. We apply these models to an analysis of life histories of Kittiwakes (Rissa tridactyla ) observed at several colonies on the Brittany coast of France. We compare Bayesian estimation of the population distribution of individual fitness with estimation based on treating individual life histories in isolation, as samples of size one (e.g. McGraw & Caswell, 1996).

  11. Modeling of higher order systems using artificial bee colony algorithm

    Directory of Open Access Journals (Sweden)

    Aytekin Bağış

    2016-05-01

    Full Text Available In this work, modeling of higher-order systems based on the artificial bee colony (ABC) algorithm is examined. Model parameters for sample systems from the literature were obtained using the algorithm, and its performance is presented in comparison with other methods. Simulation results show that the ABC-algorithm-based system modeling approach can be used as an efficient and powerful method for higher-order systems.
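
    The ABC metaheuristic used above can be sketched in its generic form: employed, onlooker and scout phases over a population of candidate solutions. This is a minimal illustration on a toy quadratic objective; the paper would instead minimize a model-matching error between the higher-order system and its low-order model.

    ```python
    import numpy as np

    def abc_minimize(f, bounds, n_food=20, limit=20, iters=200, seed=0):
        """Minimal artificial bee colony loop (generic sketch, not the paper's code)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        d = len(lo)
        X = rng.uniform(lo, hi, size=(n_food, d))
        fit = np.array([f(x) for x in X])
        trials = np.zeros(n_food, dtype=int)

        def try_improve(i):
            k = rng.integers(n_food - 1)
            k += k >= i                        # a partner different from i
            j = rng.integers(d)
            v = X[i].copy()
            v[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fit[i]:                    # greedy selection
                X[i], fit[i], trials[i] = v, fv, 0
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_food):            # employed bees
                try_improve(i)
            p = 1.0 / (1.0 + fit)              # fitness-proportional choice
            p /= p.sum()
            for i in rng.choice(n_food, size=n_food, p=p):   # onlooker bees
                try_improve(i)
            worn = trials > limit              # scouts abandon stale sources
            X[worn] = rng.uniform(lo, hi, size=(int(worn.sum()), d))
            fit[worn] = [f(x) for x in X[worn]]
            trials[worn] = 0
        best = int(fit.argmin())
        return X[best], fit[best]

    # Toy objective: recover the parameter vector (2, -1)
    target = np.array([2.0, -1.0])
    x, fx = abc_minimize(lambda p: np.sum((p - target) ** 2),
                         bounds=[(-5, 5), (-5, 5)])
    print(x, fx)  # close to [2, -1] with a near-zero objective
    ```
    
    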

  12. Dual equivalence in models with higher-order derivatives

    CERN Document Server

    Bazeia, D; Nascimento, J R S; Ribeiro, R F; Wotzasek, C

    2003-01-01

    We introduce a class of higher-order derivative models in (2+1) space-time dimensions. The models are described by a vector field and contain a Proca-like mass term which prevents gauge invariance. We use the gauge embedding procedure to generate another class of higher-order derivative models, gauge invariant and dual to the former class. We also show that the gauge embedding approach works appropriately when the vector field couples to fermionic matter.

  13. Impacts of Stochastic Modeling on GPS-derived ZTD Estimations

    CERN Document Server

    Jin, Shuanggen

    2010-01-01

    GPS-derived ZTD (Zenith Tropospheric Delay) plays a key role in near real-time weather forecasting, especially in improving the precision of Numerical Weather Prediction (NWP) models. The ZTD is usually estimated using the first-order Gauss-Markov process with a fairly large correlation, and under the assumption that all the GPS measurements, carrier phases or pseudo-ranges, have the same accuracy. However, these assumptions are unrealistic. This paper aims to investigate the impact of several stochastic modeling methods on GPS-derived ZTD estimations using Australian IGS data. The results show that the accuracy of GPS-derived ZTD can be improved using a suitable stochastic model for the GPS measurements. The stochastic model using satellite elevation angle-based cosine function is better than other investigated stochastic models. It is noted that, when different stochastic modeling strategies are used, the variations in estimated ZTD can reach as much as 1 cm. This improvement of ZTD estimation is certainly c...
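
    Elevation-dependent weighting of the kind compared above can be sketched as follows. The variance function σ²(E) = σ₀²/sin²E (a cosine function of the zenith angle) and the value of σ₀ are illustrative assumptions, not necessarily the paper's exact model:

    ```python
    import numpy as np

    def elevation_variance(elev_deg, sigma0=0.003):
        """Illustrative elevation-dependent variance model:
        sigma^2 = sigma0^2 / sin^2(E), i.e. low-elevation
        observations are down-weighted."""
        E = np.radians(elev_deg)
        return sigma0 ** 2 / np.sin(E) ** 2

    def weighted_lsq(A, y, elev_deg):
        """Weighted least squares with inverse variances as weights."""
        W = np.diag(1.0 / elevation_variance(elev_deg))
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

    # Toy example: estimate a single bias from residuals observed at five elevations
    rng = np.random.default_rng(7)
    elev = np.array([10.0, 25.0, 40.0, 60.0, 85.0])
    truth = 0.05
    y = truth + rng.normal(0.0, np.sqrt(elevation_variance(elev)))
    x = weighted_lsq(np.ones((5, 1)), y, elev)
    print(float(x[0]))  # close to 0.05
    ```
    
    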

  14. Hidden Markov models estimation and control

    CERN Document Server

    Elliott, Robert J; Moore, John B

    1995-01-01

    As more applications are found, interest in Hidden Markov Models continues to grow. Following comments and feedback from colleagues, students and other working with Hidden Markov Models the corrected 3rd printing of this volume contains clarifications, improvements and some new material, including results on smoothing for linear Gaussian dynamics. In Chapter 2 the derivation of the basic filters related to the Markov chain are each presented explicitly, rather than as special cases of one general filter. Furthermore, equations for smoothed estimates are given. The dynamics for the Kalman filte

  15. Modelling Limit Order Execution Times from Market Data

    Science.gov (United States)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means the ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants will experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions -- execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution whose tail fatness can be controlled, to model limit order execution times.
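
    The tail-fatness control that motivates the q-Weibull choice can be seen directly from its survival function; for 1 < q < 2 the tail decays as a power law rather than exponentially. The parameter values below are illustrative, not fitted to any market data:

    ```python
    import numpy as np

    def q_weibull_sf(t, q, beta, eta):
        """Survival function of the q-Weibull distribution (1 < q < 2):
        P(T > t) = [1 - (1-q)(t/eta)**beta] ** ((2-q)/(1-q)).
        The ordinary Weibull is recovered in the limit q -> 1."""
        base = 1.0 - (1.0 - q) * (t / eta) ** beta
        return np.clip(base, 0.0, None) ** ((2.0 - q) / (1.0 - q))

    # Tail comparison at t = 10 (beta = 1, eta = 1): the q-Weibull retains
    # orders of magnitude more probability of very long execution times
    sf_q = q_weibull_sf(10.0, q=1.3, beta=1.0, eta=1.0)
    sf_weibull = np.exp(-10.0)
    print(sf_q, sf_weibull)
    ```
    
    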

  16. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    2015-01-01

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  18. Bayesian parameter estimation for nonlinear modelling of biological pathways

    Directory of Open Access Journals (Sweden)

    Ghasemi Omid

    2011-12-01

    parameterized dynamic systems. Conclusions: Our proposed Bayesian algorithm successfully estimated parameters in nonlinear mathematical models of biological pathways. This method can be further extended to high-order systems and thus provides a useful tool for analyzing biological dynamics and extracting information from temporal data.

  19. Order-of-magnitude physics of neutron stars. Estimating their properties from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Reisenegger, Andreas; Zepeda, Felipe S. [Pontificia Universidad Catolica de Chile, Instituto de Astrofisica, Facultad de Fisica, Macul (Chile)

    2016-03-15

    We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of ''everyday'' matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties. (orig.)
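
    The "couple of simple estimates" the abstract refers to can be sketched in order-of-magnitude form. The Chandrasekhar-style mass scale and the nonrelativistic degenerate-star radius scaling below are standard textbook estimates, not the paper's specific derivation:

    ```python
    # Physical constants (SI)
    hbar = 1.054571817e-34
    c = 2.99792458e8
    G = 6.67430e-11
    m_n = 1.67492749804e-27   # neutron mass
    M_sun = 1.98892e30

    # Maximum mass scale: gravity overwhelms relativistic degeneracy
    # pressure when M ~ (hbar*c/G)**(3/2) / m_n**2
    M_max = (hbar * c / G) ** 1.5 / m_n ** 2
    print(M_max / M_sun)      # order of a couple of solar masses

    # Radius scaling of a nonrelativistic degenerate neutron star:
    # R ~ hbar**2 / (G * m_n**(8/3) * M**(1/3))
    R = hbar ** 2 / (G * m_n ** (8.0 / 3.0) * M_max ** (1.0 / 3.0))
    print(R / 1e3)            # a few kilometers: the right order of magnitude
    ```
    
    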

  20. Unvoiced/voiced classification and voiced harmonic parameters estimation using the third-order statistics

    Institute of Scientific and Technical Information of China (English)

    YING Na; ZHAO Xiao-hui; DONG Jing

    2007-01-01

    Unvoiced/voiced classification of speech is a challenging problem, especially under conditions of low signal-to-noise ratio or a non-white, non-stationary noise environment. To solve this problem, an algorithm for speech classification and a technique for the estimation of pairwise magnitude frequencies in voiced speech are proposed. The algorithm uses the third-order spectrum of the speech signal to remove noise, obtains the refined pitch and the maximum harmonic number from the least spectrum difference, and utilizes the spectral envelope to estimate the signal-to-noise ratio of speech harmonics. Speech classification, voicing probability, and harmonic parameters of the voiced frame can thus be obtained. Simulation results indicate that under complicated background noise, especially Gaussian noise, the proposed algorithm can effectively classify speech with high accuracy for the voicing probability and the voiced parameters.

  1. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    Science.gov (United States)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

    Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters, independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  2. Cosmic Acceleration in a Model of Fourth Order Gravity

    CERN Document Server

    Banerjee, Shreya; Singh, Tejinder P

    2015-01-01

    We investigate a fourth order model of gravity, having a free length parameter, and no cosmological constant or dark energy. We consider cosmological evolution of a flat Friedmann universe in this model for the case that the length parameter is of the order of present Hubble radius. By making a suitable choice for the present value of the Hubble parameter, and value of third derivative of the scale factor (the jerk) we find that the model can explain cosmic acceleration to the same degree of accuracy as the standard concordance model. If the free length parameter is assumed to be time-dependent, and of the order of the Hubble parameter of the corresponding epoch, the model can still explain cosmic acceleration, and provides a possible resolution of the cosmic coincidence problem. We also compare redshift drift in this model, with that in the standard model.

  3. Pointwise estimates of the Green's function of a second order differential operator with the variable coefficient

    Science.gov (United States)

    Ashyralyev, Allaberen; Tetikoglu, Fatih Sabahattin

    2015-09-01

    In this study, the Green's function of the second-order differential operator Ax defined by the formula Ax u = −a(x)u″(x) + δu(x), with δ ≥ 0, a(x) = a(x + 2π), x ∈ ℝ¹, and domain D(Ax) = { u(x) : u(x), u′(x), u″(x) ∈ C(ℝ¹), u(x) = u(x + 2π), x ∈ ℝ¹, ∫₀^{2π} u(x) dx = 0 }, is presented. Estimates for the Green's function and its derivative are obtained. The positivity of the operator Ax is proved.

  4. On Bayes linear unbiased estimation of estimable functions for the singular linear model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weiping; WEI Laisheng

    2005-01-01

    The unique Bayes linear unbiased estimator (Bayes LUE) of estimable functions is derived for the singular linear model. The superiority of Bayes LUE over the ordinary best linear unbiased estimator is investigated under the mean square error matrix (MSEM) criterion.

  5. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  6. Building Higher-Order Markov Chain Models with EXCEL

    Science.gov (United States)

    Ching, Wai-Ki; Fung, Eric S.; Ng, Michael K.

    2004-01-01

    Categorical data sequences occur in many applications such as forecasting, data mining and bioinformatics. In this note, we present higher-order Markov chain models for modelling categorical data sequences with an efficient algorithm for solving the model parameters. The algorithm can be implemented easily in a Microsoft EXCEL worksheet. We give a…
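
    A basic second-order (two-symbol-context) transition estimate of the kind such models build on can be computed in a few lines; this is a generic frequency-count sketch, not the note's EXCEL implementation of the parsimonious higher-order model:

    ```python
    from collections import Counter, defaultdict

    def second_order_transitions(seq):
        """Estimate P(next symbol | previous two symbols) from a categorical sequence."""
        counts = defaultdict(Counter)
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            counts[(a, b)][c] += 1
        return {ctx: {s: n / sum(cnt.values()) for s, n in cnt.items()}
                for ctx, cnt in counts.items()}

    seq = list("AABABAABABAABAB")
    P = second_order_transitions(seq)
    print(P[("A", "B")])   # in this sequence, "A","B" is always followed by "A"
    ```
    
    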

  7. Estimation of Upper Bound for Order of Filters used in Perfect Reconstruction Filter Banks

    Directory of Open Access Journals (Sweden)

    B. R. Nagendra

    2014-07-01

    Full Text Available Filter banks are widely used in a variety of applications such as signal compression, multi-channel transmission, conditioning of power supply, and coding and decoding of signals. Perfect reconstruction filter banks are used in applications where it is essential to reconstruct the original signal with minimum error. Compression of satellite vibration test data is one such application, where perfect reconstruction filter banks can be used to design wavelets; these wavelets are used in the transform coding stage of the compression algorithm. Filters used in perfect reconstruction filter banks are required to have high order to ensure good filter characteristics. The study carried out in this work estimates the upper bound for the order that can be assigned to filters used in perfect reconstruction filter banks.

  8. Estimating the robustness of composite CBA & MCA assessments by variation of criteria importance order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    , the proposed method uses surrogate weights based on rankings of the criteria, by the use of Rank Order Distribution (ROD) weights [3]. This reduces the problem to assigning a rank order value for each criterion. A method for combining the MCA with the cost-benefit analysis (CBA) is applied as described...... by Salling et al. in [4]. This methodology, COSIMA, uses a calibration indicator which expresses the trade-off between the CBA and MCA part resulting in a total rate expressing the attractiveness of each alternative. However, it should be mentioned that the proposed procedure for estimating the importance...... that the outcome of the method is a subset of the total solution space. The paper finishes up with a discussion and considerations about how to present the results. The question whether to present a single decision criterion, such as the benefit-cost rate or the net present value, or instead to present graphs...

  9. Model and Controller Order Reduction for Infinite Dimensional Systems

    Directory of Open Access Journals (Sweden)

    Fatmawati

    2010-05-01

    Full Text Available This paper presents a reduced-order model problem using a reciprocal transformation and balanced truncation, followed by low-order controller design, for infinite dimensional systems. The class of systems considered is that of exponentially stable state linear systems (A, B, C), where the operator A has a bounded inverse and the operators B and C are of finite rank and bounded. We can connect the system (A, B, C) with its reciprocal system via the solutions of the Lyapunov equations. The realization of the reciprocal system is reduced by balanced truncation. This result is translated back, using the reciprocal transformation, into the reduced-order model for the system (A, B, C). The low-order controller is then designed based on the reduced-order model. Numerical examples are studied using simulations of an Euler-Bernoulli beam to show the closed-loop performance.

  10. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    Science.gov (United States)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. The fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.

  11. Second-Order Model Reduction Based on Gramians

    Directory of Open Access Journals (Sweden)

    Cong Teng

    2012-01-01

    Full Text Available Some new and simple Gramian-based model order reduction algorithms, namely SVD methods, are presented for second-order linear dynamical systems. Compared to existing Gramian-based algorithms, that is, balanced truncation methods, they are competitive and more favorable for large-scale systems. Numerical examples show the validity of the algorithms. Error bounds on the error systems are discussed. Some observations are given on the structure of Gramians of second-order linear systems.
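
    The Gramian-based reduction idea can be sketched for the simpler first-order case (the paper extends such SVD/balanced-truncation machinery to second-order systems): solve the two Lyapunov equations for the Gramians, balance via a square-root SVD, and truncate. This numpy-only sketch uses Kronecker vectorization and is only suitable for small systems.

    ```python
    import numpy as np

    def lyap(A, Q):
        """Solve A X + X A^T + Q = 0 by Kronecker vectorization (small systems only)."""
        n = A.shape[0]
        I = np.eye(n)
        K = np.kron(I, A) + np.kron(A, I)
        x = np.linalg.solve(K, -Q.reshape(-1, order="F"))
        return x.reshape((n, n), order="F")

    def balanced_truncation(A, B, C, r):
        """Square-root balanced truncation of a stable (A, B, C) to r states."""
        P = lyap(A, B @ B.T)                 # controllability Gramian
        Q = lyap(A.T, C.T @ C)               # observability Gramian
        P, Q = (P + P.T) / 2, (Q + Q.T) / 2  # symmetrize before factoring
        Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
        U, s, Vt = np.linalg.svd(Lq.T @ Lp)  # s = Hankel singular values
        T = Lp @ Vt.T[:, :r] / np.sqrt(s[:r])
        Ti = (U[:, :r] / np.sqrt(s[:r])).T @ Lq.T
        return Ti @ A @ T, Ti @ B, C @ T, s

    # Lightly damped coupled oscillator (4 states) reduced to 2 states
    A = np.array([[0., 1., 0., 0.],
                  [-2., -0.1, 1., 0.],
                  [0., 0., 0., 1.],
                  [1., 0., -2., -0.1]])
    B = np.array([[0.], [1.], [0.], [0.]])
    C = np.array([[0., 0., 1., 0.]])
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
    print(hsv)  # Hankel singular values in decreasing order
    ```
    
    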

  12. Space-Time Estimates of Mild Solutions of a Class of Higher-Order Semilinear Parabolic Equations in Lp

    Directory of Open Access Journals (Sweden)

    Sandjo Albert N.

    2014-01-01

    Full Text Available We establish the well-posedness of boundary value problems for a family of nonlinear higher-order parabolic equations which comprises some models of epitaxial growth and thin film theory. In order to achieve this result, we provide a unified framework for constructing local mild solutions in C0([0, T]; Lp(Ω)) by introducing appropriate time-weighted Lebesgue norms inspired by a priori estimates of solutions. This framework allows us to obtain global existence of solutions provided that the initial data are reasonably small.

  13. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussion covers the dissimilitudes, such as computational efficiency, diffusion behavior and heavy-tail phenomena, between the new model and the variable-order fractional derivative model.

  14. Bayesian Estimation of a Mixture Model

    Directory of Open Access Journals (Sweden)

    Ilhem Merah

    2015-05-01

    Full Text Available We present the properties of a bathtub-curve reliability model having both sufficient adaptability and a minimal number of parameters, introduced by Idée and Pierrat (2010). This model is a mixture of a Gamma distribution G(2, 1/θ) and a new distribution L(θ). We are interested in Bayesian estimation of the parameters and survival function of this model with a squared-error loss function and non-informative priors, using the approximations of Lindley (1980) and Tierney and Kadane (1986). We illustrate the derived results using a statistical sample of 60 failure data relative to a technical device. Based on a simulation study, comparisons are made between these two methods and the maximum likelihood method for this two-parameter model.

  15. Hierarchical Boltzmann simulations and model error estimation

    Science.gov (United States)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while a subsequent refinement allows one to successively improve the result toward the complete Boltzmann result. We use a Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof of concept of such a framework. All representations of the hierarchy are rotationally invariant, and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows us to provide model error estimates by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  16. Estimation in Dirichlet random effects models

    CERN Document Server

    Kyung, Minjung; Casella, George; 10.1214/09-AOS731

    2010-01-01

    We develop a new Gibbs sampler for a linear mixed model with a Dirichlet process random effect term, which is easily extended to a generalized linear mixed model with a probit link function. Our Gibbs sampler exploits the properties of the multinomial and Dirichlet distributions, and is shown to be an improvement, in terms of operator norm and efficiency, over other commonly used MCMC algorithms. We also investigate methods for the estimation of the precision parameter of the Dirichlet process, finding that maximum likelihood may not be desirable, but a posterior mode is a reasonable approach. Examples are given to show how these models perform on real data. Our results complement both the theoretical basis of the Dirichlet process nonparametric prior and the computational work that has been done to date.

  17. Design of sensor networks for instantaneous inversion of modally reduced order models in structural dynamics

    Science.gov (United States)

    Maes, K.; Lourens, E.; Van Nimmen, K.; Reynders, E.; De Roeck, G.; Lombaert, G.

    2015-02-01

    In structural dynamics, the forces acting on a structure are often not well known. System inversion techniques may be used to estimate these forces from the measured response of the structure. This paper first derives conditions for the invertibility of linear system models that apply to any instantaneous input estimation or joint input-state estimation algorithm. The conditions ensure the identifiability of the dynamic forces and system states, their stability and uniqueness. The present paper considers the specific case of modally reduced order models, which are generally obtained from a physical, finite element model, or from experimental data. It is shown how in this case the conditions can be directly expressed in terms of the modal properties of the structure. A distinction is made between input estimation and joint input-state estimation. Each of the conditions is illustrated by a conceptual example. The practical implementation is discussed for a case study where a sensor network for a footbridge is designed.

  18. High order fluid model for ionization fronts in streamer discharges

    NARCIS (Netherlands)

    Markosyan, A.; Dujko, S.; Ebert, U.; Almeida, P.G.C.; Alves, L.L.; Guerra, V.

    2012-01-01

    A high order fluid model for streamer dynamics is developed by closing the system after the fourth moment of the Boltzmann equation in the local mean energy approximation. This is done by approximating the high order pressure tensor in the heat flux equation through the previous moments. The electric fi

  19. A Non-parametric Approach to Estimating Location Parameters under Simple Order

    Institute of Scientific and Technical Information of China (English)

    孙旭

    2005-01-01

    This paper deals with estimating parameters under a simple order when samples come from location models. Based on the idea of the Hodges-Lehmann estimator (H-L estimator), a new approach to estimating the parameters is proposed, which differs from the classical L1 and L2 isotonic regressions. An algorithm to compute the estimators is given. Monte Carlo simulations are used to compare the likelihood functions of the L1 estimators and the weighted isotonic H-L estimators.
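
    For reference, the classical L2 isotonic regression that the proposed estimator is compared against can be computed with the pool-adjacent-violators algorithm (PAVA). A minimal sketch with made-up data follows; this illustrates plain isotonic regression, not the paper's weighted H-L estimator:

```python
def isotonic_l2(y, w=None):
    """Pool Adjacent Violators: least-squares fit that is nondecreasing in index order."""
    n = len(y)
    w = w or [1.0] * n
    # each block holds [weighted mean, total weight, count of pooled points]
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # merge adjacent blocks while the order constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return fit

print(isotonic_l2([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

    The violating pair (3, 2) is pooled to its mean 2.5, which is the least-squares fit subject to the simple order.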

  20. Reduced-order models for vertical human-structure interaction

    Science.gov (United States)

    Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-09-01

    For slender and lightweight structures, the vibration serviceability under crowd-induced loading is often critical in design. Currently, designers rely on equivalent load models, upscaled from single-person force measurements. Furthermore, it is important to consider the mechanical interaction with the human body as this can significantly reduce the structural response. To account for these interaction effects, the contact force between the pedestrian and the structure can be modelled as the superposition of the force induced by the pedestrian on a rigid floor and the force resulting from the mechanical interaction between the structure and the human body. For the case of large crowds, however, this approach leads to models with a very high system order. In the present contribution, two equivalent reduced-order models are proposed to approximate the dynamic behaviour of the full-order coupled crowd-structure system. A numerical study is performed to evaluate the impact of the modelling assumptions on the structural response to pedestrian excitation. The results show that the full-order moving crowd model can be well approximated by a reduced-order model whereby the interaction with the pedestrians in the crowd is modelled using a single (equivalent) SDOF system.

  1. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  2. Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2007-07-01

    In estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of unknown form. For the estimation of such a model, we compare two adaptive estimators, a nonparametric kernel estimator and a nearest neighbour regression estimator, with the ordinary least squares estimator, and show the attractive performance of the adaptive estimators. The comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC), and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
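
    The two adaptive building blocks named above can be sketched generically. The data below are synthetic and the bandwidth and neighbour count are arbitrary, so this illustrates the estimators themselves, not the paper's money demand specification:

```python
import numpy as np

def kernel_regress(x_train, y_train, x0, h=0.5):
    # Nadaraya-Watson estimator with a Gaussian kernel of bandwidth h
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)
    return float(np.sum(w * y_train) / np.sum(w))

def knn_regress(x_train, y_train, x0, k=3):
    # nearest neighbour regression: average the responses of the k nearest x-values
    idx = np.argsort(np.abs(x_train - x0))[:k]
    return float(np.mean(y_train[idx]))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 3.0, 60)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)    # noisy linear relationship
print(kernel_regress(x, y, 1.5), knn_regress(x, y, 1.5))  # both near 3.0
```

    Both estimators adapt locally to the data, which is what makes them attractive when the error variance changes over the regressor range.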

  3. Direction-of-Arrival Estimation Based on Sparse Recovery with Second-Order Statistics

    Directory of Open Access Journals (Sweden)

    H. Chen

    2015-04-01

    Traditional direction-of-arrival (DOA) estimation techniques perform Nyquist-rate sampling of the received signals and as a result require high storage. To reduce the sampling rate, we introduce level-crossing (LC) sampling, which captures samples whenever the signal crosses predetermined reference levels; the LC-based analog-to-digital converter (LC ADC) has been shown to efficiently sample certain classes of signals. In this paper, we focus on the DOA estimation problem using second-order statistics based on the LC samples recorded on one sensor, along with the synchronous samples of the other sensors. A sparse angle-space representation can be found by solving an $\ell_1$ minimization problem, giving the number of sources and their DOAs. The experimental results show that our proposed method, when compared with some existing norm-constrained optimization compressive sensing (CS) algorithms as well as a subspace method, improves the DOA estimation performance while using fewer samples than Nyquist-rate sampling and reducing sensor activity, especially for signals with long silent periods.
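
    For contrast with the compressive approach described above, a conventional second-order-statistics baseline, a Bartlett beamformer scanning a uniform linear array at Nyquist-rate sampling, can be sketched as follows. The array geometry, noise level and source angle are made up:

```python
import numpy as np

def bartlett_doa(X, d=0.5):
    # Bartlett (delay-and-sum) spatial spectrum from the sample covariance of a
    # uniform linear array with sensor spacing d in wavelengths
    R = X @ X.conj().T / X.shape[1]              # sample covariance (second-order statistics)
    m = np.arange(X.shape[0])
    angles = np.linspace(-90.0, 90.0, 181)
    spec = []
    for th in angles:
        a = np.exp(2j * np.pi * d * m * np.sin(np.deg2rad(th)))   # steering vector
        spec.append(np.real(np.conj(a) @ R @ a))
    return angles, np.array(spec)

# one source at +20 degrees on an 8-sensor ULA, 200 snapshots, mild noise
rng = np.random.default_rng(1)
m = np.arange(8)
steer = np.exp(2j * np.pi * 0.5 * m * np.sin(np.deg2rad(20.0)))
s = rng.normal(size=200) + 1j * rng.normal(size=200)
noise = 0.1 * (rng.normal(size=(8, 200)) + 1j * rng.normal(size=(8, 200)))
X = np.outer(steer, s) + noise
angles, spec = bartlett_doa(X)
print(angles[np.argmax(spec)])  # near 20
```

    The LC-sampling method of the paper targets the same spectrum-peaking task while acquiring far fewer samples.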

  4. Estimation of Model Parameters for Steerable Needles

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B.; Okamura, Allison M.; Chirikjian, Gregory S.

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%. PMID:21643451

  5. Estimation of Model Parameters for Steerable Needles.

    Science.gov (United States)

    Park, Wooram; Reed, Kyle B; Okamura, Allison M; Chirikjian, Gregory S

    2010-01-01

    Flexible needles with bevel tips are being developed as useful tools for minimally invasive surgery and percutaneous therapy. When such a needle is inserted into soft tissue, it bends due to the asymmetric geometry of the bevel tip. This insertion with bending is not completely repeatable. We characterize the deviations in needle tip pose (position and orientation) by performing repeated needle insertions into artificial tissue. The base of the needle is pushed at a constant speed without rotating, and the covariance of the distribution of the needle tip pose is computed from experimental data. We develop the closed-form equations to describe how the covariance varies with different model parameters. We estimate the model parameters by matching the closed-form covariance and the experimentally obtained covariance. In this work, we use a needle model modified from a previously developed model with two noise parameters. The modified needle model uses three noise parameters to better capture the stochastic behavior of the needle insertion. The modified needle model provides an improvement of the covariance error from 26.1% to 6.55%.

  6. Modeling Business Processes with Azzurra: Order Fulfilment

    OpenAIRE

    Canobbio, Giulia; Dalpiaz, Fabiano

    2012-01-01

    Azzurra is a specification language for modeling and enacting business processes. Azzurra is founded on social concepts, such as roles, agents and commitments among them, and Azzurra models are social models consisting of networks of commitments. As such, Azzurra models support the flexible enactment of business processes, and provide a semantic notion of actor accountability and business process compliance. In this technical report, we apply Azzurra to the order fulfilment exemplar from ...

  7. PARAMETER ESTIMATION IN BREAD BAKING MODEL

    Directory of Open Access Journals (Sweden)

    Hadiyanto Hadiyanto

    2012-05-01

    Bread product quality is highly dependent on the baking process. A model for the development of product quality, obtained from quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first the heat and mass transfer related parameters, then the parameters related to product transformations, and finally the product quality parameters. There was fair agreement between the calibrated model results and the experimental data. The results showed that the simple qualitative relationships applied for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and colour. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted the behaviour under dynamic convective operation and under combined convective and microwave operation well. It is expected that the agreement between the model and the baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.

  8. Towards the Development of a Second-Order Approximation in Activity Coefficient Models Based on Group Contributions

    DEFF Research Database (Denmark)

    Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul

    1996-01-01

    A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups in many instances are found insufficient since the first-order groups...

  9. GARCH modelling of covariance in dynamical estimation of inverse solutions

    Energy Technology Data Exchange (ETDEWEB)

    Galka, Andreas [Institute of Experimental and Applied Physics, University of Kiel, 24098 Kiel (Germany) and Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)]. E-mail: galka@physik.uni-kiel.de; Yamashita, Okito [ATR Computational Neuroscience Laboratories, Hikaridai 2-2-2, Kyoto 619-0288 (Japan); Ozaki, Tohru [Institute of Statistical Mathematics (ISM), Minami-Azabu 4-6-7, Tokyo 106-8569 (Japan)

    2004-12-06

    The problem of estimating unobserved states of spatially extended dynamical systems poses an inverse problem, which can be solved approximately by a recently developed variant of Kalman filtering. In order to provide the model of the dynamics with more flexibility with respect to space and time, we suggest combining the concept of GARCH modelling of covariance, well known in econometrics, with Kalman filtering. We formulate this algorithm for spatiotemporal systems governed by stochastic diffusion equations and demonstrate its feasibility by presenting a numerical simulation designed to imitate the generation of electroencephalographic recordings by the human cortex.

  10. Regionalized rainfall-runoff model to estimate low flow indices

    Science.gov (United States)

    Garcia, Florine; Folton, Nathalie; Oudin, Ludovic

    2016-04-01

    Estimating low-flow indices is of paramount importance for water resource management and risk assessment. These indices are derived from river discharges measured at gauged stations. The lack of observations at ungauged sites, however, makes it necessary to develop methods to estimate these low-flow indices from observed discharges in neighbouring catchments and from catchment characteristics. Different estimation methods exist: regression or geostatistical methods applied to the low-flow indices are the most common; a less common alternative is to regionalize rainfall-runoff model parameters, from catchment characteristics or by spatial proximity, and to estimate low-flow indices from the simulated hydrographs. Irstea developed GR2M-LoiEau, a conceptual monthly rainfall-runoff model combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on only two parameters, which are regionalized and mapped throughout France, and it allows monthly reference low-flow indices to be mapped. The input data come from SAFRAN, the distributed mesoscale atmospheric analysis system, which provides daily solid and liquid precipitation and temperature data everywhere in the French territory. To exploit these data fully and to estimate daily low-flow indices, a new version of GR-LoiEau has been developed at a daily time step. The aim of this work is to develop and regionalize a GR-LoiEau model that can provide daily, monthly or annual estimates of low-flow indices while keeping only a few parameters, which is a major advantage for regionalization. The work includes two parts. On the one hand, a daily conceptual rainfall-runoff model with only three parameters is developed to simulate daily and monthly low-flow indices, mean annual runoff and seasonality. On the other hand, different regionalization methods, based on spatial proximity and similarity, are tested to estimate the model parameters and to simulate

  11. Estimating order-picking times for return heuristic - equations and simulations

    Directory of Open Access Journals (Sweden)

    Grzegorz Tarczyński

    2015-09-01

    Background: A key element in the evaluation of warehouse operation is the average order-picking time. In warehouses where order picking is carried out according to the "picker-to-part" rule, the order-picking time is usually proportional to the distance covered by the picker while picking items. This distance can be estimated by simulations or using mathematical equations. The paper considers only one-block rectangular warehouses, the type best described in the literature. Material and methods: For one-block rectangular warehouses, five routing heuristics are well known. The author considers the return heuristic in two variants. The paper presents the well-known equations of Hall and De Koster for the average distance travelled by the picker while completing the items on one pick list, together with the author's own proposals for calculating the expected distance. Results: The results calculated using the mathematical equations (the formulas of Hall, De Koster and the author's own propositions) were compared with average values obtained from computer simulations. In most cases the average error does not exceed 1% (except for Hall's equations). The simulations were carried out with the software Warehouse Real-Time Simulator. Conclusions: The order-picking time is a function of many variables and its optimization is not easy. It can be done in two stages: first, the set of potentially best variants is established using the mathematical equations; next, the results are verified using simulations. The results calculated with the equations are not precise, but are available immediately; the simulations are more time-consuming, but allow the order-picking process to be analysed more accurately.
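
    The simulation side of such a study can be sketched in a few lines. Below is a toy Monte Carlo of the return heuristic under simplifying assumptions: a single front cross-aisle, a depot in front of the first aisle, uniformly random pick locations, and made-up warehouse dimensions (not the configurations or formulas of the paper):

```python
import random

def return_route_distance(picks, aisle_width=2.5, slot_len=1.0):
    # Return heuristic in a one-block warehouse: from a depot at the front of
    # aisle 0, walk the front cross-aisle, enter each aisle holding picks,
    # go to its deepest pick and come back out the same way.
    if not picks:
        return 0.0
    deepest = {}
    for aisle, slot in picks:
        deepest[aisle] = max(deepest.get(aisle, 0), slot)
    cross_aisle = 2 * max(deepest) * aisle_width              # out to the farthest aisle and back
    inside = sum(2 * d * slot_len for d in deepest.values())  # in-and-out of each visited aisle
    return cross_aisle + inside

def avg_distance(n_aisles=10, n_slots=20, list_len=8, trials=2000, seed=7):
    # Monte Carlo estimate of the mean picking distance over random pick lists
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        picks = [(rng.randrange(n_aisles), rng.randrange(1, n_slots + 1))
                 for _ in range(list_len)]
        total += return_route_distance(picks)
    return total / trials

print(round(avg_distance(), 1))
```

    Analytical formulas such as Hall's and De Koster's give this expectation in closed form; the simulation serves as the reference against which they are checked.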

  12. Fractional-Order Nonlinear Systems Modeling, Analysis and Simulation

    CERN Document Server

    Petráš, Ivo

    2011-01-01

    "Fractional-Order Nonlinear Systems: Modeling, Analysis and Simulation" presents a study of fractional-order chaotic systems accompanied by Matlab programs for simulating their state space trajectories, which are shown in the illustrations in the book. Description of the chaotic systems is clearly presented and their analysis and numerical solution are done in an easy-to-follow manner. Simulink models for the selected fractional-order systems are also presented. The readers will understand the fundamentals of the fractional calculus, how real dynamical systems can be described using fractional derivatives and fractional differential equations, how such equations can be solved, and how to simulate and explore chaotic systems of fractional order. The book addresses to mathematicians, physicists, engineers, and other scientists interested in chaos phenomena or in fractional-order systems. It can be used in courses on dynamical systems, control theory, and applied mathematics at graduate or postgraduate level. ...
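
    As a minimal numerical taste of the fractional calculus the book covers (this sketch is not taken from the book; the step size and test function are arbitrary), the Grünwald-Letnikov definition of a fractional derivative can be implemented directly:

```python
import math

def gl_fractional_deriv(f, alpha, t, h=1e-3):
    # Gruenwald-Letnikov approximation of the order-alpha derivative of f
    # at time t, with lower terminal 0 and step size h
    n = int(t / h)
    coeff, total = 1.0, 0.0
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)   # (-1)^k * binom(alpha, k), built recursively
    return total / h ** alpha

# the half-derivative of f(t) = t is 2*sqrt(t/pi); check at t = 1
print(gl_fractional_deriv(lambda t: t, 0.5, 1.0))  # ≈ 1.128
```

    For alpha = 1 the sum collapses to the ordinary backward difference, which is one way to see that fractional derivatives interpolate the integer-order ones.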

  13. A High-Order Multiscale Global Atmospheric Model

    Science.gov (United States)

    Nair, Ram

    2016-04-01

    The High-Order Method Modeling Environment (HOMME), developed at NCAR, is a petascale hydrostatic framework which employs the cubed-sphere grid system and high-order continuous or discontinuous Galerkin (DG) methods. The HOMME framework is currently being extended to a non-hydrostatic dynamical core, named the "High-Order Multiscale Atmospheric Model" (HOMAM). The spatial discretization is based on DG or high-order finite-volume methods. Orography is handled by a terrain-following height-based coordinate system. To alleviate the stringent CFL stability requirement resulting from the vertical dynamics, an operator-splitting time integration scheme based on the horizontally explicit, vertically implicit (HEVI) approach is adopted for HOMAM. Preliminary results for the benchmark test cases proposed in the Dynamical Core Model Intercomparison Project (DCMIP) test suite will be presented in the seminar.

  14. The Renormalization of the Electroweak Standard Model to All Orders

    CERN Document Server

    Kraus, E

    1998-01-01

    We give the renormalization of the standard model of electroweak interactions to all orders of perturbation theory using the method of algebraic renormalization, which is based on general properties of renormalized perturbation theory and not on a specific regularization scheme. The Green functions of the standard model are uniquely constructed to all orders if one defines the model by the Slavnov-Taylor identity, the Ward identities of rigid symmetry and a specific form of the abelian local gauge Ward identity, which continues the Gell-Mann-Nishijima relation to higher orders. Special attention is directed to the mass diagonalization of massless and massive neutral vectors and ghosts. To obtain off-shell infrared-finite expressions, higher-order corrections have to be taken into account in the functional symmetry operators. It is shown that the normalization conditions of the on-shell schemes are in agreement with the most general symmetry transformations allowed by the algebraic constraints.

  15. NEW METHOD FOR LOW ORDER SPECTRAL MODEL AND ITS APPLICATION

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    To overcome the deficiencies of the classical method of constructing low-order spectral models, a new method is advanced. By calculating the multiple correlation coefficients between combinations of different functions and the recorded data under the least-squares criterion, the truncation functions that best reflect the studied physical phenomenon are objectively distilled from the data. The new method thus overcomes the deficiency of artificially selecting the truncation functions in the classical low-order spectral model. Applying the new method to the inter-annual variation of the summer atmospheric circulation over the Northern Hemisphere, the truncation functions were obtained from the atmospheric circulation data of June 1994 and June 1998, and the mechanisms of the atmospheric circulation variations over the Northern Hemisphere in the two summers were obtained with a two-layer quasi-geostrophic baroclinic equation.

  16. Order reduction for a model of marine bacteriophage evolution

    Science.gov (United States)

    Pagliarini, Silvia; Korobeinikov, Andrei

    2017-02-01

    A typical mechanistic model of viral evolution necessarily includes several time scales which can differ by orders of magnitude. Such a diversity of time scales makes the analysis of these models difficult, and reducing the order of the model is highly desirable. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, whose usual first step is the construction of the so-called quasi-steady-state approximation. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of the evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative, not a quantitative, fit.
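
    The quasi-steady-state idea can be illustrated on a toy slow-fast system (not the Beretta-Kuang bacteriophage model; the equations and the value of eps below are invented for illustration): the fast variable is slaved to its equilibrium and the slow equation is then integrated on its own:

```python
import math

def simulate_full(eps=1e-3, dt=1e-5, t_end=1.0):
    # toy slow-fast system:  x' = -x*y  (slow),  eps*y' = 1 - y  (fast)
    # integrated with explicit Euler (dt must stay well below eps for stability)
    x, y, t = 1.0, 0.0, 0.0
    while t < t_end:
        dx = -x * y
        dy = (1.0 - y) / eps
        x += dt * dx
        y += dt * dy
        t += dt
    return x

x_full = simulate_full()
# quasi-steady-state approximation: set eps*y' ≈ 0, hence y ≈ 1, and the
# reduced model x' = -x integrates to x(t) = x(0) * exp(-t)
x_qssa = math.exp(-1.0)
print(x_full, x_qssa)  # close for small eps
```

    For this toy system the reduction is quantitatively accurate; the point of the paper is that for the bacteriophage model the analogous reduction captures only the qualitative behaviour.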

  17. ESTIMATION OF ORDER-RESTRICTED LOCATION PARAMETERS OF TWO EXPONENTIAL DISTRIBUTIONS UNDER MULTIPLE TYPE-Ⅱ CENSORING

    Institute of Scientific and Technical Information of China (English)

    Shen Yike; Fei Heliang

    1999-01-01

    In this article, Bayes estimation of location parameters under restriction is presented. Since the Bayes estimator is closely connected with the first value of the order statistics that can be observed, it is possible to consider a "complete data" method, through which a pseudo-value of the first order statistic and pseudo-right-censored samples can be obtained. Thus the results under Type-Ⅱ right censoring can be used directly to obtain more accurate estimators by the Bayes method.

  18. Group-ICA model order highlights patterns of functional brain connectivity

    Directory of Open Access Journals (Sweden)

    Ahmed eAbou Elseoud

    2011-06-01

    Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both the individual-subject and group levels. Altering the ICA dimensionality (model order) estimate can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization, which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared with healthy controls. The results showed that the detected disease-related differences in functional connectivity alter as a function of ICA model order. The volume of between-group differences altered significantly as a function of model order, reaching a maximum at model order 70 (which seems to be an optimal point conveying the largest between-group difference) and stabilizing thereafter. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders carry an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.
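
    Model order selection of this kind is often seeded by a PCA-style count of dominant covariance eigenvalues. A toy numpy sketch follows; the synthetic mixing, noise level and 10x noise-floor threshold are all made up, and this is not the dual-regression pipeline of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_channels, n_samples = 3, 10, 5000
S = rng.laplace(size=(n_sources, n_samples))     # non-Gaussian "sources"
A = rng.normal(size=(n_channels, n_sources))     # mixing matrix
X = A @ S + 0.05 * rng.normal(size=(n_channels, n_samples))

# PCA model order: eigenvalues of the channel covariance, largest first;
# count those standing clearly above the noise floor (crude threshold)
ev = np.linalg.eigvalsh(np.cov(X))[::-1]
order = int(np.sum(ev > 10 * ev[-1]))
print(order)  # 3
```

    In practice the chosen order then fixes the ICA dimensionality, which, as the abstract shows, directly shapes the detected between-group differences.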

  19. Modeling Human Behaviour with Higher Order Logic: Insider Threats

    DEFF Research Database (Denmark)

    Boender, Jaap; Ivanova, Marieta Georgieva; Kammuller, Florian

    2014-01-01

    In this paper, we approach the problem of modeling the human component in technical systems with a view on the difference between the use of model and theory in sociology and computer science. One aim of this essay is to show that building of theories and models for sociology can be compared...... it to the sociological process of logical explanation. As a case study on modeling human behaviour, we present the modeling and analysis of insider threats as a Higher Order Logic theory in Isabelle/HOL. We show how each of the three step process of sociological explanation can be seen in our modeling of insider’s state...

  20. Robust estimation of unbalanced mixture models on samples with outliers.

    Science.gov (United States)

    Galimzianova, Alfiia; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-01

    Mixture models are often used to compactly represent samples from heterogeneous sources. In the real world, however, samples generally contain an unknown fraction of outliers, and the sources generate different or unbalanced numbers of observations. Such unbalanced and contaminated samples may, for instance, be produced by high-density data sensors such as imaging devices. Estimating unbalanced mixture models from samples with outliers requires robust estimation methods. In this paper, we propose a novel robust mixture estimator that trims the outliers based on component-wise confidence-level ordering of the observations. The proposed method is validated and compared to the state-of-the-art FAST-TLE method on two data sets: one consisting of synthetic samples with a varying fraction of outliers and a varying balance between mixture weights, the other containing structural magnetic resonance images of brains with tumors of varying volumes. The results on both data sets clearly indicate that the proposed method can robustly estimate unbalanced mixtures over a broad range of outlier fractions. As such, it is applicable to real-world samples in which the outlier fraction cannot be estimated in advance.
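
    A stripped-down version of trimmed mixture estimation, closer to the FAST-TLE baseline than to the component-wise confidence ordering proposed in the paper, can be sketched on synthetic 1D data (the trimming fraction, initialization and data are all made up):

```python
import math, random

def trimmed_em_1d(data, trim=0.05, n_iter=50):
    # Two-component 1D Gaussian mixture fitted by EM, discarding the `trim`
    # fraction of lowest-likelihood points at every iteration (FAST-TLE-style
    # global trimming, a cruder rule than the paper's component-wise ordering).
    xs = sorted(data)
    mu = [xs[len(xs) // 10], xs[int(0.95 * len(xs))]]   # rough quantile init
    sd = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

    for _ in range(n_iter):
        # rank points by current mixture likelihood and trim the worst ones
        ranked = sorted(data, reverse=True,
                        key=lambda x: pi[0] * pdf(x, mu[0], sd[0]) + pi[1] * pdf(x, mu[1], sd[1]))
        kept = ranked[:int(len(data) * (1 - trim))]
        # E-step: responsibilities on the kept points only
        resp = []
        for x in kept:
            a = pi[0] * pdf(x, mu[0], sd[0])
            b = pi[1] * pdf(x, mu[1], sd[1])
            resp.append((a / (a + b), b / (a + b)))
        # M-step: weighted means, standard deviations and mixture weights
        for j in (0, 1):
            nj = sum(r[j] for r in resp)
            mu[j] = sum(r[j] * x for r, x in zip(resp, kept)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, kept)) / nj
            sd[j] = max(1e-3, math.sqrt(var))
            pi[j] = nj / len(kept)
    return mu, sd, pi

rng = random.Random(0)
data = ([rng.gauss(0.0, 1.0) for _ in range(900)]        # dominant component
        + [rng.gauss(5.0, 0.5) for _ in range(80)]       # minor component
        + [rng.uniform(-30.0, 30.0) for _ in range(20)]) # outliers
mu, sd, pi = trimmed_em_1d(data)
print(sorted(mu))  # means recovered near 0 and 5
```

    The unbalanced weights (roughly 0.9 versus 0.08 here) are exactly the regime where a fixed global trimming rule starts to eat into the minor component, which motivates the component-wise ordering the paper proposes.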

  1. Improved first-order uncertainty method for water-quality modeling

    Science.gov (United States)

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and the subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have frequently been used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distributions of the critical dissolved-oxygen deficit and the critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method closely approximates the exceedance probabilities estimated by Monte Carlo simulation for the Streeter-Phelps model output, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
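
    The comparison at the heart of the paper can be reproduced in miniature: propagate input uncertainty through the Streeter-Phelps deficit once by first-order linearization and once by Monte Carlo. The parameter means and standard deviations below are illustrative, not taken from the cited examples:

```python
import math, random

def deficit(L0, kd, kr, t=2.0, D0=1.0):
    # Streeter-Phelps dissolved-oxygen deficit at travel time t (days)
    return (kd * L0 / (kr - kd)) * (math.exp(-kd * t) - math.exp(-kr * t)) \
           + D0 * math.exp(-kr * t)

means = (20.0, 0.3, 0.7)   # L0 (mg/L), kd (1/day), kr (1/day), illustrative values
sds   = (2.0, 0.02, 0.03)

# first-order variance: sum over inputs of (dD/dx_i * sd_i)^2,
# with derivatives taken by forward differences at the means
h = 1e-6
base = deficit(*means)
var_fo = 0.0
for i in range(3):
    p = list(means)
    p[i] += h
    var_fo += (((deficit(*p) - base) / h) * sds[i]) ** 2

# Monte Carlo reference: sample the inputs and recompute the deficit
rng = random.Random(0)
samples = [deficit(*(rng.gauss(m, s) for m, s in zip(means, sds)))
           for _ in range(20000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((v - mean_mc) ** 2 for v in samples) / len(samples)
print(math.sqrt(var_fo), math.sqrt(var_mc))  # similar standard deviations
```

    With mild nonlinearity the two standard deviations agree closely; the advanced method of the paper improves on this plain linearization in the distribution tails by moving the linearization point to the output level of interest.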

  2. First-order estimate of the Canary Islands plate-scale stress field: Implications for volcanic hazard assessment

    Science.gov (United States)

    Geyer, A.; Martí, J.; Villaseñor, A.

    2016-06-01

    In volcanic areas, the existing stress field is a key parameter controlling magma generation, the location and geometry of the magmatic plumbing systems and the distribution of the resulting volcanism at the surface. Therefore, knowing the stress configuration in the lithosphere at any scale (i.e. local, regional and plate-scale) is fundamental to understanding the distribution of volcanism and, subsequently, to interpreting volcanic unrest and potential tectonic controls of future eruptions. The objective of the present work is to provide a first-order estimate of the plate-scale tectonic stresses acting on the Canary Islands, one of the largest active intraplate volcanic regions of the world. In order to obtain the orientations of the minimum and maximum horizontal compressive stresses, we perform a series of 2D finite element models of plate-scale kinematics assuming the plane stress approximation. The results obtained are used to develop a regional model, which takes into account recognized archipelago-scale structural discontinuities. The maximum horizontal compressive stress directions obtained are compared with available stress, geological and geodynamic data. The methodology used may be easily applied to other active volcanic regions, where a first-order approximation of plate/regional stresses can provide essential input data for volcanic hazard assessment models.

  3. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    G R Bouroun; E Esfandyari

    2016-01-01

    A seminumerical solution to the valon model at next-to-leading order (NLO) in the Laguerre polynomials is presented. We use the valon model to generate the structure of the proton with the Laguerre polynomial method. The results are compared with H1 data and other parametrizations.

  4. Data-Driven Model Order Reduction for Bayesian Inverse Problems

    KAUST Repository

    Cui, Tiangang

    2014-01-06

    One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.

  5. A Fractional Order Recovery SIR Model from a Stochastic Process.

    Science.gov (United States)

    Angstmann, C N; Henry, B I; McGann, A V

    2016-03-01

    Over the past several decades, there has been a proliferation of epidemiological models with ordinary derivatives replaced by fractional derivatives in an ad hoc manner. These models may be mathematically interesting, but their relevance is uncertain. Here we develop an SIR model for an epidemic, including vital dynamics, from an underlying stochastic process. We show how fractional differential operators arise naturally in these models whenever the recovery time from the disease is power-law distributed. This can provide a model for a chronic disease process where individuals who are infected for a long time are unlikely to recover. The fractional order recovery model is shown to be consistent with the Kermack-McKendrick age-structured SIR model, and it reduces to the Hethcote-Tudor integral equation SIR model. The derivation from a stochastic process is extended to discrete time, providing a stable numerical method for solving the model equations. We have carried out simulations of the fractional order recovery model showing convergence to equilibrium states. The number of infecteds in the endemic equilibrium state increases as the fractional order of the derivative tends to zero.
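
A toy discrete-time stochastic SIR with a heavy-tailed (Pareto-distributed) recovery time, the regime the abstract links to fractional-order recovery terms, might look as follows; all parameter values are illustrative and the scheme is not the authors' derivation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_powerlaw(n=2000, i0=20, beta=0.4, alpha=1.5, t_max=200):
    """Toy discrete-time stochastic SIR in which every infection lasts a
    Pareto(alpha)-distributed time (heavy-tailed recovery), so long-infected
    individuals are unlikely to recover soon. Illustrative parameters only."""
    s, r = n - i0, 0
    inf_clock = rng.pareto(alpha, i0) + 1.0       # remaining infectious periods
    hist = []
    for _ in range(t_max):
        i = len(inf_clock)
        hist.append((s, i, r))
        # each susceptible is infected with probability 1 - exp(-beta*I/N)
        p = 1.0 - np.exp(-beta * i / n)
        new_inf = rng.binomial(s, p)
        s -= new_inf
        inf_clock = np.concatenate([inf_clock - 1.0,
                                    rng.pareto(alpha, new_inf) + 1.0])
        r += int((inf_clock <= 0).sum())          # expired clocks recover
        inf_clock = inf_clock[inf_clock > 0]
    return np.array(hist)

hist = sir_powerlaw()
```

With tail exponent alpha between 1 and 2 the recovery-time distribution has finite mean but diverging variance, the setting in which fractional derivatives emerge in the continuum limit.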

  6. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    Science.gov (United States)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
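
The POD step at the heart of such ROM pipelines can be sketched with a toy snapshot matrix from a 1D diffusion simulation standing in for the full-scale thermal model; grid, step sizes and diffusivity are illustrative.

```python
import numpy as np

# snapshots from a 1D diffusion simulation (explicit Euler), a stand-in
# for runs of the full-scale thermal model; all sizes are illustrative
nx, nt, dt, kappa = 100, 400, 0.05, 5e-4
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.exp(-100.0 * (x - 0.3) ** 2)              # initial temperature bump
snaps = []
for _ in range(nt):
    u = u + kappa * dt / dx**2 * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))
    u[0] = u[-1] = 0.0                           # fixed-temperature ends
    snaps.append(u.copy())
X = np.array(snaps).T                            # (nx, nt) snapshot matrix

# POD: left singular vectors are the energy-ranked spatial modes
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 5
Xr = U[:, :r] @ (U[:, :r].T @ X)                 # rank-r reconstruction
err = np.linalg.norm(X - Xr) / np.linalg.norm(X)
```

For smooth dynamics like diffusion the singular values decay fast, so a handful of modes reproduces the snapshots; DEIM and TPWL are then needed to keep nonlinear terms (such as radiation) cheap in the reduced coordinates.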

  7. An Order Statistics Approach to the Halo Model for Galaxies

    Science.gov (United States)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.
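
The basic order-statistics prediction, that the mean luminosity of the brightest group member grows with occupation number when all luminosities are drawn from one universal p(L), is easy to check numerically; the unit-exponential luminosity function below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# universal luminosity function p(L): unit exponential (illustrative);
# a "halo" hosting n_gal galaxies draws n_gal luminosities, and the
# brightest plays the role of the central galaxy
def mean_brightest(n_gal, n_groups=20000):
    L = rng.exponential(1.0, size=(n_groups, n_gal))
    return L.max(axis=1).mean()

means = [mean_brightest(n) for n in (2, 5, 10, 20)]
```

For unit exponentials the mean of the maximum of n draws equals the harmonic number H_n, so in this toy model the central luminosity grows only logarithmically with richness, qualitatively the monotonic central-luminosity/halo-mass relation mentioned in the abstract.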

  8. Reduced Order Internal Models in the Frequency Domain

    OpenAIRE

    Laakkonen, Petteri; Paunonen, Lassi

    2016-01-01

    The internal model principle states that all robustly regulating controllers must contain a suitably reduplicated internal model of the signal to be regulated. Using frequency domain methods, we show that the number of the copies may be reduced if the class of perturbations in the problem is restricted. We present a two step design procedure for a simple controller containing a reduced order internal model achieving robust regulation. The results are illustrated with an example of a five tank...

  9. On the verification of PGD reduced-order models

    OpenAIRE

    Pled, Florent; Chamoin, Ludovic; Ladevèze, Pierre

    2014-01-01

    In current computational mechanics practice, multidimensional as well as multiscale or parametric models encountered in a wide variety of scientific and engineering fields often require either the resolution of problems of significantly large complexity or the direct calculation of very numerous solutions of such complex models. In this framework, the use of model order reduction makes it possible to dramatically reduce the computational requirements engendered by the increasing mod...

  10. Optimal ordering policies for continuous review perishable inventory models.

    Science.gov (United States)

    Weiss, H J

    1980-01-01

    This paper extends the notions of perishable inventory models to the realm of continuous review inventory systems. The traditional perishable inventory costs of ordering, holding, shortage or penalty, disposal and revenue are incorporated into the continuous review framework. The type of policy that is optimal with respect to long run average expected cost is presented for both the backlogging and lost-sales models. In addition, for the lost-sales model the cost function is presented and analyzed.

  11. Estimation of ibuprofen and famotidine in tablets by second order derivative spectrophotometry method

    Directory of Open Access Journals (Sweden)

    Dimal A. Shah

    2017-02-01

    A simple and accurate method for the analysis of ibuprofen (IBU) and famotidine (FAM) in their combined dosage form was developed using second order derivative spectrophotometry. IBU and FAM were quantified using second-derivative responses at 272.8 nm and 290 nm in the spectra of their solutions in methanol. The calibration curves were linear in the concentration ranges of 100–600 μg/mL for IBU and 5–25 μg/mL for FAM. The method was validated and found to be accurate and precise. The developed method was successfully applied to the estimation of IBU and FAM in their combined dosage form.
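
The principle of second-order derivative spectrophotometry, using minima of d²A/dλ² to resolve overlapping bands, can be sketched on a synthetic two-Gaussian spectrum; band positions, widths and amplitudes below are illustrative, not the paper's calibration data.

```python
import numpy as np

# synthetic absorbance spectrum: two overlapping Gaussian bands standing
# in for the IBU and FAM absorptions (widths/amplitudes are illustrative)
wl = np.linspace(250.0, 320.0, 1401)
spec = (1.0 * np.exp(-0.5 * ((wl - 272.8) / 8.0) ** 2)
        + 0.3 * np.exp(-0.5 * ((wl - 290.0) / 6.0) ** 2))

# numerical second derivative of the spectrum
d2 = np.gradient(np.gradient(spec, wl), wl)

# local minima of the second derivative sharpen the overlapping maxima
loc_min = np.where((d2[1:-1] < d2[:-2]) & (d2[1:-1] < d2[2:]))[0] + 1
loc_min = loc_min[np.argsort(d2[loc_min])][:2]   # two deepest minima
centers = np.sort(wl[loc_min])
```

The depths of the derivative minima at the analytical wavelengths, rather than the raw absorbances, are what get regressed against concentration in such methods.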

  12. Atrial fibrillatory signal estimation using blind source extraction algorithm based on high-order statistics

    Institute of Scientific and Technical Information of China (English)

    WANG Gang; RAO NiNi; ZHANG Ying

    2008-01-01

    The analysis and characterization of atrial fibrillation (AF) requires, as a key preliminary step, the extraction of the atrial activity (AA) free from the 12-lead electrocardiogram (ECG). This contribution proposes a novel non-invasive approach for AA estimation in AF episodes. The method is based on blind source extraction (BSE) using high-order statistics (HOS). The validity and performance of this algorithm are confirmed by extensive computer simulations and experiments on real-world data. In contrast to blind source separation (BSS) methods, BSE extracts only the one desired signal, and it is easy for the machine to judge whether the extracted signal is the AA source by calculating its spectral concentration, whereas with a BSS method it is hard for the machine to judge which of the twelve separated signals is the AA source. Therefore, the proposed method is expected to have great potential in clinical monitoring.
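
A minimal HOS-based one-unit extraction, here a kurtosis-seeking fixed-point iteration on whitened synthetic mixtures rather than the authors' exact BSE algorithm, can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)

# two synthetic sources: a non-Gaussian sawtooth (target) and Gaussian
# interference; the 2x2 mixing is an illustrative stand-in for ECG leads
t = np.linspace(0.0, 1.0, 4000)
s1 = 2.0 * (np.mod(7.0 * t, 1.0) - 0.5)        # sawtooth, nonzero kurtosis
s2 = rng.normal(0.0, 1.0, t.size)              # Gaussian interference
X = np.array([[0.7, 0.5], [0.4, 0.9]]) @ np.vstack([s1, s2])

# whiten the observations
Xc = X - X.mean(1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# one-unit fixed-point iteration driven by the 4th-order statistic
# (kurtosis): w <- E[z (w^T z)^3] - 3w, then renormalize
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    y = w @ Z
    w = (Z * y ** 3).mean(1) - 3.0 * w
    w /= np.linalg.norm(w)
y = w @ Z                                      # extracted component
```

Only one component is extracted, mirroring the BSE-vs-BSS point in the abstract: the machine then needs a single acceptance test (e.g. spectral concentration) instead of inspecting all separated channels.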

  13. Fast and Adaptive Bidimensional Empirical Mode Decomposition Using Order-Statistics Filter Based Envelope Estimation

    Directory of Open Access Journals (Sweden)

    Jesmin F. Khan

    2008-08-01

    A novel approach for bidimensional empirical mode decomposition (BEMD) is proposed in this paper. BEMD decomposes an image into multiple hierarchical components known as bidimensional intrinsic mode functions (BIMFs). In each iteration of the process, two-dimensional (2D) interpolation is applied to a set of local maxima (minima) points to form the upper (lower) envelope. However, 2D scattered-data interpolation methods require huge computation time and introduce other artifacts into the decomposition. This paper suggests a simple but effective method of envelope estimation that replaces the surface interpolation. In this method, order-statistics filters are used to obtain the upper and lower envelopes, with the filter size derived from the data. Based on these properties, the proposed approach is termed fast and adaptive BEMD (FABEMD). Simulation results demonstrate that FABEMD is not only faster and adaptive, but also outperforms the original BEMD in terms of the quality of the BIMFs.
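
The order-statistics envelope idea can be sketched with plain windowed max/min filters followed by an averaging smooth; the fixed window size below is an assumption (FABEMD derives it from the spacing of the extrema).

```python
import numpy as np

def osf(img, win, stat):
    """Windowed order-statistics filter (max/min/mean) via sliding windows."""
    p = win // 2
    padded = np.pad(img, p, mode='edge')
    v = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    return stat(v, axis=(2, 3))

# test image: a fast oscillation riding on a slow trend, a stand-in for
# an image with two intrinsic modes (sizes are illustrative)
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(xx * 0.8) * np.cos(yy * 0.8) + 0.05 * (xx + yy)

win = 7                                             # fixed here, data-driven in FABEMD
upper = osf(osf(img, win, np.max), win, np.mean)    # smoothed upper envelope
lower = osf(osf(img, win, np.min), win, np.mean)    # smoothed lower envelope
bimf = img - 0.5 * (upper + lower)                  # first BIMF candidate
```

Replacing scattered-data surface interpolation with these rank filters is what removes the main computational bottleneck of the original BEMD.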

  14. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    OpenAIRE

    Baker Syed; Poskar C; Junker Björn

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of sucrose accumulation in sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. Wh...

  15. DETERMINANTS OF SOVEREIGN RATING: FACTOR BASED ORDERED PROBIT MODELS FOR PANEL DATA ANALYSIS MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Dilek Teker

    2013-01-01

    The aim of this research is to compose a new rating methodology and assign credit notches to 23 countries, of which 13 are developed and 10 are emerging. There is a varied literature explaining the determinants of credit ratings. Following this literature, we select 11 variables for our model, of which 5 are eliminated by factor analysis. We use specific dummies to investigate structural breaks in time and cross-section, such as pre-crisis, post-crisis, BRIC membership, EU membership, OPEC membership, shipbuilder country and platinum-reserve country. We then run an ordered probit model and assign credit notches to the countries, using FITCH ratings as the benchmark. Finally, we compare the notches of FITCH with the ones derived from our estimated model.
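
An ordered probit likelihood of the kind used in such rating models can be written down and maximized directly; the synthetic "ratings", single regressor and cutpoints below are illustrative, not the paper's data or variables.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# synthetic ordinal "ratings": latent score = beta*x + noise, cut into 3 notches
n, beta_true, cuts_true = 2000, 1.5, np.array([-0.5, 1.0])
x = rng.normal(size=n)
y_star = beta_true * x + rng.normal(size=n)
y = (y_star[:, None] > cuts_true).sum(1)          # observed notch: 0, 1 or 2

def nll(theta):
    """Negative log-likelihood; cutpoints parameterized to stay ordered."""
    beta, c0, dc = theta
    cuts = np.array([c0, c0 + np.exp(dc)])
    eta = beta * x
    upper = np.append(cuts, np.inf)[y]
    lower = np.append(-np.inf, cuts)[y]
    p = norm.cdf(upper - eta) - norm.cdf(lower - eta)
    return -np.log(np.clip(p, 1e-12, None)).sum()

res = minimize(nll, x0=np.zeros(3), method='Nelder-Mead',
               options={'maxiter': 4000, 'xatol': 1e-8, 'fatol': 1e-8})
beta_hat, c0_hat = res.x[0], res.x[1]
```

In the paper's setting the single regressor would be replaced by the factor scores, and the estimated cutpoints map latent creditworthiness into rating notches.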

  16. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique and best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector; the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of a set of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used in this study) for each parameter vector. Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
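
Tukey's half-space depth is expensive to compute exactly in higher dimensions, but a random-projection approximation conveys the idea of picking the most central (deepest) parameter vector; the 2-D parameter cloud below is a toy stand-in for a set of well-performing hydrological parameter vectors.

```python
import numpy as np

rng = np.random.default_rng(6)

# cloud of well-performing parameter vectors (illustrative 2-D toy data)
P = rng.normal([2.0, 0.5], [0.3, 0.1], size=(500, 2))

def halfspace_depth(p, cloud, n_dir=500):
    """Approximate Tukey depth of p: over random directions u, the smallest
    fraction of the cloud lying on one side of the hyperplane through p."""
    u = rng.normal(size=(n_dir, cloud.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (cloud - p) @ u.T                  # (n_cloud, n_dir)
    frac = (proj <= 0).mean(0)
    return np.minimum(frac, 1.0 - frac).min()

depths = np.array([halfspace_depth(p, P) for p in P])
robust = P[depths.argmax()]                   # deepest, most central vector
```

High-depth vectors sit inside the cloud of good solutions from every direction, which is why the study uses depth as a proxy for robustness to data perturbations.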

  17. On the decay of higher order derivatives of solutions to Ladyzhenskaya model for incompressible viscous flows

    Institute of Scientific and Technical Information of China (English)

    DONG BoQing; JIANG Wei

    2008-01-01

    This article concerns the large-time behavior of the Ladyzhenskaya model for incompressible viscous flows in R3. Based on linear Lp-Lq estimates, auxiliary decay properties of the solutions and generalized Gronwall-type arguments, some optimal upper and lower bounds for the decay of higher-order derivatives of solutions are derived without assuming any decay properties of the solutions or using the Fourier splitting technique.

  18. Provable first-order transitions for nonlinear vector and gauge models with continuous symmetries

    NARCIS (Netherlands)

    Enter, Aernout C.D. van; Shlosman, Senya B.

    2005-01-01

    We consider various sufficiently nonlinear vector models of ferromagnets, of nematic liquid crystals and of nonlinear lattice gauge theories with continuous symmetries. We show, employing the method of Reflection Positivity and Chessboard Estimates, that they all exhibit first-order transitions in t

  19. A first-order thermal model for building design

    Energy Technology Data Exchange (ETDEWEB)

    Mathews, E.H. [Centre for Experimental and Numerical Thermoflow, Univ. of Pretoria (South Africa); Richards, P.G. [Centre for Experimental and Numerical Thermoflow, Univ. of Pretoria (South Africa); Lombard, C. [Centre for Experimental and Numerical Thermoflow, Univ. of Pretoria (South Africa)

    1994-12-31

    Simplified thermal models of buildings can successfully be applied in building design. This paper describes the derivation and validation of a first-order thermal model which has a clear physical interpretation, is based on uncomplicated calculation procedures and requires limited input information. Because extensive simplifications and assumptions are inherent in the development of the model, a comprehensive validation study is reported. The validity of the thermal model was proven with 70 validation studies in 32 buildings comprising a wide range of thermal characteristics. The accuracy of predictions compares well with other sophisticated programs. The proposed model is considered to be eminently suitable for incorporation in an efficient design tool. (orig.)
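
A first-order (single-node RC) thermal model of the kind described reduces the building to one ordinary differential equation for the indoor temperature; the resistance, capacitance and load values below are illustrative, not the paper's calibrated values.

```python
import numpy as np

# one-node RC building model: C * dT/dt = (T_out - T)/R + Q
# (parameter values are illustrative)
R, C = 0.005, 2.0e7            # K/W envelope resistance, J/K thermal capacitance
dt, hours = 60.0, 72           # 1-minute explicit Euler steps over 3 days
n = int(hours * 3600 / dt)
T = 20.0                       # initial indoor temperature, degC
temps = np.empty(n)
for k in range(n):
    t = k * dt
    T_out = 15.0 + 8.0 * np.sin(2.0 * np.pi * t / 86400.0)  # diurnal cycle
    Q = 1500.0                                              # internal gains, W
    T += dt * ((T_out - T) / R + Q) / C
    temps[k] = T
```

The single time constant RC (about 28 hours here) is what lets such a model capture the damping and lag of indoor temperature relative to the outdoor cycle with almost no input information.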

  20. Abnormal Waves Modelled as Second-order Conditional Waves

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2005-01-01

    The paper presents results for the expected second-order short-crested wave conditional on a given wave crest at a specific point in time and space. The analysis is based on the second-order Sharma and Dean shallow-water wave theory. Numerical results showing the importance of the spectral density..., the water depth and the directional spreading on the conditional mean wave profile are presented. Application of conditional waves to model and explain abnormal waves, e.g. the well-known New Year Wave measured at the Draupner platform on January 1st 1995, is discussed. Whereas the wave profile can be modelled... quite well by the second-order conditional wave including directional spreading and finite water depth, the probability of encountering such a wave is, however, still extremely small. The use of the second-order conditional wave as the initial condition to a fully non-linear three-dimensional analysis...

  1. Shape parameter estimate for a glottal model without time position

    OpenAIRE

    Degottex, Gilles; Roebel, Axel; Rodet, Xavier

    2009-01-01

    From a recorded speech signal, we propose to estimate a shape parameter of a glottal model without estimating its time position. Indeed, the literature usually proposes to estimate the time position first (e.g., by detecting Glottal Closure Instants). The vocal-tract filter estimate is expressed as a minimum-phase envelope estimation after removing the glottal model and a standard lip-radiation model. Since this filter is mainly b...

  2. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external-event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
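
The surrogate idea, replacing an expensive code by a cheap approximation fitted to a few runs, can be sketched in one dimension; the "expensive" function and the Chebyshev fit below are illustrative stand-ins for a RISMC simulation code and its surrogate.

```python
import numpy as np

def expensive_sim(x):
    """Stand-in for a costly physics code (illustrative closed form)."""
    return np.sin(3.0 * x) * np.exp(-0.3 * x) + 0.1 * x

# design of experiments: a modest number of "expensive" runs
x_train = np.linspace(0.0, 5.0, 40)
y_train = expensive_sim(x_train)

# cheap surrogate: least-squares Chebyshev polynomial fit over the domain
cheb = np.polynomial.Chebyshev.fit(x_train, y_train, deg=15)

# accuracy check on unseen inputs
x_test = np.linspace(0.0, 5.0, 500)
err = np.max(np.abs(cheb(x_test) - expensive_sim(x_test)))
```

Once fitted, the surrogate is evaluated in microseconds, so risk metrics that need many thousands of samples become affordable; the real trade-off, as the article discusses, is controlling the surrogate's approximation error.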

  3. Using of "pseudo-second-order model" in adsorption.

    Science.gov (United States)

    Ho, Yuh-Shan

    2014-01-01

    A research paper's contribution lies not only in its originality and creativity but also in its continuity and development for the research that follows. However, authors easily ignore this. Citation errors and quotation errors occur very frequently in scientific papers. Numerous researchers use secondary references without knowing the original idea of the authors. Sulaymon et al. (Environ Sci Pollut Res 20:3011-3023, 2013) and Spiridon et al. (Environ Sci Pollut Res 20:6367-6381, 2013) presented wrong pseudo-second-order models in Environmental Science and Pollution Research, vol. 20. This comment points out the errors in those kinetic models and offers information for citing the original idea of the pseudo-second-order kinetic expression. In order to stop the proliferation of the mistake, it is suggested to cite the original paper for the kinetic model, which provides greater accuracy and more details about the kinetic expression.
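
For reference, the pseudo-second-order rate law dq/dt = k(qe − q)² integrates to q(t) = qe²kt/(1 + qe·k·t), and its linearized form t/q = 1/(k·qe²) + t/qe is fitted by straight-line regression; the parameter values below are illustrative.

```python
import numpy as np

# synthetic uptake data generated exactly from the pseudo-second-order law
# q(t) = qe^2 * k * t / (1 + qe * k * t); values are illustrative
qe_true, k_true = 4.0, 0.05        # equilibrium uptake (mg/g), rate constant
t = np.linspace(1.0, 120.0, 30)    # contact times, min
q = qe_true**2 * k_true * t / (1.0 + qe_true * k_true * t)

# linearized form: t/q = 1/(k*qe^2) + t/qe  ->  fit a straight line
slope, intercept = np.polyfit(t, t / q, 1)
qe_est = 1.0 / slope
k_est = 1.0 / (intercept * qe_est**2)
```

Getting the linearized form (or its constants) wrong is exactly the citation error the comment addresses, since downstream papers copy the mis-stated expression.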

  4. Reduced order modeling of some fluid flows of industrial interest

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)

  5. First-order estimate of the planktic foraminifer biomass in the modern ocean

    Directory of Open Access Journals (Sweden)

    R. Schiebel

    2012-09-01

    Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different compared to other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. With a new non-destructive protocol developed from the bicinchoninic acid (BCA) method and nano-photospectrometry, we have analysed the protein-biomass, along with test size and weight, of 754 individual planktic foraminifers from 21 different species and morphotypes. From additional CHN analysis, it can be assumed that protein-biomass equals carbon-biomass. Accordingly, the average individual planktic foraminifer protein- and carbon-biomass amounts to 0.845 μg. Samples include symbiont-bearing and symbiont-barren species from the sea surface down to 2500 m water depth. Conversion factors between individual biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum test diameter). Assemblage-biomass data presented here include 1128 sites and water depth intervals. The regional coverage of data includes the North Atlantic, Arabian Sea, Red Sea, and Caribbean as well as literature data from the eastern and western North Pacific, and covers a wide range of oligotrophic to eutrophic waters over six orders of magnitude of planktic-foraminifer assemblage-biomass (PFAB). A first-order estimate of the average global planktic foraminifer biomass production (>125 μm) ranges from 8.2–32.7 Tg C yr−1 (i.e. 0.008–0.033 Gt C yr−1), and might be more than three times as high when neanic and juvenile individuals are included, adding up to 25–100 Tg C yr−1. However, this is a first

  6. First-order estimate of the planktic foraminifer biomass in the modern global oceans

    Science.gov (United States)

    Schiebel, R.; Movellan, A.

    2012-04-01

    Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different compared to other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. We have calculated the assemblage carbon biomass from data on standing stocks between the sea surface and 2500 m water depth, based on 754 protein-biomass data of 21 planktic foraminifer species and morphotypes, produced with a newly developed method to analyze the protein biomass of single planktic foraminifer specimens. Samples include symbiont-bearing and symbiont-barren species, characteristic of surface and deep-water habitats. Conversion factors between individual protein-biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum diameter). The calculated assemblage biomass data presented here include 1057 sites and water depth intervals. Although the regional coverage of the database is limited to the North Atlantic, Arabian Sea, Red Sea, and Caribbean, our data include a wide range of oligotrophic to eutrophic waters covering six orders of magnitude of assemblage biomass. A first-order estimate of the global planktic foraminifer biomass from average standing stocks (>125 μm) ranges from 8.5 to 32.7 Tg C yr-1 (i.e. 0.008-0.033 Gt C yr-1), and might be more than three times as high when the entire fauna, including neanic and juvenile individuals, is counted, adding up to 25-100 Tg C yr-1. However, this is a first estimate of regional planktic-foraminifer assemblage-biomass (PFAB) extrapolated to the global scale, and future estimates based on larger data-sets might considerably deviate from the one presented here. This paper is

  7. First-order estimate of the planktic foraminifer biomass in the modern global oceans

    Directory of Open Access Journals (Sweden)

    R. Schiebel

    2012-04-01

    Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different compared to other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. We have calculated the assemblage carbon biomass from data on standing stocks between the sea surface and 2500 m water depth, based on 754 protein-biomass data of 21 planktic foraminifer species and morphotypes, produced with a newly developed method to analyze the protein biomass of single planktic foraminifer specimens. Samples include symbiont-bearing and symbiont-barren species, characteristic of surface and deep-water habitats. Conversion factors between individual protein-biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum diameter). The calculated assemblage biomass data presented here include 1057 sites and water depth intervals. Although the regional coverage of the database is limited to the North Atlantic, Arabian Sea, Red Sea, and Caribbean, our data include a wide range of oligotrophic to eutrophic waters covering six orders of magnitude of assemblage biomass. A first-order estimate of the global planktic foraminifer biomass from average standing stocks (>125 μm) ranges from 8.5 to 32.7 Tg C yr−1 (i.e. 0.008–0.033 Gt C yr−1), and might be more than three times as high when the entire fauna, including neanic and juvenile individuals, is counted, adding up to 25–100 Tg C yr−1. However, this is a first estimate of regional planktic-foraminifer assemblage-biomass (PFAB) extrapolated to the global scale, and future estimates based on larger data-sets might

  8. Research on Modeling of Hydropneumatic Suspension Based on Fractional Order

    Directory of Open Access Journals (Sweden)

    Junwei Zhang

    2015-01-01

    With such excellent properties as nonlinear stiffness, adjustable vehicle height, and good vibration resistance, hydropneumatic suspension (HS) has been increasingly applied to heavy vehicles and engineering vehicles. Traditional modeling methods are still confined to simple models that do not take many factors into consideration. A hydropneumatic suspension model based on fractional order (HSM-FO) is built, exploiting the advantages of fractional-order (FO) calculus in viscoelastic material modeling and accounting for the mechanical properties of the multiphase medium of HS. A detailed calculation method is then proposed based on the Oustaloup filtering approximation algorithm. The HSM-FO is implemented in Matlab/Simulink, and comparison among the fractional-order simulation curve, the integer-order curve, and the experimental curve proves the feasibility and validity of HSM-FO. The damping force property of the suspension system under different fractional orders is also studied. At the end of this paper, several conclusions concerning HSM-FO are drawn from the analysis of the simulations.
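
The Oustaloup recursive filter mentioned above approximates s^α over a frequency band by interlaced real zeros and poles; a sketch of its magnitude response is given below, with the gain normalized numerically at the band centre since gain conventions vary between references.

```python
import numpy as np

def oustaloup_response(alpha, wb, wh, N, w):
    """Magnitude response of an Oustaloup-style recursive approximation of
    s^alpha over [wb, wh] rad/s, evaluated at frequencies w. The zero/pole
    placement follows the usual recursive distribution; the gain is fixed
    numerically so that |H| = wc^alpha at the geometric band centre wc."""
    k = np.arange(-N, N + 1)
    zeros = wb * (wh / wb) ** ((k + N + 0.5 * (1.0 - alpha)) / (2 * N + 1))
    poles = wb * (wh / wb) ** ((k + N + 0.5 * (1.0 + alpha)) / (2 * N + 1))

    def H(om):
        s = 1j * om
        return np.prod((s + zeros) / (s + poles))

    wc = np.sqrt(wb * wh)
    gain = wc ** alpha / abs(H(wc))
    return np.array([gain * abs(H(om)) for om in w])

# half-order differentiator s^0.5 over [1e-3, 1e3] rad/s, mid-band sweep
w = np.logspace(-1, 1, 50)
mag = oustaloup_response(0.5, 1e-3, 1e3, N=4, w=w)
```

In mid-band the log-log magnitude slope approaches the fractional order α, which is what lets an ordinary rational transfer function emulate the FO damping terms of the HSM-FO in Simulink.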

  9. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂ₓ^ℓ h‖∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  10. Nonlocal order parameters for the 1D Hubbard model.

    Science.gov (United States)

    Montorsi, Arianna; Roncaglia, Marco

    2012-12-07

    We characterize the Mott-insulator and Luther-Emery phases of the 1D Hubbard model through correlators that measure the parity of spin and charge strings along the chain. These nonlocal quantities order in the corresponding gapped phases and vanish at the critical point U(c)=0, thus serving as hidden order parameters. The Mott insulator consists of bound doublon-holon pairs, which in the Luther-Emery phase turn into electron pairs with opposite spins, both unbinding at U(c). The behavior of the parity correlators is captured by an effective free spinless fermion model.

  11. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  12. Perspectives on the application of order-statistics in best-estimate plus uncertainty nuclear safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Robert P., E-mail: smsrpm@owt.co [AREVA NP, Inc., 3315 Old Forest Road, Lynchburg, VA (United States); Nutt, William T. [Nuclear Safety Analysis Services, 500 Aloha Street 403, Seattle, WA 98109 (United States)

    2011-01-15

    Research highlights: Historical recitation on the application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. Interpretation of regulatory language regarding the 10 CFR 50.46 reference to a 'high level of probability'. Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in the regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability.' At the root of the second question is whether one should recognize a probability statement based on the tolerance region method of Wald and Guba et al. for multi-variate problems, one explicitly based on the regulatory limits, best articulated in the Wallis-Nutt 'Testing Method', or something else entirely. This paper reviews the origins of the different positions, key assumptions, limitations, and relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
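The uni-variate order-statistics result discussed above is commonly operationalized through Wilks' formula: find the smallest number of code runs n such that the k-th largest output bounds the target quantile at the required confidence. A sketch of that computation (a standard textbook result, not taken verbatim from the paper; the 95/95 target reflects the common reading of 'high level of probability'):

```python
from math import comb

def wilks_sample_size(beta=0.95, gamma=0.95, k=1):
    """Smallest n such that the k-th largest of n independent runs is a
    one-sided upper tolerance limit for the beta-quantile with
    confidence gamma (Wilks' order-statistics formula)."""
    n = k
    while True:
        # P(at least k of n observations exceed the beta-quantile);
        # the exceedance count is Binomial(n, 1 - beta).
        conf = 1.0 - sum(comb(n, i) * (1 - beta) ** i * beta ** (n - i)
                         for i in range(k))
        if conf >= gamma:
            return n
        n += 1

# Classic 95/95 results: 59 runs when crediting the maximum (k = 1),
# 93 runs when crediting the second-largest output (k = 2).
```

Using a higher-order statistic (k > 1) costs more runs but yields a less conservative bound, which is the practical trade-off behind the 'limited sample failures' discussion in the abstract.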

  13. The Variance of Energy Estimates for the Product Model

    Directory of Open Access Journals (Sweden)

    David Smallwood

    2003-01-01

    Full Text Available A random process, {x(t)}, which is the product of a slowly varying random window, {w(t)}, and a stationary random process, {g(t)}, is defined. A single realization of the process will be defined as x(t). This is slightly different from the usual definition of the product model, where the window is typically defined as deterministic. An estimate of the energy (the zeroth order temporal moment; only in special cases is this the physical energy) of the random process, {x(t)}, is defined as m₀ = ∫−∞∞ |x(t)|² dt = ∫−∞∞ |w(t)g(t)|² dt. Relationships for the mean and variance of the energy estimates, m₀, are then developed. It is shown that for many cases the uncertainty (4π times the product of the rms duration, Dt, and the rms bandwidth, Df) is approximately the inverse of the normalized variance of the energy. The uncertainty is a quantitative measure of the expected error in the energy estimate. If a transient has a significant random component, a small uncertainty parameter implies large error in the energy estimate. Attempts to resolve a time/frequency spectrum near the uncertainty limits of a transient with a significant random component will result in large errors in the spectral estimates.

  14. Fuzzy Economic Order Quantity Inventory Models Without Backordering

    Institute of Scientific and Technical Information of China (English)

    WANG Xiaobin; TANG Wansheng; ZHAO Ruiqing

    2007-01-01

    In economic order quantity models without backordering, both the stock cost of each unit quantity and the order cost of each cycle are characterized as independent fuzzy variables rather than fuzzy numbers as in previous studies. Based on an expected value criterion or a credibility criterion, a fuzzy expected value model and a fuzzy dependent-chance programming (DCP) model are constructed. The purpose of the fuzzy expected value model is to find the optimal order quantity such that the fuzzy expected value of the total cost is minimal. The fuzzy DCP model is used to find the optimal order quantity for maximizing the credibility of the event that the total cost in the planning periods does not exceed a certain budget level. Fuzzy simulations are designed to calculate the expected value of the fuzzy objective function and the credibility of each fuzzy event. A particle swarm optimization (PSO) algorithm based on fuzzy simulation is designed by integrating the fuzzy simulation and the PSO algorithm. Finally, a numerical example is given to illustrate the feasibility and validity of the proposed algorithm.

  15. Model Order Reduction for Fluid Dynamics with Moving Solid Boundary

    Science.gov (United States)

    Gao, Haotian; Wei, Mingjun

    2016-11-01

    We extended the application of POD-Galerkin projection for model order reduction from usual fixed-domain problems to more general fluid-solid systems where a moving boundary/interface is involved. The idea is similar to numerical simulation approaches using embedded forcing terms to represent boundary motion and domain change. However, such a modified approach does not remove the unsteadiness of the boundary terms, which appear as time-dependent coefficients in the new Galerkin model. These coefficients need to be pre-computed for prescribed motion, or worse, to be computed at each time step for non-prescribed motion. The extra computational cost becomes expensive in some cases and eventually undermines the value of using reduced-order models. One solution is to decompose the moving boundary/domain into orthogonal modes and derive another low-order model with fixed coefficients for the boundary motion. Further study shows that the most expensive integrations resulting from the unsteady motion (in both the original and the domain-decomposition approaches) have almost negligible impact on the overall dynamics. Dropping these expensive terms reduces the computational cost by at least an order of magnitude while no obvious effect on model accuracy is noticed. Supported by ARL.
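The POD step underlying this kind of Galerkin reduction starts from a snapshot matrix whose columns are flow states at successive times; the POD modes are its left singular vectors, and the retained modes span the dominant dynamics. A minimal sketch of that first step (the snapshot data here is synthetic, purely for illustration, not from the paper's fluid-solid solver):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshots: two coherent space-time structures plus weak noise.
n_points, n_snaps = 500, 40
t = np.linspace(0.0, 1.0, n_snaps)
x = np.linspace(0.0, 1.0, n_points)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(2 * np.pi * t)
             + 0.5 * np.sin(4 * np.pi * x) * np.sin(2 * np.pi * t)
             + 1e-3 * rng.standard_normal((n_points, n_snaps)))

# POD via thin SVD of the snapshot matrix; singular values rank mode energy.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(S ** 2) / np.sum(S ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% of the energy
modes = U[:, :r]

# Galerkin (modal) coefficients: project snapshots onto the retained modes.
coeffs = modes.T @ snapshots
```

For this two-structure field the energy criterion keeps exactly two modes, and the rank-r reconstruction `modes @ coeffs` recovers the snapshots up to the noise floor; in the paper's setting the same projection is applied to the governing equations to produce the reduced model.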

  16. Ordering kinetics in model systems with inhibited interfacial adsorption

    DEFF Research Database (Denmark)

    Willart, J.-F.; Mouritsen, Ole G.; Naudts, J.

    1992-01-01

    The ordering kinetics in two-dimensional Ising-like spin models with inhibited interfacial adsorption are studied by computer-simulation calculations. The inhibited interfacial adsorption is modeled by a particular interfacial adsorption condition on the structure of the domain wall between...... neighboring domains. This condition can be either hard, as modeled by a singularity in the domain-boundary potential, or soft, as modeled by a version of the Blume-Capel model. The results show that the effect of the steric hindrance, be it hard or soft, is only manifested in the amplitude, A...

  17. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... distribution. Methods from structural reliability analysis are used to model the uncertainties and to assess the reliability for fatigue failure. Maximum Likelihood and Least Square estimation techniques are used to estimate fatigue life distribution parameters....
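Least squares estimation of Weibull parameters, as credited above, is commonly done by median rank regression: the Weibull CDF is linearized so that shape and scale fall out of a straight-line fit to the ordered failure data. A sketch on assumed synthetic failure times (the true shape 2.0 and scale 1000 are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
beta_true, eta_true = 2.0, 1000.0        # shape, scale (e.g. cycles to failure)
n = 200

# Inverse-CDF sampling of Weibull failure times, sorted for rank regression.
failures = np.sort(
    eta_true * (-np.log(1.0 - rng.uniform(size=n))) ** (1.0 / beta_true))

# Median ranks (Bernard's approximation) as the empirical CDF.
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)

# Linearization: ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta),
# so a straight-line fit yields both parameters.
y = np.log(-np.log(1.0 - F))
xlog = np.log(failures)
beta_hat, intercept = np.polyfit(xlog, y, 1)
eta_hat = np.exp(-intercept / beta_hat)
```

With 200 samples the recovered shape and scale land close to the true values; in the paper's setting the "failure times" would instead be accumulated-damage lives from the physics of failure model.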

  18. STRONGLY CONSISTENT ESTIMATION FOR A MULTIVARIATE LINEAR RELATIONSHIP MODEL WITH ESTIMATED COVARIANCES MATRIX

    Institute of Scientific and Technical Information of China (English)

    Yee LEUNG; WU Kefa; DONG Tianxin

    2001-01-01

    In this paper, a multivariate linear functional relationship model, in which the covariance matrix of the observational errors is not restricted, is considered. The parameter estimation of this model is discussed. The estimators are shown to be strongly consistent under some mild conditions on the incidental parameters.

  19. Statistical models for estimating daily streamflow in Michigan

    Science.gov (United States)

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l error magnitudes were compared by computing ratios of the mean standard deviation

  20. Parameterized reduced-order models using hyper-dual numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
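Hyper-dual numbers, used above to compute the parameterization derivatives, carry two infinitesimal parts and their product, so a single function evaluation yields exact first and second derivatives with no truncation or subtractive cancellation error. A minimal sketch (this toy class and the cubic test function are illustrative, not the report's implementation):

```python
class HyperDual:
    """Number a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0, e1*e2 != 0."""

    def __init__(self, real, e1=0.0, e2=0.0, e12=0.0):
        self.real, self.e1, self.e2, self.e12 = real, e1, e2, e12

    def __add__(self, other):
        return HyperDual(self.real + other.real, self.e1 + other.e1,
                         self.e2 + other.e2, self.e12 + other.e12)

    def __mul__(self, other):
        # Product rule falls out of expanding (a + b*e1 + c*e2 + d*e1*e2)
        # times the same form, discarding e1**2 and e2**2 terms.
        return HyperDual(
            self.real * other.real,
            self.real * other.e1 + self.e1 * other.real,
            self.real * other.e2 + self.e2 * other.real,
            self.real * other.e12 + self.e1 * other.e2
            + self.e2 * other.e1 + self.e12 * other.real)

def value_and_derivatives(f, x):
    """Exact f(x), f'(x), f''(x) from one hyper-dual evaluation."""
    h = f(HyperDual(x, 1.0, 1.0, 0.0))
    return h.real, h.e1, h.e12

# f(x) = x**3 at x = 2: value 8, first derivative 12, second derivative 12,
# all recovered exactly (no finite-difference step size to tune).
val, d1, d2 = value_and_derivatives(lambda x: x * x * x, 2.0)
```

The exactness of the second derivative is what makes hyper-dual numbers attractive for building parameterized ROMs: sensitivities of the Craig-Bampton matrices can be taken without the step-size dilemma of finite differences.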

  1. An Order Statistics Approach to the Halo Model for Galaxies

    CERN Document Server

    Paul, Niladri; Sheth, Ravi K

    2016-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models -- one in which this luminosity function p(L) is universal -- naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-pre...
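The order-statistics idea is easy to demonstrate numerically: if each of a halo's N galaxies draws its luminosity independently from the same p(L), the brightest draw (the "central") is the N-th order statistic, and its mean grows monotonically with N, a proxy for halo mass. A sketch under an assumed lognormal p(L) (the distribution and the occupation numbers are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_central_luminosity(n_gal, n_halos=20000):
    """Mean of the brightest of n_gal i.i.d. draws from a universal
    lognormal p(L), averaged over many mock halos."""
    L = rng.lognormal(mean=0.0, sigma=1.0, size=(n_halos, n_gal))
    return L.max(axis=1).mean()

richness = [2, 5, 20, 50]
centrals = [mean_central_luminosity(n) for n in richness]
# Mean central luminosity rises monotonically with occupation number,
# reproducing the monotonic central-luminosity/halo-mass relation above.
```

Because the same p(L) is used for every halo, nothing in this toy model ties luminosity to environment, which is the intuition behind the paper's finding that the universal-p(L) version predicts no luminosity dependence of clustering.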

  2. Reduced order modeling of grid-connected photovoltaic inverter systems

    Science.gov (United States)

    Wasynczuk, O.; Krause, P. C.; Anwah, N. A.

    1988-04-01

    This report summarizes the development of reduced order models of three-phase, line- and self-commutated inverter systems. This work was performed as part of the National Photovoltaics Program within the United States Department of Energy and was supervised by Sandia National Laboratories. The overall objective of the national program is to promote the development of low cost, reliable terrestrial photovoltaic systems for widespread use in residential, commercial and utility applications. The purpose of the effort reported herein is to provide reduced order models of three-phase, line- and self-commutated PV systems suitable for implementation into transient stability programs, which are commonly used to predict the stability characteristics of large-scale power systems. The accuracy of the reduced models is verified by comparing the response characteristics predicted therefrom with the response established using highly detailed PV system models in which the inverter switching is represented in detail.

  3. Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates

    Directory of Open Access Journals (Sweden)

    Piotr Białowolski

    2012-03-01

    Full Text Available The aim of this paper is to construct a forecasting model oriented on predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and the consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by the macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that enables to forecast the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing for full and controlled overview of all econometric models which can be obtained out of a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selection of a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in the atheoretical modelling.
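The flavor of averaging over classical estimates can be illustrated with a toy BIC-weighted model average: fit OLS on every subset of candidate regressors, weight each model by an approximate posterior probability, and accumulate inclusion evidence per regressor. This uses BIC weights as a standard stand-in; the paper's exact BACE weighting and sampling scheme may differ, and the data here is synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, names = 200, ["x1", "x2", "x3"]
X = rng.standard_normal((n, 3))
y = 2.0 * X[:, 0] + rng.standard_normal(n)       # only x1 truly matters

def bic_of_subset(cols):
    """BIC of an OLS fit using an intercept plus the given regressor columns."""
    A = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta) ** 2)
    return n * np.log(rss / n) + A.shape[1] * np.log(n)

subsets = [c for r in range(4) for c in combinations(range(3), r)]
bics = np.array([bic_of_subset(c) for c in subsets])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                     # approximate model weights

# Posterior inclusion probability of each candidate regressor.
incl = {names[j]: sum(wi for wi, c in zip(w, subsets) if j in c)
        for j in range(3)}
```

The truly informative regressor dominates the inclusion probabilities, which is how BACE-style averaging screens a large pool of survey indicators down to a usable forecasting set.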

  4. Numerical modeling of higher order magnetic moments in UXO discrimination

    Science.gov (United States)

    Sanchez, V.; Yaoguo, L.; Nabighian, M.N.; Wright, D.L.

    2008-01-01

    The surface magnetic anomaly observed in unexploded ordnance (UXO) clearance is mainly dipolar, and consequently, the dipole is the only magnetic moment regularly recovered in UXO discrimination. The dipole moment contains information about the intensity of magnetization but lacks information about the shape of the target. In contrast, higher order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and to show its potential utility in UXO clearance, we present a numerical modeling study of UXO and related metallic objects. The tool for the modeling is a nonlinear integral equation describing magnetization within isolated compact objects of high susceptibility. A solution for magnetization distribution then allows us to compute the magnetic multipole moments of the object, analyze their relationships, and provide a depiction of the anomaly produced by different moments within the object. Our modeling results show the presence of significant higher order moments for more asymmetric objects, and the fields of these higher order moments are well above the noise level of magnetic gradient data. The contribution from higher order moments may provide a practical tool for improved UXO discrimination. © 2008 IEEE.

  5. Feedback control of unstable steady states of flow past a flat plate using reduced-order estimators

    CERN Document Server

    Ahuja, Sunil

    2009-01-01

    We present an estimator-based control design procedure for flow control, using reduced-order models of the governing equations, linearized about a possibly unstable steady state. The reduced models are obtained using an approximate balanced truncation method that retains the most controllable and observable modes of the system. The original method is valid only for stable linear systems, and we present an extension to unstable linear systems. The dynamics on the unstable subspace are represented by projecting the original equations onto the global unstable eigenmodes, assumed to be small in number. A snapshot-based algorithm is developed, using approximate balanced truncation, for obtaining a reduced-order model of the dynamics on the stable subspace. The proposed algorithm is used to study feedback control of 2-D flow over a flat plate at a low Reynolds number and at large angles of attack, where the natural flow is vortex shedding, though there also exists an unstable steady state. For control design, we de...

  6. Modeling Uncertainty when Estimating IT Projects Costs

    OpenAIRE

    Winter, Michel; Mirbel, Isabelle; Crescenzo, Pierre

    2014-01-01

    In the current economic context, optimizing projects' cost is an obligation for a company to remain competitive in its market. Introducing statistical uncertainty in cost estimation is a good way to tackle the risk of going too far while minimizing the project budget: it allows the company to determine the best possible trade-off between estimated cost and acceptable risk. In this paper, we present new statistical estimators derived from the way IT companies estimate the projects' costs. In t...

  7. A reduced order model for nonlinear vibroacoustic problems

    Directory of Open Access Journals (Sweden)

    Ouisse Morvan

    2012-07-01

    Full Text Available This work is related to geometrical nonlinearities applied to thin plates coupled with a fluid-filled domain. Model reduction is performed to reduce the computation time. The reduced order model (ROM) is derived from the uncoupled linear problem and enriched with residues to describe the nonlinear behavior and coupling effects. To show the efficiency of the proposed method, numerical simulations in the case of an elastic plate closing an acoustic cavity are presented.

  8. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for significant reduction of the recurrent launch cost, which would result in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in this paper that allows the benefits to be expected from a new space venture to be estimated. The main reason why humans should explore space is determined in this study to be to "improve the quality of life". This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which are usable for the optimisation process of reusable launch vehicles.

  9. Estimating the Relative Order of Speciation or Coalescence Events on a Given Phylogeny

    Directory of Open Access Journals (Sweden)

    Rutger Vos

    2006-01-01

    Full Text Available The reconstruction of large phylogenetic trees from data that violate clocklike evolution (or of a supertree constructed from any m input trees) raises a difficult question for biologists: how can one assign relative dates to the vertices of the tree? In this paper we investigate this problem, assuming a uniform distribution on the order of the inner vertices of the tree (which includes, but is more general than, the popular Yule distribution on trees). We derive fast algorithms for computing the probability that (i) any given vertex in the tree was the j-th speciation event (for each j), and (ii) any one given vertex is earlier in the tree than a second given vertex. We show how the first algorithm can be used to calculate the expected length of any given interior edge in any given tree that has been generated under either a constant-rate speciation model or the coalescent model.

  10. Maximum likelihood estimates with order restrictions on probabilities and odds ratios: A geometric programming approach

    Directory of Open Access Journals (Sweden)

    D. L. Bricker

    1997-01-01

    Full Text Available The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe, as an alternative in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
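For the simplest instance of this problem, a monotonicity restriction p1 ≤ p2 ≤ … ≤ pk on the cell probabilities with no odds-ratio constraints, the MLE has a direct solution via the pool-adjacent-violators algorithm (PAVA), without any geometric program. A sketch of that special case, shown for intuition as an alternative to the paper's GP formulation:

```python
def restricted_mle(counts):
    """MLE of multinomial cell probabilities under p1 <= p2 <= ... <= pk,
    computed by pool-adjacent-violators (PAVA) on the raw proportions."""
    n = sum(counts)
    # Each block: [pooled proportion, number of cells pooled into it].
    blocks = [[c / n, 1] for c in counts]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # order violated: pool
            v1, m1 = blocks[i]
            v2, m2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * v1 + m2 * v2) / (m1 + m2), m1 + m2]]
            i = max(i - 1, 0)                    # pooled block may violate left
        else:
            i += 1
    probs = []
    for v, m in blocks:
        probs.extend([v] * m)
    return probs

# Counts (30, 20, 50): the unrestricted MLE (0.30, 0.20, 0.50) violates
# p1 <= p2; pooling the first two cells gives (0.25, 0.25, 0.50).
```

The GP formulation in the paper earns its keep once local odds-ratio constraints enter, where no such closed-form pooling argument applies.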

  11. Robust simulation of buckled structures using reduced order modeling

    Science.gov (United States)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use, but require no time stepping of, a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  12. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    Science.gov (United States)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have been previously utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.

  13. A multi agent model for the limit order book dynamics

    NARCIS (Netherlands)

    Bartolozzi, M.

    2010-01-01

    In the present work we introduce a novel multi-agent model with the aim to reproduce the dynamics of a double auction market at microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noise decision-making process where the

  14. Update rules and interevent time distributions: Slow ordering vs. no ordering in the Voter Model

    CERN Document Server

    Fernández-Gracia, Juan; Miguel, M San

    2011-01-01

    We introduce a general methodology of update rules accounting for arbitrary interevent time distributions in simulations of interacting agents. In particular we consider update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully-connected, random, and scale-free networks with an update probability inversely proportional to the persistence, that is, the time since the last event. We find that in the thermodynamic limit, at variance with standard updates, the system orders slowly. The approach to the absorbing state is characterized by a power law decay of the density of interfaces, observing that the mean time to reach the absorbing state might not be well defined.

  15. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced order bilinear state space model, the well established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.

  16. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    OpenAIRE

    Ridi Ferdiana; Paulus Insap Santoso; Lukito Edi Nugroho; Ahmad Ashari

    2011-01-01

    Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of the execution and process model. There are many software estimation techniques that work sufficiently well in certain conditions or s...

  17. USER STORY SOFTWARE ESTIMATION: A SIMPLIFICATION OF SOFTWARE ESTIMATION MODEL WITH DISTRIBUTED EXTREME PROGRAMMING ESTIMATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Ridi Ferdiana

    2011-01-01

    Full Text Available Software estimation is an area of software engineering concerned with the identification, classification and measurement of features of software that affect the cost of developing and sustaining computer programs [19]. Measuring software through estimation serves to gauge the complexity of the software, estimate the required human resources, and gain better visibility of the execution and process model. Many software estimation techniques work sufficiently well only under certain conditions or at certain steps of software engineering, for example measuring lines of code, function points, COCOMO, or use case points. This paper proposes another estimation technique called Distributed eXtreme Programming Estimation (DXP Estimation). DXP estimation provides a basic technique for teams that use the eXtreme Programming method in onsite or distributed development. To the authors' knowledge, this is the first estimation technique applied to the eXtreme Programming agile method.

  18. A first order system model of fracture healing

    Institute of Scientific and Technical Information of China (English)

    WANG Xiao-ping; ZHANG Xian-long; LI Zhu-guo; YU Xin-gang

    2005-01-01

    A first order system model is proposed for simulating the influence of stress stimulation on fracture strength during fracture healing. To validate the model, the diaphyses of bilateral tibiae in 70 New Zealand rabbits were osteotomized and fixed with rigid plates and stress-relaxation plates, respectively. Stress shielding rate and ultimate bending strength of the healing bone were measured at 2 to 48 weeks postoperatively. Ratios of stress stimulation and fracture strength of the healing bone to those of intact bone were taken as the system input and output. The assumed first order system model can approximate the experimental data on fracture strength from the input of stress stimulation over time, both for the rigid plate group and the stress-relaxation plate group, with different system parameters of time constant and gain. The fitted curve indicates that the effect of mechanical stimulus occurs mainly in the late stages of healing. A first order system can thus model the stress adaptation process of fracture healing. This approach presents a simple bio-mathematical model of the relationship between stress stimulation and fracture strength, and has the potential to optimize planning of functional exercises and conduct parametric studies.
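
    The first-order input-output relationship described above can be sketched with a simple Euler simulation of tau * dy/dt + y = gain * u(t). The step input, time constant and gain here are illustrative placeholders, not the values fitted to the rabbit data:

```python
import numpy as np

def first_order_response(u, dt, tau, gain):
    """Euler simulation of the first-order system tau * dy/dt + y = gain * u(t)."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        dydt = (gain * u[k - 1] - y[k - 1]) / tau
        y[k] = y[k - 1] + dt * dydt
    return y

# Step input: normalized stress stimulation ratio switched on at t = 0
dt, T = 0.1, 60.0              # time in weeks (hypothetical scale)
t = np.arange(0.0, T, dt)
u = np.ones_like(t)            # constant stimulation ratio as system input
y = first_order_response(u, dt, tau=10.0, gain=1.0)  # strength ratio as output
```

    The output rises slowly at first and approaches the gain asymptotically, matching the observation that the effect of mechanical stimulus appears mainly late in healing.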

  19. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  20. The rigid-flexible robotic manipulator: Nonlinear control and state estimation considering a different mathematical model for estimation

    Science.gov (United States)

    Fenili, André

    2012-11-01

    In this paper the author investigates the angular position and vibration control of a nonlinear rigid-flexible two-link robotic manipulator, considering fast angular maneuvers. The nonlinear control technique named State-Dependent Riccati Equation (SDRE) is used here to achieve these aims. In a more realistic approach, it is assumed that some states can be measured and others cannot. The unmeasured states, namely all the angular velocities and the velocity of deformation of the flexible link, are estimated in order to be used for the SDRE control. A state-dependent Riccati equation-based estimator is used here. Not only different initial conditions between the system to be controlled (here named the "real" system) and the estimator, but also a different mathematical model, are considered for the estimation in order to verify the limitations of the proposed estimation and control techniques. The mathematical model that emulates the real system considers a two-mode expansion, while the estimation model considers only a one-mode expansion. The results for the different approaches are compared and discussed.

  1. Allometric models for estimating biomass and carbon in Alnus acuminata

    Directory of Open Access Journals (Sweden)

    William Fonseca

    2013-12-01

    Full Text Available In order to quantify the climate change mitigation potential of forest plantations, information on total biomass and its growth rate is required. Depending on the method used, the study of biomass behavior can be a complex and expensive activity. The main objective of this research was to develop allometric models to estimate biomass for different tree components (leaves, branches, stem and root) and total tree biomass in Alnus acuminata (Kunth) in Costa Rica. Additionally, models were developed to estimate biomass and carbon in trees per hectare and for total plant biomass per hectare (trees + herbaceous vegetation + necromass). To construct the tree models, 41 sampling plots were evaluated in seven sites, from which 47 trees with a diameter between 4.5 and 44.5 cm were selected to be harvested. The selected models for stem, root and total tree biomass achieved an r² > 93.87 %, while the adjusted r² for leaves and branches was 88 %. For the biomass and carbon models for total trees and total plant biomass per hectare, the r² was > 99 %. The average biomass expansion factor was 1.22 for aboveground biomass and 1.43 for total biomass (when the root was included). The carbon fraction in plant biomass varied between 32.9 and 46.7 %, and the percentage of soil carbon was 3 %.
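
    Allometric biomass models of this kind are commonly fitted by ordinary least squares after a log-log transform of the power law B = a * D^b. A sketch on synthetic data; the coefficients, sample size and noise level are assumptions for illustration, not values from the Alnus acuminata dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic diameter-biomass data following B = a * D^b with lognormal noise
a_true, b_true = 0.08, 2.4
D = rng.uniform(4.5, 44.5, size=200)                         # diameter, cm
B = a_true * D**b_true * rng.lognormal(0.0, 0.1, size=200)   # biomass, kg

# Fit ln(B) = ln(a) + b * ln(D) by ordinary least squares
X = np.column_stack([np.ones_like(D), np.log(D)])
coef, *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

    Back-transforming the intercept gives the scale coefficient; in practice a bias correction for the log transform is often applied as well.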

  2. Comparison of Parameter Estimation Methods for Transformer Weibull Lifetime Modelling

    Institute of Scientific and Technical Information of China (English)

    ZHOU Dan; LI Chengrong; WANG Zhongdong

    2013-01-01

    The two-parameter Weibull distribution is the most widely adopted lifetime model for power transformers. An appropriate parameter estimation method is essential to guarantee the accuracy of a derived Weibull lifetime model. Six popular parameter estimation methods (i.e. the maximum likelihood estimation method, two median rank regression methods, one regressing X on Y and the other regressing Y on X, the Kaplan-Meier method, the method based on the cumulative hazard plot, and Li's method) are reviewed and compared in order to find the optimal one for transformer Weibull lifetime modelling. The comparison took several different scenarios into consideration: for each scenario, 10 000 sets of lifetime data, each with a sample size of 40 to 1 000 and a censoring rate of 90%, were obtained by Monte Carlo simulations. The scale and shape parameters of the Weibull distribution estimated by the six methods, as well as their mean value, median value and 90% confidence band, are obtained. The cross comparison of these results reveals that, among the six methods, the maximum likelihood method is the best one, since it provides the most accurate Weibull parameters, i.e. parameters having the smallest bias in both mean and median values, as well as the shortest 90% confidence band. The maximum likelihood method is therefore recommended over the other methods for transformer Weibull lifetime modelling.
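
    The recommended maximum likelihood method is easy to sketch for the uncensored case: the Weibull shape parameter solves a one-dimensional profile score equation, after which the scale follows in closed form. The lifetimes below are simulated and uncensored (the study's 90% censoring is not reproduced here):

```python
import math
import random

random.seed(42)

# Simulated uncensored lifetimes from Weibull(scale = 2.0, shape = 1.5)
scale_true, shape_true = 2.0, 1.5
data = [scale_true * (-math.log(1.0 - random.random())) ** (1.0 / shape_true)
        for _ in range(5000)]

def weibull_mle(x, lo=0.05, hi=50.0, iters=60):
    """Two-parameter Weibull MLE: bisection on the profile score for the
    shape k, i.e. sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x) = 0."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(x)

    def score(k):
        xk = [v ** k for v in x]
        return (sum(w * lg for w, lg in zip(xk, logs)) / sum(xk)
                - 1.0 / k - mean_log)

    for _ in range(iters):          # score is increasing in k on [lo, hi]
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    shape = 0.5 * (lo + hi)
    scale = (sum(v ** shape for v in x) / len(x)) ** (1.0 / shape)
    return shape, scale

shape_hat, scale_hat = weibull_mle(data)
```

    With censoring, the score equation gains extra terms for the censored observations, but the same one-dimensional root search applies.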

  3. Bayesian model evidence for order selection and correlation testing.

    Science.gov (United States)

    Johnston, Leigh A; Mareels, Iven M Y; Egan, Gary F

    2011-01-01

    Model selection is a critical component of data analysis procedures, and is particularly difficult for small numbers of observations such as is typical of functional MRI datasets. In this paper we derive two Bayesian evidence-based model selection procedures that exploit the existence of an analytic form for the linear Gaussian model class. Firstly, an evidence information criterion is proposed as a model order selection procedure for auto-regressive models, outperforming the commonly employed Akaike and Bayesian information criteria in simulated data. Secondly, an evidence-based method for testing change in linear correlation between datasets is proposed, which is demonstrated to outperform both the traditional statistical test of the null hypothesis of no correlation change and the likelihood ratio test.
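
    For context, the baseline BIC order selection that the evidence criterion is compared against can be sketched in a few lines: fit AR(p) for each candidate order on a common sample and pick the order minimizing the criterion. The AR(2) data and conditional least-squares fit below are illustrative assumptions, not the paper's fMRI setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: y_t = 0.75 y_{t-1} - 0.5 y_{t-2} + e_t
n = 2000
y = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    y[t] = 0.75 * y[t - 1] - 0.5 * y[t - 2] + e[t]

def bic_ar(y, p, pmax):
    """Conditional least-squares AR(p) fit on a common sample, scored by BIC."""
    Y = y[pmax:]
    n_eff = len(Y)
    if p == 0:
        resid = Y
    else:
        # Column k holds the lag-k values aligned with Y
        X = np.column_stack([y[pmax - k: len(y) - k] for k in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
    sigma2 = np.mean(resid**2)
    return n_eff * np.log(sigma2) + p * np.log(n_eff)

pmax = 8
bics = [bic_ar(y, p, pmax) for p in range(pmax + 1)]
order = int(np.argmin(bics))
```

    The evidence criterion of the paper replaces this penalized likelihood score with the analytic marginal likelihood of the linear Gaussian model.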

  4. An improved model for reduced-order physiological fluid flows

    CERN Document Server

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation along with the continuity equation and pressure-area relation form our reduced-order model for physiological fluid flows in one dimension, and are aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume steady state Hagen-Poiseuille velocity pro...

  5. Higher-Order Markov Tag-Topic Models for Tagged Documents and Images

    CERN Document Server

    Zeng, Jia; Cheung, William K; Li, Chun-Hung

    2011-01-01

    This paper studies the topic modeling problem of tagged documents and images. Higher-order relations among tagged documents and images are major and ubiquitous characteristics, and play positive roles in extracting reliable and interpretable topics. In this paper, we propose the tag-topic models (TTM) to depict such higher-order topic structural dependencies within the Markov random field (MRF) framework. First, we use the novel factor graph representation of latent Dirichlet allocation (LDA)-based topic models from the MRF perspective, and present an efficient loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Second, we propose the factor hypergraph representation of TTM, and focus on both pairwise and higher-order relation modeling among tagged documents and images. Efficient loopy BP algorithm is developed to learn TTM, which encourages the topic labeling smoothness among tagged documents and images. Extensive experimental results confirm the incorporation of highe...

  6. On estimation of survival function under random censoring model

    Institute of Scientific and Technical Information of China (English)

    JIANG Jiancheng (蒋建成); CHENG Bo (程博); WU Xizhi (吴喜之)

    2002-01-01

    We study an estimator of the survival function under the random censoring model. A Bahadur-type representation of the estimator is obtained, and an asymptotic expression for its mean squared error is given, which leads to the consistency and asymptotic normality of the estimator. A data-driven local bandwidth selection rule for the estimator is proposed. It is worth noting that the estimator is consistent at left boundary points, which contrasts with the cases of density and hazard rate estimation. A Monte Carlo comparison of different estimators is made, and it appears that the proposed data-driven estimators have certain advantages over the common Kaplan-Meier estimator.
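
    For reference, the Kaplan-Meier product-limit benchmark can be written in a few lines (distinct observation times assumed; tied event times would need grouping):

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.
    events[i] = 1 if a failure was observed at times[i], 0 if right-censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, s = [], 1.0
    for i in order:
        if events[i] == 1:
            s *= (at_risk - 1) / at_risk   # survival drops only at event times
        surv.append((times[i], s))
        at_risk -= 1                       # censored subjects leave the risk set
    return surv

# Four subjects; the third observation is right-censored:
# survival steps to 0.75, then 0.5, stays 0.5 at the censored time, then 0.0
km = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 1, 0, 1])
```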

  7. Learning curve estimation in medical devices and procedures: hierarchical modeling.

    Science.gov (United States)

    Govindarajulu, Usha S; Stillo, Marco; Goldfarb, David; Matheny, Michael E; Resnic, Frederic S

    2017-07-30

    In the use of medical device procedures, learning effects have been shown to be a critical component of medical device safety surveillance. To support their estimation of these effects, we evaluated multiple methods for modeling these rates within a complex simulated dataset representing patients treated by physicians clustered within institutions. We employed unique modeling for the learning curves to incorporate the learning hierarchy between institution and physicians and then modeled them within established methods that work with hierarchical data such as generalized estimating equations (GEE) and generalized linear mixed effect models. We found that both methods performed well, but that the GEE may have some advantages over the generalized linear mixed effect models for ease of modeling and a substantially lower rate of model convergence failures. We then focused more on using GEE and performed a separate simulation to vary the shape of the learning curve as well as employed various smoothing methods to the plots. We concluded that while both hierarchical methods can be used with our mathematical modeling of the learning curve, the GEE tended to perform better across multiple simulated scenarios in order to accurately model the learning effect as a function of physician and hospital hierarchical data in the use of a novel medical device. We found that the choice of shape used to produce the 'learning-free' dataset would be dataset specific, while the choice of smoothing method was negligibly different from one another. This was an important application to understand how best to fit this unique learning curve function for hierarchical physician and hospital data. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Precise local blur estimation based on the first-order derivative

    NARCIS (Netherlands)

    Bouma, H.; Dijk, J.; Eekeren, A.W.M. van

    2012-01-01

    Blur estimation is an important technique for super resolution, image restoration, turbulence mitigation, deblurring and autofocus. Low-cost methods have been proposed for blur estimation. However, they can have large stochastic errors when computed close to the edge location and biased estimates at

  9. Second order closure modeling of turbulent buoyant wall plumes

    Science.gov (United States)

    Zhu, Gang; Lai, Ming-Chia; Shih, Tsan-Hsing

    1992-01-01

    Non-intrusive measurements of scalar and momentum transport in turbulent wall plumes, using a combined technique of laser Doppler anemometry and laser-induced fluorescence, have shown some interesting features not present in free jets or plumes. First, buoyancy generation of turbulence is shown to be important throughout the flow field. Combined with low-Reynolds-number turbulence and near-wall effects, this may raise the anisotropic turbulence structure beyond the prediction of eddy-viscosity models. Second, the transverse scalar fluxes do not correspond only to the mean scalar gradients, as would be expected from gradient-diffusion modeling. Third, higher-order velocity-scalar correlations which describe turbulent transport phenomena could not be predicted using simple turbulence models. A second-order closure simulation of turbulent adiabatic wall plumes, taking into account the recent progress in scalar transport, near-wall effects and buoyancy, is reported in the current study for comparison with the non-intrusive measurements. In spite of the small velocity scale of the wall plumes, the results showed that low-Reynolds-number correction is not critically important for predicting the adiabatic cases tested and cannot be applied beyond the maximum velocity location. The mean and turbulent velocity profiles are very closely predicted by the second-order closure models, but the scalar field is less satisfactory, with the scalar fluctuation level underpredicted. Strong intermittency of the low-Reynolds-number flow field is suspected to cause these discrepancies. The trends in second- and third-order velocity-scalar correlations, which describe turbulent transport phenomena, are also predicted in general, with the cross-streamwise correlations captured better than the streamwise ones. Buoyancy terms modeling the pressure correlation are shown to improve the prediction slightly. The effects of the equilibrium time-scale ratio and boundary conditions are also discussed.

  10. Advanced Fluid Reduced Order Models for Compressible Flow.

    Energy Technology Data Exchange (ETDEWEB)

    Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas; Barone, Matthew F.; Maddix, Danielle; Mussoni, Erin E.; Balajewicz, Maciej (UIUC)

    2017-09-01

    This report summarizes fiscal year (FY) 2017 progress towards developing and implementing, within the SPARC in-house finite volume flow solver, advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
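
    The POD step at the heart of this approach is a singular value decomposition of a snapshot matrix, followed by projection of the full-order operator onto the leading modes. A minimal sketch on synthetic low-rank snapshots; the dimensions and the placeholder operator are illustrative assumptions, unrelated to SPARC:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: states that live (almost) in a 3-dim subspace,
# mimicking a flow solution dominated by a few coherent structures
n_state, n_snap, r_true = 200, 60, 3
modes = np.linalg.qr(rng.standard_normal((n_state, r_true)))[0]
coeffs = rng.standard_normal((r_true, n_snap))
S = modes @ coeffs + 1e-4 * rng.standard_normal((n_state, n_snap))

# POD basis: left singular vectors of the snapshots, ordered by energy
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 3
Phi = U[:, :r]

# Rank-r reconstruction error of the snapshot set
S_r = Phi @ (Phi.T @ S)
rel_err = np.linalg.norm(S - S_r) / np.linalg.norm(S)

# Galerkin projection of a full-order operator onto the POD basis
A = -np.eye(n_state)        # placeholder stable linear operator
A_r = Phi.T @ A @ Phi       # r x r reduced operator
```

    The LSPG variant replaces this Galerkin projection with a least-squares residual minimization at each time step, which is where the method of the report departs from the plain sketch above.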

  11. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    ... question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariance matrices in order to limit the impact of transport model errors on estimated methane fluxes.

  12. Analytical Higher-Order Model for Flexible and Stretchable Sensors

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yongfang; ZHU Hongbin; LIU Cheng; LIU Xu; LIU Fuxi; L Yanjun

    2015-01-01

    The stretchable sensor wrapped around a foldable airfoil or embedded inside it has great potential for use in monitoring the structural status of the foldable airfoil. The design methodology is important to the development of the stretchable sensor for status monitoring on the foldable airfoil. According to the requirement of mechanical flexibility of the sensor, the combined use of a layered flexible structural formation and a strain-isolation layer is implemented. An analytical higher-order model based on the shear-lag model is proposed to predict the stresses of the strain-isolation layer for the safe design of flexible and stretchable sensors. The normal stress and shear stress equations in the constructed structure of the sensors are obtained by the proposed model. The stress distribution in the structure is investigated when a bending load is applied. The numerical results show that the proposed model can accurately predict the variation of normal stress and shear stress along the thickness of the strain-isolation (polydimethylsiloxane) layer. The results of the proposed model are in good agreement with the finite element method, in which the normal stress varies while the shear stress is invariant along the thickness direction of the strain-isolation layer. The higher-order model is thus proposed to predict the stresses of the layered structure of the flexible and stretchable sensor for monitoring the status of the foldable airfoil.

  13. Source term boundary adaptive estimation in a first-order 1D hyperbolic PDE: Application to a one loop solar collector trough

    KAUST Repository

    Mechhoud, Sarra

    2016-08-04

    In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term and only boundary measurements are available. Using boundary injection, the estimator is developed in the Lyapunov approach and consists of a combination of a state observer and a parameter adaptation law which guarantee the asymptotic convergence of the state and parameter estimation errors. Simulation results are provided to illustrate the performance of the proposed identifier.
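
    The plant model, a 1D first-order hyperbolic PDE u_t + v u_x = s with a distributed source term s, is straightforward to simulate with a first-order upwind scheme. At steady state the profile satisfies v u_x = s, i.e. u(x) = s x / v, which the sketch below reproduces; the grid, speed and source values are illustrative, and no adaptation law is implemented here:

```python
import numpy as np

# u_t + v u_x = s on x in [0, 1]; v: fluid speed, s: distributed source
# (e.g. heat pickup from solar radiation along the collector)
nx, v, s = 100, 1.0, 2.0
dx = 1.0 / nx
dt = 0.5 * dx / v                  # CFL-stable time step
u = np.zeros(nx + 1)               # u[0] is the inlet boundary value

for _ in range(2000):              # march to steady state
    # Explicit upwind update (RHS uses the previous time level)
    u[1:] = u[1:] - v * dt / dx * (u[1:] - u[:-1]) + s * dt
    u[0] = 0.0                     # inlet boundary condition

x = np.linspace(0.0, 1.0, nx + 1)
steady = s * x / v                 # exact steady profile u(x) = s x / v
max_err = np.max(np.abs(u - steady))
```

    A boundary observer in the spirit of the paper would run a copy of this simulation with an estimated source term, correcting it from the mismatch at the measured outlet value u[-1].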

  14. Performance of a reduced-order FSI model for flow-induced vocal fold vibration

    Science.gov (United States)

    Chang, Siyuan; Luo, Haoxiang; Luo's lab Team

    2016-11-01

    Vocal fold vibration during speech production involves a three-dimensional unsteady glottal jet flow and three-dimensional nonlinear tissue mechanics. A full 3D fluid-structure interaction (FSI) model is computationally expensive even though it provides the most accurate information about the system. On the other hand, an efficient reduced-order FSI model is useful for fast simulation and analysis of the vocal fold dynamics, which is often needed in procedures such as optimization and parameter estimation. In this work, we study the performance of a reduced-order model as compared with the corresponding full 3D model in terms of its accuracy in predicting the vibration frequency and deformation mode. In the reduced-order model, we use a 1D flow model coupled with a 3D tissue model. Two different hyperelastic tissue behaviors are assumed. In addition, the vocal fold thickness and subglottal pressure are varied for systematic comparison. The result shows that the reduced-order model provides predictions consistent with the full 3D model across different tissue material assumptions and subglottal pressures. However, the vocal fold thickness has the greatest effect on the model accuracy, especially when the vocal fold is thin. Supported by the NSF.

  15. Identification of reduced-order model for an aeroelastic system from flutter test data

    Directory of Open Access Journals (Sweden)

    Wei Tang

    2017-02-01

    Full Text Available Recently, active flutter control within the linear parameter varying (LPV) framework has attracted a lot of attention. LPV control synthesis usually generates controllers that are at least of the same order as the aeroelastic models. A reduced-order model is therefore required by the synthesis to avoid large computational cost and high-order controllers. This paper proposes a new procedure for generating accurate reduced-order linear time-invariant (LTI) models by system identification from flutter test data. The proposed approach proceeds in two steps. The well-known poly-reference least squares complex frequency (p-LSCF) algorithm is first employed for modal parameter identification from frequency response measurements. After parameter identification, the dominant physical modes are determined by clear stabilization diagrams and a clustering technique. In the second step, with prior knowledge of the physical poles, an improved frequency-domain maximum likelihood (ML) estimator is presented for building an accurate reduced-order model. Before ML estimation, an improved subspace identification considering the pole constraints is also proposed for initializing the iterative procedure. Finally, the performance of the proposed procedure is validated on real flight flutter test data.

  16. Consistent estimators in random censorship semiparametric models

    Institute of Scientific and Technical Information of China (English)

    王启华

    1996-01-01

    For the fixed design regression model, when the responses Y are randomly censored on the right, estimators of the unknown parameter and the regression function g from censored observations are defined in two cases, where the censoring distribution is known and unknown, respectively. Moreover, sufficient conditions are established under which these estimators are strongly consistent and pth (p>2) mean consistent.

  17. Estimation of Wind Turbulence Using Spectral Models

    DEFF Research Database (Denmark)

    Soltani, Mohsen; Knudsen, Torben; Bak, Thomas

    2011-01-01

    The production and loading of wind farms are significantly influenced by the turbulence of the flowing wind field. Estimation of turbulence allows us to optimize the performance of the wind farm. Turbulence estimation is, however, highly challenging due to the chaotic behavior of the wind. In this...

  18. A Note on Structural Equation Modeling Estimates of Reliability

    Science.gov (United States)

    Yang, Yanyun; Green, Samuel B.

    2010-01-01

    Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…

  19. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the presented methods is verified using data from radio-epidemiological studies.

  20. Lessons from a low-order coupled chemistry meteorology model and applications to a high-dimensional chemical transport model

    Science.gov (United States)

    Haussaire, Jean-Matthieu; Bocquet, Marc

    2016-04-01

    Atmospheric chemistry models are becoming increasingly complex, with multiphasic chemistry, size-resolved particulate matter, and possibly coupled to numerical weather prediction models. In the meantime, data assimilation methods have also become more sophisticated. Hence, it will become increasingly difficult to disentangle the merits of data assimilation schemes, of models, and of their numerical implementation in a successful high-dimensional data assimilation study. That is why we believe that the increasing variety of problems encountered in the field of atmospheric chemistry data assimilation puts forward the need for simple low-order models, albeit complex enough to capture the relevant dynamics, physics and chemistry that could impact the performance of data assimilation schemes. Following this analysis, we developed a low-order coupled chemistry meteorology model named L95-GRS [1]. The advective wind is simulated by the Lorenz-95 model, while the chemistry consists of 6 reactive species and simulates ozone concentrations. With this model, we carried out data assimilation experiments to estimate the state of the system as well as the forcing parameter of the wind and the emissions of chemical compounds. This model proved to be a powerful playground, giving insights into the difficulties of online and offline estimation of atmospheric pollution. Building on the results from this low-order model, we test advanced data assimilation methods on a state-of-the-art chemical transport model to check whether the conclusions obtained with the low-order model still stand. References [1] Haussaire, J.-M. and Bocquet, M.: A low-order coupled chemistry meteorology model for testing online and offline data assimilation schemes, Geosci. Model Dev. Discuss., 8, 7347-7394, doi:10.5194/gmdd-8-7347-2015, 2015.
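
    The advective component of such testbeds, the Lorenz-95/96 model, is easy to reproduce. A minimal RK4 integration in the classic chaotic regime (40 variables, forcing F = 8); the initial perturbation and step count are illustrative, and the chemistry coupling of L95-GRS is not included:

```python
import numpy as np

def lorenz96(x, forcing=8.0):
    """Tendency of the Lorenz-95/96 model:
    dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F (cyclic indices)."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt, forcing=8.0):
    # Classic fourth-order Runge-Kutta step
    k1 = lorenz96(x, forcing)
    k2 = lorenz96(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Slightly perturbed rest state; the perturbation grows chaotically
n, dt = 40, 0.05
x = 8.0 * np.ones(n)
x[0] += 0.01
for _ in range(1000):
    x = rk4_step(x, dt)
```

    In a twin data assimilation experiment, one such trajectory serves as the "truth" from which noisy observations are drawn.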

  1. Polynomial-rooting based fourth-order MUSIC for direction-of-arrival estimation of noncircular signals

    Institute of Scientific and Technical Information of China (English)

    Lei Shen; Zhiwen Liu; Xiaoming Gou; Yougen Xu

    2014-01-01

    A polynomial-rooting based fourth-order cumulant algorithm is presented for direction-of-arrival (DOA) estimation of second-order fully noncircular source signals, using a uniform linear array (ULA). This algorithm inherits all merits of its spectral-searching counterpart except for the applicability to arbitrary array geometry, while considerably reducing the computation cost. Simulation results show that the proposed algorithm outperforms the previously developed closed-form second-order noncircular ESPRIT method in terms of processing capacity and DOA estimation accuracy, especially in the presence of spatially colored noise.
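
    For orientation, the spectral-search MUSIC principle that the fourth-order algorithm builds on can be sketched with the ordinary second-order covariance (one source on a half-wavelength ULA). This is the classical variant, not the cumulant-based or polynomial-rooting method of the paper, and all scenario parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# ULA of m sensors, half-wavelength spacing; one narrowband source at 20 deg
m, snapshots, theta_true = 8, 500, 20.0

def steering(theta_deg, m):
    # Phase per sensor: 2*pi*(d/lambda)*sin(theta) = pi*sin(theta) for d = lambda/2
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

a = steering(theta_true, m)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
X = np.outer(a, s) + noise

# Sample covariance, eigendecomposition, noise subspace (model order = 1 source)
R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)     # eigenvalues in ascending order
En = eigvec[:, :-1]                    # m - 1 noise eigenvectors

# MUSIC pseudo-spectrum over a grid of candidate angles; peak gives the DOA
grid = np.arange(-90.0, 90.0, 0.1)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2 for t in grid]
theta_hat = float(grid[int(np.argmax(spec))])
```

    The fourth-order variant replaces R by a cumulant matrix, and the rooting variant replaces the grid search by finding roots of a polynomial in the steering phase.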

  2. Parameter estimation of hidden periodic model in random fields

    Institute of Scientific and Technical Information of China (English)

    何书元

    1999-01-01

    Two-dimensional hidden periodic model is an important model in random fields. The model is used in the field of two-dimensional signal processing, prediction and spectral analysis. A method of estimating the parameters for the model is designed. The strong consistency of the estimators is proved.

  3. Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model

    KAUST Repository

    Mo, Qianxing

    2010-01-29

    ChIP-chip experiments are procedures that combine chromatin immunoprecipitation (ChIP) and DNA microarray (chip) technology to study a variety of biological problems, including protein-DNA interaction, histone modification, and DNA methylation. The most important feature of ChIP-chip data is that the intensity measurements of probes are spatially correlated because the DNA fragments are hybridized to neighboring probes in the experiments. We propose a simple, but powerful Bayesian hierarchical approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic resolutions. The model parameters are estimated using the Gibbs sampler. The proposed method is illustrated using two publicly available data sets from Affymetrix and Agilent platforms, and compared with three alternative Bayesian methods, namely, Bayesian hierarchical model, hierarchical gamma mixture model, and Tilemap hidden Markov model. The numerical results indicate that the proposed method performs as well as the other three methods for the data from Affymetrix tiling arrays, but significantly outperforms the other three methods for the data from Agilent promoter arrays. In addition, we find that the proposed method has better operating characteristics in terms of sensitivities and false discovery rates under various scenarios. © 2010, The International Biometric Society.

  4. A multi agent model for the limit order book dynamics

    Science.gov (United States)

    Bartolozzi, M.

    2010-11-01

In the present work we introduce a novel multi-agent model that aims to reproduce the dynamics of a double auction market on a microscopic time scale through a faithful simulation of the matching mechanics in the limit order book. The agents follow a noisy decision-making process in which their actions are related to a stochastic variable, the market sentiment, which we define as a mixture of public and private information. The model, despite making just a few basic assumptions about the trading strategies of the agents, is able to reproduce several empirical features of the high-frequency dynamics of the market microstructure, related not only to the price movements but also to the deposition of the orders in the book.
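The matching mechanics of a limit order book can be illustrated with a minimal price-time priority implementation. This is a generic sketch, not the authors' simulator; the class and method names are invented for illustration.

```python
import heapq

class LimitOrderBook:
    """Minimal price-time priority book: best bid = highest price,
    best ask = lowest price. Names and structure are illustrative."""

    def __init__(self):
        self.bids = []   # max-heap via negated price: (-price, seq, qty)
        self.asks = []   # min-heap: (price, seq, qty)
        self.seq = 0
        self.trades = []

    def submit(self, side, price, qty):
        self.seq += 1
        if side == "buy":
            # Cross against resting asks priced at or below the limit.
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ap, aseq, aq = heapq.heappop(self.asks)
                traded = min(qty, aq)
                self.trades.append((ap, traded))
                qty -= traded
                if aq > traded:
                    heapq.heappush(self.asks, (ap, aseq, aq - traded))
            if qty > 0:  # remainder rests in the book
                heapq.heappush(self.bids, (-price, self.seq, qty))
        else:
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                nbp, bseq, bq = heapq.heappop(self.bids)
                traded = min(qty, bq)
                self.trades.append((-nbp, traded))
                qty -= traded
                if bq > traded:
                    heapq.heappush(self.bids, (nbp, bseq, bq - traded))
            if qty > 0:
                heapq.heappush(self.asks, (price, self.seq, qty))

book = LimitOrderBook()
book.submit("sell", 101.0, 5)
book.submit("sell", 100.0, 5)
book.submit("buy", 100.5, 8)   # fills 5 @ 100.0, rests 3 @ 100.5
```

In an agent-based simulation of the kind described above, each agent's sentiment-driven decision would translate into such `submit` calls.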

  5. A Robbins-Monro procedure for estimation in semiparametric regression models

    CERN Document Server

    Bercu, Bernard

    2011-01-01

This paper is devoted to the parametric estimation of a shift together with the nonparametric estimation of a regression function in a semiparametric regression model. We implement a Robbins-Monro procedure that is very efficient and easy to handle. On the one hand, we propose a stochastic algorithm in the spirit of Robbins-Monro in order to estimate the shift parameter; a preliminary evaluation of the regression function is not necessary for this. On the other hand, we make use of a recursive Nadaraya-Watson estimator for the estimation of the regression function. This kernel estimator takes into account the previous estimation of the shift parameter. We establish the almost sure convergence of both the Robbins-Monro and Nadaraya-Watson estimators. The asymptotic normality of our estimates is also provided.
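A minimal sketch of the Robbins-Monro idea, in the classical setting of finding the root of a regression function observed with zero-mean noise (the shift-estimation algorithm in the paper is a refinement of this scheme; the target function below is illustrative):

```python
import random

def robbins_monro(noisy_f, x0, n_steps, rng):
    """Robbins-Monro stochastic approximation: find x* with E[noisy_f(x*)] = 0."""
    x = x0
    for n in range(1, n_steps + 1):
        # gamma_n = 1/n: steps shrink slowly enough to reach the root
        # (sum diverges) yet fast enough to average out the noise
        # (sum of squares converges).
        x -= (1.0 / n) * noisy_f(x, rng)
    return x

rng = random.Random(42)
# Noisy observations of f(x) = x - 2, so the target root is x* = 2.
root = robbins_monro(lambda x, r: (x - 2.0) + r.gauss(0.0, 1.0),
                     x0=0.0, n_steps=20000, rng=rng)
```

The recursive Nadaraya-Watson step of the paper plugs the current shift estimate into a kernel average that is updated in the same one-pass, decreasing-step fashion.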

  6. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  7. Wave Transformation Modeling with Effective Higher-Order Finite Elements

    Directory of Open Access Journals (Sweden)

    Tae-Hwa Jung

    2016-01-01

Full Text Available This study introduces a finite element method using a higher-order interpolation function for the effective simulation of wave transformation. Finite element methods with a higher-order interpolation function usually employ a Lagrangian interpolation function, which gives accurate solutions with fewer elements than a lower-order interpolation function. At the same time, obtaining a solution takes longer because the local matrix grows, and with it the bandwidth of the global matrix, as the order of the interpolation function increases. Mass lumping can reduce computation time by making the local matrix diagonal, but its efficiency is not satisfactory because it requires more elements to obtain comparable results. In this study, the Legendre cardinal interpolation function, a modified Lagrangian interpolation function, is used for efficient calculation. Generating a diagonal matrix by applying direct numerical integration to the Legendre cardinal interpolation function, much as in mass lumping, reduces calculation time with favorable accuracy. Numerical simulations of regular, irregular and solitary waves using the Boussinesq equations are carried out with these interpolation approaches in order to compare the higher-order finite element models on wave transformation and examine their computational efficiency.
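Row-sum mass lumping, mentioned above as the standard way to obtain a diagonal local matrix, can be sketched as follows. The element matrix here is the textbook consistent mass matrix of a linear 1-D element, not anything taken from the paper.

```python
def lump_mass(M):
    """Row-sum mass lumping: replace each row of the consistent mass
    matrix by its row sum on the diagonal. The result is diagonal and
    hence trivially invertible, the usual payoff in explicit schemes."""
    n = len(M)
    return [[sum(M[i]) if i == j else 0.0 for j in range(n)] for i in range(n)]

# Consistent mass matrix of a linear 1-D element of length h = 1:
# (h/6) * [[2, 1], [1, 2]]
Mc = [[2 / 6, 1 / 6], [1 / 6, 2 / 6]]
Ml = lump_mass(Mc)
```

Note that the total mass (the sum of all entries) is preserved by the lumping; the Legendre cardinal approach of the paper obtains a diagonal matrix by the choice of quadrature instead.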

  8. Regularization method for calibrated POD reduced-order models

    Directory of Open Access Journals (Sweden)

    El Majd Badr Abou

    2014-01-01

Full Text Available In this work we present a regularization method to improve the accuracy of reduced-order models based on Proper Orthogonal Decomposition. The benchmark configuration retained corresponds to a case of relatively simple dynamics: a two-dimensional flow around a cylinder at a Reynolds number of 200. We show for this flow configuration that the procedure is efficient in terms of error reduction.

  9. The Complexity of Model Checking Higher-Order Fixpoint Logic

    DEFF Research Database (Denmark)

    Axelsson, Roland; Lange, Martin; Somla, Rafal

    2007-01-01

...provides complexity results for its model checking problem. In particular we consider its fragments HFLk,m which are formed using types of bounded order k and arity m only. We establish k-ExpTime-completeness for model checking each HFLk,m fragment. For the upper bound we reduce the problem to the problem of solving rather large parity games of small index. As a consequence of this we obtain an ExpTime upper bound on the expression complexity of each HFLk,m. The lower bound is established by a reduction from the word problem for alternating (k-1)-fold exponential space bounded Turing Machines. As a corollary...

  10. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

A two-step estimation method of stochastic volatility models is proposed: In the first step, we nonparametrically estimate the (unobserved) instantaneous volatility process. In the second step, standard estimation methods for fully observed diffusion processes are employed, but with the filtered/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...

  11. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

Full Text Available In this paper the characteristics and capabilities of a static state estimator for the power transmission network are presented. The mathematical model, which accounts for measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.

  12. Second Order Model for Strongly Sheared Compressible Turbulence

    Directory of Open Access Journals (Sweden)

    marzougui hamed

    2015-01-01

Full Text Available In this paper, we propose a model designed to describe strongly sheared compressible homogeneous turbulent flows. Such flows are far from equilibrium and are well represented by the A3 and A4 cases of the DNS of Sarkar. Speziale and Xu developed a relaxation model for incompressible turbulence able to take into account significant departures from equilibrium. In a previous paper, Radhia et al. proposed a relaxation model similar to that of Speziale and Xu. This model is based on an algebraic representation of the Reynolds stress tensor, much simpler than that of Speziale and Xu, and it gave good results for rapid axisymmetric contraction. In this work, we propose to extend the Radhia et al. model to compressible homogeneous turbulence. The model is based on the pressure-strain model of Launder et al., into which we incorporate the turbulent Mach number in order to take compressibility effects into account. To assess this model, two numerical simulations were performed, corresponding to the cases A3 and A4 of the DNS of Sarkar.

  13. Estimation in the polynomial errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Estimators are presented for the coefficients of the polynomial errors-in-variables (EV) model when replicated observations are taken at some experimental points. These estimators are shown to be strongly consistent under mild conditions.

  14. Adjoint based data assimilation for phase field model using second order information of a posterior distribution

    Science.gov (United States)

    Ito, Shin-Ichi; Nagao, Hiromichi; Yamanaka, Akinori; Tsukada, Yuhki; Koyama, Toshiyuki; Inoue, Junya

The phase field (PF) method, which phenomenologically describes the dynamics of microstructure evolution during solidification and phase transformation, has progressed in the fields of hydromechanics and materials engineering. How to determine, from observation data, the initial state and the model parameters involved in a PF model is an important issue, since previous estimation methods require excessive computational cost. We propose data assimilation (DA), which enables us to estimate the parameters and states by integrating the PF model and the observation data on the basis of Bayesian statistics. The adjoint method implemented in the DA not only finds an optimum solution by maximizing the posterior distribution but also evaluates the uncertainty in the estimates by utilizing the second-order information of the posterior distribution. We carried out an estimation test using synthetic data generated by the two-dimensional Kobayashi PF model. The proposed method is confirmed to reproduce the true initial state and model parameters assumed in advance, and simultaneously estimates their uncertainties due to the quality and quantity of the data. This result indicates that the proposed method is capable of suggesting the experimental design required to achieve a given accuracy.

  15. Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.

    Science.gov (United States)

    Fernández-Gracia, J; Eguíluz, V M; San Miguel, M

    2011-07-01

    We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, observing that the mean time to reach the absorbing state might be not well defined. The IET distributions resulting from both update schemes show power-law tails.
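The endogenous-update scheme described above can be sketched as a small simulation. This is a schematic reading of the abstract on a complete graph, with an activation probability inversely proportional to the time since the node's last change of state; all names and values are illustrative.

```python
import random

def voter_step(state, last_change, t, rng):
    """One sweep of a voter model with state-dependent activation:
    node i attempts an update with probability 1 / (t - last_change[i] + 1),
    i.e. inversely proportional to the time since it last changed state
    (the 'endogenous' update of the abstract), on a complete graph."""
    n = len(state)
    for i in range(n):
        if rng.random() < 1.0 / (t - last_change[i] + 1):
            j = rng.randrange(n)           # random neighbor (complete graph)
            if state[j] != state[i]:
                state[i] = state[j]        # copy the neighbor's opinion
                last_change[i] = t

rng = random.Random(1)
n = 100
state = [rng.randint(0, 1) for _ in range(n)]
last_change = [0] * n
for t in range(1, 500):
    voter_step(state, last_change, t, rng)

# Density of 'active' (disagreeing) pairs; slow ordering means rho decays
# much more slowly than under the standard update rules.
rho = sum(state[i] != state[j] for i in range(n) for j in range(i)) / (n * (n - 1) / 2)
```

Replacing `last_change` updates with updates on every attempt gives the exogenous scheme, which the paper finds does not order in the thermodynamic limit.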

  17. Computer Program for Estimation Multivariate Volatility Processes Using DVEC Model of CRM

    Directory of Open Access Journals (Sweden)

    Jelena Z. Minović

    2008-12-01

Full Text Available This article presents a computer program for the estimation of multivariate (bivariate and trivariate) volatility processes, written in EViews Version 4.1. In order to estimate multivariate volatility processes for the analysis of the Serbian financial market, I had to write new subprograms within the EViews software package. The programs are written for the diagonal vector ARCH model (DVEC) in bivariate and trivariate versions.

  18. Travel Time Estimation and Order Batching in a 2-Block Warehouse

    NARCIS (Netherlands)

    T. Le-Duc (Tho); M.B.M. de Koster (René)

    2004-01-01

    textabstractThe order batching problem (OBP) is the problem of determining the number of orders to be picked together in one picking tour. Although various objectives may arise in practice, minimizing the average throughput time of a random order is a common concern. In this paper, we consider the O

  19. Bayesian Estimation of Categorical Dynamic Factor Models

    Science.gov (United States)

    Zhang, Zhiyong; Nesselroade, John R.

    2007-01-01

    Dynamic factor models have been used to analyze continuous time series behavioral data. We extend 2 main dynamic factor model variations--the direct autoregressive factor score (DAFS) model and the white noise factor score (WNFS) model--to categorical DAFS and WNFS models in the framework of the underlying variable method and illustrate them with…

  20. Order-parameter model for unstable multilane traffic flow

    Science.gov (United States)

    Lubashevsky; Mahnke

    2000-11-01

We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of "free flow → synchronized mode → jam" phase transitions as well as the hysteresis in these transitions. We introduce a variable called an order parameter that accounts for possible correlations in vehicle motion across lanes. It is principally due to "many-body" effects in the car interaction, in contrast to variables such as the mean car density and velocity, which are actually the zeroth and first moments of the "one-particle" distribution function. Therefore, we regard the order parameter as an additional independent state variable of traffic flow. We assume that these correlations are due to a small group of "fast" drivers, and by taking into account the general properties of driver behavior we formulate a governing equation for the order parameter. In this context we analyze the instability of homogeneous traffic flow that manifests itself in the above-mentioned phase transitions and gives rise to the hysteresis in both of them. Besides, the jam is characterized by vehicle flows at different lanes that are independent of one another. We specify a certain simplified model in order to study the general features of car cluster self-formation in the "free flow → synchronized motion" phase transition. In particular, we show that the main local parameters of the developed cluster are determined by the state characteristics of vehicle motion only.

  1. Cross gramian approximation with Laguerre polynomials for model order reduction

    Science.gov (United States)

    Perev, Kamen

    2015-11-01

This paper considers the problem of model order reduction by approximate balanced truncation, with a Laguerre polynomial approximation of the system cross gramian. The cross gramian contains information both on the reachability of the system and on its observability. The main property of the cross gramian for a square, symmetric, stable linear system is that its square equals the product of the reachability and observability gramians; therefore, the absolute values of its eigenvalues are equal to the Hankel singular values. This is the reason to use the cross gramian for computing balancing transformations for model reduction. Laguerre polynomial series representations are used to approximate the cross gramian of the system at infinity. The orthogonal Laguerre polynomials possess good convergence properties and reduce the computational complexity of the model reduction problem. Numerical experiments confirm the effectiveness of the proposed method.
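For a stable system (A, B, C), the cross gramian X solves the Sylvester equation AX + XA + BC = 0. For small systems this can be solved directly via the Kronecker-product identity, which makes the Hankel-singular-value property above easy to check; this is a sketch of the quantity being approximated, not of the paper's Laguerre method.

```python
import numpy as np

def cross_gramian(A, B, C):
    """Solve A X + X A + B C = 0 for the cross gramian X of a stable
    system (A, B, C), using vec(A X + X A) = (I kron A + A^T kron I) vec(X)
    with column-major (Fortran-order) vectorization."""
    n = A.shape[0]
    I = np.eye(n)
    lhs = np.kron(I, A) + np.kron(A.T, I)
    rhs = (-B @ C).flatten(order="F")
    return np.linalg.solve(lhs, rhs).reshape((n, n), order="F")

# Symmetric SISO example: diagonal A, so X_ij = -(BC)_ij / (a_i + a_j).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
X = cross_gramian(A, B, C)
# For this symmetric system the Hankel singular values are |eig(X)|.
hsv = np.abs(np.linalg.eigvals(X))
```

The Kronecker solve costs O(n^6) and is only viable for tiny systems, which is precisely why series approximations such as the Laguerre expansion are attractive at scale.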

  2. A parallel high-order accurate finite element nonlinear Stokes ice sheet model and benchmark experiments

    Energy Technology Data Exchange (ETDEWEB)

    Leng, Wei [Chinese Academy of Sciences]; Ju, Lili [University of South Carolina]; Gunzburger, Max [Florida State University]; Price, Stephen [Los Alamos National Laboratory]; Ringler, Todd [Los Alamos National Laboratory]

    2012-01-01

    The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.

  3. Using Count Data and Ordered Models in National Forest Recreation Demand Analysis

    Science.gov (United States)

    Simões, Paula; Barata, Eduardo; Cruz, Luis

    2013-11-01

    This research addresses the need to improve our knowledge on the demand for national forests for recreation and offers an in-depth data analysis supported by the complementary use of count data and ordered models. From a policy-making perspective, while count data models enable the estimation of monetary welfare measures, ordered models allow for the wider use of the database and provide a more flexible analysis of data. The main purpose of this article is to analyse the individual forest recreation demand and to derive a measure of its current use value. To allow a more complete analysis of the forest recreation demand structure the econometric approach supplements the use of count data models with ordered category models using data obtained by means of an on-site survey in the Bussaco National Forest (Portugal). Overall, both models reveal that travel cost and substitute prices are important explanatory variables, visits are a normal good and demographic variables seem to have no influence on demand. In particular, estimated price and income elasticities of demand are quite low. Accordingly, it is possible to argue that travel cost (price) in isolation may be expected to have a low impact on visitation levels.

  4. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    Science.gov (United States)

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
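The Bayesian parameter-estimation step can be sketched with independent Dirichlet priors over each context's next-symbol distribution, whose posterior mean reduces to add-alpha smoothing. This is a simplified reading of the setup; the function names are mine, not the paper's.

```python
from collections import Counter, defaultdict

def posterior_mean_transitions(seq, k, alphabet, alpha=1.0):
    """Posterior mean of a k-th order Markov chain's transition
    probabilities under independent Dirichlet(alpha) priors:
    p(s | ctx) = (count(ctx, s) + alpha) / (count(ctx) + alpha * |alphabet|).
    Returns {context tuple: {symbol: probability}}."""
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    A = len(alphabet)
    probs = {}
    for ctx, c in counts.items():
        total = sum(c.values()) + alpha * A
        probs[ctx] = {s: (c[s] + alpha) / total for s in alphabet}
    return probs

seq = "0101010101010101"
p = posterior_mean_transitions(seq, k=1, alphabet="01")
# In the data, context '0' is always followed by '1' (8 times), so the
# posterior mean of P('1' | '0') is (8 + 1) / (8 + 2) = 0.9.
```

Model-order selection in the paper then compares the Bayesian evidence across k, rather than fixing k in advance as done here.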

  5. Simultaneous estimation of parameters in the bivariate Emax model.

    Science.gov (United States)

    Magnusdottir, Bergrun T; Nyquist, Hans

    2015-12-10

    In this paper, we explore inference in multi-response, nonlinear models. By multi-response, we mean models with m > 1 response variables and accordingly m relations. Each parameter/explanatory variable may appear in one or more of the relations. We study a system estimation approach for simultaneous computation and inference of the model and (co)variance parameters. For illustration, we fit a bivariate Emax model to diabetes dose-response data. Further, the bivariate Emax model is used in a simulation study that compares the system estimation approach to equation-by-equation estimation. We conclude that overall, the system estimation approach performs better for the bivariate Emax model when there are dependencies among relations. The stronger the dependencies, the more we gain in precision by using system estimation rather than equation-by-equation estimation.
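For reference, the univariate Emax dose-response curve and a toy fitting step might look as follows. The bivariate, system-estimation machinery of the paper is far richer; all numbers here are illustrative and the grid-search fit is a deliberately crude stand-in for full nonlinear estimation.

```python
def emax(conc, e0, emax_, ec50):
    """Hyperbolic Emax dose-response: E = E0 + Emax * C / (EC50 + C)."""
    return e0 + emax_ * conc / (ec50 + conc)

def fit_ec50(doses, effects, e0, emax_, grid):
    """Least-squares grid search for EC50 with E0 and Emax held fixed."""
    def sse(ec50):
        return sum((emax(c, e0, emax_, ec50) - y) ** 2
                   for c, y in zip(doses, effects))
    return min(grid, key=sse)

doses = [0.5, 1, 2, 4, 8, 16]
effects = [emax(c, 0.0, 1.0, 4.0) for c in doses]   # noiseless synthetic data
grid = [i / 10 for i in range(1, 101)]
est = fit_ec50(doses, effects, 0.0, 1.0, grid)
```

In the bivariate model two such relations share parameters and correlated errors, which is exactly where the paper's system estimation gains over fitting each equation separately.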

  6. On the decay of higher order derivatives of solutions to Ladyzhenskaya model for incompressible viscous flows

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

This article concerns the large time behavior of the Ladyzhenskaya model for incompressible viscous flows in R^3. Based on linear L^p-L^q estimates, the auxiliary decay properties of the solutions and generalized Gronwall type arguments, some optimal upper and lower bounds for the decay of higher order derivatives of solutions are derived without assuming any decay properties of solutions and using Fourier splitting technology.

  7. Ising and Gross-Neveu model in next-to-leading order

    CERN Document Server

    Knorr, Benjamin

    2016-01-01

    We study scalar and chiral fermionic models in next-to-leading order with the help of the functional renormalisation group. Their critical behaviour is of special interest in condensed matter systems, in particular graphene. To derive the beta functions, we make extensive use of computer algebra. The resulting flow equations were solved with pseudo-spectral methods to guarantee high accuracy. New estimates on critical quantities for both the Ising and the Gross-Neveu model are provided. For the Ising model, the estimates agree with earlier renormalisation group studies of the same level of approximation. By contrast, the approximation for the Gross-Neveu model retains many more operators than all earlier studies. For two Dirac fermions, the results agree with both lattice and large-$N_f$ calculations, but for a single flavour, different methods disagree quantitatively, and further studies are necessary.

  8. A Prototypical Model for Estimating High Tech Navy Recruiting Markets

    Science.gov (United States)

    1991-12-01

    Probability, Logit, and Probit Models, New York, N.Y., 1990, p. 73. Gujarati, D., ibid., p. 500. V. MODELS ESTIMATION. A. MODEL I ESTIMATION OF...Company, New York, N.Y., 1990. Gujarati, Damodar N., Basic Econometrics, Second Edition, McGraw-Hill Book Company, New York, N.Y., 1988. Jehn, Christopher

  9. Identification and Estimation of Exchange Rate Models with Unobservable Fundamentals

    NARCIS (Netherlands)

    Chambers, M.J.; McCrorie, J.R.

    2004-01-01

This paper is concerned with issues of model specification, identification, and estimation in exchange rate models with unobservable fundamentals. We show that the model estimated by Gardeazabal, Regúlez and Vázquez (International Economic Review, 1997) is not identified and demonstrate how to spec

  10. Model approach for estimating potato pesticide bioconcentration factor.

    Science.gov (United States)

    Paraíba, Lourival Costa; Kataguiri, Karen

    2008-11-01

We present a model that estimates the bioconcentration factor (BCF) of pesticides in potatoes, supposing that the pesticide in the soil solution is absorbed by the potato by passive diffusion, following Fick's second law. The pesticides in the model are nonionic organic substances, traditionally used in potato crops, that degrade in the soil according to a first-order kinetic equation. The model yields an expression that relates the BCF to the rate of pesticide elimination by the potato, the rate of pesticide accumulation within the potato, the growth rate of the potato, and the pesticide degradation rate in the soil. The BCF was estimated by assuming steady-state equilibrium of the quotient between the pesticide concentration in the potato and the pesticide concentration in the soil solution. It is suggested that a negative correlation exists between the pesticide BCF and the soil sorption partition coefficient. The model was built on the work of Trapp et al. [Trapp, S., Cammarano, A., Capri, E., Reichenberg, F., Mayer, P., 2007. Diffusion of PAH in potato and carrot slices and application for a potato model. Environ. Sci. Technol. 41 (9), 3103-3108], in which an expression to calculate the diffusivity of persistent organic substances in potatoes is presented; it consists in adding to that expression the hypothesis that the pesticide degrades in the soil. The value of the BCF suggests which pesticides should be monitored in potatoes.
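Two rate-balance ingredients of the abstract, first-order degradation in soil and a quasi-steady-state BCF, can be sketched as follows. The rate names and numerical values are illustrative, not the paper's notation or results.

```python
import math

def soil_concentration(c0, k_deg, t):
    """First-order decay of the pesticide in soil: C(t) = C0 * exp(-k_deg * t)."""
    return c0 * math.exp(-k_deg * t)

def steady_state_bcf(k_uptake, k_elim, k_growth):
    """Quasi-steady-state bioconcentration factor: ratio of the uptake rate
    to the sum of the elimination and growth-dilution rates, a schematic
    reading of the balance described in the abstract."""
    return k_uptake / (k_elim + k_growth)

# Illustrative numbers: a 30-day soil half-life and arbitrary rate constants.
half_life = 30.0                      # days
k_deg = math.log(2) / half_life       # first-order degradation rate
bcf = steady_state_bcf(k_uptake=0.2, k_elim=0.05, k_growth=0.03)
```

In the full model the uptake term itself comes from a diffusion calculation (via Trapp et al.'s diffusivity expression) rather than a single lumped rate constant.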

  11. Estimating the plasmonic field enhancement using high-order harmonic generation: The role of inhomogeneity of the fields

    CERN Document Server

    Shaaran, T; Lewenstein, M

    2012-01-01

In strong-field laser physics it is common practice to use the high-order harmonic cutoff to estimate the laser intensity of the pulse that generates the harmonic radiation. Based on semiclassical arguments it is possible to find a direct relationship between the maximum photon energy and the laser intensity. This approach is only valid if the electric field driving HHG is spatially homogeneous. In laser-matter processes driven by plasmonic fields, the enhanced fields present a spatial dependence that strongly modifies the electron motion and consequently the laser-driven phenomena. As a result, the method should be revised in order to estimate the field more realistically. In this work, we demonstrate how the inhomogeneity of the fields affects this estimation. Furthermore, by employing both quantum mechanical and classical calculations, we show how one can obtain a better estimate of the intensity of the enhanced field in a plasmonic nanostructure.

  12. Bayesian approach to decompression sickness model parameter estimation.

    Science.gov (United States)

    Howle, L E; Weber, P W; Nichols, J M

    2017-03-01

    We examine both maximum likelihood and Bayesian approaches for estimating probabilistic decompression sickness model parameters. Maximum likelihood estimation treats parameters as fixed values and determines the best estimate through repeated trials, whereas the Bayesian approach treats parameters as random variables and determines the parameter probability distributions. We would ultimately like to know the probability that a parameter lies in a certain range rather than simply make statements about the repeatability of our estimator. Although both represent powerful methods of inference, for models with complex or multi-peaked likelihoods, maximum likelihood parameter estimates can prove more difficult to interpret than the estimates of the parameter distributions provided by the Bayesian approach. For models of decompression sickness, we show that while these two estimation methods are complementary, the credible intervals generated by the Bayesian approach are more naturally suited to quantifying uncertainty in the model parameters.

  13. Estimation of Boundary Conditions for Coastal Models,

    Science.gov (United States)

    1974-09-01

    equation: x(t) = ∫ h(τ) y(t − τ) dτ (3). The solution to Eq. (3) may be obtained by Fourier transformation, because the covariance function and the spectral density function form... the cross-spectral density function estimate by a numerical Fourier transform; the even and odd parts of the cross-covariance function are determined by A(k) = ½[γxy(k) + γyx(k)] (5) and B(k) = ½[γxy(k) − γyx(k)] (6), from which the co-spectral density function is estimated: C(f) = 2T[A(0...

  14. Estimation of contact angle for hydrophobic silica nanoparticles in their hexagonally ordered layer

    Energy Technology Data Exchange (ETDEWEB)

    Detrich, Ádám; Nyári, Mária; Volentiru, Emőke; Hórvölgyi, Zoltán, E-mail: zhorvolgyi@mail.bme.hu

    2013-07-15

Wetting properties of bilayered nanostructured coatings were studied in this work. Coatings were prepared by stratifying a compact silica sol–gel (SG) film and a Langmuir–Blodgett (LB) layer of differently sized (84, 131 and 227 nm) Stöber silica particles onto glass substrates. Joint high-temperature annealing of the layers increased the mechanical stability of the particulate film, facilitating its further investigation. The regular structure of the LB layers allowed us to carry out wetting model investigations. The close-packed arrangement of the particles was confirmed by optical measurements: transmittance spectra of the LB films were taken and evaluated with a theoretical model, and the resulting effective refractive index and film thickness values indicated good agreement between the real and the supposed (hexagonally close-packed, monodisperse spherical particles) layer structure. This was confirmed by field emission scanning electron microscopy (FESEM) images, too. Advancing and receding water contact angles (CAs) on differently (with mono- and bifunctional chlorosilanes) hydrophobized SG and combined films were measured using the sessile drop method. SG films showed approximately the same CAs after the different silylation processes, without considerable contact angle hysteresis (CAH). In the case of the combined films, ca. 30° higher advancing CAs were measured after the stronger silylation, and only the milder silylation resulted in significant CAH, which was explained by the surface heterogeneity of the constituent particles. Layers of differently sized particles showed the same wetting properties in all cases, in agreement with the Cassie–Baxter equation, which was used to estimate the CA values of the individual silica particles (88–90° and 105–106°, depending on the silylation conditions). - Highlights: • Wetting of silica sol–gel films and LB layers of Stöber silica particles was studied. • Close-packed structure of LB layers was justified by optical and FESEM studies. • Only
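The Cassie-Baxter estimation step can be sketched as follows: given a measured apparent contact angle and an assumed solid area fraction f of the composite surface, the intrinsic particle contact angle follows by inverting cos θ* = f cos θ_Y − (1 − f). The value of f below is illustrative, not the one implied by the hexagonal particle packing in the paper.

```python
import math

def cassie_baxter_apparent(theta_young_deg, f):
    """Cassie-Baxter composite wetting: cos(theta*) = f*cos(theta_Y) - (1 - f),
    where f is the wetted solid area fraction and (1 - f) is trapped air."""
    c = f * math.cos(math.radians(theta_young_deg)) - (1.0 - f)
    return math.degrees(math.acos(c))

def invert_cassie_baxter(theta_apparent_deg, f):
    """Recover the intrinsic (Young) contact angle of the particle surface
    from the measured apparent angle, as done in the abstract for the
    individual silica particles."""
    c = (math.cos(math.radians(theta_apparent_deg)) + (1.0 - f)) / f
    return math.degrees(math.acos(c))

# Round trip with an illustrative solid fraction f = 0.5 and theta_Y = 105°:
apparent = cassie_baxter_apparent(105.0, 0.5)
theta_young = invert_cassie_baxter(apparent, 0.5)
```

Note the inversion is only well posed while the bracketed cosine stays in [-1, 1], i.e. for apparent angles actually reachable in the composite regime.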

  15. Electroviscoelasticity of liquid/liquid interfaces: fractional-order model.

    Science.gov (United States)

    Spasic, Aleksandar M; Lazarevic, Mihailo P

    2005-02-01

    A number of theories that describe the behavior of liquid-liquid interfaces have been developed and applied to various dispersed systems, e.g., Stokes, Reiner-Rivlin, Ericksen, Einstein, Smoluchowski, and Kinch. A new theory of electroviscoelasticity describes the behavior of electrified liquid-liquid interfaces in fine dispersed systems and is based on a new constitutive model of liquids. According to this model, a liquid-liquid droplet or droplet-film structure (a collective of particles) is considered as a macroscopic system with an internal structure determined by the way the molecules (ions) are tuned (structured) into the primary components of a cluster configuration. How the tuning/structuring occurs depends on the physical fields involved, both potential (elastic forces) and nonpotential (resistance forces). All these microelements of the primary structure can be considered as electromechanical oscillators assembled into groups, so that excitation by an external physical field may cause oscillations at the resonant/characteristic frequency of the system itself (coupling at the characteristic frequency). Up to now, three possible mathematical formalisms have been discussed in relation to the theory of electroviscoelasticity. The first is the tension tensor model, where the normal and tangential forces are considered purely in mathematical formalism, regardless of their origin (mechanical and/or electrical). The second is the Van der Pol derivative model, presented by linear and nonlinear differential equations. Finally, the third model is an effort to generalize the previous Van der Pol equation: the ordinary time derivative and integral are replaced with the corresponding fractional-order time derivative and integral of order p < 1.
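The fractional-order formalism mentioned last replaces the ordinary time derivative with a derivative of order p < 1. A minimal Grünwald–Letnikov discretization sketch of such a derivative (a standard scheme, not the authors' formulation):

```python
def gl_weights(p, n):
    """Grunwald-Letnikov binomial weights w_k = (-1)^k * C(p, k),
    via the recurrence w_k = w_{k-1} * (1 - (p + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (p + 1.0) / k))
    return w

def gl_fractional_derivative(f_vals, p, h):
    """Approximate the order-p derivative of sampled f at the last grid point:
    D^p f(t) ~= h^(-p) * sum_k w_k f(t - k*h)."""
    w = gl_weights(p, len(f_vals) - 1)
    return sum(wk * fv for wk, fv in zip(w, reversed(f_vals))) / h**p
```

For p = 1 the weights collapse to (1, −1, 0, …), recovering the ordinary backward difference, and for p = 0 the "derivative" returns the function value itself, so the scheme interpolates between the integer orders.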

  16. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
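A percentile-bootstrap sketch of a safety-margin confidence interval, assuming the margin is measured from the sample maximum of the code runs (the usual order-statistics flavor of such analyses); the temperature samples and the limit value below are illustrative, not the RBMK-1500 figures:

```python
import random

def bootstrap_margin_ci(samples, limit, n_boot=2000, alpha=0.05, seed=0):
    """Safety margin = limit - sample maximum (order-statistic estimate),
    with a percentile-bootstrap confidence interval."""
    rng = random.Random(seed)
    n = len(samples)
    margins = []
    for _ in range(n_boot):
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        margins.append(limit - max(resample))
    margins.sort()
    lo = margins[int(alpha / 2 * n_boot)]
    hi = margins[int((1 - alpha / 2) * n_boot) - 1]
    point = limit - max(samples)
    return point, (lo, hi)
```

Note that resampled maxima never exceed the observed maximum, so the bootstrap interval for this statistic lies at or above the point estimate.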

  17. Parameter estimation and error analysis in environmental modeling and computation

    Science.gov (United States)

    Kalmaz, E. E.

    1986-01-01

    A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
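A least-squares parameter estimation loop of the kind described can be sketched with a Gauss–Newton iteration; the exponential-decay model and starting values are illustrative assumptions, not the program's actual model library:

```python
import numpy as np

def gauss_newton(t, y, theta0, n_iter=50):
    """Fit the illustrative nonlinear model y ~ a * exp(b * t)
    by Gauss-Newton least squares."""
    a, b = theta0
    for _ in range(n_iter):
        pred = a * np.exp(b * t)
        r = y - pred                                                  # residuals
        J = np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])   # Jacobian
        step, *_ = np.linalg.lstsq(J, r, rcond=None)                  # normal-equation solve
        a, b = a + step[0], b + step[1]
    return a, b
```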

  18. Temporal aggregation in first order cointegrated vector autoregressive models

    DEFF Research Database (Denmark)

    La Cour, Lisbeth Funding; Milhøj, Anders

    We study aggregation - or sample frequencies - of time series, e.g. aggregation from weekly to monthly or quarterly time series. Aggregation usually gives shorter time series, but on the other hand spurious phenomena, e.g. in daily observations, can be avoided. An important issue is the effect of ...... of aggregation on the adjustment coefficient in cointegrated systems. We study only first order vector autoregressive processes for n-dimensional time series Xt, and we illustrate the theory by a two-dimensional and a four-dimensional model for prices of various grades of gasoline...

  19. Second order kinetic Kohn-Sham lattice model

    CERN Document Server

    Solorzano, Sergio; Herrmann, Hans

    2016-01-01

    In this work we introduce a new semi-implicit second order correction scheme to the kinetic Kohn-Sham lattice model. The new approach is validated by performing realistic exchange-correlation energy calculations of atoms and dimers of the first two rows of the periodic table finding good agreement with the expected values. Additionally we simulate the ethane molecule where we recover the bond lengths and compare the results with standard methods. Finally, we discuss the current applicability of pseudopotentials within the lattice kinetic Kohn-Sham approach.

  20. Local digital estimators of intrinsic volumes for Boolean models and in the design based setting

    DEFF Research Database (Denmark)

    Svane, Anne Marie

    In order to estimate the specific intrinsic volumes of a planar Boolean model from a binary image, we consider local digital algorithms based on weighted sums of 2×2 configuration counts. For Boolean models with balls as grains, explicit formulas for the bias of such algorithms are derived...... for the bias obtained for Boolean models are applied to existing algorithms in order to compare their accuracy....

  1. Tractable Latent State Filtering for Non-Linear DSGE Models Using a Second-Order Approximation

    OpenAIRE

    Kollmann, Robert

    2013-01-01

    This paper develops a novel approach for estimating latent state variables of Dynamic Stochastic General Equilibrium (DSGE) models that are solved using a second-order accurate approximation. I apply the Kalman filter to a state-space representation of the second-order solution based on the ‘pruning’ scheme of Kim, Kim, Schaumburg and Sims (2008). By contrast to particle filters, no stochastic simulations are needed for the filter here--the present method is thus much faster. In Monte Carlo e...
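The filtering step itself is the standard linear Kalman recursion applied to the pruned state-space form. A generic sketch of that recursion (the actual DSGE state space is not reproduced here; matrices below are illustrative):

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Minimal linear Kalman filter for x_t = A x_{t-1} + w, y_t = C x_t + v.
    Returns the sequence of filtered state means."""
    x, P = x0, P0
    out = []
    for yt in y:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yt - C @ x)
        P = P - K @ C @ P
        out.append(x.copy())
    return np.array(out)
```

As a sanity check, with near-noiseless observations of a scalar AR(1) state the filtered mean tracks the data almost exactly.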

  2. On Frequency Domain Models for TDOA Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Nielsen, Jesper Kjær; Christensen, Mads Græsbøll

    2015-01-01

    of a much more general method. In this connection, we establish the conditions under which the cross-correlation method is a statistically efficient estimator. One of the conditions is that the source signal is periodic with a known fundamental frequency of 2π/N radians per sample, where N is the number...

  3. Phase-field-crystal model for fcc ordering.

    Science.gov (United States)

    Wu, Kuo-An; Adland, Ari; Karma, Alain

    2010-06-01

    We develop and analyze a two-mode phase-field-crystal model to describe fcc ordering. The model is formulated by coupling two different sets of crystal density waves corresponding to ⟨111⟩ and ⟨200⟩ reciprocal lattice vectors, which are chosen to form triads so as to produce a simple free-energy landscape with coexistence of crystal and liquid phases. The feasibility of the approach is demonstrated with numerical examples of polycrystalline and (111) twin growth. We use a two-mode amplitude expansion to characterize analytically the free-energy landscape of the model, identifying parameter ranges where fcc is stable or metastable with respect to bcc. In addition, we derive analytical expressions for the elastic constants for both fcc and bcc. Those expressions show that a nonvanishing amplitude of [200] density waves is essential to obtain mechanically stable fcc crystals with a nonvanishing tetragonal shear modulus (C11-C12)/2. We determine the model parameters for specific materials by fitting the peak liquid structure factor properties and solid-density wave amplitudes following the approach developed for bcc [K.-A. Wu and A. Karma, Phys. Rev. B 76, 184107 (2007)]. This procedure yields reasonable predictions of elastic constants for both bcc Fe and fcc Ni using input parameters from molecular dynamics simulations. The application of the model to two-dimensional square lattices is also briefly examined.

  4. A Priori Estimates for Solutions of Boundary Value Problems for Fractional-Order Equations

    CERN Document Server

    Alikhanov, A A

    2011-01-01

    We consider boundary value problems of the first and third kind for the diffusion-wave equation. By using the method of energy inequalities, we find a priori estimates for the solutions of these boundary value problems.

  5. A New Approximate Formula for Variance of Horvitz–Thompson Estimator using first order Inclusion Probabilities

    Directory of Open Access Journals (Sweden)

    Muhammad Qaiser Shahbaz

    2007-01-01

    Full Text Available A new approximate formula for the sampling variance of the Horvitz–Thompson (1952) estimator has been obtained. An empirical study of the approximate formula is given to assess its performance.
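The Horvitz–Thompson estimator itself, and a variance expression that needs only first-order inclusion probabilities, can be sketched as follows; the variance shown is the standard Poisson-sampling form, not the new approximation derived in the paper:

```python
def horvitz_thompson_total(y, pi):
    """HT estimator of a population total: sum over the sample of y_i / pi_i,
    where pi_i is the first-order inclusion probability of unit i."""
    return sum(yi / p for yi, p in zip(y, pi))

def ht_variance_poisson(y, pi):
    """Variance approximation using only first-order probabilities;
    exact under Poisson sampling (NOT the paper's new formula)."""
    return sum((1.0 - p) / p**2 * yi**2 for yi, p in zip(y, pi))
```

With equal inclusion probabilities n/N and a constant study variable, the estimator returns the exact population total, which is a quick correctness check.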

  6. Proofreading of DNA polymerase: a new kinetic model with higher-order terminal effects

    Science.gov (United States)

    Song, Yong-Shun; Shu, Yao-Gen; Zhou, Xin; Ou-Yang, Zhong-Can; Li, Ming

    2017-01-01

    The fidelity of DNA replication by DNA polymerase (DNAP) has long been an important issue in biology. While numerous experiments have revealed details of the molecular structure and working mechanism of DNAP, which consists of both a polymerase site and an exonuclease (proofreading) site, there have been only a few theoretical studies on the fidelity issue. The first model which explicitly considered both sites was proposed in the 1970s and its basic idea was widely accepted by later models. However, all these models did not systematically investigate the dominant factor in DNAP fidelity, i.e. the higher-order terminal effects through which the polymerization pathway and the proofreading pathway coordinate to achieve high fidelity. In this paper, we propose a new and comprehensive kinetic model of DNAP based on some recent experimental observations, which includes previous models as special cases. We present a rigorous and unified treatment of the corresponding steady-state kinetic equations for terminal effects of any order, and derive analytical expressions for fidelity in terms of kinetic parameters under bio-relevant conditions. These expressions offer new insights on how the higher-order terminal effects contribute substantially to the fidelity in an order-by-order way, and also show that the polymerization-and-proofreading mechanism is dominated by only a very few key parameters. We then apply these results to calculate the fidelity of some real DNAPs, which are in good agreement with previous intuitive estimates given by experimentalists.

  7. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)

    2013-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  8. Robust Estimation and Forecasting of the Capital Asset Pricing Model

    NARCIS (Netherlands)

    G. Bian (Guorui); M.J. McAleer (Michael); W-K. Wong (Wing-Keung)

    2010-01-01

    textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more

  9. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  10. PARAMETER ESTIMATION IN LINEAR REGRESSION MODELS FOR LONGITUDINAL CONTAMINATED DATA

    Institute of Scientific and Technical Information of China (English)

    Qian Weimin; Li Yumei

    2005-01-01

    The parameter estimation and the coefficient of contamination for regression models with repeated measures are studied when the response variables are contaminated by another random variable sequence. Under suitable conditions it is proved that the estimators established in the paper are strongly consistent.

  11. Model Order Selection Rules for Covariance Structure Classification in Radar

    Science.gov (United States)

    Carotenuto, Vincenzo; De Maio, Antonio; Orlando, Danilo; Stoica, Petre

    2017-10-01

    The adaptive classification of the interference covariance matrix structure for radar signal processing applications is addressed in this paper. This represents a key issue because many detection architectures are synthesized assuming a specific covariance structure which may not necessarily coincide with the actual one due to the joint action of the system and environment uncertainties. The considered classification problem is cast in terms of a multiple hypotheses test with some nested alternatives, and the theory of Model Order Selection (MOS) is exploited to devise suitable decision rules. Several MOS techniques, such as the Akaike, Takeuchi, and Bayesian information criteria, are adopted and the corresponding merits and drawbacks are discussed. At the analysis stage, illustrative examples for the probability of correct model selection are presented, showing the effectiveness of the proposed rules.
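Information-criterion rules like AIC and BIC trade goodness of fit against a complexity penalty. A generic sketch on a nested family of autoregressive models (an illustration of the MOS idea, not the paper's covariance-structure test):

```python
import numpy as np

def select_ar_order(x, max_order, criterion="bic"):
    """Pick an AR model order: fit AR(p) by least squares for p = 1..max_order,
    score the Gaussian -2 log-likelihood plus the order penalty, return the
    minimizer (penalty = p*log(m) for BIC, 2*p for AIC)."""
    n = len(x)
    best_p, best_score = None, np.inf
    for p in range(1, max_order + 1):
        # regressors: lags 1..p of x, aligned with targets x[p:]
        X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        sigma2 = np.mean(resid**2)
        m = len(y)
        penalty = p * (np.log(m) if criterion == "bic" else 2.0)
        score = m * np.log(sigma2) + penalty
        if score < best_score:
            best_p, best_score = p, score
    return best_p
```

On data simulated from a strong AR(2), the BIC rule recovers the true order.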

  12. Ordered LOGIT Model approach for the determination of financial distress.

    Science.gov (United States)

    Kinay, B

    2010-01-01

    Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive approaches to them is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios relating to 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is constructed by scaling the level of risk using Altman's Z-score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
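Altman's Z-score, used here to build the ordinal dependent variable, is a fixed linear combination of five financial ratios. A sketch with the classic 1968 coefficients and the conventional cut-offs (the study's own scaling of risk levels may differ):

```python
def altman_z(working_capital, retained_earnings, ebit, market_equity,
             sales, total_assets, total_liabilities):
    """Classic Altman (1968) Z-score for publicly traded manufacturing firms."""
    ta = total_assets
    return (1.2 * working_capital / ta + 1.4 * retained_earnings / ta
            + 3.3 * ebit / ta + 0.6 * market_equity / total_liabilities
            + 1.0 * sales / ta)

def risk_band(z):
    """Conventional Z-score bands, usable as an ordinal distress scale."""
    if z > 2.99:
        return 0  # safe zone
    if z >= 1.81:
        return 1  # grey zone
    return 2      # distress zone
```

For instance, a firm with healthy ratios lands in band 0, while one with negative working capital and operating losses lands in band 2.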

  13. Low-order modelling of droplets on hydrophobic surfaces

    Science.gov (United States)

    Matar, Omar; Wray, Alex; Kahouadji, Lyes; Davis, Stephen

    2015-11-01

    We consider the behaviour of a droplet deposited onto a hydrophobic substrate. This and associated problems have garnered a great deal of attention due to their significance in industrial and experimental settings, such as the post-rupture dewetting problem. These problems have generally defied low-order analysis due to the multi-valued nature of the interface, but we show here how to overcome this. We first discuss the static problem: when the droplet is stationary, its shape is prescribed by an ordinary differential equation (ODE) given by balancing gravitational and capillary stresses at the interface. This is dependent on the contact angle, the Bond number and the volume of the drop. In the high Bond number limit, we derive several low-order models of varying complexity to predict the shape of such drops. These are compared against numerical calculations of the ODE. We then approach the dynamic problem: in this case, the full Stokes equations throughout the drop must be considered. A low-order approach is used by solving the biharmonic equation in a coordinate system naturally mapping to the droplet shape. The results are compared against direct numerical simulations. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1, EPSRC Doctoral Prize Fellowship (AWW).

  14. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning estimates have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Monte Carlo Markov Chains. As an example, the method is applied to

  15. Insights on the role of accurate state estimation in coupled model parameter estimation by a conceptual climate model study

    Science.gov (United States)

    Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui

    2017-03-01

    The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.

  16. Efficient estimation of moments in linear mixed models

    CERN Document Server

    Wu, Ping; Zhu, Li-Xing; 10.3150/10-BEJ330

    2012-01-01

    In the linear random effects model, when distributional assumptions such as normality of the error variables cannot be justified, moments may serve as alternatives to describe relevant distributions in neighborhoods of their means. Generally, estimators may be obtained as solutions of estimating equations. It turns out that there may be several equations, each of them leading to consistent estimators, in which case finding the efficient estimator becomes a crucial problem. In this paper, we systematically study estimation of moments of the errors and random effects in linear mixed models.

  17. Obtaining Diagnostic Classification Model Estimates Using Mplus

    Science.gov (United States)

    Templin, Jonathan; Hoffman, Lesa

    2013-01-01

    Diagnostic classification models (aka cognitive or skills diagnosis models) have shown great promise for evaluating mastery on a multidimensional profile of skills as assessed through examinee responses, but continued development and application of these models has been hindered by a lack of readily available software. In this article we…

  18. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  19. A posteriori model validation for the temporal order of directed functional connectivity maps

    Directory of Open Access Journals (Sweden)

    Adriene M. Beltz

    2015-08-01

    Full Text Available A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input; then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: with a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
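White noise tests on one-step-ahead prediction errors can be illustrated with a portmanteau statistic such as Ljung–Box (one common choice; the study's exact test battery may differ):

```python
import numpy as np

def ljung_box_q(resid, n_lags):
    """Ljung-Box Q statistic on residual autocorrelations; under the white
    noise null, Q is approximately chi-square with n_lags degrees of freedom."""
    r = np.asarray(resid, dtype=float)
    r = r - r.mean()
    n = len(r)
    denom = np.sum(r**2)
    q = 0.0
    for k in range(1, n_lags + 1):
        rho_k = np.sum(r[k:] * r[:-k]) / denom   # lag-k autocorrelation
        q += rho_k**2 / (n - k)
    return n * (n + 2) * q
```

A white-noise residual series yields a small Q (compare against chi-square critical values), while serially dependent residuals, such as a random walk, yield a very large Q, flagging unmodeled temporal structure.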

  20. Nuclear Policy and World Order: Why Denuclearization. World Order Models Project. Occasional Paper Number Two.

    Science.gov (United States)

    Falk, Richard A.

    The monograph examines the relationship of nuclear power to world order. The major purpose of the document is to stimulate research, education, dialogue, and political action for a just and peaceful world order. The document is presented in five chapters. Chapter I stresses the need for a system of global security to counteract dangers brought…

  1. First Versus Second Order Latent Growth Curve Models: Some Insights From Latent State-Trait Theory.

    Science.gov (United States)

    Geiser, Christian; Keller, Brian; Lockhart, Ginger

    2013-07-01

    First order latent growth curve models (FGMs) estimate change based on a single observed variable and are widely used in longitudinal research. Despite significant advantages, second order latent growth curve models (SGMs), which use multiple indicators, are rarely used in practice, and not all aspects of these models are widely understood. In this article, our goal is to contribute to a deeper understanding of theoretical and practical differences between FGMs and SGMs. We define the latent variables in FGMs and SGMs explicitly on the basis of latent state-trait (LST) theory and discuss insights that arise from this approach. We show that FGMs imply a strict trait-like conception of the construct under study, whereas SGMs allow for both trait and state components. Based on a simulation study and empirical applications to the CES-D depression scale (Radloff, 1977) we illustrate that, as an important practical consequence, FGMs yield biased reliability estimates whenever constructs contain state components, whereas reliability estimates based on SGMs were found to be accurate. Implications of the state-trait distinction for the measurement of change via latent growth curve models are discussed.

  2. Highway traffic model-based density estimation

    OpenAIRE

    Morarescu, Irinel-Constantin; Canudas de Wit, Carlos

    2011-01-01

    The travel time spent in traffic networks is one of the main concerns of societies in developed countries. A major requirement for providing traffic control and services is the continuous prediction of the traffic state several minutes into the future. This paper focuses on an important ingredient of such traffic forecasting: real-time traffic state estimation using only a limited amount of data. Simulation results illustrate the performance of the proposed ...

  3. Reduced order modeling in iTOUGH2

    Science.gov (United States)

    Pau, George Shu Heng; Zhang, Yingqi; Finsterle, Stefan; Wainwright, Haruko; Birkholzer, Jens

    2014-04-01

    The inverse modeling and uncertainty quantification capabilities of iTOUGH2 are augmented with reduced order models (ROMs) that act as efficient surrogates for computationally expensive high fidelity models (HFMs). The implementation of the ROM capabilities involves integration of three main computational components. The first component is the ROM itself. Two response surface approximations are currently implemented: Gaussian process regression (GPR) and radial basis function (RBF) interpolation. The second component is a multi-output adaptive sampling procedure that determines the sample points used to construct the ROMs. The third component involves defining appropriate error measures for the adaptive sampling procedure, allowing ROMs to be constructed efficiently with limited user intervention. Details in all three components must complement one another to obtain an accurate approximation. The new capability and its integration with other analysis tools within iTOUGH2 are demonstrated in two examples. The results from using the ROMs in an uncertainty quantification analysis and a global sensitivity analysis compare favorably with the results obtained using the HFMs. GPR is more accurate than RBF, but the difference can be small, and similar conclusions can be deduced from the analyses. In the second example, involving a realistic numerical model for a hypothetical industrial-scale carbon storage project in the Southern San Joaquin Basin, California, USA, significant reduction in computational effort can be achieved when ROMs are used to perform a rigorous global sensitivity analysis.
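One of the two surrogate types, RBF interpolation, can be sketched in a few lines; the Gaussian kernel and shape parameter below are illustrative choices, not iTOUGH2's implementation:

```python
import numpy as np

def rbf_fit(X, y, eps=1.5):
    """Fit a Gaussian RBF interpolant as a cheap surrogate: solve Phi w = y
    with Phi_ij = exp(-eps^2 * ||x_i - x_j||^2)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return np.linalg.solve(np.exp(-eps**2 * d2), y)

def rbf_eval(X_train, w, X_new, eps=1.5):
    """Evaluate the surrogate at new points."""
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :])**2, axis=-1)
    return np.exp(-eps**2 * d2) @ w
```

By construction the interpolant reproduces the training data exactly, which is the basic sanity check before using it in place of the expensive model.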

  4. Using the Neumann series expansion for assembling Reduced Order Models

    Directory of Open Access Journals (Sweden)

    Nasisi S.

    2014-06-01

    Full Text Available An efficient method to remove the limitation in selecting the master degrees of freedom in a finite element model by means of model order reduction is presented. A major difficulty of the Guyan reduction and the IRS method (Improved Reduced System) is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches the above limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M and employing a combination of the IRS method and a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18 (1), 1982) in case the first lowest frequencies had to be sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence when compared to the standard IRS and very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two test structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica United with Acustica 94 (3), 2008) are presented.
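The Guyan reduction this method builds on condenses out the slave DOFs through the static constraint us = −Kss⁻¹·Ksm·um. A minimal sketch on generic spring–mass matrices (not the paper's matrix arrangement or its Neumann-series acceleration):

```python
import numpy as np

def guyan_reduce(K, M, master_idx):
    """Static (Guyan) reduction: T = [I; -Kss^-1 Ksm], Kr = T' K T, Mr = T' M T,
    with rows/columns reordered so masters come first."""
    n = K.shape[0]
    m = np.array(master_idx)
    s = np.array([i for i in range(n) if i not in set(master_idx)])
    order = np.concatenate([m, s])
    Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
    T = np.vstack([np.eye(len(m)), -np.linalg.solve(Kss, Ksm)])
    Kfull = K[np.ix_(order, order)]
    Mfull = M[np.ix_(order, order)]
    return T.T @ Kfull @ T, T.T @ Mfull @ T
```

A defining property worth checking: for loads applied only at master DOFs, the reduced static solution reproduces the full-model master displacements exactly.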

  5. Applicability of three complementary relationship models for estimating actual evapotranspiration in urban area

    Directory of Open Access Journals (Sweden)

    Nakamichi Takeshi

    2015-06-01

    Full Text Available The characteristics of evapotranspiration estimated by the complementary relationship actual evapotranspiration (CRAE), the advection-aridity (AA), and the modified advection-aridity (MAA) models were investigated in six pairs of rural and urban areas of Japan in order to evaluate the applicability of the three models to urban areas. The main results are as follows: 1) The MAA model could be applied to estimating the actual evapotranspiration in the urban area. 2) The actual evapotranspiration estimated by the three models was much less in the urban areas than in the rural ones. 3) The difference among the values of evapotranspiration estimated in the urban areas was significant and depended on the model, while the difference among the values in the rural areas was relatively small. 4) All three models underestimated the actual evapotranspiration in the urban areas from humid surfaces where water and green spaces exist. 5) Each model could take the effect of urbanization into account.

  6. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries

    Science.gov (United States)

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries. PMID:28212405

  7. A simplified fractional order impedance model and parameter identification method for lithium-ion batteries.

    Science.gov (United States)

    Yang, Qingxia; Xu, Jun; Cao, Binggang; Li, Xiuqing

    2017-01-01

    Identification of internal parameters of lithium-ion batteries is a useful tool to evaluate battery performance, and requires an effective model and algorithm. Based on the least square genetic algorithm, a simplified fractional order impedance model for lithium-ion batteries and the corresponding parameter identification method were developed. The simplified model was derived from the analysis of the electrochemical impedance spectroscopy data and the transient response of lithium-ion batteries with different states of charge. In order to identify the parameters of the model, an equivalent tracking system was established, and the method of least square genetic algorithm was applied using the time-domain test data. Experiments and computer simulations were carried out to verify the effectiveness and accuracy of the proposed model and parameter identification method. Compared with a second-order resistance-capacitance (2-RC) model and recursive least squares method, small tracing voltage fluctuations were observed. The maximum battery voltage tracing error for the proposed model and parameter identification method is within 0.5%; this demonstrates the good performance of the model and the efficiency of the least square genetic algorithm to estimate the internal parameters of lithium-ion batteries.

  8. A synthetic aperture radar sea surface distribution estimation by n-order Bézier curve and its application in ship detection

    Institute of Scientific and Technical Information of China (English)

    LANG Haitao; ZHANG Jie; WANG Yiduo; ZHANG Xi; MENG Junmin

    2016-01-01

    To date, most ship detection approaches for single-pol synthetic aperture radar (SAR) imagery try to ensure a constant false-alarm rate (CFAR). A high-performance ship detector relies on two key components: an accurate estimate of the sea surface distribution and a well-designed CFAR algorithm. First, a novel nonparametric sea surface distribution estimation method is developed based on an n-order Bézier curve. To estimate the sea surface distribution using an n-order Bézier curve, an explicit analytical solution is derived based on a least square optimization, and the optimal selection of two essential parameters, the order n of the Bézier curve and the number m of sample points, is also presented. Next, to validate the ship detection performance of the estimated sea surface distribution, the estimate obtained by the n-order Bézier curve is combined with a cell averaging CFAR (CA-CFAR). To eliminate possible interfering ship targets in the background window, an improved automatic censoring method is applied. Comprehensive experiments prove that in terms of sea surface estimation performance, the proposed method is as good as a traditional nonparametric Parzen window kernel method, and in most cases outperforms two widely used parametric methods, the K and G0 models. In terms of computation speed, a major advantage of the proposed estimation method is that its time consumption depends only on the number m of sample points and is independent of imagery size, which allows a significant speed improvement over the Parzen window kernel method; in some cases it is even faster than the two parametric methods. In terms of ship detection performance, the experiments show that the ship detector constructed from the proposed sea surface distribution model and the given CA-CFAR algorithm has wide adaptability to different SAR sensors, resolutions and sea surface homogeneities, and obtains a leading performance on the test dataset.
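The CA-CFAR stage pairs naturally with any estimate of the sea-surface distribution. Below is a minimal generic cell-averaging CFAR sketch (a textbook version, not the paper's implementation; the Bézier-curve density estimate and the automatic censoring step are not shown):

```python
def ca_cfar(signal, num_train=8, num_guard=2, scale=3.0):
    """Generic cell-averaging CFAR: declare a detection when a cell exceeds
    `scale` times the mean of the surrounding training cells, with guard
    cells on each side of the cell under test excluded."""
    half = num_train // 2 + num_guard
    detections = []
    for i in range(half, len(signal) - half):
        left = signal[i - half : i - num_guard]          # leading training cells
        right = signal[i + num_guard + 1 : i + half + 1] # trailing training cells
        noise = sum(left + right) / (len(left) + len(right))
        if signal[i] > scale * noise:
            detections.append(i)
    return detections

clutter = [1.0] * 40          # flat clutter ...
clutter[20] = 10.0            # ... with one bright target
hits = ca_cfar(clutter)       # → [20]
```

In a real detector the `scale` factor would be derived from the estimated sea-surface distribution and the desired false-alarm probability rather than fixed by hand.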

  9. Estimation in partial linear EV models with replicated observations

    Institute of Scientific and Technical Information of China (English)

    CUI; Hengjian

    2004-01-01

    The aim of this work is to construct the parameter estimators in the partial linear errors-in-variables (EV) models and explore their asymptotic properties. Unlike other related references, the assumption of a known error covariance matrix is removed when the sample can be repeatedly drawn at each design point from the model. The estimators of the regression parameters of interest, the model error variance, and the nonparametric function are constructed. Under some regular conditions, all of the estimators are proved to be strongly consistent. Meanwhile, the asymptotic normality of the estimator of the regression parameter is also presented. A simulation study is reported to illustrate our asymptotic results.

  10. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
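The simulate-then-estimate loop described above can be sketched with a toy linear model standing in for the two-dimensional shear-diffusion transport model; the parameters and noise level below are invented:

```python
import random

# Toy linear "transport" model y = p0 + p1 * x in place of the paper's
# shear-diffusion model; simulated remote-sensed data = model + Gaussian noise.
random.seed(0)
p_true = (2.0, -0.5)
xs = [0.1 * i for i in range(100)]
noisy = [p_true[0] + p_true[1] * x + random.gauss(0.0, 0.05) for x in xs]

# Batch least-squares estimate via the 2x2 normal equations
n = len(xs)
sx, sy = sum(xs), sum(noisy)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, noisy))
det = n * sxx - sx * sx
p0_hat = (sxx * sy - sx * sxy) / det
p1_hat = (n * sxy - sx * sy) / det
```

Repeating this with different noise levels and sampling designs gives exactly the kind of accuracy-versus-sensor-configuration trade-off study the abstract describes.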

  11. Assessing Uncertainty of Interspecies Correlation Estimation Models for Aromatic Compounds

    Science.gov (United States)

    We developed Interspecies Correlation Estimation (ICE) models for aromatic compounds containing 1 to 4 benzene rings to assess uncertainty in toxicity extrapolation in two data compilation approaches. ICE models are mathematical relationships between surrogate and predicted test ...

  12. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same data set, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
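The James-Stein shrinkage result referred to above can be illustrated directly; this is the textbook positive-part estimator, not the paper's model-averaging scheme:

```python
def james_stein(x, sigma2=1.0):
    """Positive-part James-Stein estimator of a normal mean vector
    (dimension p >= 3): shrink the observation towards zero by a
    data-dependent factor that dominates the raw observation in risk."""
    p = len(x)
    norm2 = sum(v * v for v in x)
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / norm2)
    return [factor * v for v in x]

obs = [2.0, 1.0, -1.5, 0.5]
est = james_stein(obs)   # every coordinate is pulled towards zero
```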

  13. Estimating the Robustness of Composite CBA and MCDA Assessments by Variation of Criteria Importance Order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    2011-01-01

    This paper discusses the concept of using rank variation concerning the stakeholders' prioritisation of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis....... Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives (Hobbs and Meier 2000). Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness...

  14. Estimation for the simple linear Boolean model

    OpenAIRE

    2006-01-01

    We consider the simple linear Boolean model, a fundamental coverage process also known as the Markov/General/infinity queue. In the model, line segments of independent and identically distributed length are located at the points of a Poisson process. The segments may overlap, resulting in a pattern of "clumps"-regions of the line that are covered by one or more segments-alternating with uncovered regions or "spacings". Study and application of the model have been impeded by the difficult...

  15. Bregman divergence as general framework to estimate unnormalized statistical models

    CERN Document Server

    Gutmann, Michael

    2012-01-01

    We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.
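The Bregman divergence underlying this framework has a compact definition; below is a generic sketch with the squared Euclidean and negative-entropy generators as examples (illustrative only, not the paper's estimators):

```python
import math

def bregman(F, gradF, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <gradF(q), p - q>
    for a convex generator F on vectors."""
    inner = sum(g * (a - b) for g, a, b in zip(gradF(q), p, q))
    return F(p) - F(q) - inner

# Generator 0.5*||v||^2 yields half the squared Euclidean distance ...
F = lambda v: 0.5 * sum(x * x for x in v)
gradF = lambda v: list(v)
d_euc = bregman(F, gradF, [1.0, 2.0], [0.0, 0.0])   # → 2.5

# ... and negative entropy yields the generalized KL divergence.
G = lambda v: sum(x * math.log(x) for x in v)
gradG = lambda v: [math.log(x) + 1.0 for x in v]
d_kl = bregman(G, gradG, [0.5, 0.5], [0.25, 0.75])
```

Choosing different generators F recovers different estimation criteria, which is the unifying observation the abstract points to.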

  16. Regularized Positive-Definite Fourth Order Tensor Field Estimation from DW-MRI★

    OpenAIRE

    2008-01-01

    In Diffusion Weighted Magnetic Resonance Image (DW-MRI) processing, a 2nd order tensor has been commonly used to approximate the diffusivity function at each lattice point of the DW-MRI data. From this tensor approximation, one can compute useful scalar quantities (e.g. anisotropy, mean diffusivity) which have been clinically used for monitoring encephalopathy, sclerosis, ischemia and other brain disorders. It is now well known that this 2nd-order tensor approximation fails to capture complex...

  17. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....

  18. CONSISTENCY OF LS ESTIMATOR IN SIMPLE LINEAR EV REGRESSION MODELS

    Institute of Scientific and Technical Information of China (English)

    Liu Jixue; Chen Xiru

    2005-01-01

    Consistency of the LS estimate of the simple linear EV model is studied. It is shown that under some common assumptions of the model, weak and strong consistency of the estimate are equivalent, but this is not so for quadratic-mean consistency.

  19. Estimated Frequency Domain Model Uncertainties used in Robust Controller Design

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob

    1994-01-01

    This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...

  20. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  1. FUNCTIONAL-COEFFICIENT REGRESSION MODEL AND ITS ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    In this paper, a class of functional-coefficient regression models is proposed and an estimation procedure based on locally weighted least squares is suggested. This class of models, with the proposed estimation method, is a powerful means for exploratory data analysis.
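Locally weighted least squares can be sketched for the simplest functional-coefficient model y = a(u) * x + noise; the kernel, bandwidth and data below are illustrative choices, not the paper's procedure:

```python
import math
import random

def local_coefficient(us, xs, ys, u0, h=0.1):
    """Kernel-weighted least squares estimate of a(u0) in the toy
    functional-coefficient model y = a(u) * x + noise: Gaussian kernel
    weights in u, closed-form one-parameter weighted LS solution."""
    w = [math.exp(-0.5 * ((u - u0) / h) ** 2) for u in us]
    num = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    den = sum(wi * x * x for wi, x in zip(w, xs))
    return num / den

random.seed(5)
us = [random.uniform(0.0, 1.0) for _ in range(400)]
xs = [random.gauss(0.0, 1.0) for _ in range(400)]
ys = [(1.0 + u) * x + random.gauss(0.0, 0.05) for u, x in zip(us, xs)]
a_hat = local_coefficient(us, xs, ys, u0=0.5)   # true a(0.5) = 1.5
```

Evaluating `local_coefficient` on a grid of u0 values traces out the whole coefficient function, which is the exploratory use the abstract has in mind.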

  2. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    As an alternative to quasi-maximum likelihood, targeting estimation is a much applied estimation method for univariate and multivariate GARCH models. In terms of variance targeting estimation recent research has pointed out that at least finite fourth-order moments of the data generating process is required if one wants to perform inference in GARCH models relying on asymptotic normality of the estimator, see Pedersen and Rahbek (2014) and Francq et al. (2011). Such moment conditions may not be satisfied in practice for financial returns highlighting a large drawback of variance targeting estimation.... In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results...

  3. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    As an alternative to quasi-maximum likelihood, targeting estimation is a much applied estimation method for univariate and multivariate GARCH models. In terms of variance targeting estimation recent research has pointed out that at least finite fourth-order moments of the data generating process is required if one wants to perform inference in GARCH models relying on asymptotic normality of the estimator, see Pedersen and Rahbek (2014) and Francq et al. (2011). Such moment conditions may not be satisfied in practice for financial returns highlighting a large drawback of variance targeting estimation.... In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results...
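Variance targeting is easiest to see in the univariate GARCH(1,1) case; the sketch below fixes the intercept from the sample variance, with placeholder values for the ARCH and GARCH coefficients (the paper's setting is the multivariate CCC model):

```python
def variance_targeting_garch(returns, alpha=0.08, beta=0.90):
    """Univariate GARCH(1,1) with variance targeting: omega is not
    estimated freely but fixed so the model's unconditional variance
    equals the sample variance, omega = s2 * (1 - alpha - beta).
    alpha and beta are placeholders here; in practice they are then
    estimated by QML."""
    n = len(returns)
    mean = sum(returns) / n
    s2 = sum((r - mean) ** 2 for r in returns) / n   # variance target
    omega = s2 * (1.0 - alpha - beta)
    h = [s2]                                         # conditional variances
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return omega, h

omega, h = variance_targeting_garch([0.01, -0.02, 0.015, 0.0, -0.01])
```

The moment conditions discussed in the abstract concern the asymptotic behaviour of the sample variance `s2` itself when returns are heavy-tailed.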

  4. Reduced order model for binary neutron star waveforms with tidal interactions

    Science.gov (United States)

    Lackey, Benjamin; Bernuzzi, Sebastiano; Galley, Chad

    2016-03-01

    Observations of inspiralling binary neutron star (BNS) systems with Advanced LIGO can be used to determine the unknown neutron-star equation of state by measuring the phase shift in the gravitational waveform due to tidal interactions. Unfortunately, this requires computationally efficient waveform models for use in parameter estimation codes that typically require 10^6-10^7 sequential waveform evaluations, as well as accurate waveform models with phase errors less than 1 radian over the entire inspiral to avoid systematic errors in the measured tidal deformability. The effective one body waveform model with l = 2, 3, and 4 tidal multipole moments is currently the most accurate model for BNS systems, but takes several minutes to evaluate. We develop a reduced order model of this waveform by constructing separate orthonormal bases for the amplitude and phase evolution. We find that only 10-20 bases are needed to reconstruct any BNS waveform with a starting frequency of 10 Hz. The coefficients of these bases are found with Chebyshev interpolation over the waveform parameter space. This reduced order model has maximum errors of 0.2 radians, and results in a speedup factor of more than 10^3, allowing parameter estimation codes to run in days to weeks rather than decades.
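The Chebyshev-interpolation ingredient of such reduced order models can be sketched on a toy one-dimensional function; the function below merely stands in for a waveform amplitude or phase coefficient over the parameter space:

```python
import math

def chebyshev_nodes(n, a, b):
    """n Chebyshev points mapped to [a, b]; clustering near the endpoints
    keeps polynomial interpolation numerically well behaved."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def lagrange_eval(nodes, values, x):
    """Evaluate the interpolating polynomial through (nodes, values) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += yi * w
    return total

# Toy stand-in for a smooth waveform coefficient over a 1-D parameter range:
# 14 Chebyshev samples reproduce it to high accuracy everywhere.
f = lambda x: math.sin(3.0 * x) * math.exp(-0.5 * x)
nodes = chebyshev_nodes(14, 0.0, 2.0)
values = [f(x) for x in nodes]
max_err = max(abs(lagrange_eval(nodes, values, 0.01 * i) - f(0.01 * i))
              for i in range(201))
```

The reduced order model in the paper does this over a multi-dimensional parameter space and for the coefficients of an orthonormal waveform basis, but the accuracy-from-few-samples behaviour is the same idea.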

  5. Parallel Computation of Air Pollution Using a Second-Order Closure Model

    Science.gov (United States)

    Pai, Prasad Prabhakar

    1991-02-01

    Rational analysis, prediction and policy making for air pollution problems depend on our understanding of the individual processes that govern the atmospheric system. In the past, computational constraints have prohibited the incorporation of detailed physics of many individual processes in air pollution models. This has resulted in poor model performance for realistic situations. Recent advances in computing capabilities make it possible to develop air pollution models which capture the essential physics of the individual processes. The present study uses a three-dimensional second-order closure diffusion model to simulate dispersion from ground level and elevated point sources in convective (daytime) boundary layers. The model uses mean and turbulence variables simulated with a one-dimensional second-order closure fluid dynamic model. The calculated mean profiles of wind and temperature are found to be in good agreement with the observed Day 33 Wangara data, whereas the calculated vertical profiles of turbulence variables agree well with those estimated from other numerical models and laboratory experiments. The three-dimensional second-order closure diffusion model can capture the plume behavior in the daytime atmospheric boundary layer remarkably well in comparison with laboratory data. We also compare the second-order closure diffusion model with the commonly used K-diffusion model for the same meteorological conditions. In order to reduce the computational requirements for second-order closure models, we propose a parallel algorithm of a time-splitting finite element method for the numerical solution of the governing equations. The parallel time-splitting finite element method substantially reduces the model wallclock or turnaround time by exploiting the vector and parallel capabilities of modern supercomputers. The plethora of supercomputers on the market today has made it important for us to study the key issue of algorithm "portability". In view of this, we...

  6. Estimation and Uncertainty Analysis of Flammability Properties of Chemicals using Group-Contribution Property Models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as time-efficient as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence......, there is no information about the reliability of the data. Furthermore, the global optimality of the GC parameter estimation is often not ensured. In this research project flammability-related property data, like LFL and UFL, are estimated using the Marrero and Gani group contribution method (MG method). In addition...... the group contribution in three levels: the contributions from a specific functional group (1st order parameters), from polyfunctional (2nd order parameters) as well as from structural groups (3rd order parameters). The latter two classes of GC factors provide additional structural information beside...

  7. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume both the number of covariates in the model and candidate variables can increase with the number of observations and the number of candidate variables is, possibly...
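The adaptive LASSO idea, reweighting the L1 penalty by an initial estimate so that large coefficients are penalised less, can be sketched with plain coordinate descent; the data, penalty level and marginal initial estimates below are illustrative choices, not the paper's high-dimensional time-series procedure:

```python
import random

def soft(z, t):
    # soft-thresholding operator
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def adalasso(X, y, lam=0.1, n_iter=100):
    """Two-step adaptive LASSO sketch: marginal initial estimates give
    weights w_j = 1/|b_j|, then cyclic coordinate descent solves the
    weighted L1-penalised least-squares problem."""
    n, p = len(X), len(X[0])
    init = [sum(X[i][j] * y[i] for i in range(n)) /
            sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    w = [1.0 / max(abs(b), 1e-6) for b in init]
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of column j with the partial residual
            r_j = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            z_j = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft(r_j, n * lam * w[j]) / z_j
    return beta

random.seed(2)
n = 120
X = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(n)]
y = [2.0 * row[0] - 1.0 * row[2] + random.gauss(0.0, 0.1) for row in X]
beta = adalasso(X, y)   # irrelevant coordinates are driven to (near) zero
```

The oracle-type results the abstract refers to concern exactly this selection behaviour when the number of candidate variables grows with the sample size.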

  8. Estimates of current debris from flux models

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-01-01

    Flux models that balance accuracy and simplicity are used to predict the growth of space debris to the present. Known and projected launch rates, decay models, and numerical integrations are used to predict distributions that closely resemble the current catalog-particularly in the regions containing most of the debris.

  9. Computational estimation of the constant β(1) characterizing the order of ζ(1+it)

    Science.gov (United States)

    Kotnik, Tadej

    2008-09-01

    The paper describes a computational estimation of the constant β(1) characterizing the bounds of |ζ(1+it)|. It is known that as t → ∞, ζ(2)/(2β(1) e^γ [1+o(1)] log log t) ≤ |ζ(1+it)| ≤ 2β(1) e^γ [1+o(1)] log log t, with β(1) ≥ 1/2, while the truth of the Riemann hypothesis would also imply that β(1) ≤ 1. In the range investigated, estimates of β(1) are computed, one set for increasingly small minima and another for increasingly large maxima of |ζ(1+it)|. As t increases, the estimates in the first set rapidly fall below 1 and gradually reach values slightly below 0.70, while the estimates in the second set rapidly exceed 1/2 and gradually reach values slightly above 0.64. The obtained numerical results are discussed and compared to the implications of recent theoretical work of Granville and Soundararajan.

  10. Capillary wave approach to order-order fluid interfaces in the 3D three-state Potts model

    CERN Document Server

    Provero, P

    1994-01-01

    The physics of fluid interfaces between domains of different magnetization in the ordered phase of the 3D three-state Potts model is studied by means of a Monte Carlo simulation. It is shown that finite--size effects in the interface free energy are well described by the capillary wave model at two loop order, supporting the idea of the universality of this description of fluid interfaces in 3D statistical models.

  11. Synthesis of models for order-sorted first-order theories using linear algebra and constraint solving

    Directory of Open Access Journals (Sweden)

    Salvador Lucas

    2015-12-01

    Full Text Available Recent developments in termination analysis for declarative programs emphasize the use of appropriate models for the logical theory representing the program at stake as a generic approach to prove termination of declarative programs. In this setting, Order-Sorted First-Order Logic provides a powerful framework to represent declarative programs. It also provides a target logic to obtain models for other logics via transformations. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis, in particular in termination analysis of declarative programs. We use convex domains to give domains to the different sorts of an order-sorted signature; we interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra and arithmetic constraint solving to synthesize the models.

  12. Order Under Uncertainty: Robust Differential Expression Analysis Using Probabilistic Models for Pseudotime Inference

    Science.gov (United States)

    Campbell, Kieran R.

    2016-01-01

    Single cell gene expression profiling can be used to quantify transcriptional dynamics in temporal processes, such as cell differentiation, using computational methods to label each cell with a ‘pseudotime’ where true time series experimentation is too difficult to perform. However, owing to the high variability in gene expression between individual cells, there is an inherent uncertainty in the precise temporal ordering of the cells. Pre-existing methods for pseudotime estimation have predominantly given point estimates precluding a rigorous analysis of the implications of uncertainty. We use probabilistic modelling techniques to quantify pseudotime uncertainty and propagate this into downstream differential expression analysis. We demonstrate that reliance on a point estimate of pseudotime can lead to inflated false discovery rates and that probabilistic approaches provide greater robustness and measures of the temporal resolution that can be obtained from pseudotime inference. PMID:27870852

  13. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.
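The one-step idea can be illustrated in the simplest possible setting, a Huber M-estimate of location: start from the median and take a single Newton-type step. This is a toy analogue, not the paper's additive-model estimator:

```python
def huber_psi(u, c=1.345):
    # Huber's psi: identity on [-c, c], clipped outside
    return max(-c, min(c, u))

def one_step_m_location(data, c=1.345):
    """One-step M-estimate of location: a single Newton-type step from the
    median, using psi'(u) = 1 inside [-c, c] and 0 outside. Mirrors the
    one-step approximation idea used to avoid full iteration."""
    xs = sorted(data)
    n = len(xs)
    theta0 = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    num = sum(huber_psi(x - theta0, c) for x in data)
    den = sum(1 for x in data if abs(x - theta0) < c)
    return theta0 + num / den

sample = [0.1, -0.2, 0.05, 0.3, -0.1, 8.0]   # one gross outlier
est = one_step_m_location(sample)            # stays near the bulk of the data
```

The point of the one-step construction is that, started from a good preliminary estimate, this single update already attains the efficiency of the fully iterated M-estimator.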

  14. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  15. ESTIMATION DU MODELE LINEAIRE GENERALISE ET APPLICATION

    Directory of Open Access Journals (Sweden)

    Malika CHIKHI

    2012-06-01

    Full Text Available This article presents the generalized linear model, which encompasses modelling techniques such as linear regression, logistic regression, log-linear regression and Poisson regression. We begin by presenting the exponential-family models and then estimate the model parameters by the method of maximum likelihood. The model coefficients are then tested for significance and their confidence intervals obtained, using the Wald test, which bears on the significance of the true parameter value based on the sample estimate.
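The workflow of maximum-likelihood estimation followed by Wald testing can be sketched for a small Poisson log-linear model; the data below are made up for illustration:

```python
import math

def poisson_glm(xs, ys, n_iter=25):
    """Newton-Raphson maximum likelihood for a Poisson log-linear model
    log(mu_i) = b0 + b1 * x_i, plus a Wald z-statistic for b1 taken from
    the inverse 2x2 observed information. A toy sketch of the workflow."""
    b0, b1 = math.log(sum(ys) / len(ys)), 0.0   # start at the mean rate
    for _ in range(n_iter):
        mus = [math.exp(b0 + b1 * x) for x in xs]
        g0 = sum(y - m for y, m in zip(ys, mus))                 # score
        g1 = sum((y - m) * x for y, m, x in zip(ys, mus, xs))
        h00 = sum(mus)                                           # information
        h01 = sum(m * x for m, x in zip(mus, xs))
        h11 = sum(m * x * x for m, x in zip(mus, xs))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det                        # Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    se1 = math.sqrt(h00 / det)    # sqrt of [I^{-1}]_{11}
    return b0, b1, b1 / se1       # Wald z = estimate / standard error

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 2.0, 2.0, 6.0, 10.0, 16.0]
b0_hat, b1_hat, wald_z = poisson_glm(xs, ys)
```

Comparing `wald_z` against normal quantiles gives the significance test and confidence intervals described in the abstract.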

  16. An empirical model to estimate ultraviolet erythemal transmissivity

    Science.gov (United States)

    Antón, M.; Serrano, A.; Cancillo, M. L.; García, J. A.

    2009-04-01

    An empirical model to estimate the solar ultraviolet erythemal irradiance (UVER) for all-weather conditions is presented. This model proposes a power expression with the UV transmissivity as the dependent variable, and the slant ozone column and the clearness index as independent variables. The UVER were measured at three stations in South-Western Spain during a five-year period (2001-2005). A dataset corresponding to the period 2001-2004 was used to develop the model and an independent dataset (year 2005) for validation purposes. For all three locations, the empirical model explains more than 95% of the UV transmissivity variability due to changes in the two independent variables. In addition, the coefficients of the model show that when the slant ozone amount decreases by 1%, UV transmissivity and, therefore, UVER values increase by approximately 1.33%-1.35%. The coefficients also show that when the clearness index decreases by 1%, UV transmissivity increases by 0.75%-0.78%. The validation of the model provided satisfactory results, with a low mean absolute bias error (MABE) of about 7%-8% for all stations. Finally, a one-day-ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence in the total ozone column. The percentage of days with differences between forecast and experimental UVI lower than ±0.5 unit and ±1 unit is within the range of 28% to 37%, and 60% to 75%, respectively. Therefore, the empirical model proposed in this work provides reliable cloud-free UVI forecasts in order to inform the public about the possible harmful effects of UV radiation over-exposure.
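A power-law transmissivity model of this kind can be fitted by ordinary least squares after taking logarithms; the exponents, data ranges and noise level below are invented for illustration and are not the paper's estimates:

```python
import math
import random

def solve3(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting for the
    3x3 normal equations."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Synthetic data from a hypothetical power law T = C * ozone^a * kt^b
# (all coefficients made up, not the paper's fit)
random.seed(3)
rows = []
for _ in range(300):
    ozone = random.uniform(250.0, 450.0)   # slant ozone column
    kt = random.uniform(0.3, 0.8)          # clearness index
    T = 5.0 * ozone ** -1.3 * kt ** 0.76 * math.exp(random.gauss(0.0, 0.02))
    rows.append((ozone, kt, T))

# Log-linear least squares: log T = log C + a log ozone + b log kt
X = [(1.0, math.log(o), math.log(k)) for o, k, _ in rows]
y = [math.log(t) for _, _, t in rows]
A = [[sum(xi[i] * xi[j] for xi in X) for j in range(3)] for i in range(3)]
rhs = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(3)]
logc_hat, a_hat, b_hat = solve3(A, rhs)
```

The fitted exponents play the role of the elasticities quoted in the abstract: a 1% change in an input variable changes transmissivity by roughly the exponent in percent.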

  17. An empirical model to estimate ultraviolet erythemal transmissivity

    Energy Technology Data Exchange (ETDEWEB)

    Anton, M.; Serrano, A.; Cancillo, M.L.; Garcia, J.A. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2009-07-01

    An empirical model to estimate the solar ultraviolet erythemal irradiance (UVER) for all-weather conditions is presented. This model proposes a power expression with the UV transmissivity as the dependent variable, and the slant ozone column and the clearness index as independent variables. The UVER were measured at three stations in South-Western Spain during a five-year period (2001-2005). A dataset corresponding to the period 2001-2004 was used to develop the model and an independent dataset (year 2005) for validation purposes. For all three locations, the empirical model explains more than 95% of the UV transmissivity variability due to changes in the two independent variables. In addition, the coefficients of the model show that when the slant ozone amount decreases by 1%, UV transmissivity and, therefore, UVER values increase by approximately 1.33%-1.35%. The coefficients also show that when the clearness index decreases by 1%, UV transmissivity increases by 0.75%-0.78%. The validation of the model provided satisfactory results, with a low mean absolute bias error (MABE) of about 7%-8% for all stations. Finally, a one-day-ahead forecast of the UV Index for cloud-free cases is presented, assuming persistence in the total ozone column. The percentage of days with differences between forecast and experimental UVI lower than ±0.5 unit and ±1 unit is within the range of 28% to 37%, and 60% to 75%, respectively. Therefore, the empirical model proposed in this work provides reliable cloud-free UVI forecasts in order to inform the public about the possible harmful effects of UV radiation over-exposure. (orig.)

  18. High order discretization schemes for stochastic volatility models

    CERN Document Server

    Jourdain, Benjamin

    2009-01-01

    In usual stochastic volatility models, the process driving the volatility of the asset price evolves according to an autonomous one-dimensional stochastic differential equation. We assume that the coefficients of this equation are smooth. Using Itô's formula, we get rid, in the asset price dynamics, of the stochastic integral with respect to the Brownian motion driving this SDE. Taking advantage of this structure, we propose (i) a scheme, based on the Milstein discretization of this SDE, with order one of weak trajectorial convergence for the asset price, and (ii) a scheme, based on the Ninomiya-Victoir discretization of this SDE, with order two of weak convergence for the asset price. We also propose a specific scheme with improved convergence properties when the volatility of the asset price is driven by an Ornstein-Uhlenbeck process. We confirm the theoretical rates of convergence by numerical experiments and show that our schemes are well adapted to the multilevel Monte Carlo method introduced by Giles [2008a,b].
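A single Milstein step for a one-dimensional SDE is simple to write down; the sketch below applies it to geometric Brownian motion as a stand-alone example, not to the stochastic volatility schemes of the paper:

```python
import math
import random

def milstein_gbm(x0, mu, sigma, T, n, rng):
    """Milstein discretization of geometric Brownian motion
    dX = mu*X dt + sigma*X dW. With diffusion b(x) = sigma*x we have
    b(x)*b'(x) = sigma^2 * x, giving the correction term below."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        x += (mu * x * dt + sigma * x * dw
              + 0.5 * sigma * sigma * x * (dw * dw - dt))  # Milstein correction
    return x

rng = random.Random(4)
paths = [milstein_gbm(1.0, 0.05, 0.2, 1.0, 200, rng) for _ in range(2000)]
mean_est = sum(paths) / len(paths)   # should be close to E[X_T] = exp(mu*T)
```

The multilevel Monte Carlo method mentioned in the abstract combines such simulations across several step sizes to reduce the overall cost of estimating expectations like `mean_est`.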

  19. Computational design of patterned interfaces using reduced order models

    Science.gov (United States)

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  20. Basic first-order model theory in Mizar

    Directory of Open Access Journals (Sweden)

    Marco Bright Caminati

    2010-01-01

    The author has submitted to the Mizar Mathematical Library a series of five articles introducing a framework for the formalization of classical first-order model theory. In them, Gödel's completeness and Löwenheim-Skolem theorems have also been formalized for the countable case, to offer a first application of the framework and to showcase its utility. This is an overview of, and commentary on, some key aspects of this setup. It features exposition and discussion of a new encoding of basic definitions and theoretical gears needed for the task, remarks about the design strategies and approaches adopted in their implementation, and more general reflections about proof checking induced by the work done.

  1. Inflationary scenarios in Starobinsky model with higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Artymowski, Michał [Institute of Physics, Jagiellonian University,Łojasiewicza 11, 30-348 Kraków (Poland); Lalak, Zygmunt [Institute of Theoretical Physics, Faculty of Physics, University of Warsaw,ul. Pasteura 5, 02-093 Warsaw (Poland); Lewicki, Marek [Institute of Theoretical Physics, Faculty of Physics, University of Warsaw,ul. Pasteura 5, 02-093 Warsaw (Poland); Michigan Center for Theoretical Physics, University of Michigan,450 Church Street, Ann Arbor MI 48109 (United States)

    2015-06-17

    We consider Starobinsky inflation with a set of higher order corrections parametrised by two real coefficients λ₁, λ₂. In the Einstein frame we have found a potential with the Starobinsky plateau, a steep slope and possibly an additional minimum, local maximum or saddle point. We have identified three types of inflationary behaviour that may be generated in this model: i) inflation on the plateau, ii) at the local maximum (topological inflation), iii) at the saddle point. We have found limits on the parameters λᵢ and on initial conditions at the Planck scale which enable successful inflation and disable eternal inflation at the plateau. We have checked that the local minimum away from the GR vacuum is stable and that the field cannot leave it either via quantum tunnelling or via thermal corrections.

  2. Model-based Small Area Estimates of Cancer Risk Factors and Screening Behaviors - Small Area Estimates

    Science.gov (United States)

    These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.

  3. Recharge estimation for transient ground water modeling.

    Science.gov (United States)

    Jyrkama, Mikko I; Sykes, Jon F; Normani, Stefano D

    2002-01-01

    Reliable ground water models require both an accurate physical representation of the system and appropriate boundary conditions. While physical attributes are generally considered static, boundary conditions, such as ground water recharge rates, can be highly variable in both space and time. A practical methodology incorporating the hydrologic model HELP3 in conjunction with a geographic information system was developed to generate a physically based and highly detailed recharge boundary condition for ground water modeling. The approach uses daily precipitation and temperature records in addition to land use/land cover and soils data. The importance of the method in transient ground water modeling is demonstrated by applying it to a MODFLOW modeling study in New Jersey. In addition to improved model calibration, the results from the study clearly indicate the importance of using a physically based and highly detailed recharge boundary condition in ground water quality modeling, where the detailed knowledge of the evolution of the ground water flowpaths is imperative. The simulated water table is within 0.5 m of the observed values using the method, while the water levels can differ by as much as 2 m using uniform recharge conditions. The results also show that the combination of temperature and precipitation plays an important role in the amount and timing of recharge in cooler climates. A sensitivity analysis further reveals that increasing the leaf area index, the evaporative zone depth, or the curve number in the model will result in decreased recharge rates over time, with the curve number having the greatest impact.

  4. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja Krone

    2016-04-01

    To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
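A minimal sketch of the "fixed" estimation strategy compared above: simulate an AR(1) series per individual, estimate each autocorrelation by conditional least squares, and average across individuals. This is an illustrative simplification in plain numpy (no multilevel likelihood or MCMC); all parameter values are assumptions.

```python
import numpy as np

def simulate_ar1(phi, n, rng):
    """Simulate a zero-mean AR(1) series x_t = phi*x_{t-1} + e_t, e_t ~ N(0, 1)."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def fit_ar1(x):
    """Conditional least-squares (conditional ML) estimate of the autocorrelation."""
    x0, x1 = x[:-1], x[1:]
    return float(x0 @ x1 / (x0 @ x0))

rng = np.random.default_rng(42)
# Individual autocorrelations drawn around a population mean of 0.3 (sd 0.25),
# clipped to keep every series stationary.
true_phis = np.clip(rng.normal(0.3, 0.25, size=25), -0.9, 0.9)
estimates = [fit_ar1(simulate_ar1(p, 200, rng)) for p in true_phis]
print(np.mean(true_phis), np.mean(estimates))
```

With short series like those in the study (10 or 25 time points), the per-individual estimates are far noisier than here, which is what makes the pooled random-effects estimator attractive.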

  5. Adaptive Unified Biased Estimators of Parameters in Linear Model

    Institute of Scientific and Technical Information of China (English)

    Hu Yang; Li-xing Zhu

    2004-01-01

    To tackle multicollinearity or ill-conditioned design matrices in linear models, adaptive biased estimators such as the time-honored Stein estimator, the ridge and the principal component estimators have been studied intensively. To study when a biased estimator uniformly outperforms the least squares estimator, some sufficient conditions are proposed in the literature. In this paper, we propose a unified framework to formulate a class of adaptive biased estimators. This class includes all existing biased estimators and some new ones. A sufficient condition for outperforming the least squares estimator is proposed. In terms of selecting parameters in the condition, we can obtain all double-type conditions in the literature.
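To make the ridge member of this class concrete, here is a small sketch of OLS versus the ridge estimator on a nearly collinear design. The data and the penalty k = 1.0 are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned design: two nearly collinear columns.
n = 100
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

def ols(X, y):
    """Least squares estimator (X'X)^{-1} X'y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y: biased, but far less variable here."""
    return np.linalg.solve(X.T @ X + k * np.eye(X.shape[1]), X.T @ y)

# Individual OLS coefficients explode under collinearity; ridge stabilizes them,
# while both pin down the identifiable sum beta1 + beta2 ≈ 2.
b_ols, b_ridge = ols(X, y), ridge(X, y, k=1.0)
print(b_ols, b_ridge)
```

The ridge penalty shifts the small eigenvalues of X'X away from zero, which is exactly the variance-for-bias trade the adaptive biased estimators in this class formalize.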

  6. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle accurate design space exploration of complex architectures including multi processor configurations at a very early stage in the design phase. The modelling method uses a service... for various configurations of the system in order to explore the best possible implementation.

  7. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    With advances in connected vehicle technology, dynamic vehicle route guidance models gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.

  8. Parameter estimation based synchronization for an epidemic model with application to tuberculosis in Cameroon

    Energy Technology Data Exchange (ETDEWEB)

    Bowong, Samuel, E-mail: sbowong@gmail.co [Laboratory of Applied Mathematics, Department of Mathematics and Computer Science, Faculty of Science, University of Douala, P.O. Box 24157 Douala (Cameroon); Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany); Kurths, Jurgen [Potsdam Institute for Climate Impact Research (PIK), Telegraphenberg A 31, 14412 Potsdam (Germany); Department of Physics, Humboldt Universität zu Berlin, 12489 Berlin (Germany)

    2010-10-04

    We propose a method based on synchronization to identify the parameters and to estimate the underlying variables for an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update-rule design obtained only from real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Center region of Cameroon are performed to estimate the parameters and variables. This study shows that some tools of synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.

  9. Parameter estimation based synchronization for an epidemic model with application to tuberculosis in Cameroon

    Science.gov (United States)

    Bowong, Samuel; Kurths, Jurgen

    2010-10-01

    We propose a method based on synchronization to identify the parameters and to estimate the underlying variables for an epidemic model from real data. We suggest an adaptive synchronization method based on an observer approach, with an effective guidance parameter in the update-rule design obtained only from real data. In order to validate the identifiability and estimation results, numerical simulations of a tuberculosis (TB) model using real data from the Center region of Cameroon are performed to estimate the parameters and variables. This study shows that some tools of synchronization of nonlinear systems can help to deal with the parameter and state estimation problem in the field of epidemiology. We exploit the close link between mathematical modelling, structural identifiability analysis, synchronization, and parameter estimation to obtain biological insights into the system modelled.

  10. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  11. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical, and the model is used to calculate odds. When the categories of the dependent variable are ordered, the logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation is needed in the model to determine population values from a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research yield a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  12. Hospital Case Cost Estimates Modelling - Algorithm Comparison

    CERN Document Server

    Andru, Peter

    2008-01-01

    Ontario (Canada) Health System stakeholders support the idea and necessity of an integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of the Ontario healthcare system activities at the patient-specific level. At present, the actual patient-level case costs in the explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value, representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: (1) models using relative intensity weights of the cases, and (2) models using cost per diem.

  13. A regression model to estimate regional ground water recharge.

    Science.gov (United States)

    Lorenz, David L; Delin, Geoffrey N

    2007-01-01

    A regional regression model was developed to estimate the spatial distribution of ground water recharge in subhumid regions. The regional regression recharge (RRR) model was based on a regression of basin-wide estimates of recharge from surface water drainage basins, precipitation, growing degree days (GDD), and average basin specific yield (SY). Decadal average recharge, precipitation, and GDD were used in the RRR model. The RRR estimates were derived from analysis of stream base flow using a computer program that was based on the Rorabaugh method. As expected, there was a strong correlation between recharge and precipitation. The model was applied to statewide data in Minnesota. Where precipitation was least in the western and northwestern parts of the state (50 to 65 cm/year), recharge computed by the RRR model also was lowest (0 to 5 cm/year). A strong correlation also exists between recharge and SY. SY was least in areas where glacial lake clay occurs, primarily in the northwest part of the state; recharge estimates in these areas were in the 0- to 5-cm/year range. In sand-plain areas where SY is greatest, recharge estimates were in the 15- to 29-cm/year range on the basis of the RRR model. Recharge estimates that were based on the RRR model compared favorably with estimates made on the basis of other methods. The RRR model can be applied in other subhumid regions where region wide data sets of precipitation, streamflow, GDD, and soils data are available.
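A hedged sketch of the general idea behind the RRR model: fit a linear recharge relation to precipitation, growing degree days, and specific yield by least squares. The synthetic data and coefficients below are invented for illustration and are not the paper's Minnesota dataset or its Rorabaugh-based base-flow recharge estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic basin data (invented for illustration):
# decadal precipitation (cm/yr), growing degree days (thousands), specific yield.
n = 60
precip = rng.uniform(50, 85, n)
gdd = rng.uniform(2.0, 4.0, n)
sy = rng.uniform(0.05, 0.25, n)

# Assumed linear data-generating relation, standing in for basin-wide recharge
# estimates derived from stream base-flow analysis.
recharge = -20 + 0.35 * precip + 2.0 * gdd + 40.0 * sy + rng.normal(0, 1.5, n)

# Fit the regional regression by ordinary least squares.
A = np.column_stack([np.ones(n), precip, gdd, sy])
coef, *_ = np.linalg.lstsq(A, recharge, rcond=None)
print(np.round(coef, 2))  # recovers roughly [-20, 0.35, 2.0, 40.0]
```

Once fitted on gauged basins, such a regression can be evaluated on gridded precipitation, GDD, and soils data to map recharge in ungauged areas, which is how the RRR model produces statewide estimates.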

  14. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Marco Antônio Fonseca Conceição

    2003-01-01

    Experimental determination of microsprinkler droplet diameters is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations using a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated ones varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model presented a performance classified as excellent for simulating microsprinkler drop distribution.
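A minimal ballistic sketch in the spirit of such models (not the SIRIAS implementation): integrate a droplet's trajectory under gravity and quadratic air drag, so that smaller droplets decelerate faster and land closer to the emitter. All physical constants and launch conditions are illustrative assumptions.

```python
import math

def droplet_range(d_mm, v0, angle_deg, dt=1e-4):
    """Horizontal travel of a water droplet launched from an emitter,
    integrating a simple ballistic model with quadratic air drag.

    Illustrative constants: drag coefficient 0.45 (sphere), air density
    1.2 kg/m^3, water density 1000 kg/m^3, emitter height 0.3 m.
    """
    rho_air, rho_w, cd, g = 1.2, 1000.0, 0.45, 9.81
    r = d_mm / 2000.0                        # radius in metres
    m = rho_w * (4.0 / 3.0) * math.pi * r**3
    area = math.pi * r**2
    k = 0.5 * rho_air * cd * area / m        # drag acceleration per speed^2
    th = math.radians(angle_deg)
    x, y = 0.0, 0.3
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    while y > 0.0:                           # explicit Euler steps until landing
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# Smaller droplets have a larger drag-to-mass ratio and land nearer the emitter.
print(droplet_range(0.3, 15.0, 25.0), droplet_range(1.3, 15.0, 25.0))
```

Inverting this relation (radial landing distance as a function of diameter) is what lets ballistic models infer the droplet-size distribution from the measured water distribution pattern.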

  15. Application of Bayesian Hierarchical Prior Modeling to Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Shutin, Dmitriy

    2012-01-01

    Existing methods for sparse channel estimation typically provide an estimate computed as the solution maximizing an objective function defined as the sum of the log-likelihood function and a penalization term proportional to the l1-norm of the parameter of interest. However, other penalization... The estimators result as an application of the variational message-passing algorithm on the factor graph representing the signal model extended with the hierarchical prior models. Numerical results demonstrate the superior performance of our channel estimators as compared to traditional and state-of-the-art sparse methods.

  16. Estimation of the Heteroskedastic Canonical Contagion Model with Instrumental Variables

    Science.gov (United States)

    2016-01-01

    Knowledge of contagion among economies is a relevant issue in economics. The canonical model of contagion is an alternative in this case. Given the existence of endogenous variables in the model, instrumental variables can be used to decrease the bias of the OLS estimator. In the presence of heteroskedastic disturbances this paper proposes the use of conditional volatilities as instruments. Simulation is used to show that the homoscedastic and heteroskedastic estimators which use them as instruments have small bias. These estimators are preferable in comparison with the OLS estimator and their asymptotic distribution can be used to construct confidence intervals. PMID:28030628
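A compact sketch of the instrumental-variables idea used here, with a generic instrument standing in for the proposed conditional-volatility instruments. The data-generating values are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5000

# Endogenous regressor: x is correlated with the structural error u.
z = rng.normal(size=n)                        # instrument (stand-in for a
                                              # conditional volatility)
u = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)    # endogenous: depends on u
y = 2.0 * x + u                               # structural equation, true beta = 2

beta_ols = float(x @ y / (x @ x))             # biased upward, since cov(x, u) > 0
# Two-stage least squares: project x on z, then regress y on the fitted values.
x_hat = (z @ x / (z @ z)) * z
beta_iv = float(x_hat @ y / (x_hat @ x_hat))
print(beta_ols, beta_iv)
```

The IV estimate recovers the structural coefficient because the instrument is correlated with the regressor but not with the disturbance; the paper's contribution is choosing conditional volatilities as such instruments under heteroskedasticity.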

  17. A new estimate of the parameters in linear mixed models

    Institute of Scientific and Technical Information of China (English)

    王松桂; 尹素菊

    2002-01-01

    In linear mixed models, there are two kinds of unknown parameters: one is the fixed effect, the other is the variance component. In this paper, new estimates of these parameters, called the spectral decomposition estimates, are proposed. Some important statistical properties of the new estimates are established, in particular the linearity of the estimates of the fixed effects with many statistical optimalities. The new method is applied to two important models which are used in economics, finance, and mechanical fields. All estimates obtained have good statistical and practical meaning.

  18. The Adaptive LASSO Spline Estimation of Single-Index Model

    Institute of Scientific and Technical Information of China (English)

    LU Yiqiang; ZHANG Riquan; HU Bin

    2016-01-01

    In this paper, based on spline approximation, the authors propose a unified variable selection approach for the single-index model via an adaptive L1 penalty. The calculation methods of the proposed estimators are given on the basis of the known LARS algorithm. Under some regularity conditions, the authors demonstrate the asymptotic properties of the proposed estimators and the oracle properties of adaptive LASSO (aLASSO) variable selection. Simulations are used to investigate the performance of the proposed estimator and illustrate that it is effective for simultaneous variable selection as well as estimation of the single-index models.

  19. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear which is the appropriate method to choose. To this end, three approaches to estimation in the theta-logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
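A minimal sketch of the first (HMM) approach: discretize the state space of a logistic population model (the θ = 1 case of the theta-logistic) onto a grid and run the forward filter. The parameters, noise levels, and grid are illustrative assumptions, not those of the benchmark study.

```python
import numpy as np

# State dynamics: n_{t+1} = n_t + r*n_t*(1 - n_t/K) + process noise;
# observations: y_t = n_t + measurement noise (all values illustrative).
r, K, q, s = 0.3, 100.0, 2.0, 5.0
grid = np.linspace(1.0, 150.0, 300)             # discretized state space

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Transition matrix: P[i, j] ∝ P(next state near grid[j] | state grid[i]).
mean_next = grid + r * grid * (1 - grid / K)
P = norm_pdf(grid[None, :], mean_next[:, None], q)
P /= P.sum(axis=1, keepdims=True)

def hmm_filter(ys):
    """Forward filtering: posterior mean of the state at each time step."""
    belief = np.full(grid.size, 1.0 / grid.size)
    means = []
    for y in ys:
        belief = belief @ P                      # predict
        belief = belief * norm_pdf(grid, y, s)   # update with observation
        belief /= belief.sum()
        means.append(float(grid @ belief))
    return means

# Simulate a trajectory and filter the noisy observations.
rng = np.random.default_rng(11)
n, ys, truth = 20.0, [], []
for _ in range(60):
    n = n + r * n * (1 - n / K) + rng.normal(0, q)
    truth.append(n)
    ys.append(n + rng.normal(0, s))
est = hmm_filter(ys)
```

The grid filter's accuracy is limited only by the discretization, which is why the HMM formulation serves as a useful reference point against ADMB and BUGS.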

  20. A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model

    Directory of Open Access Journals (Sweden)

    Pedro Donoso

    2011-08-01

    A new approach for estimating the aggregate hierarchical logit model is presented. Though usually derived from random utility theory assuming correlated stochastic errors, the model can also be derived as a solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties that are superior to classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also generated reduced bias in the estimates of the subjective value of time and consumer surplus.

  1. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen;

    2008-01-01

    Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D s...

  2. Measurement of Large Dipolar Couplings of a Liquid Crystal with Terminal Phenyl Rings and Estimation of the Order Parameters.

    Science.gov (United States)

    Kumar, R V Sudheer; Ramanathan, Krishna V

    2015-07-20

    NMR spectroscopy is a powerful means of studying liquid-crystalline systems at atomic resolution. Of the many parameters that can provide information on the dynamics and order of these systems, ¹H-¹³C dipolar couplings are an important means of obtaining such information. Depending on the details of the molecular structure and the magnitude of the order parameters, the dipolar couplings can vary over a wide range of values. Thus the method employed to estimate the dipolar couplings should be capable of estimating both large and small dipolar couplings at the same time. For this purpose, we consider here a two-dimensional NMR experiment that works similarly to the insensitive nuclei enhanced by polarization transfer (INEPT) experiment in solution. With the incorporation of a modification proposed earlier for experiments with low radio frequency power, the scheme is observed to enable a wide range of dipolar couplings to be estimated at the same time. We utilized this approach to obtain dipolar couplings in a liquid crystal with phenyl rings attached to either end of the molecule, and estimated its local order parameters.

  3. Muscle synergy control model-tuned EMG driven torque estimation system with a musculo-skeletal model.

    Science.gov (United States)

    Min, Kyuengbo; Shin, Duk; Lee, Jongho; Kakei, Shinji

    2013-01-01

    Muscle activity is the final signal for motion control from the brain. Based on this biological characteristic, electromyogram (EMG) signals have been applied to various systems that interface humans with external environments, such as external devices. In order to use EMG signals as input control signals for this kind of system, current EMG-driven torque estimation models generally employ a mathematical model that estimates the nonlinear transformation function between the input signal and the output torque. However, these models need to estimate too many parameters, and this causes their estimation versatility in various conditions to be poor. Moreover, as these models are designed to estimate the joint torque, the input EMG signals are tuned without considering the physiological synergetic contributions of multiple muscles to motion control. To overcome these problems of the current models, we propose a new tuning model based on the synergy control mechanism between multiple muscles in the cortico-spinal tract. With this synergetic tuning model, the estimated contribution of multiple muscles to motion control is applied to tune the EMG signals. Thus, this cortico-spinal control mechanism-based process improves the precision of torque estimation. The system is basically a forward dynamics model that transforms EMG signals into joint torque. It should be emphasized that this forward dynamics model uses a musculo-skeletal model as a constraint. The musculo-skeletal model is designed with precise musculo-skeletal data, such as the origins and insertions of individual muscles or the maximum muscle force. Compared with the mathematical model, the proposed model can be a versatile model for torque estimation in various conditions and estimates the torque with improved accuracy. In this paper, we also show some preliminary experimental results for discussion of the proposed model.

  4. Second-order closure PBL model with new third-order moments: Comparison with LES data

    Science.gov (United States)

    Canuto, V. M.; Minotti, F.; Ronchi, C.; Ypma, R. M.; Zeman, O.

    1994-01-01

    This paper contains two parts. In the first part, a new set of diagnostic equations is derived for the third-order moments for a buoyancy-driven flow, by exact inversion of the prognostic equations for the third-order moments in the stationary case. The third-order moments exhibit a universal structure: they are all linear combinations of the derivatives of all the second-order moments w², wθ, θ², and q² (overbars denoting ensemble averages are omitted here). Each term of the sum contains a turbulent diffusivity D_t, which also exhibits a universal structure of the form D_t = a ν_t + b wθ. Since the sign of the convective flux changes depending on stable or unstable stratification, D_t varies according to the type of stratification. Here ν_t ≈ wl (l is a mixing length and w is an rms velocity) represents the 'mechanical' part, while the 'buoyancy' part is represented by the convective flux wθ. The quantities a and b are functions of the variable N²τ², where N² = gα ∂Θ/∂z and τ is the turbulence time scale. The new expressions for the third-order moments generalize those of Zeman and Lumley, which were subsequently adopted by Sun and Ogura, Chen and Cotton, and Finger and Schmidt in their treatments of the convective boundary layer. In the second part, the new expressions for the third-order moments are used to solve the ensemble average equations describing a purely convective boundary layer heated from below at a constant rate. The computed second- and third-order moments are then compared with the corresponding Large Eddy Simulation (LES) results, most of which are obtained by running a new LES code, and part of which are taken from published results. The ensemble average results compare favorably with the LES data.

  5. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Ned; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
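A toy sketch of the tornado-diagram technique mentioned above: vary one logic-tree branch at a time around a baseline, rank branches by the swing they induce in a loss metric, and fix the low-swing ones. The three branches and their scaling factors below are invented for illustration and are not UCERF3 parameters.

```python
# Hypothetical three-branch logic tree, a toy stand-in for UCERF3-TD's
# 57,600-leaf tree: each branch option multiplies a baseline loss estimate.
branches = {
    "scaling_relation": [0.80, 1.00, 1.25],
    "deformation_model": [0.95, 1.00, 1.05],
    "smoothing": [0.99, 1.00, 1.01],
}

def loss(choice):
    """Portfolio loss for one leaf of the tree (purely illustrative model)."""
    base = 100.0
    for factor in choice.values():
        base *= factor
    return base

baseline = {name: opts[1] for name, opts in branches.items()}

# Tornado-diagram swing: vary one branch at a time, others held at baseline.
swings = {}
for name, opts in branches.items():
    vals = [loss({**baseline, name: o}) for o in opts]
    swings[name] = max(vals) - min(vals)

ranked = sorted(swings, key=swings.get, reverse=True)
print(ranked)  # branches ordered by influence; low-swing ones can be fixed
```

Fixing the low-swing branches at their baseline values is what collapses the tree, in the same spirit as reducing the full model to 60 leaves in the paper.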

  6. An Estimated DSGE Model of the Indian Economy

    OpenAIRE

    2010-01-01

    We develop a closed-economy DSGE model of the Indian economy and estimate it by Bayesian Maximum Likelihood methods using Dynare. We build up in stages to a model with a number of features important for emerging economies in general and the Indian economy in particular: a large proportion of credit-constrained consumers, a financial accelerator facing domestic firms seeking to finance their investment, and an informal sector. The simulation properties of the estimated model are examined under...

  7. Higher-order models versus direct hierarchical models: g as superordinate or breadth factor?

    Directory of Open Access Journals (Sweden)

    Gilles E. Gignac

    2008-03-01

    Intelligence research appears to have overwhelmingly endorsed a superordinate (higher-order) model conceptualization of g, in comparison to the relatively less well-known breadth conceptualization of g, as represented by the direct hierarchical model. In this paper, several similarities and distinctions between the indirect and direct hierarchical models are delineated. Based on the re-analysis of five correlation matrices, it was demonstrated via CFA that the conventional conception of g as a higher-order superordinate factor was likely not as plausible as a first-order breadth factor. The results are discussed in light of theoretical advantages of conceptualizing g as a first-order factor. Further, because the associations between group factors and g are constrained to zero within a direct hierarchical model, previous observations of isomorphic associations between a lower-order group factor and g are questioned.

  8. Model Order Selection in Multi-baseline Interferometric Radar Systems

    Directory of Open Access Journals (Sweden)

    Fulvio Gini

    2005-12-01

    Synthetic aperture radar interferometry (InSAR) is a powerful technique to derive three-dimensional terrain images. Interest is growing in exploiting the advanced multi-baseline mode of InSAR to solve layover effects from complex orography, which generate reception of unexpected multicomponent signals that degrade imagery of both terrain radar reflectivity and height. This work addresses a few problems related to the implementation into interferometric processing of nonlinear algorithms for estimating the number of signal components, including a system trade-off analysis. The performance of various eigenvalue-based information-theoretic criteria (ITC) algorithms is numerically investigated under some realistic conditions. In particular, speckle effects from surface and volume scattering are taken into account as multiplicative noise in the signal model. Robustness to leakage of signal power into the noise eigenvalues and operation with a small number of looks are investigated. The issue of baseline optimization for detection is also addressed. The use of diagonally loaded ITC methods is then proposed as a tool for robust operation in the presence of speckle decorrelation. Finally, case studies of a nonuniform array are presented and recommendations for a proper combination of ITC methods and system configuration are given.
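As an illustration of the eigenvalue-based ITC family studied here, a sketch of the Wax-Kailath MDL criterion applied to a simulated multi-baseline covariance matrix. The array size, look count, and signal strengths are assumed; speckle, multiplicative noise, and diagonal loading are not modeled.

```python
import numpy as np

def mdl_order(eigs, n_looks):
    """Wax-Kailath MDL estimate of the number of signal components from the
    eigenvalues of a sample covariance matrix (one eigenvalue-based ITC)."""
    eigs = np.sort(eigs)[::-1]
    p = len(eigs)
    scores = []
    for k in range(p):
        tail = eigs[k:]
        # log of (geometric mean / arithmetic mean) of the noise eigenvalues
        log_ratio = np.sum(np.log(tail)) - (p - k) * np.log(np.mean(tail))
        penalty = 0.5 * k * (2 * p - k) * np.log(n_looks)
        scores.append(-n_looks * log_ratio + penalty)
    return int(np.argmin(scores))

rng = np.random.default_rng(3)
p, n, k_true = 8, 500, 2                # 8 baselines, 500 looks, 2 components
A = rng.normal(size=(p, k_true))        # mixing (steering-like) matrix
S = 3.0 * rng.normal(size=(k_true, n))  # two strong signal components
X = A @ S + rng.normal(size=(p, n))     # add white noise
R = X @ X.T / n                         # sample covariance
eigs = np.linalg.eigvalsh(R)
print(mdl_order(eigs, n))
```

The diagonal loading proposed in the paper would add a small constant to R's diagonal before this step, countering the leakage of signal power into the noise eigenvalues under speckle decorrelation.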

  9. Selection of higher order regression models in the analysis of multi-factorial transcription data.

    Directory of Open Access Journals (Sweden)

    Olivia Prazeres da Costa

    Full Text Available INTRODUCTION: Many studies examine gene expression data that has been obtained under the influence of multiple factors, such as genetic background, environmental conditions, or exposure to diseases. The interplay of multiple factors may lead to effect modification and confounding. Higher order linear regression models can account for these effects. We present a new methodology for linear model selection and apply it to microarray data of bone marrow-derived macrophages. This experiment investigates the influence of three variable factors: the genetic background of the mice from which the macrophages were obtained, Yersinia enterocolitica infection (two strains and a mock control), and treatment/non-treatment with interferon-γ. RESULTS: We set up four different linear regression models in a hierarchical order. We introduce the eruption plot as a new practical tool for model selection complementary to global testing. It visually compares the size and significance of effect estimates between two nested models. Using this methodology we were able to select the most appropriate model by keeping only relevant factors showing additional explanatory power. Application to experimental data allowed us to qualify the interaction of factors as either neutral (no interaction), alleviating (co-occurring effects are weaker than expected from the single effects), or aggravating (stronger than expected). We find a biologically meaningful gene cluster of putative C2TA target genes that appear to be co-regulated with MHC class II genes. CONCLUSIONS: We introduced the eruption plot as a tool for visual model comparison to identify relevant higher order interactions in the analysis of expression data obtained under the influence of multiple factors. We conclude that model selection in higher order linear regression models should generally be performed for the analysis of multi-factorial microarray data.
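
    The principle of keeping a higher-order term only when it adds explanatory power can be sketched with a standard nested-model F-test (a minimal NumPy sketch; the paper's actual procedure relies on global testing and the eruption plot, which are not reproduced, and all names are illustrative):

```python
import numpy as np

def nested_f_stat(y, X_small, X_big):
    """F statistic comparing two nested OLS models, e.g. main effects
    only (X_small) versus main effects plus an interaction (X_big)."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)
    n, p1, p2 = len(y), X_small.shape[1], X_big.shape[1]
    rss_small, rss_big = rss(X_small), rss(X_big)
    return ((rss_small - rss_big) / (p2 - p1)) / (rss_big / (n - p2))
```

    A large statistic indicates that the extra term captures real structure rather than noise.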

  10. Asymptotic Properties of Spectral Estimates of Second-Order with Missed Observations

    Directory of Open Access Journals (Sweden)

    G. S. Mokaddis

    2010-01-01

    Full Text Available Problem statement: As a complement to the periodogram study, the asymptotic properties of the spectral density using a data window for a stationary stochastic process are investigated. Some statistical properties of the covariance estimation function with missing observations are studied. Approach: The asymptotic normality was discussed. A numerical example was worked out using computer programming. Results: The study of time series with missed observations and with the modified periodogram gave the same results as the study of the classic time series. Conclusion: The modified periodogram with an expanded finite Fourier transformation for time series with missed observations improves on the results of the classic time series.
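
    The use of a data window together with missed observations can be sketched as follows (an illustrative sketch only: the mask-and-renormalize construction is an assumption standing in for the paper's expanded finite Fourier transformation, and all names are invented):

```python
import numpy as np

def modified_periodogram(x, observed, window=None):
    """Periodogram of a series with missed observations.
    observed is a boolean mask; missing samples are zeroed and the
    normalization uses the energy of the effective (masked) window."""
    n = len(x)
    w = np.hanning(n) if window is None else window
    g = w * observed                     # effective taper after masking
    xw = np.where(observed, x, 0.0) * w  # windowed data, gaps zeroed
    spectrum = np.abs(np.fft.rfft(xw)) ** 2
    return spectrum / np.sum(g ** 2)
```

    With a modest fraction of missing samples, the spectral peak of a sinusoid stays at the correct frequency bin.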

  11. The Interaction Between Control Rods as Estimated by Second-Order One-Group Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-10-15

    The interaction effect between control rods is an important problem for the reactivity control of a reactor. The approach of second order one-group perturbation theory is shown to be attractive due to its simplicity. Formulas are derived for the fully inserted control rods in a bare reactor. For a single rod we introduce a correction parameter b, which with good approximation is proportional to the strength of the absorber. For two and more rods we introduce an interaction function g(r{sub ij}), which is assumed to depend only on the distance r{sub ij} between the rods. The theoretical expressions are correlated with the results of several experiments in R0, ZEBRA and the Aagesta reactor, as well as with more sophisticated calculations. The approximate formulas are found to give quite good agreement with exact values, but in the case of about 8 or more rods higher-order effects are likely to be important.
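
    A hedged sketch of the quantities described above, in standard one-group perturbation notation (the symbols and the additive combination are assumptions based on the abstract, not formulas taken from the report):

```latex
% Worth of a single fully inserted rod, with the correction parameter b
% (approximately proportional to absorber strength) applied to the
% first-order one-group perturbation estimate:
\rho_i \;\approx\; -\,(1+b)\,
  \frac{\int \delta\Sigma_{a,i}\,\phi_0^2\,dV}{\int \nu\Sigma_f\,\phi_0^2\,dV}

% Two rods: individual worths plus a distance-dependent interaction term:
\rho_{ij} \;\approx\; \rho_i + \rho_j + g(r_{ij})
```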

  12. Optimising a Model of Minimum Stock Level Control and a Model of Standing Order Cycle in Selected Foundry Plant

    Directory of Open Access Journals (Sweden)

    Szymszal J.

    2013-09-01

    Full Text Available It has been found that the area where one can look for significant reserves in the procurement logistics is a rational management of the stock of raw materials. Currently, the main purpose of projects which increase the efficiency of inventory management is to rationalise all the activities in this area, taking into account and minimising at the same time the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of consignments ordered after the minimum emergency level has occurred. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on the prior selection and use of rational methods for forecasting the time series of the delivery of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasting a mean size of the delivered batch of products and its standard deviation.
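
    The two models described above correspond to textbook stock-control quantities, which can be sketched as follows (an illustrative sketch; the service-level factor z and the function names are assumptions, and the paper's forecast-driven parameter selection is not reproduced):

```python
import math

def reorder_point(mean_demand, std_demand, lead_time, z=1.65):
    """Minimum (emergency) stock level: expected demand over the lead
    time plus a safety stock scaled by the service-level factor z."""
    return mean_demand * lead_time + z * std_demand * math.sqrt(lead_time)

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: the consignment size minimizing the
    sum of ordering and holding costs."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)
```

    For example, a demand of 1000 units per year with a 50-unit ordering cost and a 2-unit holding cost gives an order quantity of about 224 units.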

  13. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    Energy Technology Data Exchange (ETDEWEB)

    Rypdal, Kristin; Baritz, Rainer

    2002-07-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful to estimate changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of the non-CO{sub 2} carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  14. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  15. Formal modeling and verification of fractional order linear systems.

    Science.gov (United States)

    Zhao, Chunna; Shi, Likun; Guan, Yong; Li, Xiaojuan; Shi, Zhiping

    2016-05-01

    This paper presents a formalization of a fractional order linear system in a higher-order logic (HOL) theorem proving system. Based on the formalization of the Grünwald-Letnikov (GL) definition, we formally specify and verify the linear and superposition properties of fractional order systems. The proof provides rigorous and solid underpinnings for verifying concrete fractional order linear control systems. Our implementation in HOL demonstrates the effectiveness of our approach in practical applications.
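
    A numerical counterpart of the Grünwald-Letnikov definition underlying the formalization can be sketched as follows (a sketch only: the HOL development is symbolic, and the function name, uniform sampling, and truncation at the sampled history are assumptions):

```python
import numpy as np

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha at the
    last sample of f_vals (uniform step h), via the truncated GL sum."""
    n = len(f_vals)
    # weights (-1)^k * binom(alpha, k), built with the standard recursion
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return float(np.dot(c, f_vals[::-1])) / h ** alpha
```

    For alpha = 1 the weights reduce to a first-order backward difference, recovering the ordinary derivative.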

  16. INTERACTING MULTIPLE MODEL ALGORITHM BASED ON JOINT LIKELIHOOD ESTIMATION

    Institute of Scientific and Technical Information of China (English)

    Sun Jie; Jiang Chaoshu; Chen Zhuming; Zhang Wei

    2011-01-01

    A novel approach is proposed for the estimation of likelihood in the Interacting Multiple-Model (IMM) filter. In this approach, the actual innovation, based on a mismatched model, can be formulated as the sum of the theoretical innovation based on a matched model and the distance between the matched and mismatched models, whose probability distributions are known. The joint likelihood of the innovation sequence can be estimated by convolution of the two known probability density functions. The likelihood of the tracking models can be calculated by the conditional probability formula. Compared with the conventional likelihood estimation method, the proposed method improves the estimation accuracy of the likelihood and the robustness of the IMM filter, especially when a maneuver occurs.
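
    The convolution step described above can be sketched for the special case where both terms are zero-mean Gaussians (an illustrative sketch; the names and the Gaussian choice are assumptions, and in that case the convolved density is itself Gaussian with summed variances):

```python
import numpy as np

def convolved_density(grid, sigma_a, sigma_b):
    """Density of the sum of two independent zero-mean Gaussian terms,
    obtained by numerical convolution on a uniform grid."""
    dx = grid[1] - grid[0]
    ga = np.exp(-grid**2 / (2 * sigma_a**2)) / (sigma_a * np.sqrt(2 * np.pi))
    gb = np.exp(-grid**2 / (2 * sigma_b**2)) / (sigma_b * np.sqrt(2 * np.pi))
    return np.convolve(ga, gb, mode="same") * dx
```

    Convolving N(0, 1) with N(0, 4) reproduces the N(0, 5) density, which serves as a quick sanity check of the construction.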

  17. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    To support an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework is simulation based and allows performance estimation to be carried out throughout all design phases, ranging from early functional to cycle accurate and bit true descriptions of the system, modelling both hardware and software components in a unified way. Design space exploration and performance estimation are performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation.

  18. Gaussian estimation for discretely observed Cox-Ingersoll-Ross model

    Science.gov (United States)

    Wei, Chao; Shu, Huisheng; Liu, Yurong

    2016-07-01

    This paper is concerned with the parameter estimation problem for the Cox-Ingersoll-Ross model based on discrete observation. First, a new discretized process is built based on the Euler-Maruyama scheme. Then, the parameter estimators are obtained by employing the maximum likelihood method, and explicit expressions for the estimation error are given. Subsequently, the consistency of all parameter estimators is proved by applying the law of large numbers for martingales, Hölder's inequality, the B-D-G inequality and the Cauchy-Schwarz inequality. Finally, a numerical simulation example for the estimators and the absolute error between the estimators and the true values is presented to demonstrate the effectiveness of the estimation approach used in this paper.
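
    The Euler-Maruyama discretization mentioned above can be sketched for the CIR dynamics (a simulation-only sketch; the maximum likelihood estimators themselves are not reproduced, and clipping the state at zero is an assumption that keeps the square root real):

```python
import numpy as np

def simulate_cir(kappa, theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama path of the Cox-Ingersoll-Ross process
    dX_t = kappa*(theta - X_t) dt + sigma*sqrt(X_t) dW_t."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        drift = kappa * (theta - x[i]) * dt
        diffusion = sigma * np.sqrt(max(x[i], 0.0)) * dw
        x[i + 1] = x[i] + drift + diffusion
    return x
```

    A long simulated path should fluctuate around the long-run mean theta, which gives a quick check of the discretization.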

  19. Battery Calendar Life Estimator Manual Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  20. DR-model-based estimation algorithm for NCS

    Institute of Scientific and Technical Information of China (English)

    HUANG Si-niu; CHEN Zong-ji; WEI Chen

    2006-01-01

    A novel estimation scheme based on a dead reckoning (DR) model for networked control systems (NCS) is proposed in this paper. Both the detailed DR estimation algorithm and the stability analysis of the system are given. By using the DR estimate of the state, the effect of communication delays is overcome. This makes a controller designed without considering delays still applicable in an NCS. Moreover, the scheme can effectively solve the problem of data packet loss or timeout.
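
    The dead-reckoning idea can be sketched with a constant-velocity extrapolator that masks the network delay (a minimal sketch; the class and method names are illustrative, and the paper's full algorithm and stability analysis are not reproduced):

```python
class DRPredictor:
    """Constant-velocity dead-reckoning predictor for a networked
    control loop: extrapolates the last received state to the current
    time, compensating the transmission delay."""

    def __init__(self):
        self.x = self.v = self.t = None

    def update(self, x, v, t):
        """Store the most recent state received over the network."""
        self.x, self.v, self.t = x, v, t

    def predict(self, t_now):
        """Extrapolate the stored state to time t_now."""
        if self.x is None:
            return None
        return self.x + self.v * (t_now - self.t)
```

    The controller then acts on the predicted state instead of the stale, delayed measurement.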