WorldWideScience

Sample records for model order estimation

  1. Decimative Spectral Estimation with Unconstrained Model Order

    Directory of Open Access Journals (Sweden)

    Stavroula-Evita Fotinea

    2012-01-01

Full Text Available This paper presents a new state-space method for spectral estimation that performs decimation by any factor, makes use of the full set of data, and brings the poles under consideration further apart, while imposing almost no constraints on the size of the Hankel matrix (model order) as decimation increases. It is compared against two previously proposed techniques for spectral estimation (along with derived decimative versions) that lie among the most promising methods in the field of spectroscopy, where accuracy of parameter estimation is of utmost importance. Moreover, it is compared against a state-of-the-art purely decimative method proposed in the literature. Experiments performed on simulated NMR signals show the new method to be more robust, especially at low signal-to-noise ratios.
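
The subspace machinery behind such methods can be illustrated in a few lines: for a single noiseless damped complex exponential, the SVD of a Hankel data matrix plus the shift-invariance of its leading singular vector recovers the signal pole. This is a generic HTLS-style sketch, not the decimative method of the paper:

```python
import numpy as np

# One noiseless damped complex exponential x[n] = z0**n.
z0 = np.exp(-0.01 + 0.5j)            # pole: damping 0.01, frequency 0.5 rad/sample
x = z0 ** np.arange(64)

L = 32                               # number of Hankel rows
H = np.array([x[i:i + 64 - L + 1] for i in range(L)])

u = np.linalg.svd(H)[0][:, 0]        # leading left singular vector, proportional to z0**i
# Shift-invariance: u[i+1] / u[i] = z0, solved here in least squares.
z_hat = (u[:-1].conj() @ u[1:]) / (u[:-1].conj() @ u[:-1])
damping, freq = -np.log(np.abs(z_hat)), np.angle(z_hat)
```

With noise, the same shift-invariance step is applied to the dominant singular vectors of the Hankel matrix rather than to the raw data.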

  2. Fundamental Frequency and Model Order Estimation Using Spatial Filtering

    DEFF Research Database (Denmark)

    Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal...... extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference (SIR) levels, and an experiment...... on a trumpet signal shows the applicability to real signals....
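
As a minimal, hypothetical stand-in for the joint estimators discussed here (no beamforming, and with the number of harmonics assumed known), the fundamental frequency can be estimated by harmonic summation over a candidate grid:

```python
import numpy as np

# Synthetic harmonic signal: 3 harmonics of f0 = 0.11 cycles/sample, plus noise.
rng = np.random.default_rng(3)
f0_true, n_harm, N = 0.11, 3, 500
n = np.arange(N)
x = sum(np.cos(2 * np.pi * f0_true * l * n + l) for l in range(1, n_harm + 1))
x = x + rng.normal(0.0, 0.5, N)

def harmonic_energy(f0):
    """Energy captured by projecting x onto the first n_harm harmonics of f0."""
    return sum(abs(np.exp(-2j * np.pi * f0 * l * n) @ x) ** 2
               for l in range(1, n_harm + 1))

cands = np.arange(0.02, 0.2, 1e-4)   # candidate fundamental frequencies
f0_hat = cands[np.argmax([harmonic_energy(f) for f in cands])]
```

The full methods in the abstract additionally estimate the model order (number of harmonics) jointly and use spatial filtering to suppress interferers before this step.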

  3. The formulation and estimation of a spatial skew-normal generalized ordered-response model.

    Science.gov (United States)

    2016-06-01

This paper proposes a new spatial generalized ordered-response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...

  4. Mixed Lp Estimators Variety for Model Order Reduction in Control Oriented System Identification

    Directory of Open Access Journals (Sweden)

    Christophe Corbier

    2015-01-01

Full Text Available A new family of MLE-type Lp estimators for model order reduction in dynamical systems identification is presented in this paper. A family of Lp distributions proposed in this work combines Lp2 (1estimation criterion and reduces the estimated model complexity. Convergence and consistency properties of the estimator are analysed and the model order reduction is established. Experimental results on a real complex vibrating dynamical system are presented and discussed, and pseudo-linear models are considered.

  5. A Probabilistic Model of Visual Working Memory: Incorporating Higher Order Regularities into Working Memory Capacity Estimates

    Science.gov (United States)

    Brady, Timothy F.; Tenenbaum, Joshua B.

    2013-01-01

    When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…

  6. Comparisons of Modeling and State of Charge Estimation for Lithium-Ion Battery Based on Fractional Order and Integral Order Methods

    Directory of Open Access Journals (Sweden)

    Renxin Xiao

    2016-03-01

Full Text Available In order to properly manage lithium-ion batteries of electric vehicles (EVs), it is essential to build the battery model and estimate the state of charge (SOC). In this paper, the fractional order forms of the Thevenin and partnership for a new generation of vehicles (PNGV) models are built, of which the model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
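
The EKF-based SOC estimation compared in this abstract can be sketched, in its simplest integer-order form, as a scalar Kalman filter on coulomb counting with a voltage correction. The linear OCV curve, capacity, and resistance below are illustrative values, not the paper's identified parameters:

```python
import numpy as np

# Hypothetical cell: 2 Ah capacity, linearized OCV, 50 mOhm Ohmic resistance.
Q_AH, R0 = 2.0, 0.05
def ocv(soc):                        # assumed linear open-circuit voltage (V)
    return 3.2 + 0.9 * soc

# Simulate a 1 A constant-current discharge for one hour.
rng = np.random.default_rng(0)
dt, n = 1.0, 3600
current = np.full(n, 1.0)            # A
true_soc = np.empty(n)
soc = 1.0
for k in range(n):
    true_soc[k] = soc
    soc -= current[k] * dt / (Q_AH * 3600.0)
volt = ocv(true_soc) - R0 * current + rng.normal(0.0, 0.005, n)  # noisy terminal voltage

# Scalar EKF: state = SOC; process soc' = soc - i*dt/Q; measurement v = ocv(soc) - R0*i.
x, P = 0.5, 1.0                      # deliberately wrong initial SOC guess
q_cov, r_cov = 1e-10, 0.005 ** 2
est = np.empty(n)
for k in range(n):
    x -= current[k] * dt / (Q_AH * 3600.0)       # predict (coulomb counting)
    P += q_cov
    Hj = 0.9                                     # measurement Jacobian d(ocv)/d(soc)
    K = P * Hj / (Hj * P * Hj + r_cov)           # Kalman gain
    x += K * (volt[k] - (ocv(x) - R0 * current[k]))
    P *= 1.0 - K * Hj
    est[k] = x
```

The fractional-order variants in the paper replace the integer-order state equations with fractional-derivative dynamics but keep this same predict/correct structure.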

  7. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation. The non-integer differentiation order is expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution; therefore, the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC once the battery's internal electrochemical reactions reach equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
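
The fractional orders used in such models are defined through constructions like the Grünwald-Letnikov derivative, which is straightforward to approximate numerically. This is a generic sketch of the operator itself, unrelated to the paper's identification procedure:

```python
import math

def gl_derivative(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f
    at time t, with memory starting at 0."""
    n_terms = int(t / h)
    c = 1.0
    acc = f(t)                                  # j = 0 term (coefficient 1)
    for j in range(1, n_terms + 1):
        c *= 1.0 - (alpha + 1.0) / j            # recursive binomial weights
        acc += c * f(t - j * h)
    return acc / h ** alpha

# alpha = 1 recovers the ordinary derivative: d/dt t^2 = 2 at t = 1.
d1 = gl_derivative(lambda t: t * t, 1.0, 1.0)
# alpha = 0.5 of f(t) = t has the closed form sqrt(t) / Gamma(1.5).
d_half = gl_derivative(lambda t: t, 0.5, 1.0)
exact_half = 1.0 / math.gamma(1.5)
```

Note the long memory: every past sample of f contributes, which is exactly why fractional models capture the slow diffusion dynamics of porous electrodes.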

  8. Estimation and asymptotic inference in the first order AR-ARCH model

    DEFF Research Database (Denmark)

    Lange, Theis; Rahbek, Anders; Jensen, Søren Tolver

    2011-01-01

This article studies asymptotic properties of the quasi-maximum likelihood estimator (QMLE) for the parameters in the autoregressive (AR) model with autoregressive conditional heteroskedastic (ARCH) errors. A modified QMLE (MQMLE) is also studied. This estimator is based on truncation of individual...... for the QMLE to be asymptotically normal. Finally, geometric ergodicity for AR-ARCH processes is shown to hold under mild and classic conditions on the AR and ARCH processes.

  9. Estimation of Higher Order Moments for Compound Models of Clutter by Mellin Transform

    OpenAIRE

    Bhattacharya, C

    2008-01-01

The compound models of clutter statistics are found suitable to describe the nonstationary nature of radar backscattering from high-resolution observations. In this letter, we show that the properties of the Mellin transform can be utilized to generate higher order moments of simple and compound models of clutter statistics in a compact manner.
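
The key Mellin-transform property being exploited — the Mellin transform of a product density is the product of the Mellin transforms, so moments of a product of independent variables factorize — can be checked by Monte Carlo for a compound (texture × speckle) clutter model. The Gamma/exponential choice below is one common example, not necessarily the letter's:

```python
import numpy as np

# Compound (product) clutter: intensity z = texture x * independent speckle y.
# Moment factorization: E[z^k] = E[x^k] * E[y^k].
rng = np.random.default_rng(42)
N = 1_000_000
nu = 2.0                                         # texture shape parameter (assumed)
x = rng.gamma(shape=nu, scale=1.0 / nu, size=N)  # unit-mean Gamma texture
y = rng.exponential(scale=1.0, size=N)           # exponential speckle intensity
z = x * y

def moment(v, k):
    return float(np.mean(v ** k))

m3_product = moment(z, 3)
m3_factored = moment(x, 3) * moment(y, 3)        # should agree with m3_product
```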

  10. Order statistics & inference estimation methods

    CERN Document Server

    Balakrishnan, N

    1991-01-01

The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume consolidates the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering.

  11. An extended Kalman filter approach to non-stationary Bayesian estimation of reduced-order vocal fold model parameters.

    Science.gov (United States)

    Hadwin, Paul J; Peterson, Sean D

    2017-04-01

The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in accuracy to the particle filter technique when more than 6000 particles are employed; with fewer particles, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by two orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.

  12. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part I: Model development and observability analysis

    Science.gov (United States)

    Li, Xiaoyu; Fan, Guodong; Pan, Ke; Wei, Guo; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

The design of a lumped parameter battery model preserving physical meaning is especially desired by automotive researchers and engineers due to the strong demand for battery system control, estimation, diagnosis, and prognostics. In light of this, a novel simplified fractional order electrochemical model is developed for electric vehicle (EV) applications in this paper. In the model, a general fractional order transfer function is designed to approximate solid-phase lithium ion diffusion. The dynamic characteristics of the electrolyte concentration overpotential are approximated by a first-order resistance-capacitor transfer function in the electrolyte phase. The Ohmic resistances and the electrochemical reaction kinetics resistance are simplified to a lumped Ohmic resistance parameter. Overall, the number of model parameters is reduced from 30 to 9, yet the accuracy of the model is still guaranteed. In order to address the dynamics of the phase-change phenomenon in the active particle during charging and discharging, variable solid-state diffusivity is taken into consideration in the model. The observability of the model is subsequently analyzed for two types of lithium ion batteries. Results show that the fractional order model with variable solid-state diffusivity agrees very well with experimental data at various current input conditions and is suitable for electric vehicle applications.

  13. Sinusoidal Order Estimation Using Angles between Subspaces

    Directory of Open Access Journals (Sweden)

    Søren Holdt Jensen

    2009-01-01

    Full Text Available We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the other previously published subspace methods and that it is more robust to the noise being colored than the previously published methods.
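
The nontrivial angles between two subspaces can be computed from the SVD of the product of their orthonormal bases; this minimal sketch shows the angle computation only, not the paper's full order-selection rule:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)                      # orthonormal basis of span(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Identical subspaces give angle 0; orthogonal subspaces give pi/2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0], [0.0]])
ang_same = principal_angles(A, A)
ang_orth = principal_angles(A, B)
```

In the order-estimation setting, the candidate signal subspace and the estimated noise subspace play the roles of A and B, and the order is chosen where the angles indicate the strongest orthogonality.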

  14. Application of numerical modelling in order to estimate the interaction between surface water and thermal groundwater use

    Science.gov (United States)

    Goetzl, Gregor; Hoyer, Stefan; Bruestle, Anna Katharina

    2014-05-01

In Vienna, the thermal use of shallow groundwater for heating and cooling purposes has been of increasing interest during the past years. In this context the focal areas are located in the vicinity of the Danube River, which intersects the urban area of Vienna. This is a consequence of excellent aquifers, which predominantly consist of poorly consolidated gravels of Holocene age deposited by the Danube River. These shallow aquifer systems are hydraulically connected to the Danube. In addition, most of the focal areas in Vienna are affected by abandoned meanders and ponds, which are connected to the groundwater and, eventually, to the Danube River. These widespread ponds remain from abandoned gravel pits, which are directly fed by the groundwater. For these abandoned meanders and ponds, the intensity of the hydraulic response to groundwater variations is strongly governed by the degree of colmatation. As thermal groundwater utilization influences the local hydraulic regime by means of well fields, enhanced interflow between surface water and groundwater has to be expected in the near surroundings of abandoned rivers, abandoned meanders, and groundwater ponds. This leads to an attenuation of the capacity of the thermal utilizations, as surface water and groundwater show different annual temperature variations. Depending on the total pumping rate of a geothermal well field, as well as on the spatially varying colmatation of surface waters, restricted zones for thermal groundwater use have to be defined in order to avoid inefficient utilizations. Based on two case studies in the city of Vienna, we aim to show methods based on numerical modelling and empirical studies (observation of gauges) to estimate the degree of colmatation of surface waters and to predict the interaction between thermal groundwater use and surface waters. As the heat budget of shallow surface waters (e.g. small ponds or lentic meanders) is affected by various

  15. Nitrogen Removal in a Horizontal Subsurface Flow Constructed Wetland Estimated Using the First-Order Kinetic Model

    Directory of Open Access Journals (Sweden)

    Lijuan Cui

    2016-11-01

    Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface constructed wetland (HSSF-CW in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N removal rate, N load, seasonal variations in the N removal rate, and environmental factors—such as the area-based constant, temperature, and dissolved oxygen (DO. The effluent ammonia (NH4+-N and nitrate (NO3−-N concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38. The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01, but the NH4+-N load was not correlated with the removal rate (R2 = 0.02, p > 0.01. The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01. The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01. The NH4+-N area rate was correlated with DO (R2 = 0.69, p < 0.01, suggesting that the factors that influenced the N removal rate in this wetland met Liebig’s law of the minimum.
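
The first-order areal model used here combines a temperature-corrected rate constant with exponential decay along the wetland. A sketch using the abstract's NO3−-N values (k20 = 27 m·year−1, θ = 1.004) and an assumed hydraulic loading rate:

```python
import math

def rate_constant(k20, theta, temp_c):
    """Temperature-adjusted area-based first-order rate constant (m/yr)."""
    return k20 * theta ** (temp_c - 20.0)

def outlet_concentration(c_in, k, q):
    """First-order areal model: C_out = C_in * exp(-k / q),
    where q is the hydraulic loading rate (m/yr)."""
    return c_in * math.exp(-k / q)

# NO3--N at 15 degC; the loading rate q = 50 m/yr is an assumption, not a
# value from the study.
k15 = rate_constant(27.0, 1.004, 15.0)
c_out = outlet_concentration(10.0, k15, q=50.0)   # mg/L in -> mg/L out
```

With θ so close to 1, the NO3−-N rate constant is nearly temperature-independent, which matches the abstract's finding that the area-based constants were not correlated with temperature.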

  16. Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems

    KAUST Repository

    N'Doye, Ibrahima

    2015-07-01

    This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using fractional-order adaptive fault observer. Sufficient conditions for the asymptotical convergence of the fractional-order state estimation error, the conventional integer-order and the fractional-order faults estimation error are derived in terms of linear matrix inequalities (LMIs) formulation by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach where the fractional-order α belongs to 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.

  17. Partially ordered models

    NARCIS (Netherlands)

    Fernandez, R.; Deveaux, V.

    2010-01-01

We provide a formal definition and study the basic properties of partially ordered chains (POC). These systems were proposed to model textures in image processing and to represent independence relations between random variables in statistics (in the latter case they are known as Bayesian networks).

  18. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills

    DEFF Research Database (Denmark)

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-01-01

Methane (CH4) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit model (IPCC and Afvalzorg) required......-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH4 generation at Danish landfills, because it defined more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence of not only...... landfills (from the start of disposal until 2020 and until 2100). Through a CH4 mass balance approach, fugitive CH4 emissions from whole sites and a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field...

  19. Spatio-temporal hazard estimation in the Auckland Volcanic Field, New Zealand, with a new event-order model

    Science.gov (United States)

    Bebbington, Mark S.; Cronin, Shane J.

    2011-01-01

    The Auckland Volcanic Field (AVF) with 49 eruptive centres in the last c. 250 ka presents many challenges to our understanding of distributed volcanic field construction and evolution. We re-examine the age constraints within the AVF and perform a correlation exercise matching the well-dated record of tephras from cores distributed throughout the field to the most likely source volcanoes, using thickness and location information and a simple attenuation model. Combining this augmented age information with known stratigraphic constraints, we produce a new age-order algorithm for the field, with errors incorporated using a Monte Carlo procedure. Analysis of the new age model discounts earlier appreciations of spatio-temporal clustering in the AVF. Instead the spatial and temporal aspects appear independent; hence the location of the last eruption provides no information about the next location. The temporal hazard intensity in the field has been highly variable, with over 63% of its centres formed in a high-intensity period between 40 and 20 ka. Another, smaller, high-intensity period may have occurred at the field onset, while the latest event, at 504 ± 5 years B.P., erupted 50% of the entire field's volume. This emphasises the lack of steady-state behaviour that characterises the AVF, which may also be the case in longer-lived fields with a lower dating resolution. Spatial hazard intensity in the AVF under the new age model shows a strong NE-SW structural control of volcanism that may reflect deep-seated crustal or subduction zone processes and matches the orientation of the Taupo Volcanic Zone to the south.

  20. On nonlinear reduced order modeling

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.

    2011-01-01

When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models, because the solutions of complex models are expensive in terms of required computational overhead, rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much smaller than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art techniques for global sampling methods. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
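
The core idea — sampled gradients of the model concentrate in a low-dimensional active subspace — can be demonstrated on a toy function whose variation is confined to a single direction. This is a sketch of active-subspace discovery only, not the manuscript's tensor-free expansion:

```python
import numpy as np

# Toy model f(x) = (a . x)^2 in 20 dimensions: every gradient 2*(a . x)*a is
# parallel to a, so sampled gradients span a one-dimensional active subspace.
rng = np.random.default_rng(1)
dim, n_samples = 20, 50
a = rng.normal(size=dim)
a /= np.linalg.norm(a)

grads = np.empty((n_samples, dim))
for i in range(n_samples):
    xp = rng.normal(size=dim)          # random sample point
    grads[i] = 2.0 * (a @ xp) * a      # analytic gradient of (a . x)^2

_, s, vt = np.linalg.svd(grads, full_matrices=False)
active_dir = vt[0]                     # dominant right singular vector
alignment = abs(active_dir @ a)        # ~1 when the direction is recovered
```

In practice the gradients come from an adjoint or variational solve rather than a closed form, and the singular-value decay of the gradient matrix tells you how many active directions to keep.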

  1. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills.

    Science.gov (United States)

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-06-01

Methane (CH₄) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH₄ generation because its default values for the key parameters were too high for low-organic waste scenarios. The key parameters were the biochemical CH₄ potential (BMP) and the CH₄ generation rate constant (k-value). In comparison to the IPCC model, the Afvalzorg model was more suitable for estimating CH₄ generation at Danish landfills because it defined more appropriate waste categories rather than traditional municipal solid waste (MSW) fractions. Moreover, the Afvalzorg model could better show the influence not only of the total disposed waste amount but also of the various waste categories. By using laboratory-determined BMPs and k-values for shredder, sludge, mixed bulky waste, and street-cleaning waste, the Afvalzorg model was revised. The revised model estimated smaller cumulative CH₄ generation at the four Danish landfills (from the start of disposal until 2020 and until 2100). Through a CH₄ mass balance approach, fugitive CH₄ emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field measurements, indicating that the revised Afvalzorg model can provide practical and accurate estimates of Danish LFG emissions. This study is valuable for both researchers and engineers aiming to predict, control, and mitigate fugitive CH₄ emissions from landfills receiving low-organic waste. Landfill operators use first-order decay (FOD) models to estimate methane (CH₄) generation. A single-phase model (LandGEM) and a traditional model (IPCC) could result in
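
A simplified annual version of such first-order-decay LFG models can be written in a few lines; the waste amounts, k, and L0 below are hypothetical, chosen only to illustrate the model's shape for a low-organic site:

```python
import math

def fod_methane(waste_by_year, k, L0, horizon):
    """Simplified annual first-order-decay LFG model (LandGEM-like):
    CH4 generated in year n = sum over disposal years i <= n of
    k * L0 * M_i * exp(-k * (n - i)), with M_i tonnes landfilled in year i."""
    gen = []
    for year in range(horizon):
        q = sum(k * L0 * m * math.exp(-k * (year - i))
                for i, m in enumerate(waste_by_year) if i <= year)
        gen.append(q)
    return gen

# Hypothetical low-organic site: 10 kt/yr for 10 years, L0 = 30 m3 CH4/t,
# k = 0.05 1/yr (values for illustration only, not the study's).
gen = fod_methane([10_000.0] * 10, k=0.05, L0=30.0, horizon=50)
total = sum(gen)   # bounded by L0 times total waste (up to discretization)
```

The study's revision amounts to replacing the default (L0, k) pairs with laboratory-determined values per waste category and summing the per-category curves.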

  2. A Novel Observer for Lithium-Ion Battery State of Charge Estimation in Electric Vehicles Based on a Second-Order Equivalent Circuit Model

    Directory of Open Access Journals (Sweden)

    Bizhong Xia

    2017-08-01

Full Text Available Accurate state of charge (SOC) estimation can prolong lithium-ion battery life and improve its performance in practice. This paper proposes a new method for SOC estimation. The second-order resistor-capacitor (2RC) equivalent circuit model (ECM) is applied to describe the dynamic behavior of the lithium-ion battery when deriving the state space equations. A novel method for SOC estimation is then presented. This method does not require any matrix calculation, so the computational cost can be very low, making it more suitable for hardware implementation. The Federal Urban Driving Schedule (FUDS), the New European Driving Cycle (NEDC), and the West Virginia Suburban Driving Schedule (WVUSUB) experiments are carried out to evaluate the performance of the proposed method. Experimental results show that the SOC estimation error can converge to the 3% error boundary within 30 seconds when the initial SOC estimation error is 20%, and that the proposed method can maintain an estimation error of less than 3% with 1% voltage noise and 5% current noise. Further, the proposed method has excellent robustness against parameter disturbance. It also has higher estimation accuracy than the extended Kalman filter (EKF), with decreased hardware requirements and a faster convergence rate.

  3. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    Science.gov (United States)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameters automatically at different aging stages, a multi-step model parameter identification method based on lexicographic optimization is designed especially for electric vehicle operating conditions. As the battery's available energy changes with the applied load current profile, the relationship between the remaining energy loss and the state of charge, the average current, and the average squared current is modeled. The SOE at different operating conditions and different aging stages is estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for electric vehicle online applications.

  4. A reduced-order adaptive neuro-fuzzy inference system model as a software sensor for rapid estimation of five-day biochemical oxygen demand

    Science.gov (United States)

    Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin

    2013-07-01

The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models based on both the developed discrepancy ratio and threshold statistics revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best-fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS model had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the predictions bracketed by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.

  5. Higher Order Numerical Methods and Use of Estimation Techniques to Improve Modeling of Two-Phase Flow in Pipelines and Wells

    Energy Technology Data Exchange (ETDEWEB)

    Lorentzen, Rolf Johan

    2002-04-01

    The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. 
The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation

  6. Simultaneous estimation of phase and its pth order derivatives

    Science.gov (United States)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2016-05-01

    One unaddressed challenge in optical metrology has been the measurement of higher order derivatives of rough specimens subjected to loading. In this paper, we investigate an approach that allows for the simultaneous estimation of the phase and its higher order derivatives from a noisy interference field. The interference phase is represented as a weighted linear combination of linearly independent Fourier basis functions. The interference field is represented as a state space model with the weights of the basis functions as the elements of the state vector. These weights are accurately estimated by employing the extended Kalman filter. The interference phase and phase derivatives are subsequently computed using the estimated weights. Since the Fourier basis functions are infinitely differentiable, phase derivatives of any arbitrary order can be estimated. Simulation and experimental results are provided to substantiate the effectiveness of the proposed method in the presence of high noise.
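The key idea above, representing the phase in an infinitely differentiable basis so that derivatives of any order follow analytically from the estimated weights, can be sketched without the Kalman filter machinery. A hedged illustration using a plain least-squares fit of Fourier weights to phase samples (the paper instead applies an extended Kalman filter to the noisy interference field):

```python
import numpy as np

# Hypothetical 1-D sketch: represent a phase profile as a linear combination of
# Fourier basis functions; the derivative follows by differentiating the basis.
x = np.linspace(0, 1, 200)
K = 5                                     # number of harmonics (assumption)
cols = [np.ones_like(x)]                  # basis columns: 1, cos, sin, ...
dcols = [np.zeros_like(x)]                # their analytic first derivatives
for k in range(1, K + 1):
    w = 2 * np.pi * k
    cols += [np.cos(w * x), np.sin(w * x)]
    dcols += [-w * np.sin(w * x), w * np.cos(w * x)]
A, dA = np.column_stack(cols), np.column_stack(dcols)

phase = 3 * np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
weights, *_ = np.linalg.lstsq(A, phase, rcond=None)   # estimate basis weights
dphase_est = dA @ weights                             # derivative from same weights
dphase_true = 6 * np.pi * np.cos(2 * np.pi * x) - 2 * np.pi * np.sin(4 * np.pi * x)
print(np.max(np.abs(dphase_est - dphase_true)))       # tiny: phase lies in the basis
```

Higher-order derivatives follow the same way: differentiate the basis columns again and apply the already-estimated weights.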

  7. Reduced order ARMA spectral estimation of ocean waves

    Digital Repository Service at National Institute of Oceanography (India)

    Mandal, S.; Witz, J.A.; Lyons, G.J.

    Several system identification techniques are available for determining parametric models of dynamic systems based on the input and output of stochastic processes such as ocean waves. Here we establish a reduced order...

  8. Optical method of atomic ordering estimation

    Energy Technology Data Exchange (ETDEWEB)

    Prutskij, T. [Instituto de Ciencias, BUAP, Privada 17 Norte, No 3417, col. San Miguel Huyeotlipan, Puebla, Pue. (Mexico); Attolini, G. [IMEM/CNR, Parco Area delle Scienze 37/A - 43010, Parma (Italy); Lantratov, V.; Kalyuzhnyy, N. [Ioffe Physico-Technical Institute, 26 Polytekhnicheskaya, St Petersburg 194021, Russian Federation (Russian Federation)

    2013-12-04

    It is well known that within metal-organic vapor-phase epitaxy (MOVPE) grown semiconductor III-V ternary alloys atomically ordered regions are spontaneously formed during the epitaxial growth. This ordering leads to bandgap reduction and to valence bands splitting, and therefore to anisotropy of the photoluminescence (PL) emission polarization. The same phenomenon occurs within quaternary semiconductor alloys. While the ordering in ternary alloys is widely studied, for quaternaries there have been only a few detailed experimental studies of it, probably because of the absence of appropriate methods of its detection. Here we propose an optical method to reveal atomic ordering within quaternary alloys by measuring the PL emission polarization.

  9. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models and the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  10. Joint fundamental frequency and order estimation using optimal filtering

    Directory of Open Access Journals (Sweden)

    Jakobsson Andreas

    2011-01-01

    In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are or can be interpreted as filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some filtering methods based on optimal filtering that have recently been proposed, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.
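The joint estimation of fundamental frequency and harmonic order described above can be sketched as a least-squares harmonic fit over candidate (f0, L) pairs, with an MDL-style penalty standing in for the paper's MAP-derived criterion (the grid, penalty constant, and all names are assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, N = 8000, 400
n = np.arange(N) / fs
f0_true, L_true = 220.0, 3
# Harmonic signal with L_true harmonics plus white noise.
x = sum(np.cos(2 * np.pi * f0_true * (l + 1) * n + l) for l in range(L_true))
x = x + rng.normal(0, 0.1, N)

def fit_cost(f0, L):
    # Least-squares fit of L harmonics at fundamental f0 (linear in amplitudes).
    Z = np.column_stack([f(2 * np.pi * f0 * (l + 1) * n)
                         for l in range(L) for f in (np.cos, np.sin)])
    a, *_ = np.linalg.lstsq(Z, x, rcond=None)
    r = x - Z @ a
    # MDL-style criterion: data fit plus a penalty on model order (assumption).
    return N * np.log(r @ r / N) + 3 * L * np.log(N)

cands = [(f0, L) for f0 in np.arange(180.0, 260.0, 1.0) for L in range(1, 6)]
f0_hat, L_hat = min(cands, key=lambda c: fit_cost(*c))
print(f0_hat, L_hat)
```

The grid search over f0 is crude; in practice the candidate frequencies would be refined around the coarse minimum, but the joint structure of the search is the same.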

  11. Generalized linear model for partially ordered data.

    Science.gov (United States)

    Zhang, Qiang; Ip, Edward Haksing

    2012-01-13

    Within the rich literature on generalized linear models, substantial efforts have been devoted to models for categorical responses that are either completely ordered or completely unordered. Few studies have focused on the analysis of partially ordered outcomes, which arise in practically every area of study, including medicine, the social sciences, and education. To fill this gap, we propose a new class of generalized linear models--the partitioned conditional model--that includes models for both ordinal and unordered categorical data as special cases. We discuss the specification of the partitioned conditional model and its estimation. We use an application of the method to a sample of the National Longitudinal Study of Youth to illustrate how the new method is able to extract from partially ordered data useful information about smoking youths that is not possible using traditional methods. Copyright © 2011 John Wiley & Sons, Ltd.

  12. Randomized Local Model Order Reduction

    NARCIS (Netherlands)

    Buhr, Andreas; Smetana, Kathrin

    2017-01-01

    In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. Those spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation

  13. Parameters and Fractional Differentiation Orders Estimation for Linear Continuous-Time Non-Commensurate Fractional Order Systems

    KAUST Repository

    Belkhatir, Zehor

    2017-05-31

    This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions and the first-order Newton methods. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of smooth unknown input and fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.

  14. Are Low-order Covariance Estimates Useful in Error Analyses?

    Science.gov (United States)

    Baker, D. F.; Schimel, D.

    2005-12-01

    Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb

  15. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  16. Accurate estimates of solutions of second order recursions

    NARCIS (Netherlands)

    Mattheij, R.M.M.

    1975-01-01

    Two important types of two dimensional matrix-vector and second order scalar recursions are studied. Both types possess two kinds of solutions (to be called forward and backward dominant solutions). For the directions of these solutions sharp estimates are derived, from which the solutions

  17. Optimal heavy tail estimation - Part 1: Order selection

    Science.gov (United States)

    Mudelsee, Manfred; Bermejo, Miguel A.

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.

  18. Optimal heavy tail estimation – Part 1: Order selection

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2017-12-01

    The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
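Both records above concern choosing the order k, the number of largest values used for tail-exponent estimation. The classical Hill estimator that such a selector feeds can be sketched in a few lines (k is fixed by hand here as an assumption; the paper's data-adaptive, simulation-based selector is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)

def hill_alpha(data, k):
    """Hill estimator of the tail exponent alpha from the k largest values."""
    xs = np.sort(data)[::-1]                     # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])        # log-spacings above the threshold
    return 1.0 / logs.mean()

# Classic Pareto(alpha = 1.5) sample: P(X > x) = x^(-1.5) for x >= 1.
alpha_true = 1.5
x = rng.pareto(alpha_true, 20000) + 1.0
alpha_hat = hill_alpha(x, 500)
print(alpha_hat)  # near 1.5
```

The estimate is sensitive to k: too small a k inflates variance, too large a k lets non-tail data bias the result, which is exactly why a principled order selector is needed.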

  19. Second order statistics of bilinear forms of robust scatter estimators

    KAUST Repository

    Kammoun, Abla

    2015-08-12

    This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.

  20. A Parametric Procedure for Ultrametric Tree Estimation from Conditional Rank Order Proximity Data.

    Science.gov (United States)

    Young, Martin R.; DeSarbo, Wayne S.

    1995-01-01

    A new parametric maximum likelihood procedure is proposed for estimating ultrametric trees for the analysis of conditional rank order proximity data. Technical aspects of the model and the estimation algorithm are discussed, and Monte Carlo results illustrate its application. A consumer psychology application is also examined. (SLD)

  1. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method. Th

  2. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  3. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  4. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  5. Cognitive profiles and heritability estimates in the Old Order Amish.

    Science.gov (United States)

    Kuehner, Ryan M; Kochunov, Peter; Nugent, Katie L; Jurius, Deanna E; Savransky, Anya; Gaudiot, Christopher; Bruce, Heather A; Gold, James; Shuldiner, Alan R; Mitchell, Braxton D; Hong, L Elliot

    2016-08-01

    This study aimed to establish the applicability of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) in the Old Order Amish (OOA) and to assess the genetic contribution toward the RBANS total score and its cognitive domains using a large family-based sample of OOA. RBANS data were collected in 103 OOA individuals from Lancaster County, Pennsylvania, including 85 individuals without psychiatric illness and 18 individuals with current psychiatric diagnoses. The RBANS total score and all five cognitive domains in nonpsychiatric OOA were within half a SD of the normative data of the general population. The RBANS total score was highly heritable (h=0.51, P=0.019). OOA with psychiatric diagnoses had a numerically lower RBANS total score and domain scores compared with the nonpsychiatric participants. The RBANS appears to be a suitable cognitive battery for the OOA population as measurements obtained from the OOA are comparable with normative data in the US population. The heritability estimated from the OOA is in line with heritabilities of other cognitive batteries estimated in other populations. These results support the use of RBANS in cognitive assessment, clinical care, and behavioral genetic studies of neuropsychological functioning in this population.

  6. Asymptotic Optimality of Estimating Function Estimator for CHARN Model

    Directory of Open Access Journals (Sweden)

    Tomoyuki Amano

    2012-01-01

    The CHARN model is a well-known and important model in finance, which includes many financial time series models and can describe the return processes of assets. One of the most fundamental estimators for financial time series models is the conditional least squares (CL) estimator. However, it was recently shown that the optimal estimating function estimator (G estimator) is better than the CL estimator for some time series models in the sense of efficiency. In this paper, we examine the efficiencies of the CL and G estimators for the CHARN model and derive the condition under which the G estimator is asymptotically optimal.

  7. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    NARCIS (Netherlands)

    Baran, Ismet; Tutum, Cem C.; Hattel, Jesper H.

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with

  8. Investigation of Effectiveness of Order Review and Release Models in Make to Order Supply Chain

    Directory of Open Access Journals (Sweden)

    Kundu Kaustav

    2016-01-01

    Nowadays customisation is becoming more common due to vast requirements from customers, for which industries are trying to use a make-to-order (MTO) strategy. Due to high variation in the process, workload control models are extensively used in job-shop companies, which usually adopt the MTO strategy. Some authors have tried to implement workload control models, order review and release systems, in non-repetitive manufacturing companies where there is a dominant flow in production. Those models perform well on the shop floor, but their performance has never been investigated in high-variation situations like the MTO supply chain. This paper starts with an introduction to the particular issues in MTO companies and a general overview of order review and release systems widely used in industry. Two order review and release systems, the Limited and Balanced models, particularly suitable for flow-shop systems, are applied to the MTO supply chain, where the processing times are difficult to estimate due to high variation. Simulation results show that the Balanced model performs much better than the Limited model if the processing times can be estimated precisely.

  9. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    Science.gov (United States)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations and facial, vocal, gestural, physiological, and central nervous signals, through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks, has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel means of efficiently estimating human affective states.
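A second-order, two-variable instance of the polynomial regression idea described above can be sketched as ordinary least squares on polynomial features (synthetic data and all names are assumptions; this is not the authors' skin-conductance pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: two input features (stand-ins for physiological descriptors)
# and a target generated by a known second-order polynomial plus noise.
n = 300
X = rng.uniform(-1, 1, size=(n, 2))
y = (1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2
     + 0.3 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, n))

def poly_features(X):
    """Second-order polynomial design matrix: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

A = poly_features(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares coefficient fit
pred = A @ coef
r = np.corrcoef(pred, y)[0, 1]                 # correlation of fit vs. target
print(r)  # high correlation
```

Higher orders simply add more monomial columns; the fit stays linear in the coefficients, which is what keeps the method simple compared with kernel or neural-network regressors.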

  10. XY model with higher-order exchange.

    Science.gov (United States)

    Žukovič, Milan; Kalagov, Georgii

    2017-08-01

    An XY model, generalized by inclusion of up to an infinite number of higher-order pairwise interactions with an exponentially decreasing strength, is studied by spin-wave theory and Monte Carlo simulations. At low temperatures the model displays a quasi-long-range-order phase characterized by an algebraically decaying correlation function with the exponent η=T/[2πJ(p,α)], nonlinearly dependent on the parameters p and α that control the number of the higher-order terms and the decay rate of their intensity, respectively. At higher temperatures the system shows a crossover from the continuous Berezinskii-Kosterlitz-Thouless to the first-order transition for the parameter values corresponding to a highly nonlinear shape of the potential well. The role of topological excitations (vortices) in changing the nature of the transition is discussed.

  11. IDC Reengineering Phase 2 & 3 Rough Order of Magnitude (ROM) Cost Estimate Summary (Leveraged NDC Case).

    Energy Technology Data Exchange (ETDEWEB)

    Harris, James M.; Prescott, Ryan; Dawson, Jericah M.; Huelskamp, Robert M.

    2014-11-01

    Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.

  12. In-Situ Residual Tracking in Reduced Order Modelling

    Directory of Open Access Journals (Sweden)

    Joseph C. Slater

    2002-01-01

    Proper orthogonal decomposition (POD) based reduced-order modelling is demonstrated to be a weighted residual technique similar to Galerkin's method. Estimates of weighted residuals of neglected modes are used to determine the relative importance of neglected modes to the model. The cumulative effects of neglected modes can be used to estimate the error in the reduced order model. Thus, once the snapshots have been obtained under prescribed training conditions, the need to perform full-order simulations for comparison is eliminated. This has the potential to allow the analyst to initiate further training when the reduced modes are no longer sufficient to accurately represent the predominant phenomenon of interest. The response of a fluid moving at Mach 1.2 above a panel to a forced localized oscillation of the panel, at and away from the training operating conditions, is used to demonstrate the evaluation method.
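The POD construction underlying the reduced-order model above can be sketched by extracting a basis from snapshots with an SVD and checking the projection residual of a new state, a simplified stand-in for the in-situ residual tracking itself (the snapshot family and rank are assumptions):

```python
import numpy as np

x = np.linspace(0, 1, 100)
# Snapshot matrix: each column is a state at one training condition (assumed
# here to be a family of sine profiles with slowly varying wavenumber).
snaps = np.column_stack([np.sin(np.pi * (1.0 + 0.02 * i) * x) for i in range(20)])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
r = 3
basis = U[:, :r]                    # rank-r POD basis (dominant left singular vectors)
state = np.sin(np.pi * 1.21 * x)    # new state near the training conditions
recon = basis @ (basis.T @ state)   # orthogonal projection onto the POD subspace
residual = np.linalg.norm(state - recon) / np.linalg.norm(state)
print(residual)  # small: the neglected modes carry little of this state
```

Monitoring this residual as operating conditions drift away from the training set is the cue, per the abstract, that further snapshot training is needed.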

  13. An Almost Integration-free Approach to Ordered Response Models

    NARCIS (Netherlands)

    van Praag, B.M.S.; Ferrer-i-Carbonell, A.

    2006-01-01

    In this paper we propose an alternative approach to the estimation of ordered response models. We show that the Probit method may be replaced by a simple OLS approach, called P(robit)OLS, without any loss of efficiency. This method can be generalized to the analysis of panel data. For large-scale

  14. Dynamical models of happiness with fractional order

    Science.gov (United States)

    Song, Lei; Xu, Shiyun; Yang, Jianying

    2010-03-01

    This present study focuses on a dynamical model of happiness described through fractional-order differential equations. By categorizing people of different personality and different impact factor of memory (IFM) with different set of model parameters, it is demonstrated via numerical simulations that such fractional-order models could exhibit various behaviors with and without external circumstance. Moreover, control and synchronization problems of this model are discussed, which correspond to the control of emotion as well as emotion synchronization in real life. This study is an endeavor to combine the psychological knowledge with control problems and system theories, and some implications for psychotherapy as well as hints of a personal approach to life are both proposed.

  15. Optimal inventory management and order book modeling

    KAUST Repository

    Baradel, Nicolas

    2018-02-16

    We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic of the order book, similar to the one considered in the Queue-Reactive models [14, 20, 21], the MM and the HFT define their trading strategy by optimizing the expected utility of terminal wealth, while the IB has a prescheduled task to sell or buy many shares of the considered asset. We derive the variational partial differential equations that characterize the value functions of the MM and HFT and explain how almost optimal control can be deduced from them. We then provide a first illustration of the interactions that can take place between these different market participants by simulating the dynamic of an order book in which each of them plays his own (optimal) strategy.

  16. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult

  17. Tracking Skill Acquisition with Cognitive Diagnosis Models: A Higher-Order, Hidden Markov Model with Covariates

    Science.gov (United States)

    Wang, Shiyu; Yang, Yan; Culpepper, Steven Andrew; Douglas, Jeffrey A.

    2018-01-01

    A family of learning models that integrates a cognitive diagnostic model and a higher-order, hidden Markov model in one framework is proposed. This new framework includes covariates to model skill transition in the learning environment. A Bayesian formulation is adopted to estimate parameters from a learning model. The developed methods are…

  18. Estimation of the convergence order of rigorous coupled-wave analysis for OCD metrology

    Science.gov (United States)

    Ma, Yuan; Liu, Shiyuan; Chen, Xiuguo; Zhang, Chuanwei

    2011-12-01

In most cases of optical critical dimension (OCD) metrology, when applying rigorous coupled-wave analysis (RCWA) to optical modeling, a high order of Fourier harmonics is usually set to guarantee the convergence of the final results. However, the total number of floating point operations grows dramatically as the truncation order increases. Therefore, it is critical to choose an appropriate order to obtain high computational efficiency without losing much accuracy. In this paper, the convergence order associated with the structural and optical parameters has been estimated through simulation. The results indicate that the convergence order is linear in the period of the sample when the other parameters are fixed, both for planar diffraction and conical diffraction. The illuminating wavelength also affects the convergence of the final result. Further investigation concentrated on the ratio of illuminating wavelength to period shows that the convergence order decreases as the ratio grows and, when the ratio is fixed, varies only slightly, especially in a specific range of wavelengths. This characteristic can be applied to estimate the optimum convergence order of given samples to obtain high computational efficiency.

  19. A fractional-order model for MINMOD Millennium.

    Science.gov (United States)

    Cho, Yongjin; Kim, Imbunm; Sheen, Dongwoo

    2015-04-01

MINMOD Millennium has been widely used to estimate insulin sensitivity (SI) in glucose-insulin dynamics. In order to explain the rheological behavior of glucose-insulin dynamics, we attempt to modify MINMOD Millennium with fractional-order differentiation of order α ∈ (0, 1]. We show that the new modified model has non-negative, bounded solutions and a stable equilibrium point. Quasi-optimal fractional orders and parameters are estimated by using a nonlinear weighted least-squares method, the Levenberg-Marquardt algorithm, and the fractional Adams-Bashforth-Moulton method for several subjects (normal subjects and type 2 diabetic patients). The numerical results confirm that SI is significantly lower in diabetics than in non-diabetics. In addition, we explain the new factor (τ^(1-α)) determining glucose tolerance and the relation between SI and τ^(1-α). Copyright © 2015 Elsevier Inc. All rights reserved.
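A minimal sketch of fractional-order relaxation of the kind underlying such models, using an implicit Grunwald-Letnikov scheme (a simpler stand-in for the fractional Adams-Bashforth-Moulton method named in the abstract; the test equation D^α y = -y, y(0) = 1, and all step sizes are illustrative assumptions):

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * C(alpha, j), via recurrence."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def solve_relaxation(alpha, T=2.0, N=1000):
    """Implicit scheme for the Caputo equation D^alpha y = -y, y(0) = 1."""
    h = T / N
    ha = h ** alpha
    w = gl_weights(alpha, N)
    y = [1.0]
    for n in range(1, N + 1):
        # History term of the shifted series y - y(0).
        hist = sum(w[j] * (y[n - j] - 1.0) for j in range(1, n + 1))
        y.append((1.0 - hist) / (1.0 + ha))
    return y

y1 = solve_relaxation(1.0)    # alpha = 1 must reduce to ordinary decay exp(-t)
y08 = solve_relaxation(0.8)   # subdiffusive, Mittag-Leffler-type relaxation
print(round(y1[-1], 5), round(y08[-1], 5))
```

At α = 1 the weights collapse to (1, -1, 0, ...) and the scheme becomes implicit Euler for y' = -y, which is a convenient correctness check.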

  20. Extreme Earthquake Risk Estimation by Hybrid Modeling

    Science.gov (United States)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Garcia, S.; Emerson, D.; Perea, N.; Salazar, A.; Moulinec, C.

    2012-12-01

The estimation of the hazard and the economic consequences, i.e. the risk, associated with the occurrence of extreme magnitude earthquakes in the neighborhood of urban or lifeline infrastructure, such as the 11 March 2011 Mw 9 Tohoku, Japan, event, represents a complex challenge, as it involves the propagation of seismic waves in large volumes of the earth's crust, from unusually large seismic source ruptures up to the infrastructure location. The large number of casualties and huge economic losses observed for those earthquakes, some of which have a frequency of occurrence of hundreds or thousands of years, calls for the development of new paradigms and methodologies in order to generate better estimates, both of the seismic hazard and of its consequences, and if possible, to estimate the probability distributions of their ground intensities and of their economic impacts (direct and indirect losses), in order to implement technological and economic policies that mitigate and reduce those consequences as much as possible. Here we propose a hybrid modeling approach which uses 3D seismic wave propagation (3DWP) and neural network (NN) modeling in order to estimate the seismic risk of extreme earthquakes. The 3DWP modeling is achieved by using a 3D finite difference code run on the ~100,000-core Blue Gene/Q supercomputer of the STFC Daresbury Laboratory in the UK, combined with empirical Green function (EGF) techniques and NN algorithms. In particular, the 3DWP is used to generate broadband samples of the 3D wave propagation of plausible extreme earthquake scenarios corresponding to synthetic seismic sources, and to enlarge those samples by using feed-forward NNs. We present the results of the validation of the proposed hybrid modeling for Mw 8 subduction events, and show examples of its application to the estimation of the hazard and the economic consequences for extreme Mw 8.5 subduction earthquake scenarios with seismic sources in the Mexican

  1. Global weighted estimates for second-order nondivergence elliptic ...

    Indian Academy of Sciences (India)

    Fengping Yao

    2018-03-21

E-mail: yfp@shu.edu.cn. MS received 18 August 2015; revised 15 January 2016; accepted 28 ... A is (δ, R)-vanishing and is a time-independent/time-dependent Reifenberg flat domain; Byun and Wang [7,8] studied L^p estimates.

  2. Order Quantity Distributions: Estimating an Adequate Aggregation Horizon

    Directory of Open Access Journals (Sweden)

    Eriksen Poul Svante

    2016-09-01

In this paper, the demand faced by a company in the form of customer orders is investigated from both an explorative numerical and an analytical perspective. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods assume that customer orders are independently and identically distributed and stem from symmetrical distributions. To investigate and illustrate the need to aggregate demand in order to meet these assumptions, a simple methodological framework for testing their validity and for analyzing the behavior of orders is developed. The paper also presents an analytical approach to identifying the aggregation horizon needed to achieve stable demand. Furthermore, a case study application of the presented framework is presented and discussed.
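The analytical idea, that the skewness of a sum of n i.i.d. order quantities decays as γ₁/√n, can be sketched as follows (the lognormal order-size distribution and the symmetry tolerance are invented for illustration):

```python
import math
import random

random.seed(1)

# Hypothetical single-order quantities: heavily right-skewed (lognormal).
orders = [random.lognormvariate(3.0, 1.0) for _ in range(20000)]

def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / s2 ** 1.5

g1 = skewness(orders)

# Skewness of a sum of n i.i.d. quantities is g1 / sqrt(n), so the smallest
# horizon with skewness below a tolerance `target` is n = ceil((g1/target)^2).
target = 0.5
horizon = math.ceil((g1 / target) ** 2)

# Empirical check: aggregate consecutive orders over that horizon.
agg = [sum(orders[i:i + horizon]) for i in range(0, len(orders) - horizon, horizon)]
print(round(g1, 2), horizon, round(skewness(agg), 2))
```

Aggregated demand is markedly more symmetric than single orders, which is the effect the paper exploits to justify standard planning assumptions.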

  3. Joint estimation of the fractional differentiation orders and the unknown input for linear fractional non-commensurate system

    KAUST Repository

    Belkhatir, Zehor

    2015-11-05

This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as the input of a fractional model of neurovascular coupling, along with the fractional differentiation orders are presented in both noise-free and noisy cases.

  4. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

In this paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model of the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
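The balanced truncation step can be sketched in a few lines of numpy on a toy stable system (all matrices below are invented, and the Lyapunov equations are solved by brute-force Kronecker vectorization, which is only viable for small state dimensions; this is not ISA's implementation):

```python
import numpy as np

# Toy stable LTI plant: two slow (dominant) and two fast (negligible) modes.
A = np.diag([-1.0, -2.0, -50.0, -80.0])
B = np.array([[1.0], [1.0], [0.1], [0.05]])
C = np.array([[1.0, 0.5, 0.1, 0.05]])

def lyap(F, W):
    """Solve F X + X F^T + W = 0 by Kronecker vectorization (small n only)."""
    n = F.shape[0]
    M = np.kron(F, np.eye(n)) + np.kron(np.eye(n), F)
    X = np.linalg.solve(M, -W.reshape(-1)).reshape(n, n)
    return 0.5 * (X + X.T)                       # symmetrize against round-off

P = lyap(A, B @ B.T)                             # controllability Gramian
Q = lyap(A.T, C.T @ C)                           # observability Gramian

# Square-root balancing: Hankel singular values and truncation.
Lp, Lq = np.linalg.cholesky(P), np.linalg.cholesky(Q)
U, s, Vt = np.linalg.svd(Lq.T @ Lp)              # s = Hankel singular values
r = int(np.sum(s > 1e-3 * s[0]))                 # keep states carrying energy
sq = np.sqrt(s[:r])
T = Lp @ Vt[:r].T / sq                           # balancing transformation
Tinv = (U[:, :r] / sq).T @ Lq.T

Ar, Br, Cr = Tinv @ A @ T, Tinv @ B, C @ T
print(r, np.round(s, 4))
```

The Hankel singular values quantify how much each balanced state contributes to the input-output behavior; truncating the small ones yields the low-order model the MPC is designed on, and the discarded tail bounds the model error the paper's Kalman estimators have to absorb.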

  5. Model for traffic emissions estimation

    Science.gov (United States)

    Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.

    A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.

  6. Are Quantum Models for Order Effects Quantum?

    Science.gov (United States)

    Moreira, Catarina; Wichert, Andreas

    2017-12-01

The application of principles of Quantum Mechanics in areas outside of physics has been getting increasing attention in the scientific community in an emergent discipline called Quantum Cognition. These principles have been applied to explain paradoxical situations that cannot be easily explained through classical theory. In quantum probability, events are characterised by a superposition state, which is represented by a state vector in an N-dimensional vector space. The probability of an event is given by the squared magnitude of the projection of this superposition state onto the desired subspace. This geometric approach is very useful for explaining paradoxical findings that involve order effects, but do we really need quantum principles for models that only involve projections? This work has two main goals. First, it is still not clear in the literature if a quantum projection model has any advantage over a classical projection. We compared both models and concluded that the quantum projection model achieves the same results as its classical counterpart, because the quantum interference effects play no role in the computation of the probabilities. Second, it intends to propose an alternative relativistic interpretation for the rotation parameters that are involved in both classical and quantum models. In the end, instead of interpreting these parameters as a similarity measure between questions, we propose that they emerge due to the lack of knowledge concerning a personal basis state and due to uncertainties about the state of the world and the context of the questions.
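The projection calculus described here fits in a few lines; the two-dimensional state and projector angles below are invented purely to show that non-commuting projectors produce order effects:

```python
import numpy as np

# Two-dimensional belief state and two non-commuting "yes" projectors
# (state and angles are invented for illustration).
psi = np.array([1.0, 0.0])                       # unit initial state

def projector(theta):
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

A = projector(np.pi / 6)                         # "yes" subspace for question A
B = projector(np.pi / 3)                         # "yes" subspace for question B

def prob_seq(first, second, state):
    """P(first = yes, then second = yes) via successive projections."""
    s1 = first @ state
    p1 = s1 @ s1                                 # squared projection magnitude
    s2 = second @ (s1 / np.sqrt(p1))             # collapse, renormalize, project
    return p1 * (s2 @ s2)

p_ab = prob_seq(A, B, psi)
p_ba = prob_seq(B, A, psi)
print(round(float(p_ab), 4), round(float(p_ba), 4))
```

Swapping the question order changes the probability (9/16 versus 3/16 here); as the paper argues, the same numbers follow from a purely classical projection account, since no interference terms enter the computation.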

  7. Consistent Estimation of Partition Markov Models

    Directory of Open Access Journals (Sweden)

    Jesús E. García

    2017-04-01

The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of the following: given a size-n realization of the process, find a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, as n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
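A toy version of the estimation strategy, with an invented three-state chain in which states 0 and 1 deliberately share a transition law (so the minimal partition is {{0, 1}, {2}}); the closeness test below is a crude stand-in for the paper's consistent metric:

```python
import random

random.seed(7)

# True transition rows; states 0 and 1 share the same law by construction.
P = {0: [0.2, 0.3, 0.5], 1: [0.2, 0.3, 0.5], 2: [0.6, 0.2, 0.2]}

x = [0]
for _ in range(100000):
    x.append(random.choices([0, 1, 2], weights=P[x[-1]])[0])

# Empirical transition rows from the realization.
counts = {s: [0, 0, 0] for s in P}
for a, b in zip(x, x[1:]):
    counts[a][b] += 1
rows = {s: [c / sum(counts[s]) for c in counts[s]] for s in counts}

# Merge states whose empirical rows are close into one part of the partition.
def close(r1, r2, tol=0.02):
    return max(abs(a - b) for a, b in zip(r1, r2)) < tol

parts = []
for s in sorted(rows):
    for part in parts:
        if close(rows[s], rows[part[0]]):
            part.append(s)
            break
    else:
        parts.append([s])
print(parts)
```

As the realization grows, the empirical rows concentrate around the true ones and the recovered partition stabilizes, which is the consistency property the paper proves.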

  8. Multivariable robust adaptive controller using reduced-order model

    Directory of Open Access Journals (Sweden)

    Wei Wang

    1990-04-01

In this paper a multivariable robust adaptive controller is presented for a plant with bounded disturbances and unmodeled dynamics due to plant-model order mismatches. The robust stability of the closed-loop system is achieved by using the normalization technique and a least squares parameter estimation scheme with dead zones. The weighting polynomial matrices are incorporated into the control law, so that open-loop unstable and/or nonminimum-phase plants can be handled.

  9. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

    Lellmann, Jan

    2013-01-01

    We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data. © 2013 Springer-Verlag.

  10. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  11. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  12. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    Science.gov (United States)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transformation (FrFT) in optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, this method can estimate the dispersion slope and further calculate the third-order dispersion. The simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual polarization 16 quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is shown to be robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search step with coarse and fine granularity is chosen to find the optimal order of the FrFT. The third-order dispersion estimation method based on the FrFT can be used to monitor the third-order dispersion in optical fiber systems.

  13. Reduced order model of draft tube flow

    International Nuclear Information System (INIS)

    Rudolf, P; Štefan, D

    2014-01-01

Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on POD of steady flows corresponding to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM enables interpolation in between the operating points, exploiting the knowledge about the significance of particular eigenmodes, and thus reconstruction of the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for higher resolution of the snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for insight into the flow physics of draft tube flows (especially energy transfers in between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations.
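The snapshot-POD-interpolation pipeline can be sketched with numpy on synthetic one-dimensional "fields" (the two spatial modes, the operating-point parameterization, and the noise level are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshots: each column is a "flow field" at one operating point,
# built from two dominant spatial structures plus small-scale noise.
n, m = 500, 30
x = np.linspace(0.0, 1.0, n)
mode1, mode2 = np.sin(np.pi * x), np.sin(2 * np.pi * x)
ops = np.linspace(0.5, 1.5, m)                   # operating-point parameter
snapshots = (np.outer(mode1, ops) + np.outer(mode2, ops ** 2)
             + 1e-3 * rng.standard_normal((n, m)))

# POD: left singular vectors are the energy-ranked eigenmodes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1      # modes capturing 99.9% energy

# Reduced order model: interpolate modal coefficients between operating points.
coeffs = U[:, :r].T @ snapshots                  # r x m coefficient table
def rom(op):
    c = np.array([np.interp(op, ops, row) for row in coeffs])
    return U[:, :r] @ c

err = np.linalg.norm(rom(ops[10]) - snapshots[:, 10]) / np.linalg.norm(snapshots[:, 10])
print(r, round(float(err), 6))
```

The energy criterion recovers the two underlying structures (r = 2), and interpolating the modal coefficients along the operating-point axis is exactly the ROM interpolation described above.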

  14. Reduced order modeling of fluid/structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.

  15. A Simple Method for Estimation of Parameters in First order Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Miklos, Robert

    2014-01-01

    A simple method for estimation of parameters in first order systems with time delays is presented in this paper. The parameter estimation approach is based on a step response for the open loop system. It is shown that the estimation method does not require a complete step response, only a part of...
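The idea of estimating a first-order-plus-dead-time model from a step response can be sketched with the classic two-point method; the plant values below are invented and the 28.3%/63.2% fractions are the standard choices, so this is an illustration rather than the paper's specific algorithm:

```python
import math

# Simulated open-loop step response of a first-order plant with delay
# (true values, assumed for illustration: K = 2.0, tau = 5.0, theta = 1.0).
K_true, tau_true, theta_true = 2.0, 5.0, 1.0
ts = [0.05 * i for i in range(1200)]
ys = [K_true * (1 - math.exp(-(t - theta_true) / tau_true)) if t > theta_true else 0.0
      for t in ts]

# Two-point method: K from the settled value; tau and theta from the times at
# which the response reaches 28.3% and 63.2% of it.
K_hat = ys[-1]
def crossing(frac):
    target = frac * K_hat
    return next(t for t, y in zip(ts, ys) if y >= target)

t1, t2 = crossing(0.283), crossing(0.632)
tau_hat = 1.5 * (t2 - t1)
theta_hat = t2 - tau_hat
print(round(K_hat, 3), round(tau_hat, 3), round(theta_hat, 3))
```

Only the two crossing times and a settled value enter the estimate; variants that avoid waiting for the final value exist, which is in the spirit of the paper's claim that a complete step response is not required.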

  16. Aeroelastic simulation using CFD based reduced order models

    International Nuclear Information System (INIS)

    Zhang, W.; Ye, Z.; Li, H.; Yang, Q.

    2005-01-01

This paper aims at providing an accurate and efficient method for aeroelastic simulation. System identification is used to obtain reduced order models of the unsteady aerodynamics. Unsteady Euler codes are used to compute the output signals, while 3211 multistep input signals are utilized. The least squares (LS) method is used to estimate the coefficients of the input-output difference model. The reduced order models are then used in place of the unsteady CFD code for aeroelastic simulation. The aeroelastic equations are marched by an improved 4th-order Runge-Kutta method that only needs to compute the aerodynamic loads once at every time step. The computed results agree well with those of the direct coupling CFD/CSD methods. The computational efficiency is improved by 1-2 orders of magnitude while still retaining high accuracy. A standard aeroelastic computing example (the Isogai wing), with an S-type flutter boundary, is computed and analyzed. The S shape arises because the system has more than one neutral point in the Mach range 0.875-0.9. (author)
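The least-squares identification step can be illustrated on an invented second-order difference model (the paper drives the CFD with 3211 multistep inputs; here a random input and noise-free outputs keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(3)

# True discrete model (assumed): y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1].
a1, a2, b1 = 1.5, -0.7, 0.5
u = rng.standard_normal(400)            # persistently exciting input signal
y = np.zeros(400)
for k in range(2, 400):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1]

# Least-squares estimate of the input-output difference-model coefficients.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # regressors for y[2:]
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 6))
```

With noise-free data the estimate recovers the coefficients essentially exactly; the fitted difference model can then replace the expensive unsteady CFD code inside the aeroelastic time-marching loop.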

  17. The Optimal Economic Order: the simplest model

    NARCIS (Netherlands)

    J. Tinbergen (Jan)

    1992-01-01

    textabstractIn the last five years humanity has become faced with the problem of the optimal socioeconomic order more clearly than ever. After the confrontation of capitalism and socialism, which was the core of the Marxist thesis, the fact transpired that capitalism was not the optimal order. It

  18. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimensional system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters, which accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
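A bare-bones ERA sketch: the Markov parameters are synthesized from an invented 2-state system (in the paper they come from chirp-excited input/output data), the block Hankel matrix is factored by SVD, and its numerical rank sets the realized order:

```python
import numpy as np

# Hypothetical stable 2-state system used only to synthesize Markov parameters.
A = np.array([[0.9, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(25)]

# ERA: factor the block Hankel matrix of Markov parameters by SVD.
m = 10
H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])
U, s, Vt = np.linalg.svd(H0)
r = int(np.sum(s > 1e-8 * s[0]))          # numerical rank = identified order
sq = np.sqrt(s[:r])
Ar = (U[:, :r] / sq).T @ H1 @ (Vt[:r].T / sq)   # S^-1/2 U^T H1 V S^-1/2
Br = (sq[:, None] * Vt[:r])[:, :1]              # first column of S^1/2 V^T
Cr = (U[:, :r] * sq)[:1, :]                     # first row of U S^1/2

h_hat = [(Cr @ np.linalg.matrix_power(Ar, k) @ Br).item() for k in range(25)]
print(r, max(abs(a - b) for a, b in zip(h, h_hat)))
```

The realized (Ar, Br, Cr) reproduces the Markov parameters to machine precision, and balanced truncation can then be applied to this state-space model exactly as the abstract describes.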

  19. Discrete Choice Models - Estimation of Passenger Traffic

    DEFF Research Database (Denmark)

    Sørensen, Majken Vildrik

    2003-01-01

    model, data and estimation are described, with a focus of possibilities/limitations of different techniques. Two special issues of modelling are addressed in further detail, namely data segmentation and estimation of Mixed Logit models. Both issues are concerned with whether individuals can be assumed...... for estimation of choice models). For application of the method an algorithm is provided with a case. Also for the second issue, estimation of Mixed Logit models, a method was proposed. The most commonly used approach to estimate Mixed Logit models, is to employ the Maximum Simulated Likelihood estimation (MSL...... distribution of coefficients were found. All the shapes of distributions found, complied with sound knowledge in terms of which should be uni-modal, sign specific and/or skewed distributions....

  20. Nonparametric estimation in models for unobservable heterogeneity

    OpenAIRE

    Hohmann, Daniel

    2014-01-01

    Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.

  1. MCMC estimation of multidimensional IRT models

    NARCIS (Netherlands)

    Beguin, Anton; Glas, Cornelis A.W.

    1998-01-01

    A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will

  2. Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft

    Science.gov (United States)

    Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter

    2016-01-01

This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch together the ROMs at grid points to build a global LPV ASE ROM applicable at arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with a flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrate that, relative to the full-order model, our X-56A ROM can accurately and reliably capture the vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19); hence, it holds great promise for robust ASE controller synthesis and novel vehicle design.

  3. Modeling and Parameter Estimation of a Small Wind Generation System

    Directory of Open Access Journals (Sweden)

    Carlos A. Ramírez Gómez

    2013-11-01

The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed with PSIM software, and variables were recorded from that simulation to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while offering higher flexibility than the model programmed in PSIM software.

  4. Model order reduction using eigen algorithm

    African Journals Online (AJOL)

    DR OKE

to use either for design or analysis. Hence, it is ... directly from the Eigen algorithm, while the zeros are determined through the factor division algorithm to obtain the reduced order system. ... V. Singh, Chandra and H. Kar, "Improved Routh-Pade approximations: A computer aided approach", IEEE Transactions on Automat ...

  5. Estimates on the minimal period for periodic solutions of nonlinear second order Hamiltonian systems

    International Nuclear Information System (INIS)

    Yiming Long.

    1994-11-01

    In this paper, we prove a sharper estimate on the minimal period for periodic solutions of autonomous second order Hamiltonian systems under precisely Rabinowitz' superquadratic condition. (author). 20 refs, 1 fig

  6. Order and disorder in product innovation models

    NARCIS (Netherlands)

    Pina e Cunha, Miguel; Gomes, Jorge F.S.; Gomes, J.F.

    2003-01-01

    This article argues that the conceptual development of product innovation models goes hand in hand with paradigmatic changes in the field of organization science. Remarkable similarities in the change of organizational perspectives and product innovation models are noticeable. To illustrate how

  7. Model Order Reduction: Application to Electromagnetic Problems

    OpenAIRE

    Paquay, Yannick

    2017-01-01

With the increase in computational resources, numerical modeling has grown exponentially these last two decades. From structural analysis to combustion modeling and electromagnetics, discretization methods, in particular the finite element method, have had a tremendous impact. Their main advantage consists in a correct representation of dynamical and nonlinear behaviors by solving equations at the local scale; however, the spatial discretization inherent to such approaches is also its main drawbac...

  8. Improved diagnostic model for estimating wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Endlich, R.M.; Lee, J.D.

    1983-03-01

    Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.

  9. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles...... with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...... distribution functions of the CDOCE and Tmax as well as the correlation coefficients are obtained by using the FORM and the results are compared with corresponding Monte-Carlo simulations (MCS). According to the results obtained from the FORM, an increase in the pulling speed yields an increase...
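For a linear limit state in independent normal variables, the FORM reliability index is available in closed form and can be checked against crude Monte Carlo; the capacity/demand numbers below are invented, and the pultrusion limit state functions in the paper are nonlinear, so there FORM iterates instead:

```python
import math
import random

random.seed(0)

# Hypothetical limit state g = R - S with independent normal capacity R
# and demand S; failure when g < 0.
muR, sdR, muS, sdS = 10.0, 1.0, 6.0, 1.5

# FORM: for a linear limit state in normal variables this is exact.
beta = (muR - muS) / math.sqrt(sdR ** 2 + sdS ** 2)   # reliability index
pf_form = 0.5 * math.erfc(beta / math.sqrt(2.0))      # Phi(-beta)

# Crude Monte Carlo check of the failure probability.
N = 200000
fails = sum(random.gauss(muR, sdR) - random.gauss(muS, sdS) < 0 for _ in range(N))
pf_mc = fails / N
print(round(beta, 3), round(pf_form, 5), round(pf_mc, 5))
```

The two estimates agree to within Monte Carlo sampling error, which is the kind of FORM-versus-MCS comparison the study reports for its cumulative distribution functions.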

  10. Model selection criteria : how to evaluate order restrictions

    NARCIS (Netherlands)

    Kuiper, R.M.

    2012-01-01

    Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might

  11. On parameter estimation in deformable models

    DEFF Research Database (Denmark)

    Fisker, Rune; Carstensen, Jens Michael

    1998-01-01

    Deformable templates have been intensively studied in image analysis through the last decade, but despite its significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...

  12. Model Order Reduction for Non Linear Mechanics

    OpenAIRE

    Pinillo, Rubén

    2017-01-01

    Context: Automotive industry is moving towards a new generation of cars. Main idea: Cars are furnished with radars, cameras, sensors, etc., providing useful information about the environment surrounding the car. Goals: Provide an efficient model for the radar input/output. Reduce computational costs by means of big data techniques.

  13. Ordering dynamics of microscopic models with nonconserved order parameter of continuous symmetry

    DEFF Research Database (Denmark)

    Zhang, Z.; Mouritsen, Ole G.; Zuckermann, Martin J.

    1993-01-01

    from a disordered phase to an orientationally ordered phase of continuous symmetry. The Lebwohl-Lasher model accounts for the orientational ordering properties of the nematic-isotropic transition in liquid crystals and the Heisenberg model for the ferromagnetic-paramagnetic transition in magnetic...... crystals. For both models, which have a nonconserved order parameter, it is found that the linear scale, R(t), of the evolving order, following quenches to below the transition temperature, grows at late times in an effectively algebraic fashion, R(t)∼tn, with exponent values which are strongly temperature...

  14. Predicting inpatient clinical order patterns with probabilistic topic models vs conventional order sets.

    Science.gov (United States)

    Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B

    2017-05-01

    Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns as compared to preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) to words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve, precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with area under the receiver operating characteristic curve 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% ( P  sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.

  15. Temporal rainfall estimation using input data reduction and model inversion

    Science.gov (United States)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
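The wavelet-based input reduction above can be sketched with a hand-rolled one-level Haar DWT. The study's actual wavelet family and decomposition depth are not given here, and the short series below is invented; the point is only that estimating the approximation coefficients alone halves the number of rainfall unknowns in the inversion.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Exact inverse of one Haar DWT level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Invented 8-step "rainfall" series: estimating only the 4 approximation
# coefficients (zeroing the details) halves the inversion's dimensionality.
rain = np.array([0.0, 0.0, 5.0, 7.0, 6.0, 2.0, 0.0, 0.0])
a, d = haar_dwt(rain)
recon = haar_idwt(a, np.zeros_like(d))   # smoothed, lower-dimensional series
```

In the study the retained coefficients are then sampled jointly with the model parameters by the MCMC scheme, rather than inverted directly as here.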

  16. Context Tree Estimation in Variable Length Hidden Markov Models

    OpenAIRE

    Dumont, Thierry

    2011-01-01

    We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...

  17. Nonlinear Growth Models as Measurement Models: A Second-Order Growth Curve Model for Measuring Potential.

    Science.gov (United States)

    McNeish, Daniel; Dumas, Denis

    2017-01-01

    Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components: ability, capacity, and availability, where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed as are future directions and limitations.
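The three components can be written down directly from a Michaelis-Menten curve. A minimal sketch with hypothetical parameter values, not estimates from the ECLS-K data:

```python
import numpy as np

def michaelis_menten(t, capacity, k):
    """Ability at time t: rises from 0 toward `capacity`; `k` is the time
    at which ability reaches half of capacity (values are hypothetical)."""
    return capacity * t / (k + t)

capacity, k = 100.0, 2.0                    # invented individual parameters
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0])    # assessment timepoints
ability = michaelis_menten(t, capacity, k)  # estimated skill at each time
availability = capacity - ability           # remaining room to grow
```

In the second-order model these person-level parameters are themselves latent, estimated jointly with the measurement model rather than assumed as here.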

  18. Model for estimating of population abundance using line transect sampling

    Science.gov (United States)

    Abdulraqeb Abdullah Saeed, Gamil; Muhammad, Noryanti; Zun Liang, Chuan; Yusoff, Wan Nur Syahidah Wan; Zuki Salleh, Mohd

    2017-09-01

    Today, many studies use nonparametric methods for estimating object abundance; for simplicity, parametric methods are widely used by biometricians. This paper presents a proposed model for estimating population abundance using the line transect technique. The proposed model is appealing because it is strictly monotonically decreasing with perpendicular distance and it satisfies the shoulder condition. The statistical properties and inference of the proposed model are discussed. Theoretically, the presented detection function satisfies the line transect assumptions, which leads us to study the performance of this model. We use this model as a reference for future research on density estimation. In this paper we also study the assumptions of the detection function and introduce the corresponding model in order to apply the simulation in future work.
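The paper's detection function is not reproduced here, but the two stated requirements, strict monotone decrease with perpendicular distance and the shoulder condition g'(0) = 0, can be checked numerically on the classical half-normal detection function as a stand-in:

```python
import numpy as np

def half_normal_g(x, sigma=1.0):
    """Half-normal detection function: probability of detecting an object
    at perpendicular distance x from the transect line."""
    return np.exp(-x**2 / (2.0 * sigma**2))

x = np.linspace(0.0, 3.0, 301)
g = half_normal_g(x)

# Strict monotone decrease with perpendicular distance.
strictly_decreasing = bool(np.all(np.diff(g) < 0))
# Shoulder condition g'(0) = 0, checked with a small finite difference.
shoulder = abs((half_normal_g(1e-4) - half_normal_g(0.0)) / 1e-4)
```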

  19. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  20. Estimation in autoregressive models with Markov regime

    OpenAIRE

    Ríos, Ricardo; Rodríguez, Luis

    2005-01-01

    In this paper we derive the consistency of the penalized likelihood method for estimating the number of states of the hidden Markov chain in autoregressive models with Markov regime. We use a SAEM-type algorithm to estimate the model parameters. We test the null hypothesis of a hidden Markov model against an autoregressive process with Markov regime.

  1. Maneuver Estimation Model for Geostationary Orbit Determination

    National Research Council Canada - National Science Library

    Hirsch, Brian J

    2006-01-01

    .... The Clohessy-Wiltshire equations were used to model the relative motion of a geostationary satellite about its intended location, and a nonlinear least squares algorithm was developed to estimate the satellite trajectories.

  2. Parameter estimation of fractional-order chaotic systems by using quantum parallel particle swarm optimization algorithm.

    Directory of Open Access Journals (Sweden)

    Yu Huang

    Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
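A minimal sketch of the underlying formulation, parameter estimation as residual minimization by swarm search, using plain global-best PSO rather than the quantum-parallel variant, on an invented two-parameter system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented system y = a*sin(b*x): recover (a, b) from noise-free samples.
a_true, b_true = 1.5, 0.8
x = np.linspace(0.0, 10.0, 50)
y = a_true * np.sin(b_true * x)

def cost(p):
    """Mean squared residual of the model at parameter vector p = (a, b)."""
    a, b = p
    return float(np.mean((a * np.sin(b * x) - y) ** 2))

# Plain global-best PSO (a classical stand-in for the paper's QPPSO).
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
lo, hi = np.array([0.0, 0.0]), np.array([3.0, 2.0])   # search bounds
pos = rng.uniform(lo, hi, (n, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([cost(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

For a chaotic system the cost would instead compare simulated and observed trajectories of the fractional-order equations, which is what makes the landscape multimodal and motivates the quantum-parallel extension.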

  3. Multiscale Reduced Order Modeling of Complex Multi-Bay Structures

    Science.gov (United States)

    2013-07-01

    modeled with 96,000 degrees of freedom within Nastran. Keywords: reduced order modeling, nonlinear geometric response, finite elements ... deformations, i.e. exhibiting geometric nonlinearity, from finite element models generated using commercial codes (e.g. Nastran, Abaqus, DYNA3D) ... reduced order model of the 9-bay panel modeled within Nastran with 96,000 degrees of freedom. An excellent agreement between the nonlinear static ...

  4. Multilevel models improve precision and speed of IC50 estimates.

    Science.gov (United States)

    Vis, Daniel J; Bombardelli, Lorenzo; Lightfoot, Howard; Iorio, Francesco; Garnett, Mathew J; Wessels, Lodewyk Fa

    2016-05-01

    Experimental variation in dose-response data of drugs tested on cell lines results in inaccuracies in the estimate of a key drug sensitivity characteristic: the IC50. We aim to improve the precision of the half-limiting dose (IC50) estimates by simultaneously employing all dose-responses across all cell lines and drugs, rather than using a single drug-cell line response. We propose a multilevel mixed effects model that takes advantage of all available dose-response data. The new estimates are highly concordant with the currently used Bayesian model when the data are well behaved. Otherwise, the multilevel model is clearly superior. The multilevel model yields a significant reduction of extreme IC50 estimates, an increase in precision, and it runs orders of magnitude faster.
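The single drug-cell line fit, the baseline that the multilevel model improves on, can be sketched with a two-parameter log-logistic curve and a crude grid search. The curve form and all values below are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

def response(dose, ic50, hill):
    """Two-parameter log-logistic dose-response (relative viability)."""
    return 1.0 / (1.0 + (dose / ic50) ** hill)

# One invented drug-cell line experiment (noise-free for clarity).
ic50_true, hill_true = 0.5, 1.2
dose = np.logspace(-3, 2, 9)
y = response(dose, ic50_true, hill_true)

# Crude single-curve grid-search fit; the paper's multilevel model instead
# pools dose-response data across all drugs and cell lines at once.
ic50_grid = np.logspace(-3, 2, 501)
hill_grid = np.linspace(0.5, 3.0, 126)
sse, ic50_hat, hill_hat = min(
    (float(np.sum((response(dose, i, h) - y) ** 2)), i, h)
    for i in ic50_grid for h in hill_grid
)
```

With noisy data such per-curve fits become unstable at the extremes, which is exactly where borrowing strength across curves pays off.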

  5. Semi-parametric estimation for ARCH models

    Directory of Open Access Journals (Sweden)

    Raed Alzghool

    2018-03-01

    Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with the Quasi-likelihood (QL) and Asymptotic Quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator

  6. A reduced order model of a quadruped walking system

    International Nuclear Information System (INIS)

    Sano, Akihito; Furusho, Junji; Naganuma, Nobuyuki

    1990-01-01

    Trot walking has recently been studied by several groups because of its stability and realizability. In the trot, diagonally opposed legs form pairs. While one pair of legs provides support, the other pair of legs swings forward in preparation for the next step. In this paper, we propose a reduced order model for the trot walking. The reduced order model is derived by using two dominant modes of the closed loop system in which the local feedback at each joint is implemented. It is shown by numerical examples that the obtained reduced order model can well approximate the original higher order model. (author)
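Model order reduction by keeping dominant modes can be sketched for a generic stable linear system. The quadruped dynamics themselves are not reproduced; the eigenvalues and mode shapes below are invented, with two slow modes standing in for the two dominant closed-loop modes of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented full-order stable system x' = A x with two slow ("dominant")
# modes and four fast, strongly damped ones.
eigvals = np.array([-0.1, -0.2, -5.0, -8.0, -12.0, -20.0])
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))   # orthonormal mode shapes
A = Q @ np.diag(eigvals) @ Q.T                     # full-order operator

def propagate(modes, lams, coeffs, t):
    """Response x(t) = sum_i coeffs_i * exp(lams_i * t) * mode_i,
    using the known modal decomposition of A."""
    return modes @ (np.exp(lams * t) * coeffs)

c0 = np.array([1.0, 1.0, 0.5, 0.5, 0.5, 0.5])      # modal initial condition

x_full = propagate(Q, eigvals, c0, t=1.0)          # full 6-mode response
x_red = propagate(Q[:, :2], eigvals[:2], c0[:2], t=1.0)  # 2-mode ROM

err = np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full)
```

Because the discarded modes decay quickly, the two-mode reduced model reproduces the full response closely after a short transient, which is the same mechanism the paper exploits.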

  7. Stochastic reduced order models for inverse problems under uncertainty.

    Science.gov (United States)

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.

  8. Fractional-order in a macroeconomic dynamic model

    Science.gov (United States)

    David, S. A.; Quintino, D. D.; Soliani, J.

    2013-10-01

    In this paper, we applied the Riemann-Liouville approach in order to carry out numerical simulations of a set of equations that represent a fractional-order macroeconomic dynamic model. It is a generalization of a dynamic model recently reported in the literature. The aforementioned equations have been simulated for several cases involving integer and non-integer order analysis, with several different values of the fractional order. The time histories and the phase diagrams have been plotted to visualize the effect of the fractional-order approach. The new contribution of this work arises from the fact that the macroeconomic dynamic model proposed here involves the public sector deficit equation, which renders the model more realistic and complete when compared with the ones encountered in the literature. The results reveal that the fractional-order macroeconomic model can exhibit reasonable behavior for macroeconomic systems and might offer greater insights towards the understanding of these complex dynamic systems.
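A hedged sketch of how such fractional-order simulations are typically discretized, using the explicit Grünwald-Letnikov scheme on a scalar relaxation equation D^alpha x = -x. This is a toy stand-in for the macroeconomic equations, not the paper's model; for alpha = 1 the scheme reduces exactly to the forward Euler method:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)^j * binom(alpha, j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def solve_fractional_relaxation(alpha, h=0.01, T=5.0, x0=1.0):
    """Explicit GL scheme for D^alpha x = -x, x(0) = x0: each step sums
    over the entire history, which is the fractional "memory" effect."""
    n = int(round(T / h))
    w = gl_weights(alpha, n)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        memory = np.dot(w[1:k + 1], x[k - 1::-1])   # full history term
        x[k] = (h ** alpha) * (-x[k - 1]) - memory
    return x

x_classical = solve_fractional_relaxation(1.0)    # reduces to forward Euler
x_fractional = solve_fractional_relaxation(0.8)   # slower, memory-laden decay
```

The heavier tail of the alpha = 0.8 trajectory illustrates the long-memory behavior that makes fractional-order models attractive for economic dynamics.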

  9. Parameter Estimation of Partial Differential Equation Models

    KAUST Repository

    Xun, Xiaolei

    2013-09-01

    Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
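The "estimate parameters without repeatedly solving the PDE" idea can be sketched in its crudest form: differentiate the data numerically and solve a least-squares problem for the parameter. Here the heat equation u_t = theta u_xx with a manufactured solution stands in; the paper's basis-expansion and Bayesian machinery is considerably more sophisticated and handles measurement noise:

```python
import numpy as np

# Manufactured data from the heat equation u_t = theta * u_xx on [0, 1],
# with exact solution u = exp(-theta * pi^2 * t) * sin(pi * x).
theta_true = 0.3
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.5, 201)
X, T = np.meshgrid(x, t, indexing="ij")
U = np.exp(-theta_true * np.pi**2 * T) * np.sin(np.pi * X)

# Finite-difference surrogates for u_t and u_xx on the interior grid points.
dt, dx = t[1] - t[0], x[1] - x[0]
Ut = (U[1:-1, 2:] - U[1:-1, :-2]) / (2.0 * dt)
Uxx = (U[2:, 1:-1] - 2.0 * U[1:-1, 1:-1] + U[:-2, 1:-1]) / dx**2

# One-parameter least squares: theta_hat = <Ut, Uxx> / <Uxx, Uxx>.
theta_hat = float(np.sum(Ut * Uxx) / np.sum(Uxx * Uxx))
```

With noisy observations, naive numerical differentiation amplifies the noise, which is the motivation for representing the state with smooth basis functions as in the parameter cascading and Bayesian approaches.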

  10. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Directory of Open Access Journals (Sweden)

    Jiri Kadlec

    2008-08-01

    Full Text Available A high performance RLS lattice filter with the estimation of an unknown order and forgetting factor of the identified system was developed and implemented as a PCORE coprocessor for Xilinx EDK. The coprocessor implemented in FPGA hardware can fully exploit parallelisms in the algorithm and remove load from a microprocessor. The EDK integration allows effective programming and debugging of hardware accelerated DSP applications. The RLS lattice core extended by the order and forgetting factor estimation was implemented using the logarithmic numbers system (LNS) arithmetic. An optimal mapping of the RLS lattice onto the LNS arithmetic units found by the cyclic scheduling was used. The schedule allows us to run four independent filters in parallel on one arithmetic macro set. The coprocessor containing the RLS lattice core is highly configurable. It allows one to exploit the modular structure of the RLS lattice filter and construct the pipelined serial connection of filters for even higher performance. It also allows one to run independent parallel filters on the same input with different forgetting factors in order to estimate which order and exponential forgetting factor better describe the observed data. The FPGA coprocessor implementation presented in the paper is able to evaluate the RLS lattice filter of order 504 at 12 kHz input data sampling rate. For the filter of order up to 20, the probability of order and forgetting factor hypotheses can be continually estimated. It has been demonstrated that the implemented coprocessor accelerates the Microblaze solution up to 20 times. It has also been shown that the coprocessor performs up to 2.5 times faster than a highly optimized solution using a 50 MIPS SHARC DSP processor, while the Microblaze is capable of performing other tasks concurrently.

  11. Large deviation estimates for a Non-Markovian Lévy generator of big order

    International Nuclear Information System (INIS)

    Léandre, Rémi

    2015-01-01

    We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated with this semi-group. (paper)

  12. Optimal number of upper order statistics used in estimation for the ...

    African Journals Online (AJOL)

    Beirlant et al. (2011) introduced a bias-reduced estimator for the coeffcient of tail dependence and for bivariate tail probability in bivariate extreme value statistics. In this paper, we are interested in the problem of choice of the number of extreme order statistics of bivariate observations exceeding high thresholds, we want to ...

  13. Roof planes detection via a second-order variational model

    Science.gov (United States)

    Benciolini, Battista; Ruggiero, Valeria; Vitti, Alfonso; Zanetti, Massimo

    2018-04-01

    The paper describes a unified automatic procedure for the detection of roof planes in gridded height data. The procedure exploits the Blake-Zisserman (BZ) model for segmentation in both 2D and 1D, and aims to detect, to model and to label roof planes. The BZ model relies on the minimization of a functional that depends on first- and second-order derivatives, free discontinuities and free gradient discontinuities. During the minimization, the relative strength of each competitor is controlled by a set of weight parameters. By finding the minimum of the approximated BZ functional, one obtains: (1) an approximation of the data that is smoothed solely within regions of homogeneous gradient, and (2) an explicit detection of the discontinuities and gradient discontinuities of the approximation. Firstly, input data is segmented using the 2D BZ. The maps of data and gradient discontinuities are used to isolate building candidates and planar patches (i.e. regions with homogeneous gradient) that correspond to roof planes. Connected regions that can not be considered as buildings are filtered according to both patch dimension and distribution of the directions of the normals to the boundary. The 1D BZ model is applied to the curvilinear coordinates of boundary points of building candidates in order to reduce the effect of data granularity when the normals are evaluated. In particular, corners are preserved and can be detected by means of gradient discontinuity. Lastly, a total least squares model is applied to estimate the parameters of the plane that best fits the points of each planar patch (orthogonal regression with planar model). Refinement of planar patches is performed by assigning those points that are close to the boundaries to the planar patch for which a given proximity measure assumes the smallest value. The proximity measure is defined to account for the variance of a fitting plane and a weighted distance of a point from the plane. The effectiveness of the

  14. ON THE ESTIMATION AND PREDICTION IN MIXED LINEAR MODELS

    Directory of Open Access Journals (Sweden)

    LÓPEZ L.A.

    1998-01-01

    Full Text Available Beginning with the classical Gauss-Markov linear model for mixed effects, we use the technique of Lagrange multipliers to obtain an alternative method for the estimation of linear predictors. A structural method is also discussed in order to obtain the variance and covariance matrices and their inverses.

  15. Higher-order RANS turbulence models for separated flows

    Data.gov (United States)

    National Aeronautics and Space Administration — Higher-order Reynolds-averaged Navier-Stokes (RANS) models are developed to overcome the shortcomings of second-moment RANS models in predicting separated flows....

  16. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Czech Academy of Sciences Publication Activity Database

    Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří

    2008-01-01

    Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf

  17. Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions

    KAUST Repository

    Belkhatir, Zehor

    2017-06-28

    This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.

  18. FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...

    African Journals Online (AJOL)

    This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...

  19. Modelling and parameter estimation of dynamic systems

    CERN Document Server

    Raol, JR; Singh, J

    2004-01-01

    Parameter estimation is the process of using observations from a system to develop mathematical models that adequately represent the system dynamics. The assumed model consists of a finite set of parameters, the values of which are calculated using estimation techniques. Most of the techniques that exist are based on least-square minimization of error between the model response and actual system response. However, with the proliferation of high speed digital computers, elegant and innovative techniques like the filter error method, H-infinity and Artificial Neural Networks are finding more and more...

  20. Orbital Order in Two-Orbital Hubbard Model

    Science.gov (United States)

    Honkawa, Kojiro; Onari, Seiichiro

    2018-03-01

    In strongly correlated multiorbital systems, various ordered phases appear. In particular, the orbital order in iron-based superconductors attracts much attention since it is considered to be the origin of the nematic state. To clarify the essential conditions for realizing orbital orders, we study the simple two-orbital (dxz,dyz) Hubbard model. We find that the orbital order, which corresponds to the nematic order, appears due to the vertex corrections even in the two-orbital model. Thus, the dxy orbital is not essential to realize the nematic orbital order. The obtained orbital order is determined by the orbital dependence and the topology of Fermi surfaces. We also find that another type of orbital order, which is rotated 45°, appears in a heavily hole-doped case.

  1. Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions from Coarsened Data

    Science.gov (United States)

    Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D.

    2017-01-01

    Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…

  2. Sparsity enabled cluster reduced-order models for control

    Science.gov (United States)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
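The two CROM building blocks, clustering of snapshots and a cluster transition matrix, can be sketched on invented data. A minimal hand-rolled k-means stands in for the paper's pipeline, and the trajectory below is a toy slow rotation rather than fluid-flow data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented snapshot trajectory: slow motion around a circle in state space.
steps = np.arange(400)
snaps = np.c_[np.cos(0.05 * steps), np.sin(0.05 * steps)]
snaps = snaps + 0.01 * rng.standard_normal(snaps.shape)

def kmeans(data, k, iters=50):
    """Minimal k-means: the snapshot-clustering step of CROM."""
    centers = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return labels

k = 4
labels = kmeans(snaps, k)

# Cluster transition matrix: a coarse probabilistic model of the dynamics
# (a data-driven discretization of the Perron-Frobenius operator).
counts = np.zeros((k, k))
for i, j in zip(labels[:-1], labels[1:]):
    counts[i, j] += 1.0
P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
```

The sparsity-enabled variant in the paper replaces the full-state snapshots with a few compressive measurements while preserving this cluster geometry and transition structure.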

  3. Spiking and bursting patterns of fractional-order Izhikevich model

    Science.gov (United States)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed the different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed-mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phases of the burst lengthen as the fractional-order model deviates further from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. At the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
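    The memory trace described above can be sketched with an L1-type discretization of a Caputo fractional derivative applied to the Izhikevich voltage equation. This is a rough illustration, not the paper's scheme: the parameters are the textbook regular-spiking set, and only the voltage equation is treated fractionally here (both simplifying assumptions). At alpha = 1 the memory weights vanish and the scheme reduces exactly to forward Euler.

```python
import numpy as np
from math import gamma

def frac_izhikevich(alpha, T=500.0, dt=0.1, I=10.0,
                    a=0.02, b=0.2, c=-65.0, d=8.0):
    """Sketch of a fractional-order Izhikevich neuron via an L1-type scheme.

    The fractional derivative turns the update into the usual local step
    minus a memory trace: a weighted sum over the entire voltage history.
    """
    n = int(T / dt)
    v = np.empty(n + 1)
    u = np.empty(n + 1)
    v[0], u[0] = -65.0, b * (-65.0)
    k = dt ** alpha * gamma(2.0 - alpha)
    spikes = []
    for N in range(n):
        if N > 0:
            j = np.arange(N)
            w = (N + 1 - j) ** (1.0 - alpha) - (N - j) ** (1.0 - alpha)
            mem = np.dot(w, v[1:N + 1] - v[:N])   # memory trace over history
        else:
            mem = 0.0
        f = 0.04 * v[N] ** 2 + 5.0 * v[N] + 140.0 - u[N] + I
        v[N + 1] = v[N] + k * f - mem
        u[N + 1] = u[N] + dt * a * (b * v[N] - u[N])
        if v[N + 1] >= 30.0:                      # spike: reset, memory retained
            spikes.append((N + 1) * dt)
            v[N + 1] = c
            u[N + 1] += d
    return v, spikes

v1, spikes1 = frac_izhikevich(alpha=1.0)   # classical limit (plain Euler)
v9, spikes9 = frac_izhikevich(alpha=0.9)   # history-dependent dynamics
```

    Note that the reset discontinuities also enter the memory sum, which is what makes the post-reset dynamics history dependent.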

  4. Estimation and uncertainty of reversible Markov models.

    Science.gov (United States)

    Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank

    2015-11-07

    Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the case in which reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices, with and without a given stationary vector, taking into account the need for a suitable prior distribution that preserves the metastable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software (http://pyemma.org) as of version 2.0.
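    The detailed-balance-constrained MLE can be sketched with the classic fixed-point iteration over symmetrized count variables. This is a minimal illustration of the idea, not PyEMMA's actual (more refined) estimator; the count matrix is synthetic.

```python
import numpy as np

def reversible_mle(C, n_iter=2000):
    """Reversible transition-matrix MLE from a count matrix C.

    Iterates the self-consistent update for the symmetric variables
    x_ij; since X stays symmetric, detailed balance holds by construction.
    """
    C = np.asarray(C, dtype=float)
    c_i = C.sum(axis=1)                       # outgoing counts per state
    X = C + C.T                               # symmetric starting point
    for _ in range(n_iter):
        x_i = X.sum(axis=1)
        denom = c_i[:, None] / x_i[:, None] + c_i[None, :] / x_i[None, :]
        X = (C + C.T) / denom                 # remains symmetric
    x_i = X.sum(axis=1)
    T = X / x_i[:, None]                      # row-stochastic transition matrix
    pi = x_i / x_i.sum()                      # stationary distribution
    return T, pi

C = np.array([[90., 10.,  2.],
              [ 8., 80., 12.],
              [ 3., 15., 70.]])
T, pi = reversible_mle(C)
```

    The returned matrix satisfies pi_i T_ij = pi_j T_ji exactly, which is the reversibility property the abstract refers to.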

  5. Error estimation and adaptive chemical transport modeling

    Directory of Open Access Journals (Sweden)

    Malte Braack

    2014-09-01

    Full Text Available We present a numerical method to use several chemical transport models of increasing accuracy and complexity in an adaptive way. In large parts of the domain a simplified chemical model may be used, whereas in certain regions a more complex model is needed for accuracy reasons. A mathematically derived error estimator measures the modeling error and indicates where more accurate models should be used. The error is measured in terms of output functionals; therefore, one has to consider adjoint problems which carry sensitivity information. The concept is demonstrated by means of ozone formation and pollution emission.

  6. PDS-Modelling and Regional Bayesian Estimation of Extreme Rainfalls

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rosbjerg, Dan; Harremoës, Poul

    1994-01-01

    Since 1979 a country-wide system of raingauges has been operated in Denmark in order to obtain a better basis for design and analysis of urban drainage systems. As an alternative to the traditional non-parametric approach the Partial Duration Series method is employed in the modelling of extreme ....... The application of the Bayesian approach is derived in case of both exponential and generalized Pareto distributed exceedances. Finally, the aspect of including economic perspectives in the estimation of the design events is briefly discussed....... in Denmark cannot be justified. In order to obtain an estimation procedure at non-monitored sites and to improve at-site estimates a regional Bayesian approach is adopted. The empirical regional distributions of the parameters in the Partial Duration Series model are used as prior information...

  7. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials that incorporates long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the out-of-sample forecast results adequately support the use of the GLMSV model in certain financial applications.

  8. Parameter Estimation for Thurstone Choice Models

    Energy Technology Data Exchange (ETDEWEB)

    Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-24

    We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so-called top-1 lists). This model accommodates well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes a value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on the given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, i.e., when each comparison set of that cardinality occurs the same number of times in expectation, the mean squared error for a broad class of Thurstone choice models decreases with the cardinality of comparison sets, but only marginally, according to a diminishing-returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report an empirical evaluation of some claims and key parameters revealed by theory, using both synthetic and real-world input data from popular sport competitions and online labor platforms.
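    For the pair-comparison member of this family, the Bradley-Terry MLE can be sketched with Hunter's MM iteration. The strengths and comparison counts below are illustrative, not taken from the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([3.0, 1.5, 1.0, 0.5])      # hypothetical item strengths
n = len(w_true)

# Simulate pair comparisons: i beats j with probability w_i / (w_i + w_j)
wins = np.zeros((n, n))                       # wins[i, j]: times i beat j
for i in range(n):
    for j in range(i + 1, n):
        k = rng.binomial(500, w_true[i] / (w_true[i] + w_true[j]))
        wins[i, j] += k
        wins[j, i] += 500 - k

# Hunter's MM iteration for the Bradley-Terry maximum likelihood estimate
w = np.ones(n)
N = wins + wins.T                             # comparisons per pair
for _ in range(200):
    denom = N / (w[:, None] + w[None, :])
    np.fill_diagonal(denom, 0.0)
    w = wins.sum(axis=1) / denom.sum(axis=1)  # W_i / sum_j N_ij/(w_i + w_j)
    w /= w.sum()                              # strengths identifiable up to scale
```

    With enough comparisons per pair, the estimated strengths recover the true ordering; the estimation-accuracy results in the abstract quantify how fast this happens as comparison sets grow.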

  9. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order models...

  10. Estimating classification images with generalized linear and additive models.

    Science.gov (United States)

    Knoblauch, Kenneth; Maloney, Laurence T

    2008-12-22

    Conventional approaches to modeling classification image data can be described in terms of a standard linear model (LM). We show how the problem can be characterized as a Generalized Linear Model (GLM) with a Bernoulli distribution. We demonstrate via simulation that this approach is more accurate in estimating the underlying template in the absence of internal noise. With increasing internal noise, however, the advantage of the GLM over the LM decreases and GLM is no more accurate than LM. We then introduce the Generalized Additive Model (GAM), an extension of GLM that can be used to estimate smooth classification images adaptively. We show that this approach is more robust to the presence of internal noise, and finally, we demonstrate that GAM is readily adapted to estimation of higher order (nonlinear) classification images and to testing their significance.
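    The Bernoulli GLM described above amounts to logistic regression of the observer's binary responses on the stimulus noise, with the fitted weights serving as the classification-image estimate. The template, sample sizes and the plain gradient-ascent fitter below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 4000
template = np.zeros(d)
template[4:8] = 1.0                           # hypothetical observer template
template /= np.linalg.norm(template)

X = rng.normal(size=(n, d))                   # white-noise stimuli
p = 1.0 / (1.0 + np.exp(-(X @ template)))     # Bernoulli response probabilities
y = (rng.random(n) < p).astype(float)         # simulated binary responses

# Fit the Bernoulli GLM (logistic regression) by plain gradient ascent
beta = np.zeros(d)
for _ in range(2000):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
    beta += (0.5 / n) * (X.T @ (y - mu))      # log-likelihood gradient step

corr = float(np.corrcoef(beta, template)[0, 1])
```

    The fitted weight vector correlates strongly with the generating template, which is the sense in which the GLM "estimates the underlying template."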

  11. Bayesian Analysis of Linear and Nonlinear Latent Variable Models with Fixed Covariate and Ordered Categorical Data

    Directory of Open Access Journals (Sweden)

    Thanoon Y. Thanoon

    2016-03-01

    Full Text Available In this paper, ordered categorical variables are used to compare linear and nonlinear interactions of fixed covariates and latent variables in Bayesian structural equation models. The Gibbs sampling method is applied for estimation and model comparison. A hidden continuous normal distribution (censored normal distribution) is used to handle the problem of ordered categorical data. Statistical inferences, which involve estimation of parameters and their standard deviations, and residual analyses for testing the selected model, are discussed. The proposed procedure is illustrated by simulated data obtained from the R program. Analyses are done using the OpenBUGS program.

  12. Modelling Limit Order Execution Times from Market Data

    Science.gov (United States)

    Kim, Adlar; Farmer, Doyne; Lo, Andrew

    2007-03-01

    Although the term ``liquidity'' is widely used in the finance literature, its meaning is loosely defined and there is no quantitative measure for it. Generally, ``liquidity'' means the ability to quickly trade stocks without causing a significant impact on the stock price. From this definition, we identified two facets of liquidity: (1) the execution time of limit orders, and (2) the price impact of market orders. A limit order is an order to transact a prespecified number of shares at a prespecified price, which will not cause an immediate execution. A market order, on the other hand, is an order to transact a prespecified number of shares at the market price, which will cause an immediate execution but is subject to price impact. Therefore, when a stock is liquid, market participants experience quick limit order executions and small market order impacts. As a first step toward understanding market liquidity, we studied the facet of liquidity related to limit order executions -- execution times. In this talk, we propose a novel approach to modeling limit order execution times and show how they are affected by the size and price of orders. We used the q-Weibull distribution, a generalized form of the Weibull distribution whose tail fatness can be controlled, to model limit order execution times.
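    The q-Weibull density and its inverse CDF (valid for 1 < q < 2, where q > 1 produces the fat power-law tail and q -> 1 recovers the ordinary Weibull) can be written down directly. The parameters below are illustrative, not fitted to market data.

```python
import numpy as np

def qweibull_pdf(t, q, beta, eta):
    """q-Weibull density for 1 < q < 2."""
    x = (t / eta) ** beta
    eq = np.maximum(1.0 - (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))  # q-exponential
    return (2.0 - q) * (beta / eta) * (t / eta) ** (beta - 1.0) * eq

def qweibull_ppf(u, q, beta, eta):
    """Inverse CDF; maps Uniform(0,1) draws to simulated execution times."""
    return eta * ((1.0 - (1.0 - u) ** ((1.0 - q) / (2.0 - q))) / (1.0 - q)) ** (1.0 / beta)

q, b, eta = 1.5, 1.0, 1.0
t = np.linspace(1e-6, 1e4, 2_000_000)
f = qweibull_pdf(t, q, b, eta)
mass = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))  # trapezoid rule, ~1
median = float(qweibull_ppf(0.5, q, b, eta))               # closed-form median
```

    For q = 1.5, beta = 1, eta = 1 the median is exactly 2, and the density integrates to 1 (up to the truncated tail), which is a quick sanity check on the formulas.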

  13. Modulating functions method for parameters estimation in the fifth order KdV equation

    KAUST Repository

    Asiri, Sharefa M.

    2017-07-25

    In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.

  14. Parameter and Uncertainty Estimation in Groundwater Modelling

    DEFF Research Database (Denmark)

    Jensen, Jacob Birk

    The data basis on which groundwater models are constructed is in general very incomplete, and this leads to uncertainty in model outcome. Groundwater models form the basis for many, often costly decisions and if these are to be made on solid grounds, the uncertainty attached to model results must...... be quantified. This study was motivated by the need to estimate the uncertainty involved in groundwater models.Chapter 2 presents an integrated surface/subsurface unstructured finite difference model that was developed and applied to a synthetic case study.The following two chapters concern calibration...... was applied.Capture zone modelling was conducted on a synthetic stationary 3-dimensional flow problem involving river, surface and groundwater flow. Simulated capture zones were illustrated as likelihood maps and compared with a deterministic capture zones derived from a reference model. The results showed...

  15. Extreme gust wind estimation using mesoscale modeling

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries

    2014-01-01

    Currently, the existing estimation of the extreme gust wind, e.g. the 50-year winds of 3 s values, in the IEC standard, is based on a statistical model to convert the 1:50-year wind values from the 10 min resolution. This statistical model assumes a Gaussian process that satisfies the classical...... through turbulent eddies. This process is modeled using the mesoscale Weather Forecasting and Research (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been...

  16. Robust estimation procedure in panel data model

    Energy Technology Data Exchange (ETDEWEB)

    Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)

    2014-06-19

    Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise when modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.

  17. Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling

    Directory of Open Access Journals (Sweden)

    Ioana Cornel

    2005-01-01

    Full Text Available The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. In order to improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by the error-propagation effect, which drastically limits the order of the polynomial approximation. This phenomenon is especially pronounced when high-order polynomial modeling is needed: representation of digital modulation signals or acoustic transient signals. The effect is caused by the technique used for polynomial order reduction, common to existing approaches: multiplication of the signal with the complex conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations, according to each polynomial order. We prove that this method considerably reduces the effect of error propagation. Namely, with this order-reduction method, the estimation error at a given order depends only on the performance of the estimation method.

  18. Flocking of Second-Order Multiagent Systems With Connectivity Preservation Based on Algebraic Connectivity Estimation.

    Science.gov (United States)

    Fang, Hao; Wei, Yue; Chen, Jie; Xin, Bin

    2017-04-01

    The problem of flocking of second-order multiagent systems with connectivity preservation is investigated in this paper. First, for estimating the algebraic connectivity as well as the corresponding eigenvector, a new decentralized inverse power iteration scheme is formulated. Then, based on the estimation of the algebraic connectivity, a set of distributed gradient-based flocking control protocols is built with a new class of generalized hybrid potential fields which could guarantee collision avoidance, desired distance stabilization, and the connectivity of the underlying communication network simultaneously. What is important is that the proposed control scheme allows the existing edges to be broken without violation of connectivity constraints, and thus yields more flexibility of motions and reduces the communication cost for the multiagent system. In the end, nontrivial comparative simulations and experimental results are performed to demonstrate the effectiveness of the theoretical results and highlight the advantages of the proposed estimation scheme and control algorithm.

  19. Adaptive order search and tangent-weighted trade-off for motion estimation in H.264

    Directory of Open Access Journals (Sweden)

    Srinivas Bachu

    2018-04-01

    Full Text Available Motion estimation and compensation play a major role in video compression, reducing the temporal redundancies of the input videos. A variety of block search patterns have been developed for matching blocks with reduced computational complexity, without affecting the visual quality. In this paper, block motion estimation is achieved by integrating the square and hexagonal search patterns with adaptive order. The proposed algorithm is called AOSH (Adaptive Order Square Hexagonal) Search, and it finds the best matching block with a reduced number of search points. The searching function is formulated as a trade-off criterion here. Hence, the tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are effectively applied to the block estimation process to enhance the visual quality and the compression performance. The proposed method is validated using three videos, namely football, garden and tennis. The quantitative performance of the proposed method and the existing methods is analysed using the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR). The results prove that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding

  20. Multi-Criteria Model for Determining Order Size

    Directory of Open Access Journals (Sweden)

    Katarzyna Jakowska-Suwalska

    2013-01-01

    Full Text Available A multi-criteria model for determining the order size for materials used in production is presented. It was assumed that the consumption rate of each material is a random variable with a known probability distribution. Using such a model, in which the purchase cost of the materials ordered is limited, three criteria were considered: order size, probability of a lack of materials in the production process, and deviations of the order size from the consumption rate in past periods. Based on an example, it is shown how to use the model to determine the order sizes for polyurethane adhesive and wood in a hard-coal mine. (original abstract)

  1. Atmospheric Turbulence Modeling for Aerospace Vehicles: Fractional Order Fit

    Science.gov (United States)

    Kopasakis, George (Inventor)

    2015-01-01

    An improved model for simulating atmospheric disturbances is disclosed. A Kolmogorov spectrum may be scaled to convert it into a finite-energy von Karman spectrum, and a fractional-order pole-zero transfer function (TF) may be derived from the von Karman spectrum. Fractional-order atmospheric turbulence may be approximated with an integer-order pole-zero TF fit, and the approximation may be stored in memory.
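    The von Karman spectrum mentioned above is commonly written in the MIL-spec longitudinal form below; a sketch verifying the non-integer -5/3 rolloff that motivates a fractional-order transfer-function fit. The 1.339 constant and the parameter values are the standard textbook ones, assumed here rather than taken from the patent.

```python
import numpy as np

def von_karman_psd(omega, sigma2, L, V):
    """Longitudinal von Karman turbulence PSD (standard MIL-spec form).

    sigma2: turbulence variance, L: length scale [m], V: airspeed [m/s].
    """
    return (sigma2 * (2.0 * L / (np.pi * V))
            / (1.0 + (1.339 * L * omega / V) ** 2) ** (5.0 / 6.0))

omega = np.logspace(-4, 3, 701)               # rad/s
phi = von_karman_psd(omega, sigma2=1.0, L=762.0, V=100.0)

# Log-log slopes: flat at low frequency (finite energy), Kolmogorov -5/3
# rolloff at high frequency -- a non-integer exponent, hence the fractional pole.
lf_slope = np.log(phi[100] / phi[0]) / np.log(omega[100] / omega[0])
hf_slope = np.log(phi[-1] / phi[-101]) / np.log(omega[-1] / omega[-101])
```

    An integer-order pole-zero TF can only approximate the -5/3 asymptote piecewise, which is the approximation step the patent describes.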

  2. A variable-order fractal derivative model for anomalous diffusion

    Directory of Open Access Journals (Sweden)

    Liu Xiaoting

    2017-01-01

    Full Text Available This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. Hereby, this study introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further discussions of the differences between the new model and the variable-order fractional derivative model, such as computational efficiency, diffusion behavior and heavy-tail phenomena, are also offered.

  3. Model-Based Optimizing Control and Estimation Using Modelica Model

    Directory of Open Access Journals (Sweden)

    L. Imsland

    2010-07-01

    Full Text Available This paper reports on experiences from case studies in using Modelica/Dymola models interfaced to control and optimization software, as process models in real time process control applications. Possible applications of the integrated models are in state- and parameter estimation and nonlinear model predictive control. It was found that this approach is clearly possible, providing many advantages over modeling in low-level programming languages. However, some effort is required in making the Modelica models accessible to NMPC software.

  4. An improved second-order continuum traffic model

    Science.gov (United States)

    Marques, W., Jr.; Velasco, R. M.

    2010-02-01

    We construct a second-order continuum traffic model by using an iterative procedure in order to derive a constitutive relation for the traffic pressure which is similar to the Navier-Stokes equation for ordinary fluids. Our second-order traffic model represents an improvement on the traffic model suggested by Kerner and Konhäuser since the iterative procedure introduces, in the constitutive relation for the traffic pressure, a density-dependent viscosity coefficient. By using a finite-difference scheme based on the Steger-Warming flux splitting, we investigate the solution of our improved second-order traffic model for specific problems like shock fronts in traffic and freeway-lane drop.

  5. An improved second-order continuum traffic model

    International Nuclear Information System (INIS)

    Marques, W Jr; Velasco, R M

    2010-01-01

    We construct a second-order continuum traffic model by using an iterative procedure in order to derive a constitutive relation for the traffic pressure which is similar to the Navier–Stokes equation for ordinary fluids. Our second-order traffic model represents an improvement on the traffic model suggested by Kerner and Konhäuser since the iterative procedure introduces, in the constitutive relation for the traffic pressure, a density-dependent viscosity coefficient. By using a finite-difference scheme based on the Steger–Warming flux splitting, we investigate the solution of our improved second-order traffic model for specific problems like shock fronts in traffic and freeway-lane drop

  6. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from...... the two estimation methods without noise correction are studied. Second, a noise robust GMM estimator is constructed by approximating integrated volatility by a realized kernel instead of realized variance. The PBEFs are also recalculated in the noise setting, and the two estimation methods ability...

  7. Estimating Drilling Cost and Duration Using Copulas Dependencies Models

    Directory of Open Access Journals (Sweden)

    M. Al Kindi

    2017-03-01

    Full Text Available Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. Therefore, it is essential to consider all these uncertain variables and the nature of the relationships between them. This eventually leads to the minimization of the level of uncertainty while still yielding "good" estimates of budget and duration given the well type. In this paper, copula probability theory is used to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of the MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between the MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, where the study simulated over 10,000 scenarios. These new estimates were later compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate-to-weak dependence on the MRI, which means that the variation in these wells can be related to the MRI, but not to the extent that it is the primary source.
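    A Gaussian copula is one common way to realize the dependence-sampling step described above: correlate standard normals, map them to uniforms, then push the uniforms through each variable's marginal. The correlation value and the lognormal marginals below are illustrative assumptions; the paper's actual copula family and fitted parameters are not reproduced here.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(7)
rho = 0.7                                     # assumed MRI-to-cost dependence
Lch = np.linalg.cholesky(np.array([[1.0, rho],
                                   [rho, 1.0]]))

z = rng.standard_normal((10_000, 2)) @ Lch.T  # correlated standard normals
u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # Gaussian copula: uniform marginals

# Map through assumed marginals while keeping the copula's dependence:
mri = 10.0 * u[:, 0]                          # MRI rescaled to a 0-10 score
cost = np.exp(3.0 + 0.5 * z[:, 1])            # lognormal drilling cost (illustrative)
```

    Repeating this draw over many scenarios and reading off cost/duration quantiles per well type mirrors the 10,000-scenario simulation the abstract describes.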

  8. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
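    The LASSO mentioned above can be sketched with the standard ISTA (proximal-gradient) iteration: a gradient step on the squared loss followed by soft-thresholding for the l1 penalty. Problem sizes, the penalty weight and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[[3, 17, 40]] = [5.0, -4.0, 3.0]    # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 0.1                                    # l1 penalty weight (illustrative)
step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()   # 1/Lipschitz constant

beta = np.zeros(p)
for _ in range(1000):
    b = beta - step * (X.T @ (X @ beta - y) / n)      # gradient step
    beta = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)  # soft threshold

support = set(np.flatnonzero(np.abs(beta) > 1e-6))
```

    The soft-thresholding step is what produces exact zeros, i.e., the sparse models the talk focuses on; in the p >> n regime the same iteration applies unchanged.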

  9. Fractional-Order Nonlinear Systems Modeling, Analysis and Simulation

    CERN Document Server

    Petráš, Ivo

    2011-01-01

    "Fractional-Order Nonlinear Systems: Modeling, Analysis and Simulation" presents a study of fractional-order chaotic systems accompanied by Matlab programs for simulating their state space trajectories, which are shown in the illustrations in the book. Description of the chaotic systems is clearly presented and their analysis and numerical solution are done in an easy-to-follow manner. Simulink models for the selected fractional-order systems are also presented. The readers will understand the fundamentals of the fractional calculus, how real dynamical systems can be described using fractional derivatives and fractional differential equations, how such equations can be solved, and how to simulate and explore chaotic systems of fractional order. The book addresses to mathematicians, physicists, engineers, and other scientists interested in chaos phenomena or in fractional-order systems. It can be used in courses on dynamical systems, control theory, and applied mathematics at graduate or postgraduate level. ...

  10. A first-order seismotectonic regionalization of Mexico for seismic hazard and risk estimation

    Science.gov (United States)

    Zúñiga, F. Ramón; Suárez, Gerardo; Figueroa-Soto, Ángel; Mendoza, Avith

    2017-11-01

    The purpose of this work is to define a seismic regionalization of Mexico for seismic hazard and risk analyses. This seismic regionalization is based on seismic, geologic, and tectonic characteristics. To this end, a seismic catalog was compiled using the most reliable sources available. The catalog was made homogeneous in magnitude in order to avoid the differences in the way this parameter is reported by various agencies. Instead of using a linear regression to convert from mb and Md to Ms or Mw, using only events for which estimates of both magnitudes are available (i.e., paired data), we used frequency-magnitude relations relying on the a and b values of the Gutenberg-Richter relation. The seismic regions are divided into three main categories: seismicity associated with the subduction process along the Pacific coast of Mexico, in-slab events within the down-going Cocos and Rivera plates, and crustal seismicity associated with various geologic and tectonic regions. In total, 18 seismic regions were identified and delimited. For each, the a and b values of the Gutenberg-Richter relation were determined using maximum likelihood estimation. The a and b parameters were repeatedly estimated as a function of time for each region in order to confirm their reliability and stability. The recurrence times predicted by the resulting Gutenberg-Richter relations are compared with the observed recurrence times of the larger events in each region, for both historical and instrumental earthquakes.
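    Maximum likelihood estimation of the b value is commonly done with the Aki/Utsu estimator; a sketch on a synthetic catalog (the estimator form is the standard one, and the paper's exact procedure may differ).

```python
import numpy as np

def gr_b_value(mags, m_min, dm=0.0):
    """Aki/Utsu maximum-likelihood b-value.

    dm is the catalog's magnitude binning width (0 for continuous magnitudes).
    """
    return np.log10(np.e) / (np.mean(mags) - (m_min - dm / 2.0))

# Under Gutenberg-Richter, magnitudes above m_min are exponentially
# distributed with rate b*ln(10); simulate a region's catalog and recover b.
rng = np.random.default_rng(11)
b_true, m_min = 1.0, 3.0
mags = m_min + rng.exponential(scale=1.0 / (b_true * np.log(10.0)), size=50_000)

b_hat = gr_b_value(mags, m_min)
a_hat = np.log10(len(mags)) + b_hat * m_min   # a-value: log10 N(>= m_min) = a - b*m_min
```

    Re-running the estimator on time windows of the catalog, as the abstract describes, is a simple way to check the stability of a and b for each region.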

  11. Modified Dual Second-order Generalized Integrator FLL for Frequency Estimation Under Various Grid Abnormalities

    Directory of Open Access Journals (Sweden)

    Kalpeshkumar Rohitbhai Patil

    2016-10-01

    Full Text Available Proper synchronization of a Distributed Generator with the grid, and its performance in grid-connected mode, relies on fast and precise estimation of the phase and amplitude of the fundamental component of the grid voltage. However, the accuracy with which the frequency is estimated depends on the type of grid voltage abnormalities and on the structure of the phase-locked loop or frequency-locked loop control schemes. Among various control schemes, the second-order generalized integrator based frequency-locked loop (SOGI-FLL) is reported to have the most promising performance. It tracks the frequency of the grid voltage accurately even when the grid voltage is characterized by sag, swell, harmonics, imbalance, frequency variations, etc. However, the estimated frequency contains low-frequency oscillations when the sensed grid voltage has a dc offset. This paper presents a modified dual second-order generalized integrator frequency-locked loop (MDSOGI-FLL) for three-phase systems to cope with non-ideal three-phase grid voltages having all types of abnormalities, including the dc offset. The complexity of the control scheme is almost the same as that of the standard dual SOGI-FLL, but the performance is enhanced. Simulation results show that the proposed MDSOGI-FLL is effective under all abnormal grid voltage conditions. The results are validated experimentally to justify the superior performance of MDSOGI-FLL under adverse conditions.

  12. Order reduction for a model of marine bacteriophage evolution

    Science.gov (United States)

    Pagliarini, Silvia; Korobeinikov, Andrei

    2017-02-01

    A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of a model is highly desirable. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, for which constructing the so-called quasi-steady-state approximation is the usual first step. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative, but not a quantitative, fit.
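
    The quasi-steady-state construction can be illustrated on a toy slow-fast system (a generic illustration; the Beretta-Kuang phage model itself is more involved). The fast variable is set to its equilibrium, which yields a reduced model with an explicit solution:

```python
# Toy slow-fast system (illustrative only, not the Beretta-Kuang model):
#   dx/dt = -x*y          (slow variable)
#   eps * dy/dt = x - y   (fast variable, eps << 1)
# QSSA: set the fast equation to equilibrium, y ~ x, giving the reduced
# model dx/dt = -x**2 with solution x(t) = x0 / (1 + x0*t).

def simulate_full(eps=1e-3, x0=1.0, y0=0.0, t_end=2.0, dt=1e-4):
    """Forward-Euler integration of the full slow-fast system."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx = -x * y
        dy = (x - y) / eps
        x, y = x + dt * dx, y + dt * dy
    return x

x_full = simulate_full()
x_qssa = 1.0 / (1.0 + 1.0 * 2.0)   # reduced-model solution at t = 2
print(x_full, x_qssa)
```

    For small eps the reduced model tracks the full system after a short initial layer, with an error of order eps.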

  13. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    Science.gov (United States)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  14. Random balance designs for the estimation of first order global sensitivity indices

    International Nuclear Information System (INIS)

    Tarantola, S.; Gatelli, D.; Mara, T.A.

    2006-01-01

    We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems, and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and significantly reduce the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by OECD/NEA.
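
    The random-balance-design idea can be sketched as follows: every factor follows the same periodic design curve under a different random permutation, and the first-order index of factor i is read off the low-harmonic power of the model output re-ordered along factor i's permutation. The linear test function below is a hypothetical stand-in, not one of the paper's test cases:

```python
import numpy as np

def rbd_fast_first_order(model, d, n=2000, m=6, seed=0):
    """First-order sensitivity indices via random balance designs
    (RBD-FAST style sketch). model maps an (n, d) array to n outputs."""
    rng = np.random.default_rng(seed)
    s = -np.pi + 2.0 * np.pi * (np.arange(n) + 0.5) / n
    perms = [rng.permutation(n) for _ in range(d)]
    # Same periodic curve for every factor, randomly permuted, then mapped
    # to U(0, 1) through the triangle-wave transform.
    x = np.stack([0.5 + np.arcsin(np.sin(s[p])) / np.pi for p in perms], axis=1)
    y = model(x)
    var_y = np.var(y)
    indices = []
    for p in perms:
        y_ord = np.empty(n)
        y_ord[p] = y        # reorder so this factor varies periodically
        spec = np.abs(np.fft.rfft(y_ord - y_ord.mean())) ** 2 / n**2
        d_i = 2.0 * spec[1:m + 1].sum()   # power in the first m harmonics
        indices.append(d_i / var_y)
    return np.array(indices)

# Hypothetical test model: y = x1 + 0.1*x2, so S1 ~ 0.99 and S2 ~ 0.01.
S = rbd_fast_first_order(lambda x: x[:, 0] + 0.1 * x[:, 1], d=2)
print(S.round(2))
```

    A single set of n model runs yields all d first-order indices, which is the source of the method's low computational cost.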

  15. Random balance designs for the estimation of first order global sensitivity indices

    Energy Technology Data Exchange (ETDEWEB)

    Tarantola, S. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy)]. E-mail: stefano.tarantola@jrc.it; Gatelli, D. [Joint Research Centre, European Commission, Institute of the Protection and Security of the Citizen, TP 361, Via E. Fermi 1, 21020 Ispra (VA) (Italy); Mara, T.A. [Laboratory of Industrial Engineering, University of Reunion Island, BP 7151, 15 avenue Rene Cassin, 97 715 Saint-Denis (France)

    2006-06-15

    We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems, and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and reduce significantly the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by OECD/NEA.

  16. Time and order estimation of paintings based on visual features and expert priors

    Science.gov (United States)

    Cabral, Ricardo S.; Costeira, João P.; de La Torre, Fernando; Bernardino, Alexandre; Carneiro, Gustavo

    2011-03-01

    Time and order are considered crucial information in the art domain, and are the subject of many research efforts by historians. In this paper, we present a framework for estimating the ordering and date information of paintings and drawings. We formulate this problem as an embedding into a one-dimensional manifold, which aims to place paintings far from or close to each other according to a measure of similarity. Our formulation can be seen as a manifold learning algorithm, albeit properly adapted to deal with existing questions in the art community. To solve this problem, we propose an approach based on Laplacian Eigenmaps and a convex optimization formulation. Both methods are able to incorporate art expertise as priors on the estimation, in the form of constraints. Types of information include exact or approximate dating and partial orderings. We explore the use of soft penalty terms that allow for constraint violation, to account for the fact that prior knowledge may contain small errors. Our problem is tested within the scope of the PrintART project, which aims to assist art historians in tracing Portuguese tile art "Azulejos" back to the engravings that inspired them. Furthermore, we describe other possible applications in art history where time information (and hence, this method) could be of use, such as fake detection or curatorial treatment.
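
    The one-dimensional embedding via Laplacian Eigenmaps can be sketched in a few lines: the second eigenvector of the graph Laplacian (the Fiedler vector) provides the 1-D coordinates. The artwork dates and the Gaussian similarity below are hypothetical stand-ins for the visual-feature similarities of the paper:

```python
import numpy as np

# Hypothetical mini-example: recover a 1-D ordering of artworks from
# pairwise similarities with Laplacian Eigenmaps.
true_dates = np.array([1500., 1512., 1530., 1555., 1571., 1590.])
# Similarity: Gaussian kernel on the (unobserved) dates, standing in for
# a visual-feature similarity measure.
diff = true_dates[:, None] - true_dates[None, :]
W = np.exp(-diff**2 / (2 * 25.0**2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W          # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1]               # Fiedler vector (2nd-smallest eigenvalue)
# The embedding should be monotone in date, up to a global sign flip.
corr = np.corrcoef(embedding, true_dates)[0, 1]
print(abs(corr) > 0.9)
```

    The sign ambiguity of the eigenvector is why priors such as partial orderings or approximate dates are needed to anchor the direction and scale of the recovered timeline.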

  17. Los Alamos Waste Management Cost Estimation Model

    International Nuclear Information System (INIS)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, the development of the cost estimation model, and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs.

  18. Testing static tradeoff theory against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  19. Bayesian variable order Markov models: Towards Bayesian predictive state representations

    NARCIS (Netherlands)

    Dimitrakakis, C.

    2009-01-01

    We present a Bayesian variable order Markov model that shares many similarities with predictive state representations. The resulting models are compact and much easier to specify and learn than classical predictive state representations. Moreover, we show that they significantly outperform a more

  20. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  1. Testing static tradeoff theory against pecking order models of capital ...

    African Journals Online (AJOL)

    We test two models with the purpose of finding the best empirical explanation for corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static tradeoff Theory and the Pecking order Theory of capital structure with a view to make comparison between ...

  2. Partial-Order Reduction for GPU Model Checking

    NARCIS (Netherlands)

    Neele, T.; Wijs, A.; Bosnacki, D.; van de Pol, Jan Cornelis; Artho, C; Legay, A.; Peled, D.

    2016-01-01

    Model checking using GPUs has seen increased popularity over the last years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose

  3. Data-Driven Model Order Reduction for Bayesian Inverse Problems

    KAUST Repository

    Cui, Tiangang

    2014-01-06

    One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.

  4. Order-of-magnitude physics of neutron stars. Estimating their properties from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Reisenegger, Andreas; Zepeda, Felipe S. [Pontificia Universidad Catolica de Chile, Instituto de Astrofisica, Facultad de Fisica, Macul (Chile)

    2016-03-15

    We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties. (orig.)
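
    The kind of order-of-magnitude estimate described here can be reproduced in a few lines: the maximum-mass scale is set by the combination (hbar*c/G)^(3/2)/m_n^2 of fundamental constants. This is a rough sketch that ignores the O(1) structural factors discussed in the paper:

```python
# Order-of-magnitude estimate of the maximum neutron-star mass from
# fundamental constants: M_max ~ (hbar*c/G)**1.5 / m_n**2.
hbar = 1.055e-34   # J s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
m_n = 1.675e-27    # kg, neutron mass
M_sun = 1.989e30   # kg

M_max = (hbar * c / G) ** 1.5 / m_n ** 2
print(M_max / M_sun)
```

    The result comes out at roughly two solar masses, the right order of magnitude compared with detailed stellar-structure calculations.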

  5. Analysis of order-statistic CFAR threshold estimators for improved ultrasonic flaw detection.

    Science.gov (United States)

    Saniie, J; Nagle, D T

    1992-01-01

    In the pulse-echo method using broadband transducers, flaw detection can be improved by using optimal bandpass filtering to resolve flaw echoes surrounded by grain scatterers. Optimal bandpass filtering is achieved by examining spectral information of the flaw and grain echoes, whose frequency differences have been experimentally shown to be predictable in the Rayleigh scattering region. Using optimal frequency band information, flaw echoes can then be discriminated by applying adaptive thresholding techniques based on surrounding range cells. The authors present order-statistic (OS) and trimmed-mean (TM) processors to robustly estimate the threshold while censoring outliers. The design of these OS processors is accomplished analytically based on constant false-alarm rate (CFAR) detection. It is shown that OS-CFAR and TM-CFAR processors can detect flaw echoes robustly with a CFAR of 10^(-4) even when the range cell used for the threshold estimate contains outliers.
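
    An OS-CFAR threshold estimate can be sketched in a few lines: rank the reference cells around the cell under test and scale the k-th order statistic. The window sizes and scale factor alpha below are illustrative choices, not values calibrated for the paper's false-alarm rate:

```python
# Sketch of an order-statistic CFAR detector: the threshold for the cell
# under test is the k-th order statistic of the surrounding reference
# cells, scaled by a factor alpha chosen for the desired false-alarm rate.
def os_cfar_detect(signal, cell, guard=2, ref=8, k=6, alpha=3.0):
    left = signal[max(0, cell - guard - ref):cell - guard]
    right = signal[cell + guard + 1:cell + guard + 1 + ref]
    reference = sorted(left + right)
    threshold = alpha * reference[k - 1]   # k-th smallest reference cell
    return signal[cell] > threshold, threshold

# Noise floor around 1.0 with one strong "flaw echo" at index 20 and an
# outlier at index 30; the rank-based threshold censors the outlier.
echoes = [1.0] * 40
echoes[20] = 12.0
echoes[30] = 2.5
hit, _ = os_cfar_detect(echoes, 20)    # flaw cell: detected
miss, _ = os_cfar_detect(echoes, 10)   # noise cell near the flaw: rejected
print(hit, miss)
```

    Because the threshold uses a low-rank order statistic rather than the mean, isolated large echoes in the reference window (including the flaw itself, seen from a neighboring cell) do not inflate the threshold.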

  6. Modeling vehicle operating speed on urban roads in Montreal: a panel mixed ordered probit fractional split model.

    Science.gov (United States)

    Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F

    2013-10-01

    Vehicle operating speed measured on roadways is a critical component for a host of analyses in the transportation field, including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate ...

  7. Group-ICA model order highlights patterns of functional brain connectivity

    Directory of Open Access Journals (Sweden)

    Ahmed eAbou Elseoud

    2011-06-01

    Full Text Available Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both individual subject and group levels. Altering ICA dimensionality (model order) estimation can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared to normal healthy controls. The results showed that the detected disease-related differences in functional connectivity change as a function of ICA model order. The volume of between-group differences changed significantly as a function of ICA model order, reaching a maximum at model order 70 (which seems to be an optimal point that conveys the largest between-group difference) and stabilizing afterwards. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders show an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.

  8. Estimation Model for Concrete Slump Recovery by Using Superplasticizer

    OpenAIRE

    Chaiyakrit Raoupatham; Ram Hari Dhakal; Chalermchai Wanichlamlert

    2015-01-01

    This paper aimed to introduce a practical solution for concrete slump recovery using chemical admixture type F (superplasticizer, naphthalene based), in order to solve the problem of concrete becoming unusable through slump loss, especially in tropical countries with faster slump loss rates. On the other hand, randomly adding superplasticizer into concrete can cause the concrete to segregate. Therefore, this paper also develops the estimation model used to calcula...

  9. House thermal model parameter estimation method for Model Predictive Control applications

    NARCIS (Netherlands)

    van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria

    In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results

  10. Large-order estimates in perturbative QCD and non-Borel summable series

    International Nuclear Information System (INIS)

    Fischer, J.

    1994-01-01

    Basic facts about the summation of divergent power series are reviewed, both for series with non-vanishing and for series with vanishing convergence radius. Particular attention is paid to the recent development that makes it possible, in the former case, to define summation in the whole Mittag-Leffler star and, in the latter case, to define summation when the point of expansion lies at the tip of a horn-shaped analyticity domain with zero opening angle. The relevance of these results to perturbative QCD is stressed in relation to current discussions concerning large-order estimates of perturbative QCD expansion coefficients. (orig.)

  11. Conditional shape models for cardiac motion estimation

    DEFF Research Database (Denmark)

    Metz, Coert; Baka, Nora; Kirisli, Hortense

    2010-01-01

    We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...... alignment of pre-operative CTA data with intra-operative X-ray imaging. Due to a trend towards prospective electrocardiogram gating techniques, 4D imaging data, from which motion information could be extracted, is not commonly available. The prediction of motion from shape information is thus relevant...

  12. Software Cost Estimating Models: A Comparative Study of What the Models Estimate

    Science.gov (United States)

    1993-09-01

    generate good cost estimates. One model developer best summed up this sentiment by stating: Estimation is not a mechanical process. Art, skill, and... [table: allocation percentages for development phases, e.g. System Concept 7.5%, S/W Requirements Analysis 9.0%]

  13. Mathematical modelling of fractional order circuit elements and bioimpedance applications

    Science.gov (United States)

    Moreles, Miguel Angel; Lainez, Rafael

    2017-05-01

    In this work a classical derivation of fractional-order circuit models is presented. Generalised constitutive equations in terms of fractional Riemann-Liouville derivatives are introduced into Maxwell's equations for each circuit element. Next the Kirchhoff voltage law is applied in an RCL circuit configuration. It is shown that, from basic properties of Fractional Calculus, a fractional differential equation model with Caputo derivatives is obtained; thus standard initial conditions apply. Finally, models for bioimpedance are revisited.
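
    Fractional derivatives of the kind used in such circuit models can be evaluated numerically with the Grünwald-Letnikov discretization, a standard approximation (the paper itself works with the Riemann-Liouville and Caputo forms):

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov approximation of D^alpha f at time t
    (0 < alpha <= 1), with lower terminal 0 and step h."""
    n = int(t / h)
    coeff = 1.0     # running value of (-1)**k * binom(alpha, k)
    total = 0.0
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)   # recursive binomial update
    return total / h ** alpha

# Known reference case: the half-derivative of f(t) = t is 2*sqrt(t/pi),
# so at t = 1 it equals 2/sqrt(pi) ~ 1.1284.
val = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
print(round(val, 3))
```

    The scheme is first-order accurate in h, so halving the step roughly halves the error against the analytic half-derivative.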

  14. Measurement Error in Designed Experiments for Second Order Models

    OpenAIRE

    McMahan, Angela Renee

    1997-01-01

    Measurement error (ME) in the factor levels of designed experiments is often overlooked in the planning and analysis of experimental designs. A familiar model for this type of ME, called the Berkson error model, is discussed at length. Previous research has examined the effect of Berkson error on two-level factorial and fractional factorial designs. This dissertation extends the examination to designs for second order models. The results are used to suggest ...

  15. Modeling Human Behaviour with Higher Order Logic: Insider Threats

    DEFF Research Database (Denmark)

    Boender, Jaap; Ivanova, Marieta Georgieva; Kammuller, Florian

    2014-01-01

    it to the sociological process of logical explanation. As a case study on modeling human behaviour, we present the modeling and analysis of insider threats as a Higher Order Logic theory in Isabelle/HOL. We show how each of the three step process of sociological explanation can be seen in our modeling of insider’s state......, its context within an organisation and the effects on security as outcomes of a theorem proving analysis....

  16. Abnormal Waves Modelled as Second-order Conditional Waves

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2005-01-01

    The paper presents results for the expected second order short-crested wave conditional on a given wave crest at a specific point in time and space. The analysis is based on the second order Sharma and Dean shallow water wave theory. Numerical results showing the importance of the spectral density, the water depth and the directional spreading on the conditional mean wave profile are presented. Application of conditional waves to model and explain abnormal waves, e.g. the well-known New Year Wave measured at the Draupner platform January 1st 1995, is discussed. Whereas the wave profile can be modelled quite well by the second order conditional wave including directional spreading and finite water depth, the probability to encounter such a wave is still, however, extremely rare. The use of the second order conditional wave as initial condition to a fully non-linear three-dimensional analysis...

  17. Terminology Modeling for an Enterprise Laboratory Orders Catalog

    Science.gov (United States)

    Zhou, Li; Goldberg, Howard; Pabbathi, Deepika; Wright, Adam; Goldman, Debora S.; Van Putten, Cheryl; Barley, Amanda; Rocha, Roberto A.

    2009-01-01

    Laboratory test orders are used in a variety of clinical information systems at Partners HealthCare. At present, each site at Partners manages its own set of laboratory orders with locally defined codes. Our current plan is to implement an enterprise catalog, where laboratory test orders are mapped to reference terminologies and codes from different sites are mapped to each other. This paper describes the terminology modeling effort that preceded the implementation of the enterprise laboratory orders catalog. In particular, we present our experience in adapting HL7’s “Common Terminology Services 2 – Upper Level Class Model” as a terminology metamodel for guiding the development of fully specified laboratory orders and related services. PMID:20351950

  18. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
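
    The surrogate-model idea can be shown in miniature: fit a cheap response surface to a handful of expensive runs, then query the surrogate instead of the code. The quadratic "simulation" below is purely illustrative; RISMC uses far richer surrogates:

```python
import numpy as np

def expensive_code(x):
    """Stand-in for an hours-long simulation run (illustrative only)."""
    return 3.0 * x**2 - 2.0 * x + 1.0

train_x = np.linspace(0.0, 1.0, 5)          # five "simulation runs"
train_y = expensive_code(train_x)
# Cheap polynomial response surface fitted to the training runs.
surrogate = np.polynomial.Polynomial.fit(train_x, train_y, deg=2)

# The surrogate now answers new queries essentially instantly.
x_new = 0.37
err = abs(surrogate(x_new) - expensive_code(x_new))
print(err < 1e-8)
```

    Here the surrogate family contains the true response, so the fit is essentially exact; in practice the surrogate's approximation error must be quantified alongside the savings in runs.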

  19. Reduced order modeling of some fluid flows of industrial interest

    International Nuclear Information System (INIS)

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M

    2012-01-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to use in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)
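
    A minimal sketch of POD-based model reduction: collect snapshots, take the SVD, and keep the dominant left singular vectors as the reduced basis (schematic only; no CFD solver is involved):

```python
import numpy as np

# Synthetic snapshot matrix of exactly rank 3 (three spatial "modes"),
# standing in for flow-field snapshots from a solver.
rng = np.random.default_rng(0)
n_space, n_snap, r = 200, 40, 3
modes = rng.standard_normal((n_space, r))
amplitudes = rng.standard_normal((r, n_snap))
snapshots = modes @ amplitudes

# POD: SVD of the snapshot matrix; keep the r dominant modes.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                          # dominant POD modes
reduced = basis.T @ snapshots             # reduced-order coordinates
reconstruction = basis @ reduced          # lift back to full space

err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(err < 1e-10)
```

    For real flow data the singular values decay gradually rather than truncating exactly, and the number of retained modes is chosen from that decay.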

  20. Reduced order modeling of some fluid flows of industrial interest

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to be used in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)

  1. Composite symmetry-protected topological order and effective models

    Science.gov (United States)

    Nietner, A.; Krumnow, C.; Bergholtz, E. J.; Eisert, J.

    2017-12-01

    Strongly correlated quantum many-body systems at low dimension exhibit a wealth of phenomena, ranging from features of geometric frustration to signatures of symmetry-protected topological order. In suitable descriptions of such systems, it can be helpful to resort to effective models, which focus on the essential degrees of freedom of the given model. In this work, we analyze how to determine the validity of an effective model by demanding it to be in the same phase as the original model. We focus our study on one-dimensional spin-1 /2 systems and explain how nontrivial symmetry-protected topologically ordered (SPT) phases of an effective spin-1 model can arise depending on the couplings in the original Hamiltonian. In this analysis, tensor network methods feature in two ways: on the one hand, we make use of recent techniques for the classification of SPT phases using matrix product states in order to identify the phases in the effective model with those in the underlying physical system, employing Künneth's theorem for cohomology. As an intuitive paradigmatic model we exemplify the developed methodology by investigating the bilayered Δ chain. For strong ferromagnetic interlayer couplings, we find the system to transit into exactly the same phase as an effective spin-1 model. However, for weak but finite coupling strength, we identify a symmetry broken phase differing from this effective spin-1 description. On the other hand, we underpin our argument with a numerical analysis making use of matrix product states.

  2. Research on Modeling of Hydropneumatic Suspension Based on Fractional Order

    Directory of Open Access Journals (Sweden)

    Junwei Zhang

    2015-01-01

    Full Text Available With such excellent properties as nonlinear stiffness, adjustable vehicle height, and good vibration resistance, hydropneumatic suspension (HS) has been increasingly applied to heavy and engineering vehicles. Traditional modeling methods are still confined to simple models that leave many factors out of consideration. A hydropneumatic suspension model based on fractional order (HSM-FO) is built by exploiting the advantage of fractional order (FO) calculus in viscoelastic material modeling and by considering the multiphase-medium mechanics of HS. A detailed calculation method is then proposed based on the Oustaloup filtering approximation algorithm. The HSM-FO is implemented in Matlab/Simulink, and comparison among the simulation curves of fractional order and integer order and the curve from a real experiment proves the feasibility and validity of HSM-FO. The damping force property of the suspension system under different fractional orders is also studied. At the end of this paper, several conclusions concerning HSM-FO are drawn from the analysis of the simulations.
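    As an illustration of fractional order modeling of this kind, a Grünwald-Letnikov discretization of a fractional derivative can be sketched; this is a common alternative to the Oustaloup filter approximation the paper uses, not the authors' implementation:

```python
import numpy as np

def gl_fractional_derivative(f, alpha, dt):
    """Grünwald-Letnikov approximation of the order-alpha derivative of the
    uniformly sampled signal f (step dt). Returns an array of the same length."""
    n = len(f)
    # Binomial weights w_k = (-1)^k * C(alpha, k), built by recurrence.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    # D^alpha f(t_j) ~ dt^(-alpha) * sum_k w_k * f(t_{j-k})
    out = np.array([np.dot(w[: j + 1], f[j::-1]) for j in range(n)])
    return out / dt**alpha

# Sanity check: for alpha = 1 the weights reduce to a backward difference,
# so the GL derivative of f(t) = t is 1 away from the initial sample.
t = np.linspace(0.0, 1.0, 101)
d1 = gl_fractional_derivative(t, 1.0, t[1] - t[0])
print(np.allclose(d1[1:], 1.0))
```

Intermediate values of alpha interpolate between pure stiffness (alpha = 0) and pure viscous damping (alpha = 1), which is what makes FO terms attractive for viscoelastic media.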

  3. Nonparametric model assisted model calibrated estimation in two ...

    African Journals Online (AJOL)

    Nonparametric model assisted model calibrated estimation in two stage survey sampling. RO Otieno, PN Mwita, PN Kihara. Abstract: No abstract available. East African Journal of Statistics Vol. 1 (3) 2007: pp. 261-281.

  4. Reverse time migration by Krylov subspace reduced order modeling

    Science.gov (United States)

    Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali

    2018-04-01

    Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model, which serve as an orthogonal basis for the reduced order model. Reverse time migration by reduced order modeling is well suited to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational cost of reverse time migration by several orders of magnitude compared with reverse time migration by the finite element method.
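    The Krylov-subspace construction at the heart of such methods can be sketched with a standard Arnoldi iteration; the toy symmetric matrix below stands in for the discretized operator and is purely illustrative, not the authors' implementation:

```python
import numpy as np

def arnoldi(A, b, m):
    """Build an orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} and the small projected matrix H ~ V^T A V."""
    n = len(b)
    V = np.zeros((n, m))
    H = np.zeros((m, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    w = A @ V[:, m - 1]                   # last column of H
    for i in range(m):
        H[i, m - 1] = V[:, i] @ w
    return V, H

# Toy stand-in for a discretized wave operator: a random symmetric matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
A = A + A.T
V, H = arnoldi(A, rng.standard_normal(50), 12)
# V is orthonormal; extreme eigenvalues of H (Ritz values) approximate
# extreme eigenvalues of A, i.e. the dominant "mode shapes" of the operator.
print(np.allclose(V.T @ V, np.eye(12), atol=1e-8))
```

Projecting the full propagation problem onto V reduces it to the small matrix H, which is where the memory and cost savings claimed above come from.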

  5. Impact of Physics Parameterization Ordering in a Global Atmosphere Model

    Science.gov (United States)

    Donahue, Aaron S.; Caldwell, Peter M.

    2018-02-01

    Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another, with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with the deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a large impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as large as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
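    The noncommutativity of sequentially split processes can be demonstrated with two toy "parameterizations"; the operators below are purely illustrative stand-ins, not E3SM physics:

```python
# Two toy "parameterizations" acting on a scalar state x over a step dt:
# linear damping and a quadratic source, applied by sequential splitting.
def damping(x, dt):
    return x - dt * 0.5 * x

def source(x, dt):
    return x + dt * x**2

dt, x0 = 0.1, 1.0
x_ab = source(damping(x0, dt), dt)   # damping first, then source
x_ba = damping(source(x0, dt), dt)   # source first, then damping

# The two orderings disagree at O(dt^2): sequential splitting is noncommutative,
# so each process "feels" a different input state depending on call order.
print(x_ab, x_ba)
```

Even in this two-process scalar example the orderings differ; with many interacting parameterizations and feedbacks, such differences can accumulate into the climate-scale effects described above.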

  6. Parameter estimation in fractional diffusion models

    CERN Document Server

    Kubilius, Kęstutis; Ralchenko, Kostiantyn

    2017-01-01

    This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...

  7. Ordering kinetics in model systems with inhibited interfacial adsorption

    DEFF Research Database (Denmark)

    Willart, J.-F.; Mouritsen, Ole G.; Naudts, J.

    1992-01-01

    The ordering kinetics in two-dimensional Ising-like spin models with inhibited interfacial adsorption are studied by computer-simulation calculations. The inhibited interfacial adsorption is modeled by a particular interfacial adsorption condition on the structure of the domain wall between…, of the algebraic growth law, R(t)∼At^n, whereas the growth exponent, n, remains close to the value n=1/2 predicted by the classical Lifshitz-Allen-Cahn growth law for systems with nonconserved order parameter. At very low temperatures there is, however, an effective crossover to a much slower algebraic growth… The results are related to experimental work on ordering processes in orientational glasses. It is suggested that the experimental observation of very slow ordering kinetics in, e.g., glassy crystals of cyanoadamantane may be a consequence of low-temperature activated processes which ultimately lead…

  8. Robust simulation of buckled structures using reduced order modeling

    International Nuclear Information System (INIS)

    Wiebe, R.; Perez, R.A.; Spottswood, S.M.

    2016-01-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use a (computationally expensive) truth model but require no time stepping of it are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties. (paper)

  9. Atmospheric Turbulence Modeling for Aero Vehicles: Fractional Order Fits

    Science.gov (United States)

    Kopasakis, George

    2015-01-01

    Atmospheric turbulence models are necessary for the design of both inlet/engine and flight controls, as well as for studying coupling between the propulsion and the vehicle structural dynamics for supersonic vehicles. Models based on the Kolmogorov spectrum have previously been utilized to model atmospheric turbulence. In this paper, a more accurate model is developed in its representative fractional order form, typical of atmospheric disturbances. This is accomplished by first scaling the Kolmogorov spectra to convert them into finite-energy von Karman forms and then by deriving an explicit fractional circuit-filter type analog for this model. This circuit model is utilized to develop a generalized formulation in the frequency domain to approximate the fractional order with products of first order transfer functions, which enables accurate time domain simulations. The objective of this work is as follows. Given the parameters describing the conditions of atmospheric disturbances, and utilizing the derived formulations, directly compute the transfer function poles and zeros describing these disturbances for acoustic velocity, temperature, pressure, and density. Time domain simulations of representative atmospheric turbulence can then be developed by utilizing these computed transfer functions together with the disturbance frequencies of interest.
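    One widely used recipe for approximating a fractional order s^α with products of first order factors is the Oustaloup recursive filter. The sketch below (an assumed standard formulation, not necessarily the paper's exact derivation) places interlaced zero/pole pairs across a frequency band and checks the magnitude at the band center:

```python
import numpy as np

def oustaloup(alpha, w_b, w_h, N):
    """Oustaloup recursive approximation of s**alpha over [w_b, w_h] (rad/s):
    2N+1 first-order zero/pole pairs plus a gain K, so that
    s**alpha ~ K * prod_k (s + z_k) / (s + p_k)."""
    k = np.arange(-N, N + 1)
    r = w_h / w_b
    zeros = w_b * r ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
    poles = w_b * r ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
    K = w_h ** alpha
    return K, zeros, poles

def magnitude(K, zeros, poles, w):
    """Magnitude of the product of first-order sections at frequency w."""
    s = 1j * w
    return abs(K * np.prod((s + zeros) / (s + poles)))

# Approximate s**0.5 over four decades and test at the band's geometric centre,
# where the exact magnitude is w**0.5.
K, z, p = oustaloup(0.5, 1e-2, 1e2, 4)
w_mid = 1.0
approx, exact = magnitude(K, z, p, w_mid), w_mid ** 0.5
print(abs(approx - exact) / exact < 0.05)
```

Each zero/pole pair is one first order transfer function, so the product is directly realizable for time domain simulation, which is the point of the formulation described above.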

  10. Time Ordering in Frontal Lobe Patients: A Stochastic Model Approach

    Science.gov (United States)

    Magherini, Anna; Saetti, Maria Cristina; Berta, Emilia; Botti, Claudio; Faglioni, Pietro

    2005-01-01

    Frontal lobe patients reproduced a sequence of capital letters or abstract shapes. Immediate and delayed reproduction trials allowed the analysis of short- and long-term memory for time order by means of suitable Markov chain stochastic models. Patients were as proficient as healthy subjects on the immediate reproduction trial, thus showing spared…

  11. Reduced order modelling and predictive control of multivariable ...

    Indian Academy of Sciences (India)

    Anuj Abraham

    2018-03-16

    The performance of the constraint generalized predictive control scheme is found to be superior to that of the conventional PID controller in terms of overshoot, settling time and performance indices, mainly ISE, IAE and MSE. Keywords: predictive control; distillation column; reduced order model; dominant pole; ...

  12. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, since it is presented as a reduced order bilinear state space model, the well established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which were performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.

  13. Modeling extreme events: Sample fraction adaptive choice in parameter estimation

    Science.gov (United States)

    Neves, Manuela; Gomes, Ivette; Figueiredo, Fernanda; Gomes, Dora Prata

    2012-09-01

    When modeling extreme events there are a few primordial parameters, among which we refer to the extreme value index and the extremal index. The extreme value index measures the right tail-weight of the underlying distribution and the extremal index characterizes the degree of local dependence in the extremes of a stationary sequence. Most of the semi-parametric estimators of these parameters show the same type of behaviour: nice asymptotic properties, but a high variance for small values of k, the number of upper order statistics to be used in the estimation, and a high bias for large values of k. This shows a real need for a careful choice of k. Choosing some well-known estimators of those parameters, we revisit the application of a heuristic algorithm for the adaptive choice of k. The procedure is applied to some simulated samples as well as to some real data sets.
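    The variance-bias trade-off in k can be seen with the classical Hill estimator of a positive extreme value index on a synthetic Pareto sample (illustrative only; the paper considers a broader family of estimators and an adaptive rule for k):

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of a positive extreme value index gamma, using the
    k largest order statistics of the sample."""
    x = np.sort(sample)[::-1]                    # descending order statistics
    return np.mean(np.log(x[:k])) - np.log(x[k])

# Pareto-distributed sample with true extreme value index gamma = 0.5:
# numpy's pareto(a) is Lomax; adding 1 gives survival x**(-a) on x >= 1.
rng = np.random.default_rng(42)
gamma = 0.5
sample = rng.pareto(1.0 / gamma, size=5000) + 1.0

# Small k gives a noisy estimate; for heavier departures from the Pareto tail,
# large k would also introduce bias. A moderate k works well here.
est = hill_estimator(sample, 200)
print(round(est, 3))
```

For a pure Pareto tail the Hill estimator is unbiased with standard deviation roughly gamma/sqrt(k), which is exactly the small-k variance problem the adaptive choice of k is meant to address.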

  14. Estimation of ibuprofen and famotidine in tablets by second order derivative spectrophotometry method

    Directory of Open Access Journals (Sweden)

    Dimal A. Shah

    2017-02-01

    Full Text Available A simple and accurate method for the analysis of ibuprofen (IBU) and famotidine (FAM) in their combined dosage form was developed using second order derivative spectrophotometry. IBU and FAM were quantified using the second derivative responses at 272.8 nm and 290 nm in the spectra of their solutions in methanol. The calibration curves were linear in the concentration ranges of 100–600 μg/mL for IBU and 5–25 μg/mL for FAM. The method was validated and found to be accurate and precise. The developed method was successfully applied for the estimation of IBU and FAM in their combined dosage form.

  15. Integrable higher order deformations of Heisenberg supermagnetic model

    International Nuclear Information System (INIS)

    Guo Jiafeng; Yan Zhaowen; Wang Shikun; Wu Ke; Zhao Weizhong

    2009-01-01

    The Heisenberg supermagnet model is an integrable supersymmetric system and has a close relationship with the strongly correlated Hubbard model. In this paper, we investigate the integrable higher order deformations of Heisenberg supermagnet models with two different constraints: (i) S² = 3S − 2I for S ∈ USPL(2/1)/S(U(2)×U(1)) and (ii) S² = S for S ∈ USPL(2/1)/S(L(1/1)×U(1)). In terms of the gauge transformation, their corresponding gauge equivalent counterparts are derived.

  16. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.

  17. Computational Analysis of Complex Population Dynamical Model with Arbitrary Order

    Directory of Open Access Journals (Sweden)

    Fazal Haq

    2018-01-01

    Full Text Available This paper considers the approximation of the solution of a fractional order biological population model. The fractional derivative is considered in the Caputo sense. By using the Laplace Adomian decomposition method (LADM), we construct a base function and provide the higher-order deformation equation in a simple form. The considered scheme gives a solution in the form of a rapidly convergent infinite series. Some examples are used to show the efficiency of the method. The results show that LADM is efficient and accurate for solving such nonlinear problems.

  18. Wave Height Estimation from First-Order Backscatter of a Dual-Frequency High Frequency Radar

    Directory of Open Access Journals (Sweden)

    Yingwei Tian

    2017-11-01

    Full Text Available Second-order scattering based wave height measurement with high-frequency (HF) radar has long been subject to problems such as distance limitation and external interference, especially under low or moderate sea states. The performance is further degraded for a compact system with small antennas. First-order Bragg scattering has been investigated to relate wave height to the stronger Bragg backscatter, but calibrating the echo power along distance and direction is challenging. In this paper, a new method is presented to deal with the calibration and improve Bragg scattering based wave height estimation from dual-frequency radar data. The relative difference in propagation attenuation and directional spreading between the two operating frequencies has been found to be identifiable along range and almost independent of direction, and it is employed to effectively reduce the fitting requirements of in situ wave buoys. A 20-day experiment was performed over the Taiwan Strait of China to validate this method. Comparison of wave heights measured by radar and buoys at distances of 15 km and 70 km shows that the root-mean-square errors are 0.34 m and 0.56 m, respectively, with correlation coefficients of 0.82 and 0.84.

  19. First-order estimate of the planktic foraminifer biomass in the modern ocean

    Directory of Open Access Journals (Sweden)

    R. Schiebel

    2012-09-01

    Full Text Available Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different compared to other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. With a new non-destructive protocol developed from the bicinchoninic acid (BCA) method and nano-photospectrometry, we have analysed the protein-biomass, along with test size and weight, of 754 individual planktic foraminifers from 21 different species and morphotypes. From additional CHN analysis, it can be assumed that protein-biomass equals carbon-biomass. Accordingly, the average individual planktic foraminifer protein- and carbon-biomass amounts to 0.845 μg. Samples include symbiont-bearing and symbiont-barren species from the sea surface down to 2500 m water depth. Conversion factors between individual biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum test diameter). Assemblage-biomass data presented here include 1128 sites and water depth intervals. The regional coverage of data includes the North Atlantic, Arabian Sea, Red Sea, and Caribbean as well as literature data from the eastern and western North Pacific, and covers a wide range of oligotrophic to eutrophic waters over six orders of magnitude of planktic-foraminifer assemblage-biomass (PFAB). A first order estimate of the average global planktic foraminifer biomass production (>125 μm ranges from 8.2–32.7 Tg C yr⁻¹ (i.e. 0.008–0.033 Gt C yr⁻¹, and might be more than three times as high including neanic and juvenile individuals, adding up to 25–100 Tg C yr⁻¹. However, this is a first

  20. A Model of Gear Transmission: Fractional Order System Dynamics

    Directory of Open Access Journals (Sweden)

    Katica (Stevanović) Hedrih

    2010-01-01

    Full Text Available A theoretical model of multistep gear transmission dynamics is presented. This model is based on the assumption that the connection between the teeth of the gears has properties ranging from ideally classical to viscoelastic, so that a new model of the connection between the teeth is expressed by means of a derivative of fractional order. For this model a two-step gear transmission with three degrees of freedom of motion has been used. The obtained solutions are in the analytic form of an expansion in time. As boundary cases this model gives results for the case of an ideally elastic connection of the gear teeth and for the case of a viscoelastic connection of the gear teeth as well. Fractional eigenmodes are obtained and a visualization is performed.

  1. Application of Higher Order Fission Matrix for Real Variance Estimation in McCARD Monte Carlo Eigenvalue Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-05-15

    In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally such as pin power differs considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method. In the power iteration method, the fission matrix (FM) and fission source density (FSD) are the operator and the solution. The FM is useful for estimating a variance and covariance because the FM can be calculated from a few cycle calculations, even at inactive cycles. Recently, S. Carney has implemented higher order fission matrix (HOFM) capabilities into the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability via the Hotelling deflation method was implemented into McCARD and used to predict the behavior of the real to apparent SD ratio. In simple 1D slab problems, Endo's theoretical model predicts the real to apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher mode FS solutions from the HOFM yields a much better real to apparent SD ratio than that with the analytic solutions. In the near future, the application to a high dominance ratio problem such as the BEAVRS benchmark will be conducted.
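    The role of the fission matrix as the power-iteration operator can be sketched on a small hypothetical matrix; the dominance ratio extracted below is the quantity that drives inter-cycle source correlation and hence the gap between apparent and real variance (the 3-region matrix is invented for illustration, not from the study):

```python
import numpy as np

def power_iteration(F, n_iter=500):
    """Power iteration for the fundamental eigenpair of a fission matrix F:
    returns k_eff (dominant eigenvalue) and the fission source density."""
    s = np.ones(F.shape[0])
    for _ in range(n_iter):
        s = F @ s
        s /= np.linalg.norm(s)
    k_eff = s @ (F @ s)          # Rayleigh quotient
    return k_eff, s

# Hypothetical 3-region fission matrix: entry (i, j) is the expected number of
# next-generation fission neutrons born in region i per neutron born in j.
F = np.array([[0.90, 0.20, 0.05],
              [0.20, 0.80, 0.20],
              [0.05, 0.20, 0.90]])

k_eff, fsd = power_iteration(F)
eigs = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
dominance_ratio = eigs[1] / eigs[0]   # closer to 1 => stronger cycle correlation
print(np.isclose(k_eff, eigs[0]), round(dominance_ratio, 3))
```

Higher-mode eigenpairs of F (here obtainable by deflation, as with the Hotelling approach mentioned above) are exactly what HOFM methods use to model the real to apparent variance ratio.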

  2. Identification of the reduced order models of a BWR reactor

    International Nuclear Information System (INIS)

    Hernandez S, A.

    2004-01-01

    The objective of the present work is to analyze the relative stability of a BWR-type reactor. It examines how suitable it is to identify the parameters of a reduced order model so that the model reproduces a given instability condition. The case considered is a real event that occurred at the La Salle plant under certain power and coolant-flow operating conditions. The parametric identification is carried out by means of a recursive least squares algorithm and an Output Error model, measuring the output power of the reactor while the instability is present, and assuming that it is produced by a step-like change in the reactivity of the system. An analytic comparison of the relative stability is also carried out by examining two responses: the original unstable response of the reactor versus the response obtained by identifying the parameters of the reduced order model. The conclusion is that it is quite viable to fit a reduced order model to study the stability of a reactor, under the single assumption that the reactivity dynamics is of step type. (Author)
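    A minimal sketch of the identification step is given below: recursive least squares fitted to a second-order discrete plant excited by a step input (for simplicity an ARX regressor is used here, whereas the paper uses an Output Error model structure; all coefficients are invented for illustration):

```python
import numpy as np

def rls_identify(u, y, n_params=4, lam=1.0):
    """Recursive least squares for y[t] = theta . phi[t] with the regressor
    phi[t] = [y[t-1], y[t-2], u[t-1], u[t-2]] (second-order ARX structure)."""
    theta = np.zeros(n_params)
    P = 1e6 * np.eye(n_params)                  # large initial covariance
    for t in range(2, len(y)):
        phi = np.array([y[t-1], y[t-2], u[t-1], u[t-2]])
        k = P @ phi / (lam + phi @ P @ phi)     # gain vector
        theta = theta + k * (y[t] - phi @ theta)
        P = (P - np.outer(k, phi @ P)) / lam    # covariance update
    return theta

# Simulate a stable second-order plant driven by a step input (standing in for
# the assumed step-like reactivity change) plus a small probing signal.
rng = np.random.default_rng(3)
a1, a2, b1, b2 = 1.5, -0.7, 1.0, 0.5
u = np.ones(500) + 0.1 * rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = a1 * y[t-1] + a2 * y[t-2] + b1 * u[t-1] + b2 * u[t-2]

theta = rls_identify(u, y)
print(np.allclose(theta, [a1, a2, b1, b2], atol=1e-4))
```

In the noise-free case RLS recovers the plant coefficients exactly; the identified denominator poles then give the relative stability measure discussed in the record.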

  3. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫_0^1 h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  4. Advanced Fluid Reduced Order Models for Compressible Flow.

    Energy Technology Data Exchange (ETDEWEB)

    Tezaur, Irina Kalashnikova [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maddix, Danielle [Stanford Univ., CA (United States); Mussoni, Erin E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Balajewicz, Maciej [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2017-09-01

    This report summarizes fiscal year (FY) 2017 progress towards developing and implementing, within the SPARC in-house finite volume flow solver, advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.

  5. Robust estimation of hydrological model parameters

    Directory of Open Access Journals (Sweden)

    A. Bárdossy

    2008-11-01

    Full Text Available The estimation of hydrological model parameters is a challenging task. With increasing computational power, several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data, and the quality of input data cannot be assured, as there may be measurement errors in both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector: the erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of each of N randomly generated parameter vectors was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study). Based on the depth of the parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
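    Tukey's half-space depth, used above to select robust parameter vectors, can be approximated by minimizing over random projection directions; the sketch below is a generic approximation scheme, not necessarily the paper's exact computation:

```python
import numpy as np

def tukey_depth(point, cloud, n_dirs=2000, seed=0):
    """Approximate Tukey half-space depth of `point` within `cloud` (rows are
    parameter vectors): the minimum, over directions, of the fraction of
    points in the closed half-space whose boundary passes through `point`."""
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_dirs, cloud.shape[1]))
    proj = cloud @ dirs.T            # (n_points, n_dirs) projected cloud
    pp = dirs @ point                # (n_dirs,) projected query point
    frac_ge = np.mean(proj >= pp, axis=0)
    frac_le = np.mean(proj <= pp, axis=0)
    return float(np.minimum(frac_ge, frac_le).min())

# Deep (central) points get high depth; outlying points get depth near zero.
rng = np.random.default_rng(7)
cloud = rng.standard_normal((1000, 2))       # stand-in for parameter vectors
d_center = tukey_depth(np.zeros(2), cloud)
d_outlier = tukey_depth(np.array([5.0, 5.0]), cloud)
print(d_center > d_outlier)
```

Parameter vectors with high depth sit centrally within the cloud of well-performing vectors, which is exactly the robustness criterion described above.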

  6. MODELING THE SELF-ASSEMBLY OF ORDERED NANOPOROUS MATERIALS

    Energy Technology Data Exchange (ETDEWEB)

    Monson, Peter [University of Massachusetts; Auerbach, Scott [University of Massachusetts

    2017-11-13

    This report describes progress on a collaborative project on the multiscale modeling of the assembly processes in the synthesis of nanoporous materials. Such materials are of enormous importance in modern technology, with applications in the chemical process industries, biomedicine and biotechnology as well as microelectronics. The project focuses on two important classes of materials: i) microporous crystalline materials, such as zeolites, and ii) ordered mesoporous materials. In the first case the pores are part of the crystalline structure, while in the second the structures are amorphous on the atomistic length scale but surfactant templating gives rise to order on the length scale of 2 - 20 nm. We have developed a modeling framework that encompasses both these kinds of materials. Our models focus on the assembly of corner-sharing silica tetrahedra in the presence of structure directing agents. We emphasize a balance between sufficient realism in the models and computational tractability given the complex many-body phenomena. We use both on-lattice and off-lattice models, and the primary computational tools are Monte Carlo simulations with sampling techniques and ensembles appropriate to specific situations. Our modeling approach is the first to capture silica polymerization, nanopore crystallization, and mesopore formation through computer-simulated self assembly.

  7. Perspectives on the application of order-statistics in best-estimate plus uncertainty nuclear safety analysis

    International Nuclear Information System (INIS)

    Martin, Robert P.; Nutt, William T.

    2011-01-01

    Research highlights: → Historical recitation on the application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. → Interpretation of regulatory language regarding the 10 CFR 50.46 reference to a 'high level of probability'. → Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. → Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability.' At the root of the second question is whether one should recognize a probability statement based on the tolerance-region method of Wald and Guba et al. for multi-variate problems, one explicitly based on the regulatory limits (best articulated in the Wallis-Nutt 'Testing Method'), or something else entirely. This paper reviews the origins of the different positions, their key assumptions, limitations, and relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
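In its first-order uni-variate form, the order-statistics argument reduces to Wilks' formula: with N runs, the confidence that the largest output bounds the γ-quantile is 1 − γ^N, giving the familiar 59 runs at 95/95; using the r-th largest generalizes this through a binomial tail. A sketch (95/95 is used here as the common regulatory choice, not as a claim about any specific licensed methodology):

```python
from scipy.stats import binom

def wilks_confidence(n, r=1, gamma=0.95):
    """Confidence that the r-th largest of n i.i.d. runs exceeds the
    gamma-quantile: P(at least r samples above it), Binomial(n, 1 - gamma)."""
    return float(binom.sf(r - 1, n, 1.0 - gamma))

def min_runs(r=1, gamma=0.95, conf=0.95):
    """Smallest n meeting the confidence requirement."""
    n = r
    while wilks_confidence(n, r, gamma) < conf:
        n += 1
    return n

n_first, n_second = min_runs(1), min_runs(2)   # 59 and 93 runs at 95/95
```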

  8. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
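The instrumental-variable idea for error-prone exposures can be shown in a deliberately simplified cross-sectional toy: a replicate measurement of the exposure serves as instrument for the error-prone regressor, undoing the attenuation bias of naive regression. This illustrates only the general principle on simulated numbers, not the authors' longitudinal estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_true = rng.normal(size=n)                 # latent exposure
w1 = x_true + 0.5 * rng.normal(size=n)      # error-prone measure (regressor)
w2 = x_true + 0.5 * rng.normal(size=n)      # replicate measure (instrument)
y = 2.0 * x_true + rng.normal(size=n)       # outcome, true slope 2

# Naive slope is attenuated by measurement error; IV slope is not
beta_ols = np.cov(w1, y)[0, 1] / np.var(w1)
beta_iv = np.cov(w2, y)[0, 1] / np.cov(w2, w1)[0, 1]
```

With independent errors, the naive slope converges to 2·var(x)/(var(x)+var(u)) = 1.6 here, while the IV ratio recovers the true slope of 2.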

  9. Robust estimation of errors-in-variables models using M-estimators

    Science.gov (United States)

    Guo, Cuiping; Peng, Junhuan

    2017-07-01

    The traditional errors-in-variables (EIV) models are widely adopted in applied sciences. The EIV model estimators, however, can be highly biased by gross errors. This paper focuses on robust estimation in EIV models. A new class of robust estimators, called robust weighted total least squares estimators (RWTLS), is introduced. Robust estimators of the parameters of the EIV models are derived from M-estimators and the Lagrange multiplier method. A simulated example is carried out to demonstrate the performance of the presented RWTLS. The result shows that the RWTLS algorithm can indeed resist gross errors to achieve a reliable solution.
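The RWTLS idea can be sketched by combining the SVD solution of total least squares with iteratively reweighted Huber (M-estimator) weights. This is an illustrative approximation of the approach, not the paper's exact Lagrange-multiplier derivation, and the data are simulated:

```python
import numpy as np

def tls(A, y):
    """Ordinary total least squares via SVD of the augmented matrix [A y]."""
    _, _, Vt = np.linalg.svd(np.column_stack([A, y]))
    v = Vt[-1]
    return -v[:-1] / v[-1]

def robust_wtls(A, y, c=1.345, iters=30):
    """Illustrative iteratively reweighted TLS with Huber weights."""
    x = tls(A, y)
    for _ in range(iters):
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)      # Huber weight function
        sw = np.sqrt(w)
        x = tls(A * sw[:, None], y * sw)      # row-weighted TLS
    return x

# Simulated EIV-style example with one gross error (hypothetical data)
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2))
x_true = np.array([2.0, -1.0])
y = A @ x_true + 0.01 * rng.normal(size=100)
y[0] += 50.0                                  # gross error in one observation
x_rob, x_plain = robust_wtls(A, y), tls(A, y)
```

The Huber weight leaves small residuals untouched and strongly downweights the gross error, so the reweighted fit stays near the true parameters while plain TLS is pulled off by the outlier.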

  10. HOKF: High Order Kalman Filter for Epilepsy Forecasting Modeling.

    Science.gov (United States)

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2017-08-01

    Epilepsy forecasting has been extensively studied using high-order time series obtained from scalp-recorded electroencephalography (EEG). An accurate seizure prediction system would not only help significantly improve patients' quality of life, but would also facilitate new therapeutic strategies to manage epilepsy. This paper thus proposes an improved Kalman Filter (KF) algorithm to mine seizure forecasts from neural activity by modeling three properties of the high-order EEG time series: noise, temporal smoothness, and tensor structure. The proposed High-Order Kalman Filter (HOKF) is an extension of the standard Kalman filter, for which higher-order modeling is limited. The efficient dynamics of the HOKF system preserve the tensor structure of the observations and latent states. As such, the proposed method offers two main advantages: (i) effectiveness, in that HOKF yields hidden variables that capture major evolving trends and are suitable for predicting neural activity, even in the presence of missing values; and (ii) scalability, in that the wall-clock time of the HOKF is linear with respect to the number of time slices of the sequence. The HOKF algorithm is examined in terms of its effectiveness and scalability by conducting forecasting and scalability experiments with a real epilepsy EEG dataset. The results of the simulation demonstrate the superiority of the proposed method over the original Kalman Filter and other existing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
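For reference, the standard first-order Kalman filter that HOKF extends alternates a model-based predict step with an innovation-driven update. A self-contained toy (the constant-velocity track is a stand-in signal; matrices and noise levels are invented, not taken from the paper):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard (vector-valued) Kalman filter
    that HOKF generalizes to tensor-valued states."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)     # innovation-driven correction
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a noisy constant-velocity signal (toy stand-in for an EEG feature)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(0)
truth = 0.5 * np.arange(50)
for z in truth + 0.5 * rng.normal(size=50):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```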

  11. Modeling the assembly order of multimeric heteroprotein complexes.

    Science.gov (United States)

    Peterson, Lenna X; Togawa, Yoichiro; Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Roy, Amitava; Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified.
As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be

  12. Modeling the assembly order of multimeric heteroprotein complexes.

    Directory of Open Access Journals (Sweden)

    Lenna X Peterson

    2018-01-01

    Full Text Available Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified.
As the first method of its kind, Path-LZerD opens a new area of computational protein structure

  13. Comparison of different models for non-invasive FFR estimation

    Science.gov (United States)

    Mirramezani, Mehran; Shadden, Shawn

    2017-11-01

    Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.

  14. Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model

    KAUST Repository

    Mo, Qianxing

    2010-01-29

    ChIP-chip experiments are procedures that combine chromatin immunoprecipitation (ChIP) and DNA microarray (chip) technology to study a variety of biological problems, including protein-DNA interaction, histone modification, and DNA methylation. The most important feature of ChIP-chip data is that the intensity measurements of probes are spatially correlated because the DNA fragments are hybridized to neighboring probes in the experiments. We propose a simple, but powerful Bayesian hierarchical approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic resolutions. The model parameters are estimated using the Gibbs sampler. The proposed method is illustrated using two publicly available data sets from Affymetrix and Agilent platforms, and compared with three alternative Bayesian methods, namely, Bayesian hierarchical model, hierarchical gamma mixture model, and Tilemap hidden Markov model. The numerical results indicate that the proposed method performs as well as the other three methods for the data from Affymetrix tiling arrays, but significantly outperforms the other three methods for the data from Agilent promoter arrays. In addition, we find that the proposed method has better operating characteristics in terms of sensitivities and false discovery rates under various scenarios. © 2010, The International Biometric Society.
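The Gibbs-sampling machinery used for estimation can be illustrated on the classical nearest-neighbour Ising model; the paper's model adds higher-order interaction terms over the probe neighbourhood, which the same conditional-update scheme accommodates. A minimal sketch with hypothetical lattice size and inverse temperature, started in the ordered phase:

```python
import numpy as np

def gibbs_ising(n=16, beta=0.8, sweeps=200, seed=0):
    """Gibbs sampler for a nearest-neighbour 2D Ising model on a periodic
    lattice: each spin is resampled from its full conditional given its
    four neighbours."""
    rng = np.random.default_rng(seed)
    s = np.ones((n, n), dtype=int)       # start in the ordered phase
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
                      + s[i, (j + 1) % n] + s[i, (j - 1) % n])
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

spins = gibbs_ising()
magnetization = abs(spins.mean())   # stays high below the critical temperature
```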

  15. Using publicly available GPS solutions for fast estimations of first-order source details from coseismic deformations

    Directory of Open Access Journals (Sweden)

    Antonio Piersanti

    2011-08-01

    Full Text Available We here explore the potential use of publicly available GPS solutions to obtain first-order constraints on a source model immediately following an earthquake, within the limits of GPS solution timeliness and near-field coverage. We use GPS solutions from the Scripps Orbit and Permanent Array Center to carry out simple inversions of the coseismic displacement field induced by the 2010 Maule earthquake (Chile), by inferring the seismic moment and the rake angle of a fixed-geometry seismic source. The rake angle obtained from the inversion (m = 117.8˚) is consistent with seismological estimates. The seismic moment, which corresponds to a moment magnitude MW = 8.9, is about 1.6 times greater than seismological estimates. This suggests that, as in other recent megathrust events, a considerable fraction of the energy was released aseismically. In this respect, the additional information obtained from GPS can help to provide a better estimate of the weight of the aseismic contribution to the energy release.
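The magnitude figures above follow from the Hanks-Kanamori moment-magnitude relation, which also shows how the reported factor-1.6 moment excess maps to a magnitude difference of about 0.14 units:

```python
import math

def moment_magnitude(m0):
    """Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def moment_from_magnitude(mw):
    """Inverse relation: M0 = 10**(1.5 * Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

m0_gps = moment_from_magnitude(8.9)        # GPS-derived moment from the study
mw_seis = moment_magnitude(m0_gps / 1.6)   # implied seismological magnitude
```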

  16. Long range ordering in model Ni-Cr-X alloys

    International Nuclear Information System (INIS)

    Young, G.A.; Eno, D.R.

    2015-01-01

    Nickel-chromium alloys are used throughout commercial nuclear power systems due to their desirable combination of corrosion resistance and mechanical properties. However, some Ni-Cr alloys can undergo long range ordering (LRO), forming the Ni2Cr phase when exposed to temperatures below 590 °C. LRO results in lattice contraction, hardening, and a change in slip mode, which, in turn, can cause dimensional changes, internal stress, and appreciable embrittlement. Despite the technological importance of this alloy system, the variables that influence LRO are not well understood and the time-temperature-transformation kinetics are poorly defined. In order to assess the risk of LRO in nuclear power systems, the present research uses model Ni-Cr alloys and ageing times up to 10000 hours to define the kinetics of LRO and to assess the effects of cold work, quench rate, and alloying additions. Results show that the hardening caused by ordering is well described by the Kolmogorov-Johnson-Mehl-Avrami (KJMA) equation with an Avrami exponent n near 0.65 and an apparent activation energy that depends on the starting condition of the alloy. Furnace-cooled samples displayed Q ∼ 244 kJ/mol, which suggests bulk diffusional growth of the ordered phase, while water-quenched samples exhibited Q ∼ 147 kJ/mol, indicating that excess vacancies accelerate ordering. Cold work (10% or 20%) acts to disrupt any ordering that forms on furnace cooling but has no apparent effect on the apparent activation energy or Avrami exponent. Iron additions decrease the temperature below which the ordered phase is stable but do not appear to affect the rate of ordering. Investigation of other alloying additions suggests that molybdenum (∼ 2.47 wt.%) may accelerate ordering but the other elements studied (Si up to 0.28 wt.%, Mn up to 0.19 wt.%, and Nb up to 2.38 wt.%) have little influence. These findings, combined with a review of LRO in commercial alloys, indicate that LRO can develop over a wide
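The KJMA description of the ordering kinetics, X(t) = 1 − exp(−(kt)^n), can be fitted to transformed-fraction data with the standard double-log linearization. A sketch using the reported exponent n ≈ 0.65 and a hypothetical rate constant:

```python
import numpy as np

def kjma_fraction(t, k, n):
    """Transformed fraction X(t) = 1 - exp(-(k t)**n) (KJMA equation)."""
    return 1.0 - np.exp(-(k * t) ** n)

def fit_avrami(t, X):
    """Estimate n and k from the linearized form
    ln(-ln(1 - X)) = n ln(t) + n ln(k), by least squares."""
    y = np.log(-np.log(1.0 - X))
    A = np.column_stack([np.log(t), np.ones_like(t)])
    slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
    return slope, np.exp(intercept / slope)   # n, k

# Synthetic ageing data: n = 0.65 as reported, k chosen arbitrarily
t = np.linspace(10.0, 10000.0, 200)           # ageing time (hours)
X = kjma_fraction(t, k=1e-3, n=0.65)
n_hat, k_hat = fit_avrami(t, X)
```

Temperature dependence enters through an Arrhenius factor in k, which is how the apparent activation energies Q are extracted from isothermal fits.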

  17. AMEM-ADL Polymer Migration Estimation Model User's Guide

    Science.gov (United States)

    The user's guide for the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.

  18. Benefit Estimation Model for Tourist Spaceflights

    Science.gov (United States)

    Goehlich, Robert A.

    2003-01-01

    It is believed that the only potential means for significant reduction of the recurrent launch cost, which would result in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in this paper that allows one to estimate the benefits to be expected from a new space venture. The main objective of human space exploration is determined in this study to be to ``improve the quality of life''. This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which are usable for the optimisation process of reusable launch vehicles.

  19. [Research on high-order Windkessel model for assessing vascular compliance].

    Science.gov (United States)

    Ren, Yinzi; Xu, Jing; Gong, Shijin; Li, Li; Hu, Qijun; Yan, Jing; Ning, Gangmin

    2011-04-01

    In this paper, we propose the construction of a fifth-order Windkessel model and give complete mathematical solutions for it. Using diastolic pulse wave analysis, we derived the parameters of the mathematical model. The parameters were further applied to estimate arterial compliance, blood flow inertia, peripheral resistance and other indices. With simulation tools we assessed the validity of the model and built a simulation circuit with the model parameters R, C and L obtained from the high-order Windkessel model. The stroke volume of the left ventricle was employed as the input of the simulation circuit, and the response signal obtained at the end of the circuit was compared with the measured pulse waveform. The results show that the fifth-order Windkessel model is superior to the third-order Windkessel model in pulse wave fitting and stability, and thus better reflects the role of microvessels in the circulatory system.
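The role of compliance and peripheral resistance is visible already in the minimal two-element Windkessel, C dP/dt = Q(t) − P/R (the paper's fifth-order model adds inertial and further storage elements). A forward-Euler sketch with hypothetical hemodynamic values:

```python
import numpy as np

def windkessel2(q, dt, R=1.2, C=1.4, p0=75.0):
    """Two-element Windkessel, C dP/dt = Q(t) - P/R, integrated with
    forward Euler. R: peripheral resistance, C: arterial compliance."""
    p = np.empty(len(q))
    p[0] = p0
    for i in range(1, len(q)):
        p[i] = p[i - 1] + dt * (q[i - 1] - p[i - 1] / R) / C
    return p

# One cardiac cycle: systolic ejection followed by diastolic decay
dt = 0.001
t = np.arange(0.0, 0.8, dt)
q = np.where(t < 0.3, 120.0 * np.sin(np.pi * t / 0.3), 0.0)  # inflow (mL/s)
p = windkessel2(q, dt)
```

During diastole (Q = 0) the pressure decays exponentially with time constant RC, which is the behaviour exploited by diastolic pulse wave analysis.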

  20. An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions

    Energy Technology Data Exchange (ETDEWEB)

    Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.

    2016-07-01

    Passengers’ behavioural intentions after experiencing transit services can be viewed as signals that show whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered together with the concepts of service quality and customer satisfaction. Given how passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted in order to collect the opinions of the passengers about the existing transit service, and to obtain a measure of the aspects that can influence the intentions of the users to continue using the transit service in the future. (Author)
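An ordered probit of the kind proposed here can be estimated by maximum likelihood with thresholds parameterized to stay increasing. A self-contained sketch on simulated data (one latent predictor, three ordered categories; all coefficients are invented, not the Seville LRT estimates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def ordered_probit_nll(params, X, y):
    """Negative log-likelihood of an ordered probit. Thresholds are kept
    increasing via tau_j = tau_1 + cumsum(exp(free parameters))."""
    k = X.shape[1]
    beta = params[:k]
    tau = np.concatenate([[params[k]],
                          params[k] + np.cumsum(np.exp(params[k + 1:]))])
    cuts = np.concatenate([[-np.inf], tau, [np.inf]])
    xb = X @ beta
    p = norm.cdf(cuts[y + 1] - xb) - norm.cdf(cuts[y] - xb)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

# Simulate: latent y* = x + eps, categories cut at -0.5 and 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1))
ystar = X[:, 0] + rng.normal(size=2000)
y = np.digitize(ystar, [-0.5, 0.5])       # ordered categories 0, 1, 2
x0 = np.zeros(3)                          # beta, tau_1, log-gap to tau_2
res = minimize(ordered_probit_nll, x0, args=(X, y), method="BFGS")
beta_hat = res.x[0]                       # should recover ~1.0
```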

  1. Probabilistic Rotor Life Assessment Using Reduced Order Models

    Directory of Open Access Journals (Sweden)

    Brian K. Beachkofski

    2009-01-01

    Full Text Available Probabilistic failure assessments for integrally bladed disks are system reliability problems where a failure in at least one blade constitutes a rotor system failure. Turbine engine fan and compressor blade life is dominated by High Cycle Fatigue (HCF initiated either by pure HCF or Foreign Object Damage (FOD. To date performing an HCF life assessment for the entire rotor system has been too costly in analysis time to be practical. Although the substantial run-time has previously precluded a full-rotor probabilistic analysis, reduced order models make this process tractable as demonstrated in this work. The system model includes frequency prediction, modal stress variation, mistuning amplification, FOD effect, and random material capability. The model has many random variables which are most easily handled through simple random sampling.
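The series-system logic, where the rotor fails if any single blade fails, combines directly with simple random sampling. A sketch in which the blade count and the lognormal stress/capability distributions are hypothetical stand-ins for the reduced-order model's outputs:

```python
import numpy as np

def rotor_failure_probability(n_blades=24, n_samples=100_000, seed=0):
    """Series-system reliability by simple random sampling: a blade fails
    when its sampled stress exceeds its sampled capability; the rotor
    fails if at least one blade fails."""
    rng = np.random.default_rng(seed)
    stress = rng.lognormal(mean=0.0, sigma=0.25, size=(n_samples, n_blades))
    capability = rng.lognormal(mean=1.0, sigma=0.25, size=(n_samples, n_blades))
    blade_fail = stress > capability
    return blade_fail.mean(), blade_fail.any(axis=1).mean()

p_blade, p_system = rotor_failure_probability()
```

With independent blades, the system estimate should agree with 1 − (1 − p_blade)^n, which is why a small per-blade probability still produces an appreciable rotor-level risk.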

  2. Topological order in an exactly solvable 3D spin model

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Leemhuis, Bernhard; Terhal, Barbara M.

    2011-01-01

    Research highlights: → We study an exactly solvable spin model with six-qubit nearest-neighbor interactions on a 3D face-centered cubic lattice. → The ground space of the model exhibits topological quantum order. → Elementary excitations can be geometrically described as the corners of rectangular-shaped membranes. → The ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. → Logical operators acting on the encoded qubits are described in terms of closed strings and closed membranes. - Abstract: We study a 3D generalization of the toric code model introduced recently by Chamon. This is an exactly solvable spin model with six-qubit nearest-neighbor interactions on an FCC lattice whose ground space exhibits topological quantum order. The elementary excitations of this model, which we call monopoles, can be geometrically described as the corners of rectangular-shaped membranes. We prove that the creation of an isolated monopole separated from other monopoles by a distance R requires an operator acting on Ω(R²) qubits. Composite particles that consist of two monopoles (dipoles) and four monopoles (quadrupoles) can be described as end-points of strings. A peculiar feature of the model is that dipole-type strings are rigid, that is, such strings must be aligned with face-diagonals of the lattice. For periodic boundary conditions the ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. We describe a complete set of logical operators acting on the encoded qubits in terms of closed strings and closed membranes.

  3. Groundwater temperature estimation and modeling using hydrogeophysics.

    Science.gov (United States)

    Nguyen, F.; Lesparre, N.; Hermans, T.; Dassargues, A.; Klepikova, M.; Kemna, A.; Caers, J.

    2017-12-01

    Groundwater temperature may be of use as a state-variable proxy for aquifer heat storage, for highlighting preferential flow paths, or for contaminant remediation monitoring. However, its estimation often relies on scarce temperature data collected in boreholes. Hydrogeophysical methods such as electrical resistivity tomography (ERT) and distributed temperature sensing (DTS) may provide more exhaustive spatial information on the bulk properties of interest than samples from boreholes. While a properly calibrated DTS reading provides direct measurements of the groundwater temperature in the well, ERT requires one to determine the fractional change of electrical conductivity per degree Celsius. One advantage of this petrophysical relationship is its relative simplicity: the fractional change is often found to be around 0.02 per degree Celsius, and represents mainly the variation of electrical resistivity due to the viscosity effect. However, in the presence of chemical and kinetic effects, the variation may also depend on the duration of the test and may neglect reactions occurring between the pore water and the solid matrix. Such effects are not expected to be important for low-temperature systems (<30 °C), at least for short experiments. In this contribution, we use different field experiments under natural and forced flow conditions to review developments in the joint use of DTS and ERT to map and monitor the temperature distribution within aquifers, to characterize aquifers in terms of heterogeneity, and to better understand processes. We show how temperature time-series measurements might be used to constrain the ERT inverse problem in space and time, and how combined ERT-derived and DTS estimates of temperature may be used together with hydrogeological modeling to provide predictions of the groundwater temperature field.
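Inverting the stated petrophysical relation, a fractional conductivity change of roughly 0.02 per degree Celsius, turns an ERT-derived bulk resistivity into a temperature estimate. A sketch with hypothetical reference values:

```python
def temperature_from_resistivity(rho, rho_ref, t_ref=10.0, alpha=0.02):
    """Invert the linear petrophysical relation
    sigma(T) = sigma(t_ref) * (1 + alpha * (T - t_ref)), with sigma = 1/rho
    and alpha ~ 0.02 per deg C (the value quoted in the text)."""
    return t_ref + (rho_ref / rho - 1.0) / alpha

# A 10% conductivity increase relative to the reference maps to about +5 deg C
t_est = temperature_from_resistivity(rho=100.0 / 1.1, rho_ref=100.0)
```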

  4. [Foundation of preoperative prognosis estimation model for glioblastoma multiforme].

    Science.gov (United States)

    Jiang, H H; Feng, G Y; Liu, D; Ren, X H; Cui, Y; Lin, S

    2017-08-15

    Objective: This study explored the preoperative prognostic factors of patients with glioblastoma multiforme (GBM) in order to propose a preoperative prognosis estimation model. Methods: The clinical data of 416 patients diagnosed with GBM in Beijing Tiantan Hospital, affiliated to Capital Medical University, from 2008 to 2015 were retrospectively reviewed. A total of nine factors (gender, age, duration of symptoms, preoperative epilepsy, preoperative muscle weakness, preoperative headache, preoperative KPS score, tumor location and tumor diameter) were enrolled in the survival analysis. The significant factors identified by Kaplan-Meier analysis were further entered into the multivariate Cox regression analysis. On the basis of the multivariate analysis results, a preoperative prognosis estimation model was established. Results: Univariate analysis showed that age ≥50 years, absence of preoperative epilepsy, tumor located in non-frontotemporal lobe, tumor diameter ≥6 cm and low preoperative KPS score were associated with poorer survival; of these, age ≥50 years, absence of preoperative epilepsy and tumor located in non-frontotemporal lobe were independent risk factors (P<0.05). The prognostic estimation model based on the independent risk factors divided the whole cohort into three subgroups with different survival (P<0.001). Conclusions: The more risk factors a patient had, the higher the score and the poorer the prognosis. Patients in the high-risk group had a lower gross total resection rate but a higher rate of postoperative complications, which suggests that aggressive resection is not suitable for high-risk patients.

  5. Towards a Wind Turbine Wake Reduced-Order Model

    Science.gov (United States)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc; Tutkun, Murat; Cal, Raúl Bayoán

    2017-11-01

    A reduced-order model for a wind turbine wake is sought for prediction and control. Basis functions from the proper orthogonal decomposition (POD) represent the spatially coherent turbulence structures in the wake; eigenvalues delineate the turbulence kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each mode coefficient. Tikhonov regularization is employed to recalibrate the dynamical system, reducing error in the modeled mode coefficients and adding stability to the system. The wakeROM is periodically reinitialized by relating the incoming turbulent velocity to the POD mode coefficients. A high-level view of the wakeROM provides a platform to discuss promising research directions, alternate processes that will enhance stability, and portability to control methods. NSF-ECCS-1032647, NSF-CBET-1034581, Research Council of Norway Project Number 231491.
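The POD pipeline described above (SVD of mean-subtracted snapshots, eigenvalue energies, back-projected time coefficients) can be sketched on a synthetic space-time field standing in for wake snapshots:

```python
import numpy as np

def pod(snapshots, r):
    """Proper orthogonal decomposition of a snapshot matrix
    (n_points x n_times): spatial modes, modal energies (eigenvalues),
    and back-projected time coefficients from the thin SVD."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :r]                        # spatial basis functions
    energy = s[:r] ** 2                     # energy content per mode
    coeffs = modes.T @ (snapshots - mean)   # mode amplitudes in time
    return mean, modes, energy, coeffs

# Synthetic field: two coherent structures plus weak noise
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
field = (np.outer(np.sin(x), np.cos(t))
         + 0.3 * np.outer(np.cos(2 * x), np.sin(3 * t))
         + 0.01 * rng.normal(size=(64, 200)))
mean, modes, energy, coeffs = pod(field, r=2)
recon = mean + modes @ coeffs               # rank-2 reconstruction
rel_err = np.linalg.norm(field - recon) / np.linalg.norm(field)
```

A handful of modes captures nearly all of the field's energy here, which is the premise that makes a low-dimensional wakeROM feasible.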

  6. Dynamic plant uptake modelling and mass flux estimation

    DEFF Research Database (Denmark)

    Rein, Arno; Bauer-Gottwein, Peter; Trapp, Stefan

    2011-01-01

    Plants significantly influence contaminant transport and fate. Important processes are uptake of soil and groundwater contaminants, as well as biodegradation in plants and their root zones. Models for the prediction of chemical uptake into plants are required for the set-up of mass balances in environmental systems at different scales. Feedback mechanisms between plants and hydrological systems can play an important role. However, they have received little attention to date. Here, a new model concept for dynamic plant uptake models applying analytical matrix solutions is presented, which can be coupled to groundwater transport simulation tools. Exemplary simulations of plant uptake were carried out in order to estimate chemical concentrations in the soil–plant–air system and the influence of plants on contaminant mass fluxes from soil to groundwater.

  7. Temporal aggregation in first order cointegrated vector autoregressive models

    DEFF Research Database (Denmark)

    La Cour, Lisbeth Funding; Milhøj, Anders

    We study aggregation - or sample frequencies - of time series, e.g. aggregation from weekly to monthly or quarterly time series. Aggregation usually gives shorter time series, but spurious phenomena, in e.g. daily observations, can on the other hand be avoided. An important issue is the effect of aggregation on the adjustment coefficient in cointegrated systems. We study only first-order vector autoregressive processes for n-dimensional time series Xt, and we illustrate the theory by a two-dimensional and a four-dimensional model for prices of various grades of gasoline

  8. Using Count Data and Ordered Models in National Forest Recreation Demand Analysis

    Science.gov (United States)

    Simões, Paula; Barata, Eduardo; Cruz, Luis

    2013-11-01

    This research addresses the need to improve our knowledge on the demand for national forests for recreation and offers an in-depth data analysis supported by the complementary use of count data and ordered models. From a policy-making perspective, while count data models enable the estimation of monetary welfare measures, ordered models allow for the wider use of the database and provide a more flexible analysis of data. The main purpose of this article is to analyse the individual forest recreation demand and to derive a measure of its current use value. To allow a more complete analysis of the forest recreation demand structure the econometric approach supplements the use of count data models with ordered category models using data obtained by means of an on-site survey in the Bussaco National Forest (Portugal). Overall, both models reveal that travel cost and substitute prices are important explanatory variables, visits are a normal good and demographic variables seem to have no influence on demand. In particular, estimated price and income elasticities of demand are quite low. Accordingly, it is possible to argue that travel cost (price) in isolation may be expected to have a low impact on visitation levels.
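    As a hedged illustration of how a count-data travel cost model yields a monetary welfare measure, the sketch below fits a Poisson demand curve to synthetic trip data (coefficients and data are hypothetical, not the Bussaco survey). In the standard semi-log travel cost specification, consumer surplus per trip equals -1/beta_tc.

```python
import numpy as np

# Synthetic travel cost data; the true coefficients below are made up.
rng = np.random.default_rng(1)
n = 500
tc = rng.uniform(1, 20, n)                 # travel cost per visitor
X = np.column_stack([np.ones(n), tc])
beta_true = np.array([2.0, -0.1])
y = rng.poisson(np.exp(X @ beta_true))     # observed trip counts

# Poisson regression by Newton-Raphson (iteratively reweighted least squares)
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = X * mu[:, None]                    # X^T diag(mu) X is the Hessian
    beta += np.linalg.solve(X.T @ W, X.T @ (y - mu))

# Semi-log travel cost model: consumer surplus per trip = -1 / beta_tc
cs_per_trip = -1.0 / beta[1]
print(beta, cs_per_trip)
```

    The low estimated price elasticity reported in the abstract corresponds to a small |beta_tc|, i.e. a large per-trip surplus.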

  9. Extended Kalman Filtering and Pathloss modeling for Shadow Power Parameter Estimation in Mobile Wireless Communications

    OpenAIRE

    P. Pappas, George; A. Zohdy, Mohamed

    2017-01-01

    In this paper, accurate parameter estimation, higher-order state-space prediction methods, and an Extended Kalman Filter (EKF) for modeling shadow power in wireless mobile communications are developed. Path-loss parameter estimation models are compared and evaluated. Shadow power estimation methods in wireless cellular communications are very important for use in power control of mobile devices and base stations. The methods are validated and compared to existing methods, Kalman Filter (KF) with...

  10. Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model

    Directory of Open Access Journals (Sweden)

    Jing-Huai Gao

    2009-12-01

    Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase). We can transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by the Higher-order Statistics (HOS) fourth-order cumulant matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulating the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method can work well for short time series.
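    A minimal sketch of the fourth-order statistic underlying HOS matching: the fourth-order cumulant of a zero-mean signal, which vanishes for Gaussian data — the reason HOS-based estimation is robust to Gaussian noise. The signals below are synthetic stand-ins, not seismic traces.

```python
import numpy as np

def cumulant4(x):
    """Fourth-order (zeroth-lag) cumulant: E[x^4] - 3 E[x^2]^2 for zero-mean x."""
    x = x - x.mean()
    return np.mean(x**4) - 3.0 * np.mean(x**2) ** 2

rng = np.random.default_rng(4)
g = rng.standard_normal(200_000)     # Gaussian -> cumulant near zero
u = rng.uniform(-1.0, 1.0, 200_000)  # uniform -> negative cumulant (1/5 - 1/3)
print(cumulant4(g), cumulant4(u))
```

    Matching cumulants of a parametric wavelet model to those of the data is what the abstract's estimator does; expressing them through second-order statistics under the MSMG model is its contribution.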

  11. Antiferromagnetic order in the Hubbard model on the Penrose lattice

    Science.gov (United States)

    Koga, Akihisa; Tsunetsugu, Hirokazu

    2017-12-01

    We study an antiferromagnetic order in the ground state of the half-filled Hubbard model on the Penrose lattice and investigate the effects of the quasiperiodic lattice structure. In the limit of infinitesimal Coulomb repulsion U → +0, the staggered magnetizations remain finite, and their values are determined by confined states, which are strictly localized with thermodynamic degeneracy. The magnetizations exhibit an exotic spatial pattern and have the same sign in each of the cluster regions, the size of which ranges from 31 sites to infinity. With increasing U, they continuously evolve to those of the corresponding spin model in the U = ∞ limit. In both limits of U, local magnetizations exhibit a fairly intricate spatial pattern that reflects the quasiperiodic structure, but the pattern differs between the two limits. We have analyzed this pattern change by a mode analysis using the singular value decomposition method for the fractal-like magnetization pattern projected into the perpendicular space.

  12. Pairing of parafermions of order 2: seniority model

    International Nuclear Information System (INIS)

    Nelson, Charles A

    2004-01-01

    As generalizations of the fermion seniority model, four multi-mode Hamiltonians are considered to investigate some of the consequences of the pairing of parafermions of order 2. Two- and four-particle states are explicitly constructed for H_A ≡ -G A†A with A† ≡ (1/2) Σ_{m>0} c†_m c†_{-m} and the distinct H_C ≡ -G C†C with C† ≡ (1/2) Σ_{m>0} c†_{-m} c†_m, and for the time-reversal invariant H_(-) ≡ -G(A† - C†)(A - C) and H_(+) ≡ -G(A† + C†)(A + C), which has no analogue in the fermion case. The spectra and degeneracies are compared with those of the usual fermion seniority model

  13. Quantifying and modeling birth order effects in autism.

    Directory of Open Access Journals (Sweden)

    Tychele Turner

    Full Text Available Autism is a complex genetic disorder with multiple etiologies whose molecular genetic basis is not fully understood. Although a number of rare mutations and dosage abnormalities are specific to autism, these explain no more than 10% of all cases. The high heritability of autism and low recurrence risk suggest multifactorial inheritance from numerous loci, but other factors also intervene to modulate risk. In this study, we examine the effect of birth rank on disease risk, which is not expected under purely hereditary genetic models. We analyzed the data from three publicly available autism family collections in the USA for potential birth order effects and studied the statistical properties of three tests to show that adequate power to detect these effects exists. We detect statistically significant, yet varying, patterns of birth order effects across these collections. In multiplex families, we identify V-shaped effects where middle births are at high risk; in simplex families, we demonstrate linear effects where risk increases with each additional birth. Moreover, the birth order effect is gender-dependent in the simplex collection. It is currently unknown whether these patterns arise from ascertainment biases or biological factors. Nevertheless, further investigation of parental age-dependent risks yields patterns similar to those observed and could potentially explain part of the increased risk. A search for genes considering these patterns is likely to increase statistical power and uncover novel molecular etiologies.

  14. Dynamic Diffusion Estimation in Exponential Family Models

    Czech Academy of Sciences Publication Activity Database

    Dedecius, Kamil; Sečkárová, Vladimíra

    2013-01-01

    Roč. 20, č. 11 (2013), s. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf

  15. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and achieves the flight mission safely. One of the sensor configurations used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.

  16. Source term boundary adaptive estimation in a first-order 1D hyperbolic PDE: Application to a one-loop solar collector trough

    KAUST Repository

    Mechhoud, Sarra

    2016-08-04

    In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term and only boundary measurements are available. Using boundary injection, the estimator is developed in the Lyapunov approach and consists of a combination of a state observer and a parameter adaptation law which guarantee the asymptotic convergence of the state and parameter estimation errors. Simulation results are provided to illustrate the performance of the proposed identifier.

  17. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Science.gov (United States)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-10-01

    transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariances matrices in order to limit the impact of transport model errors on estimated methane fluxes.

  18. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    Directory of Open Access Journals (Sweden)

    R. Locatelli

    2013-10-01

    question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariances matrices in order to limit the impact of transport model errors on estimated methane fluxes.

  19. Topological order in an exactly solvable 3D spin model

    Science.gov (United States)

    Bravyi, Sergey; Leemhuis, Bernhard; Terhal, Barbara M.

    2011-04-01

    We study a 3D generalization of the toric code model introduced recently by Chamon. This is an exactly solvable spin model with six-qubit nearest-neighbor interactions on an FCC lattice whose ground space exhibits topological quantum order. The elementary excitations of this model, which we call monopoles, can be geometrically described as the corners of rectangular-shaped membranes. We prove that the creation of an isolated monopole separated from other monopoles by a distance R requires an operator acting on Ω(R²) qubits. Composite particles that consist of two monopoles (dipoles) and four monopoles (quadrupoles) can be described as end-points of strings. The peculiar feature of the model is that dipole-type strings are rigid, that is, such strings must be aligned with face-diagonals of the lattice. For periodic boundary conditions the ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. We describe a complete set of logical operators acting on the encoded qubits in terms of closed strings and closed membranes.

  20. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2011-01-01

    In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator

  1. Performances of some estimators of linear model with ...

    African Journals Online (AJOL)

    The estimators are compared by examining the finite properties of the estimators, namely: sum of biases, sum of absolute biases, sum of variances and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...

  2. On population size estimators in the Poisson mixture model.

    Science.gov (United States)

    Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua

    2013-09-01

    Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
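    The Chao estimator mentioned above has a closed form in the frequency-of-frequencies: with S_obs observed individuals, f1 singletons and f2 doubletons, the lower bound is S_obs + f1²/(2 f2). A minimal sketch (the bias-corrected form used when f2 = 0 is one common convention):

```python
from collections import Counter

def chao1(capture_counts):
    """Chao lower-bound estimate of population size from a single list in
    which each observed individual appears one or more times."""
    s_obs = len(capture_counts)            # number of distinct individuals seen
    f = Counter(capture_counts)
    f1, f2 = f.get(1, 0), f.get(2, 0)
    if f2 == 0:                            # bias-corrected variant
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# Example: 6 individuals seen once, 3 seen twice, 1 seen three times
counts = [1] * 6 + [2] * 3 + [3]
print(chao1(counts))   # 10 + 36/6 = 16.0
```

    As the abstract notes, this targets a lower bound of the population size, which is why its confidence limits behave differently from those of the jackknife and bootstrap estimators.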

  3. A Derivative Based Estimator for Semiparametric Index Models

    NARCIS (Netherlands)

    Donkers, A.C.D.; Schafgans, M.

    2003-01-01

    This paper proposes a semiparametric estimator for single- and multiple index models. It provides an extension of the average derivative estimator to the multiple index model setting. The estimator uses the average of the outer product of derivatives and is shown to be root-N consistent and

  4. Estimation of Stochastic Volatility Models by Nonparametric Filtering

    DEFF Research Database (Denmark)

    Kanaya, Shin; Kristensen, Dennis

    2016-01-01

    /estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...

  5. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  6. Strategies for Reduced-Order Models in Uncertainty Quantification of Complex Turbulent Dynamical Systems

    Science.gov (United States)

    Qi, Di

    Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are

  7. Efficient estimation of semiparametric copula models for bivariate survival data

    KAUST Repository

    Cheng, Guang

    2014-01-01

    A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.

  8. Theory and Low-Order Modeling of Unsteady Airfoil Flows

    Science.gov (United States)

    Ramesh, Kiran

    Unsteady flow phenomena are prevalent in a wide range of problems in nature and engineering. These include, but are not limited to, aerodynamics of insect flight, dynamic stall in rotorcraft and wind turbines, leading-edge vortices in delta wings, micro-air vehicle (MAV) design, gust handling and flow control. The most significant characteristics of unsteady flows are rapid changes in the circulation of the airfoil, apparent-mass effects, flow separation and the leading-edge vortex (LEV) phenomenon. Although experimental techniques and computational fluid dynamics (CFD) methods have enabled the detailed study of unsteady flows and their underlying features, a reliable and inexpensive low-order method for fast prediction and for use in control and design is still required. In this research, a low-order methodology based on physical principles rather than empirical fitting is proposed. The objective of such an approach is to enable insights into unsteady phenomena while developing approaches to model them. The basis of the low-order model developed here is unsteady thin-airfoil theory. A time-stepping approach is used to solve for the vorticity on an airfoil camberline, allowing for large amplitudes and nonplanar wakes. On comparing lift coefficients from this method against data from CFD and experiments for some unsteady test cases, it is seen that the method predicts well so long as LEV formation does not occur and flow over the airfoil is attached. The formation of leading-edge vortices (LEVs) in unsteady flows is initiated by flow separation and the formation of a shear layer at the airfoil's leading edge. This phenomenon has been observed to have both detrimental (dynamic stall in helicopters) and beneficial (high-lift flight in insects) effects. To predict the formation of LEVs in unsteady flows, a Leading Edge Suction Parameter (LESP) is proposed. This parameter is calculated from inviscid theory and is a measure of the suction at the airfoil's leading edge. It

  9. Mathematical model of transmission network static state estimation

    Directory of Open Access Journals (Sweden)

    Ivanov Aleksandar

    2012-01-01

    Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. The solving process of the mathematical model containing the measurement errors and their processing is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example, and the derived results are compared.

  10. Vortex network community based reduced-order force model

    Science.gov (United States)

    Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya; Taira, Kunihiko

    2017-11-01

    We characterize the vortical wake interactions by utilizing network theory and cluster-based approaches, and develop a data-inspired unsteady force model. In the present work, the vortical interaction network is defined by nodes representing vortical elements and the edges quantified by induced velocity measures amongst the vortices. The full vorticity field is reduced to a finite number of vortical clusters based on network community detection algorithm, which serves as a basis for a skeleton network that captures the essence of the wake dynamics. We use this reduced representation of the wake to develop a data-inspired reduced-order force model that can predict unsteady fluid forces on the body. The overall formulation is demonstrated for laminar flows around canonical bluff body wake and stalled flow over an airfoil. We also show the robustness of the present network-based model against noisy data, which motivates applications towards turbulent flows and experimental measurements. Supported by the National Science Foundation (Grant 1632003).
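    A sketch of the network construction step described above: nodes are vortical elements and edge weights are induced-velocity magnitudes from the 2D Biot-Savart law. Positions and circulation strengths here are made up, and the community detection and force model stages are not shown.

```python
import numpy as np

# Hypothetical point-vortex configuration standing in for wake vortical elements
rng = np.random.default_rng(3)
m = 6
pos = rng.uniform(-1.0, 1.0, (m, 2))       # vortex positions
gamma = rng.uniform(0.5, 1.5, m)           # circulation strengths

# Directed adjacency matrix: A[i, j] = speed induced at vortex i by vortex j,
# |gamma_j| / (2 pi r_ij) from the 2D Biot-Savart law
A = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if i != j:
            r = np.linalg.norm(pos[i] - pos[j])
            A[i, j] = abs(gamma[j]) / (2.0 * np.pi * r)
print(A.sum())
```

    Community detection on this weighted graph is what reduces the full vorticity field to the skeleton network of vortical clusters.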

  11. Application of ANFIS and SVM Systems in Order to Estimate Monthly Reference Crop Evapotranspiration in the Northwest of Iran

    Directory of Open Access Journals (Sweden)

    F. Ahmadi

    2016-10-01

    by a linear boundary. In this method, the samples nearest to the decision boundary are called support vectors; these vectors define the equation of the decision boundary. Classic intelligent simulation algorithms such as artificial neural networks usually minimize the absolute error or the sum of squared errors over the training data, whereas SVM models use the structural error minimization principle (5). Results and Discussion: Based on the results of the performance evaluations and the RMSE and R criteria, both the SVM and ANFIS models had high accuracy in predicting the reference evapotranspiration of the north-west of Iran. From the results of Tables 6 and 8, it can be concluded that both models had similar performance and can deliver high accuracy in modeling with different inputs. The ANFIS model achieved its maximum accuracy using the maximum, minimum and average temperature, sunshine (M8) and wind speed, whereas the SVM model achieved its maximum performance with the M8 pattern in the Urmia and Sanandaj stations and with the M9 pattern in the other stations. In all stations (apart from the Sanandaj station) the SVM model had higher accuracy and less error than the ANFIS model, but this difference is not remarkable and the SVM model used more input parameters than the ANFIS model for predicting evapotranspiration. Conclusion: In this research, in order to predict monthly reference evapotranspiration, the ANFIS and SVM models were employed using data collected at six synoptic stations over a period of 38 years (1973-2010) located in the north-west of Iran. First, the monthly evapotranspiration of a reference crop was estimated by the FAO-Penman-Monteith method for the selected stations as the output of the SVM and ANFIS models. Then a regression equation between the meteorological parameters affecting evapotranspiration was fitted and different input patterns for the models were determined. Results showed relative humidity to be the least effective parameter, and it was deleted from the model inputs. Also in this paper

  12. Harnack Inequalities and ABP Estimates for Nonlinear Second-Order Elliptic Equations in Unbounded Domains

    Directory of Open Access Journals (Sweden)

    M. E. Amendola

    2008-01-01

    Full Text Available We are concerned with fully nonlinear uniformly elliptic operators with a superlinear gradient term. We look for local estimates, such as weak Harnack inequality and local maximum principle, and their extension up to the boundary. As applications, we deduce ABP-type estimates and weak maximum principles in general unbounded domains, a strong maximum principle, and a Liouville-type theorem.

  13. Estimating the Robustness of Composite CBA and MCDA Assessments by Variation of Criteria Importance Order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    2011-01-01

    . Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives (Hobbs and Meier 2000). Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness...

  14. Online Estimation of Model Parameters of Lithium-Ion Battery Using the Cubature Kalman Filter

    Science.gov (United States)

    Tian, Yong; Yan, Rusheng; Tian, Jindong; Zhou, Shijie; Hu, Chao

    2017-11-01

    Online estimation of state variables, including state-of-charge (SOC), state-of-energy (SOE) and state-of-health (SOH), is crucial for the operation safety of lithium-ion batteries. In order to improve the estimation accuracy of these state variables, a precise battery model needs to be established. As the lithium-ion battery is a nonlinear time-varying system, the model parameters vary significantly with many factors, such as ambient temperature, discharge rate and depth of discharge. This paper presents an online estimation method of model parameters for lithium-ion batteries based on the cubature Kalman filter. The commonly used first-order resistor-capacitor equivalent circuit model is selected as the battery model, based on which the model parameters are estimated online. Experimental results show that the presented method can accurately track the parameter variation in different scenarios.
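    For reference, the first-order resistor-capacitor equivalent circuit named in the abstract has a simple exact discrete-time update. The sketch below simulates terminal voltage under a constant discharge; parameter values are hypothetical, and the cubature Kalman filter that estimates them online is not shown.

```python
import numpy as np

# First-order RC equivalent circuit sketch; R0, R1, C1 and OCV are made up.
def simulate_rc(i_load, dt, R0=0.01, R1=0.02, C1=2000.0, ocv=3.7):
    """Terminal voltage of a 1-RC cell model under a current profile
    (discharge current positive, open-circuit voltage held constant)."""
    v1 = 0.0                               # polarization (RC-branch) voltage
    a = np.exp(-dt / (R1 * C1))            # exact zero-order-hold discretization
    v_t = []
    for i in i_load:
        v1 = a * v1 + R1 * (1.0 - a) * i   # RC branch relaxes toward R1 * i
        v_t.append(ocv - R0 * i - v1)      # terminal voltage
    return np.array(v_t)

i = np.full(100, 2.0)                      # 2 A constant discharge, 1 s steps
v = simulate_rc(i, dt=1.0)
print(v[0], v[-1])
```

    An online estimator treats (R0, R1, C1) as slowly varying states and updates them from voltage measurements; the model above is the measurement function it would use.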

  15. Reduced order modeling in topology optimization of vibroacoustic problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas

    2017-01-01

    There is an interest in introducing topology optimization techniques in the design process of structural-acoustic systems. In topology optimization, the design space must be finely meshed in order to obtain an accurate design, which results in large numbers of degrees of freedom when designing...... or size optimization in large vibroacoustic models; however, new challenges are encountered when dealing with topology optimization. Since a design parameter per element is considered, the total number of design variables becomes very large; this poses a challenge to most existing pMOR techniques, which...... suffer from the curse of dimensionality. Moreover, the fact that the nature of the elements changes throughout the optimization (material to void or material to air) makes it more difficult to create a global basis that is accurate throughout the whole design space. In this work, these challenges...

  16. Basic first-order model theory in Mizar

    Directory of Open Access Journals (Sweden)

    Marco Bright Caminati

    2010-01-01

    Full Text Available The author has submitted to Mizar Mathematical Library a series of five articles introducing a framework for the formalization of classical first-order model theory. In them, Goedel's completeness and Lowenheim-Skolem theorems have also been formalized for the countable case, to offer a first application of it and to showcase its utility. This is an overview and commentary on some key aspects of this setup. It features exposition and discussion of a new encoding of basic definitions and theoretical gears needed for the task, remarks about the design strategies and approaches adopted in their implementation, and more general reflections about proof checking induced by the work done.

  17. Dynamical analysis of fractional order model of immunogenic tumors

    Directory of Open Access Journals (Sweden)

    Sadia Arshad

    2016-07-01

    Full Text Available In this article, we examine a fractional-order model of the cytotoxic T lymphocyte response to a growing tumor cell population. We investigate the long-term behavior of tumor growth and explore the conditions of tumor elimination analytically. We establish the conditions for the tumor-free equilibrium and the tumor-infection equilibrium to be asymptotically stable and provide the expression of the basic reproduction number. The existence of physically significant tumor-infection equilibrium points is investigated analytically. We show that the tumor growth rate, the source rate of immune cells, and the death rate of immune cells play a vital role in tumor dynamics, and the system undergoes saddle-node and transcritical bifurcations based on these parameters. Furthermore, the effect of cancer treatment is discussed by varying the values of relevant parameters. Numerical simulations are presented to illustrate the analytical results.
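    Fractional-order models of this kind are commonly simulated with the Grünwald-Letnikov discretization. The sketch below applies it to the linear test equation D^α y = -y rather than the tumor system itself; for α = 1 it reduces to implicit Euler for y' = -y, which gives a simple correctness check.

```python
import numpy as np

# Grünwald-Letnikov scheme for the test equation D^alpha y = -y, y(0) = y0.
def gl_solve(alpha, h, n, y0=1.0):
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):                   # GL binomial weights (recursive)
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    y = np.empty(n + 1)
    y[0] = y0
    ha = h ** alpha
    for k in range(1, n + 1):
        hist = np.dot(w[1:k + 1], y[k - 1::-1])  # memory term over all past steps
        y[k] = -hist / (1.0 + ha)                # implicit step of D^alpha y = -y
    return y

y1 = gl_solve(1.0, 0.001, 1000)                 # alpha = 1: ordinary decay exp(-t)
print(y1[-1], np.exp(-1.0))
```

    The full memory term is what makes fractional dynamics history-dependent; for the tumor model, -y would be replaced by the nonlinear right-hand sides of the immune and tumor populations.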

  18. Estimation of distribution overlap of urn models.

    Science.gov (United States)

    Hampton, Jerrad; Lladser, Manuel E

    2012-01-01

    A classical problem in statistics is estimating the expected coverage of a sample, which has had applications in gene expression, microbial ecology, optimization, and even numismatics. Here we consider a related extension of this problem to random samples of two discrete distributions. Specifically, we estimate what we call the dissimilarity probability of a sample, i.e., the probability of a draw from one distribution not being observed in [Formula: see text] draws from another distribution. We show our estimator of dissimilarity to be a [Formula: see text]-statistic and a uniformly minimum variance unbiased estimator of dissimilarity over the largest appropriate range of [Formula: see text]. Furthermore, despite the non-Markovian nature of our estimator when applied sequentially over [Formula: see text], we show it converges uniformly in probability to the dissimilarity parameter, and we present criteria when it is approximately normally distributed and admits a consistent jackknife estimator of its variance. As proof of concept, we analyze V35 16S rRNA data to discern between various microbial environments. Other potential applications concern any situation where dissimilarity of two discrete distributions may be of interest. For instance, in SELEX experiments, each urn could represent a random RNA pool and each draw a possible solution to a particular binding site problem over that pool. The dissimilarity of these pools is then related to the probability of finding binding site solutions in one pool that are absent in the other.
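    The dissimilarity probability has a simple closed form when both urns are known, which makes a useful check on any estimator. The sketch below compares the exact value with a Monte Carlo simulation on a small hypothetical pair of distributions (not the paper's 16S rRNA data).

```python
import numpy as np

# Dissimilarity probability: chance that one draw from P is not observed
# in n draws from Q. Both distributions here are hypothetical urns.
rng = np.random.default_rng(2)
support = np.arange(4)
P = np.array([0.7, 0.1, 0.1, 0.1])
Q = np.array([0.1, 0.1, 0.1, 0.7])
n = 5

# Closed form for known urns: sum_i P(i) * (1 - Q(i))**n
exact = np.sum(P * (1.0 - Q) ** n)

# Monte Carlo check: one draw from P against n draws from Q, repeated
trials = 200_000
x = rng.choice(support, size=trials, p=P)
qs = rng.choice(support, size=(trials, n), p=Q)
unseen = np.mean(~(qs == x[:, None]).any(axis=1))
print(exact, unseen)
```

    The estimator in the abstract targets exactly this quantity when P and Q are unknown and only samples are available.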

  19. Semiparametric Efficient Adaptive Estimation of the PTTGARCH model

    OpenAIRE

    Ciccarelli, Nicola

    2016-01-01

    Financial data sets exhibit conditional heteroskedasticity and asymmetric volatility. In this paper we derive a semiparametric efficient adaptive estimator of a conditional heteroskedasticity and asymmetric volatility GARCH-type model (i.e., the PTTGARCH(1,1) model). Via kernel density estimation of the unknown density function of the innovation and via the Newton-Raphson technique applied on the root-n-consistent quasi-maximum likelihood estimator, we construct a more efficient estimator tha...

  20. Partition-based Unscented Kalman Filter for Reconfigurable Battery Pack State Estimation using an Electrochemical Model

    OpenAIRE

    Couto, Luis D.; Kinnaert, Michel

    2017-01-01

    Accurate state estimation of large-scale lithium-ion battery packs is necessary for the advanced control of batteries, which could potentially increase their lifetime through e.g. reconfiguration. To tackle this problem, an enhanced reduced-order electrochemical model is used here. This model allows considering a wider operating range and thermal coupling between cells, the latter turning out to be significant. The resulting nonlinear model is exploited for state estimation through unscented ...

  1. Robust estimation and moment selection in dynamic fixed-effects panel data models

    NARCIS (Netherlands)

    Cizek, Pavel; Aquaro, Michele

    Considering linear dynamic panel data models with fixed effects, existing outlier–robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their

  2. The efficiency of OLS estimator in the linear-regression model with ...

    African Journals Online (AJOL)

    Bounds for the efficiency of ordinary least squares estimator relative to generalized least squares estimator in the linear regression model with first-order spatial error process are given. SINET: Ethiopian Journal of Science Vol. 24, No. 1 (June 2001), pp. 17-33. Key words/phrases: Efficiency, generalized least squares, ...

  3. Development and estimation of a semi-compensatory model with flexible error structure

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Shiftan, Yoram; Bekhor, Shlomo

    -response model and the utility-based choice by alternatively (i) a nested-logit model and (ii) an error-component logit. In order to test the suggested methodology, the model was estimated for a sample of 1,893 ranked choices and respective threshold values from 631 students who participated in a web-based two...

  4. Bootstrap and Order Statistics for Quantifying Thermal-Hydraulic Code Uncertainties in the Estimation of Safety Margins

    Directory of Open Access Journals (Sweden)

    Enrico Zio

    2008-01-01

    Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
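The order-statistics-plus-bootstrap idea in this record can be sketched in a few lines. The temperatures, acceptance limit, and run count below are invented placeholders, not RBMK-1500 results:

```python
import random

random.seed(1)

# Hypothetical stand-in for N thermal-hydraulic code runs: sampled peak
# cladding temperatures in deg C (invented numbers, not RBMK-1500 output).
N = 59
temps = [random.gauss(900, 40) for _ in range(N)]

# Wilks' first-order result: with N = 59 runs the sample maximum is a
# 95%/95% upper tolerance bound, since P(max >= 95th pct) = 1 - 0.95**N.
assert 1 - 0.95 ** N >= 0.95
upper_9595 = max(temps)

# Safety margin against a hypothetical 1200 deg C acceptance limit
margin = 1200.0 - upper_9595

# Bootstrap the variability of the margin estimate
boot = sorted(
    1200.0 - max(random.choice(temps) for _ in range(N))
    for _ in range(2000)
)
ci = (boot[50], boot[1950])   # ~95% bootstrap interval for the margin
```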

  5. The model for estimation production cost of embroidery handicraft

    Science.gov (United States)

    Nofierni; Sriwana, IK; Septriani, Y.

    2017-12-01

The embroidery industry is a type of micro industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia, producing embroidered goods such as scarves and clothes that reflect the cultural value of a particular region. The owner of an enterprise must calculate the cost of production before deciding how many products to accept from a customer, so a production cost analysis is needed to assess the feasibility of each incoming order. This study proposes the design of an expert system (ES) to improve production management in the embroidery industry, using a fuzzy inference system as the model to estimate production cost. The research was conducted through surveys and knowledge acquisition from stakeholders of the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The model takes fuzzy inputs for quality, design complexity, and required working hours, and its output is useful for managing production cost in embroidery production.

  6. Recursive estimation of high-order Markov chains: Approximation by finite mixtures

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav

    2016-01-01

    Roč. 326, č. 1 (2016), s. 188-201 ISSN 0020-0255 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Markov chain * Approximate parameter estimation * Bayesian recursive estimation * Adaptive systems * Kullback–Leibler divergence * Forgetting Subject RIV: BC - Control Systems Theory Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2015/AS/karny-0447119.pdf

  7. Volatility estimation using a rational GARCH model

    Directory of Open Access Journals (Sweden)

    Tetsuya Takaishi

    2018-03-01

Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model called the RGARCH-Exp model that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility and compare the RGARCH-type models with other asymmetric type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that a volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
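As a baseline illustration of the loss-function evaluation this record describes, here is a plain GARCH(1,1) variance filter scored with the QLIKE loss, using squared returns as the realized-volatility proxy. The RGARCH/RGARCH-Exp specifications themselves are not reproduced, and all parameters are invented:

```python
import math
import random

random.seed(0)

def garch11_filter(returns, omega, alpha, beta):
    # Standard GARCH(1,1) variance recursion, started from the
    # unconditional variance omega / (1 - alpha - beta)
    var = omega / (1 - alpha - beta)
    sigma2 = []
    for r in returns:
        sigma2.append(var)
        var = omega + alpha * r * r + beta * var
    return sigma2

def qlike(sigma2, realized):
    # QLIKE loss, with realized variance as the proxy for true volatility
    return sum(rv / s + math.log(s) for s, rv in zip(sigma2, realized)) / len(sigma2)

# Simulate returns from a true GARCH(1,1) with invented parameters
omega, alpha, beta = 0.05, 0.08, 0.90
true_var, returns = omega / (1 - alpha - beta), []
for _ in range(500):
    r = random.gauss(0, math.sqrt(true_var))
    returns.append(r)
    true_var = omega + alpha * r * r + beta * true_var

sigma2 = garch11_filter(returns, omega, alpha, beta)
loss = qlike(sigma2, [r * r for r in returns])
```

Competing volatility models would be compared by filtering the same return series and ranking their QLIKE losses.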

  8. A POD reduced order unstructured mesh ocean modelling method for moderate Reynolds number flows

    Science.gov (United States)

    Fang, F.; Pain, C. C.; Navon, I. M.; Gorman, G. J.; Piggott, M. D.; Allison, P. A.; Farrell, P. E.; Goddard, A. J. H.

    Herein a new approach to enhance the accuracy of a novel Proper Orthogonal Decomposition (POD) model applied to moderate Reynolds number flows (of the type typically encountered in ocean models) is presented. This approach develops the POD model of Fang et al. [Fang, F., Pain, C.C., Navon, I.M., Piggott, M.D., Gorman, G.J., Allison, P., Goddard, A.J.H., 2008. Reduced-order modelling of an adaptive mesh ocean model. International Journal for Numerical Methods in Fluids. doi:10.1002/fld.1841] used in conjunction with the Imperial College Ocean Model (ICOM), an adaptive, non-hydrostatic finite element model. Both the velocity and vorticity results of the POD reduced order model (ROM) exhibit an overall good agreement with those obtained from the full model. The accuracy of the POD-Galerkin model with the use of adaptive meshes is first evaluated using the Munk gyre flow test case with Reynolds numbers ranging between 400 and 2000. POD models using the L2 norm become oscillatory when the Reynolds number exceeds Re=400. This is because the low-order truncation of the POD basis inhibits generally all the transfers between the large and the small (unresolved) scales of the fluid flow. Accuracy is improved by using the H1 POD projector in preference to the L2 POD projector. The POD bases are constructed by incorporating gradients as well as function values in the H1 Sobolev norm. The accuracy of numerical results is further enhanced by increasing the number of snapshots and POD bases. Error estimation was used to assess the effect of truncation (involved in the POD-Galerkin approach) when adaptive meshes are used in conjunction with POD/ROM. The RMSE of velocity results between the full model and POD-Galerkin model is reduced by as much as 50% by using the H1 norm and increasing the number of snapshots and POD bases.
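The core POD construction referenced above can be sketched with the SVD under the plain L2 inner product. The snapshot data below is synthetic low-rank noise, and the sketch omits the paper's H1 variant, which additionally weights gradients:

```python
import numpy as np

# Sketch: POD basis from a snapshot matrix via the SVD (plain L2 inner
# product; the record's H1 approach also incorporates gradients).
rng = np.random.default_rng(0)

n, m = 200, 30                        # n spatial dofs, m snapshots
modes = rng.standard_normal((n, 3))   # hypothetical low-rank "flow" structure
coeffs = rng.standard_normal((3, m))
snapshots = modes @ coeffs + 1e-6 * rng.standard_normal((n, m))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
basis = U[:, :r]

# Galerkin ROM coordinates of a snapshot: project onto the reduced basis
a = basis.T @ snapshots[:, 0]
recon_err = np.linalg.norm(snapshots[:, 0] - basis @ a)
```

Increasing the number of snapshots and retained modes shrinks the truncation error, mirroring the accuracy trend reported in the abstract.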

  9. Second-order closure PBL model with new third-order moments: Comparison with LES data

    Science.gov (United States)

    Canuto, V. M.; Minotti, F.; Ronchi, C.; Ypma, R. M.; Zeman, O.

    1994-01-01

This paper contains two parts. In the first part, a new set of diagnostic equations is derived for the third-order moments for a buoyancy-driven flow, by exact inversion of the prognostic equations for the third-order moment equations in the stationary case. The third-order moments exhibit a universal structure: they all are a linear combination of the derivatives of all the second-order moments, bar-w(exp 2), bar-w theta, bar-theta(exp 2), and bar-q(exp 2). Each term of the sum contains a turbulent diffusivity D(sub t), which also exhibits a universal structure of the form D(sub t) = a nu(sub t) + b bar-w theta. Since the sign of the convective flux changes depending on stable or unstable stratification, D(sub t) varies according to the type of stratification. Here nu(sub t) approximately equal to wl (l is a mixing length and w is an rms velocity) represents the 'mechanical' part, while the 'buoyancy' part is represented by the convective flux bar-w theta. The quantities a and b are functions of the variable N(sub tau)(exp 2), where N(exp 2) = g alpha derivative of Theta with respect to z and tau is the turbulence time scale. The new expressions for the third-order moments generalize those of Zeman and Lumley, which were subsequently adopted by Sun and Ogura, Chen and Cotton, and Finger and Schmidt in their treatments of the convective boundary layer. In the second part, the new expressions for the third-order moments are used to solve the ensemble average equations describing a purely convective boundary layer heated from below at a constant rate. The computed second- and third-order moments are then compared with the corresponding Large Eddy Simulation (LES) results, most of which are obtained by running a new LES code, and part of which are taken from published results. The ensemble average results compare favorably with the LES data.

  10. Comparison of two intelligent models to estimate the instantaneous ...

    Indian Academy of Sciences (India)

    Mostafa Zamani Mohiabadi

    2017-07-25

Jul 25, 2017 ... 2014) has combined empirical models and a Bayesian neural network (BNN) model to estimate daily global solar radiation on a horizontal surface in Ghardaïa, Algeria. In their model, the maximum and minimum air temperatures of the year 2006 have been used to estimate the coefficients of the empirical ...

  11. A Contingent Trip Model for Estimating Rail-trail Demand

    Science.gov (United States)

    Carter J. Betz; John C. Bergstrom; J. Michael Bowker

    2003-01-01

    The authors develop a contingent trip model to estimate the recreation demand for and value of a potential rail-trail site in north-east Georgia. The contingent trip model is an alternative to travel cost modelling useful for ex ante evaluation of proposed recreation resources or management alternatives. The authors estimate the empirical demand for trips using a...

  12. Linear stability analysis of first-order delayed car-following models on a ring

    Science.gov (United States)

    Lassarre, Sylvain; Roussignol, Michel; Tordeux, Antoine

    2012-09-01

    The evolution of a line of vehicles on a ring is modeled by means of first-order car-following models. Three generic models describe the speed of a vehicle as a function of the spacing ahead and the speed of the predecessor. The first model is a basic one with no delay. The second is a delayed car-following model with a strictly positive parameter for the driver and vehicle reaction time. The last model includes a reaction time parameter with an anticipation process by which the delayed position of the predecessor is estimated. Explicit conditions for the linear stability of homogeneous configurations are calculated for each model. Two methods of calculus are compared: an exact one via Hopf bifurcations and an approximation by second-order models. The conditions describe stable areas for the parameters of the models that we interpret. The results notably show that the impact of the reaction time on the stability can be palliated by the anticipation process.
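The zero-delay case in this record can be checked numerically: the first-order model's linearization around a homogeneous configuration on the ring has eigenvalues with non-positive real parts, so a small perturbation decays. The optimal-velocity function and all parameters below are invented; the delayed variants in the paper would need a delay-differential solver:

```python
import math

# Toy check of the no-delay case: for dx_n/dt = V(s_n) on a ring, the
# linearization around the homogeneous state has eigenvalues
# V'(s*) * (exp(2*pi*i*k/N) - 1), all with non-positive real part,
# so a perturbation decays. V and all parameters are invented.

def V(s):
    # optimal-velocity function with V'(s) = 1 on the working range
    return max(0.0, min(2.0, s - 1.0))

N, L, dt, steps = 20, 50.0, 0.01, 20000
x = [n * L / N for n in range(N)]    # homogeneous spacing s* = 2.5
x[0] += 0.2                          # perturb one vehicle

def spacings(x):
    return [(x[(n + 1) % N] - x[n]) % L for n in range(N)]

def spacing_std(x):
    s = spacings(x)
    m = sum(s) / N
    return math.sqrt(sum((si - m) ** 2 for si in s) / N)

std0 = spacing_std(x)
for _ in range(steps):                      # explicit Euler integration
    s = spacings(x)
    x = [(xi + V(si) * dt) % L for xi, si in zip(x, s)]
std_end = spacing_std(x)                    # perturbation has decayed
```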

  13. High-order hidden Markov model for piecewise linear processes and applications to speech recognition.

    Science.gov (United States)

    Lee, Lee-Min; Jean, Fu-Rong

    2016-08-01

    The hidden Markov models have been widely applied to systems with sequential data. However, the conditional independence of the state outputs will limit the output of a hidden Markov model to be a piecewise constant random sequence, which is not a good approximation for many real processes. In this paper, a high-order hidden Markov model for piecewise linear processes is proposed to better approximate the behavior of a real process. A parameter estimation method based on the expectation-maximization algorithm was derived for the proposed model. Experiments on speech recognition of noisy Mandarin digits were conducted to examine the effectiveness of the proposed method. Experimental results show that the proposed method can reduce the recognition error rate compared to a baseline hidden Markov model.
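One standard way to handle the higher-order dependence this record describes is to fold the state history into an augmented first-order state, so ordinary HMM machinery still applies. A minimal sketch with invented two-state transition probabilities (not the paper's piecewise-linear model):

```python
from itertools import product

# Invented two-state second-order transition table P(s_t | s_{t-2}, s_{t-1});
# the reduction below is the standard trick of running a second-order chain
# with first-order machinery over augmented pair states.
states = ["A", "B"]
trans2 = {
    ("A", "A"): {"A": 0.9, "B": 0.1},
    ("A", "B"): {"A": 0.4, "B": 0.6},
    ("B", "A"): {"A": 0.7, "B": 0.3},
    ("B", "B"): {"A": 0.2, "B": 0.8},
}

# First-order chain over pairs: (p, q) can only move to (q, r)
trans1 = {}
for p, q in product(states, states):
    for r, prob in trans2[(p, q)].items():
        trans1[((p, q), (q, r))] = prob

# each augmented row still sums to one
row_sum = sum(v for (src, _), v in trans1.items() if src == ("A", "B"))
```

The price of the reduction is a state space that grows as (number of states) raised to the model order, which is why parameter estimation for high-order HMMs needs care.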

  14. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
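The tornado-diagram step of the reduction can be sketched on a toy three-branch logic tree. The branch names, values, and loss function below are invented; UCERF3-TD has far more branches than this:

```python
# Invented miniature logic tree: three branch parameters with three options
# each, and a toy multiplicative loss standing in for the portfolio loss
# analysis (UCERF3-TD has far more branches than this).
options = {
    "scaling_law": [0.9, 1.0, 1.2],
    "slip_model": [0.85, 1.0, 1.1],
    "prob_model": [0.95, 1.0, 1.05],
}

def loss(scaling_law, slip_model, prob_model):
    return 100.0 * scaling_law * slip_model * prob_model

base = {k: v[1] for k, v in options.items()}   # middle branch as baseline

# Tornado diagram: one-at-a-time swing of the output for each parameter
swings = {}
for name, values in options.items():
    outs = [loss(**{**base, name: v}) for v in values]
    swings[name] = max(outs) - min(outs)

# Branches ranked by influence; small-swing branches are candidates to fix
ranked = sorted(swings, key=swings.get, reverse=True)
```

Fixing the low-swing branches at their baseline values is what collapses the leaf count, in the same spirit as the reduction from 57,600 leaves to 60.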

  15. NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...

    African Journals Online (AJOL)

    Air temperature of monthly mean minimum temperature, maximum temperature and relative humidity obtained from Nigerian Meteorological Agency (NIMET) were used as inputs to the ANFIS model and monthly mean global solar radiation was used as out of the model. Statistical evaluation of the model was done based on ...

  16. Lag space estimation in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...

  17. Lower-order effects adjustment in quantitative traits model-based multifactor dimensionality reduction.

    Science.gov (United States)

    Mahachie John, Jestinah M; Cattaert, Tom; Lishout, François Van; Gusareva, Elena S; Steen, Kristel Van

    2012-01-01

    Identifying gene-gene interactions or gene-environment interactions in studies of human complex diseases remains a big challenge in genetic epidemiology. An additional challenge, often forgotten, is to account for important lower-order genetic effects. These may hamper the identification of genuine epistasis. If lower-order genetic effects contribute to the genetic variance of a trait, identified statistical interactions may simply be due to a signal boost of these effects. In this study, we restrict attention to quantitative traits and bi-allelic SNPs as genetic markers. Moreover, our interaction study focuses on 2-way SNP-SNP interactions. Via simulations, we assess the performance of different corrective measures for lower-order genetic effects in Model-Based Multifactor Dimensionality Reduction epistasis detection, using additive and co-dominant coding schemes. Performance is evaluated in terms of power and familywise error rate. Our simulations indicate that empirical power estimates are reduced with correction of lower-order effects, likewise familywise error rates. Easy-to-use automatic SNP selection procedures, SNP selection based on "top" findings, or SNP selection based on p-value criterion for interesting main effects result in reduced power but also almost zero false positive rates. Always accounting for main effects in the SNP-SNP pair under investigation during Model-Based Multifactor Dimensionality Reduction analysis adequately controls false positive epistasis findings. This is particularly true when adopting a co-dominant corrective coding scheme. In conclusion, automatic search procedures to identify lower-order effects to correct for during epistasis screening should be avoided. The same is true for procedures that adjust for lower-order effects prior to Model-Based Multifactor Dimensionality Reduction and involve using residuals as the new trait. 
We advocate using "on-the-fly" lower-order effects adjusting when screening for SNP-SNP interactions.

  18. Fractional Order Modeling of Atmospheric Turbulence - A More Accurate Modeling Methodology for Aero Vehicles

    Science.gov (United States)

    Kopasakis, George

    2014-01-01

    The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development like flutter and inlet shock position. The approach models atmospheric turbulence in their natural fractional order form, which provides for more accuracy compared to traditional methods like the Dryden model, especially for high speed vehicle. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the atmospheric turbulence fractional order modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.

  19. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    Science.gov (United States)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter

  20. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2009-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  1. Efficient estimation of an additive quantile regression model

    NARCIS (Netherlands)

    Cheng, Y.; de Gooijer, J.G.; Zerom, D.

    2010-01-01

    In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By

  2. Probability density estimation in stochastic environmental models using reverse representations

    NARCIS (Netherlands)

    Van den Berg, E.; Heemink, A.W.; Lin, H.X.; Schoenmakers, J.G.M.

    2003-01-01

The estimation of probability densities of variables described by systems of stochastic differential equations has long been done using forward time estimators, which rely on the generation of realizations of the model, forward in time. Recently, an estimator based on the combination of forward and

  3. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...
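The OLS-versus-GLS efficiency comparison in the two records above can be illustrated by Monte Carlo. This is a sketch with the AR(1) coefficient assumed known and all parameters invented, not the papers' analytic efficiency bounds:

```python
import random

random.seed(3)

# Monte Carlo sketch: OLS vs GLS slope estimates in y = b*x + u with AR(1)
# errors u_t = rho*u_{t-1} + e_t (invented parameters; rho treated as known).
n, b, rho, reps = 100, 2.0, 0.7, 500
x = [random.gauss(0, 1) for _ in range(n)]

def gen_y():
    u, y = 0.0, []
    for t in range(n):
        u = rho * u + random.gauss(0, 1)
        y.append(b * x[t] + u)
    return y

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((xi - mx) ** 2 for xi in xs)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / sxx

def gls_slope(xs, ys):
    # GLS via the Cochrane-Orcutt quasi-differencing transform
    xt = [xs[t] - rho * xs[t - 1] for t in range(1, len(xs))]
    yt = [ys[t] - rho * ys[t - 1] for t in range(1, len(ys))]
    return slope(xt, yt)

ols_est = [slope(x, gen_y()) for _ in range(reps)]
gls_est = [gls_slope(x, gen_y()) for _ in range(reps)]

def var(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)

efficiency = var(gls_est) / var(ols_est)   # < 1: GLS more efficient here
```

With an i.i.d. regressor, as here, the efficiency gap is substantial; with smooth trending regressors OLS can be nearly efficient, which is the kind of boundary the analytic bounds characterize.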

  4. Stabilization Approaches for Linear and Nonlinear Reduced Order Models

    Science.gov (United States)

    Rezaian, Elnaz; Wei, Mingjun

    2017-11-01

It has been a major concern to establish reduced order models (ROMs) as reliable representatives of the dynamics inherent in high fidelity simulations, while fast computation is achieved. In practice, this comes down to the stability and accuracy of the ROMs. Given the inviscid nature of the Euler equations, it becomes more challenging to achieve stability, especially where moving discontinuities exist. Originally unstable linear and nonlinear ROMs are stabilized here by two approaches. First, a hybrid method is developed by integrating two different stabilization algorithms. At the same time, the symmetry inner product is introduced in the generation of ROMs for its known robust behavior for compressible flows. Results have shown a notable improvement in computational efficiency and robustness compared to similar approaches. Second, a new stabilization algorithm is developed specifically for nonlinear ROMs. This method adopts Particle Swarm Optimization to enforce a bounded ROM response for minimum discrepancy between the high fidelity simulation and the ROM outputs. Promising results are obtained in its application on the nonlinear ROM of an inviscid fluid flow with discontinuities. Supported by ARL.

  5. Empirical estimates in stochastic programs with probability and second order stochastic dominance constraints

    Czech Academy of Sciences Publication Activity Database

    Omelchenko, Vadym; Kaňková, Vlasta

    2015-01-01

    Roč. 84, č. 2 (2015), s. 267-281 ISSN 0862-9544 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords : Stochastic programming problems * empirical estimates * light and heavy tailed distributions * quantiles Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/E/omelchenko-0454495.pdf

  6. Reduced order modeling, statistical analysis and system identification for a bladed rotor with geometric mistuning

    Science.gov (United States)

    Vishwakarma, Vinod

Modified Modal Domain Analysis (MMDA) is a novel method for the development of a reduced-order model (ROM) of a bladed rotor. This method utilizes proper orthogonal decomposition (POD) of Coordinate Measurement Machine (CMM) data of blades' geometries and sector analyses using ANSYS. For the first time ROM of a geometrically mistuned industrial scale rotor (Transonic rotor) with large size of Finite Element (FE) model is generated using MMDA. Two methods for estimating mass and stiffness mistuning matrices are used: (a) exact computation from sector FE analysis; (b) estimates based on POD mistuning parameters. Modal characteristics such as mistuned natural frequencies, mode shapes and forced harmonic response are obtained from ROM for various cases, and results are compared with full rotor ANSYS analysis and other ROM methods such as Subset of Nominal Modes (SNM) and Fundamental Model of Mistuning (FMM). Accuracy of MMDA ROM is demonstrated with variations in number of POD features and geometric mistuning parameters. It is shown for the aforementioned case (b) that the high accuracy of ROM studied in previous work with Academic rotor does not directly translate to the Transonic rotor. Reasons for such mismatch in results are investigated and attributed to higher mistuning in Transonic rotor. Alternate solutions such as estimation of sensitivities via least squares, and interpolation of mass and stiffness matrices on manifolds are developed, and their results are discussed. Statistics such as mean and standard deviations of forced harmonic response peak amplitude are obtained from random permutations, and are shown to have similar results as those of Monte Carlo simulations. These statistics are obtained and compared for 3 degree of freedom (DOF) lumped parameter model (LPM) of rotor, Academic rotor and Transonic rotor. A state estimator based on MMDA ROM and Kalman filter is also developed for offline or online estimation of harmonic forcing function from

  7. TPmsm: Estimation of the Transition Probabilities in 3-State Models

    Directory of Open Access Journals (Sweden)

    Artur Araújo

    2014-12-01

    Full Text Available One major goal in clinical applications of multi-state models is the estimation of transition probabilities. The usual nonparametric estimator of the transition matrix for non-homogeneous Markov processes is the Aalen-Johansen estimator (Aalen and Johansen 1978. However, two problems may arise from using this estimator: first, its standard error may be large in heavy censored scenarios; second, the estimator may be inconsistent if the process is non-Markovian. The development of the R package TPmsm has been motivated by several recent contributions that account for these estimation problems. Estimation and statistical inference for transition probabilities can be performed using TPmsm. The TPmsm package provides seven different approaches to three-state illness-death modeling. In two of these approaches the transition probabilities are estimated conditionally on current or past covariate measures. Two real data examples are included for illustration of software usage.
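The quantity being estimated above can be made concrete with a toy, fully observed three-state example. This sketch uses invented paths and no censoring; handling censoring (and the non-Markov estimators) is exactly what TPmsm adds beyond this:

```python
from collections import Counter

# Toy illness-death paths observed at times t = 0..3, with states
# 0 = healthy, 1 = ill, 2 = dead (invented data, no censoring).
paths = [
    [0, 0, 1, 2],
    [0, 1, 1, 2],
    [0, 0, 0, 0],
    [0, 1, 2, 2],
    [0, 0, 1, 1],
]

def transition_matrix(paths, t):
    # empirical one-step transition probabilities between times t and t + 1
    counts = Counter((p[t], p[t + 1]) for p in paths)
    totals = Counter(p[t] for p in paths)
    return {(i, j): c / totals[i] for (i, j), c in counts.items()}

P0 = transition_matrix(paths, 0)    # transitions out of state 0 at time 0
```

Chaining such one-step matrices over time is the discrete-time analogue of the Aalen-Johansen product construction, which is valid when the process is Markov.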

  8. Construction of energy-stable Galerkin reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan; van Bloemen Waanders, Bart Gustaaf

    2013-05-01

This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. 
The performance of ROMs constructed

  9. A nonparametric mixture model for cure rate estimation.

    Science.gov (United States)

    Peng, Y; Dear, K B

    2000-03-01

    Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
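A minimal way to see the cure-rate idea in this record: in the mixture model S(t) = pi + (1 - pi) * S_u(t), the plateau of the Kaplan-Meier curve estimates the cured proportion pi given sufficient follow-up. The data below is an invented toy sample, and the paper's EM / marginal-likelihood machinery is not reproduced:

```python
# Toy survival data (time units arbitrary; 1 = event, 0 = censored; no
# event/censoring ties, so the simple one-at-a-time KM update below is valid).
times = [1, 2, 2, 3, 5, 8, 10, 10, 10, 10]
events = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    for t, e in data:
        if e == 1:
            s *= (at_risk - 1) / at_risk   # survival drops at each event
        at_risk -= 1
        curve.append((t, s))
    return curve

curve = kaplan_meier(times, events)
cure_fraction = curve[-1][1]        # plateau of the survival curve
```

Here the four subjects still event-free at the end of follow-up leave the curve flat at 0.4, the naive plateau estimate of pi.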

  10. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...

  11. A simulation of water pollution model parameter estimation

    Science.gov (United States)

    Kibler, J. F.

    1976-01-01

    A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
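The simulate-then-fit loop described above can be sketched end to end on a simplified model. Shear is ignored here and all numbers are invented; this is the batch least-squares idea, not NASA's actual processor:

```python
import math
import random

random.seed(2)

# Concentration from an instantaneous 2-D release with diffusivity D,
# observed by a simulated remote-sensor grid with additive Gaussian noise,
# then D recovered by batch least squares (shear omitted for simplicity).
def conc(x, y, D, t=1.0, M=1.0):
    return M / (4 * math.pi * D * t) * math.exp(-(x * x + y * y) / (4 * D * t))

D_true = 0.5
sensors = [(0.1 * i, 0.1 * j) for i in range(-5, 6) for j in range(-5, 6)]
data = [conc(x, y, D_true) + random.gauss(0, 0.001) for x, y in sensors]

def sse(D):
    # sum of squared residuals between model and simulated sensor readings
    return sum((conc(x, y, D) - z) ** 2 for (x, y), z in zip(sensors, data))

# batch least squares by a one-dimensional grid search over D
grid = [0.10 + 0.01 * k for k in range(100)]
D_hat = min(grid, key=sse)
```

Varying the sensor-grid resolution or the noise level in this loop is exactly how the abstract's questions about resolution, array size, and reading count can be explored.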

  12. Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations

    KAUST Repository

    Jin, Bangti

    2013-01-01

    We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, covering both smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.

  13. Partially ordered mixed hidden Markov model for the disablement process of older adults.

    Science.gov (United States)

    Ip, Edward H; Zhang, Qiang; Rejeski, W Jack; Harris, Tamara B; Kritchevsky, Stephen

    2013-06-01

    At both the individual and societal levels, the health and economic burden of disability in older adults is enormous in developed countries, including the U.S. Recent studies have revealed that the disablement process in older adults often comprises episodic periods of impaired functioning and periods that are relatively free of disability, amid a secular and natural trend of decline in functioning. Rather than an irreversible, progressive event that is analogous to a chronic disease, disability is better conceptualized and mathematically modeled as states that do not necessarily follow a strict linear order of good-to-bad. Statistical tools, including Markov models, which allow bidirectional transition between states, and random effects models, which allow individual-specific rate of secular decline, are pertinent. In this paper, we propose a mixed effects, multivariate, hidden Markov model to handle partially ordered disability states. The model generalizes the continuation ratio model for ordinal data in the generalized linear model literature and provides a formal framework for testing the effects of risk factors and/or an intervention on the transitions between different disability states. Under a generalization of the proportional odds ratio assumption, the proposed model circumvents the problem of a potentially large number of parameters when the number of states and the number of covariates are substantial. We describe a maximum likelihood method for estimating the partially ordered, mixed effects model and show how the model can be applied to a longitudinal data set that consists of N = 2,903 older adults followed for 10 years in the Health Aging and Body Composition Study. We further statistically test the effects of various risk factors upon the probabilities of transition into various severe disability states. The result can be used to inform geriatric and public health science researchers who study the disablement process.

  14. Nonconvergence of formal integrals: II. Improved estimates for the optimal order of truncation

    International Nuclear Information System (INIS)

    Efthymiopoulos, C; Giorgilli, A; Contopoulos, G

    2004-01-01

    We investigate the asymptotic properties of formal integral series in the neighbourhood of an elliptic equilibrium in nonlinear 2 DOF Hamiltonian systems. In particular, we study the dependence of the optimal order of truncation N_opt on the distance ρ from the elliptic equilibrium, by numerical and analytical means. The function N_opt(ρ) determines the region of Nekhoroshev stability of the orbits and the time of practical stability. We find that the function N_opt(ρ) decreases by abrupt steps. The decrease is roughly approximated by an average power law N_opt = O(ρ^(-a)), with a ≅ 1. We find an analytical explanation of this behaviour by investigating the accumulation of small divisors both in the normal form algorithm via Lie series and in the direct construction of first integrals. Precisely, we find that the series exhibit an apparent radius of convergence that tends to zero by abrupt steps as the order of the series tends to infinity. Our results agree with those obtained by Servizi G et al (1983 Phys. Lett. A 95 11) for a conservative map of the plane. Moreover, our analytical considerations allow us to explain the results of our previous paper (Contopoulos G et al 2003 J. Phys. A: Math. Gen. 36 8639), including in particular the different behaviour observed for low-order and higher-order resonances

  15. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    OpenAIRE

    Rypdal, Kristin; Baritz, Rainer

    2002-01-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and under the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the high uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertaint...

  17. Optimal covariance selection for estimation using graphical models

    OpenAIRE

    Vichik, Sergey; Oshman, Yaakov

    2011-01-01

    We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...

  18. Estimating a Noncompensatory IRT Model Using Metropolis within Gibbs Sampling

    Science.gov (United States)

    Babcock, Ben

    2011-01-01

    Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm…

  19. Estimated Frequency Domain Model Uncertainties used in Robust Controller Design

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.; Andersen, Palle; Stoustrup, Jakob

    1994-01-01

    This paper deals with the combination of system identification and robust controller design. Recent results on estimation of frequency domain model uncertainty are...

  20. Estimating Lead (Pb) Bioavailability In A Mouse Model

    Science.gov (United States)

    Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...

  1. ESTIMATION OF THE GENERALIZED LINEAR MODEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    Malika CHIKHI

    2012-06-01

    Full Text Available This article presents the generalized linear model, which encompasses modeling techniques such as linear regression, logistic regression, log-linear regression and Poisson regression. We begin by presenting the exponential-family models and then estimate the model parameters by the maximum likelihood method. We then test the model coefficients to assess their significance and confidence intervals, using the Wald test, which bears on the significance of the true parameter value based on the sample estimate.

  2. State reduced order models for the modelling of the thermal behavior of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Menezo, Christophe; Bouia, Hassan; Roux, Jean-Jacques; Depecker, Patrick [Institute National de Sciences Appliquees de Lyon, Villeurbanne Cedex, (France). Centre de Thermique de Lyon (CETHIL). Equipe Thermique du Batiment]. E-mail: menezo@insa-cethil-etb.insa-lyon.fr; bouia@insa-cethil-etb.insa-lyon.fr; roux@insa-cethil-etb.insa-lyon.fr; depecker@insa-cethil-etb.insa-lyon.fr

    2000-07-01

    This work is devoted to the field of building physics and related to the reduction of heat conduction models. The aim is to enlarge the model libraries of heat and mass transfer codes through limiting the considerable dimensions reached by the numerical systems during the modelling process of a multizone building. We show that the balanced realization technique, specifically adapted to the coupling of reduced order models with the other thermal phenomena, turns out to be very efficient. (author)

  3. Permutation forests for modeling word order in machine translation

    NARCIS (Netherlands)

    Stanojević, M.

    2017-01-01

    In natural language, there is only a limited space for variation in the word order of linguistic productions. From a linguistic perspective, word order is the result of multiple application of syntactic recursive functions. These syntactic operations produce hierarchical syntactic structures, as

  4. Targeting estimation of CCC-GARCH models with infinite fourth moments

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Søndergaard

    As an alternative to quasi-maximum likelihood, targeting estimation is a widely applied estimation method for univariate and multivariate GARCH models. For variance targeting estimation, recent research has pointed out that at least finite fourth-order moments of the data generating process...... is required if one wants to perform inference in GARCH models relying on asymptotic normality of the estimator, see Pedersen and Rahbek (2014) and Francq et al. (2011). Such moment conditions may not be satisfied in practice for financial returns, highlighting a large drawback of variance targeting estimation....... In this paper we consider the large-sample properties of the variance targeting estimator for the multivariate extended constant conditional correlation GARCH model when the distribution of the data generating process has infinite fourth moments. Using non-standard limit theory we derive new results...

  5. An Estimation of Construction and Demolition Debris in Seoul, Korea: Waste Amount, Type, and Estimating Model.

    Science.gov (United States)

    Seo, Seongwon; Hwang, Yongwoo

    1999-08-01

    Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
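The intensity-unit bookkeeping behind such an estimate (floor area times per-material debris intensity, summed over materials) can be sketched as follows; the figures in the usage check are illustrative, not Seoul's values.

```python
def estimate_cd_debris(floor_area_m2, intensity_t_per_m2):
    """Multiply the floor area of buildings to be demolished (or
    constructed) by per-material debris intensity units (t/m^2) and
    sum over materials to get a total debris estimate."""
    per_material = {m: floor_area_m2 * u for m, u in intensity_t_per_m2.items()}
    return per_material, sum(per_material.values())
```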

  6. Estimating the robustness of composite CBA & MCA assessments by variation of criteria importance order

    DEFF Research Database (Denmark)

    Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen

    that the outcome of the method is a subset of the total solution space. The paper finishes up with a discussion and considerations about how to present the results. The question whether to present a single decision criterion, such as the benefit-cost rate or the net present value, or instead to present graphs......, the proposed method uses surrogate weights based on rankings of the criteria, by the use of Rank Order Distribution (ROD) weights [3]. This reduces the problem to assigning a rank order value for each criterion. A method for combining the MCA with the cost-benefit analysis (CBA) is applied as described...... there are 40320 (8!) possible combinations of ranking the criteria, which have been made use of. The proposed method calculates all combinations and produces a set of rank variation graphs for each alternative and for different values of the trade-off indicator. This information is relatively easy to grasp......
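A hedged sketch of the rank-based idea: the abstract's Rank Order Distribution (ROD) weights are not reproduced here; the related rank order centroid (ROC) surrogate weights stand in, and the 8! count of possible criteria rankings is computed directly.

```python
import math

def roc_weights(n):
    """Rank order centroid weights, a surrogate-weight scheme that,
    like ROD weights, maps a ranking of n criteria to numeric weights:
    w_i = (1/n) * sum_{k=i}^{n} 1/k for rank i = 1..n (decreasing)."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

n = 8
w = roc_weights(n)                 # weights for the top-ranked ... last-ranked criterion
n_rankings = math.factorial(n)     # every possible ranking of 8 criteria
```

Enumerating all n_rankings orderings and recomputing weights for each is what produces the rank variation graphs the abstract describes.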

  7. The Interaction Between Control Rods as Estimated by Second-Order One-Group Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Persson, Rolf

    1966-10-15

    The interaction effect between control rods is an important problem for the reactivity control of a reactor. The approach of second order one-group perturbation theory is shown to be attractive due to its simplicity. Formulas are derived for the fully inserted control rods in a bare reactor. For a single rod we introduce a correction parameter b, which with good approximation is proportional to the strength of the absorber. For two and more rods we introduce an interaction function g(r_ij), which is assumed to depend only on the distance r_ij between the rods. The theoretical expressions are correlated with the results of several experiments in R0, ZEBRA and the Aagesta reactor, as well as with more sophisticated calculations. The approximate formulas are found to give quite good agreement with exact values, but in the case of about 8 or more rods higher-order effects are likely to be important.

  9. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models (Open Access)

    Science.gov (United States)

    2015-03-16

    with logical rules to simulate an archetype biochemical network, the human coagulation cascade. The model consisted of five differential equations...coagulation system. Coagulation is an archetype proteolytic cascade involving both positive and negative feedback [10–12]. Coagulation is mediated by a...purely ODE models in the literature. We estimated the model parameters from in vitro extrinsic coagulation data sets, in the presence of ATIII, with and

  10. A Novel Method for Decoding Any High-Order Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2014-01-01

    Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
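The decode-via-first-order-model idea can be sketched with a standard log-domain Viterbi routine. This is a generic implementation, not the paper's algorithm; the comment on state tupling is a common construction in the spirit of the transformation the abstract mentions.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """First-order Viterbi decoder in the log domain.
    pi: initial state probabilities, A[i, j]: transition i -> j,
    B[i, o]: emission probability of symbol o in state i.
    A high-order HMM can be decoded with this same routine once it is
    rewritten as a first-order HMM whose states are tuples of the
    original states (state tupling)."""
    n_states = len(pi)
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)              # [from, to]
        back[t] = np.argmax(cand, axis=0)             # best predecessor per state
        logd = cand[back[t], np.arange(n_states)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```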

  11. Reduced Order Aeroservoelastic Models with Rigid Body Modes, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations generating models with millions of degrees of freedom. Starting...

  12. A Polarimetric First-Order Model of Soil Moisture Effects on the DInSAR Coherence

    Directory of Open Access Journals (Sweden)

    Simon Zwieback

    2015-06-01

    Full Text Available Changes in soil moisture between two radar acquisitions can impact the observed coherence in differential interferometry: both the coherence magnitude |γ| and the phase Φ are affected. The influence on the latter potentially biases the estimation of deformations. These effects have been found to be variable in magnitude and sign, as well as dependent on polarization, as opposed to predictions by existing models. Such diversity can be explained when the soil is modelled as a half-space with spatially varying dielectric properties and a rough interface. The first-order perturbative solution achieves, upon calibration with airborne L-band data, median correlations ρ at HH polarization of 0.77 for the phase Φ, of 0.50 for |γ|, and of 0.56 for the phase triplets. The predictions are sensitive to the choice of dielectric mixing model, in particular the absorptive properties; the differences between the mixing models are found to be partially compensable by varying the relative importance of surface and volume scattering. However, for half of the agricultural fields the Hallikainen mixing model cannot reproduce the observed sensitivities of the phase to soil moisture. In addition, the first-order expansion does not predict any impact on the HV coherence, which is however empirically found to display similar sensitivities to soil moisture as the co-pol channels HH and VV. These results indicate that the first-order solution, while not able to reproduce all observed phenomena, can capture some of the more salient patterns of the effect of soil moisture changes on the HH and VV DInSAR signals. Hence it may prove useful in separating deformations from moisture signals, thus yielding improved displacement estimates or new ways of inferring soil moisture.

  13. Multiple-F0 tracking based on a high-order HMM model

    OpenAIRE

    Chang, Wei-Chen; Su, Alvin W.Y.; Yeh, Chunghsin; Roebel, Axel; Rodet, Xavier

    2008-01-01

    cote interne IRCAM: Chang08a; None / None; National audience; This paper is about multiple-F0 tracking and the estimation of the number of harmonic source streams in music sound signals. A source stream is understood as generated from a note played by a musical instrument. A note is described by a hidden Markov model (HMM) having two states: the attack state and the sustain state. It is proposed to first perform the tracking of F0 candidates using a high-order hidden Markov model, based on a fo...

  14. Incremental parameter estimation of kinetic metabolic network models

    Directory of Open Access Journals (Sweden)

    Jia Gengjie

    2012-11-01

    Full Text Available Abstract Background An efficient and reliable parameter estimation method is essential for the creation of biological models using ordinary differential equations (ODE). Most of the existing estimation methods involve finding the global minimum of data fitting residuals over the entire parameter space simultaneously. Unfortunately, the associated computational requirement often becomes prohibitively high due to the large number of parameters and the lack of complete parameter identifiability (i.e., not all parameters can be uniquely identified). Results In this work, an incremental approach was applied to the parameter estimation of ODE models from concentration time profiles. Particularly, the method was developed to address a commonly encountered circumstance in the modeling of metabolic networks, where the number of metabolic fluxes (reaction rates) exceeds that of metabolites (chemical species). Here, the minimization of model residuals was performed over a subset of the parameter space that is associated with the degrees of freedom in the dynamic flux estimation from the concentration time-slopes. The efficacy of this method was demonstrated using two generalized mass action (GMA) models, where the method significantly outperformed single-step estimations. In addition, an extension of the estimation method to handle missing data is also presented. Conclusions The proposed incremental estimation method is able to tackle the issue of the lack of complete parameter identifiability and to significantly reduce the computational effort of estimating model parameters, which will facilitate kinetic modeling of genome-scale cellular metabolism in the future.
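The dynamic flux estimation step, recovering fluxes from concentration time-slopes via dC/dt = S v, can be sketched on a toy network. The stoichiometric matrix and all values below are invented for illustration, not taken from the paper's GMA models.

```python
import numpy as np

# Toy network: 3 metabolites, 4 fluxes; S[i, j] is the stoichiometric
# coefficient of metabolite i in flux j (illustrative values).
S = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1, -1],
              [ 0,  0,  1, -1]], float)

def fluxes_from_slopes(S, dCdt):
    """Recover a flux vector v from measured concentration time-slopes
    by least squares on dC/dt = S v.  With more fluxes than metabolites
    the system is underdetermined, so this returns the minimum-norm
    member of the solution family; the remaining degrees of freedom are
    exactly what the incremental method then searches over."""
    v, *_ = np.linalg.lstsq(S, dCdt, rcond=None)
    return v
```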

  15. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for the analysis of failure intensity. We also study a beta-binomial model for the analysis of failure probability. The parameters of three models are estimated by the matching-moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. Many mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure behaviour of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model, an application of marked point processes, for modelling common cause failure in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
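As an illustration of the Weibull maximum likelihood fitting mentioned above, a minimal complete-sample estimator (a generic sketch; the report's implementation also handles censored data, which is not covered here):

```python
import numpy as np

def weibull_mle(t, lo=0.05, hi=50.0, tol=1e-10):
    """Maximum likelihood fit of a complete (uncensored) Weibull sample.
    The shape k solves the profile-likelihood equation
        sum(t^k log t)/sum(t^k) - 1/k - mean(log t) = 0,
    which is monotone increasing in k, so bisection finds the unique
    root; the scale then has the closed form (mean(t^k))**(1/k)."""
    t = np.asarray(t, float)
    u = t / t.max()                      # rescale for numerical safety
    logu = np.log(u)

    def g(k):
        uk = u ** k
        return (uk * logu).sum() / uk.sum() - 1.0 / k - logu.mean()

    while hi - lo > tol:                 # bisection on the shape
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = t.max() * (u ** k).mean() ** (1.0 / k)
    return k, scale
```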

  16. Ballistic model to estimate microsprinkler droplet distribution

    Directory of Open Access Journals (Sweden)

    Conceição Marco Antônio Fonseca

    2003-01-01

    Full Text Available Experimental determination of microsprinkler droplet diameters is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations used a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated values varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the highest radial distance from the emitter. The model's performance in simulating the microsprinkler drop distribution was classified as excellent.
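A generic ballistic sketch of the kind of droplet model involved: Euler integration of a sphere under gravity and quadratic air drag. The drag law, constants, and function name are assumptions for illustration, not SIRIAS's actual equations.

```python
import numpy as np

def droplet_range(d_mm, v0=5.0, angle_deg=25.0, h0=0.3,
                  with_drag=True, dt=1e-4):
    """Horizontal range (m) of a droplet of diameter d_mm launched at
    speed v0 (m/s) and the given elevation angle from height h0 (m),
    integrated with forward Euler; drag uses a fixed sphere Cd."""
    rho_a, rho_w, g, Cd = 1.2, 1000.0, 9.81, 0.45
    d = d_mm / 1000.0
    k = 3 * Cd * rho_a / (4 * rho_w * d)     # drag term, 1/m
    a = np.deg2rad(angle_deg)
    x, y = 0.0, h0
    vx, vy = v0 * np.cos(a), v0 * np.sin(a)
    while y > 0.0:                           # integrate until impact
        v = np.hypot(vx, vy)
        ax = -k * v * vx if with_drag else 0.0
        ay = -g - (k * v * vy if with_drag else 0.0)
        vx += ax * dt; vy += ay * dt
        x += vx * dt;  y += vy * dt
    return x
```

The qualitative behaviour the experiment relies on follows directly: drag shortens the range, and smaller droplets (larger drag per unit mass) land closer to the emitter.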

  17. Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model

    International Nuclear Information System (INIS)

    Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang

    2016-01-01

    A novel model is proposed to overcome the shortages of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends not only on the immediate horizontal and vertical states but also on the immediate diagonal state, and the observation symbol probability depends not only on the current state but also on the immediate horizontal, vertical and diagonal states. This paper defines the structure of the model and studies its three basic problems: probability calculation, path backtracking and parameter estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three problems are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, the structure of the proposed model has richer statistical characteristics; therefore it can, in theory, describe some practical problems more accurately.

  18. Second order kinetic modeling of headspace solid phase microextraction of flavors released from selected food model systems.

    Science.gov (United States)

    Zhang, Jiyuan; Cheong, Mun-Wai; Yu, Bin; Curran, Philip; Zhou, Weibiao

    2014-09-04

    Headspace solid-phase microextraction (HS-SPME) is widely used in various fields as a simple and versatile method, yet quantification remains challenging. To improve reproducibility in quantification, a mathematical model with its roots in psychological modeling and chemical reactor modeling was developed, describing the kinetic behavior of aroma-active compounds extracted by SPME from two different food model systems, i.e., a semi-solid food and a liquid food. The model accounted for both adsorption and release of the analytes from the SPME fiber, which occurred simultaneously but were counter-directed. The model had four parameters, and their estimated values were found to be more reproducible than the direct measurement of the compounds themselves by instrumental analysis. With relative standard deviations (RSD) of each parameter of less than 5% and a root mean square error (RMSE) of less than 0.15, the model proved robust for estimating the release of a wide range of low-molecular-weight acetates at three environmental temperatures, i.e., 30, 40 and 60 °C. More insights into SPME behavior regarding small-molecule analytes were also obtained through the kinetic parameters and the model itself.
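The counter-directed adsorption/release idea can be sketched with a toy one-state ODE; this plausible functional form and its constants are assumptions for illustration, not the paper's four-parameter model.

```python
def spme_uptake(k_ads, k_rel, c_head, q_max, t_end, dt=0.01):
    """Analyte mass m on the fibre grows by adsorption from the
    headspace and shrinks by simultaneous release:
        dm/dt = k_ads * c_head * (q_max - m) - k_rel * m
    integrated with forward Euler; the two terms act at the same time
    but in opposite directions, as in the abstract's description."""
    m = 0.0
    for _ in range(int(t_end / dt)):
        m += (k_ads * c_head * (q_max - m) - k_rel * m) * dt
    return m
```

At long times the sketch settles at the balance point k_ads·c·q_max / (k_ads·c + k_rel), the analogue of the extraction equilibrium that makes SPME quantification reproducible.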

  20. Modelling Trends in Ordered Correspondence Analysis Using Orthogonal Polynomials.

    Science.gov (United States)

    Lombardo, Rosaria; Beh, Eric J; Kroonenberg, Pieter M

    2016-06-01

    The core of the paper consists of the treatment of two special decompositions for correspondence analysis of two-way ordered contingency tables: the bivariate moment decomposition and the hybrid decomposition, both using orthogonal polynomials rather than the commonly used singular vectors. To this end, we will detail and explain the basic characteristics of a particular set of orthogonal polynomials, called Emerson polynomials. It is shown that such polynomials, when used as bases for the row and/or column spaces, can enhance the interpretations via linear, quadratic and higher-order moments of the ordered categories. To aid such interpretations, we propose a new type of graphical display: the polynomial biplot.
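The defining property of such polynomials, orthonormality under the category marginals, can be sketched by a weighted Gram-Schmidt construction. The Emerson recurrence itself is not reproduced here; this generic construction yields polynomials with the same orthogonality property.

```python
import numpy as np

def orthonormal_polynomials(scores, p, degree):
    """Gram-Schmidt construction of polynomials in the category scores
    that are orthonormal under the weight p (the category marginals),
    i.e. sum_j p[j] * phi_a(j) * phi_b(j) = delta_ab.
    Returns a (degree+1, n_categories) array; row d has degree d."""
    scores = np.asarray(scores, float)
    p = np.asarray(p, float)
    basis = [scores ** d for d in range(degree + 1)]   # monomials
    polys = []
    for b in basis:
        v = b.copy()
        for q in polys:
            v -= (p * v * q).sum() * q     # remove projections on lower degrees
        v /= np.sqrt((p * v * v).sum())    # normalise under weight p
        polys.append(v)
    return np.vstack(polys)
```

Row 1 then carries the linear trend of the ordered categories, row 2 the quadratic trend, and so on, which is what the moment decompositions in the abstract exploit.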

  1. A Dynamic Travel Time Estimation Model Based on Connected Vehicles

    Directory of Open Access Journals (Sweden)

    Daxin Tian

    2015-01-01

    Full Text Available With advances in connected vehicle technology, dynamic vehicle route guidance models gradually become indispensable equipment for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering the dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on the connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experiment results prove the effectiveness of the travel time estimation method.
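The aggregation step of such a scheme can be sketched as follows, taking the dynamically divided links as given (the paper's link-dividing algorithm and data formats are not reproduced; names are illustrative):

```python
from collections import defaultdict

def route_travel_time(reports, route):
    """Estimate a route's travel time from connected-vehicle reports.
    reports: iterable of (link_id, seconds) pairs collected from
    vehicles; the route estimate is the sum of per-link mean times."""
    by_link = defaultdict(list)
    for link, seconds in reports:
        by_link[link].append(seconds)
    return sum(sum(by_link[l]) / len(by_link[l]) for l in route)
```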

  2. Weibull Parameters Estimation Based on Physics of Failure Model

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2012-01-01

    Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
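The damage-accumulation step can be sketched as below; the cycle counts are assumed already extracted by Rainflow counting, and the S-N curve constants are illustrative, not the paper's solder-joint parameters.

```python
def miners_damage(cycles, A=1e12, m=3.0):
    """Miner's rule: linear damage accumulation over stress cycles.
    cycles: iterable of (stress_range, n_cycles) pairs from Rainflow
    counting; the S-N curve N(S) = A * S**-m gives cycles to failure
    at stress range S, and damage is D = sum(n_i / N(S_i)).  Failure
    is predicted when D reaches the threshold (classically 1)."""
    return sum(n / (A * s ** -m) for s, n in cycles)
```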

  3. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    International Nuclear Information System (INIS)

    Rypdal, Kristin; Baritz, Rainer

    2002-01-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  4. Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals

    Energy Technology Data Exchange (ETDEWEB)

    Rypdal, Kristin; Baritz, Rainer

    2002-07-01

    Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it has been a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and a lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO2 carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)

  5. Cokriging model for estimation of water table elevation

    International Nuclear Information System (INIS)

    Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.

    1989-01-01

    In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrates the advantages of the cokriging model.

  6. Diffuse solar radiation estimation models for Turkey's big cities

    International Nuclear Information System (INIS)

    Ulgen, Koray; Hepbasli, Arif

    2009-01-01

    A reasonably accurate knowledge of the availability of the solar resource at any place is required by solar engineers, architects, agriculturists, and hydrologists in many applications of solar energy such as solar furnaces, concentrating collectors, and interior illumination of buildings. For this purpose, various empirical models (or correlations) have been developed in the past to estimate the solar radiation around the world. This study deals with diffuse solar radiation estimation models along with the statistical test methods used to evaluate their performance. Models used to predict monthly average daily values of diffuse solar radiation are classified in four groups as follows: (i) from the diffuse fraction or cloudiness index as a function of the clearness index, (ii) from the diffuse fraction or cloudiness index as a function of the relative sunshine duration or sunshine fraction, (iii) from the diffuse coefficient as a function of the clearness index, and (iv) from the diffuse coefficient as a function of the relative sunshine duration or sunshine fraction. Empirical correlations are also developed to establish a relationship between the monthly average daily diffuse fraction or cloudiness index (Kd) and the monthly average daily diffuse coefficient (Kdd) with the monthly average daily clearness index (KT) and the monthly average daily sunshine fraction (S/So) for the three biggest cities by population in Turkey (Istanbul, Ankara and Izmir). Although global solar radiation on a horizontal surface and sunshine duration have been measured by the Turkish State Meteorological Service (STMS) across the whole country since 1964, diffuse solar radiation has not been measured. The eight new models for estimating the monthly average daily diffuse solar radiation on a horizontal surface in the three big cities are validated, and the most accurate model is selected for guiding future projects. The new models are then compared with the 32 models available in the

  7. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Science.gov (United States)

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical, and when its categories are ordered levels, the model is ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine population values based on a sample. The purpose of this research is the parameter estimation of the GWOLR model using R software. The estimation uses data on the number of dengue fever patients in Semarang City; the observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  8. Comparison of Estimation Procedures for Multilevel AR(1) Models

    Directory of Open Access Journals (Sweden)

    Tanja Krone

    2016-04-01

    Full Text Available To estimate a time series model for multiple individuals, a multilevel model may be used. In this paper we compare two estimation methods for the autocorrelation in multilevel AR(1) models, namely Maximum Likelihood Estimation (MLE) and Bayesian Markov Chain Monte Carlo. Furthermore, we examine the difference between modeling fixed and random individual parameters. To this end, we perform a simulation study with a fully crossed design, in which we vary the length of the time series (10 or 25), the number of individuals per sample (10 or 25), the mean of the autocorrelation (-0.6 to 0.6 inclusive, in steps of 0.3) and the standard deviation of the autocorrelation (0.25 or 0.40). We found that the random estimators of the population autocorrelation show less bias and higher power, compared to the fixed estimators. As expected, the random estimators profit strongly from a higher number of individuals, while this effect is small for the fixed estimators. The fixed estimators profit slightly more from a higher number of time points than the random estimators. When possible, random estimation is preferred to fixed estimation. The difference between MLE and Bayesian estimation is nearly negligible. The Bayesian estimation shows a smaller bias, but MLE shows a smaller variability (i.e., standard deviation of the parameter estimates). Finally, better results are found for a higher number of individuals and time points, and for a lower individual variability of the autocorrelation. The effect of the size of the autocorrelation differs between outcome measures.
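The per-individual ("fixed") approach compared above can be sketched with a conditional least-squares estimate of the AR(1) autocorrelation; the multilevel pooling and Bayesian MCMC machinery of the paper are not reproduced here:

```python
# Minimal sketch: conditional least-squares AR(1) autocorrelation estimate
# for a single individual's series (illustrative, not the paper's code).
import random

def ar1_estimate(series):
    """phi_hat = sum x[t-1]*x[t] / sum x[t-1]**2 on mean-centered data."""
    m = sum(series) / len(series)
    x = [v - m for v in series]
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def simulate_ar1(phi, n, seed=1):
    """Generate an AR(1) series with standard-normal innovations."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

# With a long series the estimator recovers the true autocorrelation:
phi_hat = ar1_estimate(simulate_ar1(0.6, 5000))
```

With series as short as those in the simulation study (10 or 25 points), this single-series estimator is strongly biased toward zero, which is one motivation for pooling individuals in a multilevel model.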

  9. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    The objective is to minimize the processing time and computer memory required .... The time to acquire extra GPR or seismic data for large sites and picking the first arrival time to provide the needed datasets for the joint inversion are also .... The data utilized for the regression modelling was acquired from ground.

  10. Linear Regression Models for Estimating True Subsurface ...

    Indian Academy of Sciences (India)


    of the processing time and memory space required to carry out the inversion with the SCLS algorithm. ... consumption of time and memory space for the iterative computations to converge at minimum data ..... colour scale and blanking as the observed true resistivity models, for visual assessment. The accuracy ...

  11. Model order reduction using eigen algorithm | Singh | International ...

    African Journals Online (AJOL)

    -scale dynamic systems, where the denominator polynomial is determined through the Eigen algorithm and the numerator polynomial via the factor division algorithm. In the Eigen algorithm, the most dominant Eigen value of both original and reduced order ...

  12. Genetic Prediction Models and Heritability Estimates for Functional ...

    African Journals Online (AJOL)

    This paper discusses these methodologies and their advantages and disadvantages. Heritability estimates obtained from these models are also reviewed. Linear methodologies can model binary and actual longevity, while RR and TM methodologies model binary survival. PH procedures model the hazard function of a cow ...

  13. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

    Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.

  14. Fractional Low-Order Joint Moments in the Estimation of Fractional Motions

    Science.gov (United States)

    Carsteanu, Alin Andrei; Guzman Sanluis, Javier Allan; Delvia Borjas López, Ada

    2017-04-01

    Fractional motions arise naturally from the integration of fractional noises, signals that appear in a variety of geophysical processes. When the marginal limiting probability distributions of these processes are Gaussian, the scaling behaviour of integer moments, be they marginal or joint - such as linear autocorrelation - can be used to parameterize the process. When, however, those moments do not converge, due to the heavy tails of the distributions, fractional low-order moments offer an attractive alternative. An application thereof to hydrometeorological data is presented herein.

  15. Simple, efficient estimators of treatment effects in randomized trials using generalized linear models to leverage baseline variables.

    Science.gov (United States)

    Rosenblum, Michael; van der Laan, Mark J

    2010-04-01

    Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
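The special case above can be made concrete: in a Poisson working model with only an intercept and a treatment indicator, the MLE of the treatment coefficient reduces to the marginal log rate ratio, computable in closed form. The data below are made up for illustration:

```python
# Sketch: with an intercept + treatment-indicator Poisson working model,
# the MLE of the treatment coefficient equals log(mean_treated / mean_control),
# the marginal log rate ratio discussed in the abstract.
import math

def marginal_log_rate_ratio(outcomes, treated):
    """outcomes: count outcomes; treated: 0/1 randomized treatment labels."""
    y1 = [y for y, a in zip(outcomes, treated) if a == 1]
    y0 = [y for y, a in zip(outcomes, treated) if a == 0]
    return math.log((sum(y1) / len(y1)) / (sum(y0) / len(y0)))

# Hypothetical trial data: 4 treated, 4 control subjects with count outcomes.
y = [3, 5, 4, 2, 1, 2, 1, 0]
a = [1, 1, 1, 1, 0, 0, 0, 0]
beta_treatment = marginal_log_rate_ratio(y, a)  # log(3.5 / 1.0)
```

The paper's stronger claim is that this coefficient remains asymptotically unbiased even when baseline covariates are added to an arbitrarily misspecified main-terms working model; the closed form above is just the covariate-free base case.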

  16. Muscle synergy control model-tuned EMG driven torque estimation system with a musculo-skeletal model.

    Science.gov (United States)

    Min, Kyuengbo; Shin, Duk; Lee, Jongho; Kakei, Shinji

    2013-01-01

    Muscle activity is the final signal for motion control from the brain. Based on this biological characteristic, electromyogram (EMG) signals have been applied to various systems that interface humans with external environments such as external devices. In order to use EMG signals as control inputs for this kind of system, current EMG-driven torque estimation models generally employ a mathematical model that estimates the nonlinear transformation function between the input signal and the output torque. However, these models need to estimate many parameters, which makes their estimation poorly versatile across conditions. Moreover, as these models are designed to estimate the joint torque, the input EMG signals are tuned without consideration of the physiological synergetic contributions of multiple muscles to motion control. To overcome these problems, we propose a new tuning model based on the synergy control mechanism between multiple muscles in the cortico-spinal tract. With this synergetic tuning model, the estimated contribution of multiple muscles to motion control is applied to tune the EMG signals, and this cortico-spinal control mechanism-based process improves the precision of torque estimation. The system is basically a forward dynamics model that transforms EMG signals into joint torque. It should be emphasized that this forward dynamics model uses a musculo-skeletal model as a constraint. The musculo-skeletal model is designed with precise musculo-skeletal data, such as the origins and insertions of individual muscles and maximum muscle forces. Compared with a purely mathematical model, the proposed model can serve as a versatile torque estimator in various conditions and estimates the torque with improved accuracy. In this paper, we also show some preliminary experimental results to support the discussion of the proposed model.

  17. Estimation of Sonic Fatigue by Reduced-Order Finite Element Based Analyses

    Science.gov (United States)

    Rizzi, Stephen A.; Przekop, Adam

    2006-01-01

    A computationally efficient, reduced-order method is presented for prediction of sonic fatigue of structures exhibiting geometrically nonlinear response. A procedure to determine the nonlinear modal stiffness using commercial finite element codes allows the coupled nonlinear equations of motion in physical degrees of freedom to be transformed to a smaller coupled system of equations in modal coordinates. The nonlinear modal system is first solved using a computationally light equivalent linearization solution to determine if the structure responds to the applied loading in a nonlinear fashion. If so, a higher fidelity numerical simulation in modal coordinates is undertaken to more accurately determine the nonlinear response. Comparisons of displacement and stress response obtained from the reduced-order analyses are made with results obtained from numerical simulation in physical degrees-of-freedom. Fatigue life predictions from nonlinear modal and physical simulations are made using the rainflow cycle counting method in a linear cumulative damage analysis. Results computed for a simple beam structure under a random acoustic loading demonstrate the effectiveness of the approach and compare favorably with results obtained from the solution in physical degrees-of-freedom.

  18. Estimation of Length and Order of Polynomial-based Filter Implemented in the Form of Farrow Structure

    Directory of Open Access Journals (Sweden)

    S. Vukotic

    2016-08-01

    Full Text Available Digital polynomial-based interpolation filters implemented using the Farrow structure are used in Digital Signal Processing (DSP) to calculate the signal between its discrete samples. The two basic design parameters for these filters are the number of polynomial segments, which defines the finite length of the impulse response, and the order of the polynomial in each segment. The complexity of the implementation structure and the frequency-domain performance depend on these two parameters. This contribution presents estimation formulae for the length and polynomial order of polynomial-based filters for various types of requirements, including stopband attenuation, transition-band width, passband deviation, and passband/stopband weighting.
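The two design parameters can be illustrated with the classic 4-tap cubic Lagrange interpolator in Farrow form (length L = 4 segments, polynomial order M = 3). This is a standard textbook example, not the estimation formulae of the paper:

```python
# Sketch of a 4-tap, 3rd-order (cubic Lagrange) interpolator in Farrow form.
# The branch-filter coefficients are fixed at design time; only the
# fractional delay mu changes at run time.

def farrow_cubic(x, mu):
    """Interpolate between x[1] and x[2] of a 4-sample window x
    (samples at relative times -1, 0, 1, 2), at fractional position
    mu in [0, 1)."""
    xm1, x0, x1, x2 = x
    # Branch-filter outputs = polynomial coefficients in mu:
    c0 = x0
    c1 = -xm1 / 3.0 - x0 / 2.0 + x1 - x2 / 6.0
    c2 = xm1 / 2.0 - x0 + x1 / 2.0
    c3 = -xm1 / 6.0 + x0 / 2.0 - x1 / 2.0 + x2 / 6.0
    # Horner evaluation -- the run-time part of the Farrow structure:
    return ((c3 * mu + c2) * mu + c1) * mu + c0

# A cubic interpolator reproduces samples of x[n] = n**2 exactly,
# so at mu = 0.5 (i.e. n = 0.5) the result is 0.25 up to rounding:
y = farrow_cubic([1.0, 0.0, 1.0, 4.0], 0.5)
```

Increasing L (more segments) or M (higher polynomial order) improves stopband attenuation and narrows the transition band at the cost of more branch filters and multipliers, which is the trade-off the paper's estimation formulae quantify.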

  19. Risk factors associated with bus accident severity in the United States: A generalized ordered logit model

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2012-01-01

    Introduction: Recent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act...... of 2011. Method: The current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005–2009. Results: Results show...... that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because...

  20. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. The statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of an initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees, so that the mean value calculated from it represents the basic set with a probability higher than 95 %. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, no. OI-173011, no. TR-37002 and no. III-43007]
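A standard way to obtain such a lower bound, sketched here under a normal-approximation assumption (the paper's exact procedure is not shown), is to size the sample from the coefficient of variation observed in the pilot data:

```python
# Sketch: minimum sample size so that the sample mean estimates the
# population mean within a relative error e at ~95 % confidence, using the
# coefficient of variation (CV) from a pilot sample. Values illustrative.
import math

def required_sample_size(cv, rel_error, z=1.96):
    """n >= (z * CV / e)**2, rounded up (normal approximation)."""
    return math.ceil((z * cv / rel_error) ** 2)

# A hypothetical terpene trait with CV = 0.45, estimated to within a
# 20 % relative error:
n = required_sample_size(0.45, 0.20)
```

With these illustrative inputs the bound happens to come out at 20 trees; the study's own figure of 20 was of course derived from the measured variability of all 19 terpene characteristics.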

  1. Linear and nonlinear stability analysis in BWRs applying a reduced order model

    Energy Technology Data Exchange (ETDEWEB)

    Olvera G, O. A.; Espinosa P, G.; Prieto G, A., E-mail: omar_olverag@hotmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico)

    2016-09-15

    Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. This model qualitatively represents the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in the fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on the Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but it is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The methodology was validated with simulated signals of the studied model. (Author)

  2. Comparing plasma fluid models of different order for 1D streamer ionization fronts

    NARCIS (Netherlands)

    A. Markosyan (Aram); H.J. Teunissen (Jannis); S. Dujko (Sasa); U. M. Ebert (Ute)

    2015-01-01

    We evaluate the performance of three plasma fluid models: the first order reaction-drift-diffusion model based on the local field approximation; the second order reaction-drift-diffusion model based on the local energy approximation and a recently developed high order fluid model by

  3. Second-Order Discrete-Time Sliding Mode Observer for State of Charge Determination Based on a Dynamic Resistance Li-Ion Battery Model

    Directory of Open Access Journals (Sweden)

    Sang Woo Kim

    2013-10-01

    Full Text Available A second-order discrete-time sliding mode observer (DSMO)-based method is proposed to estimate the state of charge (SOC) of a Li-ion battery. Unlike the first-order sliding mode approach, the proposed method eliminates the chattering phenomenon in SOC estimation. Further, a battery model with a dynamic resistance is also proposed to improve the accuracy of the battery model. Similar to actual battery behavior, the resistance parameters in this model are changed by both the magnitude of the discharge current and the SOC level. Validation of the dynamic resistance model is performed through pulse current discharge tests at two different SOC levels. Our experimental results show that the proposed estimation method not only enhances the estimation accuracy but also eliminates the chattering phenomenon. The SOC estimation performance of the second-order DSMO is compared with that of the first-order DSMO.
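A first-order cousin of the proposed observer can be sketched as coulomb counting plus a signed switching correction driven by the terminal-voltage error. All parameters, the linear OCV curve, and the fixed resistance below are illustrative assumptions; the paper's second-order, chattering-free design and dynamic resistance model are not reproduced:

```python
# Minimal sketch of a FIRST-order discrete-time sliding mode SOC observer
# (illustrative parameters; note the chattering this simple design exhibits,
# which the paper's second-order DSMO is designed to remove).

def soc_observer(current_a, voltage_v, soc0=0.5, dt=1.0,
                 capacity_as=3600.0, r=0.05, ocv_slope=0.7, ocv_0=3.4,
                 gain=0.001):
    """Coulomb counting corrected by a signed (sliding-mode) term driven by
    the terminal-voltage error. Assumed OCV model: OCV(soc) = ocv_0 +
    ocv_slope * soc; discharge current is positive."""
    soc = soc0
    history = []
    for i, v in zip(current_a, voltage_v):
        v_hat = ocv_0 + ocv_slope * soc - r * i   # predicted terminal voltage
        err = v - v_hat
        sgn = (err > 0) - (err < 0)               # switching term
        soc += -i * dt / capacity_as + gain * sgn
        history.append(soc)
    return history

# At rest (zero current) with terminal voltage matching a true SOC of 0.6,
# an observer started at 0.5 slides up to ~0.6 and then chatters around it:
n = 300
hist = soc_observer([0.0] * n, [3.4 + 0.7 * 0.6] * n, soc0=0.5)
```

The residual oscillation of `hist` around 0.6 after convergence is the chattering phenomenon that motivates the second-order design.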

  4. Moving-Horizon Modulating Functions-Based Algorithm for Online Source Estimation in a First Order Hyperbolic PDE

    KAUST Repository

    Asiri, Sharefa M.

    2017-08-22

    In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.

  5. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    International Nuclear Information System (INIS)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.

    2015-01-01

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  6. Relaxation approximations to second-order traffic flow models by high-resolution schemes

    Energy Technology Data Exchange (ETDEWEB)

    Nikolos, I.K.; Delis, A.I.; Papageorgiou, M. [School of Production Engineering and Management, Technical University of Crete, University Campus, Chania 73100, Crete (Greece)

    2015-03-10

    A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.

  7. Multi-source localization in MEG using simulated annealing: model order determination and parameter accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Huang, M.; Supek, S.; Aine, C.

    1996-06-01

    Empirical neuromagnetic studies have reported that multiple brain regions are active at single instants in time as well as across time intervals of interest. Determining the number of active regions, however, requires a systematic search across increasing model orders using a reduced chi-square measure of goodness-of-fit and multiple starting points within each assumed model order. Simulated annealing was recently proposed for noiseless biomagnetic data as an effective global minimizer. A modified cost function was also proposed to deal effectively with an unknown number of dipoles in noiseless, multi-source biomagnetic data. Numerical simulation studies were conducted using simulated annealing to examine the effects of a systematic increase in model order, using both reduced chi-square as a cost function and the modified cost function, and the effects of overmodeling on parameter estimation accuracy. Effects of different choices of weighting factors are also discussed. Simulated annealing was also applied to visually evoked neuromagnetic data, and the effectiveness of both cost functions in determining the number of active regions was demonstrated.

  8. Jacobian projection reduced-order models for dynamic systems with contact nonlinearities

    Science.gov (United States)

    Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.

    2018-02-01

    In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced order models, the interface (nonlinear) degrees of freedom must still be solved in-full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.

  9. Explicit estimating equations for semiparametric generalized linear latent variable models

    KAUST Repository

    Ma, Yanyuan

    2010-07-05

    We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.

  10. Battery Calendar Life Estimator Manual Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jon P. Christophersen; Ira Bloom; Ed Thomas; Vince Battaglia

    2012-10-01

    The Battery Life Estimator (BLE) Manual has been prepared to assist developers in their efforts to estimate the calendar life of advanced batteries for automotive applications. Testing requirements and procedures are defined by the various manuals previously published under the United States Advanced Battery Consortium (USABC). The purpose of this manual is to describe and standardize a method for estimating calendar life based on statistical models and degradation data acquired from typical USABC battery testing.

  11. How to use COSMIC Functional Size in Effort Estimation Models?

    OpenAIRE

    Gencel, Cigdem

    2008-01-01

    Although Functional Size Measurement (FSM) methods have become widely used by software organizations, functional size based effort estimation still needs further investigation. Most studies on effort estimation consider the total functional size of the software as the primary input to estimation models, and they mostly focus on identifying the project parameters which might have a significant effect on the size-effort relationship. This study brings suggestions on how to use COSMIC ...

  12. Estimating Dynamic Equilibrium Models using Macro and Financial Data

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel

    We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... frequency. We suggest two approaches for the estimation of structural parameters. The first is a simple regression-based procedure for estimation of the reduced-form parameters of the model, combined with a minimum-distance method for identifying the structural parameters. The second approach uses...... martingale estimating functions to estimate the structural parameters directly through a non-linear optimization scheme. We illustrate both approaches by estimating the stochastic AK model with mean-reverting spot interest rates. We also provide Monte Carlo evidence on the small sample behavior...

  13. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
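    The contrast between the observed-information and sandwich covariance estimators discussed above can be illustrated on a far simpler model than a CDM. The sketch below uses a scalar Poisson MLE (an illustrative stand-in, not the paper's cognitive diagnosis setting) and builds both estimators from per-observation scores and second derivatives:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=500)
lam = x.mean()                 # MLE of the Poisson rate

# per-observation score and second derivative of the log-likelihood at the MLE
score = x / lam - 1.0          # d/d(lam) log f(x_i; lam)
hess = -x / lam**2             # d2/d(lam)2 log f(x_i; lam)

H = -hess.sum()                # observed information
B = (score ** 2).sum()         # empirical cross-product (OPG) information

var_obs = 1.0 / H              # observed-information variance of the MLE
var_sandwich = B / H ** 2      # sandwich variance: H^-1 B H^-1
```

Under a correctly specified model the two estimates nearly coincide; under misspecification only the sandwich form remains consistent, which mirrors the simulation finding reported in the abstract.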

  14. Parameter Estimation for a Class of Lifetime Models

    Directory of Open Access Journals (Sweden)

    Xinyang Ji

    2014-01-01

    Full Text Available Our purpose in this paper is to present a better method of parametric estimation for a bivariate nonlinear regression model, which takes the performance indicator of rubber aging as the dependent variable and time and temperature as the independent variables. We point out that the commonly used two-step method (TSM), which splits the model and estimates its parameters separately, has limitations. Instead, we apply Marquardt's method (MM) to carry out parametric estimation directly for the full model, and we compare the two methods of parametric estimation by random simulation. Our results show that MM gives a better data fit, more reasonable parameter estimates, and smaller prediction error than TSM.
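    The paper's rubber-aging model is not reproduced in the abstract, so the sketch below substitutes a hypothetical Arrhenius-type bivariate model, y = exp(-k0·t·exp(-E/T)), and fits both parameters jointly with the Levenberg-Marquardt algorithm (SciPy's `method='lm'`), i.e. the single-step alternative to a split two-step fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical Arrhenius-type aging model (illustrative, not the paper's model):
# y = exp(-k0 * t * exp(-E / T)), with time t and absolute temperature T
def aging(X, k0, E):
    t, T = X
    return np.exp(-k0 * t * np.exp(-E / T))

rng = np.random.default_rng(1)
t = np.tile(np.linspace(0.0, 100.0, 20), 3)        # aging times
T = np.repeat([300.0, 320.0, 340.0], 20)           # three test temperatures
y = aging((t, T), 5.0, 2000.0) + rng.normal(0.0, 0.01, t.size)

# Levenberg-Marquardt estimates both parameters jointly in a single step
popt, pcov = curve_fit(aging, (t, T), y, p0=[1.0, 1500.0], method='lm')
```

Fitting at several temperatures simultaneously is what identifies the two parameters; `pcov` gives their joint covariance.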

  15. Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations

    Directory of Open Access Journals (Sweden)

    Lenny Suardi

    2014-08-01

    Full Text Available This paper applies the maximum likelihood (ML) approach to implementing the structural model of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and the Longstaff & Schwartz (LS) models, are used in determining these prices, yields, yield spreads and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as the option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond prices data and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are applied to a sample of 24 bond prices in Indonesian corporations during the period 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond prices data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The results show that both models, on average, underestimate the bond prices and overestimate the yields and yield spreads.
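    The firm-value extraction step described above can be illustrated with a KMV-style fixed-point iteration, a simpler stand-in for the full transformed likelihood of Duan (1994): price equity as a call option on firm value, invert the pricing formula day by day, and re-estimate the firm-value volatility until it stabilizes. All inputs below are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def merton_equity(V, D, r, T, sigma):
    """Merton model: equity as a European call on firm value V with strike D."""
    d1 = (np.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2)

# synthetic daily equity prices and an assumed debt barrier (illustrative values)
rng = np.random.default_rng(5)
E = 40.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, 250)))
D, r, T = 60.0, 0.05, 1.0

sigma_v = 0.3                          # initial guess for firm-value volatility
for _ in range(50):
    # invert the pricing formula to extract the implied firm value each day
    V = np.array([brentq(lambda v: merton_equity(v, D, r, T, sigma_v) - e,
                         e, e + 10.0 * D) for e in E])
    new_sigma = np.std(np.diff(np.log(V))) * np.sqrt(250.0)
    if abs(new_sigma - sigma_v) < 1e-6:
        break
    sigma_v = new_sigma
```

The implied firm-value series `V` and its volatility `sigma_v` are the inputs that a structural bond pricing formula would then consume.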

  16. Consistent estimation of linear panel data models with measurement error

    NARCIS (Netherlands)

    Meijer, Erik; Spierdijk, Laura; Wansbeek, Thomas

    2017-01-01

    Measurement error causes a bias towards zero when estimating a panel data linear regression model. The panel data context offers various opportunities to derive instrumental variables allowing for consistent estimation. We consider three sources of moment conditions: (i) restrictions on the

  17. The Development of an Empirical Model for Estimation of the ...

    African Journals Online (AJOL)

    Nassiri P

    rate, daily water consumption, smoking habits, drugs that interfere with the thermoregulatory processes, and exposure to other harmful agents. Conclusions: Eventually, based on the criteria, a model for estimation of the workers' sensitivity to heat stress was presented for the first time, by which the sensitivity is estimated in ...

  18. Asymptotics for Estimating Equations in Hidden Markov Models

    DEFF Research Database (Denmark)

    Hansen, Jørgen Vinsløv; Jensen, Jens Ledet

    Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...

  19. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, Ordinary Least Squares (OLS) compares favourably with the Generalized Least Squares (GLS) estimators in ...

  20. Estimation of Kinetic Parameters in an Automotive SCR Catalyst Model

    DEFF Research Database (Denmark)

    Åberg, Andreas; Widd, Anders; Abildskov, Jens

    2016-01-01

    A challenge during the development of models for simulation of the automotive Selective Catalytic Reduction catalyst is the parameter estimation of the kinetic parameters, which can be time consuming and problematic. The parameter estimation is often carried out on small-scale reactor tests...

  1. Estimation for the Multiple Factor Model When Data Are Missing.

    Science.gov (United States)

    Finkbeiner, Carl

    1979-01-01

    A maximum likelihood method of estimating the parameters of the multiple factor model when data are missing from the sample is presented. A Monte Carlo study compares the method with five heuristic methods of dealing with the problem. The present method shows some advantage in accuracy of estimation. (Author/CTM)

  2. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  3. Inverse Gaussian model for small area estimation via Gibbs sampling

    African Journals Online (AJOL)

    We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...

  4. Person Appearance Modeling and Orientation Estimation using Spherical Harmonics

    NARCIS (Netherlands)

    Liem, M.C.; Gavrila, D.M.

    2013-01-01

    We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical

  5. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    A Monte Carlo study of the small-sample properties of five estimators of a linear model with autocorrelated error terms is discussed. The independent variable was specified as standard normal data. The estimates of the slope coefficients β with the help of Ordinary Least Squares (OLS), increased with increased ...

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Estimating the accuracy of muscle response testing: two randomised-order blinded studies.

    Science.gov (United States)

    Jensen, Anne M; Stevens, Richard J; Burls, Amanda J

    2016-11-30

    Manual muscle testing (MMT) is a non-invasive assessment tool used by a variety of health care providers to evaluate neuromusculoskeletal integrity, and muscular strength in particular. In one form of MMT called muscle response testing (MRT), muscles are said to be tested, not to evaluate muscular strength, but neural control. One established, but insufficiently validated, application of MRT is to assess a patient's response to semantic stimuli (e.g. spoken lies) during a therapy session. Our primary aim was to estimate the accuracy of MRT to distinguish false from true spoken statements, in randomised and blinded experiments. A secondary aim was to compare MRT accuracy to the accuracy when practitioners used only their intuition to differentiate false from true spoken statements. Two prospective studies of diagnostic test accuracy using MRT to detect lies are presented. A true positive MRT test was one that resulted in a subjective weakening of the muscle following a lie, and a true negative was one that did not result in a subjective weakening of the muscle following a truth. Experiment 2 replicated Experiment 1 using a simplified methodology. In Experiment 1, 48 practitioners were paired with 48 MRT-naïve test patients, forming unique practitioner-test patient pairs. Practitioners were enrolled with any amount of MRT experience. In Experiment 2, 20 unique pairs were enrolled, with test patients being a mix of MRT-naïve and not-MRT-naïve. The primary index test was MRT. A secondary index test was also enacted in which the practitioners made intuitive guesses ("intuition"), without using MRT. The actual verity of the spoken statement was compared to the outcome of both index tests (MRT and Intuition) and their mean overall fractions correct were calculated and reported as mean accuracies. In Experiment 1, MRT accuracy, 0.659 (95% CI 0.623 - 0.695), was found to be significantly different (p Experiment 2 replicated the findings of Experiment 1. Testing for

  8. Review Genetic prediction models and heritability estimates for ...

    African Journals Online (AJOL)

    edward

    2015-05-09

    May 9, 2015 ... cattle in South Africa. Linear models, random regression (RR) models, threshold models (TMs) and ... Heritability for longevity has been estimated with TMs in Canadian Holsteins (Boettcher et al., 1999), Spanish ... simulation to incorporate the tri-gamma function (γ) as used by Sasaki et al. (2012) and ...

  9. On mixture model complexity estimation for music recommender systems

    NARCIS (Netherlands)

    Balkema, W.; van der Heijden, Ferdinand; Meijerink, B.

    2006-01-01

    Content-based music navigation systems are in need of robust music similarity measures. Current similarity measures model each song with the same model parameters. We propose methods to efficiently estimate the required number of model parameters of each individual song. First results of a study on

  10. Parameter estimation of electricity spot models from futures prices

    NARCIS (Netherlands)

    Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.

    We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate

  11. Calculus for cognitive scientists higher order models and their analysis

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book offers a self-study program on how mathematics, computer science and science can be profitably and seamlessly intertwined. It focuses on two-variable ODE models, both linear and nonlinear, and highlights theoretical and computational tools, using MATLAB to explain their solutions. It also shows how to solve cable models using separation of variables and the Fourier series.

  12. Temporal Aggregation in First Order Cointegrated Vector Autoregressive models

    DEFF Research Database (Denmark)

    Milhøj, Anders; la Cour, Lisbeth Funding

    2011-01-01

    with the frequency of the data. We also introduce a graphical representation that will prove useful as an additional informational tool for deciding the appropriate cointegration rank of a model. In two examples based on models of time series of different grades of gasoline, we demonstrate the usefulness of our...

  13. Abnormal Waves Modelled as Second-order Conditional Waves

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2005-01-01

    , the water depth and the directional spreading on the conditional mean wave profile are presented. Application of conditional waves to model and explain abnormal waves, e.g. the well-known New Year Wave measured at the Draupner platform January 1st 1995, is discussed. Whereas the wave profile can be modelled...

  14. LQG controller designs from reduced order models for a launch ...

    Indian Academy of Sciences (India)

    This paper describes the effort of a multivariable control approach applied to the Geosynchronous Satellite Launch Vehicle (GSLV) of the Indian Space Research Organization (ISRO) during a certain stage of its launch. The fuel slosh dynamics are modelled using a pendulum model analogy. We describe two design ...

  15. GLUE Based Uncertainty Estimation of Urban Drainage Modeling Using Weather Radar Precipitation Estimates

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2011-01-01

    the uncertainty of the weather radar rainfall input. The main finding of this work is that the input uncertainty propagates through the urban drainage model with significant effects on the model result. The GLUE methodology is in general a usable way to explore this uncertainty, although the exact width......Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model to simulate the runoff from a small catchment in Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate
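    The GLUE procedure itself is generic and can be sketched independently of the radar data and drainage model used in the study. In the toy example below, a linear model stands in for the drainage model, Nash-Sutcliffe efficiency serves as the informal likelihood, and the behavioural threshold of 0.8 is arbitrary; all three choices are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(1.0, 10.0, 50)
y_obs = 2.0 * x + rng.normal(0.0, 1.0, x.size)   # synthetic "observed" runoff

# 1. Monte Carlo sampling of the model parameter from a wide prior range
a = rng.uniform(0.5, 4.0, 2000)
sims = a[:, None] * x[None, :]                   # toy model runs

# 2. informal likelihood: Nash-Sutcliffe efficiency of each run
sse = ((sims - y_obs) ** 2).sum(axis=1)
nse = 1.0 - sse / ((y_obs - y_obs.mean()) ** 2).sum()

# 3. keep behavioural runs and form normalized likelihood weights
keep = nse > 0.8
w = nse[keep] - 0.8
w = w / w.sum()

# 4. weighted 5-95% uncertainty bounds of the simulated output
lower, upper = np.empty(x.size), np.empty(x.size)
for j in range(x.size):
    order = np.argsort(sims[keep, j])
    vals = sims[keep, j][order]
    cdf = np.cumsum(w[order])
    lower[j] = vals[np.searchsorted(cdf, 0.05)]
    upper[j] = vals[np.searchsorted(cdf, 0.95)]
```

The width of the `lower`/`upper` band is exactly the quantity the abstract notes is sensitive to the choice of likelihood measure and threshold.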

  16. Higher-Order Hamiltonian Model for Unidirectional Water Waves

    Science.gov (United States)

    Bona, J. L.; Carvajal, X.; Panthee, M.; Scialom, M.

    2018-04-01

    Formally second-order correct mathematical descriptions of long-crested water waves propagating mainly in one direction are derived. These equations are analogous to the first-order approximations of KdV- or BBM-type. The advantage of these more complex equations is that their solutions corresponding to physically relevant initial perturbations of the rest state may be accurate on a much longer timescale. The initial value problem for the class of equations that emerges from our derivation is then considered. A local well-posedness theory is straightforwardly established by a contraction mapping argument. A subclass of these equations possesses a special Hamiltonian structure, which implies that the local theory can be continued indefinitely.
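    For orientation, the classical first-order unidirectional models the abstract refers to are, in normalized variables, the KdV and BBM equations; the second-order corrected equations derived in the paper add higher-order dispersive and nonlinear terms not reproduced here:

```latex
% KdV-type first-order model
u_t + u_x + u\,u_x + u_{xxx} = 0,
% BBM-type (regularized long wave) first-order model
u_t + u_x + u\,u_x - u_{xxt} = 0.
```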

  17. Order parameter model for unstable multilane traffic flow

    OpenAIRE

    Lubashevsky, Ihor A.; Mahnke, Reinhard

    1999-01-01

    We discuss a phenomenological approach to the description of unstable vehicle motion on multilane highways that explains in a simple way the observed sequence of phase transitions "free flow -> synchronized motion -> jam" as well as the hysteresis in the transition "free flow <-> synchronized motion". We introduce a new variable called order parameter that accounts for possible correlations in the vehicle motion at different lanes. So, it is principally due to the "many-body" effects in the ...

  18. Neutrino masses and their ordering: global data, priors and models

    Science.gov (United States)

    Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.

    2018-03-01

    We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering or Inverted Ordering, with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parameterization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the

  19. Parameter Estimation for the Thurstone Case III Model.

    Science.gov (United States)

    Mackay, David B.; Chaiy, Seoil

    1982-01-01

    The ability of three estimation criteria to recover parameters of the Thurstone Case V and Case III models from comparative judgment data was investigated via Monte Carlo techniques. Significant differences in recovery are shown to exist. (Author/JKS)

  20. Carbon footprint estimator, phase II : volume I - GASCAP model.

    Science.gov (United States)

    2014-03-01

    The GASCAP model was developed to provide a software tool for analysis of the life-cycle GHG emissions associated with the construction and maintenance of transportation projects. This phase of development included techniques for estimating emiss...

  1. Parameter estimation in stochastic rainfall-runoff models

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur

    2006-01-01

    A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all....... For a comparison the parameters are also estimated by an output error method, where the sum of squared simulation error is minimized. The former methodology is optimal for short-term prediction whereas the latter is optimal for simulations. Hence, depending on the purpose it is possible to select whether...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...

  2. Differential geometry of viscoelastic models with fractional-order derivatives

    International Nuclear Information System (INIS)

    Yajima, Takahiro; Nagahama, Hiroyuki

    2010-01-01

    Viscoelastic materials with memory effect are studied based on the fractional rheonomic geometry. The geometric objects are regarded as basic quantities of fractional viscoelastic models, i.e. the metric tensor and torsion tensor are interpreted as the strain and the fractional strain rate, respectively. The generalized viscoelastic equations are expressed by the geometric objects. Especially, the basic constitutive equations such as Voigt and Maxwell models can be derived geometrically from the generalized equation. This leads to the fact that various viscoelastic models can be unified into one geometric expression.

  3. Reduced Order Modeling of Combustion Instability in a Gas Turbine Model Combustor

    Science.gov (United States)

    Arnold-Medabalimi, Nicholas; Huang, Cheng; Duraisamy, Karthik

    2017-11-01

    Hydrocarbon fuel based propulsion systems are expected to remain relevant in aerospace vehicles for the foreseeable future. Design of these devices is complicated by combustion instabilities. The capability to model and predict these effects at reduced computational cost is a requirement for both design and control of these devices. This work focuses on computational studies on a dual swirl model gas turbine combustor in the context of reduced order model development. Full fidelity simulations are performed utilizing URANS and Hybrid RANS-LES with finite rate chemistry. Following this, data decomposition techniques are used to extract a reduced basis representation of the unsteady flow field. These bases are first used to identify sensor locations to guide experimental interrogations and controller feedback. Following this, initial results on developing a control-oriented reduced order model (ROM) will be presented. The capability of the ROM will be further assessed based on different operating conditions and geometric configurations.

  4. Stripe order from the perspective of the Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Devereaux, Thomas Peter

    2018-03-01

    A microscopic understanding of the strongly correlated physics of the cuprates must account for the translational and rotational symmetry breaking that is present across all cuprate families, commonly in the form of stripes. Here we investigate emergence of stripes in the Hubbard model, a minimal model believed to be relevant to the cuprate superconductors, using determinant quantum Monte Carlo (DQMC) simulations at finite temperatures and density matrix renormalization group (DMRG) ground state calculations. By varying temperature, doping, and model parameters, we characterize the extent of stripes throughout the phase diagram of the Hubbard model. Our results show that including the often neglected next-nearest-neighbor hopping leads to the absence of spin incommensurability upon electron-doping and nearly half-filled stripes upon hole-doping. The similarities of these findings to experimental results on both electron and hole-doped cuprate families support a unified description across a large portion of the cuprate phase diagram.

  5. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow lacking in lane discipline has been an emerging research area in the past few years. The two main challenges in modelling are capturing the effect of the varying size of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section of road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure for heterogeneous traffic lacking in lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy instead of density as the measure of traffic concentration. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.

  6. The Complexity of Model Checking Higher-Order Fixpoint Logic

    DEFF Research Database (Denmark)

    Axelsson, Roland; Lange, Martin; Somla, Rafal

    2007-01-01

    Higher Order Fixpoint Logic (HFL) is a hybrid of the simply typed λ-calculus and the modal μ-calculus. This makes it a highly expressive temporal logic that is capable of expressing various interesting correctness properties of programs that are not expressible in the modal μ-calculus. This paper...... of solving rather large parity games of small index. As a consequence of this we obtain an ExpTime upper bound on the expression complexity of each HFLk,m. The lower bound is established by a reduction from the word problem for alternating (k-1)-fold exponential space bounded Turing Machines. As a corollary...

  7. Car sharing demand estimation and urban transport demand modelling using stated preference techniques

    OpenAIRE

    Catalano, Mario; Lo Casto, Barbara; Migliore, Marco

    2008-01-01

    The research deals with the use of the stated preference technique (SP) and transport demand modelling to analyse travel mode choice behaviour for commuting urban trips in Palermo, Italy. The principal aim of the study was the calibration of a demand model to forecast the modal split of the urban transport demand, allowing for the possibility of using innovative transport systems like car sharing and car pooling. In order to estimate the demand model parameters, a specific survey was carried ...

  8. Estimation and prediction under local volatility jump-diffusion model

    Science.gov (United States)

    Kim, Namhyoung; Lee, Younhee

    2018-02-01

    Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, the value of the option is evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.

  9. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
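    The core difference-based idea for the residual variance can be shown in a few lines. The sketch below uses the classical first-order (Rice-type) difference sequence, the simplest member of the family the paper generalizes, not the optimized sequences it proposes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = np.linspace(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.5, n)   # smooth signal + noise

# first differences remove the smooth trend; each squared difference has
# expectation ~ 2*sigma^2, giving the Rice estimator of the residual variance
sigma2_hat = np.sum(np.diff(y) ** 2) / (2.0 * (n - 1))
```

With noise standard deviation 0.5, the estimate should land close to the true variance of 0.25 without ever fitting the nonparametric component.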

  10. First and second order semi-Markov chains for wind speed modeling

    Science.gov (United States)

    Prattico, F.; Petroni, F.; D'Amico, G.

    2012-04-01

The increasing interest in renewable energy leads scientific research to find better ways to recover most of the available energy. In particular, the maximum energy recoverable from wind is equal to 59.3% of that available (the Betz law), reached at a specific pitch angle and when the ratio between the output and input wind speed equals 1/3. The pitch angle is the angle formed between the airfoil of the turbine blade and the wind direction. Older turbines, and many of those currently marketed, have a fixed, invariant airfoil geometry, so they operate at an efficiency lower than 59.3%. New-generation wind turbines, instead, have a system to vary the pitch angle by rotating the blades. This system enables the turbines to recover the maximum energy at different wind speeds, working at the Betz limit across different speed ratios. A powerful pitch-angle control system allows the wind turbine to recover energy more effectively in the transient regime. A good stochastic model of wind speed is therefore needed, both to support the optimization of turbine design and to help the control system predict the wind speed so that the blades can be positioned quickly and correctly. The possibility of generating synthetic wind-speed data is also a powerful instrument for verifying the structures of wind turbines and for estimating the energy recoverable from a specific site. To generate synthetic data, Markov chains of first or higher order are often used [1,2,3]. In particular, [3] presents a comparison between a first-order and a second-order Markov chain. A similar study, limited to the first-order case, is conducted in [2], which presents the transition probability matrix and compares the energy spectral density and autocorrelation of real and synthetic wind-speed data. 
An attempt to jointly model wind speed and direction is presented in [1], using two models, first-order
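The first-order Markov-chain synthesis referenced in [2,3] can be sketched as follows. This is an illustrative reconstruction, not the authors' code: wind speeds are assumed already binned into a small number of discrete states, and the observed sequence here is a random stand-in.

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    """Estimate the first-order transition probability matrix from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    # Normalize each row; fall back to a uniform row for unvisited states
    row_sums = counts.sum(axis=1, keepdims=True)
    with np.errstate(invalid="ignore"):
        P = np.where(row_sums > 0, counts / row_sums, 1.0 / n_states)
    return P

def synthesize(P, start, length, rng):
    """Sample a synthetic state sequence from transition matrix P."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return out

rng = np.random.default_rng(0)
observed = rng.integers(0, 5, size=1000)   # stand-in for binned wind-speed data
P = fit_transition_matrix(observed, 5)
synthetic = synthesize(P, int(observed[0]), 200, rng)
```

A second-order chain follows the same pattern with states defined as pairs of consecutive bins, at the cost of a much larger transition matrix.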

  11. A new method to estimate parameters of linear compartmental models using artificial neural networks

    International Nuclear Information System (INIS)

    Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.

    1998-01-01

At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
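A minimal sketch of the idea, assuming a single hidden layer and sampled monoexponential curves A·exp(-kt) as inputs (the paper's actual architectures and training data are not reproduced here): the network learns to map a kinetic curve directly to its parameters in one forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 20)

def make_batch(n):
    """Simulated monoexponential curves as inputs, their (A, k) as targets."""
    A = rng.uniform(0.5, 2.0, n)
    k = rng.uniform(0.2, 1.5, n)
    X = A[:, None] * np.exp(-k[:, None] * t)
    Y = np.column_stack([A, k])
    return X, Y

# One hidden layer with tanh activation, trained by plain gradient descent
W1 = rng.normal(0, 0.1, (20, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 2));  b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

X, Y = make_batch(2000)
losses, lr = [], 0.05
for _ in range(500):
    H, pred = forward(X)
    err = pred - Y
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared error through both layers
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, estimating parameters for a new curve costs only a matrix multiply, which is the source of the speed advantage over repeated iterative regression.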

  12. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  13. Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects

    Directory of Open Access Journals (Sweden)

    Yi Hu

    2014-01-01

Full Text Available This paper suggests a generalized method of moments (GMM) based estimation for dynamic panel data models with individual specific fixed effects and threshold effects simultaneously. We extend Hansen's (Hansen, 1999) original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the problem of endogeneity of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991) are valid. The threshold and slope parameters are estimated by GMM, and the asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations. It shows that the threshold and slope parameters can be estimated accurately and that the finite sample distribution of the slope parameters is well approximated by the asymptotic distribution.

  14. Models for estimating macronutrients in Mimosa scabrella Bentham

    Directory of Open Access Journals (Sweden)

    Saulo Jorge Téo

    2010-09-01

Full Text Available The aim of this work was to adjust and test different statistical models for estimating macronutrient content in the above-ground biomass of bracatinga (Mimosa scabrella Bentham). The data were collected from 25 bracatinga trees, all native to the north of the metropolitan region of Curitiba, Paraná state, Brazil. To determine the biomass and macronutrient content, the trees were separated into the compartments leaves, branches < 4 cm, branches ≥ 4 cm, wood and stem barks. Different statistical models were adjusted to estimate N, P, K, Ca and Mg contents in the tree compartments, using dendrometric variables as the model independent variables. Based on the results, the equations developed for estimating macronutrient contents were, in general, satisfactory. The most accurate estimates were obtained for the stem biomass compartments and the sum of the biomass compartments. In some cases, the equations had a better performance when crown and stem dimensions, age and dominant height were included as independent variables.

  15. Multilevel Higher-Order Item Response Theory Models

    Science.gov (United States)

    Huang, Hung-Yu; Wang, Wen-Chung

    2014-01-01

    In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…

  16. Proposed higher order continuum-based models for an elastic ...

    African Journals Online (AJOL)

    Three new variants of continuum-based models for an elastic subgrade are proposed. The subgrade is idealized as a homogenous, isotropic elastic layer of thickness H overlying a firm stratum. All components of the stress tensor in the subgrade are taken into account. Reasonable assumptions are made regarding the ...

  17. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

To obtain the proton structure function in the valon model with respect to the Laguerre polynomials, one needs an elegant and fast numerical method at LO up to NLO. Therefore, we concentrate on the Laguerre polynomials in our determinations. In the Laguerre method [13,14], the Laguerre polynomials are defined as.

  18. A generalized cellular automata approach to modeling first order ...

    Indian Academy of Sciences (India)

Keywords: cellular automata; enzyme kinetics; extended von Neumann neighborhood. Over the past two decades, there has been a significant growth in the use of computer-generated models to study dynamic phenomena in biochemical systems (Kier et al 2005). The need to include greater details about biochemical ...

  19. Improving the realism of hydrologic model through multivariate parameter estimation

    Science.gov (United States)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

Increased availability and quality of near real-time observations should improve understanding of the predictive skills of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed with the inclusion or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. 
Res., 52, http://dx.doi.org/10

  20. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

Full Text Available A novel mixed higher order regularizer involving the first and second degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to the equivalent formulation of the proposed regularizer, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
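PSNR, the figure of merit used in such comparisons, follows the standard definition 10·log10(peak²/MSE); a minimal implementation for 8-bit images (this is the textbook formula, not code from the paper):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(restored, float)) ** 2)
    # Identical images have zero MSE, so PSNR is infinite by convention
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 1.0   # every pixel off by 1, so MSE = 1
```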

  1. Optimization of power rationing order based on fuzzy evaluation model

    Science.gov (United States)

    Zhang, Siyuan; Liu, Li; Xie, Peiyuan; Tang, Jihong; Wang, Canlin

    2018-04-01

With the development of production and economic growth, China's electricity load has increased significantly. Over the years, in order to alleviate the power shortage, China has adopted a series of policies and measures to speed up electric power construction, which has promoted the rapid development of the power industry and led to great achievements in power construction. After large-scale construction of power facilities, the grid's long-term power shortage has improved to some extent, but within a given period power development remains uneven. On the whole, supply is still insufficient, and the power-restriction situation is still severe in some areas, so it is necessary to study the power rationing order.

  2. A new model for estimating total body water from bioelectrical resistance

    Science.gov (United States)

    Siconolfi, S. F.; Kear, K. T.

    1992-01-01

Estimation of total body water (T) from bioelectrical resistance (R) is commonly done by stepwise regression models with height squared over R, H(exp 2)/R, age, sex, and weight (W). Polynomials of H(exp 2)/R have not been included in these models. We examined the validity of a model with third order polynomials and W. Methods: T was measured with oxygen-18 labeled water in 27 subjects. R at 50 kHz was obtained from electrodes placed on the hand and foot while subjects were in the supine position. A stepwise regression equation was developed with 13 subjects (age 31.5 plus or minus 6.2 years, T 38.2 plus or minus 6.6 L, W 65.2 plus or minus 12.0 kg). Correlations, standard errors of estimate and mean differences were computed between T and estimated T's from the new (N) model and other models. Evaluations were completed with the remaining 14 subjects (age 32.4 plus or minus 6.3 years, T 40.3 plus or minus 8 L, W 70.2 plus or minus 12.3 kg) and two of its subgroups (high and low). Results: A regression equation was developed from the model. The only significant mean difference was between T and one of the earlier models. Conclusion: Third order polynomials in regression models may increase the accuracy of estimating total body water. Evaluating the model with a larger population is needed.
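The model form described, T regressed on first- through third-order polynomials of H²/R plus W, can be sketched with ordinary least squares. The data and coefficients below are simulated for illustration, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 27
H2_over_R = rng.uniform(30, 80, n)   # height^2 / resistance (illustrative units)
W = rng.uniform(50, 90, n)           # body weight, kg
# Simulated "true" total body water in liters, with measurement noise
T = 0.4 * H2_over_R + 0.15 * W + rng.normal(0, 0.5, n)

# Design matrix: intercept, (H^2/R), (H^2/R)^2, (H^2/R)^3, and W
X = np.column_stack([np.ones(n), H2_over_R, H2_over_R ** 2, H2_over_R ** 3, W])
coef, *_ = np.linalg.lstsq(X, T, rcond=None)
T_hat = X @ coef
# Standard error of estimate, the accuracy measure reported in the abstract
see = float(np.sqrt(np.sum((T - T_hat) ** 2) / (n - X.shape[1])))
```

In a full stepwise procedure, terms would be entered or dropped by significance; here all candidate terms are simply fitted at once.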

  3. Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution

    Directory of Open Access Journals (Sweden)

    Emmanuel Kidando

    2017-01-01

    Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. Literature review indicated that the finite multistate modeling of travel time using lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM with stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC sampling technique was employed to estimate the parameters’ posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
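The stick-breaking representation mentioned above constructs component weights by repeatedly breaking off Beta-distributed fractions of a unit "stick"; a sketch truncated at six components, as in the study (the concentration parameter value here is an arbitrary choice for illustration):

```python
import numpy as np

def stick_breaking(alpha, n_components, rng):
    """Truncated stick-breaking weights for a Dirichlet process mixture."""
    betas = rng.beta(1.0, alpha, n_components)
    # Fraction of the stick remaining before each break
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    # Assign the leftover mass to the last component so weights sum to 1
    weights[-1] += 1.0 - weights.sum()
    return weights

rng = np.random.default_rng(3)
w = stick_breaking(alpha=1.0, n_components=6, rng=rng)
```

Because later weights shrink geometrically, the mixture effectively "chooses" how many components it needs, which is the flexibility the abstract refers to.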

  4. Bayesian estimation of parameters in a regional hydrological model

    Directory of Open Access Journals (Sweden)

    K. Engeland

    2002-01-01

Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters, and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes, and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis

  5. Wall Shear Stress Distribution in a Patient-Specific Cerebral Aneurysm Model using Reduced Order Modeling

    Science.gov (United States)

    Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya

    2016-11-01

    We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based patient-specific aneurysms models. The magnitude of WSS has been shown to be a critical factor in growth and rupture of human aneurysms. We start the process by running a training case using Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters, such that these parameters cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases using the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters computationally very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-compute the simulation for small changes in the system parameters.
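Snapshot POD reduces, in essence, to an SVD of the snapshot matrix; a minimal sketch on synthetic data (the flow fields and the 99% energy threshold below are illustrative stand-ins, not the aneurysm model's):

```python
import numpy as np

rng = np.random.default_rng(4)
n_points, n_snapshots = 500, 40
x = np.linspace(0, 1, n_points)
times = np.linspace(0, 1, n_snapshots)
# Synthetic snapshot matrix dominated by two coherent structures plus noise
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * times))
             + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(2 * np.pi * times))
             + 0.01 * rng.normal(size=(n_points, n_snapshots)))

# Snapshot POD: left singular vectors are the spatial modes,
# squared singular values give each mode's energy content
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% of the energy
basis = U[:, :r]
reduced = basis.T @ snapshots                 # low-dimensional ROM coordinates
reconstruction = basis @ reduced
```

The same basis, built from a training CFD run, can then be reused to evaluate quantities such as WSS across nearby parameter values without re-running the full simulation.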

  6. Methods to assess performance of models estimating risk of death in intensive care patients: a review.

    Science.gov (United States)

    Cook, D A

    2006-04-01

    Models that estimate the probability of death of intensive care unit patients can be used to stratify patients according to the severity of their condition and to control for casemix and severity of illness. These models have been used for risk adjustment in quality monitoring, administration, management and research and as an aid to clinical decision making. Models such as the Mortality Prediction Model family, SAPS II, APACHE II, APACHE III and the organ system failure models provide estimates of the probability of in-hospital death of ICU patients. This review examines methods to assess the performance of these models. The key attributes of a model are discrimination (the accuracy of the ranking in order of probability of death) and calibration (the extent to which the model's prediction of probability of death reflects the true risk of death). These attributes should be assessed in existing models that predict the probability of patient mortality, and in any subsequent model that is developed for the purposes of estimating these probabilities. The literature contains a range of approaches for assessment which are reviewed and a survey of the methodologies used in studies of intensive care mortality models is presented. The systematic approach used by Standards for Reporting Diagnostic Accuracy provides a framework to incorporate these theoretical considerations of model assessment and recommendations are made for evaluation and presentation of the performance of models that estimate the probability of death of intensive care patients.
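The two attributes can be illustrated with toy code: a concordance (AUC-type) statistic for discrimination and a binned predicted-versus-observed table for calibration. The simulated "well-calibrated model" below is an arbitrary stand-in, not any of the ICU models named above.

```python
import numpy as np

def concordance(probs, outcomes):
    """Fraction of (death, survivor) pairs ranked correctly by the model."""
    pos = probs[outcomes == 1]
    neg = probs[outcomes == 0]
    diffs = pos[:, None] - neg[None, :]
    return float(((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / diffs.size)

def calibration_table(probs, outcomes, n_bins=5):
    """Mean predicted vs. observed mortality within probability bins."""
    edges = np.quantile(probs, np.linspace(0, 1, n_bins + 1))
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs <= hi)
        if mask.any():
            rows.append((float(probs[mask].mean()), float(outcomes[mask].mean())))
    return rows

rng = np.random.default_rng(5)
probs = rng.uniform(0, 1, 1000)                            # predicted risks
outcomes = (rng.uniform(0, 1, 1000) < probs).astype(int)   # perfectly calibrated
auc = concordance(probs, outcomes)
table = calibration_table(probs, outcomes)
```

A well-calibrated model need not discriminate well, and vice versa, which is why both attributes are assessed separately.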

  7. Computer modeling: from sports to spaceflight ... from order to chaos.

    Science.gov (United States)

    Danby, J. M. A.

    In this volume the problems of concern are those that can be modeled by systems of ordinary differential equations and cannot be solved by mathematical formulae. The subjects covered are diverse and include chaotic systems; population growth and ecology; sickness and health; competition and economics; sports; travel and reaction; space travel and astronomy; pendulums; springs; chemical and other reacting systems. Accompanying the book is software on a CD-ROM which includes over 50 projects from this book.

  8. LQG controller designs from reduced order models for a launch ...

    Indian Academy of Sciences (India)

Performing the indicated operations, assuming τpi and its derivative τ̇pi are small quantities and noting that .... Sensors in the rate path: GR(s) = ω2² / (s² + 2ξ2ω2s + ω2²), where ω2 = 12.5 Hz = 78.54 rad/sec and ξ2 = 0.7. 3. Formulation of the state space model. The vehicle dynamics including rigid body, slosh and actuators can be written ...

  9. Surface-source modeling and estimation using biomagnetic measurements.

    Science.gov (United States)

    Yetik, Imam Samil; Nehorai, Arye; Muravchik, Carlos H; Haueisen, Jens; Eiselt, Michael

    2006-10-01

    We propose a number of electric source models that are spatially distributed on an unknown surface for biomagnetism. These can be useful to model, e.g., patches of electrical activity on the cortex. We use a realistic head (or another organ) model and discuss the special case of a spherical head model with radial sensors resulting in more efficient computations of the estimates for magnetoencephalography. We derive forward solutions, maximum likelihood (ML) estimates, and Cramér-Rao bound (CRB) expressions for the unknown source parameters. A model selection method is applied to decide on the most appropriate model. We also present numerical examples to compare the performances and computational costs of the different models and illustrate when it is possible to distinguish between surface and focal sources or line sources. Finally, we apply our methods to real biomagnetic data of phantom human torso and demonstrate the applicability of them.

  10. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  11. Estimation of shape model parameters for 3D surfaces

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Darkner, Sune; Fripp, Jurgen

    2008-01-01

Statistical shape models are widely used as a compact way of representing shape variation. Fitting a shape model to unseen data enables characterizing the data in terms of the model parameters. In this paper a Gauss-Newton optimization scheme is proposed to estimate shape model parameters of 3D surfaces using distance maps, which enables the estimation of model parameters without the requirement of point correspondence. For applications with acquisition limitations such as speed and cost, this formulation enables the fitting of a statistical shape model to arbitrarily sampled data. The method is applied to a database of 3D surfaces from a section of the porcine pelvic bone extracted from 33 CT scans. A leave-one-out validation shows that the parameters of the first 3 modes of the shape model can be predicted with a mean difference within [-0.01,0.02] from the true mean, with a standard deviation ...

  12. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.

  13. [Using log-binomial model for estimating the prevalence ratio].

    Science.gov (United States)

    Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue

    2010-05-01

To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the existence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to a higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to better measure the association than the odds ratio when prevalence is high. SAS programs were provided to calculate the prevalence ratios with or without continuous covariates in the log-binomial regression analysis.
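The divergence between the two measures at high prevalence can be seen in a worked 2x2 example (the counts are made up for illustration):

```python
# Hypothetical 2x2 table: exposure vs. a common outcome
exposed_cases, exposed_total = 80, 100       # prevalence among exposed: 0.80
unexposed_cases, unexposed_total = 50, 100   # prevalence among unexposed: 0.50

p1 = exposed_cases / exposed_total
p0 = unexposed_cases / unexposed_total

prevalence_ratio = p1 / p0                            # 0.80 / 0.50 = 1.6
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))        # (4.0) / (1.0) = 4.0
```

When the outcome is rare, the odds ratio approximates the prevalence ratio; here, with a common outcome, it overstates the association by a factor of 2.5, which is the abstract's central point.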

  14. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  15. In vitro biological models in order to study BNCT

    International Nuclear Information System (INIS)

    Dagrosa, Maria A.; Kreimann, Erica L.; Schwint, Amanda E.; Juvenal, Guillermo J.; Pisarev, Mario A.; Farias, Silvia S.; Garavaglia, Ricardo N.; Batistoni, Daniel A.

    1999-01-01

Undifferentiated thyroid carcinoma (UTC) lacks an effective treatment. Boron neutron capture therapy (BNCT) is based on the selective uptake of 10B-boronated compounds by some tumours, followed by irradiation with an appropriate neutron beam. The resulting boron nucleus (11B) decays, releasing 7Li, gamma rays and alpha particles, and the latter destroy the tumour. In order to explore the possibility of applying BNCT to UTC, we have studied the biodistribution of BPA. In vitro studies: the uptake of p-10B-borophenylalanine (BPA) by the UTC cell line ARO, primary cultures of normal bovine thyroid cells (BT) and human follicular adenoma (FA) thyroid cells was studied. No difference in BPA uptake was observed between proliferating and quiescent ARO cells. The uptake by quiescent ARO, BT and FA cells showed that the ARO/BT and ARO/FA ratios were 4 and 5, respectively (p < 0.001). The present experimental results open the possibility of applying BNCT for the treatment of UTC. (author)

  16. Procedures for parameter estimates of computational models for localized failure

    NARCIS (Netherlands)

    Iacono, C.

    2007-01-01

    In the last years, many computational models have been developed for tensile fracture in concrete. However, their reliability is related to the correct estimate of the model parameters, not all directly measurable during laboratory tests. Hence, the development of inverse procedures is needed, that

  17. Estimation of pure autoregressive vector models for revenue series ...

    African Journals Online (AJOL)

    This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...

  18. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  19. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.

  20. Bases for the Creation of Electric Energy Price Estimate Model

    International Nuclear Information System (INIS)

    Toljan, I.; Klepo, M.

    1995-01-01

The paper presents the basic principles for the creation and introduction of a new model for electric energy price estimation and its significant influence on the functioning of the tariff system. It also reviews the presently used price estimation model, which is based on objectivized values of electric energy plants and of production, transmission and distribution facilities, and proposes changes that would bring functional and organizational improvements to the electric energy system, the most complex subsystem of the whole power system. The model rests on a substantial and functional connection of the optimization and analysis system with economic dispatching of electric energy, including marginal cost estimation and its influence on the tariff system as the main means of improving the quality of the electric energy system's functioning. (author). 10 refs., 2 figs

  1. Comparison of first-order-decay modeled and actual field measured municipal solid waste landfill methane data.

    Science.gov (United States)

    Amini, Hamid R; Reinhart, Debra R; Niskanen, Antti

    2013-12-01

    The first-order decay (FOD) model is widely used to estimate landfill gas generation for emissions inventories, life cycle assessments, and regulation. The FOD model has inherent uncertainty due to underlying uncertainty in model parameters and a lack of opportunities to validate it with complete field-scale landfill data sets. The objectives of this paper were to estimate methane generation, fugitive methane emissions, and aggregated collection efficiency for landfills through a mass balance approach using the FOD model for gas generation coupled with literature values for cover-specific collection efficiency and methane oxidation. This study is unique and valuable because actual field data were used in comparison with modeled data. The magnitude and variation of emissions were estimated for three landfills using site-specific model parameters and gas collection data, and compared to vertical radial plume mapping emissions measurements. For the three landfills, the modeling approach slightly under-predicted measured emissions and over-estimated aggregated collection efficiency, but the two approaches yielded statistically equivalent uncertainties expressed as coefficients of variation. Sources of uncertainty include challenges in large-scale field measurement of emissions and spatial and temporal fluctuations in methane flow balance components (generated, collected, oxidized, and emitted methane). Additional publication of sets of field-scale measurement data and methane flow balance components will reduce the uncertainty in future estimates of fugitive emissions. Copyright © 2013 Elsevier Ltd. All rights reserved.
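
    As a minimal sketch of the FOD gas model discussed above, in its LandGEM-style form; `K` and `L0` below are typical default values, not the site-specific parameters used in the study:

```python
import math

K = 0.05    # decay rate constant, 1/yr (assumed typical default)
L0 = 100.0  # methane generation potential, m^3 CH4 per Mg waste (assumed)

def methane_generation(annual_waste_mg, year):
    """CH4 generation rate (m^3/yr) in `year` from yearly waste placements.

    Each year's waste mass decays exponentially from its placement year:
    Q(t) = sum_i k * L0 * M_i * exp(-k * (t - t_i)).
    """
    q = 0.0
    for placement_year, mass in enumerate(annual_waste_mg):
        age = year - placement_year
        if age >= 0:
            q += K * L0 * mass * math.exp(-K * age)
    return q

# 10 years of constant filling at 10,000 Mg/yr, evaluated at year 10
q10 = methane_generation([10_000.0] * 10, year=10)
```

    Collected, oxidized, and emitted methane are then obtained from this generation estimate via the cover-specific collection efficiency and oxidation fractions mentioned above.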

  2. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four

  3. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and records the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to

  4. Quantization of second-order Lagrangians: Model problem

    Science.gov (United States)

    Moore, R. A.; Scott, T. C.

    1991-08-01

    Many aspects of a model problem, the Lagrangian of which contains a term depending quadratically on the acceleration, are examined in the regime where the classical solution consists of two independent normal modes. It is shown that the techniques of conversion to a problem of Lagrange, generalized mechanics, and Dirac's method for constrained systems all yield the same canonical form for the Hamiltonian. It is also seen that the resultant canonical equations of motion are equivalent to the Euler-Lagrange equations. In canonical form, all of the standard results apply, quantization follows in the usual way, and the interpretation of the results is straightforward. It is also demonstrated that perturbative methods fail, both classically and quantum mechanically, indicating the need for the nonperturbative techniques applied herein. Finally, it is noted that this result may have fundamental implications for certain relativistic theories.
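
    The abstract does not reproduce the model Lagrangian itself; as a sketch of the generalized-mechanics (Ostrogradski-type) construction it alludes to, a second-order Lagrangian L(q, q̇, q̈) with nondegenerate ∂²L/∂q̈² is brought to canonical form by

```latex
% Ostrogradski construction for a generic L(q, \dot q, \ddot q)
% (illustrative; not the paper's specific model Lagrangian)
\begin{aligned}
  Q_1 &= q, \qquad Q_2 = \dot q, \\
  P_1 &= \frac{\partial L}{\partial \dot q}
        - \frac{d}{dt}\,\frac{\partial L}{\partial \ddot q}, \qquad
  P_2 = \frac{\partial L}{\partial \ddot q}, \\
  H &= P_1 Q_2 + P_2\,\ddot q(Q_1, Q_2, P_2) - L ,
\end{aligned}
```

    after which quantization proceeds canonically on the pairs (Q_i, P_i), consistent with the paper's finding that the Lagrange, generalized-mechanics, and Dirac routes agree.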

  5. Application of Parameter Estimation for Diffusions and Mixture Models

    DEFF Research Database (Denmark)

    Nolsøe, Kim

The first part of this thesis proposes a method to determine the preferred number of structures, their proportions and the corresponding geometrical shapes of an m-membered ring molecule. This is obtained by formulating a statistical model for the data and constructing an algorithm which samples… error models. This is obtained by constructing an estimating function through projections of some chosen function of Yti+1 onto functions of previous observations Yti, …, Yt0. The process of interest Xti+1 is partially observed through a measurement equation Yti+1 = h(Xti+1) + noise, where h(·) is restricted to be a polynomial. Through a simulation study we compare, for the CIR process, the obtained estimator with an estimator derived from utilizing the extended Kalman filter. The simulation study shows that the two estimation methods perform equally well.

  6. The problematic estimation of "imitation effects" in multilevel models

    Directory of Open Access Journals (Sweden)

    2003-09-01

    Full Text Available It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.
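
    The endogeneity problem described above can be illustrated with a small simulation (all settings hypothetical): the data are generated with no imitation effect at all, yet regressing the outcome on the cluster mean of the outcome produces a large spurious "effect".

```python
import random
import statistics

random.seed(1)
rows = []  # (cluster mean of y, y)
for cluster in range(200):
    u = random.gauss(0.0, 1.0)  # cluster random effect, NOT imitation
    ys = [2.0 + u + random.gauss(0.0, 1.0) for _ in range(20)]
    m = statistics.mean(ys)
    rows += [(m, y) for y in ys]

xs, ys_all = zip(*rows)
mx, my = statistics.mean(xs), statistics.mean(ys_all)
slope = sum((x - mx) * (y - my) for x, y in rows) / \
        sum((x - mx) ** 2 for x in xs)
# The true imitation effect is 0, yet the estimated slope comes out at ~1,
# because the aggregated dependent variable is endogenous by construction.
```

    This mirrors the paper's point: the aggregate of the dependent variable on the right-hand side soaks up the cluster-level variation regardless of whether any imitation occurs.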

  7. Development on electromagnetic impedance function modeling and its estimation

    International Nuclear Information System (INIS)

    Sutarno, D.

    2015-01-01

Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances has been developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition

  8. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances has been developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied to all measurement zones, including near-, transition

  9. Models and estimation methods for clinical HIV-1 data

    Science.gov (United States)

    Verotta, Davide

    2005-12-01

    Clinical HIV-1 data include many individual factors, such as compliance to treatment, pharmacokinetics, variability in respect to viral dynamics, race, sex, income, etc., which might directly influence or be associated with clinical outcome. These factors need to be taken into account to achieve a better understanding of clinical outcome and mathematical models can provide a unifying framework to do so. The first objective of this paper is to demonstrate the development of comprehensive HIV-1 dynamics models that describe viral dynamics and also incorporate different factors influencing such dynamics. The second objective of this paper is to describe alternative estimation methods that can be applied to the analysis of data with such models. In particular, we consider: (i) simple but effective two-stage estimation methods, in which data from each patient are analyzed separately and summary statistics derived from the results, (ii) more complex nonlinear mixed effect models, used to pool all the patient data in a single analysis. Bayesian estimation methods are also considered, in particular: (iii) maximum posterior approximations, MAP, and (iv) Markov chain Monte Carlo, MCMC. Bayesian methods incorporate prior knowledge into the models, thus avoiding some of the model simplifications introduced when the data are analyzed using two-stage methods, or a nonlinear mixed effect framework. We demonstrate the development of the models and the different estimation methods using real AIDS clinical trial data involving patients receiving multiple drugs regimens.

  10. Asymptotic distribution theory for break point estimators in models estimated via 2SLS

    NARCIS (Netherlands)

    Boldea, O.; Hall, A.R.; Han, S.

    2012-01-01

    In this paper, we present a limiting distribution theory for the break point estimator in a linear regression model with multiple structural breaks obtained by minimizing a Two Stage Least Squares (2SLS) objective function. Our analysis covers both the case in which the reduced form for the

  11. The Meaning of Higher-Order Factors in Reflective-Measurement Models

    Science.gov (United States)

    Eid, Michael; Koch, Tobias

    2014-01-01

    Higher-order factor analysis is a widely used approach for analyzing the structure of a multidimensional test. Whenever first-order factors are correlated researchers are tempted to apply a higher-order factor model. But is this reasonable? What do the higher-order factors measure? What is their meaning? Willoughby, Holochwost, Blanton, and Blair…

  12. Advanced empirical estimate of information value for credit scoring models

    Directory of Open Access Journals (Sweden)

    Martin Řezáč

    2011-01-01

Full Text Available Credit scoring is a term for a wide spectrum of predictive models and their underlying techniques that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint is required to be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows one to estimate the unknown densities and consequently, using some numerical method for integration, to estimate the Information value. The main contribution of this paper is the proposal and description of an empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows high dependency on the choice of the parameter k: if we choose too small a value, we get an overestimated Information value, and vice versa. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
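
    The decile-based empirical estimate described above can be sketched as follows; the bin counts are hypothetical, and the nonzero-bin constraint is enforced explicitly:

```python
import math

def information_value(good_counts, bad_counts):
    """Empirical Information Value from binned score counts.

    IV = sum_i (g_i/G - b_i/B) * ln((g_i/G) / (b_i/B)); every bin must
    contain at least one good and one bad client, otherwise IV diverges.
    """
    G, B = sum(good_counts), sum(bad_counts)
    iv = 0.0
    for g, b in zip(good_counts, bad_counts):
        if g == 0 or b == 0:
            raise ValueError("each bin needs nonzero goods and bads")
        pg, pb = g / G, b / B
        iv += (pg - pb) * math.log(pg / pb)
    return iv

# Hypothetical decile counts for a strongly predictive scorecard
iv = information_value(
    good_counts=[20, 35, 50, 70, 90, 110, 130, 150, 170, 175],
    bad_counts=[60, 40, 30, 25, 20, 15, 10, 8, 7, 5],
)
```

    The supervised-interval variant proposed in the paper would choose the bin boundaries so that each bin holds at least k goods and k bads instead of using fixed deciles.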

  13. Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
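
    The yearly fuel cost estimate the Guide supports is simple arithmetic; the annual mileage, fuel economy, and fuel price below are hypothetical:

```python
def yearly_fuel_cost(miles_per_year, mpg, price_per_gallon):
    """Estimated annual fuel cost = gallons used per year * price per gallon."""
    return miles_per_year / mpg * price_per_gallon

cost = yearly_fuel_cost(15_000, 30.0, 3.50)  # -> 1750.0
```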

  14. Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  15. Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-12-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  16. Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2016-11-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  17. Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-12-07

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.

  18. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  19. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all

  20. Estimation of a respiratory signal from a single-lead ECG using the 4th order central moments

    Directory of Open Access Journals (Sweden)

    Schmidt Marcus

    2015-09-01

Full Text Available For a variety of clinical applications like magnetic resonance imaging (MRI), the monitoring of vital signs is a common standard in clinical daily routine. Besides the electrocardiogram (ECG), respiratory activity is an important vital parameter and might reveal pathological changes. Thoracic movement and the resulting impedance change between ECG electrodes enable the estimation of the respiratory signal from the ECG. This ECG-derived respiration (EDR) can be used to calculate the breathing rate without the need for additional devices or monitoring modules. In this paper a new method is presented to estimate the respiratory signal from a single-lead ECG. The 4th order central moment was used to estimate the EDR signal, exploiting the change of the R-wave slopes induced by respiration. This method was compared with two other approaches by analyzing the Fantasia database from www.physionet.org. Furthermore, the ECG signals of 24 healthy subjects placed in a 3 T MR scanner were acquired.
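
    A sketch of the moment computation underlying the method; the per-beat windowing around R peaks is a plausible reading of the approach, not the paper's exact implementation:

```python
def fourth_central_moment(x):
    """4th order central moment: m4 = mean((x - mean(x)) ** 4)."""
    mu = sum(x) / len(x)
    return sum((v - mu) ** 4 for v in x) / len(x)

def edr_series(ecg, r_peaks, half_window=5):
    """One m4 value per detected R peak, over a short window around it.

    Respiration modulates the R-wave slopes, so the slow oscillation in
    this per-beat series serves as the EDR signal (assumed windowing).
    """
    return [
        fourth_central_moment(ecg[max(0, r - half_window): r + half_window + 1])
        for r in r_peaks
    ]
```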

  1. Comparison of regression models for estimation of isometric wrist joint torques using surface electromyography

    Directory of Open Access Journals (Sweden)

    Menon Carlo

    2011-09-01

Full Text Available Abstract Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours after the completion of the first data-gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on the accuracy of the models. Acquired SEMG signals were filtered, rectified, normalized and then fed to the models for training. Results It was shown that mean adjusted coefficient of determination (Ra²) values decrease by 20-35% for the different models after one hour, while altering arm posture decreased mean Ra² values by 64-74% for the different models. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times.
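
    The OLS mapping favored in the conclusions can be illustrated with a generic sketch on synthetic stand-in data; the channel count matches the study's eight forearm muscles, but everything else (weights, noise, sample size) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for processed (filtered/rectified/normalized) SEMG envelopes:
# 500 samples x 8 channels, with a torque target that is, by construction,
# a linear mix of the channels plus noise.
X = rng.uniform(0.0, 1.0, size=(500, 8))
true_w = np.array([1.5, -0.4, 0.8, 0.0, 2.1, -1.0, 0.3, 0.6])
torque = X @ true_w + rng.normal(0.0, 0.05, size=500)

# OLS with an intercept term: w = argmin || A w - torque ||^2
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, torque, rcond=None)

pred = A @ w
ss_res = np.sum((torque - pred) ** 2)
ss_tot = np.sum((torque - torque.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

    Retraining after electrode displacement or posture change amounts to re-running this fit on freshly gathered data, which for OLS takes a fraction of a second.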

  2. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
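
    The core trick, carrying an unknown parameter as an extra filter state, can be sketched with a toy extended Kalman filter for the decay model dx/dt = -k*x. The rate constant, noise levels, and step count below are all hypothetical, and the paper's filter variant and biological models are considerably more elaborate:

```python
import random

random.seed(7)
DT, K_TRUE, R, Q = 0.1, 0.5, 0.005 ** 2, 1e-8

# Simulate noisy observations of the Euler-discretized decay
ys, x = [], 1.0
for _ in range(150):
    x -= K_TRUE * x * DT
    ys.append(x + random.gauss(0.0, 0.005))

# EKF on the augmented state z = (x, k): the parameter rides along as a state
z = [1.0, 0.1]                   # deliberately poor initial guess for k
P = [[0.01, 0.0], [0.0, 1.0]]    # state covariance
for y in ys:
    xe, ke = z
    z = [xe - ke * xe * DT, ke]  # predict (k has trivial dynamics)
    F = [[1.0 - ke * DT, -xe * DT], [0.0, 1.0]]  # Jacobian of the dynamics
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
          for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + (Q if i == j else 0.0)
          for j in range(2)] for i in range(2)]      # P = F P F^T + Q
    S = P[0][0] + R              # innovation variance for H = [1, 0]
    Kx, Kk = P[0][0] / S, P[1][0] / S
    innov = y - z[0]
    z = [z[0] + Kx * innov, z[1] + Kk * innov]
    P = [[(1 - Kx) * P[0][0], (1 - Kx) * P[0][1]],
         [P[1][0] - Kk * P[0][0], P[1][1] - Kk * P[0][1]]]

k_est = z[1]  # converges toward K_TRUE as the innovations accumulate
```

    In the paper's pipeline this recursive estimate is only the first guess, which is then screened by an identifiability test and refined by optimization.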

  3. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
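
    The "single model" idea, extending each segment's polynomial over the full height domain so all segments are fitted in one regression, can be sketched with a truncated power basis. The knot locations and the synthetic height/volume data below are hypothetical, not TANCS output:

```python
import numpy as np

knots = [40.0, 70.0]  # hypothetical segment boundaries (tank height units)

def design(h):
    """One global design matrix: base quadratic plus truncated quadratic
    terms that switch on past each knot, extending each segment's polynomial
    over the entire height domain."""
    h = np.asarray(h, dtype=float)
    cols = [np.ones_like(h), h, h ** 2]
    cols += [np.clip(h - k, 0.0, None) ** 2 for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
heights = np.linspace(0.0, 100.0, 120)
true_vol = 5.0 * heights + 0.02 * heights ** 2   # smooth "tank" shape
volumes = true_vol + rng.normal(0.0, 1.0, heights.size)

# All segments estimated simultaneously in a single least-squares fit
coef, *_ = np.linalg.lstsq(design(heights), volumes, rcond=None)
fitted = design(heights) @ coef
rmse = float(np.sqrt(np.mean((fitted - volumes) ** 2)))
```

    Because every coefficient is estimated jointly, the fit is continuous across segment boundaries by construction and a single covariance matrix covers the whole calibration function, which is what enables the proper treatment of between-segment correlations.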

  4. NanoSafer vs. 1.1 - Nanomaterial risk assessment using first order modeling

    DEFF Research Database (Denmark)

    Jensen, Keld A.; Saber, Anne T.; Kristensen, Henrik V.

    2013-01-01

for safe use of MN based on first order modeling. The hazard and case-specific exposure assessments are combined for an integrated risk evaluation and final control banding. Requested material data are typically available from the producers' technical information sheets. The hazard data are given in the SDS for the closest analogue bulk material, for which the requested occupational exposure limit (OEL) is given as well. The emission potential is either given by a constant release rate or the dustiness level determined using the EN15051 rotating drum or similar. The exposure assessment is estimated… of the nearest analogue bulk material and the specific surface area. The NanoSafer control banding tool is now available in Danish and English and contains help tools, including a data library with dustiness data and an inspirational nanosafety e-learning tool for companies' risk management. The ability

  5. Kinetic parameter estimation model for anaerobic co-digestion of waste activated sludge and microalgae.

    Science.gov (United States)

    Lee, Eunyoung; Cumberbatch, Jewel; Wang, Meng; Zhang, Qiong

    2017-03-01

    Anaerobic co-digestion has a potential to improve biogas production, but limited kinetic information is available for co-digestion. This study introduced regression-based models to estimate the kinetic parameters for the co-digestion of microalgae and Waste Activated Sludge (WAS). The models were developed using the ratios of co-substrates and the kinetic parameters for the single substrate as indicators. The models were applied to the modified first-order kinetics and Monod model to determine the rate of hydrolysis and methanogenesis for the co-digestion. The results showed that the model using a hyperbola function was better for the estimation of the first-order kinetic coefficients, while the model using inverse tangent function closely estimated the Monod kinetic parameters. The models can be used for estimating kinetic parameters for not only microalgae-WAS co-digestion but also other substrates' co-digestion such as microalgae-swine manure and WAS-aquatic plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
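    The two kinetic building blocks named above can be illustrated briefly. The sketch below fits a first-order hydrolysis coefficient k to cumulative methane yield data B(t) = B0(1 - exp(-kt)) by a simple grid search, and blends single-substrate coefficients with a hyperbola-type mixing function; the exact form of the blend and all parameter values are illustrative assumptions, not the paper's fitted models.

```python
import math

# Hedged sketch: estimate a first-order hydrolysis coefficient k from
# cumulative methane yield data, and blend single-substrate coefficients
# for a co-digestion mixture with a hyperbola-type function.

def first_order_yield(t, b0, k):
    """Cumulative yield under first-order kinetics: B(t) = B0 (1 - e^{-kt})."""
    return b0 * (1.0 - math.exp(-k * t))

def fit_k(times, yields, b0):
    """Grid-search least squares for the first-order coefficient k."""
    best_k, best_sse = None, float("inf")
    k = 0.001
    while k <= 1.0:
        sse = sum((first_order_yield(t, b0, k) - y) ** 2
                  for t, y in zip(times, yields))
        if sse < best_sse:
            best_k, best_sse = k, sse
        k += 0.001
    return best_k

def co_digestion_k(k1, k2, ratio, c=1.0):
    """Hypothetical hyperbola-type blend of single-substrate coefficients,
    where `ratio` is the fraction of substrate 1 in the mixture."""
    return k2 + (k1 - k2) * ratio / (ratio + c * (1.0 - ratio))

# Synthetic batch test: recover k = 0.15 1/day from noise-free data.
b0, k_true = 350.0, 0.15
times = list(range(0, 31))
yields = [first_order_yield(t, b0, k_true) for t in times]
k_hat = fit_k(times, yields, b0)
```

The blend reduces to the single-substrate coefficients at ratio 0 and 1, which is the minimal consistency requirement for any such mixing rule.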

  6. A parametric model order reduction technique for poroelastic finite element models.

    Science.gov (United States)

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained by rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results from the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  7. A distributed approach for parameters estimation in System Biology models

    International Nuclear Information System (INIS)

    Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.

    2009-01-01

    Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that yield the best model fit with respect to the experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. The comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.

  8. Estimating Jupiter’s Gravity Field Using Juno Measurements, Trajectory Estimation Analysis, and a Flow Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Galanti, Eli; Kaspi, Yohai [Department of Earth and Planetary Sciences, Weizmann Institute of Science, Rehovot (Israel); Durante, Daniele; Finocchiaro, Stefano; Iess, Luciano, E-mail: eli.galanti@weizmann.ac.il [Dipartimento di Ingegneria Meccanica e Aerospaziale, Sapienza Universita di Roma, Rome (Italy)

    2017-07-01

    The upcoming Juno spacecraft measurements have the potential of improving our knowledge of Jupiter’s gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be very accurate only over a limited latitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially regarding the Jovian flow structure and its depth, which can influence the measured gravity field. In this study we propose a new iterative method for the estimation of the Jupiter gravity field, using a simulated Juno trajectory, a trajectory estimation model, and an adjoint-based inverse model for the flow dynamics. We test this method both for zonal harmonics only and with a full gravity field including tesseral harmonics. The results show that this method can fit some of the gravitational harmonics better to the “measured” harmonics, mainly because of the added information from the dynamical model, which includes the flow structure. Thus, it is suggested that the method presented here has the potential of improving the accuracy of the expected gravity harmonics estimated from the Juno and Cassini radio science experiments.

  9. Near Shore Wave Modeling and applications to wave energy estimation

    Science.gov (United States)

    Zodiatis, G.; Galanis, G.; Hayes, D.; Nikolaidis, A.; Kalogeri, C.; Adam, A.; Kallos, G.; Georgiou, G.

    2012-04-01

    The estimation of the wave energy potential at the European coastline has received increased attention in recent years as a result of the adoption of novel policies in the energy market, the concerns for global warming and the nuclear energy security problems. Within this framework, numerical wave modeling systems keep a primary role in the accurate description of wave climate and microclimate that is a prerequisite for any wave energy assessment study. In the present work two of the most popular wave models are used for the estimation of the wave parameters at the coastline of Cyprus: the latest parallel version of the wave model WAM (ECMWF version), which employs a new parameterization of shallow water effects, and the SWAN model, classically used for near shore wave simulations. The results obtained from the wave models near shores are studied from an energy estimation point of view: the wave parameters that mainly affect the temporal and spatial distribution of energy, that is the significant wave height and the mean wave period, are statistically analyzed, focusing on possible different aspects captured by the two models. Moreover, the wave spectrum distributions prevailing in different areas are discussed, contributing, in this way, to the wave energy assessment in the area. This work is part of two European projects focusing on the estimation of the wave energy distribution around Europe: the MARINA Platform (http://www.marina-platform.info/index.aspx) and the Ewave (http://www.oceanography.ucy.ac.cy/ewave/) projects.
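    The abstract notes that significant wave height and wave period drive the energy estimate. A standard deep-water approximation for wave energy flux per metre of wave crest is P = ρ g² Hs² Te / (64 π), with Te the energy period; substituting the mean wave period, as sketched below, is only an approximation.

```python
import math

# Deep-water wave energy flux per unit crest length (W/m):
# P = rho * g^2 * Hs^2 * Te / (64 * pi). Using the mean period in place of
# the energy period Te is an approximation; rho is seawater density.

def wave_energy_flux(hs_m, te_s, rho=1025.0, g=9.81):
    """Wave power per metre of wave crest, deep-water approximation."""
    return rho * g ** 2 * hs_m ** 2 * te_s / (64.0 * math.pi)

# Example: Hs = 2 m, Te = 6 s gives roughly 12 kW per metre of wave crest.
p = wave_energy_flux(2.0, 6.0)
```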

  10. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world, resulting in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. Reliable simulations require a current base of parameters for the models of generating units, including the models of synchronous generators. This paper presents a method for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  11. Markov models and the ensemble Kalman filter for estimation of sorption rates.

    Energy Technology Data Exchange (ETDEWEB)

    Vugrin, Eric D.; McKenna, Sean Andrew (Sandia National Laboratories, Albuquerque, NM); Vugrin, Kay White

    2007-09-01

    Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve accuracy of the rate estimation by as much as an order of magnitude.
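    The assimilation step described above can be sketched with a scalar EnKF: an ensemble of sorption-rate guesses is updated with observed breakthrough concentrations. The forward model c(t) = exp(-kt) below is a stand-in for the paper's Markov mass-transfer model, and all values are illustrative.

```python
import math, random

# Minimal scalar ensemble Kalman filter sketch: an ensemble of rate guesses
# k_i is nudged toward rates consistent with observed concentrations using
# the ensemble cross-covariance between k and the predicted observation.

random.seed(1)

def forward(k, t):
    """Stand-in forward model: first-order decay of concentration."""
    return math.exp(-k * t)

def enkf_update(ensemble, t, c_obs, obs_var=1e-4):
    """One EnKF analysis step (perturbed observations) on the scalar rate k."""
    pred = [forward(k, t) for k in ensemble]
    k_mean = sum(ensemble) / len(ensemble)
    p_mean = sum(pred) / len(pred)
    cov_kp = sum((k - k_mean) * (p - p_mean)
                 for k, p in zip(ensemble, pred)) / (len(ensemble) - 1)
    var_p = sum((p - p_mean) ** 2 for p in pred) / (len(pred) - 1)
    gain = cov_kp / (var_p + obs_var)
    return [k + gain * (c_obs + random.gauss(0.0, math.sqrt(obs_var)) - p)
            for k, p in zip(ensemble, pred)]

# Biased initial ensemble centred on k = 0.5; the true rate is 0.3.
k_true = 0.3
ensemble = [random.gauss(0.5, 0.1) for _ in range(200)]
for t in (1.0, 2.0, 3.0, 4.0, 5.0):
    ensemble = enkf_update(ensemble, t, forward(k_true, t))
k_hat = sum(ensemble) / len(ensemble)
```

Even with a biased initial ensemble, the analysis steps pull the ensemble mean toward the true rate, which mirrors the biased-perturbation cases discussed in the abstract.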

  12. Multilevel Autoregressive Mediation Models: Specification, Estimation, and Applications.

    Science.gov (United States)

    Zhang, Qian; Wang, Lijuan; Bergeman, C S

    2017-11-27

    In the current study, extending from the cross-lagged panel models (CLPMs) in Cole and Maxwell (2003), we proposed the multilevel autoregressive mediation models (MAMMs) by allowing the coefficients to differ across individuals. In addition, Level-2 covariates can be included to explain the interindividual differences of mediation effects. Given the complexity of the proposed models, Bayesian estimation was used. Both a CLPM and an unconditional MAMM were fitted to daily diary data. The 2 models yielded different statistical conclusions regarding the average mediation effect. A simulation study was conducted to examine the estimation accuracy of Bayesian estimation for MAMMs and consequences of model mis-specifications. Factors considered included the sample size (N), number of time points (T), fixed indirect and direct effect sizes, and Level-2 variances and covariances. Results indicated that the fixed effect estimates for the indirect effect components (a and b) and the fixed effects of Level-2 covariates were accurate when N ≥ 50 and T ≥ 5. For estimating Level-2 variances and covariances, they were accurate provided a sufficiently large N and T (e.g., N ≥ 500 and T ≥ 50). Estimates of the average mediation effect were generally accurate when N ≥ 100 and T ≥ 10, or N ≥ 50 and T ≥ 20. Furthermore, we found that when Level-2 variances were zero, MAMMs yielded valid inferences about the fixed effects, whereas when random effects existed, CLPMs had low coverage rates for fixed effects. DIC can be used for model selection. Limitations and future directions were discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Coupling Hydrologic and Hydrodynamic Models to Estimate PMF

    Science.gov (United States)

    Felder, G.; Weingartner, R.

    2015-12-01

    Most sophisticated probable maximum flood (PMF) estimations derive the PMF from the probable maximum precipitation (PMP) by applying deterministic hydrologic models calibrated with observed data. This method is based on the assumption that the hydrological system is stationary, meaning that the system behaviour during the calibration period or the calibration event is presumed to be the same as it is during the PMF. However, as soon as a catchment-specific threshold is reached, the system is no longer stationary. At or beyond this threshold, retention areas, new flow paths, and changing runoff processes can strongly affect downstream peak discharge. These effects can be accounted for by coupling hydrologic and hydrodynamic models, a technique that is particularly promising when the expected peak discharge may considerably exceed the observed maximum discharge. In such cases, the coupling of hydrologic and hydraulic models has the potential to significantly increase the physical plausibility of PMF estimations. This procedure ensures both that the estimated extreme peak discharge does not exceed the physical limit based on riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered. Our study discusses the prospect of considering retention effects on PMF estimations by coupling hydrologic and hydrodynamic models. This method is tested by forcing PREVAH, a semi-distributed deterministic hydrological model, with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to externally force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). Finally, the PMF estimation results obtained using the coupled modelling approach are compared to the results obtained using ordinary hydrologic modelling.

  14. SUN-RAH: a nucleoelectric BWR university simulator based in reduced order models

    International Nuclear Information System (INIS)

    Morales S, J.B.; Lopez R, A.; Sanchez B, A.; Sanchez S, R.; Hernandez S, A.

    2003-01-01

    This paper presents the development of a simulator that represents the dynamics of a nuclear power plant with a BWR-type reactor using reduced order models. These models capture the characteristics defined by the dominant poles of the system (1), and most early operational transients in a power plant can be reproduced with considerable fidelity if the models are identified with plant data or with reference results from a best-estimate code such as RAMONA, TRAC (2) or RELAP. The simulator models are original developments or simplifications derived from the physical laws, retaining the main terms. This work describes the objective of the project and the general specifications of the university BWR nuclear power plant simulator (SUN-RAH), as well as the completed parts, which are fundamentally the nuclear reactor, the nuclear steam supply system (NSSS), the balance of plant (BOP), the main plant controllers and the implemented graphical interfaces. The pending goals as well as the future developments and applications of SUN-RAH are described. (Author)

  15. A time series model: First-order integer-valued autoregressive (INAR(1))

    Science.gov (United States)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. The first-order Integer-valued AutoRegressive model, INAR(1), is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process value one period earlier. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses the median or a Bayesian forecasting methodology. The median forecasting methodology finds the least integer s at which the cumulative distribution function (CDF) is greater than or equal to 0.5. The Bayesian forecasting methodology produces h-step-ahead forecasts by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s at which the CDF is greater than or equal to u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 to April 2016.
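    The model and estimator described above are compact enough to sketch directly: X_t = α ∘ X_{t-1} + e_t, where ∘ is binomial thinning and e_t is Poisson innovation noise, with α estimated by the standard CLS regression of X_t on X_{t-1}. Parameter values below are illustrative.

```python
import math, random

# Sketch of INAR(1) with binomial thinning and CLS estimation of alpha.

random.seed(7)

def binomial_thin(x, alpha):
    """alpha ∘ x: each of the x counts survives independently with prob alpha."""
    return sum(1 for _ in range(x) if random.random() < alpha)

def poisson(lam):
    """Poisson sampler (Knuth's multiplication method, fine for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_inar1(n, alpha, lam, x0=5):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(binomial_thin(xs[-1], alpha) + poisson(lam))
    return xs

def cls_alpha(xs):
    """Conditional least squares: slope of the regression of X_t on X_{t-1}."""
    x_prev, x_curr = xs[:-1], xs[1:]
    m_prev = sum(x_prev) / len(x_prev)
    m_curr = sum(x_curr) / len(x_curr)
    num = sum((a - m_prev) * (b - m_curr) for a, b in zip(x_prev, x_curr))
    den = sum((a - m_prev) ** 2 for a in x_prev)
    return num / den

xs = simulate_inar1(5000, alpha=0.6, lam=2.0)
alpha_hat = cls_alpha(xs)
```

With α = 0.6 and Poisson(2) innovations, the stationary mean is λ/(1 - α) = 5, and the CLS estimate recovers α closely on a series of this length.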

  16. A Deep Learning based Approach to Reduced Order Modeling of Fluids using LSTM Neural Networks

    Science.gov (United States)

    Mohan, Arvind; Gaitonde, Datta

    2017-11-01

    Reduced Order Modeling (ROM) can be used as a surrogate for prohibitively expensive simulations to model flow behavior over long time periods. ROM is predicated on extracting dominant spatio-temporal features of the flow from CFD or experimental datasets. We explore ROM development with a deep learning approach, which comprises learning functional relationships between different variables in large datasets for predictive modeling. Although deep learning and related artificial intelligence based predictive modeling techniques have shown varied success in other fields, such approaches are in their initial stages of application to fluid dynamics. Here, we explore the application of the Long Short Term Memory (LSTM) neural network to sequential data, specifically to predict the time coefficients of Proper Orthogonal Decomposition (POD) modes of the flow for future timesteps, by training it on data at previous timesteps. The approach is demonstrated by constructing ROMs of several canonical flows. Additionally, we show that statistical estimates of stationarity in the training data can indicate a priori how amenable a given flow-field is to this approach. Finally, the potential and limitations of deep learning based ROM approaches will be elucidated and further developments discussed.

  17. Estimation of the Human Absorption Cross Section Via Reverberation Models

    DEFF Research Database (Denmark)

    Steinböck, Gerhard; Pedersen, Troels; Fleury, Bernard Henri

    2018-01-01

    and compare the obtained results to those of Sabine's model. We find that the absorption by persons is large enough to be measured with a wideband channel sounder and that estimates of the human absorption cross section differ for the two models. The obtained values are comparable to values reported...... in the literature. We also suggest the use of controlled environments with low average absorption coefficients to obtain more reliable estimates. The obtained values can be used to predict the change of reverberation time with persons in the propagation environment. This allows prediction of channel characteristics...... relevant in communication systems, e.g. path loss and rms delay spread, for various population densities....

  18. Analysis and design of second-order sliding-mode algorithms for quadrotor roll and pitch estimation.

    Science.gov (United States)

    Chang, Jing; Cieslak, Jérôme; Dávila, Jorge; Zolghadri, Ali; Zhou, Jun

    2017-11-01

    The problem addressed in this paper is that of quadrotor roll and pitch estimation without any assumption about the knowledge of perturbation bounds when Inertial Measurement Units (IMU) data or position measurements are available. A Smooth Sliding Mode (SSM) algorithm is first designed to provide reliable estimation under a smooth disturbance assumption. This assumption is next relaxed with the second proposed Adaptive Sliding Mode (ASM) algorithm, which deals with disturbances of unknown bounds. In addition, the analysis of the observers is extended to the case where measurements are corrupted by bias and noise. The gains of the proposed algorithms were deduced from a Lyapunov function. Furthermore, some useful guidelines are provided for the selection of the observer tuning parameters. The performance of these two approaches is evaluated using a nonlinear simulation model and considering either accelerometer or position measurements. The simulation results demonstrate the benefits of the proposed solutions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
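    A minimal sketch of the second-order sliding-mode family used above is the super-twisting algorithm, shown here as Levant's robust differentiator reconstructing f'(t) from samples of f(t) alone. This is not the paper's SSM or ASM observer; the gains, Euler discretization and test signal are illustrative assumptions.

```python
import math

# Super-twisting (second-order sliding mode) differentiator sketch:
#   z0' = -l0 * |e0|^{1/2} * sign(e0) + z1,   z1' = -l1 * sign(e0),
# with e0 = z0 - f(t). In sliding mode z1 tracks f'(t) despite the
# discontinuous injection. Gains must dominate the Lipschitz bound of f'.

def supertwisting_derivative(f, t_end, dt=1e-4, l0=3.0, l1=4.0):
    z0, z1 = f(0.0), 0.0          # state estimate and derivative estimate
    t, errs = 0.0, []
    while t < t_end:
        e0 = z0 - f(t)
        z0 += dt * (-l0 * math.sqrt(abs(e0)) * math.copysign(1.0, e0) + z1)
        z1 += dt * (-l1 * math.copysign(1.0, e0))
        t += dt
        if t > t_end - 1.0:       # record error after the reaching phase
            errs.append(abs(z1 - math.cos(t)))
    return z1, sum(errs) / len(errs)

# Differentiate f(t) = sin(t); after convergence z1 chatters tightly
# around f'(t) = cos(t).
z1, mean_err = supertwisting_derivative(math.sin, t_end=5.0)
```

The same injection structure, driven by an attitude error instead of a sampling error, underlies the roll and pitch observers analysed in the paper.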

  19. Biomass models to estimate carbon stocks for hardwood tree species

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Peinado, R.; Montero, G.; Rio, M. del

    2012-11-01

    To estimate forest carbon pools from forest inventories it is necessary to have biomass models or biomass expansion factors. In this study, tree biomass models were developed for the main hardwood forest species in Spain: Alnus glutinosa, Castanea sativa, Ceratonia siliqua, Eucalyptus globulus, Fagus sylvatica, Fraxinus angustifolia, Olea europaea var. sylvestris, Populus x euramericana, Quercus canariensis, Quercus faginea, Quercus ilex, Quercus pyrenaica and Quercus suber. Different tree biomass components were considered: stem with bark, branches of different sizes, above and belowground biomass. For each species, a system of equations was fitted using seemingly unrelated regression, fulfilling the additivity property between biomass components. Diameter and total height were explored as independent variables. All models included tree diameter, whereas for the majority of species, total height was only considered in the stem biomass models and in some of the branch models. The comparison of the new biomass models with previous models fitted separately for each tree component indicated an improvement in the accuracy of the models: a mean reduction of 20% in the root mean square error and a mean increase in model efficiency of 7% in comparison with recently published models. Thus, the fitted models allow more accurate estimation of the biomass stock of hardwood species from Spanish National Forest Inventory data. (Author) 45 refs.

  20. Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    Directory of Open Access Journals (Sweden)

    Johannes Neubauer

    2013-09-01

    Full Text Available We present a graphical and dynamic framework for binding and execution of business process models. It is tailored to integrate (1) ad hoc processes modeled graphically, (2) third-party services discovered on the Internet, and (3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time, but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaption, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime.

  1. Estimation of oil toxicity using an additive toxicity model

    International Nuclear Information System (INIS)

    French, D.

    2000-01-01

    The impacts to aquatic organisms resulting from acute exposure to aromatic mixtures released from oil spills can be modeled using a newly developed toxicity model. This paper presents a summary of the model development for the toxicity of monoaromatic and polycyclic aromatic hydrocarbon mixtures. This is normally difficult to quantify because oils are mixtures of a variety of hydrocarbons with different toxicities and environmental fates. Also, aromatic hydrocarbons are volatile, making it difficult to expose organisms to constant concentrations in bioassay tests. This newly developed and validated model corrects toxicity for time and temperature of exposure. In addition, it estimates the toxicity of each aromatic in the oil-derived mixture. The toxicity of the mixture can be estimated by the weighted sum of the toxicities of the individual compounds. Acute toxicity is estimated as LC50 (lethal concentration to 50 per cent of exposed organisms). Sublethal effects levels are estimated from LC50s. The model was verified with available oil bioassay data. It was concluded that oil toxicity is a function of the aromatic content and composition in the oil as well as the fate and partitioning of those components in the environment. 81 refs., 19 tabs., 1 fig
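    The weighted-sum idea above is often expressed through toxic-unit additivity: the mixture LC50 follows from the concentration-weighted harmonic combination of component LC50s. A minimal sketch, with purely illustrative component values:

```python
# Toxic-unit additivity sketch: 1 / LC50_mix = sum_i f_i / LC50_i, where
# f_i is the mass fraction of aromatic component i in the dissolved mixture.
# Component fractions and LC50 values below are illustrative, not from
# the paper's dataset.

def mixture_lc50(fractions, lc50s):
    """Mixture LC50 under additive (toxic-unit) toxicity."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(f / l for f, l in zip(fractions, lc50s))

# Two-component mixture: a more toxic PAH and a less toxic monoaromatic.
lc = mixture_lc50([0.3, 0.7], [0.5, 10.0])   # LC50s in mg/L
```

The mixture LC50 always lies between the component LC50s and is pulled toward the more toxic component, which is the qualitative behavior the additive model encodes.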

  2. Parameter Estimation for Single Diode Models of Photovoltaic Modules

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Photovoltaic and Distributed Systems Integration Dept.

    2015-03-01

    Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
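    The single diode model referred to above is the implicit equation I = IL - I0(exp((V + I·Rs)/(n·Vth)) - 1) - (V + I·Rs)/Rsh, which must be solved numerically for I at each voltage. A minimal sketch using fixed-point iteration; the parameter values are illustrative, not fitted module parameters:

```python
import math

# Single diode model: I = IL - I0*(exp((V + I*Rs)/(n*Vth)) - 1) - (V + I*Rs)/Rsh.
# The equation is implicit in I; for typical parameters the fixed-point map
# below is a contraction and converges quickly.

def diode_current(v, il, i0, rs, rsh, n_vth, iters=200):
    """Solve the implicit single diode equation for current at voltage v."""
    i = il                       # initial guess: the photocurrent
    for _ in range(iters):
        i = il - i0 * (math.exp((v + i * rs) / n_vth) - 1.0) - (v + i * rs) / rsh
    return i

# Illustrative module-scale parameters (il in A, rs/rsh in ohm, n_vth in V).
params = dict(il=5.0, i0=1e-9, rs=0.2, rsh=300.0, n_vth=1.5)

i_sc = diode_current(0.0, **params)          # short-circuit current, close to IL
```

Parameter estimation methods like the one in the abstract adjust (IL, I0, Rs, Rsh, n) until curves produced by this equation match the measured I-V curves.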

  3. Groundwater Modelling For Recharge Estimation Using Satellite Based Evapotranspiration

    Science.gov (United States)

    Soheili, Mahmoud; (Tom) Rientjes, T. H. M.; (Christiaan) van der Tol, C.

    2017-04-01

    Groundwater movement is influenced by several factors and processes in the hydrological cycle, among which recharge is of high relevance. Since the amount of aquifer extractable water directly relates to the recharge amount, estimation of recharge is a prerequisite of groundwater resources management. Recharge is highly affected by water loss mechanisms, the largest of which is actual evapotranspiration (ETa). It is, therefore, essential to have a detailed assessment of the impact of ETa on groundwater recharge. The objective of this study was to evaluate how recharge was affected when satellite-based evapotranspiration was used instead of in-situ based ETa in the Salland area, the Netherlands. The Methodology for Interactive Planning for Water Management (MIPWA) model setup, which includes a groundwater model for the northern part of the Netherlands, was used for recharge estimation. The Surface Energy Balance Algorithm for Land (SEBAL) based actual evapotranspiration maps from Waterschap Groot Salland were also used. Comparison of SEBAL based ETa estimates with in-situ based estimates in the Netherlands showed that these SEBAL estimates were not reliable, and as such they could not serve for calibrating root zone parameters in the CAPSIM model. The annual cumulative ETa map produced by the model showed that the maximum amount of evapotranspiration occurs in mixed forest areas in the northeast and a portion of the central parts. Estimates ranged from 579 mm to a minimum of 0 mm in the highest elevated areas with woody vegetation in the southeast of the region. Variations in mean seasonal hydraulic head and groundwater level for each layer showed that the hydraulic gradient follows elevation in the Salland area from southeast (maximum) to northwest (minimum) of the region, which depicts the groundwater flow direction. The mean seasonal water balance in the CAPSIM part was evaluated to represent recharge estimation in the first layer. The highest estimated recharge flux was for autumn

  4. Models for estimating photosynthesis parameters from in situ production profiles

    Science.gov (United States)

    Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana

    2017-12-01

    The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of
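    The photosynthesis-irradiance functions discussed above map light to carbon assimilation through the two parameters named in the abstract: the initial slope α and the assimilation number Pm. A common choice is the Webb/Platt exponential form P(I) = Pm(1 - exp(-αI/Pm)); the sketch below evaluates it down a water column with exponentially attenuated light. All parameter values are illustrative.

```python
import math

# Webb/Platt-type photosynthesis-irradiance curve and a depth profile of
# production under light attenuated as I(z) = I0 * exp(-kd * z).
# alpha is the initial slope, pm the assimilation number; values are
# illustrative, not fitted to the Station Aloha profiles.

def pi_curve(irradiance, alpha, pm):
    """Assimilation rate: linear in I at low light, saturating at pm."""
    return pm * (1.0 - math.exp(-alpha * irradiance / pm))

def production_profile(i0, kd, depths, alpha, pm):
    """Normalized production at each depth z."""
    return [pi_curve(i0 * math.exp(-kd * z), alpha, pm) for z in depths]

alpha, pm = 0.05, 4.0
profile = production_profile(i0=400.0, kd=0.1, depths=range(0, 101, 10),
                             alpha=alpha, pm=pm)
```

Parameter recovery as in the paper amounts to inverting this mapping: given a measured profile, find the (α, Pm) pair that reproduces it under the chosen P-I function.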

  5. The Channel Estimation and Modeling in High Altitude Platform Station Wireless Communication Dynamic Network

    Directory of Open Access Journals (Sweden)

    Xiaoyang Liu

    2017-01-01

    Full Text Available In order to analyze the channel estimation performance of a near space high altitude platform station (HAPS) in a wireless communication system, the structure and formation of HAPS are studied in this paper. The traditional Least Squares (LS) channel estimation method and the Singular Value Decomposition-Linear Minimum Mean-Squared (SVD-LMMS) channel estimation method are compared and investigated. A novel channel estimation method and model are proposed. The channel estimation performance of HAPS is studied in depth. The simulation and theoretical analysis results show that the performance of the proposed method is better than that of the traditional methods. A lower Bit Error Rate (BER) and higher Signal-to-Noise Ratio (SNR) can be obtained by the proposed method compared with the LS and SVD-LMMS methods.
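    The LS baseline compared in the abstract is simple to sketch: with known pilot symbols x_k and received samples y_k = h_k·x_k + noise on each subcarrier, the LS channel estimate is h_hat_k = y_k / x_k (the scalar form of (X^H X)^{-1} X^H y for a diagonal pilot matrix). Pilot and channel values below are illustrative.

```python
# Per-subcarrier Least Squares channel estimation sketch: with a diagonal
# pilot matrix, the LS solution reduces to dividing each received sample
# by its known pilot symbol. Noise-free here, so recovery is exact.

def ls_channel_estimate(pilots, received):
    """LS estimate h_hat_k = y_k / x_k for each pilot subcarrier."""
    return [y / x for x, y in zip(pilots, received)]

# Illustrative two-tap frequency response and unit-magnitude pilots.
h_true = [complex(0.8, 0.3), complex(-0.2, 0.5)]
pilots = [complex(1, 0), complex(0, 1)]
received = [h * x for h, x in zip(h_true, pilots)]
h_hat = ls_channel_estimate(pilots, received)
```

LS ignores the noise and channel statistics, which is why LMMSE-type estimators such as SVD-LMMS outperform it at low SNR, at the cost of needing that statistical knowledge.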

  6. Deconvolution Estimation in Measurement Error Models: The R Package decon

    Science.gov (United States)

    Wang, Xiao-Feng; Wang, Bin

    2011-01-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
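The motivation for deconvolution can be seen in a short simulation. This sketch only illustrates why a correction is needed under homoscedastic Gaussian error; it is not the deconvolution kernel estimator implemented in the decon package:

```python
import numpy as np

rng = np.random.default_rng(1)

# Why deconvolution is needed (illustrative sketch only -- not the
# deconvolution kernel estimators implemented in decon): with
# errors-in-variables we observe W = X + U, so a naive density estimate
# built from W is over-dispersed by Var(U).
n = 100_000
X = rng.normal(0.0, 1.0, n)     # latent variable whose density we want
U = rng.normal(0.0, 0.5, n)     # homoscedastic measurement error
W = X + U                       # what we actually observe

print(X.var(), W.var())         # Var(W) ~ Var(X) + Var(U) = 1 + 0.25
```

The deconvolution kernel methods undo exactly this inflation in the Fourier domain, which is why the package adapts the FFT-based density estimation algorithm.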

  7. Testing Static Trade-off Against Pecking Order Models of Capital Structure

    OpenAIRE

    Lakshmi Shyam-Sunder; Stewart C. Myers

    1994-01-01

    This paper tests traditional capital structure models against the alternative of a pecking order model of corporate financing. The basic pecking order model, which predicts external debt financing driven by the internal financial deficit, has much greater explanatory power than a static trade-off model which predicts that each firm adjusts toward an optimal debt ratio. We show that the power of some usual tests of the trade-off model is virtually nil. We question whether the available empiric...

  8. Parameter estimation for groundwater models under uncertain irrigation data

    Science.gov (United States)

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in model parameters and predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
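The core idea behind uncertainty-weighted least squares can be sketched with a toy regression. This is not the paper's IUWLS algorithm, only the standard weighted least-squares mechanism it builds on, with assumed noise levels:

```python
import numpy as np

rng = np.random.default_rng(2)

# Weighted least-squares sketch illustrating the core idea behind IUWLS
# (not the paper's exact algorithm): observations with larger input
# uncertainty get weight 1/variance, so noisy data cannot bias the fit
# as strongly as under ordinary least squares.
n = 500
x = rng.uniform(0.0, 10.0, n)
sigma = np.where(x > 5.0, 2.0, 0.2)       # heteroscedastic noise level
y = 3.0 * x + 1.0 + rng.normal(0.0, sigma)

A = np.column_stack([x, np.ones(n)])      # design matrix for slope+intercept
W = np.diag(1.0 / sigma**2)               # weights = inverse error variances
beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
print(beta)                               # close to the true [3.0, 1.0]
```

IUWLS additionally re-estimates the weights iteratively during calibration, since the pumping uncertainty itself is not known exactly in advance.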

  9. Modeling, estimation and optimal filtration in signal processing

    CERN Document Server

    Najim, Mohamed

    2010-01-01

The purpose of this book is to provide graduate students and practitioners with traditional methods and more recent results for model-based approaches in signal processing. Firstly, discrete-time linear models such as AR, MA and ARMA models, their properties and their limitations are introduced. In addition, sinusoidal models are addressed. Secondly, estimation approaches based on least squares methods and instrumental variable techniques are presented. Finally, the book deals with optimal filters, i.e. Wiener and Kalman filtering, and adaptive filters such as the RLS, the LMS and the
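The least-squares estimation of an AR model mentioned above is a one-regression exercise. A minimal sketch with arbitrary but stable AR(2) coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)

# Least-squares fit of an AR(2) model, one of the discrete-time linear
# models covered in the book (coefficients here are arbitrary but stable):
# x[k] = a1*x[k-1] + a2*x[k-2] + e[k].
a1, a2 = 0.75, -0.5
n = 20_000
e = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for k in range(2, n):
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + e[k]

# regress x[k] on its two most recent past values
A = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
a_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(a_hat)    # close to [0.75, -0.5]
```

Instrumental-variable techniques, also covered in the book, replace this plain regression when the regressors are correlated with the noise.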

  10. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating evermore functionality onto a single chip forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so...... an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework...... is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project...

  11. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

Full Text Available Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics are often of interest, such as the average leakage current or the average read delay. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails. The plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations on tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
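One of the classical closed-form approximations of this kind is Fenton-Wilkinson moment matching; a sketch with illustrative lognormal parameters (not the paper's device data) shows both the mechanism and the caveat about tails:

```python
import numpy as np

rng = np.random.default_rng(3)

# Moment matching in the Fenton-Wilkinson style, one of the closed-form
# approximations to a sum of lognormals: replace the sum of k i.i.d.
# lognormal leakage currents by a single lognormal with the same mean
# and variance. Parameters are illustrative.
mu, s = 0.0, 0.8
k = 8                                        # leaky devices on a bitline
mean1 = np.exp(mu + s**2 / 2.0)
var1 = (np.exp(s**2) - 1.0) * np.exp(2.0 * mu + s**2)
mean_sum, var_sum = k * mean1, k * var1      # independence assumed

# back out lognormal parameters that match the two moments of the sum
s2_fit = np.log(1.0 + var_sum / mean_sum**2)
mu_fit = np.log(mean_sum) - s2_fit / 2.0

samples = rng.lognormal(mu, s, (200_000, k)).sum(axis=1)
print(samples.mean(), mean_sum)   # body moments match; tail accuracy is
                                  # exactly what may bias the yield estimate
```

The matched distribution reproduces the mean and variance by construction; the tail probabilities, which drive rare-fail yield, are where such approximations diverge and where CDF matching earns its advantage.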

  12. Estimation of traffic accident costs: a prompted model.

    Science.gov (United States)

    Hejazi, Rokhshad; Shamsudin, Mad Nasir; Radam, Alias; Rahim, Khalid Abdul; Ibrahim, Zelina Zaitun; Yazdani, Saeed

    2013-01-01

Traffic accidents account for 25% of unnatural deaths in Iran. The main objective of this study is to find a simple model for the estimation of economic costs, especially in Islamic countries (like Iran), in a straightforward manner. The model can show the magnitude of traffic accident costs in monetary terms. Data were collected from different sources that included traffic police records, insurance companies and hospitals. The conceptual framework in our study was based on the method of Ayati, who used this method for the estimation of economic costs in Iran. We streamlined his method by reducing the number of variables. Our final model has only three variables, all available from insurance companies and police records. The resulting model showed that the traffic accident costs were US$2.2 million in 2007 for our case study route.

  13. Effects of Sample Size, Estimation Methods, and Model Specification on Structural Equation Modeling Fit Indexes.

    Science.gov (United States)

    Fan, Xitao; Wang, Lin; Thompson, Bruce

    1999-01-01

    A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)

  14. Comparison of physically based catchment models for estimating Phosphorus losses

    OpenAIRE

    Nasr, Ahmed Elssidig; Bruen, Michael

    2003-01-01

    As part of a large EPA-funded research project, coordinated by TEAGASC, the Centre for Water Resources Research at UCD reviewed the available distributed physically based catchment models with a potential for use in estimating phosphorous losses for use in implementing the Water Framework Directive. Three models, representative of different levels of approach and complexity, were chosen and were implemented for a number of Irish catchments. This paper reports on (i) the lessons and experience...

  15. Simplified evacuation model for estimating mitigation of early population exposures

    International Nuclear Information System (INIS)

    Strenge, D.L.

    1980-12-01

    The application of a simple evacuation model to the prediction of expected population exposures following acute releases of activity to the atmosphere is described. The evacuation model of Houston is coupled with a normalized Gaussian dispersion calculation to estimate the time integral of population exposure. The methodology described can be applied to specific sites to determine the expected reduction of population exposures due to evacuation
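The dispersion half of such a coupling can be written down compactly. Below is the standard ground-level Gaussian plume kernel with ground reflection, using illustrative dispersion coefficients rather than a stability-class parameterization; it is a sketch of the kind of term the evacuation model is coupled with, not the report's exact methodology:

```python
import numpy as np

def chi_over_q(y, H, sigma_y, sigma_z, u):
    """Normalized ground-level concentration chi/Q (s/m^3), with ground
    reflection, at crosswind offset y for a release at effective height H."""
    return (1.0 / (np.pi * sigma_y * sigma_z * u)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# plume centerline, ground-level release, illustrative dispersion values
val = chi_over_q(y=0.0, H=0.0, sigma_y=100.0, sigma_z=50.0, u=5.0)
print(val)   # 1 / (pi * sigma_y * sigma_z * u)
```

Multiplying chi/Q by the release rate, exposure time, and the (possibly evacuating) population at each location gives the time integral of population exposure the abstract refers to.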

  16. Comparison of two intelligent models to estimate the instantaneous ...

    Indian Academy of Sciences (India)

... they are 85.46 (W/m2), 3.08 (W/m2) and 5.41, respectively. As the results indicate, both models are able to estimate the amount of radiation well, while the neural network has a higher accuracy. The output of the models for six other cities of Iran, with similar climate conditions, also proves the ability of the proposed models.

  17. Parameter Estimation and Model Selection for Mixtures of Truncated Exponentials

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael

    2010-01-01

    Bayesian networks with mixtures of truncated exponentials (MTEs) support efficient inference algorithms and provide a flexible way of modeling hybrid domains (domains containing both discrete and continuous variables). On the other hand, estimating an MTE from data has turned out to be a difficult...

  18. Time-of-flight estimation based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand; Tuquerres, G.; Regtien, Paulus P.L.

    We address the problem of estimating the time-of-flight (ToF) of a waveform that is disturbed heavily by additional reflections from nearby objects. These additional reflections cause interference patterns that are difficult to predict. The introduction of a model for the reflection in terms of a

  19. Empirical Models for the Estimation of Global Solar Radiation in ...

    African Journals Online (AJOL)

    Empirical Models for the Estimation of Global Solar Radiation in Yola, Nigeria. ... and average daily wind speed (WS) for the interval of three years (2010 – 2012) measured using various instruments for Yola of recorded data collected from the Center for Atmospheric Research (CAR), Anyigba are presented and analyzed.
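Empirical global solar radiation models of this kind are most often Angström-Prescott-type regressions; since the abstract does not state the exact model fitted for Yola, both the functional form and the data below are assumptions, shown only to illustrate the fitting step:

```python
import numpy as np

# Angstrom-Prescott-type regression (assumed form, illustrative data):
# clearness index H/H0 regressed on relative sunshine duration n/N.
sunshine_frac = np.array([0.45, 0.55, 0.65, 0.75, 0.85])       # n/N
clearness = np.array([0.4085, 0.4615, 0.5145, 0.5675, 0.6205]) # H/H0

b, a = np.polyfit(sunshine_frac, clearness, 1)   # slope, intercept
print(a, b)      # estimated model: H = H0 * (a + b * n/N)
```

Here H0 is the extraterrestrial radiation and n/N the ratio of measured to maximum possible sunshine hours; variants swap n/N for temperature range or humidity when sunshine records are unavailable.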

  20. Revised models and genetic parameter estimates for production and ...

    African Journals Online (AJOL)

    Genetic parameters for production and reproduction traits in the Elsenburg Dormer sheep stud were estimated using records of 11743 lambs born between 1943 and 2002. An animal model with direct and maternal additive, maternal permanent and temporary environmental effects was fitted for traits considered traits of the ...

  1. Determining input values for a simple parametric model to estimate ...

    African Journals Online (AJOL)

    Estimating soil evaporation (Es) is an important part of modelling vineyard evapotranspiration for irrigation purposes. Furthermore, quantification of possible soil texture and trellis effects is essential. Daily Es from six topsoils packed into lysimeters was measured under grapevines on slanting and vertical trellises, ...

  2. inverse gaussian model for small area estimation via gibbs sampling

    African Journals Online (AJOL)

    ADMIN

(1994) extended the work by Fries and Bhattacharyya (1983) to include the maximum likelihood analysis of the two-factor inverse Gaussian model for the unbalanced and interaction case for the estimation of small area parameters in finite populations. The object of this article is to develop a Bayesian approach for small ...

  3. An Approach to Quality Estimation in Model-Based Development

    DEFF Research Database (Denmark)

    Holmegaard, Jens Peter; Koch, Peter; Ravn, Anders Peter

    2004-01-01

    We present an approach to estimation of parameters for design space exploration in Model-Based Development, where synthesis of a system is done in two stages. Component qualities like space, execution time or power consumption are defined in a repository by platform dependent values. Connectors...

  4. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive app...

  5. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  6. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal. ... On the other hand, the same slope coefficients β , under Generalized Least Squares (GLS) decreased with increased autocorrelation when the sample size T is small. Journal of the Nigerian Association ...

  7. Method of moments estimation of GO-GARCH models

    NARCIS (Netherlands)

    Boswijk, H.P.; van der Weide, R.

    2009-01-01

    We propose a new estimation method for the factor loading matrix in generalized orthogonal GARCH (GO-GARCH) models. The method is based on the eigenvectors of a suitably defined sample autocorrelation matrix of squares and cross-products of the process. The method can therefore be easily applied to

  8. Bayesian nonparametric estimation of hazard rate in monotone Aalen model

    Czech Academy of Sciences Publication Activity Database

    Timková, Jana

    2014-01-01

    Roč. 50, č. 6 (2014), s. 849-868 ISSN 0023-5954 Institutional support: RVO:67985556 Keywords : Aalen model * Bayesian estimation * MCMC Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/timkova-0438210.pdf

  9. Mathematical models for estimating radio channels utilization when ...

    African Journals Online (AJOL)

    Definition of the radio channel utilization indicator is given. Mathematical models for radio channels utilization assessment by real-time flows transfer in the wireless self-organized network are presented. Estimated experiments results according to the average radio channel utilization productivity with and without buffering of ...

  10. Efficient Bayesian Estimation and Combination of GARCH-Type Models

    NARCIS (Netherlands)

    D. David (David); L.F. Hoogerheide (Lennart)

    2010-01-01

This paper proposes an up-to-date review of estimation strategies available for the Bayesian inference of GARCH-type models. The emphasis is put on a novel efficient procedure named AdMitIS. The methodology automatically constructs a mixture of Student-t distributions as an approximation

  11. An improved COCOMO software cost estimation model | Duke ...

    African Journals Online (AJOL)

    In this paper, we discuss the methodologies adopted previously in software cost estimation using the COnstructive COst MOdels (COCOMOs). From our analysis, COCOMOs produce very high software development efforts, which eventually produce high software development costs. Consequently, we propose its extension, ...

  12. Remote sensing estimates of impervious surfaces for pluvial flood modelling

    DEFF Research Database (Denmark)

    Kaspersen, Per Skougaard; Drews, Martin

    This paper investigates the accuracy of medium resolution (MR) satellite imagery in estimating impervious surfaces for European cities at the detail required for pluvial flood modelling. Using remote sensing techniques enables precise and systematic quantification of the influence of the past 30...

  13. Models for estimation of carbon sequestered by Cupressus ...

    African Journals Online (AJOL)

    This study compared models for estimating carbon sequestered aboveground in Cupressus lusitanica plantation stands at Wondo Genet College of Forestry and Natural Resources, Ethiopia. Relationships of carbon storage with tree component and stand age were also investigated. Thirty trees of three different ages (5, ...

  14. A Model-Driven Approach for Hybrid Power Estimation in Embedded Systems Design

    Directory of Open Access Journals (Sweden)

    Ben Atitallah Rabie

    2011-01-01

Full Text Available As technology scales for increased circuit density and performance, the management of power consumption in system-on-chip (SoC) is becoming critical. Today, having the appropriate electronic system level (ESL) tools for power estimation in the design flow is mandatory. The main challenge for the design of such dedicated tools is to achieve a better tradeoff between accuracy and speed. This paper presents a consumption estimation approach that allows the consumption criterion to be taken into account early in the design flow, during system cosimulation. The originality of this approach is that it allows power estimation for both white-box intellectual properties (IPs), using annotated power models, and black-box IPs, using standalone power estimators. In order to obtain accurate power estimates, our simulations were performed at the cycle-accurate bit-accurate (CABA) level, using SystemC. To make our approach fast and not tedious for users, the simulated architectures, including standalone power estimators, were generated automatically using a model driven engineering (MDE) approach. Both annotated power models and standalone power estimators can be used together to estimate the consumption of the same architecture, which makes them complementary. The simulation results showed that the power estimates given by both estimation techniques for a hardware component are very close, with a difference that does not exceed 0.3%. This proves that, even when the IP code is not accessible or not modifiable, our approach allows obtaining quite accurate power estimates early in the design flow, thanks to the automation offered by the MDE approach.

  15. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    Science.gov (United States)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  16. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    International Nuclear Information System (INIS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-01-01

Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows evaluating the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
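The likelihood-based evaluation the abstract advocates is easy to make concrete. A minimal Gaussian scoring sketch with illustrative numbers (not the paper's models or data): given a predictive mean and variance at each measurement location, held-out concentrations are scored by their average log-likelihood.

```python
import numpy as np

# Scoring a distribution model by the data likelihood: a model that
# reports an honest predictive variance outscores one with the same
# mean but an overconfident (too small) variance.
def avg_log_likelihood(y, mean, var):
    return np.mean(-0.5 * np.log(2.0 * np.pi * var)
                   - 0.5 * (y - mean) ** 2 / var)

y = np.array([1.0, 1.2, 0.8])               # held-out measurements
honest = avg_log_likelihood(y, 1.0, 0.04)   # variance matches the spread
overconfident = avg_log_likelihood(y, 1.0, 0.0001)
print(honest, overconfident)
```

This is why predictive variance enables ground-truth evaluation: without a variance, the likelihood of held-out data is simply undefined, and models can only be compared by their means.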

  17. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs Bayesian model averaging (BMA) technique to address the issue of using one single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC) that follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined three AI models and produced better fitting than individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored by using one AI model.
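The BIC-based weighting described above can be sketched directly; the BIC values and hydraulic conductivity estimates below are purely illustrative, not the study's results:

```python
import numpy as np

# BIC-based model weights of the kind used to combine the AI models:
# w_i is proportional to exp(-(BIC_i - min BIC)/2), so the parsimony
# principle nearly discards high-BIC models. Values are illustrative.
bic = np.array([100.0, 102.0, 110.0])      # e.g. TS-FL, ANN, NF (hypothetical)
delta = bic - bic.min()
w = np.exp(-delta / 2.0)
w /= w.sum()                               # normalized model weights
print(w)                                   # third model nearly discarded

k_est = np.array([4.0, 6.0, 20.0])         # each model's K estimate (m/day)
k_bma = w @ k_est                          # BMA point estimate
print(k_bma)
```

The spread of the individual estimates around this weighted mean is what feeds the between-model variance term that single-model estimation ignores.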

  18. Model-order reduction of lumped parameter systems via fractional calculus

    Science.gov (United States)

    Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio

    2018-04-01

    This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
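A standard way to make fractional operators computable is the Grünwald-Letnikov discretization. The sketch below is a generic implementation of that scheme, verified against a known closed form; it is not the paper's model-reduction procedure:

```python
import math
import numpy as np

# Grunwald-Letnikov approximation of the fractional derivative D^alpha f
# at time t: h^(-alpha) * sum_j c_j f(t - j*h), with binomial weights
# c_j = (-1)^j * C(alpha, j) built by the standard recursion.
def gl_derivative(f, t, alpha, h=1e-3):
    n = int(t / h)
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)   # weight recursion
    grid = t - h * np.arange(n + 1)
    return h ** (-alpha) * np.dot(c, f(grid))

# sanity check against the exact result D^0.5 t = t^0.5 / Gamma(1.5)
approx = gl_derivative(lambda s: s, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
print(approx, exact)
```

For alpha = 1 the recursion collapses to the ordinary backward difference, which makes the scheme a natural bridge between integer-order and fractional-order models.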

  19. Estimating Model Parameters of Adaptive Software Systems in Real-Time

    Science.gov (United States)

    Kumar, Dinesh; Tantawi, Asser; Zhang, Li

Adaptive software systems have the ability to adapt to changes in workload and execution environment. In order to perform resource management through model-based control in such systems, an accurate mechanism for estimating the software system's model parameters is required. This paper deals with real-time estimation of a performance model for adaptive software systems that process multiple classes of transactional workload. First, insights into the static performance model estimation problem are provided. Then an Extended Kalman Filter (EKF) design is combined with an open queueing network model to dynamically estimate the model parameters in real-time. Specific problems that are encountered in the case of multiple classes of workload are analyzed. These problems arise mainly due to the under-determined nature of the estimation problem. This motivates us to propose a modified design of the filter. Insights for choosing tuning parameters of the modified design, i.e., number of constraints and sampling intervals, are provided. The modified filter design is shown to effectively tackle problems with multiple classes of workload through experiments.
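The predict/update mechanics of such a filter can be illustrated with a scalar toy problem. The paper uses an open queueing network with multiple workload classes; the sketch below substitutes the single-class M/M/1 relation R = 1/(mu - lambda) as the measurement model, with all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal scalar EKF sketch: estimate an unknown service rate mu from
# noisy response-time measurements via the M/M/1 relation R = 1/(mu-lam).
lam = 6.0                      # known arrival rate (req/s)
mu_true = 10.0                 # unknown service rate to be estimated
meas_var = 1e-3                # response-time measurement noise variance
Q = 1e-6                       # random-walk process noise on mu

mu_hat, P = 8.0, 1.0           # initial estimate and covariance
for _ in range(200):
    z = 1.0 / (mu_true - lam) + rng.normal(0.0, np.sqrt(meas_var))
    P += Q                                     # predict (mu assumed static)
    h = 1.0 / (mu_hat - lam)                   # predicted measurement
    H = -1.0 / (mu_hat - lam) ** 2             # Jacobian dh/dmu
    K = P * H / (H * P * H + meas_var)         # Kalman gain
    mu_hat += K * (z - h)                      # measurement update
    P *= 1.0 - K * H                           # covariance update

print(mu_hat)   # converges toward mu_true
```

With multiple workload classes, each measurement constrains a whole vector of per-class parameters through one response time, which is the under-determination the paper's modified filter design addresses.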

  20. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models, this paper proposes a GSM model - the Bessel K model - that induces concave penalty functions for the estimation of complex sparse signals. The properties of the Bessel K model are analyzed when it is applied to Type I and Type II estimation. This analysis reveals that, by tuning the parameters of the mixing pdf, different penalty functions are invoked depending on the estimation type used, the value of the noise variance, and whether real or complex signals are estimated. Using the Bessel K model, we derive a sparse estimator based on a modification of the expectation-maximization algorithm formulated...