Fundamental Frequency and Model Order Estimation Using Spatial Filtering
DEFF Research Database (Denmark)
Karimian-Azari, Sam; Jensen, Jesper Rindom; Christensen, Mads Græsbøll
2014-01-01
In signal processing applications of harmonic-structured signals, estimates of the fundamental frequency and number of harmonics are often necessary. In real scenarios, a desired signal is contaminated by different levels of noise and interferers, which complicate the estimation of the signal parameters. In this paper, we present an estimation procedure for harmonic-structured signals in situations with strong interference using spatial filtering, or beamforming. We jointly estimate the fundamental frequency and the constrained model order through the output of the beamformers. Besides that, we extend this procedure to account for inharmonicity using unconstrained model order estimation. The simulations show that beamforming improves the performance of the joint estimates of fundamental frequency and the number of harmonics at low signal-to-interference ratio (SIR) levels, and an experiment...
SECOND ORDER LEAST SQUARE ESTIMATION ON ARCH(1) MODEL WITH BOX-COX TRANSFORMED DEPENDENT VARIABLE
Directory of Open Access Journals (Sweden)
Herni Utami
2014-03-01
Full Text Available Box-Cox transformation is often used to reduce heterogeneity and to achieve a symmetric distribution of the response variable. In this paper, we estimate the parameters of the Box-Cox transformed ARCH(1) model using the second-order least square method, and we then study the consistency and asymptotic normality of the second-order least square (SLS) estimators. SLS estimation was introduced by Wang (2003, 2004) to estimate the parameters of nonlinear regression models with independent and identically distributed errors.
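The Box-Cox transformation the record above builds on can be written down in a few lines. The following is a minimal sketch (the function name and the limiting case for λ = 0 are standard, but this is our illustration, not the paper's estimation code):

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a positive observation y with parameter lam.

    For lam != 0:  (y**lam - 1) / lam
    For lam == 0:  log(y)  (the limit of the expression above as lam -> 0)
    """
    if y <= 0:
        raise ValueError("Box-Cox requires y > 0")
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# The transform is continuous in lam: for lam near 0 it approaches log(y).
print(boxcox(1.0, 0.5))   # -> 0.0 (y = 1 maps to 0 for every lam)
print(round(boxcox(2.0, 1e-8), 6), round(math.log(2.0), 6))
```

In practice λ is chosen from the data (e.g., by maximum likelihood); the paper's second-order least square machinery then operates on the transformed response.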
Brady, Timothy F.; Tenenbaum, Joshua B.
2013-01-01
When remembering a real-world scene, people encode both detailed information about specific objects and higher order information like the overall gist of the scene. However, formal models of change detection, like those used to estimate visual working memory capacity, assume observers encode only a simple memory representation that includes no…
Disturbance estimation of nuclear power plant by using reduced-order model
International Nuclear Information System (INIS)
Tashima, Shin-ichi; Wakabayashi, Jiro
1983-01-01
An estimation method is proposed for the multiplex disturbances which occur in a nuclear power plant. The method is composed of two parts: (i) the identification of a simplified multi-input, multi-output model to describe the related system response, and (ii) the design of a Kalman filter to estimate the multiplex disturbance. Concerning the simplified model, several observed signals are first selected as output variables which can well represent the system response caused by the disturbances. A reduced-order model is utilized for designing the disturbance estimator. This is based on the following two considerations. The first is that the disturbance is assumed to be of a quasistatic nature. The other is based on the intuition that there exist a few dominant modes between the disturbances and the selected observed signals, and that most of the remaining non-dominant modes may not affect the accuracy of the disturbance estimator. The reduced-order model is further transformed to a single-output model using a linear combination of the output signals, whereby the standard procedure of structural identification is avoided. The parameters of the model thus transformed are calculated by the generalized least square method. As for the multiplex disturbance estimator, the Kalman filtering method is applied by balancing the following three requirements: (a) quick response to disturbance, (b) reduction of estimation error in the presence of observation noises, and (c) elimination of cross-interference between the disturbances to the plant and the estimates from the Kalman filter. The effectiveness of the proposed method is verified through computer experiments using a BWR plant simulator. (author)
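The core idea above — treat a quasistatic disturbance as a slowly drifting state and let a Kalman filter track it — can be sketched with a scalar toy filter (all variable names and numbers below are illustrative; this is not the paper's plant model):

```python
def kalman_disturbance(measurements, q=1e-4, r=1.0):
    """Scalar Kalman filter: the unknown disturbance d is modeled as a
    random walk (quasistatic), and each measurement observes d plus noise.

    q: process-noise variance (how fast d may drift)
    r: measurement-noise variance
    Returns the sequence of disturbance estimates.
    """
    d_hat, p = 0.0, 1.0          # initial estimate and covariance
    estimates = []
    for z in measurements:
        p += q                    # predict: random walk inflates covariance
        k = p / (p + r)           # Kalman gain
        d_hat += k * (z - d_hat)  # update with the innovation
        p *= (1.0 - k)
        estimates.append(d_hat)
    return estimates

# A constant disturbance of 5.0 observed with low-noise sensors is
# recovered within a few steps:
est = kalman_disturbance([5.0] * 20, r=0.01)
print(round(est[-1], 3))
```

The trade-offs (a)–(c) in the abstract correspond to tuning q and r: a larger q gives quicker response to disturbance changes, a larger r suppresses observation noise at the cost of slower convergence.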
Potocki, J K; Tharp, H S
1993-01-01
The success of treating cancerous tissue with heat depends on the temperature elevation, the amount of tissue elevated to that temperature, and the length of time that the tissue temperature is elevated. In clinical situations the temperature of most of the treated tissue volume is unknown, because only a small number of temperature sensors can be inserted into the tissue. A state space model based on a finite difference approximation of the bioheat transfer equation (BHTE) is developed for identification purposes. A full-order extended Kalman filter (EKF) is designed to estimate both the unknown blood perfusion parameters and the temperature at unmeasured locations. Two reduced-order estimators are designed as computationally less intensive alternatives to the full-order EKF. Simulation results show that the success of the estimation scheme depends strongly on the number and location of the temperature sensors. Superior results occur when a temperature sensor exists in each unknown blood perfusion zone, and the number of sensors is at least as large as the number of unknown perfusion zones. Unacceptable results occur when there are more unknown perfusion parameters than temperature sensors, or when the sensors are placed in locations that do not sample the unknown perfusion information.
Directory of Open Access Journals (Sweden)
Renxin Xiao
2016-03-01
Full Text Available In order to properly manage lithium-ion batteries of electric vehicles (EVs), it is essential to build the battery model and estimate the state of charge (SOC). In this paper, the fractional order forms of the Thevenin and Partnership for a New Generation of Vehicles (PNGV) models are built, whose model parameters, including the fractional orders and the corresponding resistance and capacitance values, are simultaneously identified based on a genetic algorithm (GA). The relationships between different model parameters and SOC are established and analyzed. The calculation precisions of the fractional order model (FOM) and integral order model (IOM) are validated and compared under hybrid test cycles. Finally, an extended Kalman filter (EKF) is employed to estimate the SOC based on the different models. The results prove that the FOMs can simulate the output voltage more accurately and that the fractional order EKF (FOEKF) can estimate the SOC more precisely under dynamic conditions.
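What a "fractional order" element means numerically can be illustrated with the Grünwald-Letnikov discretization, which several fractional battery models (including the kind described above) rely on. This is a generic sketch of the operator, not the paper's identification code:

```python
def gl_fractional_diff(x, alpha, h=1.0):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    sampled signal x with step h. For alpha = 1 it reduces to the ordinary
    backward difference (x[k] - x[k-1]) / h.
    """
    n = len(x)
    # Binomial-type weights w_j = (-1)**j * C(alpha, j), built recursively.
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - alpha) / j)
    out = []
    for k in range(n):
        acc = sum(w[j] * x[k - j] for j in range(k + 1))
        out.append(acc / h ** alpha)
    return out

x = [0.0, 1.0, 2.0, 3.0]           # a ramp
print(gl_fractional_diff(x, 1.0))  # backward differences of the ramp
```

Note the growing memory: unlike an integer-order difference, the order-α sum at step k involves all earlier samples, which is why fractional models capture the long-memory diffusion behavior of porous electrodes.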
BAYESIAN PARAMETER ESTIMATION IN A MIXED-ORDER MODEL OF BOD DECAY. (U915590)
We describe a generalized version of the BOD decay model in which the reaction is allowed to assume an order other than one. This is accomplished by making the exponent on BOD concentration a free parameter to be determined by the data. This "mixed-order" model may be ...
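The "mixed-order" BOD idea above is simply dL/dt = -k L**n with the exponent n left free. A toy forward-Euler integration (our own discretization, not the authors' Bayesian estimation machinery; parameter values are illustrative) shows that n = 1 recovers the classical exponential decay:

```python
import math

def bod_mixed_order(l0, k, n, t_end, dt=0.001):
    """Integrate dL/dt = -k * L**n by forward Euler.

    n = 1 recovers the classical first-order BOD decay L(t) = L0 * exp(-k t);
    the 'mixed-order' model lets the data choose the exponent n.
    """
    l, t = l0, 0.0
    while t < t_end:
        l -= k * l ** n * dt
        t += dt
    return l

# With n = 1 the Euler solution tracks the exponential closed form:
print(round(bod_mixed_order(10.0, 0.23, 1.0, 5.0), 3),
      round(10.0 * math.exp(-0.23 * 5.0), 3))
```

In the Bayesian setting described above, n (and k) would be treated as uncertain parameters with posteriors inferred from BOD time-series data.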
Directory of Open Access Journals (Sweden)
Xin Lu
2018-03-01
Full Text Available In recent years, the fractional order model has been employed for state of charge (SOC) estimation, the non-integer differentiation order being expressed as a function of recursive factors defining the fractality of charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution; therefore, the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC after the battery's internal electrochemical reaction reaches equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
Connection between weighted LPC and higher-order statistics for AR model estimation
Kamp, Y.; Ma, C.
1993-01-01
This paper establishes the relationship between a weighted linear prediction method used for robust analysis of voiced speech and autoregressive modelling based on higher-order statistics, known as cumulants.
On the parameter estimation of first order IMA model corrupted with ...
African Journals Online (AJOL)
In this paper, we showed how the autocovariance functions can be used to estimate the true parameters of IMA(1) models corrupted with white noise. We performed simulation studies to demonstrate our findings. The simulation studies showed that under the presence of errors in not more than 30% of total data points, our ...
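A moment-based route from autocovariances to an MA(1)-type parameter can be sketched as follows. For a pure MA(1) component with parameter θ, the lag-1 autocorrelation is ρ = θ/(1 + θ²); inverting this is a textbook identity and is shown here only as an illustration (the paper's estimator additionally accounts for the added white noise):

```python
import math

def ma1_theta_from_rho(rho):
    """Invert rho = theta / (1 + theta**2) for the invertible root |theta| < 1.

    Solving rho * theta**2 - theta + rho = 0 gives
    theta = (1 - sqrt(1 - 4 * rho**2)) / (2 * rho), valid for |rho| < 0.5.
    """
    if abs(rho) >= 0.5:
        raise ValueError("MA(1) requires |rho| < 0.5")
    if rho == 0:
        return 0.0
    return (1.0 - math.sqrt(1.0 - 4.0 * rho * rho)) / (2.0 * rho)

theta = ma1_theta_from_rho(0.4)
print(round(theta, 4))                        # -> 0.5, the invertible root
print(round(theta / (1 + theta ** 2), 4))     # -> 0.4, recovering rho
```

In practice ρ would itself be estimated from the sample autocovariances of the differenced series.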
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
Order statistics & inference: estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co...
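A classic example of estimation from order statistics, in the spirit of the volume described above (our choice of example, not one taken from the book): estimating the upper endpoint of a discrete uniform population from the sample maximum.

```python
def uniform_max_estimate(sample):
    """Estimate the upper endpoint N of a discrete uniform {1, ..., N}
    from a sample drawn without replacement, using only two order
    statistics: the sample maximum m and the sample size k.

    The minimum-variance unbiased estimator is m + m/k - 1.
    """
    k, m = len(sample), max(sample)
    return m + m / k - 1

print(uniform_max_estimate([2, 5, 9]))  # -> 11.0
```

The estimator corrects the downward bias of the raw maximum by the expected gap between the largest observation and the true endpoint.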
Hadwin, Paul J; Peterson, Sean D
2017-04-01
The Bayesian framework for parameter inference provides a basis from which subject-specific reduced-order vocal fold models can be generated. Previously, it has been shown that a particle filter technique is capable of producing estimates and associated credibility intervals of time-varying reduced-order vocal fold model parameters. However, the particle filter approach is difficult to implement and has a high computational cost, which can be barriers to clinical adoption. This work presents an alternative estimation strategy based upon Kalman filtering aimed at reducing the computational cost of subject-specific model development. The robustness of this approach to Gaussian and non-Gaussian noise is discussed. The extended Kalman filter (EKF) approach is found to perform very well in comparison with the particle filter technique at dramatically lower computational cost. Based upon the test cases explored, the EKF is comparable in terms of accuracy to the particle filter technique when more than 6000 particles are employed; if fewer particles are employed, the EKF actually performs better. For comparable levels of accuracy, the solution time is reduced by 2 orders of magnitude when employing the EKF. By virtue of the approximations used in the EKF, however, the credibility intervals tend to be slightly underpredicted.
Sinusoidal Order Estimation Using Angles between Subspaces
Directory of Open Access Journals (Sweden)
Søren Holdt Jensen
2009-01-01
Full Text Available We consider the problem of determining the order of a parametric model from a noisy signal based on the geometry of the space. More specifically, we do this using the nontrivial angles between the candidate signal subspace model and the noise subspace. The proposed principle is closely related to the subspace orthogonality property known from the MUSIC algorithm, and we study its properties and compare it to other related measures. For the problem of estimating the number of complex sinusoids in white noise, a computationally efficient implementation exists, and this problem is therefore considered in detail. In computer simulations, we compare the proposed method to various well-known methods for order estimation. These show that the proposed method outperforms the previously published subspace methods and is more robust to colored noise.
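The angles between subspaces that the method above is built on are the principal angles, computable from an SVD of the product of orthonormal bases. A minimal numerical sketch (generic linear algebra, not the paper's order-selection rule):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B.

    Orthonormalize both bases with QR; the singular values of
    Q_A^T Q_B are then the cosines of the principal angles.
    """
    qa, _ = np.linalg.qr(A)
    qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

# The x-axis vs. the y-axis in R^2: a single angle of 90 degrees.
A = np.array([[1.0], [0.0]])
B = np.array([[0.0], [1.0]])
angles = principal_angles(A, B)
print(np.degrees(angles))  # -> [90.]
```

For order estimation, one would compare candidate signal-subspace bases (from the data covariance eigenvectors) against the estimated noise subspace and pick the order whose angles are closest to orthogonal.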
Reduced order ARMA spectral estimation of ocean waves
Digital Repository Service at National Institute of Oceanography (India)
Mandal, S.; Witz, J.A.; Lyons, G.J.
After selecting the initial model order based on the Akaike Information Criterion method, a novel model order reduction technique is applied to obtain the final reduced order ARMA model. First estimates of the higher order autoregressive coefficients... of the reduced order ARMA model are obtained. The moving average part is determined based on partial fraction and recursive methods. The above system identification models and model order reduction technique are shown here to be successfully applied...
Directory of Open Access Journals (Sweden)
Lijuan Cui
2016-11-01
Full Text Available We monitored the water quality and hydrological conditions of a horizontal subsurface constructed wetland (HSSF-CW) in Beijing, China, for two years. We simulated the area-based constant and the temperature coefficient with the first-order kinetic model. We examined the relationships between the nitrogen (N) removal rate, N load, seasonal variations in the N removal rate, and environmental factors such as the area-based constant, temperature, and dissolved oxygen (DO). The effluent ammonia (NH4+-N) and nitrate (NO3−-N) concentrations were significantly lower than the influent concentrations (p < 0.01, n = 38). The NO3−-N load was significantly correlated with the removal rate (R2 = 0.96, p < 0.01), but the NH4+-N load was not correlated with the removal rate (R2 = 0.02, p > 0.01). The area-based constants of NO3−-N and NH4+-N at 20 °C were 27 ± 26 (mean ± SD) and 14 ± 10 m∙year−1, respectively. The temperature coefficients for NO3−-N and NH4+-N were estimated at 1.004 and 0.960, respectively. The area-based constants for NO3−-N and NH4+-N were not correlated with temperature (p > 0.01). The NO3−-N area-based constant was correlated with the corresponding load (R2 = 0.96, p < 0.01). The NH4+-N area rate was correlated with DO (R2 = 0.69, p < 0.01), suggesting that the factors that influenced the N removal rate in this wetland met Liebig's law of the minimum.
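The area-based first-order model behind those rate constants can be stated compactly. The sketch below is simplified (plug flow, no background concentration term; symbols and numbers are ours, not the study's data):

```python
import math

def area_rate_constant(c_in, c_out, q):
    """First-order area-based rate constant k (m/year) from influent and
    effluent concentrations, assuming plug flow: C_out = C_in * exp(-k / q),
    where q is the hydraulic loading rate (m/year).
    """
    return q * math.log(c_in / c_out)

def arrhenius_adjust(k20, theta, temp_c):
    """Adjust a 20 C rate constant with temperature coefficient theta:
    k(T) = k20 * theta ** (T - 20)."""
    return k20 * theta ** (temp_c - 20.0)

k = area_rate_constant(10.0, 2.0, 20.0)  # e.g. 10 -> 2 mg/L at q = 20 m/yr
print(round(k, 2))
print(round(arrhenius_adjust(27.0, 1.004, 20.0), 2))  # -> 27.0 at 20 C
```

A temperature coefficient θ near 1 (as the study found, 1.004 and 0.960) means k barely changes with temperature, consistent with the reported lack of correlation between the area-based constants and temperature.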
International Nuclear Information System (INIS)
Tylee, J.L.
1980-01-01
A low-order, nonlinear model of the Loss-of-Fluid Test (LOFT) reactor plant, for use in Kalman filter estimators, is developed, described, and evaluated. This model consists of 31 differential equations and represents all major subsystems of both the primary and secondary sides of the LOFT plant. Comparisons between model calculations and available LOFT power range testing transients demonstrate the accuracy of the low-order model. The nonlinear model is numerically linearized for future implementation in Kalman filter and optimal control algorithms. The linearized model is shown to be an adequate representation of the nonlinear plant dynamics.
Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems
N'Doye, Ibrahima; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using fractional-order adaptive fault observer. Sufficient conditions for the asymptotical convergence of the fractional-order state estimation error, the conventional integer-order and the fractional-order faults estimation error are derived in terms of linear matrix inequalities (LMIs) formulation by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach where the fractional-order α belongs to 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.
Fernandez, R.; Deveaux, V.
2010-01-01
We provide a formal definition and study the basic properties of partially ordered chains (POC). These systems were proposed to model textures in image processing and to represent independence relations between random variables in statistics (in the latter case they are known as Bayesian networks).
On nonlinear reduced order modeling
International Nuclear Information System (INIS)
Abdel-Khalik, Hany S.
2011-01-01
When applied to a model that receives n input parameters and predicts m output responses, a reduced order model estimates the variations in the m outputs of the original model resulting from variations in its n inputs. While direct execution of the forward model could provide these variations, reduced order modeling plays an indispensable role for most real-world complex models. This follows because the solutions of complex models are expensive in terms of required computational overhead, thus rendering their repeated execution computationally infeasible. To overcome this problem, reduced order modeling determines a relationship (often referred to as a surrogate model) between the input and output variations that is much cheaper to evaluate than the original model. While it is desirable to seek highly accurate surrogates, the computational overhead quickly becomes intractable, especially for high-dimensional models, n ≫ 10. In this manuscript, we demonstrate a novel reduced order modeling method for building a surrogate model that employs only 'local first-order' derivatives and a new tensor-free expansion to efficiently identify all the important features of the original model to reach a predetermined level of accuracy. This is achieved via a hybrid approach in which local first-order derivatives (i.e., gradients) of a pseudo response (a pseudo response represents a random linear combination of the original model's responses) are randomly sampled utilizing a tensor-free expansion around some reference point, with the resulting gradient information aggregated in a subspace (denoted the active subspace) of dimension much smaller than the dimension of the input parameter space. The active subspace is then sampled employing state-of-the-art global sampling techniques. The proposed method hybridizes the use of global sampling methods for uncertainty quantification and local variational methods for sensitivity analysis. In a similar manner to
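The gradient-aggregation step described above can be sketched in a few lines: stack sampled gradients and take the dominant singular directions as the active subspace. This is a generic illustration of the active-subspace idea (the toy model and all names are ours, not the manuscript's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def active_subspace(grad_samples, k):
    """Estimate a k-dimensional active subspace from sampled gradients.

    Stack the gradient samples row-wise and take the top-k right singular
    vectors: the directions along which the model output varies most.
    """
    g = np.asarray(grad_samples)
    _, s, vt = np.linalg.svd(g, full_matrices=False)
    return vt[:k].T, s

# Toy model f(x) = (a . x)**2 in 5 dimensions: its gradient always points
# along a, so the active subspace is one-dimensional and spanned by a.
a = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
xs = rng.standard_normal((50, 5))
grads = [2.0 * (a @ x) * a for x in xs]
basis, s = active_subspace(grads, 1)
print(abs(basis[:, 0] @ a) / np.linalg.norm(a))  # ~1.0: aligned with a
```

Once the low-dimensional basis is in hand, global sampling for uncertainty quantification proceeds in the reduced coordinates, which is the hybridization the abstract refers to.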
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-09-01
Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored. These include: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). The second paper concludes the series by presenting a multi-stage online parameter identification technique based on a weighted recursive least quadratic squares parameter estimator to determine the parameters of the proposed battery model from the first paper during operation. A novel mutation based algorithm is developed to determine the nonlinear current dependency of the charge-transfer resistance. The influence of diffusion is determined by an on-line identification technique and verified on several batteries at different operation conditions. This method guarantees a short response time and, together with its fully recursive structure, assures a long-term stable monitoring of the battery parameters. The relative dynamic voltage prediction error of the algorithm is reduced to 2%. The changes of parameters are used to determine the states of the battery. The algorithm is real-time capable and can be implemented on embedded systems.
Directory of Open Access Journals (Sweden)
Bizhong Xia
2017-08-01
Full Text Available Accurate state of charge (SOC) estimation can prolong lithium-ion battery life and improve its performance in practice. This paper proposes a new method for SOC estimation. The second-order resistor-capacitor (2RC) equivalent circuit model (ECM) is applied to describe the dynamic behavior of the lithium-ion battery when deriving the state space equations. A novel method for SOC estimation is then presented. This method does not require any matrix calculation, so the computation cost can be very low, making it more suitable for hardware implementation. The Federal Urban Driving Schedule (FUDS), the New European Driving Cycle (NEDC), and the West Virginia Suburban Driving Schedule (WVUSUB) experiments are carried out to evaluate the performance of the proposed method. Experimental results show that the SOC estimation error can converge to the 3% error boundary within 30 seconds when the initial SOC estimation error is 20%, and the proposed method can maintain an estimation error of less than 3% with 1% voltage noise and 5% current noise. Further, the proposed method has excellent robustness against parameter disturbance. Also, it has higher estimation accuracy than the extended Kalman filter (EKF), but with decreased hardware requirements and a faster convergence rate.
Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello
2017-11-01
State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically-scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
DEFF Research Database (Denmark)
Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter
2015-01-01
Methane (CH4) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the model-required (IPCC and Afvalzorg)...... categories. In general, the single-phase model, LandGEM, significantly overestimated CH4 generation, because it applied default values that are too high for the key parameters to handle low-organic waste scenarios. The key parameters were the biochemical CH4 potential (BMP) and the CH4 generation rate constant (k...... landfills (from the start of disposal until 2020 and until 2100). Through a CH4 mass balance approach, fugitive CH4 emissions from whole sites and a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field...
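The single-phase first-order decay equation these models share can be sketched directly. The form below follows the LandGEM-style summation; the parameter values in the example are illustrative only, not the Danish site values:

```python
import math

def fod_methane(disposal_by_year, l0, k, year):
    """Single-phase first-order decay (LandGEM-style) CH4 generation rate
    in 'year' (same mass unit as the disposal data, per year):

        Q(year) = sum_i k * L0 * M_i * exp(-k * (year - t_i))

    disposal_by_year: {deposit_year: waste_mass M_i}
    l0: CH4 generation potential (mass CH4 per mass waste)
    k:  decay rate constant (1/year)
    """
    return sum(k * l0 * m * math.exp(-k * (year - t))
               for t, m in disposal_by_year.items() if t <= year)

# One batch of 1000 t deposited in 2000, with L0 = 0.1 and k = 0.05:
q2000 = fod_methane({2000: 1000.0}, 0.1, 0.05, 2000)
print(round(q2000, 2))  # -> 5.0 (k * L0 * M at the moment of disposal)
```

The overestimation the study attributes to LandGEM corresponds to its default L0 and k being calibrated for organic-rich municipal waste, both too high for low-organic Danish waste streams.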
Energy Technology Data Exchange (ETDEWEB)
Stetzel, KD; Aldrich, LL; Trimboli, MS; Plett, GL
2015-03-15
This paper addresses the problem of estimating the present value of electrochemical internal variables in a lithium-ion cell in real time, using readily available measurements of cell voltage, current, and temperature. The variables that can be estimated include any desired set of reaction flux and solid and electrolyte potentials and concentrations at any set of one-dimensional spatial locations, in addition to more standard quantities such as state of charge. The method uses an extended Kalman filter along with a one-dimensional physics-based reduced-order model of cell dynamics. Simulations show excellent and robust predictions having dependable error bounds for most internal variables. (C) 2014 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Lorentzen, Rolf Johan
2002-04-01
The main objective of this thesis is to develop methods which can be used to improve predictions of two-phase flow (liquid and gas) in pipelines and wells. More reliable predictions are accomplished by improvements of numerical methods, and by using measured data to tune the mathematical model which describes the two-phase flow. We present a way to extend simple numerical methods to second order spatial accuracy. These methods are implemented, tested and compared with a second order Godunov-type scheme. In addition, a new (and faster) version of the Godunov-type scheme utilizing primitive (observable) variables is presented. We introduce a least squares method which is used to tune parameters embedded in the two-phase flow model. This method is tested using synthetic generated measurements. We also present an ensemble Kalman filter which is used to tune physical state variables and model parameters. This technique is tested on synthetic generated measurements, but also on several sets of full-scale experimental measurements. The thesis is divided into an introductory part, and a part consisting of four papers. The introduction serves both as a summary of the material treated in the papers, and as supplementary background material. It contains five sections, where the first gives an overview of the main topics which are addressed in the thesis. Section 2 contains a description and discussion of mathematical models for two-phase flow in pipelines. Section 3 deals with the numerical methods which are used to solve the equations arising from the two-phase flow model. The numerical scheme described in Section 3.5 is not included in the papers. This section includes results in addition to an outline of the numerical approach. Section 4 gives an introduction to estimation theory, and leads towards application of the two-phase flow model. The material in Sections 4.6 and 4.7 is not discussed in the papers, but is included in the thesis as it gives an important validation
Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin
2013-07-01
The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, i.e., reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of ANFIS and ROANFIS models based on both developed discrepancy ratio and threshold statistics revealed that the selected ROANFIS model was superior. Pearson correlation coefficient (R) and root mean square error for the best fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. Besides, the uncertainty analysis also showed that bracketed predictions by 95% confidence bound and d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.
Chatterji, Gano
2011-01-01
Conclusions: Validated the fuel estimation procedure using flight test data. A good fuel model can be created if weight and fuel data are available. Error in assumed takeoff weight results in similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
Joint fundamental frequency and order estimation using optimal filtering
Directory of Open Access Journals (Sweden)
Jakobsson Andreas
2011-01-01
Full Text Available Abstract In this paper, the problem of jointly estimating the number of harmonics and the fundamental frequency of periodic signals is considered. We show how this problem can be solved using a number of methods that either are or can be interpreted as filtering methods in combination with a statistical model selection criterion. The methods in question are the classical comb filtering method, a maximum likelihood method, and some filtering methods based on optimal filtering that have recently been proposed, while the model selection criterion is derived herein from the maximum a posteriori principle. The asymptotic properties of the optimal filtering methods are analyzed and an order-recursive efficient implementation is derived. Finally, the estimators have been compared in computer simulations that show that the optimal filtering methods perform well under various conditions. It has previously been demonstrated that the optimal filtering methods perform extremely well with respect to fundamental frequency estimation under adverse conditions, and this fact, combined with the new results on model order estimation and efficient implementation, suggests that these methods form an appealing alternative to classical methods for analyzing multi-pitch signals.
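A toy version of the filtering idea above is harmonic summation: score each candidate fundamental by the signal power its harmonics capture, then pick the maximizer. This is a simplified sketch in the spirit of comb filtering, not the paper's optimal filters or its MAP order-selection criterion:

```python
import math

def harmonic_cost(x, fs, f0, order):
    """Power captured by 'order' harmonics of f0: for each harmonic, the
    squared magnitude of the signal's correlation with a complex sinusoid."""
    n = len(x)
    cost = 0.0
    for h in range(1, order + 1):
        w = 2.0 * math.pi * h * f0 / fs
        re = sum(x[t] * math.cos(w * t) for t in range(n))
        im = sum(x[t] * math.sin(w * t) for t in range(n))
        cost += (re * re + im * im) / n
    return cost

def estimate_f0(x, fs, candidates, order):
    """Grid-search the fundamental that maximizes the harmonic-summation cost."""
    return max(candidates, key=lambda f0: harmonic_cost(x, fs, f0, order))

fs = 8000.0
x = [math.sin(2 * math.pi * 200 * t / fs)
     + 0.5 * math.sin(2 * math.pi * 400 * t / fs) for t in range(800)]
print(estimate_f0(x, fs, [100.0, 150.0, 200.0, 250.0], order=2))  # -> 200.0
```

Joint order estimation would additionally penalize the number of harmonics (as the MAP-derived criterion in the paper does), so that adding spurious harmonics does not inflate the cost.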
High-order computer-assisted estimates of topological entropy
Grote, Johannes
The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincaré maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^-10–10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
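A zeroth-order flavor of the underlying idea can be shown with naive interval arithmetic, whose overestimation (the dependency problem) is exactly what the high-order polynomial part of a Taylor Model suppresses. This toy sketch uses plain floating point, not the verified outward-rounded arithmetic of the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

# Enclose f(x) = x*(1-x) over [0, 0.5]; the true range is [0, 0.25]
x = Interval(0.0, 0.5)
one = Interval(1.0, 1.0)
naive = x * (one - x)                     # dependency problem: overestimates
halves = [Interval(0.0, 0.25), Interval(0.25, 0.5)]
split = [h * (one - h) for h in halves]   # splitting tightens the enclosure
print(naive)                      # Interval(lo=0.0, hi=0.5)
print(max(h.hi for h in split))   # 0.375, closer to the true 0.25
```

Taylor Models go further than splitting: they carry the polynomial dependence on x symbolically and enclose only the small truncation remainder in an interval, which is how C0-errors of 10^-10–10^-14 become reachable.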
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes a two-stage estimation algorithm to solve the problem of joint estimation of the parameters and the fractional differentiation orders of a linear continuous-time fractional system with non-commensurate orders. The proposed algorithm combines the modulating functions and the first-order Newton methods. Sufficient conditions ensuring the convergence of the method are provided. An error analysis in the discrete case is performed. Moreover, the method is extended to the joint estimation of smooth unknown input and fractional differentiation orders. The performance of the proposed approach is illustrated with different numerical examples. Furthermore, a potential application of the algorithm is proposed which consists in the estimation of the differentiation orders of a fractional neurovascular model along with the neural activity considered as input for this model.
Estimation of uncertainties from missing higher orders in perturbative calculations
International Nuclear Information System (INIS)
Bagnaschi, E.
2015-05-01
In this proceeding we present the results of our recent study (hep-ph/1409.5036) of the statistical performance of two different approaches, Scale Variation (SV) and the Bayesian model of Cacciari and Houdeau (CH) (hep-ph/1105.5152) (which we also extend to observables with initial state hadrons), for the estimation of Missing Higher-Order Uncertainties (MHOUs) (hep-ph/1307.1843) in perturbation theory. The behavior of the models is determined by analyzing, on a wide set of observables, how successfully the MHOU intervals they produce predict the next orders. We observe that the Bayesian model behaves consistently, producing intervals at 68% Degree of Belief (DoB) comparable with the scale variation intervals with a rescaling factor r larger than 2 and closer to 4. Concerning SV, our analysis allows the derivation of a heuristic Confidence Level (CL) for the intervals. We find that assigning a CL of 68% to the intervals obtained with the conventional choice of varying the scales within a factor of two with respect to the central scale could potentially lead to an underestimation of the uncertainties in the case of observables with initial state hadrons.
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
In this paper, a second-order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, we give some examples to verify the feasibility of our result.
Estimates for lower order eigenvalues of a clamped plate problem
Cheng, Qing-Ming; Huang, Guangyue; Wei, Guoxin
2009-01-01
For a bounded domain $\\Omega$ in a complete Riemannian manifold $M^n$, we study estimates for lower order eigenvalues of a clamped plate problem. We obtain universal inequalities for lower order eigenvalues. We would like to remark that our results are sharp.
Are Low-order Covariance Estimates Useful in Error Analyses?
Baker, D. F.; Schimel, D.
2005-12-01
Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner et al. (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? Here we compare uncertainties and 'information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb
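As a small illustration of the batch least-squares machinery discussed above, the following sketch computes the full-rank posterior covariance P = (HᵀR⁻¹H + B⁻¹)⁻¹ for a toy inversion; the dimensions, transport matrix, and covariances are all invented for illustration, not taken from any real inversion.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 8                        # observations, flux unknowns (toy sizes)
H = rng.standard_normal((m, n))     # linearized transport operator
R = 0.1 * np.eye(m)                 # observation-error covariance
B = np.eye(n)                       # prior covariance of the fluxes

# Batch least-squares posterior covariance: P = (H^T R^-1 H + B^-1)^-1
P = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))

sigmas = np.sqrt(np.diag(P))                           # error bars
uncertainty_reduction = 1 - sigmas / np.sqrt(np.diag(B))
print(sigmas.round(3))
```

It is this explicit inverse that becomes infeasible at fine flux resolution, which is why variational and ensemble methods return only low-rank approximations of P.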
Reduced Order Modeling in General Relativity
Tiglio, Manuel
2014-03-01
Reduced Order Modeling is an emerging yet fast developing field in gravitational wave physics. The main goals are to enable fast modeling and parameter estimation of any detected signal, along with rapid matched-filter detection. I will focus on the first two. Some accomplishments include being able to replace, with essentially no loss of physical accuracy, the original models with surrogate ones (which are not effective ones, that is, they do not simplify the physics but go on a very different track, exploiting the particulars of the waveform family under consideration and state of the art dimensional reduction techniques) which are very fast to evaluate. For example, for EOB models they are at least around 3 orders of magnitude faster than solving the original equations, with physically equivalent results. For numerical simulations the speedup is at least 11 orders of magnitude. For parameter estimation our current numbers are about bringing ~100 days for a single SPA inspiral binary neutron star Bayesian parameter estimation analysis to under a day. More recently, it has been shown that the full precessing problem for, say, 200 cycles, can be represented, through some new ideas, by a remarkably compact set of carefully chosen reduced basis waveforms (~10-100, depending on the accuracy requirements). I will highlight what I personally believe are the challenges to face next in this subarea of GW physics and where efforts should be directed. This talk will summarize work in collaboration with: Harbir Antil (GMU), Jonathan Blackman (Caltech), Priscila Canizares (IoA, Cambridge, UK), Sarah Caudill (UWM), Jonathan Gair (IoA, Cambridge, UK), Scott Field (UMD), Chad R. Galley (Caltech), Frank Herrmann (Germany), Jan Hesthaven (EPFL, Switzerland), Jason Kaye (Brown, Stanford & Courant), Evan Ochsner (UWM), Ricardo Nochetto (UMD), Vivien Raymond (LIGO, Caltech), Rory Smith (LIGO, Caltech), Bela Szilagyi (Caltech) and MT (UMD & Caltech).
Global weighted estimates for second-order nondivergence elliptic ...
Indian Academy of Sciences (India)
Fengping Yao
2018-03-21
Mar 21, 2018 ... One of the key a priori estimates in the theory of second-order elliptic .... It is well known that the maximal functions satisfy strong p–p .... Here we prove the following auxiliary result, which will be a crucial ingredient in the proof.
Order Tracking Based on Robust Peak Search Instantaneous Frequency Estimation
International Nuclear Information System (INIS)
Gao, Y; Guo, Y; Chi, Y L; Qin, S R
2006-01-01
Order tracking plays an important role in non-stationary vibration analysis of rotating machinery, especially during run-up or coast-down. An instantaneous frequency estimation (IFE) based order tracking of rotating machinery is introduced, in which a peak-search algorithm over the spectrogram of a time-frequency analysis is employed to obtain the IFE of vibrations. An improvement to the peak search is proposed that prevents strong non-order components or noise from disturbing the search. Compared with traditional methods of order tracking, IFE-based order tracking is simpler to apply and depends only on software. Tests verify the validity of the method. This method is an effective supplement to traditional methods, and its application in condition monitoring and diagnosis of rotating machinery is foreseeable.
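A minimal sketch of a constrained spectrogram peak search (an illustration, not the authors' algorithm): restricting the search to a band around the previous frame's estimate keeps a strong non-order tone from capturing the track during a simulated run-up. All signal and window parameters are invented.

```python
import numpy as np

fs = 2000
t = np.arange(0, 2.0, 1 / fs)
f_inst = 100 + 50 * t                           # run-up: 100 -> 200 Hz
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.sin(phase) + 1.5 * np.sin(2 * np.pi * 500 * t)  # strong non-order tone

def track_if(x, fs, nwin=256, hop=128, f_start=100.0, band=20.0):
    """Peak search constrained to a band around the previous estimate."""
    freqs = np.fft.rfftfreq(nwin, 1 / fs)
    w = np.hanning(nwin)
    est, f_prev = [], f_start
    for i in range(0, len(x) - nwin, hop):
        mag = np.abs(np.fft.rfft(w * x[i:i + nwin]))
        mask = np.abs(freqs - f_prev) <= band   # ignore distant components
        f_prev = freqs[np.argmax(np.where(mask, mag, 0.0))]
        est.append(f_prev)
    return np.array(est)

f_hat = track_if(x, fs)
print(f_hat[0], f_hat[-1])   # tracks from ~100 Hz up toward ~200 Hz
```

An unconstrained argmax would lock onto the louder 500 Hz interferer; the band constraint is the essence of the robustness improvement described above.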
Optimal heavy tail estimation – Part 1: Order selection
Directory of Open Access Journals (Sweden)
M. Mudelsee
2017-12-01
Full Text Available The tail probability, P, of the distribution of a variable is important for risk analysis of extremes. Many variables in complex geophysical systems show heavy tails, where P decreases with the value, x, of a variable as a power law with a characteristic exponent, α. Accurate estimation of α on the basis of data is currently hindered by the problem of the selection of the order, that is, the number of largest x values to utilize for the estimation. This paper presents a new, widely applicable, data-adaptive order selector, which is based on computer simulations and brute force search. It is the first in a set of papers on optimal heavy tail estimation. The new selector outperforms competitors in a Monte Carlo experiment, where simulated data are generated from stable distributions and AR(1) serial dependence. We calculate error bars for the estimated α by means of simulations. We illustrate the method on an artificial time series. We apply it to an observed, hydrological time series from the River Elbe and find an estimated characteristic exponent of 1.48 ± 0.13. This result indicates finite mean but infinite variance of the statistical distribution of river runoff.
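A classic order-dependent tail estimator of this kind is the Hill estimator, where the choice of k (the number of largest values used) is exactly the order-selection problem described above. This toy sketch on synthetic Pareto data shows the estimator and the k-scan that a selector must resolve; it is not the paper's data-adaptive selector.

```python
import numpy as np

def hill_alpha(x, k):
    """Hill estimator of the tail exponent using the k largest values."""
    xs = np.sort(x)[::-1]                       # descending order statistics
    return 1.0 / np.mean(np.log(xs[:k]) - np.log(xs[k]))

rng = np.random.default_rng(42)
alpha_true = 1.5
x = rng.pareto(alpha_true, size=20_000) + 1.0   # Pareto: P(X > x) = x**-alpha

# The order-selection problem: the estimate varies with the chosen k
ks = np.arange(50, 2000, 50)
estimates = np.array([hill_alpha(x, k) for k in ks])
print(estimates[:5].round(2))
```

For a pure Pareto tail every k gives a nearly unbiased estimate; for real data with a tail that only asymptotically follows a power law, small k inflates variance and large k inflates bias, which is why a data-adaptive order selector is needed.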
Second order statistics of bilinear forms of robust scatter estimators
Kammoun, Abla
2015-08-12
This paper lies in the lineage of recent works studying the asymptotic behaviour of robust-scatter estimators in the case where the number of observations and the dimension of the population covariance matrix grow at infinity with the same pace. In particular, we analyze the fluctuations of bilinear forms of the robust shrinkage estimator of covariance matrix. We show that this result can be leveraged in order to improve the design of robust detection methods. As an example, we provide an improved generalized likelihood ratio based detector which combines robustness to impulsive observations and optimality across the shrinkage parameter, the optimality being considered for the false alarm regulation.
Anisotropic Third-Order Regularization for Sparse Digital Elevation Models
Lellmann, Jan; Morel, Jean-Michel; Schönlieb, Carola-Bibiane
2013-01-01
features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE
A simplified parsimonious higher order multivariate Markov chain model
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for SPHOMMCM is given. Numerical experiments show the effectiveness of SPHOMMCM.
A tridiagonal parsimonious higher order multivariate Markov chain model
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a tridiagonal parsimonious higher-order multivariate Markov chain model (TPHOMMCM). Moreover, an estimation method for the parameters in TPHOMMCM is given. Numerical experiments illustrate the effectiveness of TPHOMMCM.
Estimation of the order of an autoregressive time series: a Bayesian approach
International Nuclear Information System (INIS)
Robb, L.J.
1980-01-01
Finite-order autoregressive models for time series are often used for prediction and other inferences. Given the order of the model, the parameters of the model can be estimated by the least-squares, maximum-likelihood, or Yule-Walker method. The basic problem is estimating the order of the model. The problem of autoregressive order estimation is placed in a Bayesian framework. This approach illustrates how the Bayesian method brings the numerous aspects of the problem together into a coherent structure. A joint prior probability density is proposed for the order, the partial autocorrelation coefficients, and the variance; and the marginal posterior probability distribution for the order, given the data, is obtained. It is noted that the value with maximum posterior probability is the Bayes estimate of the order with respect to a particular loss function. The asymptotic posterior distribution of the order is also given. In conclusion, Wolfer's sunspot data as well as simulated data corresponding to several autoregressive models are analyzed according to Akaike's method and the Bayesian method. Both methods are observed to perform quite well, although the Bayesian method was clearly superior in most cases.
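For illustration, a minimal sketch of Yule-Walker fitting combined with an AIC-style order selector (Akaike's method mentioned above); a Bayesian variant would instead place a prior over orders and report the posterior. The AR(2) data and all constants are synthetic.

```python
import numpy as np

def yule_walker(x, p):
    """Yule-Walker estimates of AR(p) coefficients and innovation variance."""
    x = x - x.mean()
    N = len(x)
    r = np.array([x[:N - k] @ x[k:] for k in range(p + 1)]) / N
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:p + 1])
    return a, r[0] - a @ r[1:p + 1]

def select_order(x, pmax=8):
    """AIC order selection: minimize N*log(sigma^2) + 2p over p."""
    N = len(x)
    aic = [N * np.log(yule_walker(x, p)[1]) + 2 * p
           for p in range(1, pmax + 1)]
    return 1 + int(np.argmin(aic))

rng = np.random.default_rng(7)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):                 # simulate an AR(2) process
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
print(select_order(x))                   # the true order is 2
```

The Bayesian approach in the record replaces the argmin with a full posterior over orders, so the loss function (rather than a fixed penalty) determines the point estimate.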
Generalized Reduced Order Model Generation, Phase I
National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...
Amplitude Models for Discrimination and Yield Estimation
Energy Technology Data Exchange (ETDEWEB)
Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-09-01
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
An efficient modularized sample-based method to estimate the first-order Sobol' index
International Nuclear Information System (INIS)
Li, Chenzhao; Mahadevan, Sankaran
2016-01-01
Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is able to compute the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
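The flavor of a purely sample-based first-order index can be sketched with a simple binning approximation of Var[E(y|xᵢ)]/Var(y). This is an illustration of the general idea, not the paper's estimator (which avoids explicit user-defined conditioning locations); the test function and all sizes are invented.

```python
import numpy as np

def first_order_sobol(x, y, nbins=50):
    """Sample-based first-order index Var[E(y|x)] / Var(y) via binning;
    needs only recorded input-output samples, no model re-evaluation."""
    ys = y[np.argsort(x)]                      # sort outputs by this input
    bins = np.array_split(ys, nbins)           # contiguous bins in x
    means = np.array([b.mean() for b in bins])
    sizes = np.array([len(b) for b in bins])
    return np.average((means - y.mean()) ** 2, weights=sizes) / y.var()

rng = np.random.default_rng(0)
n = 100_000
x1, x2 = rng.uniform(-np.pi, np.pi, (2, n))
y = np.sin(x1) + 0.5 * np.sin(x2) ** 2   # additive test function
s1 = first_order_sobol(x1, y)            # analytic value is about 0.94
s2 = first_order_sobol(x2, y)            # analytic value is about 0.06
print(round(s1, 2), round(s2, 2))
```

Because each index reuses the same sample set with only a sort and a bin-average, the cost does not grow with the number of model inputs, mirroring the modularity property highlighted above.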
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Scientific Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
REGIONAL FIRST ORDER PERIODIC AUTOREGRESSIVE MODELS FOR MONTHLY FLOWS
Directory of Open Access Journals (Sweden)
Ceyhun ÖZÇELİK
2008-01-01
Full Text Available First-order periodic autoregressive models are among the most widely used models for the time dependency of hydrological flow processes. In these models, the periodicity of the correlogram is preserved as well as the time dependency of the process. However, the parameters of these models, namely the inter-monthly lag-1 autocorrelation coefficients, are often estimated erroneously from short samples, since they are statistics of high-order moments. Therefore, constituting a regional model may produce more reliable and decisive estimates, and allow models and model parameters to be derived at any required point of the basin considered. In this study, definitions of a homogeneous region for lag-1 autocorrelation coefficients are made; five parametric and non-parametric models are proposed to set regional models of lag-1 autocorrelation coefficients. The regional models are applied to 30 stream flow gauging stations in the Seyhan and Ceyhan basins, and tested by the criteria of relative absolute bias and simple and relative root mean square errors.
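For illustration, a small sketch of the site statistic in question, the inter-monthly lag-1 autocorrelation, together with a naive regional (pooled) estimate. The monthly flow data are synthetic and the simple pooling rule is invented; the paper's five regional models are more elaborate.

```python
import numpy as np

def lag1_monthly(flows):
    """Lag-1 correlations between consecutive months; flows is (years, 12)."""
    rho = np.empty(12)
    for m in range(12):
        # January correlates with the previous year's December
        a = flows[:, m - 1] if m > 0 else flows[:-1, 11]
        b = flows[:, m] if m > 0 else flows[1:, 0]
        rho[m] = np.corrcoef(a, b)[0, 1]
    return rho

rng = np.random.default_rng(3)
years = 60
# Synthetic monthly flows with AR(1)-style month-to-month persistence
z = np.empty(years * 12)
z[0] = rng.standard_normal()
for t in range(1, years * 12):
    z[t] = 0.6 * z[t - 1] + np.sqrt(1 - 0.36) * rng.standard_normal()
flows = z.reshape(years, 12)

rho_site = lag1_monthly(flows)
rho_regional = rho_site.mean()   # naive pooling of the noisy monthly estimates
print(rho_site.round(2), round(rho_regional, 2))
```

The scatter of the twelve monthly estimates around the true value (0.6 here) shows why short samples make the individual coefficients unreliable and why pooling across a homogeneous region helps.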
Cognitive profiles and heritability estimates in the Old Order Amish.
Kuehner, Ryan M; Kochunov, Peter; Nugent, Katie L; Jurius, Deanna E; Savransky, Anya; Gaudiot, Christopher; Bruce, Heather A; Gold, James; Shuldiner, Alan R; Mitchell, Braxton D; Hong, L Elliot
2016-08-01
This study aimed to establish the applicability of the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) in the Old Order Amish (OOA) and to assess the genetic contribution toward the RBANS total score and its cognitive domains using a large family-based sample of OOA. RBANS data were collected in 103 OOA individuals from Lancaster County, Pennsylvania, including 85 individuals without psychiatric illness and 18 individuals with current psychiatric diagnoses. The RBANS total score and all five cognitive domains in nonpsychiatric OOA were within half a SD of the normative data of the general population. The RBANS total score was highly heritable (h=0.51, P=0.019). OOA with psychiatric diagnoses had a numerically lower RBANS total score and domain scores compared with the nonpsychiatric participants. The RBANS appears to be a suitable cognitive battery for the OOA population, as measurements obtained from the OOA are comparable with normative data in the US population. The heritability estimated from the OOA is in line with heritabilities of other cognitive batteries estimated in other populations. These results support the use of RBANS in cognitive assessment, clinical care, and behavioral genetic studies of neuropsychological functioning in this population.
Software Cost-Estimation Model
Tausworthe, R. C.
1985-01-01
Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
Fleischer, Christian; Waag, Wladislaw; Heyn, Hans-Martin; Sauer, Dirk Uwe
2014-08-01
Lithium-ion battery systems employed in high power demanding systems such as electric vehicles require a sophisticated monitoring system to ensure safe and reliable operation. Three major states of the battery are of special interest and need to be constantly monitored: battery state of charge (SoC), battery state of health (capacity fade determination, SoH), and state of function (power fade determination, SoF). In a series of two papers, we propose a system of algorithms based on a weighted recursive least quadratic squares parameter estimator that is able to determine the battery impedance and diffusion parameters for accurate state estimation. The functionality was proven on different battery chemistries with different aging conditions. The first paper investigates the general requirements on BMS for HEV/EV applications. In parallel, the commonly used methods for battery monitoring are reviewed to elaborate their strengths and weaknesses in terms of the identified requirements for on-line applications. Special emphasis is placed on real-time capability and memory-optimized code for cost-sensitive industrial or automotive applications in which low-cost microcontrollers must be used. Therefore, a battery model is presented which includes the influence of the Butler-Volmer kinetics on the charge-transfer process. Lastly, the mass transport process inside the battery is modeled in a novel state-space representation.
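A minimal sketch of a weighted (forgetting-factor) recursive least squares estimator of the general kind described above, applied to a deliberately simplified cell model U = OCV − Rᵢ·I rather than the paper's Butler-Volmer-based model; all numeric values are illustrative.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One weighted RLS step for y ≈ phi @ theta with forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # parameter update
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update
    return theta, P

# Toy cell model (an assumption, far simpler than Butler-Volmer kinetics):
#   terminal voltage U = OCV - R_i * I
rng = np.random.default_rng(5)
ocv_true, r_true = 3.7, 0.05
theta = np.zeros(2)                  # estimates of [OCV, R_i]
P = 1e3 * np.eye(2)
for _ in range(2000):
    i_cur = rng.uniform(-2.0, 2.0)                       # measured current
    u = ocv_true - r_true * i_cur + 1e-3 * rng.standard_normal()
    theta, P = rls_update(theta, P, np.array([1.0, -i_cur]), u)
print(theta.round(3))    # approaches [3.7, 0.05]
```

The constant per-step cost and tiny memory footprint (one small matrix P) are what make this family of estimators attractive for the low-cost microcontrollers mentioned above.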
Optimal inventory management and order book modeling
Baradel, Nicolas; Bouchard, Bruno; Evangelista, David; Mounjid, Othmane
2018-01-01
We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic
Fractional Order Models of Industrial Pneumatic Controllers
Directory of Open Access Journals (Sweden)
Abolhassan Razminia
2014-01-01
Full Text Available This paper addresses a new approach for modeling of versatile controllers in industrial automation and process control systems such as pneumatic controllers. Some fractional order dynamical models are developed for pressure and pneumatic systems with bellows-nozzle-flapper configuration. In the light of fractional calculus, a fractional order derivative-derivative (FrDD) controller and an integral-derivative (FrID) controller are remodeled. Numerical simulations illustrate the application of the obtained theoretical results in simple examples.
Generalized Reduced Order Modeling of Aeroservoelastic Systems
Gariffo, James Michael
Transonic aeroelastic and aeroservoelastic (ASE) modeling presents a significant technical and computational challenge. Flow fields with a mixture of subsonic and supersonic flow, as well as moving shock waves, can only be captured through high-fidelity CFD analysis. With modern computing power, it is relatively straightforward to determine the flutter boundary for a single structural configuration at a single flight condition, but problems of larger scope remain quite costly. Some such problems include characterizing a vehicle's flutter boundary over its full flight envelope, optimizing its structural weight subject to aeroelastic constraints, and designing control laws for flutter suppression. For all of these applications, reduced-order models (ROMs) offer substantial computational savings. ROM techniques in general have existed for decades, and the methodology presented in this dissertation builds on successful previous techniques to create a powerful new scheme for modeling aeroelastic systems, and predicting and interpolating their transonic flutter boundaries. In this method, linear ASE state-space models are constructed from modal structural and actuator models coupled to state-space models of the linearized aerodynamic forces through feedback loops. Flutter predictions can be made from these models through simple eigenvalue analysis of their state-transition matrices for an appropriate set of dynamic pressures. Moreover, this analysis returns the frequency and damping trend of every aeroelastic branch. In contrast, determining the critical dynamic pressure by direct time-marching CFD requires a separate run for every dynamic pressure being analyzed simply to obtain the trend for the critical branch. The present ROM methodology also includes a new model interpolation technique that greatly enhances the benefits of these ROMs. This enables predictions of the dynamic behavior of the system for flight conditions where CFD analysis has not been explicitly
Higher-order Multivariable Polynomial Regression to Estimate Human Affective States
Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin
2016-03-01
From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. In these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal originate in the brain's motivational circuits. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states.
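A minimal sketch of higher-order multivariable polynomial regression via ordinary least squares on monomial features; the two-feature synthetic data stand in for skin-conductance patterns and are purely illustrative, not the affective dataset.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """All monomials of the input columns up to the given total degree."""
    n, d = X.shape
    cols = [np.ones(n)]                      # constant term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

# Toy mapping: two physiological features -> a valence-like target
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 2))
y = (1.0 + 0.5 * X[:, 0] - 0.8 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 1]
     + 0.01 * rng.standard_normal(500))

Z = poly_features(X, degree=2)               # 6 features: 1, x0, x1, x0², x0x1, x1²
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
r = np.corrcoef(Z @ w, y)[0, 1]              # correlation of fit vs. observed
print(round(r, 3))
```

The model stays linear in its coefficients, so it is fitted by ordinary least squares, yet the monomial features capture the nonlinear (here quadratic and interaction) structure that a plain linear regression would miss.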
Investigation of Effectiveness of Order Review and Release Models in Make to Order Supply Chain
Directory of Open Access Journals (Sweden)
Kundu Kaustav
2016-01-01
Full Text Available Nowadays customisation is becoming more common due to vast requirements from customers, for which industries are trying to use a make-to-order (MTO) strategy. Due to high variation in the process, workload control models are extensively used in job-shop companies, which usually adopt the MTO strategy. Some authors have tried to implement workload control models, namely order review and release systems, in non-repetitive manufacturing companies where there is a dominant flow in production. Those models work well on the shop floor, but their performance has never been investigated in high-variation situations such as the MTO supply chain. This paper starts with an introduction to the particular issues in MTO companies and a general overview of the order review and release systems widely used in industry. Two order review and release systems, the Limited and Balanced models, particularly suitable for flow-shop systems, are applied to an MTO supply chain, where the processing times are difficult to estimate due to high variation. Simulation results show that the Balanced model performs much better than the Limited model if the processing times can be estimated precisely.
Energy Technology Data Exchange (ETDEWEB)
Harris, James M.; Prescott, Ryan; Dawson, Jericah M.; Huelskamp, Robert M.
2014-11-01
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
XY model with higher-order exchange.
Žukovič, Milan; Kalagov, Georgii
2017-08-01
An XY model, generalized by inclusion of up to an infinite number of higher-order pairwise interactions with an exponentially decreasing strength, is studied by spin-wave theory and Monte Carlo simulations. At low temperatures the model displays a quasi-long-range-order phase characterized by an algebraically decaying correlation function with the exponent η=T/[2πJ(p,α)], nonlinearly dependent on the parameters p and α that control the number of the higher-order terms and the decay rate of their intensity, respectively. At higher temperatures the system shows a crossover from the continuous Berezinskii-Kosterlitz-Thouless to the first-order transition for the parameter values corresponding to a highly nonlinear shape of the potential well. The role of topological excitations (vortices) in changing the nature of the transition is discussed.
Marginal and Interaction Effects in Ordered Response Models
Debdulal Mallick
2009-01-01
In discrete choice models the marginal effect of a variable of interest that is interacted with another variable differs from the marginal effect of a variable that is not interacted with any variable. The magnitude of the interaction effect is also not equal to the marginal effect of the interaction term. I present consistent estimators of both marginal and interaction effects in ordered response models. This procedure is general and can easily be extended to other discrete choice models. I ...
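The distinction the abstract draws can be illustrated numerically: in an ordered logit, the interaction effect on an outcome probability (a cross-partial derivative) generally differs from the coefficient on the interaction term. The sketch below uses hypothetical coefficients and cut points, not estimates from the paper, and a finite-difference cross-partial as a stand-in for the consistent estimators the author derives.

```python
import numpy as np

def ordered_logit_probs(x1, x2, beta1=0.5, beta2=-0.3, beta12=0.2,
                        cuts=(-1.0, 1.0)):
    """P(y = k | x1, x2) for a 3-category ordered logit with an
    x1*x2 interaction term (illustrative coefficients, not estimates)."""
    xb = beta1 * x1 + beta2 * x2 + beta12 * x1 * x2
    F = lambda z: 1.0 / (1.0 + np.exp(-z))
    cdf = [F(c - xb) for c in cuts] + [1.0]
    p = [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
    return np.array(p)

def interaction_effect(x1, x2, k=2, h=1e-5):
    """Cross-partial d^2 P(y=k) / dx1 dx2 by central finite differences.
    In general this does NOT equal the interaction coefficient beta12."""
    f = lambda a, b: ordered_logit_probs(a, b)[k]
    return (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)
            - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4 * h * h)

p = ordered_logit_probs(1.0, 0.5)
eff = interaction_effect(1.0, 0.5)
```

Evaluating `eff` at a typical covariate point gives a value well away from the interaction coefficient 0.2, which is the pitfall the paper addresses.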
Reduced Order Modeling Methods for Turbomachinery Design
2009-03-01
Higher Order Improvements for Approximate Estimators
DEFF Research Database (Denmark)
Kristensen, Dennis; Salanié, Bernard
Many modern estimation methods in econometrics approximate an objective function, through simulation or discretization for instance. The resulting "approximate" estimator is often biased; and it always incurs an efficiency loss. We here propose three methods to improve the properties of such approximate estimators at a low computational cost. The first two methods correct the objective function so as to remove the leading term of the bias due to the approximation. One variant provides an analytical bias adjustment, but it only works for estimators based on stochastic approximators, such as simulation-based estimators. Our second bias correction is based on ideas from the resampling literature; it eliminates the leading bias term for non-stochastic as well as stochastic approximators. Finally, we propose an iterative procedure where we use Newton-Raphson (NR) iterations based on a much finer...
Modeling and estimating system availability
International Nuclear Information System (INIS)
Gaver, D.P.; Chu, B.B.
1976-11-01
Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult.
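The jackknife idea mentioned above can be sketched on availability data. The uptime/downtime figures below are synthetic assumptions, not data from the report; the procedure forms leave-one-out availability estimates, then a bias-corrected point estimate and a standard error.

```python
import numpy as np

# Synthetic sparse data (hours): observed uptimes and downtimes per cycle.
up = np.array([95.0, 110.0, 88.0, 102.0, 97.0])
down = np.array([5.0, 9.0, 4.0, 7.0, 6.0])

def availability(u, d):
    """Point availability: total uptime over total observed time."""
    return u.sum() / (u.sum() + d.sum())

n = len(up)
theta_hat = availability(up, down)
# Leave-one-out estimates, one per deleted cycle.
loo = np.array([availability(np.delete(up, i), np.delete(down, i))
                for i in range(n)])
# Jackknife bias-corrected estimate and standard error.
theta_jack = n * theta_hat - (n - 1) * loo.mean()
se_jack = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
```

The standard error supports approximate probability limits on availability of the kind the report derives.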
Dynamical models of happiness with fractional order
Song, Lei; Xu, Shiyun; Yang, Jianying
2010-03-01
This present study focuses on a dynamical model of happiness described through fractional-order differential equations. By categorizing people of different personality and different impact factor of memory (IFM) with different set of model parameters, it is demonstrated via numerical simulations that such fractional-order models could exhibit various behaviors with and without external circumstance. Moreover, control and synchronization problems of this model are discussed, which correspond to the control of emotion as well as emotion synchronization in real life. This study is an endeavor to combine the psychological knowledge with control problems and system theories, and some implications for psychotherapy as well as hints of a personal approach to life are both proposed.
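Fractional-order dynamics of this kind are commonly simulated with the Grünwald-Letnikov discretization. The sketch below integrates a simple fractional relaxation D^alpha x = -lambda*x as a stand-in for the paper's happiness equations; the equation, gains, and parameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_j for the operator D^alpha."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def simulate(alpha=0.8, lam=1.0, h=0.01, T=10.0, x0=1.0):
    """Simulate D^alpha x = -lam * x by the implicit GL scheme:
    h^(-alpha) * sum_j w_j x_{k-j} = -lam * x_k, solved for x_k."""
    n = int(T / h)
    w = gl_weights(alpha, n)
    x = np.empty(n + 1)
    x[0] = x0
    ha = h ** (-alpha)
    for k in range(1, n + 1):
        hist = np.dot(w[1:k + 1], x[k - 1::-1])   # memory term
        x[k] = -ha * hist / (ha + lam)
    return x

x = simulate()
```

The long-memory weights `w_j` are what distinguish the fractional model from an ordinary first-order one: the state at every past time keeps influencing the present, which is how the paper encodes the "impact factor of memory".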
Estimation of the convergence order of rigorous coupled-wave analysis for OCD metrology
Ma, Yuan; Liu, Shiyuan; Chen, Xiuguo; Zhang, Chuanwei
2011-12-01
In most cases of optical critical dimension (OCD) metrology, when applying rigorous coupled-wave analysis (RCWA) to optical modeling, a high order of Fourier harmonics is usually set up to guarantee the convergence of the final results. However, the total number of floating point operations grows dramatically as the truncation order increases. Therefore, it is critical to choose an appropriate order to obtain high computational efficiency without losing much accuracy in the meantime. In this paper, the convergence order associated with the structural and optical parameters has been estimated through simulation. The results indicate that the convergence order is linear with the period of the sample when fixing the other parameters, both for planar diffraction and conical diffraction. The illuminated wavelength also affects the convergence of a final result. With further investigations concentrated on the ratio of illuminated wavelength to period, it is discovered that the convergence order decreases with the growth of the ratio, and when the ratio is fixed, convergence order jumps slightly, especially in a specific range of wavelength. This characteristic could be applied to estimate the optimum convergence order of given samples to obtain high computational efficiency.
Optimal inventory management and order book modeling
Baradel, Nicolas
2018-02-16
We model the behavior of three agent classes acting dynamically in a limit order book of a financial asset. Namely, we consider market makers (MM), high-frequency trading (HFT) firms, and institutional brokers (IB). Given a prior dynamic of the order book, similar to the one considered in the Queue-Reactive models [14, 20, 21], the MM and the HFT define their trading strategy by optimizing the expected utility of terminal wealth, while the IB has a prescheduled task to sell or buy many shares of the considered asset. We derive the variational partial differential equations that characterize the value functions of the MM and HFT and explain how almost optimal control can be deduced from them. We then provide a first illustration of the interactions that can take place between these different market participants by simulating the dynamic of an order book in which each of them plays his own (optimal) strategy.
Hybrid reduced order modeling for assembly calculations
International Nuclear Information System (INIS)
Bang, Youngsuk; Abdel-Khalik, Hany S.; Jessee, Matthew A.; Mertyurek, Ugur
2015-01-01
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculation. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
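The non-intrusive, sampling-based flavor of reduced order modeling described above can be sketched with a toy input-output map. The low-rank matrix below is an assumption standing in for an expensive coupled physics code; the point is the workflow: run the code on random inputs, extract a dominant subspace by SVD, and project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "expensive" model: y = A x with an effective low-rank structure.
n_in, n_out, rank = 200, 150, 5
A = rng.standard_normal((n_out, rank)) @ rng.standard_normal((rank, n_in))

def expensive_model(x):
    return A @ x

# Non-intrusive step: execute the model on random input samples ...
n_samples = 40
X = rng.standard_normal((n_in, n_samples))
Y = np.column_stack([expensive_model(X[:, j]) for j in range(n_samples)])

# ... and capture the dominant input/output relationships via SVD.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
r = 5                      # retained dimension
Ur = U[:, :r]              # reduced output basis

def reduced_model(x):
    """Surrogate: project the response onto the r-dimensional subspace."""
    return Ur @ (Ur.T @ expensive_model(x))
```

Because the sampling never opens up the model internals, the same recipe extends from a single physics code to a coupled code system, which is the step the manuscript takes.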
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Youngsuk, E-mail: ysbang00@fnctech.com [FNC Technology, Co. Ltd., Yongin-si (Korea, Republic of); Abdel-Khalik, Hany S., E-mail: abdelkhalik@purdue.edu [Purdue University, West Lafayette, IN (United States); Jessee, Matthew A., E-mail: jesseema@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States); Mertyurek, Ugur, E-mail: mertyurek@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2015-12-15
Highlights: • Reducing computational cost in engineering calculations. • Reduced order modeling algorithm for multi-physics problems like assembly calculation. • Non-intrusive algorithm with random sampling. • Pattern recognition in the components with high sensitivity and large variation. - Abstract: While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system.
Reduced-order modelling of wind turbines
Elkington, K.; Slootweg, J.G.; Ghandhari, M.; Kling, W.L.; Ackermann, T.
2012-01-01
In this chapter power system dynamics simulation (PSDS) is used to study the dynamics of large-scale power systems. It is necessary to incorporate models of wind turbine generating systems into PSDS software packages in order to analyse the impact of high wind power penetration on electrical power
Parameter Estimates in Differential Equation Models for Chemical Kinetics
Winkel, Brian
2011-01-01
We discuss the need for devoting time in differential equations courses to modelling and the completion of the modelling process with efforts to estimate the parameters in the models using data. We estimate the parameters present in several differential equation models of chemical reactions of order n, where n = 0, 1, 2, and apply more general…
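A minimal version of the parameter-estimation exercise described above, assuming a second-order reaction and noise-free synthetic concentration data (not data from the article): the rate constant k in dC/dt = -k C^n is recovered by fitting the integrated ODE to observations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def concentration(t, k, order=2, c0=1.0):
    """Integrate dC/dt = -k * C**order and return C at the times t."""
    sol = solve_ivp(lambda _, c: -k * c ** order, (0.0, t[-1]), [c0],
                    t_eval=t, rtol=1e-9, atol=1e-12)
    return sol.y[0]

true_k = 0.7
t = np.linspace(0.0, 5.0, 25)
data = 1.0 / (1.0 + true_k * t)   # exact 2nd-order solution with C0 = 1

k_hat, _ = curve_fit(concentration, t, data, p0=[0.1])
```

Swapping `order` to 0 or 1 covers the other cases mentioned in the abstract; with real (noisy) data the same call returns a least-squares estimate rather than the exact constant.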
Order Quantity Distributions: Estimating an Adequate Aggregation Horizon
Directory of Open Access Journals (Sweden)
Eriksen Poul Svante
2016-09-01
Full Text Available In this paper an investigation into the demand faced by a company in the form of customer orders is performed from both an explorative numerical and an analytical perspective. The aim of the research is to establish the behavior of customer orders in first-come-first-serve (FCFS) systems and the impact of order quantity variation on the planning environment. A discussion of assumptions regarding demand from various planning and control perspectives underlines that most planning methods are based on the assumption that demand, in the form of customer orders, is independently identically distributed and stems from symmetrical distributions. To investigate and illustrate the need to aggregate demand in order to live up to these assumptions, a simple methodological framework for testing the validity of the assumptions and for analyzing the behavior of orders is developed. The paper also presents an analytical approach to identify the aggregation horizon needed to achieve a stable demand. Furthermore, a case study application of the presented framework is presented and conclusions are drawn.
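The aggregation-horizon idea can be illustrated numerically: individual order quantities drawn from a skewed distribution have a high coefficient of variation, which falls as orders are summed over longer horizons. The lognormal distribution and the 0.2 stability target below are assumptions for illustration, not the paper's analytical result.

```python
import numpy as np

rng = np.random.default_rng(1)
orders = rng.lognormal(mean=2.0, sigma=1.0, size=50_000)  # skewed quantities

def aggregated_cv(x, horizon):
    """Coefficient of variation of demand summed over `horizon` orders."""
    m = len(x) // horizon
    agg = x[:m * horizon].reshape(m, horizon).sum(axis=1)
    return agg.std() / agg.mean()

# Smallest horizon at which aggregated demand is "stable" (CV <= target).
target = 0.2
horizon = next(h for h in range(1, 1000) if aggregated_cv(orders, h) <= target)
```

For i.i.d. orders the CV shrinks roughly like 1/sqrt(horizon), so the required horizon grows with the square of the order-quantity variability, which is why highly variable order books need long aggregation windows.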
Declarative Modeling for Production Order Portfolio Scheduling
Directory of Open Access Journals (Sweden)
Banaszak Zbigniew
2014-12-01
Full Text Available A declarative framework enabling to determine conditions as well as to develop decision-making software supporting small- and medium-sized enterprises aimed at unique, multi-project-like and mass customized oriented production is discussed. A set of unique production orders grouped into portfolio orders is considered. Operations executed along different production orders share available resources following a mutual exclusion protocol. A unique product or production batch is completed while following a given activity’s network order. The problem concerns scheduling a newly inserted project portfolio subject to constraints imposed by a multi-project environment The answers sought are: Can a given project portfolio specified by its cost and completion time be completed within the assumed time period in a manufacturing system in hand? Which manufacturing system capability guarantees the completion of a given project portfolio ordered under assumed cost and time constraints? The considered problems regard finding a computationally effective approach aimed at simultaneous routing and allocation as well as batching and scheduling of a newly ordered project portfolio subject to constraints imposed by a multi-project environment. The main objective is to provide a declarative model enabling to state a constraint satisfaction problem aimed at multi-project-like and mass customized oriented production scheduling. Multiple illustrative examples are discussed.
Sharp probability estimates for Shor's order-finding algorithm
Bourdon, P. S.; Williams, H. T.
2006-01-01
Let N be a large positive integer, let b > 1 be an integer relatively prime to N, and let r be the order of b modulo N. Finally, let QC be a quantum computer whose input register has the size specified in Shor's original description of his order-finding algorithm. We prove that when Shor's algorithm is implemented on QC, the probability P of obtaining a (nontrivial) divisor of r exceeds 0.7 whenever N exceeds 2^{11}-1 and r exceeds 39, and we establish that 0.7736 is an asymptotic lower...
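For context, the order r that Shor's algorithm finds efficiently can be computed classically only by methods that are exponential in the number of digits of N; a minimal brute-force sketch:

```python
from math import gcd

def multiplicative_order(b, N):
    """Smallest r > 0 with b**r congruent to 1 (mod N); needs gcd(b, N) == 1."""
    if gcd(b, N) != 1:
        raise ValueError("b and N must be coprime")
    r, x = 1, b % N
    while x != 1:
        x = (x * b) % N
        r += 1
    return r

# Order of 2 modulo 15 is 4: the powers cycle 2, 4, 8, 1.
r = multiplicative_order(2, 15)
```

A divisor of r, which is what the quantum measurement yields with the probability bounds the paper proves, already narrows the search for r itself.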
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established by the denoised FOCs vector, and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model, and are approximately estimated in a simple way. A necessary condition regarding the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
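The identifiability bound quoted in the abstract is easy to tabulate; the helper below is a direct transcription of the stated condition K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8, showing how quickly the fourth-order co-array outgrows the physical sensor count.

```python
def max_identifiable_sources(M):
    """Upper bound on uniquely identifiable sources for M sensors,
    per the condition K <= (M^4 - 2M^3 + 7M^2 - 6M)/8 stated in the text."""
    return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

bounds = {M: max_identifiable_sources(M) for M in (3, 4, 5)}
```

Even a 4-sensor array can in principle resolve up to 27 sources, far more than the M - 1 limit of covariance-based methods, which is the O(M^4) identifiability advantage claimed.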
Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery
King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.
2018-01-01
Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These locations were identified as locations where the river contracted under low flows, exposing a substa...
Belkhatir, Zehor
2015-11-05
This paper deals with the joint estimation of the unknown input and the fractional differentiation orders of a linear fractional order system. A two-stage algorithm combining the modulating functions method with a first-order Newton method is applied to solve this estimation problem. First, the modulating functions approach is used to estimate the unknown input for given fractional differentiation orders. Then, the method is combined with a first-order Newton technique to identify the fractional orders jointly with the input. To show the efficiency of the proposed method, numerical examples illustrating the estimation of the neural activity, considered as the input of a fractional model of the neurovascular coupling, along with the fractional differentiation orders are presented in both noise-free and noisy cases.
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using Karhunen-Loeve Expansion (KLE) truncated to nkl order, and it calculates the directional sensitivities (in the directions of nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
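The memory saving from truncating a Karhunen-Loeve expansion, which is the core of the ROSLE above, can be sketched on a 1-D exponential covariance. The covariance model, grid, and correlation length below are assumptions for illustration, not the field-site values.

```python
import numpy as np

# 1-D grid and an exponential covariance model (assumed, for illustration).
n = 200
x = np.linspace(0.0, 1.0, n)
corr_len, variance = 0.2, 1.0
C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete Karhunen-Loeve expansion: keep the nkl leading eigenpairs.
eigval, eigvec = np.linalg.eigh(C)
idx = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[idx], eigvec[:, idx]

nkl = 20
C_kl = (eigvec[:, :nkl] * eigval[:nkl]) @ eigvec[:, :nkl].T

# Storage drops from n*n covariance entries to n*nkl + nkl numbers.
rel_err = np.linalg.norm(C - C_kl) / np.linalg.norm(C)
```

For a smooth covariance the eigenvalues decay fast, so a small `nkl` reproduces the full n-by-n matrix closely; in the 3-D HT setting the same truncation is what shrinks the prodigious ny-by-ny storage and limits the sensitivity runs to nkl directions.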
Hybrid reduced order modeling for assembly calculations
Energy Technology Data Exchange (ETDEWEB)
Bang, Y.; Abdel-Khalik, H. S. [North Carolina State University, Raleigh, NC (United States); Jessee, M. A.; Mertyurek, U. [Oak Ridge National Laboratory, Oak Ridge, TN (United States)
2013-07-01
While the accuracy of assembly calculations has considerably improved due to the increase in computer power enabling more refined description of the phase space and use of more sophisticated numerical algorithms, the computational cost continues to increase, which limits the full utilization of their effectiveness for routine engineering analysis. Reduced order modeling is a mathematical vehicle that scales down the dimensionality of large-scale numerical problems to enable their repeated execution on small computing environments, often available to end users. This is done by capturing the most dominant underlying relationships between the model's inputs and outputs. Previous works demonstrated the use of reduced order modeling for a single-physics code, such as a radiation transport calculation. This manuscript extends those works to coupled code systems as currently employed in assembly calculations. Numerical tests are conducted using realistic SCALE assembly models with resonance self-shielding, neutron transport, and nuclide transmutation/depletion models representing the components of the coupled code system. (authors)
Model predictive control based on reduced order models applied to belt conveyor system.
Chen, Wei; Li, Xin
2016-11-01
In the paper, a model predictive controller based on a reduced order model is proposed to control a belt conveyor system, an electro-mechanical complex system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced order model for the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
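A minimal, self-contained version of the balanced truncation step, run on a small stand-in state-space model rather than the conveyor dynamics: the Gramians are balanced via their Cholesky factors, and states with small Hankel singular values are discarded.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Small stable LTI model (an assumed stand-in, not the belt conveyor model).
A = np.diag([-1.0, -2.0, -5.0, -50.0])
B = np.array([[1.0], [1.0], [0.5], [0.1]])
C = np.array([[1.0, 0.5, 0.2, 0.05]])

# Controllability/observability Gramians: A P + P A' = -B B', A' Q + Q A = -C' C.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: the SVD of Lq' Lp yields the Hankel singular values.
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, hsv, Vt = svd(Lq.T @ Lp)

r = 2                                   # number of states to keep
S = np.diag(hsv[:r] ** -0.5)
Tr = Lp @ Vt[:r].T @ S                  # right projection (n x r)
Tl = S @ U[:, :r].T @ Lq.T              # left projection  (r x n)
Ar, Br, Cr = Tl @ A @ Tr, Tl @ B, C @ Tr
```

Small Hankel singular values flag states that are nearly unreachable or unobservable; the MPC and the Kalman estimators mentioned in the abstract would then operate on the reduced triple (Ar, Br, Cr), with the truncation error bound accounting for the model mismatch.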
Health Parameter Estimation with Second-Order Sliding Mode Observer for a Turbofan Engine
Directory of Open Access Journals (Sweden)
Xiaodong Chang
2017-07-01
Full Text Available In this paper the problem of health parameter estimation in an aero-engine is investigated by using an unknown input observer-based methodology, implemented by a second-order sliding mode observer (SOSMO). Unlike conventional state estimator-based schemes, such as Kalman filters (KFs) and sliding mode observers (SMOs), the proposed scheme uses a “reconstruction signal” to estimate health parameters modeled as artificial inputs, and is not only applicable to long-time health degradation, but also reacts much more quickly in handling abrupt fault cases. In view of the inevitable uncertainties in engine dynamics and modeling, a weighting matrix is created to minimize their effect on estimation by using linear matrix inequalities (LMIs). A big step toward uncertainty modeling is taken compared with our previous SMO-based work, in that uncertainties are considered in a more practical form. Moreover, to avoid chattering in sliding modes, the super-twisting algorithm (STA) is employed in the observer design. Various simulations are carried out, based on comparisons between the KF-based scheme, the SMO-based scheme in our earlier research, and the proposed method. The results consistently demonstrate the capabilities and advantages of the proposed approach in health parameter estimation.
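The reconstruction-signal idea behind such an observer can be shown on a scalar toy plant with an unknown constant input. The plant and gains below are assumptions (common super-twisting tuning rules), not the turbofan model: the integral term z of the super-twisting injection converges to the unknown input, reconstructing it without the discontinuous chattering of a first-order sliding mode.

```python
import numpy as np

# Toy plant: dx/dt = -x + u + d, with unknown constant input d standing in
# for a health-parameter shift. Observer mirrors the plant plus STA injection.
dt, T = 1e-3, 5.0
n = int(T / dt)
d_true, u = 0.8, 0.0

L = 1.0                          # assumed bound on |d_dot| (here d is constant)
k1, k2 = 1.5 * np.sqrt(L), 1.1 * L   # common STA gain choice

x, xh, z = 0.0, 0.0, 0.0
for _ in range(n):
    e = x - xh                   # output estimation error
    # Super-twisting observer: continuous injection, no chattering.
    xh += dt * (-xh + u + z + k1 * np.sqrt(abs(e)) * np.sign(e))
    z += dt * (k2 * np.sign(e))
    x += dt * (-x + u + d_true)  # true plant

# After convergence, z is the reconstruction signal: z approximates d_true.
```

In the paper's setting the same mechanism runs on the engine model with the LMI-designed weighting shaping how uncertainty enters; here the single state keeps the structure visible.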
Model for traffic emissions estimation
Alexopoulos, A.; Assimacopoulos, D.; Mitsoulis, E.
A model is developed for the spatial and temporal evaluation of traffic emissions in metropolitan areas based on sparse measurements. All traffic data available are fully employed and the pollutant emissions are determined with the highest precision possible. The main roads are regarded as line sources of constant traffic parameters in the time interval considered. The method is flexible and allows for the estimation of distributed small traffic sources (non-line/area sources). The emissions from the latter are assumed to be proportional to the local population density as well as to the traffic density leading to local main arteries. The contribution of moving vehicles to air pollution in the Greater Athens Area for the period 1986-1988 is analyzed using the proposed model. Emissions and other related parameters are evaluated. Emissions from area sources were found to have a noticeable share of the overall air pollution.
Consistent Estimation of Partition Markov Models
Directory of Open Access Journals (Sweden)
Jesús E. García
2017-04-01
Full Text Available The Partition Markov Model characterizes the process by a partition L of the state space, where the elements in each part of L share the same transition probability to an arbitrary element in the alphabet. This model aims to answer the following questions: what is the minimal number of parameters needed to specify a Markov chain, and how can these parameters be estimated? In order to answer these questions, we build a consistent strategy for model selection which consists of: given a size-n realization of the process, finding a model within the Partition Markov class with a minimal number of parts to represent the process law. From the strategy, we derive a measure that establishes a metric in the state space. In addition, we show that if the law of the process is Markovian, then, eventually, when n goes to infinity, L will be retrieved. We show an application to modeling internet navigation patterns.
Are Quantum Models for Order Effects Quantum?
Moreira, Catarina; Wichert, Andreas
2017-12-01
The application of principles of Quantum Mechanics in areas outside of physics has been getting increasing attention in the scientific community in an emergent discipline called Quantum Cognition. These principles have been applied to explain paradoxical situations that cannot be easily explained through classical theory. In quantum probability, events are characterised by a superposition state, which is represented by a state vector in an N-dimensional vector space. The probability of an event is given by the squared magnitude of the projection of this superposition state onto the desired subspace. This geometric approach is very useful for explaining paradoxical findings that involve order effects, but do we really need quantum principles for models that only involve projections? This work has two main goals. First, it is still not clear in the literature whether a quantum projection model has any advantage over a classical projection. We compared both models and concluded that the quantum projection model achieves the same results as its classical counterpart, because the quantum interference effects play no role in the computation of the probabilities. Second, it proposes an alternative relativistic interpretation for the rotation parameters that are involved in both classical and quantum models. In the end, instead of interpreting these parameters as a similarity measure between questions, we propose that they emerge due to the lack of knowledge concerning a personal basis state and also due to uncertainties about the state of the world and the context of the questions.
Multivariable robust adaptive controller using reduced-order model
Directory of Open Access Journals (Sweden)
Wei Wang
1990-04-01
Full Text Available In this paper a multivariable robust adaptive controller is presented for a plant with bounded disturbances and unmodeled dynamics due to plant-model order mismatches. The robust stability of the closed-loop system is achieved by using the normalization technique and a least squares parameter estimation scheme with dead zones. The weighting polynomial matrices are incorporated into the control law, so that open-loop unstable and/or non-minimum-phase plants can be handled.
Optimizing lengths of confidence intervals: fourth-order efficiency in location models
Klaassen, C.; Venetiaan, S.
2010-01-01
Under regularity conditions the maximum likelihood estimator of the location parameter in a location model is asymptotically efficient among translation equivariant estimators. Additional regularity conditions warrant third- and even fourth-order efficiency, in the sense that no translation
Anisotropic Third-Order Regularization for Sparse Digital Elevation Models
Lellmann, Jan
2013-01-01
We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data. © 2013 Springer-Verlag.
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor; Laleg-Kirati, Taous-Meriem
2017-01-01
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating
Modeling Ability Differentiation in the Second-Order Factor Model
Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.
2011-01-01
In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…
Maximum likelihood estimation of finite mixture model for economic data
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-06-01
Finite mixture models are mixture models with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention among statisticians, chiefly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is applied to fit finite mixture models in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show a negative relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia.
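A bare-bones version of the estimation step, run on synthetic data rather than the stock and rubber price series: the EM algorithm computes the maximum likelihood fit of a two-component normal mixture (all parameter values below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for the economic data: two well-separated components.
data = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])

# EM iterations for a two-component normal mixture (maximum likelihood).
w, mu, sigma = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of component 0 for each observation
    # (the common 1/sqrt(2*pi) factor cancels in the ratio).
    p0 = w * np.exp(-0.5 * ((data - mu[0]) / sigma[0]) ** 2) / sigma[0]
    p1 = (1 - w) * np.exp(-0.5 * ((data - mu[1]) / sigma[1]) ** 2) / sigma[1]
    r = p0 / (p0 + p1)
    # M-step: responsibility-weighted mixing proportion, means, variances.
    w = r.mean()
    mu = np.array([np.sum(r * data) / r.sum(),
                   np.sum((1 - r) * data) / (1 - r).sum()])
    sigma = np.sqrt(np.array([
        np.sum(r * (data - mu[0]) ** 2) / r.sum(),
        np.sum((1 - r) * (data - mu[1]) ** 2) / (1 - r).sum()]))
```

With well-separated components EM recovers the generating weights, means, and standard deviations; on real price data the fitted component means and weights are what carry the economic interpretation.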
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.; Liu, Da-Yan; Laleg-Kirati, Taous-Meriem
2017-01-01
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun
2018-02-01
In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transform (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, the method can estimate the dispersion slope and from it calculate the third-order dispersion. Simulation results demonstrate that the estimation error is less than 2% in 28 GBaud dual-polarization quadrature phase-shift keying (DP-QPSK) and 28 GBaud dual-polarization 16-ary quadrature amplitude modulation (DP-16QAM) systems. Through simulations, the proposed third-order dispersion estimation method is also shown to be robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, a search with coarse and fine step granularity is used to find the optimal order of the FrFT. The method can thus be used to monitor third-order dispersion in optical fiber systems.
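Once CD values at several wavelengths are available, the dispersion-slope step reduces to a straight-line least-squares fit. A minimal sketch with hypothetical CD measurements (not values produced by the FrFT method):

```python
# Fit D(lambda) = S*lambda + c by least squares; the slope S approximates
# the dispersion slope dD/dlambda used to obtain the third-order dispersion.
def linear_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n; my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

wavelengths_nm = [1540.0, 1545.0, 1550.0, 1555.0, 1560.0]   # hypothetical
cd_ps_per_nm = [16.38, 16.67, 16.96, 17.25, 17.54]          # hypothetical CD measurements
slope_ps_per_nm2, intercept = linear_fit(wavelengths_nm, cd_ps_per_nm)
```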
Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)
DEFF Research Database (Denmark)
Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri
2013-01-01
In the present study the reliability of the pultrusion process of a flat plate is estimated using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and degree-of-cure profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...
Probabilistic error bounds for reduced order modeling
Energy Technology Data Exchange (ETDEWEB)
Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)
2015-07-01
Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components could be upper-bounded with an overwhelmingly high probability. The reverse of this statement implies that the ROM algorithm can self-adapt to determine the level of the reduction needed such that the maximum resulting reduction error is below a given tolerance limit that is set by the user. (author)
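The self-adapting truncation idea can be illustrated deterministically with the classical Eckart-Young bound: for a truncated SVD, the spectral-norm error equals the first discarded singular value, so the rank can be chosen to meet a user tolerance. This generic sketch is not the paper's probabilistic analysis; the matrix is random test data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # intrinsic rank 8
A = A + 1e-6 * rng.standard_normal(A.shape)                      # small perturbation

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def rank_for_tolerance(tol):
    """Smallest rank r whose spectral-norm truncation error s[r] is <= tol."""
    for r in range(1, len(s)):
        if s[r] <= tol:
            return r
    return len(s)

r = rank_for_tolerance(1e-3)
A_r = (U[:, :r] * s[:r]) @ Vt[:r]          # rank-r reconstruction
err = np.linalg.norm(A - A_r, 2)           # equals s[r] by Eckart-Young
```

The same loop, run with a user-supplied tolerance, is the simplest form of letting the reduction "self-adapt" to a target error.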
Directory of Open Access Journals (Sweden)
Kim Hyang-Mi
2012-09-01
Full Text Available Abstract Background In epidemiological studies, it is often not possible to measure participants' exposures accurately, even when the response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals in a group/job within the study sample are all assigned the sample mean of the exposure measurements from their group when evaluating the effect of exposure on the response. Exposure is therefore estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. The analysis in a semi-ecological design may also involve exposure data that are mostly missing, with the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this ordering can be incorporated into the estimation procedure by constrained estimation methods together with expectation-maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared in a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation-maximization (CEM). We illustrate the methods with an analysis of decline in lung function due to exposure to carbon black. Results The naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate the group means. The CEM method appears to be the best among them when, within each exposure group, at least a 'moderate' number of individuals have their
Estimating Stochastic Volatility Models using Prediction-based Estimating Functions
DEFF Research Database (Denmark)
Lunde, Asger; Brix, Anne Floor
In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF...
Directory of Open Access Journals (Sweden)
Esteban Jiménez-Rodríguez
2016-12-01
Full Text Available This paper presents an estimation structure for a continuous stirred-tank reactor, which consists of a sliding-mode observer-based estimator coupled with a high-order sliding-mode observer. The scheme allows the robust estimation of the state and some parameters, specifically the concentration of the reactive mass, the heat of reaction and the global coefficient of heat transfer, by measuring the temperature inside the reactor and the temperature inside the jacket. To verify the results, a convergence proof for the proposed structure is given, and numerical simulations are presented with noiseless and noisy measurements, suggesting the applicability of the proposed approach.
Reduced order model of draft tube flow
International Nuclear Information System (INIS)
Rudolf, P; Štefan, D
2014-01-01
Swirling flow with compact coherent structures is a very good candidate for proper orthogonal decomposition (POD), i.e. for decomposition into eigenmodes, which are the cornerstones of the flow field. The present paper focuses on POD of steady flows corresponding to different operating points of Francis turbine draft tube flow. A set of eigenmodes is built using a limited number of snapshots from computational simulations. The resulting reduced order model (ROM) describes the whole operating range of the draft tube. The ROM makes it possible to interpolate between the operating points, exploiting knowledge about the significance of particular eigenmodes, and thus to reconstruct the velocity field at any operating point within the given range. A practical example, which employs axisymmetric simulations of the draft tube flow, illustrates the accuracy of the ROM in regions without vortex breakdown, together with the need for a higher resolution of the snapshot database close to locations of sudden flow changes (e.g. vortex breakdown). A ROM based on POD interpolation is a very suitable tool for insight into the flow physics of draft tube flows (especially energy transfers between different operating points), for supplying data for subsequent stability analysis, or as an initialization database for advanced flow simulations
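The POD/interpolation workflow in the abstract can be illustrated on toy data: build modes from snapshots by SVD, interpolate the modal coefficients to an unseen operating point, and reconstruct the field. The 1-D "velocity profiles" and operating-point parameter below are invented for illustration, not draft-tube data.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
ops = [0.6, 0.8, 1.0, 1.2]          # operating-point parameter values (invented)
snaps = np.column_stack([q * np.sin(np.pi * x) + 0.1 * q**2 * np.sin(3 * np.pi * x)
                         for q in ops])

# POD modes are the left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
modes = U[:, :2]                    # two modes capture these snapshots
coeffs = modes.T @ snaps            # modal coefficients at each operating point

# Interpolate coefficients to an unseen operating point and reconstruct
q_new = 0.9
a_new = np.array([np.interp(q_new, ops, coeffs[k]) for k in range(2)])
u_new = modes @ a_new

u_true = q_new * np.sin(np.pi * x) + 0.1 * q_new**2 * np.sin(3 * np.pi * x)
rel_err = np.linalg.norm(u_new - u_true) / np.linalg.norm(u_true)
```

As in the abstract, the interpolation is accurate where the coefficients vary smoothly between operating points; near abrupt changes (the vortex-breakdown case) a denser snapshot database would be needed.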
Aeroelastic simulation using CFD based reduced order models
International Nuclear Information System (INIS)
Zhang, W.; Ye, Z.; Li, H.; Yang, Q.
2005-01-01
This paper aims at providing an accurate and efficient method for aeroelastic simulation. System identification is used to obtain reduced order models of the unsteady aerodynamics. An unsteady Euler code is used to compute the output signals, with 3211 multistep input signals as excitation. The least squares (LS) method is used to estimate the coefficients of the input-output difference model. The reduced order models are then used in place of the unsteady CFD code for aeroelastic simulation. The aeroelastic equations are marched by an improved 4th-order Runge-Kutta method that needs to compute the aerodynamic loads only once per time step. The computed results agree well with those of directly coupled CFD/CSD methods, while the computational efficiency is improved by 1-2 orders of magnitude with high accuracy retained. A standard aeroelastic test case (the Isogai wing), which exhibits an S-shaped flutter boundary, is computed and analyzed; the S shape arises because the system has more than one neutral point in the Mach range 0.875-0.9. (author)
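The identification step, fitting an input-output difference model by least squares, can be sketched as follows. A known scalar first-order system stands in for the unsteady Euler code here, and the excitation is only 3211-like; both are illustrative assumptions.

```python
import numpy as np

a_true, b_true = 0.9, 0.5
u = np.array([3.0]*10 + [-2.0]*10 + [1.0]*10 + [-1.0]*10 + [0.0]*20)  # 3211-like input
y = np.zeros(len(u))
for k in range(1, len(u)):
    y[k] = a_true * y[k-1] + b_true * u[k-1]   # stand-in for the CFD output signal

# Least squares: each row [y[k-1], u[k-1]] explains y[k]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_est, b_est = theta
```

Once identified, the cheap difference model replaces the CFD code inside the time-marching aeroelastic loop.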
W-phase estimation of first-order rupture distribution for megathrust earthquakes
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2014-05-01
Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates, with uncertainties, of spatial slip distributions. The algorithm uses W-phase waveforms and a linear automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it is fast, has small amplitude and a long-period character, the W phase is regularly used to estimate point-source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of three-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise on the data is addressed by estimating the data covariance matrix from data residuals: starting with an a priori covariance matrix, the matrix is iteratively updated based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of
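The regularization machinery (a Tikhonov solve plus a discrepancy-principle grid search over the regularization parameter) can be sketched generically. The operator, noise level and true model below are synthetic, not W-phase data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 40
G = rng.standard_normal((n, p)) @ np.diag(1.0 / np.arange(1.0, p + 1.0))  # ill-conditioned
m_true = np.zeros(p); m_true[:5] = 1.0
sigma = 0.01
d = G @ m_true + sigma * rng.standard_normal(n)

def tikhonov(alpha):
    """Solve min ||G m - d||^2 + alpha ||m||^2 via the normal equations."""
    return np.linalg.solve(G.T @ G + alpha * np.eye(p), G.T @ d)

# Discrepancy principle: residual norm should match the expected noise norm
target = sigma * np.sqrt(n)
alphas = np.logspace(-8, 2, 51)
residuals = np.array([np.linalg.norm(G @ tikhonov(a) - d) for a in alphas])
best_alpha = alphas[int(np.argmin(np.abs(residuals - target)))]
m_est = tikhonov(best_alpha)
rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
```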
Empirical Reduced-Order Modeling for Boundary Feedback Flow Control
Directory of Open Access Journals (Sweden)
Seddik M. Djouadi
2008-01-01
Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high-dimensional system as well as frequency-domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters, which accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow past an obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and is then applied to the full-order model with excellent performance.
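A minimal sketch of the ERA step: form Hankel matrices from Markov parameters, truncate their SVD, and read off a state-space realization. Here the Markov parameters come from a known scalar system rather than chirp-driven flow data, purely to show the mechanics.

```python
import numpy as np

a, b, c = 0.8, 1.0, 2.0                              # true 1-state system
h = np.array([c * a**k * b for k in range(20)])      # Markov parameters C A^k B

q = 8
H0 = np.array([[h[i + j] for j in range(q)] for i in range(q)])      # Hankel matrix
H1 = np.array([[h[i + j + 1] for j in range(q)] for i in range(q)])  # shifted Hankel

U, s, Vt = np.linalg.svd(H0)
r = int(np.sum(s > 1e-8 * s[0]))                     # numerical rank = model order
Ur, Vr = U[:, :r], Vt[:r].T
S_half = np.diag(np.sqrt(s[:r]))
S_half_inv = np.diag(1.0 / np.sqrt(s[:r]))

A = S_half_inv @ Ur.T @ H1 @ Vr @ S_half_inv         # identified state matrix
B = (S_half @ Vr.T)[:, :1]
C = (Ur @ S_half)[:1, :]
h_era = np.array([(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(20)])
```

The recovered realization reproduces the Markov parameters exactly here; on noisy experimental data the truncation level would be a modeling choice, and balanced truncation can reduce the realization further.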
Nonparametric estimation in models for unobservable heterogeneity
Hohmann, Daniel
2014-01-01
Nonparametric models which allow for data with unobservable heterogeneity are studied. The first publication introduces new estimators and their asymptotic properties for conditional mixture models. The second publication considers estimation of a function from noisy observations of its Radon transform in a Gaussian white noise model.
MCMC estimation of multidimensional IRT models
Beguin, Anton; Glas, Cornelis A.W.
1998-01-01
A Bayesian procedure to estimate the three-parameter normal ogive model and a generalization to a model with multidimensional ability parameters are discussed. The procedure is a generalization of a procedure by J. Albert (1992) for estimating the two-parameter normal ogive model. The procedure will
Directory of Open Access Journals (Sweden)
P. A. Ermolaev
2014-03-01
Full Text Available Data processing in interferometer systems requires high-resolution, high-speed algorithms. Recurrence algorithms based on a parametric representation of signals process signal samples sequentially. In some cases recurrence algorithms make it possible to increase the speed and quality of data processing compared with classical processing methods. The dependence of the measured interferometer signal on the parameters of its model, together with the stochastic nature of noise formation in the system, is in general nonlinear, so nonlinear stochastic filtering algorithms are appropriate for processing such signals. The extended Kalman filter, which linearizes the state and output equations using first derivatives with respect to the parameter vector, is one example. To decrease the approximation error of this method, second-order extended Kalman filtering is suggested, which additionally uses the second derivatives of the model equations with respect to the parameter vector. Examples of the algorithm's implementation with different sets of estimated parameters are described. The proposed algorithm makes it possible to increase the quality of data processing in interferometer systems whose signals follow the considered models. The obtained standard deviation of the estimated amplitude envelope does not exceed 4% of the maximum, and it is shown that the signal-to-noise ratio of the reconstructed signal is increased by 60%.
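A first-order extended Kalman filter for an interferometer-like signal s_k = A cos(wk + phi) + noise illustrates the linearized update that the abstract builds on; the second-order variant would add second-derivative (Hessian) terms to the linearization. All signal parameters and the two-parameter state below are invented for illustration.

```python
import math, random

random.seed(3)
A_true, phi_true, w = 1.0, 0.7, 0.3
noise_sd = 0.05
z = [A_true * math.cos(w * k + phi_true) + random.gauss(0, noise_sd) for k in range(400)]

x = [0.8, 0.5]                   # initial estimate of [A, phi]
P = [[1.0, 0.0], [0.0, 1.0]]     # state is constant, so no prediction step is needed
R = noise_sd ** 2
for k, zk in enumerate(z):
    c = math.cos(w * k + x[1]); sn = math.sin(w * k + x[1])
    H = [c, -x[0] * sn]                          # Jacobian of h(x) = A*cos(w*k + phi)
    PH = [P[0][0] * H[0] + P[0][1] * H[1], P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PH[0] + H[1] * PH[1] + R          # innovation variance
    K = [PH[0] / S, PH[1] / S]                   # Kalman gain
    innov = zk - x[0] * c
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    # Covariance update: P = (I - K H) P
    P = [[(1 - K[0] * H[0]) * P[0][0] - K[0] * H[1] * P[1][0],
          (1 - K[0] * H[0]) * P[0][1] - K[0] * H[1] * P[1][1]],
         [-K[1] * H[0] * P[0][0] + (1 - K[1] * H[1]) * P[1][0],
          -K[1] * H[0] * P[0][1] + (1 - K[1] * H[1]) * P[1][1]]]
```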
Modeling and Parameter Estimation of a Small Wind Generation System
Directory of Open Access Journals (Sweden)
Carlos A. Ramírez Gómez
2013-11-01
Full Text Available The modeling and parameter estimation of a small wind generation system is presented in this paper. The system consists of a wind turbine, a permanent magnet synchronous generator, a three-phase rectifier, and a direct-current load. In order to estimate the parameters, wind speed data were recorded at a weather station located on the Fraternidad Campus at ITM. The wind speed data were applied to a reference model programmed in PSIM software, and variables from that simulation were recorded to estimate the parameters. The wind generation system model together with the estimated parameters is an excellent representation of the detailed model, while offering greater flexibility than the model programmed in PSIM.
Estimating Canopy Dark Respiration for Crop Models
Monje Mejia, Oscar Alberto
2014-01-01
Crop production is obtained from accurate estimates of daily carbon gain. Canopy gross photosynthesis (Pgross) can be estimated from biochemical models of photosynthesis using sun and shaded leaf portions and the amount of intercepted photosynthetically active radiation (PAR). In turn, canopy daily net carbon gain can be estimated from canopy daily gross photosynthesis when canopy dark respiration (Rd) is known.
Estimates on the minimal period for periodic solutions of nonlinear second order Hamiltonian systems
International Nuclear Information System (INIS)
Yiming Long.
1994-11-01
In this paper, we prove a sharper estimate on the minimal period for periodic solutions of autonomous second order Hamiltonian systems under precisely Rabinowitz' superquadratic condition. (author). 20 refs, 1 fig
Sums and products of sets and estimates of rational trigonometric sums in fields of prime order
Energy Technology Data Exchange (ETDEWEB)
Garaev, Mubaris Z [National Autonomous University of Mexico, Institute of Mathematics (Mexico)
2010-11-16
This paper is a survey of main results on the problem of sums and products of sets in fields of prime order and their applications to estimates of rational trigonometric sums. Bibliography: 85 titles.
Asymptotic estimates and exponential stability for higher-order monotone difference equations
Directory of Open Access Journals (Sweden)
Pituk Mihály
2005-01-01
Full Text Available Asymptotic estimates are established for higher-order scalar difference equations and inequalities the right-hand sides of which generate a monotone system with respect to the discrete exponential ordering. It is shown that in some cases the exponential estimates can be replaced with a more precise limit relation. As corollaries, a generalization of discrete Halanay-type inequalities and explicit sufficient conditions for the global exponential stability of the zero solution are given.
Improved diagnostic model for estimating wind energy
Energy Technology Data Exchange (ETDEWEB)
Endlich, R.M.; Lee, J.D.
1983-03-01
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
On parameter estimation in deformable models
DEFF Research Database (Denmark)
Fisker, Rune; Carstensen, Jens Michael
1998-01-01
Deformable templates have been intensively studied in image analysis over the last decade, but despite their significance the estimation of model parameters has received little attention. We present a method for supervised and unsupervised model parameter estimation using a general Bayesian form...
Ordering dynamics of microscopic models with nonconserved order parameter of continuous symmetry
DEFF Research Database (Denmark)
Zhang, Z.; Mouritsen, Ole G.; Zuckermann, Martin J.
1993-01-01
Numerical Monte Carlo temperature-quenching experiments have been performed on two three-dimensional classical lattice models with continuous ordering symmetry: the Lebwohl-Lasher model [Phys. Rev. A 6, 426 (1972)] and the ferromagnetic isotropic Heisenberg model. Both models describe a transition from a disordered phase to an orientationally ordered phase of continuous symmetry. The Lebwohl-Lasher model accounts for the orientational ordering properties of the nematic-isotropic transition in liquid crystals and the Heisenberg model for the ferromagnetic-paramagnetic transition in magnetic crystals. For both models, which have a nonconserved order parameter, it is found that the linear scale, R(t), of the evolving order, following quenches to below the transition temperature, grows at late times in an effectively algebraic fashion, R(t)∼tn, with exponent values which are strongly temperature...
Model selection criteria : how to evaluate order restrictions
Kuiper, R.M.
2012-01-01
Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might
Parameter Estimation of Partial Differential Equation Models.
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Carroll, Raymond J; Maity, Arnab
2013-01-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solution and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving the PDE numerically under thousands of candidate parameter values, so the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled by the PDE is represented via a basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for the data and the PDE, and a novel hierarchical model that allows us to employ Markov chain Monte Carlo (MCMC) techniques for posterior inference. Simulation studies show that the Bayesian method and the parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from LIDAR data.
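The computational burden described above is easy to see in a toy version of the problem: estimating the diffusion coefficient D in u_t = D u_xx by repeatedly solving the PDE numerically over a grid of candidate values. This brute-force sketch on synthetic data is exactly the kind of approach the parameter cascading and Bayesian methods aim to improve on.

```python
import numpy as np

def solve_heat(D, nx=50, nt=200, dt=1e-4):
    """Explicit finite differences for u_t = D*u_xx on [0,1], u=0 at the ends."""
    dx = 1.0 / (nx - 1)
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx))   # initial condition
    for _ in range(nt):
        u[1:-1] += D * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

rng = np.random.default_rng(7)
D_true = 0.5
data = solve_heat(D_true) + 0.001 * rng.standard_normal(50)   # noisy observations

# Grid search: one full PDE solve per candidate value of D
candidates = np.linspace(0.1, 1.0, 91)
sse = [np.sum((solve_heat(D) - data) ** 2) for D in candidates]
D_est = candidates[int(np.argmin(sse))]
```

Even this tiny 1-D example needs 91 PDE solves; realistic models with several parameters multiply that cost dramatically, which motivates representing the solution with basis expansions instead.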
Model Order Reduction for Non Linear Mechanics
Pinillo, Rubén
2017-01-01
Context: Automotive industry is moving towards a new generation of cars. Main idea: Cars are furnished with radars, cameras, sensors, etc… providing useful information about the environment surrounding the car. Goals: Provide an efficient model for the radar input/output. Reducing computational costs by means of big data techniques.
Chen, Jonathan H; Goldstein, Mary K; Asch, Steven M; Mackey, Lester; Altman, Russ B
2017-05-01
Build probabilistic topic model representations of hospital admissions processes and compare the ability of such models to predict clinical order patterns against preconstructed order sets. The authors evaluated the first 24 hours of structured electronic health record data for > 10 K inpatients. Drawing an analogy between structured items (e.g., clinical orders) and words in a text document, the authors performed latent Dirichlet allocation probabilistic topic modeling. These topic models use initial clinical information to predict clinical orders for a separate validation set of > 4 K patients. The authors evaluated these topic model-based predictions vs existing human-authored order sets by area under the receiver operating characteristic curve (AUROC), precision, and recall for subsequent clinical orders. Existing order sets predict clinical orders used within 24 hours with AUROC 0.81, precision 16%, and recall 35%. This can be improved to 0.90, 24%, and 47% (P ... Order sets tend to provide nonspecific, process-oriented aid, with usability limitations impairing more precise, patient-focused support. Algorithmic summarization has the potential to breach this usability barrier by automatically inferring patient context, but with potential tradeoffs in interpretability. Probabilistic topic modeling provides an automated approach to detect thematic trends in patient care and generate decision support content. A potential use case finds related clinical orders for decision support. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Model Order Reduction for Electronic Circuits:
DEFF Research Database (Denmark)
Hjorth, Poul G.; Shontz, Suzanne
Electronic circuits are ubiquitous; they are used in numerous industries including: the semiconductor, communication, robotics, auto, and music industries (among many others). As products become more and more complicated, their electronic circuits also grow in size and complexity. This increased...... in the semiconductor industry. Circuit simulation proceeds by using Maxwell’s equations to create a mathematical model of the circuit. The boundary element method is then used to discretize the equations, and the variational form of the equations are then solved on the graph network....
Temporal rainfall estimation using input data reduction and model inversion
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be simultaneously estimated along with model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows for model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower order decomposition structures was able to estimate the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contains sufficient information to estimate temporal rainfall and model parameter distributions. The extent and variance of rainfall time series that are able to simulate streamflow that is superior to that simulated by a traditional calibration approach is a
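The input-reduction idea can be sketched with a one-level Haar discrete wavelet transform: it splits a rainfall series into approximation and detail coefficients, and keeping only the approximation halves the number of unknowns to estimate. The rainfall values below are made up.

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT; len(x) must be even."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)])
    return out

rain = [0.0, 0.0, 2.5, 4.0, 1.0, 0.5, 0.0, 0.0]        # invented hourly rainfall
approx, detail = haar_dwt(rain)
rain_back = haar_idwt(approx, detail)                   # perfect reconstruction
rain_reduced = haar_idwt(approx, [0.0] * len(approx))   # half the unknowns kept
```

In the inversion described above, the approximation coefficients (rather than the raw time series) would be estimated jointly with the model parameters; the reduced series preserves total rainfall depth while smoothing short-scale detail.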
Context Tree Estimation in Variable Length Hidden Markov Models
Dumont, Thierry
2011-01-01
We address the issue of context tree estimation in variable length hidden Markov models. We propose an estimator of the context tree of the hidden Markov process which needs no prior upper bound on the depth of the context tree. We prove that the estimator is strongly consistent. This uses information-theoretic mixture inequalities in the spirit of Finesso and Lorenzo (Consistent estimation of the order for Markov and hidden Markov chains (1990)) and E. Gassiat and S. Boucheron (Optimal error exp...
NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM ...
African Journals Online (AJOL)
NEW MODEL FOR SOLAR RADIATION ESTIMATION FROM MEASURED AIR TEMPERATURE AND ... Nigerian Journal of Technology ... Solar radiation measurement is not sufficient in Nigeria for various reasons such as maintenance and ...
McNeish, Daniel; Dumas, Denis
2017-01-01
Recent methodological work has highlighted the promise of nonlinear growth models for addressing substantive questions in the behavioral sciences. In this article, we outline a second-order nonlinear growth model in order to measure a critical notion in development and education: potential. Here, potential is conceptualized as having three components-ability, capacity, and availability-where ability is the amount of skill a student is estimated to have at a given timepoint, capacity is the maximum amount of ability a student is predicted to be able to develop asymptotically, and availability is the difference between capacity and ability at any particular timepoint. We argue that single timepoint measures are typically insufficient for discerning information about potential, and we therefore describe a general framework that incorporates a growth model into the measurement model to capture these three components. Then, we provide an illustrative example using the public-use Early Childhood Longitudinal Study-Kindergarten data set using a Michaelis-Menten growth function (reparameterized from its common application in biochemistry) to demonstrate our proposed model as applied to measuring potential within an educational context. The advantage of this approach compared to currently utilized methods is discussed as are future directions and limitations.
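A sketch of the three components using a Michaelis-Menten growth function, ability(t) = capacity * t / (K + t): capacity is the asymptote and availability is capacity minus ability. The parameter values are illustrative, not estimates from the ECLS-K data.

```python
def ability(t, capacity, K):
    """Michaelis-Menten growth: skill at time t; K is the time to half capacity."""
    return capacity * t / (K + t)

capacity, K = 100.0, 2.0            # hypothetical asymptote and half-time
for t in [1.0, 2.0, 8.0]:
    ab = ability(t, capacity, K)
    availability = capacity - ab    # room left to grow at time t
```

At t = K the student has reached exactly half of capacity, which is why K is a convenient interpretable parameter of the reparameterized curve.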
Parameter Estimation of Nonlinear Models in Forestry.
Fekedulegn, Desta; Mac Siúrtáin, Máirtín Pádraig; Colbert, Jim J.
1999-01-01
Partial derivatives of the negative exponential, monomolecular, Mitscherlich, Gompertz, logistic, Chapman-Richards, von Bertalanffy, Weibull and Richards nonlinear growth models are presented. The application of these partial derivatives in estimating the model parameters is illustrated. The parameters are estimated using the Marquardt iterative method of nonlinear regression relating top height to age of Norway spruce (Picea abies L.) from the Bowmont Norway Spruce Thinnin...
Directory of Open Access Journals (Sweden)
Yu Huang
Full Text Available Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization, and can essentially be formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation problem for fractional-order chaotic systems. QPPSO exploits the parallel characteristic of quantum computing, which exponentially increases the computation performed in each generation. The behavior of particles in quantum space is governed by a quantum evolution equation, which involves the current rotation angle, the individual optimal quantum rotation angle, and the global optimal quantum rotation angle. Numerical simulations on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm.
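The estimation-as-optimization formulation can be sketched with a classical (non-quantum, non-parallel) PSO recovering the parameters of a simple exponential-decay model in place of a fractional-order chaotic system; the swarm settings and target model are illustrative assumptions.

```python
import math, random

random.seed(4)

ts = [0.1 * i for i in range(30)]
a_true, b_true = 2.0, 0.7
data = [a_true * math.exp(-b_true * t) for t in ts]   # "measured" response

def cost(p):
    """Squared prediction error: the objective a parameter-estimation PSO minimizes."""
    a, b = p
    return sum((a * math.exp(-b * t) - y) ** 2 for t, y in zip(ts, data))

n, dims, iters = 30, 2, 200
lo, hi = [0.0, 0.0], [5.0, 3.0]
pos = [[random.uniform(lo[d], hi[d]) for d in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
pcost = [cost(p) for p in pos]
gi = min(range(n), key=lambda i: pcost[i])
gbest, gcost = pbest[gi][:], pcost[gi]

for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(hi[d], max(lo[d], pos[i][d] + vel[i][d]))
        c = cost(pos[i])
        if c < pcost[i]:
            pbest[i], pcost[i] = pos[i][:], c
            if c < gcost:
                gbest, gcost = pos[i][:], c
```

The QPPSO variant replaces the velocity update with quantum rotation angles evaluated in parallel, but the estimation problem it solves has exactly this shape.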
INTEGRATED SPEED ESTIMATION MODEL FOR MULTILANE EXPRESSWAYS
Hong, Sungjoon; Oguchi, Takashi
In this paper, an integrated speed-estimation model is developed based on empirical analyses for the basic sections of intercity multilane expressway under the uncongested condition. This model enables a speed estimation for each lane at any site under arbitrary highway-alignment, traffic (traffic flow and truck percentage), and rainfall conditions. By combining this model and a lane-use model which estimates traffic distribution on the lanes by each vehicle type, it is also possible to estimate an average speed across all the lanes of one direction from a traffic demand by vehicle type under specific highway-alignment and rainfall conditions. This model is expected to be a tool for the evaluation of traffic performance for expressways when the performance measure is travel speed, which is necessary for Performance-Oriented Highway Planning and Design. Regarding the highway-alignment condition, two new estimators, called effective horizontal curvature and effective vertical grade, are proposed in this paper which take into account the influence of upstream and downstream alignment conditions. They are applied to the speed-estimation model, and it shows increased accuracy of the estimation.
Partial Orders and Fully Abstract Models for Concurrency
DEFF Research Database (Denmark)
Engberg, Uffe Henrik
1990-01-01
In this thesis sets of labelled partial orders are employed as fundamental mathematical entities for modelling nondeterministic and concurrent processes thereby obtaining so-called noninterleaving semantics. Based on different closures of sets of labelled partial orders, simple algebraic language...
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss...... in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Semi-parametric estimation for ARCH models
Directory of Open Access Journals (Sweden)
Raed Alzghool
2018-03-01
Full Text Available In this paper, we conduct semi-parametric estimation for the autoregressive conditional heteroscedasticity (ARCH) model with the Quasi-likelihood (QL) and Asymptotic Quasi-likelihood (AQL) estimation methods. The QL approach relaxes the distributional assumptions of ARCH processes. The AQL technique is obtained from the QL method when the process conditional variance is unknown. We present an application of the methods to a daily exchange rate series. Keywords: ARCH model, Quasi-likelihood (QL), Asymptotic Quasi-likelihood (AQL), Martingale difference, Kernel estimator
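For intuition about what any ARCH(1) estimator is recovering, here is a deliberately naive baseline, not the QL/AQL methods of the paper: simulate an ARCH(1) process and regress squared returns on lagged squared returns, since the model implies E[eps_t^2 | eps_{t-1}] = omega + alpha * eps_{t-1}^2. All parameter values below are invented for the simulation.

```python
import numpy as np

rng = np.random.default_rng(42)
omega, alpha, n = 0.2, 0.3, 20000   # hypothetical ARCH(1) parameters

eps = np.zeros(n)
for t in range(1, n):
    sigma2 = omega + alpha * eps[t - 1] ** 2   # conditional variance
    eps[t] = np.sqrt(sigma2) * rng.standard_normal()

# Least-squares regression of eps_t^2 on eps_{t-1}^2
y = eps[1:] ** 2
x = eps[:-1] ** 2
X = np.column_stack([np.ones_like(x), x])
omega_hat, alpha_hat = np.linalg.lstsq(X, y, rcond=None)[0]
```

The QL and AQL estimators in the paper improve on this by weighting the estimating equations by the (possibly unknown) conditional variance, which is what makes them semi-parametric.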
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei
2013-09-01
Partial differential equation (PDE) models are commonly used to model complex dynamic systems in applied sciences such as biology and finance. The forms of these PDE models are usually proposed by experts based on their prior knowledge and understanding of the dynamic system. Parameters in PDE models often have interesting scientific interpretations, but their values are often unknown and need to be estimated from the measurements of the dynamic system in the presence of measurement errors. Most PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus the computational load is high. In this article, we propose two methods to estimate parameters in PDE models: a parameter cascading method and a Bayesian approach. In both methods, the underlying dynamic process modeled with the PDE model is represented via basis function expansion. For the parameter cascading method, we develop two nested levels of optimization to estimate the PDE parameters. For the Bayesian method, we develop a joint model for data and the PDE and develop a novel hierarchical model allowing us to employ Markov chain Monte Carlo (MCMC) techniques to make posterior inference. Simulation studies show that the Bayesian method and parameter cascading method are comparable, and both outperform other available methods in terms of estimation accuracy. The two methods are demonstrated by estimating parameters in a PDE model from long-range infrared light detection and ranging data. Supplementary materials for this article are available online. © 2013 American Statistical Association.
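A far simpler relative of the basis-expansion approaches above, useful for intuition, is gradient matching: approximate the derivatives appearing in the PDE directly from the (here noise-free, synthetic) data and solve a least-squares problem for the parameter, avoiding repeated numerical PDE solves. The heat equation u_t = theta * u_xx and all grid choices below are illustrative.

```python
import numpy as np

theta_true = 0.1                      # true diffusion coefficient (hypothetical)
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.5, 201)
X, T = np.meshgrid(x, t, indexing='ij')
# Exact solution of u_t = theta * u_xx with sin(pi x) initial condition
U = np.exp(-theta_true * np.pi**2 * T) * np.sin(np.pi * X)

dx, dt = x[1] - x[0], t[1] - t[0]
u_t = (U[:, 2:] - U[:, :-2]) / (2 * dt)                   # central difference in t
u_xx = (U[2:, :] - 2 * U[1:-1, :] + U[:-2, :]) / dx**2    # central difference in x
u_t = u_t[1:-1, :]        # restrict both to common interior points
u_xx = u_xx[:, 1:-1]

# Least-squares slope of u_t against u_xx recovers theta
theta_hat = np.sum(u_t * u_xx) / np.sum(u_xx ** 2)
```

With noisy data this direct approach degrades quickly, which is precisely the motivation for the smoothing-based parameter cascading and Bayesian methods proposed in the article.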
Variations in wave direction estimated using first and second order Fourier coefficients
Digital Repository Service at National Institute of Oceanography (India)
SanilKumar, V.; Anand, N.M.
to the peak frequency are used in practice. In the present study, comparison is made on wave directions estimated based on first and second order Fourier coefficients using data collected at four locations in the west and east coasts of India. Study shows...
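The standard circular-moment formulas behind such comparisons can be sketched directly: the first-order direction comes from atan2 of the first-order Fourier coefficients (b1, a1), and the second-order direction from half the angle of (b2, a2). The cosine-power spreading function below is a hypothetical stand-in for measured directional spectra.

```python
import numpy as np

theta0 = np.deg2rad(60.0)                          # true mean wave direction
theta = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
dtheta = theta[1] - theta[0]

# hypothetical cosine-power directional spreading centred on theta0
D = np.cos((theta - theta0) / 2.0) ** 16
D /= np.sum(D) * dtheta                            # normalize to unit area

a1 = np.sum(D * np.cos(theta)) * dtheta
b1 = np.sum(D * np.sin(theta)) * dtheta
a2 = np.sum(D * np.cos(2 * theta)) * dtheta
b2 = np.sum(D * np.sin(2 * theta)) * dtheta

dir1 = np.arctan2(b1, a1)            # first-order (mean) direction
dir2 = 0.5 * np.arctan2(b2, a2)      # second-order direction
```

For a symmetric, unimodal spreading both estimates agree; with real buoy data the two can differ, which is the variation examined in the study.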
Large deviation estimates for a Non-Markovian Lévy generator of big order
International Nuclear Information System (INIS)
Léandre, Rémi
2015-01-01
We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated with this semi-group. (paper)
A reduced order model of a quadruped walking system
International Nuclear Information System (INIS)
Sano, Akihito; Furusho, Junji; Naganuma, Nobuyuki
1990-01-01
Trot walking has recently been studied by several groups because of its stability and realizability. In the trot, diagonally opposed legs form pairs. While one pair of legs provides support, the other pair of legs swings forward in preparation for the next step. In this paper, we propose a reduced order model for the trot walking. The reduced order model is derived by using two dominant modes of the closed loop system in which the local feedback at each joint is implemented. It is shown by numerical examples that the obtained reduced order model can well approximate the original higher order model. (author)
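The idea of keeping only dominant modes can be sketched for a generic diagonalizable linear system (the eigenvalues, eigenvectors and initial state below are invented, not the quadruped model): project onto the two slowest modes and compare with the full response once the fast modes have decayed.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([-0.1, -0.2, -5.0, -8.0])        # two slow (dominant), two fast modes
V = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # eigenvector matrix (assumed invertible)
Vinv = np.linalg.inv(V)
x0 = np.array([1.0, -1.0, 0.5, 2.0])            # hypothetical initial state

def x_full(t):
    # Exact free response of x' = A x with A = V diag(lam) V^{-1}
    return V @ (np.exp(lam * t) * (Vinv @ x0))

def x_reduced(t, r=2):
    # Reduced-order model: keep only the r slowest (dominant) modes
    return V[:, :r] @ (np.exp(lam[:r] * t) * (Vinv @ x0)[:r])
```

After the fast transients die out, the two-mode model tracks the full fourth-order response closely, mirroring the paper's observation that two dominant closed-loop modes suffice.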
Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA
Czech Academy of Sciences Publication Activity Database
Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří
2008-01-01
Roč. 2008, č. 2008 (2008), s. 1-11 ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Program:FP6 Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf
Estimation of Multiple Point Sources for Linear Fractional Order Systems Using Modulating Functions
Belkhatir, Zehor
2017-06-28
This paper proposes an estimation algorithm for the characterization of multiple point inputs for linear fractional order systems. First, using polynomial modulating functions method and a suitable change of variables the problem of estimating the locations and the amplitudes of a multi-pointwise input is decoupled into two algebraic systems of equations. The first system is nonlinear and solves for the time locations iteratively, whereas the second system is linear and solves for the input’s amplitudes. Second, closed form formulas for both the time location and the amplitude are provided in the particular case of single point input. Finally, numerical examples are given to illustrate the performance of the proposed technique in both noise-free and noisy cases. The joint estimation of pointwise input and fractional differentiation orders is also presented. Furthermore, a discussion on the performance of the proposed algorithm is provided.
Conditional shape models for cardiac motion estimation
DEFF Research Database (Denmark)
Metz, Coert; Baka, Nora; Kirisli, Hortense
2010-01-01
We propose a conditional statistical shape model to predict patient specific cardiac motion from the 3D end-diastolic CTA scan. The model is built from 4D CTA sequences by combining atlas based segmentation and 4D registration. Cardiac motion estimation is, for example, relevant in the dynamic...
FUZZY MODELING BY SUCCESSIVE ESTIMATION OF RULES ...
African Journals Online (AJOL)
This paper presents an algorithm for automatically deriving fuzzy rules directly from a set of input-output data of a process for the purpose of modeling. The rules are extracted by a method termed successive estimation. This method is used to generate a model without truncating the number of fired rules, to within user ...
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
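The effect of robust loss functions on parameter estimates from data with outliers can be sketched with SciPy's `least_squares`, which supports a Huber loss. This is a generic nonlinear fit (the exponential model, noise level and outlier pattern are invented), not the spline-based penalized method of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 80)
a_true, b_true = 2.0, 0.8                         # hypothetical true parameters
y = a_true * np.exp(-b_true * t) + rng.normal(0.0, 0.05, t.size)
y[::10] += 3.0                                    # inject gross outliers

def residuals(p):
    a, b = p
    return a * np.exp(-b * t) - y

fit_ls = least_squares(residuals, x0=[1.0, 1.0])                  # plain least squares
fit_rb = least_squares(residuals, x0=[1.0, 1.0],
                       loss='huber', f_scale=0.1)                 # robust (Huber) fit
```

The Huber loss bounds the influence of the injected outliers, so `fit_rb` stays near the true parameters while the plain least-squares fit is pulled away, the same qualitative behavior the robust ODE method is designed to achieve.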
Statistical Model-Based Face Pose Estimation
Institute of Scientific and Technical Information of China (English)
GE Xinliang; YANG Jie; LI Feng; WANG Huahua
2007-01-01
A robust face pose estimation approach is proposed in which a statistical model of face shape is used and pose parameters are represented by trigonometric functions. The face shape statistical model is first built by analyzing the face shapes of different people under varying poses. The shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.
Fractional-order in a macroeconomic dynamic model
David, S. A.; Quintino, D. D.; Soliani, J.
2013-10-01
In this paper, we apply the Riemann-Liouville approach to carry out numerical simulations of a set of equations that represents a fractional-order macroeconomic dynamic model. It is a generalization of a dynamic model recently reported in the literature. The aforementioned equations have been simulated for several cases involving integer- and non-integer-order analysis, with several different values of the fractional order. The time histories and the phase diagrams have been plotted to visualize the effect of the fractional-order approach. The new contribution of this work arises from the fact that the macroeconomic dynamic model proposed here involves the public sector deficit equation, which renders the model more realistic and complete when compared with the ones encountered in the literature. The results reveal that the fractional-order macroeconomic model can exhibit realistic behavior for macroeconomic systems and might offer greater insight towards the understanding of these complex dynamic systems.
Roof planes detection via a second-order variational model
Benciolini, Battista; Ruggiero, Valeria; Vitti, Alfonso; Zanetti, Massimo
2018-04-01
The paper describes a unified automatic procedure for the detection of roof planes in gridded height data. The procedure exploits the Blake-Zisserman (BZ) model for segmentation in both 2D and 1D, and aims to detect, to model and to label roof planes. The BZ model relies on the minimization of a functional that depends on first- and second-order derivatives, free discontinuities and free gradient discontinuities. During the minimization, the relative strength of each competitor is controlled by a set of weight parameters. By finding the minimum of the approximated BZ functional, one obtains: (1) an approximation of the data that is smoothed solely within regions of homogeneous gradient, and (2) an explicit detection of the discontinuities and gradient discontinuities of the approximation. Firstly, input data is segmented using the 2D BZ. The maps of data and gradient discontinuities are used to isolate building candidates and planar patches (i.e. regions with homogeneous gradient) that correspond to roof planes. Connected regions that cannot be considered as buildings are filtered according to both patch dimension and distribution of the directions of the normals to the boundary. The 1D BZ model is applied to the curvilinear coordinates of boundary points of building candidates in order to reduce the effect of data granularity when the normals are evaluated. In particular, corners are preserved and can be detected by means of gradient discontinuity. Lastly, a total least squares model is applied to estimate the parameters of the plane that best fits the points of each planar patch (orthogonal regression with planar model). Refinement of planar patches is performed by assigning those points that are close to the boundaries to the planar patch for which a given proximity measure assumes the smallest value. The proximity measure is defined to account for the variance of a fitting plane and a weighted distance of a point from the plane. The effectiveness of the
Direct Importance Estimation with Gaussian Mixture Models
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
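What the importance is used for can be sketched with the crudest possible baseline, a single-Gaussian plug-in ratio (GM-KLIEP instead models the ratio itself with a mixture, which avoids dividing two estimated densities): weighting training points by the estimated ratio makes training statistics match test statistics. All distribution parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
x_tr = rng.normal(0.0, 1.0, 20000)   # training (denominator) sample
x_te = rng.normal(0.5, 1.0, 20000)   # test (numerator) sample

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Plug-in importance estimate: fit a Gaussian to each sample, take the ratio
mu_tr, sd_tr = x_tr.mean(), x_tr.std()
mu_te, sd_te = x_te.mean(), x_te.std()
w = gauss_pdf(x_tr, mu_te, sd_te) / gauss_pdf(x_tr, mu_tr, sd_tr)

# Importance-weighted training average should match the test mean
reweighted_mean = np.sum(w * x_tr) / np.sum(w)
```

Direct density-ratio methods like KLIEP and GM-KLIEP avoid the instability of this two-step plug-in approach, especially in higher dimensions.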
Higher-order RANS turbulence models for separated flows
National Aeronautics and Space Administration — Higher-order Reynolds-averaged Navier-Stokes (RANS) models are developed to overcome the shortcomings of second-moment RANS models in predicting separated flows....
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang; Youssef, Marzouk; Willcox, Karen
2014-01-01
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce
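The core of a data-driven projection-based reduction can be sketched with POD/SVD on snapshots of a linear map whose dynamics live on a low-dimensional subspace (the system below is synthetic and invented; real inverse-problem models are nonlinear and far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
# High-dimensional linear map whose dynamics live on an r-dimensional subspace
basis, _ = np.linalg.qr(rng.normal(size=(n, r)))
A = basis @ np.diag([0.9, 0.7, 0.5]) @ basis.T     # x_{k+1} = A @ x_k

# Collect snapshots from one trajectory (the "data" in data-driven)
x = basis @ rng.normal(size=r)
snaps = [x]
for _ in range(20):
    x = A @ x
    snaps.append(x)
S = np.column_stack(snaps)

# Build the projection basis from the leading left singular vectors
U, s, _ = np.linalg.svd(S, full_matrices=False)
Ur = U[:, :r]
A_r = Ur.T @ A @ Ur                                 # r x r reduced operator
```

Each evaluation of the reduced model costs O(r^2) instead of O(n^2), which is the saving that makes repeated MCMC model evaluations affordable.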
The second-order decomposition model of nonlinear irregular waves
DEFF Research Database (Denmark)
Yang, Zhi Wen; Bingham, Harry B.; Li, Jin Xuan
2013-01-01
into the first- and the second-order super-harmonic as well as the second-order sub-harmonic components by transferring them into an identical Fourier frequency-space and using a Newton-Raphson iteration method. In order to evaluate the present model, a variety of monochromatic waves and the second...
Thresholding projection estimators in functional linear models
Cardot, Hervé; Johannes, Jan
2010-01-01
We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows us to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis, which makes it easy to obtain mean squ...
Clock error models for simulation and estimation
International Nuclear Information System (INIS)
Meditch, J.S.
1981-10-01
Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction.
Mechhoud, Sarra; Laleg-Kirati, Taous-Meriem
2016-01-01
In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term
Estimation and uncertainty of reversible Markov models.
Trendelkamp-Schroer, Benjamin; Wu, Hao; Paul, Fabian; Noé, Frank
2015-11-07
Reversibility is a key concept in Markov models and master-equation models of molecular kinetics. The analysis and interpretation of the transition matrix encoding the kinetic properties of the model rely heavily on the reversibility property. The estimation of a reversible transition matrix from simulation data is, therefore, crucial to the successful application of the previously developed theory. In this work, we discuss methods for the maximum likelihood estimation of transition matrices from finite simulation data and present a new algorithm for the case in which reversibility with respect to a given stationary vector is desired. We also develop new methods for the Bayesian posterior inference of reversible transition matrices with and without given stationary vector, taking into account the need for a suitable prior distribution preserving the meta-stable features of the observed process during posterior inference. All algorithms here are implemented in the PyEMMA software--http://pyemma.org--as of version 2.0.
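A simple way to see what reversibility demands of an estimator is the symmetrization baseline below (a crude stand-in, not the constrained maximum likelihood estimator of the paper, which is statistically more efficient): symmetrizing the count matrix yields a transition matrix that satisfies detailed balance exactly. The count matrix is a toy example.

```python
import numpy as np

# Toy count matrix C[i, j]: number of observed i -> j transitions (invented)
C = np.array([[80, 15,  5],
              [10, 60, 20],
              [ 8, 25, 70]], dtype=float)

S = C + C.T                              # symmetrized counts
T = S / S.sum(axis=1, keepdims=True)     # row-stochastic reversible transition matrix
pi = S.sum(axis=1) / S.sum()             # its stationary distribution

# Detailed balance holds by construction: pi_i * T_ij == pi_j * T_ji,
# because pi_i * T_ij = S_ij / S_total is symmetric in (i, j).
```

The MLE algorithms in the paper maximize the likelihood of the observed (asymmetric) counts subject to this detailed-balance constraint, rather than simply averaging forward and backward counts.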
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, which results in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that the optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous results. However, GA-based optimisation has been found attractive due to its global optimisation capability.
Parameter Estimation for Thurstone Choice Models
Energy Technology Data Exchange (ETDEWEB)
Vojnovic, Milan [London School of Economics (United Kingdom); Yun, Seyoung [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-04-24
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
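For the pair-comparison special case mentioned above (the Bradley-Terry model), maximum likelihood strengths can be computed with the classical Zermelo minorization-maximization iteration. The win counts below are invented toy data.

```python
import numpy as np

# wins[i, j] = number of times item i beat item j (hypothetical data)
wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)
n_games = wins + wins.T          # games played between each pair
k = wins.shape[0]
w = np.ones(k)                   # strength parameters, initialized uniformly
total_wins = wins.sum(axis=1)

for _ in range(200):
    # MM update: w_i <- W_i / sum_{j != i} n_ij / (w_i + w_j)
    denom = np.array([
        sum(n_games[i, j] / (w[i] + w[j]) for j in range(k) if j != i)
        for i in range(k)
    ])
    w = total_wins / denom
    w /= w.sum()                 # fix the scale (strengths are scale-invariant)
```

The iteration converges whenever the comparison graph is connected; rank-breaking, as discussed in the abstract, reduces richer comparison sets to exactly this kind of pairwise likelihood.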
Sparsity enabled cluster reduced-order models for control
Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.
2018-01-01
Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
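The CROM construction (cluster the snapshots, then count transitions between clusters) can be sketched on a toy trajectory. A noisy 2D oscillator stands in for high-dimensional flow data, and the cluster count and all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)
# Trajectory of a noisy 2D oscillator (stand-in for high-dimensional snapshots)
t = np.arange(2000) * 0.05
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.normal(size=(2000, 2))

# Minimal k-means, initialized at k evenly spaced points on the unit circle
k = 6
angles = 2.0 * np.pi * np.arange(k) / k
centers = np.column_stack([np.cos(angles), np.sin(angles)])
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                        else centers[c] for c in range(k)])

# Cluster transition matrix: a Markov model on the k clusters
P = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    P[a, b] += 1
rowsum = P.sum(axis=1, keepdims=True)
rowsum[rowsum == 0] = 1.0
P = P / rowsum
```

The row-stochastic matrix P is the data-driven Perron-Frobenius discretization; the sparsity-enabled variant in the paper builds the same object from a small set of compressive measurements instead of the full state.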
Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery
King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.
2018-02-01
Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of synthetic rating curves produced through sensitivity analyses show that reasonable ranges of parameter values result in mean percent errors in predicted discharges of 12%-27%.
Spiking and bursting patterns of fractional-order Izhikevich model
Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha
2018-03-01
Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phase of the burst increase when the fractional-order model further deviates from the classical model. For smaller fractional order, the model produces memory dependent spiking activity after the pulse signal turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
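The memory trace described above comes from the slowly decaying weights of the fractional difference operator. In the Grünwald-Letnikov discretization, the weights are (-1)^k * binom(alpha, k), generated by a simple recurrence; for alpha = 1 they collapse to an ordinary first difference (no memory), while for alpha < 1 every past value keeps a nonzero weight.

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k),
    # via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

# alpha = 1: reduces to the first difference [1, -1, 0, 0, ...] (memoryless)
w_int = gl_weights(1.0, 5)

# alpha = 0.8: a slowly decaying negative tail, i.e. the memory trace that
# integrates all past activity of the neuron
w_frac = gl_weights(0.8, 5)
```

This is a generic numerical sketch of fractional dynamics, not the specific Izhikevich discretization used in the paper.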
Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models
Directory of Open Access Journals (Sweden)
Shelton Peiris
2017-12-01
Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating long memory in the stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.
Modulating functions method for parameters estimation in the fifth order KdV equation
Asiri, Sharefa M.
2017-07-25
In this work, the modulating functions method is proposed for estimating coefficients in a higher-order nonlinear partial differential equation, namely the fifth-order Korteweg-de Vries (KdV) equation. The proposed method transforms the problem into a system of linear algebraic equations in the unknowns. The statistical properties of the modulating functions solution are described in this paper. In addition, guidelines for choosing the number of modulating functions, which is an important design parameter, are provided. The effectiveness and robustness of the proposed method are shown through numerical simulations in both noise-free and noisy cases.
A comparison of zero-order, first-order, and Monod biotransformation models
International Nuclear Information System (INIS)
Bekins, B.A.; Warren, E.; Godsy, E.M.
1998-01-01
Under some conditions, a first-order kinetic model is a poor representation of biodegradation in contaminated aquifers. Although it is well known that the assumption of first-order kinetics is valid only when substrate concentration, S, is much less than the half-saturation constant, K_S, this assumption is often made without verification of this condition. The authors present a formal error analysis showing that the relative error in the first-order approximation is S/K_S and in the zero-order approximation the error is K_S/S. They then examine the problems that arise when the first-order approximation is used outside the range for which it is valid. A series of numerical simulations comparing results of first- and zero-order rate approximations to Monod kinetics for a real data set illustrates that if concentrations observed in the field are higher than K_S, it may be better to model degradation using a zero-order rate expression. Compared with Monod kinetics, extrapolation of a first-order rate to lower concentrations under-predicts the biotransformation potential, while extrapolation to higher concentrations may grossly over-predict the transformation rate. A summary of solubilities and Monod parameters for aerobic benzene, toluene, and xylene (BTX) degradation shows that the a priori assumption of first-order degradation kinetics at sites contaminated with these compounds is not valid. In particular, out of six published values of K_S for toluene, only one is greater than 2 mg/L, indicating that when toluene is present in concentrations greater than about a part per million, the assumption of first-order kinetics may be invalid. Finally, the authors apply an existing analytical solution for steady-state one-dimensional advective transport with Monod degradation kinetics to a field data set.
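The error analysis above can be verified directly: the first-order approximation uses rate (k_max/K_S)*S in place of the Monod rate k_max*S/(K_S + S), and the relative error between them is exactly S/K_S. The parameter values below are hypothetical, chosen only to exercise both the S << K_S and S >> K_S regimes.

```python
import numpy as np

k_max, K_S = 1.0, 2.0          # hypothetical Monod parameters

def monod_rate(S):
    return k_max * S / (K_S + S)

def first_order_rate(S):
    # Linearization valid only when S << K_S
    return (k_max / K_S) * S

for S in [0.02, 0.2, 2.0, 20.0]:
    rel_err = (first_order_rate(S) - monod_rate(S)) / monod_rate(S)
    # Relative error is exactly S / K_S, as derived in the abstract:
    # (k_max/K_S * S) / (k_max * S / (K_S + S)) - 1 = (K_S + S)/K_S - 1 = S/K_S
    assert np.isclose(rel_err, S / K_S)
```

At S = 0.02 (one percent of K_S) the first-order rate over-predicts by 1%; at S = 20 (ten times K_S) it over-predicts by a factor of ten, which is the "grossly over-predict" regime discussed above.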
Model order reduction techniques with applications in finite element analysis
Qu, Zu-Qing
2004-01-01
Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
Estimates of solutions of certain classes of second-order differential equations in a Hilbert space
International Nuclear Information System (INIS)
Artamonov, N V
2003-01-01
Linear second-order differential equations of the form u''(t)+(B+iD)u'(t)+(T+iS)u(t)=0 in a Hilbert space are studied. Under certain conditions on the (generally speaking, unbounded) operators T, S, B and D, the well-posed solvability of the equation in the 'energy' space is proved and best possible (in the general case) estimates of the solutions on the half-axis are obtained
Fang, Hao; Wei, Yue; Chen, Jie; Xin, Bin
2017-04-01
The problem of flocking of second-order multiagent systems with connectivity preservation is investigated in this paper. First, for estimating the algebraic connectivity as well as the corresponding eigenvector, a new decentralized inverse power iteration scheme is formulated. Then, based on the estimation of the algebraic connectivity, a set of distributed gradient-based flocking control protocols is built with a new class of generalized hybrid potential fields which guarantee collision avoidance, desired distance stabilization, and the connectivity of the underlying communication network simultaneously. Importantly, the proposed control scheme allows existing edges to be broken without violating the connectivity constraints, and thus yields more flexibility of motion and reduces the communication cost for the multiagent system. Finally, nontrivial comparative simulations and experiments are performed to demonstrate the effectiveness of the theoretical results and highlight the advantages of the proposed estimation scheme and control algorithm.
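The numerical core of this approach, estimating the algebraic connectivity lambda_2 of the graph Laplacian by inverse power iteration, can be sketched in centralized form (the paper's contribution is a decentralized variant, which this toy version does not reproduce):

```python
import numpy as np

# Centralized sketch: estimate lambda_2 of a graph Laplacian L by inverse
# power iteration on the subspace orthogonal to the all-ones vector (the
# trivial eigenvector for eigenvalue 0).
def algebraic_connectivity(L, iters=200):
    n = L.shape[0]
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    ones = np.ones(n) / np.sqrt(n)
    M = L + 0.1 * np.eye(n)           # shift so the solves are well posed
    for _ in range(iters):
        x -= (ones @ x) * ones        # deflate the trivial eigenvector
        x = np.linalg.solve(M, x)     # inverse iteration step
        x /= np.linalg.norm(x)
    x -= (ones @ x) * ones
    x /= np.linalg.norm(x)
    return x @ L @ x                  # Rayleigh quotient ~ lambda_2

# Path graph on 3 nodes: Laplacian eigenvalues are 0, 1, 3, so lambda_2 = 1.
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
print(round(algebraic_connectivity(L), 4))  # → 1.0
```

A connectivity-preserving controller only needs the estimate to stay bounded away from zero, which is why tracking lambda_2 (rather than the full spectrum) suffices.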
Adaptive order search and tangent-weighted trade-off for motion estimation in H.264
Directory of Open Access Journals (Sweden)
Srinivas Bachu
2018-04-01
Motion estimation and compensation play a major role in video compression by reducing the temporal redundancies of input videos. A variety of block search patterns have been developed for matching blocks with reduced computational complexity, without affecting the visual quality. In this paper, block motion estimation is achieved by integrating square and hexagonal search patterns with adaptive order. The proposed algorithm, called AOSH (Adaptive Order Square Hexagonal) search, finds the best matching block with a reduced number of search points. The search is formulated as a trade-off criterion, and a tangent-weighted function is newly developed to evaluate the matching point. The proposed AOSH search algorithm and the tangent-weighted trade-off criterion are applied to the block estimation process to enhance the visual quality and the compression performance. The proposed method is validated using three videos, namely football, garden and tennis. The quantitative performance of the proposed method and the existing methods is analysed using the Structural Similarity Index (SSIM) and the Peak Signal to Noise Ratio (PSNR). The results show that the proposed method offers better visual quality than the existing methods. Keywords: Block motion estimation, Square search, Hexagon search, H.264, Video coding
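As context for the search-pattern trade-off described above, here is a generic full-search SAD block matcher — the exhaustive baseline whose cost pattern searches such as square/hexagon reduce by visiting far fewer candidates. This is not the AOSH algorithm itself; frame data and sizes are illustrative.

```python
# Generic sum-of-absolute-differences (SAD) full search, the baseline that
# pattern searches improve on. Not the paper's AOSH algorithm.
def sad(cur, ref, bx, by, dx, dy, n=4):
    """SAD between an n x n block in cur and a displaced block in ref."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(cur, ref, bx, by, radius=2, n=4):
    best_mv, best_cost = None, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if cost < best_cost:
                best_mv, best_cost = (dx, dy), cost
    return best_mv, best_cost

# Pseudo-random 12x12 reference frame; the current frame's content is shifted
# one pixel to the right, so the best match lies at MV = (-1, 0).
vals, seed = [], 1
for _ in range(144):
    seed = (seed * 1103515245 + 12345) % (2 ** 31)
    vals.append(seed % 9)
ref = [vals[r * 12:(r + 1) * 12] for r in range(12)]
cur = [[ref[y][x - 1] if x >= 1 else 0 for x in range(12)] for y in range(12)]

mv, cost = full_search(cur, ref, bx=4, by=4)
print(mv, cost)  # → (-1, 0) 0
```

A full search over a ±r window evaluates (2r+1)^2 candidates per block; square/hexagon patterns visit only a handful per refinement step, which is the complexity saving the abstract refers to.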
Sparse estimation of polynomial dynamical models
Toth, R.; Hjalmarsson, H.; Rojas, C.R.; Kinnaert, M.
2012-01-01
In many practical situations, it is highly desirable to estimate an accurate mathematical model of a real system using as few parameters as possible. This can be motivated either by appeal to a parsimony principle (Occam's razor) or from the viewpoint of the utilization complexity in terms of
Vasta, M.; Roberts, J. B.
1998-06-01
Methods for using fourth order spectral quantities to estimate the unknown parameters in non-linear, randomly excited dynamic systems are developed. Attention is focused on the case where only the response is measurable and the excitation is unmeasurable and known only in terms of a stochastic process model. The approach is illustrated through application to a non-linear oscillator with both non-linear damping and stiffness and with excitation modelled as a stationary Gaussian white noise process. The methods have applications in studies of the response of structures to random environmental loads, such as wind and ocean wave forces.
A General Model for Estimating Macroevolutionary Landscapes.
Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef
2018-03-01
The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
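A rough illustration of the FPK idea: assuming the diffusion dX = (sigma^2/2) V'(X) dt + sigma dW on a bounded interval, the stationary density is proportional to exp(V(x)), so a macroevolutionary landscape V with multiple peaks yields a multimodal trait distribution. The two-peak potential below is hypothetical, not from the paper (the authors provide R code; this is an independent sketch).

```python
import math

# Sketch: stationary density of a trait under an assumed SDE
# dX = (sigma^2/2) V'(X) dt + sigma dW, for which p(x) ~ exp(V(x)).
# V is a hypothetical double-well "macroevolutionary landscape".
def stationary_density(V, grid):
    w = [math.exp(V(x)) for x in grid]
    z = sum(w)
    return [wi / z for wi in w]        # normalized over the grid

V = lambda x: -(x**2 - 1.0)**2        # landscape with optima near x = -1 and x = +1
grid = [i * 0.01 - 2.0 for i in range(401)]
p = stationary_density(V, grid)
mode = grid[p.index(max(p))]
print(round(abs(mode), 2))  # → 1.0 (density peaks at a landscape optimum)
```

This is exactly why FPK can express disruptive selection: unlike a single-optimum Ornstein-Uhlenbeck process, exp(V) inherits every peak of the landscape.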
Time-Frequency Analysis Using Warped-Based High-Order Phase Modeling
Directory of Open Access Journals (Sweden)
Ioana Cornel
2005-01-01
The high-order ambiguity function (HAF) was introduced for the estimation of polynomial-phase signals (PPS) embedded in noise. Since the HAF is a nonlinear operator, it suffers from noise-masking effects and from the appearance of undesired cross-terms when multicomponent PPS are analyzed. To improve the performance of the HAF, the multi-lag HAF concept was proposed. Based on this approach, several advanced methods (e.g., the product high-order ambiguity function (PHAF)) have recently been proposed. Nevertheless, the performance of these new methods is affected by an error-propagation effect which drastically limits the order of the polynomial approximation. This phenomenon is especially pronounced when high-order polynomial modeling is needed: representation of digital modulation signals or acoustic transient signals. The effect is caused by the technique used for polynomial order reduction, common to existing approaches: multiplication of the signal by complex conjugated exponentials formed with the estimated coefficients. In this paper, we introduce an alternative method to reduce the polynomial order, based on successive unitary signal transformations, one for each polynomial order. We prove that this method considerably reduces the effect of error propagation: with this order-reduction method, the estimation error at a given order depends only on the performance of the estimation method.
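The order-reduction principle behind the HAF can be illustrated on a single-component quadratic-phase signal: one lag product s(t+tau)*conj(s(t-tau)) turns the chirp into a pure tone at frequency 4*b*tau, so an FFT peak recovers the quadratic coefficient b. A sketch with illustrative parameters (noise-free, single component — the regime where the plain HAF is exact):

```python
import numpy as np

# Second-order HAF sketch: for s(t) = exp(j*2*pi*(a*t + b*t^2)), the product
# s(t+tau)*conj(s(t-tau)) has phase 2*pi*(2*a*tau + 4*b*tau*t + 4*b*tau^2),
# i.e. a tone at 4*b*tau. Parameters are illustrative.
fs, N = 1000.0, 1024
a, b = 50.0, 20.0
t = np.arange(N) / fs
s = np.exp(2j * np.pi * (a * t + b * t**2))

m = 64                                  # lag in samples
tau = m / fs
z = s[2 * m:] * np.conj(s[:-2 * m])     # s(t+tau)*conj(s(t-tau)) on a shifted grid
spec = np.abs(np.fft.fft(z, 8192))      # zero-pad for a finer frequency grid
k = np.argmax(spec[:4096])              # positive frequencies only
f_hat = k * fs / 8192
b_hat = f_hat / (4 * tau)
print(round(b_hat, 1))  # → 20.0
```

The error-propagation problem the paper targets appears when this step is iterated for higher orders: each estimated coefficient is used to demodulate the signal before the next product, so early errors contaminate later estimates.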
Ivanescu, V.C.; Fransoo, J.C.; Bertrand, J.W.M.
2002-01-01
Batch process industries are characterized by complex precedence relationships between operations, which renders the estimation of an acceptable workload very difficult. A detailed schedule-based model can be used for this purpose, but for large problems this may require a prohibitively large amount
Multi-Criteria Model for Determining Order Size
Directory of Open Access Journals (Sweden)
Katarzyna Jakowska-Suwalska
2013-01-01
A multi-criteria model for determining the order size for materials used in production has been presented. It was assumed that the consumption rate of each material is a random variable with a known probability distribution. Using such a model, in which the purchase cost of materials ordered is limited, three criteria were considered: order size, probability of a lack of materials in the production process, and deviations in the order size from the consumption rate in past periods. Based on an example, it has been shown how to use the model to determine the order sizes for polyurethane adhesive and wood in a hard-coal mine. (original abstract)
Estimating Drilling Cost and Duration Using Copulas Dependencies Models
Directory of Open Access Journals (Sweden)
M. Al Kindi
2017-03-01
Estimation of drilling budget and duration is a high-level challenge for the oil and gas industry. This is due to the many uncertain activities in the drilling procedure, such as material prices, overhead cost, inflation, oil prices, well type, and depth of drilling. It is therefore essential to consider all these uncertain variables and the nature of the relationships between them. This eventually leads to the minimization of the level of uncertainty and yet yields good point estimates of budget and duration given the well type. In this paper, copula probability theory is used to model the dependencies between cost/duration and the MRI (mechanical risk index). The MRI is a mathematical computation which relates various drilling factors such as water depth, measured depth, and true vertical depth, in addition to mud weight and horizontal displacement. In general, the value of the MRI is utilized as an input for the drilling cost and duration estimations. Therefore, modeling the uncertain dependencies between the MRI and both cost and duration using copulas is important. The cost and duration estimates for each well were extracted from the copula dependency model, for which the study simulated over 10,000 scenarios. These new estimates were then compared to the actual data in order to validate the performance of the procedure. Most of the wells show a moderate-to-weak dependence on the MRI, which means that the variation in these wells can be related to the MRI, although the MRI is not its primary source.
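The copula construction can be sketched with a Gaussian copula and hypothetical marginals for the MRI and cost. The correlation value and both marginals below are illustrative assumptions, not the paper's fitted model; the point is that the copula couples the variables while leaving each marginal distribution free.

```python
import numpy as np
from math import erf, sqrt

# Sketch: Gaussian copula linking an MRI proxy to drilling cost.
# rho and the marginals are hypothetical.
rng = np.random.default_rng(4)
rho = 0.6                                    # assumed MRI-cost dependence
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20000)
u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2)))   # normal CDF -> uniform margins

mri = 50.0 + 30.0 * u[:, 0]                  # hypothetical uniform MRI marginal
cost = 1.0e6 * (-np.log1p(-u[:, 1]))         # hypothetical exponential cost marginal
corr = np.corrcoef(mri, cost)[0, 1]
print(corr > 0.4)  # → True: dependence survives the marginal transforms
```

Simulating many such scenarios and reading off quantiles of the cost column is, in spirit, how a copula dependency model produces budget estimates conditional on MRI.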
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + [theta/(lambda+theta)]exp[-((1/lambda)+(1/theta))t] with t>0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
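The abstract's formulas can be exercised directly. The form A(infinity) = lambda/(lambda+theta) implies the mean parameterization (lambda and theta are the mean time to failure and mean time to repair), so the maximum likelihood estimators are the sample means of the observed cycles. A sketch with hypothetical failure/repair data:

```python
import math

# Sketch using the abstract's formulas; the cycle data are hypothetical.
def availability(lam, theta, t):
    """A(t) = lam/(lam+theta) + [theta/(lam+theta)]*exp(-(1/lam + 1/theta)*t)."""
    s = lam + theta
    return lam / s + (theta / s) * math.exp(-(1.0 / lam + 1.0 / theta) * t)

failures = [120.0, 95.0, 210.0, 160.0]   # hours to failure (hypothetical)
repairs = [8.0, 14.0, 6.0, 12.0]         # hours to repair (hypothetical)
lam_hat = sum(failures) / len(failures)  # MLE: mean time to failure
theta_hat = sum(repairs) / len(repairs)  # MLE: mean time to repair

a_inf = lam_hat / (lam_hat + theta_hat)  # steady-state availability
print(round(a_inf, 3))                        # → 0.936
print(round(availability(lam_hat, theta_hat, 1e6), 3))  # → 0.936, A(t) -> A(inf)
```

By invariance of maximum likelihood, plugging the MLEs into A(t) yields the MLE of availability at every t, which is the estimator the paper studies.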
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
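A minimal p >> n illustration of one estimator mentioned in the talk, the LASSO, via proximal-gradient (ISTA) iterations with soft-thresholding. Problem sizes, data, and the regularization level are illustrative.

```python
import numpy as np

# LASSO via ISTA on a p >> n toy problem: only the first two of 50
# coefficients are truly nonzero. All numbers are illustrative.
def ista_lasso(X, y, lam, iters=2000):
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, 2) ** 2        # 1/L for the (1/2n) squared loss
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(1)
n, p = 20, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0], beta_true[1] = 3.0, -2.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = ista_lasso(X, y, lam=0.1)
print(int(np.sum(np.abs(beta_hat) > 0.5)))  # only a few large coefficients survive
```

The soft-threshold step is what produces exact zeros, i.e. the sparsity that makes such estimators usable when p is much larger than n.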
A variable-order fractal derivative model for anomalous diffusion
Directory of Open Access Journals (Sweden)
Liu Xiaoting
2017-01-01
This paper develops a variable-order fractal derivative model for anomalous diffusion. Previous investigations have indicated that the medium structure, fractal dimension or porosity may change with time or space during solute transport processes, resulting in time- or space-dependent anomalous diffusion phenomena. This study therefore introduces a variable-order fractal derivative diffusion model, in which the index of the fractal derivative depends on the temporal moment or spatial position, to characterize the above-mentioned anomalous diffusion (or transport) processes. Compared with other models, the main advantages of the new model in description and physical explanation are explored by numerical simulation. Further comparisons of the new model with the variable-order fractional derivative model, covering computational efficiency, diffusion behavior and heavy-tail phenomena, are also offered.
Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations
Bang, Youngsuk
Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
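The projection step at the heart of ROM can be sketched with a POD (SVD) basis for a linear system. This toy version is not the dissertation's algorithm, but it shows the mechanism: when solutions are confined to a low-dimensional active subspace, a small projected system reproduces the full solve, and the discarded singular values indicate how much is lost.

```python
import numpy as np

# Sketch: snapshots of a linear model's solutions -> POD basis via SVD ->
# Galerkin-projected reduced system. All matrices are synthetic.
rng = np.random.default_rng(2)
n = 200
A = np.diag(np.linspace(1.0, 10.0, n)) + 0.01 * rng.standard_normal((n, n))
A = A + A.T                              # symmetric positive definite model operator

# Snapshots: solutions of A x = b for right-hand sides drawn from a 3-D subspace
W = rng.standard_normal((n, 3))
snapshots = np.column_stack(
    [np.linalg.solve(A, W @ rng.standard_normal(3)) for _ in range(20)])
U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
Ur = U[:, :3]                            # POD basis: solutions live in a 3-D space

b = W @ np.array([1.0, -2.0, 0.5])       # new right-hand side from the same subspace
x_full = np.linalg.solve(A, b)                          # full-order solve (n = 200)
x_rom = Ur @ np.linalg.solve(Ur.T @ A @ Ur, Ur.T @ b)   # reduced solve (r = 3)
print(round(float(np.linalg.norm(x_full - x_rom) / np.linalg.norm(x_full)), 4))  # → 0.0
```

In this idealized case the snapshot matrix has numerical rank 3, so the reduced model is exact; in practice the truncated singular values are what an error-bounded ROM algorithm, like the one the dissertation develops, must control.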
First-order regional seismotectonic model for South Africa
CSIR Research Space (South Africa)
Singh, M
2011-10-01
A first-order seismotectonic model was created for South Africa. This was done using four logical steps: geoscientific data collection, characterisation, assimilation and zonation. Through the definition of subunits of concentrations of earthquake...
Directory of Open Access Journals (Sweden)
Kalpeshkumar Rohitbhai Patil
2016-10-01
Proper synchronization of a Distributed Generator with the grid, and its performance in grid-connected mode, relies on fast and precise estimation of the phase and amplitude of the fundamental component of the grid voltage. However, the accuracy with which the frequency is estimated depends on the type of grid voltage abnormalities and the structure of the phase-locked loop or frequency-locked loop control schemes. Among various control schemes, the second-order generalized integrator based frequency-locked loop (SOGI-FLL) is reported to have the most promising performance. It tracks the frequency of the grid voltage accurately even when the grid voltage is characterized by sag, swell, harmonics, imbalance, frequency variations, etc. However, the estimated frequency contains low-frequency oscillations when the sensed grid voltage has a dc offset. This paper presents a modified dual second-order generalized integrator frequency-locked loop (MDSOGI-FLL) for three-phase systems to cope with non-ideal three-phase grid voltages having all types of abnormalities, including the dc offset. The complexity of the control scheme is almost the same as that of the standard dual SOGI-FLL, but the performance is enhanced. Simulation results show that the proposed MDSOGI-FLL is effective under all abnormal grid voltage conditions. The results are validated experimentally to justify the superior performance of the MDSOGI-FLL under adverse conditions.
Estimating Coastal Digital Elevation Model (DEM) Uncertainty
Amante, C.; Mesick, S.
2017-12-01
Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.
Mechanical model for filament buckling and growth by phase ordering.
Rey, Alejandro D; Abukhdeir, Nasser M
2008-02-05
A mechanical model of open filament shape and growth driven by phase ordering is formulated. For a given phase-ordering driving force, the model output is the filament shape evolution and the filament end-point kinematics. The linearized model for the slope of the filament is the Cahn-Hilliard model of spinodal decomposition, where the buckling corresponds to concentration fluctuations. Two modes are predicted: (i) sequential growth and buckling and (ii) simultaneous buckling and growth. The relation among the maximum buckling rate, filament tension, and matrix viscosity is given. These results contribute to ongoing work in smectic A filament buckling.
Random balance designs for the estimation of first order global sensitivity indices
International Nuclear Information System (INIS)
Tarantola, S.; Gatelli, D.; Mara, T.A.
2006-01-01
We present two methods for the estimation of main effects in global sensitivity analysis. The methods adopt Satterthwaite's application of random balance designs in regression problems, and extend it to sensitivity analysis of model output for non-linear, non-additive models. Finite as well as infinite ranges for model input factors are allowed. The methods are easier to implement than any other method available for global sensitivity analysis, and reduce significantly the computational cost of the analysis. We test their performance on different test cases, including an international benchmark on safety assessment for nuclear waste disposal originally carried out by OECD/NEA
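A sketch of the random-balance-design idea for first-order indices (an assumed minimal variant, not the authors' exact implementation): every factor is sampled through the same periodic curve but with its own random permutation, and re-sorting the output along one factor's permutation concentrates that factor's effect in the first few FFT harmonics.

```python
import numpy as np

# RBD-style first-order sensitivity sketch on a toy additive model.
# Sample sizes, harmonics kept, and the model are illustrative.
rng = np.random.default_rng(3)
N, M = 10001, 6                          # sample size, harmonics retained
s = np.linspace(-np.pi, np.pi, N, endpoint=False)
perms = [rng.permutation(N) for _ in range(2)]
# Uniform [0,1] samples per factor via the periodic (triangle) transform
X = np.column_stack([0.5 + np.arcsin(np.sin(s[p])) / np.pi for p in perms])

Y = 4.0 * X[:, 0] + 1.0 * X[:, 1]        # additive model: true S1 = 16/17 ~ 0.94

def first_order_index(Y, perm):
    order = np.argsort(perm)             # undo the permutation -> periodic in s
    spectrum = np.abs(np.fft.rfft(Y[order] - Y.mean())) ** 2 / N
    return 2.0 * spectrum[1:M + 1].sum() / N / Y.var()

print(round(first_order_index(Y, perms[0]), 2))
```

All factors are estimated from the same single set of N model runs — the cost advantage over variance-based methods that need a separate design per factor.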
Time and order estimation of paintings based on visual features and expert priors
Cabral, Ricardo S.; Costeira, João P.; de La Torre, Fernando; Bernardino, Alexandre; Carneiro, Gustavo
2011-03-01
Time and order are considered crucial information in the art domain, and the subject of many research efforts by historians. In this paper, we present a framework for estimating the ordering and date information of paintings and drawings. We formulate this problem as an embedding into a one-dimensional manifold, which aims to place paintings far from or close to each other according to a measure of similarity. Our formulation can be seen as a manifold learning algorithm, albeit properly adapted to deal with existing questions in the art community. To solve this problem, we propose an approach based on Laplacian Eigenmaps and a convex optimization formulation. Both methods are able to incorporate art expertise as priors on the estimation, in the form of constraints. Types of information include exact or approximate dating and partial orderings. We explore the use of soft penalty terms to allow for constraint violation, to account for the fact that prior knowledge may contain small errors. Our problem is tested within the scope of the PrintART project, which aims to assist art historians in tracing Portuguese tile art "Azulejos" back to the engravings that inspired them. Furthermore, we describe other possible applications where time information (and hence this method) could be of use in art history, fake detection or curatorial treatment.
Los Alamos Waste Management Cost Estimation Model
International Nuclear Information System (INIS)
Matysiak, L.M.; Burns, M.L.
1994-03-01
This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs
Fractional-Order Nonlinear Systems Modeling, Analysis and Simulation
Petráš, Ivo
2011-01-01
"Fractional-Order Nonlinear Systems: Modeling, Analysis and Simulation" presents a study of fractional-order chaotic systems accompanied by Matlab programs for simulating their state space trajectories, which are shown in the illustrations in the book. The description of the chaotic systems is clearly presented and their analysis and numerical solution are done in an easy-to-follow manner. Simulink models for the selected fractional-order systems are also presented. Readers will understand the fundamentals of fractional calculus, how real dynamical systems can be described using fractional derivatives and fractional differential equations, how such equations can be solved, and how to simulate and explore chaotic systems of fractional order. The book is addressed to mathematicians, physicists, engineers, and other scientists interested in chaos phenomena or in fractional-order systems. It can be used in courses on dynamical systems, control theory, and applied mathematics at the graduate or postgraduate level. ...
Order-of-magnitude physics of neutron stars. Estimating their properties from first principles
Energy Technology Data Exchange (ETDEWEB)
Reisenegger, Andreas; Zepeda, Felipe S. [Pontificia Universidad Catolica de Chile, Instituto de Astrofisica, Facultad de Fisica, Macul (Chile)
2016-03-15
We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties. (orig.)
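In the article's spirit, one such order-of-magnitude estimate, the Chandrasekhar-style maximum mass scale M_max ~ (hbar*c/G)^(3/2) / m_n^2, takes only a few lines. This is a back-of-the-envelope bound, not a precise calculation.

```python
# Order-of-magnitude neutron star maximum mass from fundamental constants:
# M_max ~ (hbar*c/G)^(3/2) / m_n^2, the scale at which the neutron Fermi
# gas becomes relativistic in its own gravitational well.
hbar = 1.055e-34   # J s
c = 3.0e8          # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2
m_n = 1.675e-27    # kg, neutron mass
M_sun = 1.989e30   # kg

M_max = (hbar * c / G) ** 1.5 / m_n ** 2
print(round(M_max / M_sun, 1))  # → 1.9 solar masses: the right order of magnitude
```

Observed neutron star masses cluster near 1.4 solar masses with a maximum around 2, so the crude scale lands remarkably close, which is the article's pedagogical point.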
Lagrangian generic second order traffic flow models for node
Directory of Open Access Journals (Sweden)
Asma Khelifi
2018-02-01
This study sheds light on higher-order macroscopic traffic flow modeling on road networks, using the generic second order models (GSOM) family, which embeds a myriad of traffic models. It has been demonstrated that such higher-order models are easily solved in Lagrangian coordinates, which are compatible with both microscopic and macroscopic descriptions. The generalized GSOM model is reformulated in the Lagrangian coordinate system to develop a more efficient numerical method. The difficulty in applying this approach on networks resides mainly in dealing with node dynamics. Traffic flow characteristics at a node differ from those on homogeneous links, and different geometric features lead to different critical research issues: discontinuity in the traffic stream can be an important issue for traffic signal operations, while capacity drop may be crucial for lane merges. The present paper aims to establish and analyze a new adapted node model for macroscopic traffic flow models by applying upstream and downstream boundary conditions in the Lagrangian coordinates, in order to perform simulations on networks of roads, together with an accompanying numerical method. The internal node dynamics between upstream and downstream links are taken into account in the node model. A numerical example is provided to underscore the efficiency of this approach. Simulations show that the discretized node model yields accurate results. Additional kinematic waves and contact discontinuities are induced by the variation of the driver attribute.
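On a single link, the Lagrangian form of the simplest GSOM member (LWR) reduces to a car-following update: each particle (a platoon of Delta_n vehicles) moves at a speed set by its spacing to its leader. A sketch with an illustrative Greenshields speed-spacing law; the node dynamics that are the paper's focus are not modeled here.

```python
# Lagrangian LWR sketch: particles follow their leader with speed v(spacing).
# The speed-spacing law and all numbers are illustrative.
def v_of_spacing(r, v_max=30.0, r_min=5.0):
    """Greenshields in spacing form: v = v_max * (1 - r_min / r), clipped at 0."""
    return max(0.0, v_max * (1.0 - r_min / r))

def simulate(x0, leader_speed=20.0, dt=0.1, steps=600):
    x = list(x0)                      # x[0] is the leader, followers behind it
    for _ in range(steps):
        new = [x[0] + leader_speed * dt]
        for i in range(1, len(x)):
            new.append(x[i] + v_of_spacing(x[i - 1] - x[i]) * dt)
        x = new                       # synchronous (explicit Euler) update
    return x

# Congested start at 6 m spacings; the platoon should relax to the spacing at
# which v(r) equals the leader's 20 m/s, i.e. r = r_min / (1 - 20/30) = 15 m.
x = simulate([i * -6.0 for i in range(10)])
gaps = [x[i] - x[i + 1] for i in range(len(x) - 1)]
print(round(gaps[0], 1), round(gaps[-1], 1))  # → 15.0 15.0
```

The same update, with upstream/downstream boundary conditions imposed at link ends, is the building block the paper extends to whole networks via its node model.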
Multi-skyrmion solutions of a sixth order Skyrme model
International Nuclear Information System (INIS)
Floratos, I.
2001-08-01
In this thesis, we study some of the classical properties of an extension of the Skyrme model defined by adding a sixth order derivative term to the Lagrangian. In chapter 1, we review the physical as well as the mathematical motivation behind the study of the Skyrme model and in chapter 2, we give a brief summary of various extended Skyrme models that have been proposed over the last few years. We then define a new sixth order Skyrme model by introducing a dimensionless parameter λ that denotes the mixing between the two higher order terms, the Skyrme term and the sixth order term. In chapter 3 we compute numerically the multi-skyrmion solutions of this extended model and show that they have the same symmetries as the usual skyrmion solutions. In addition, we analyse the dependence of the energy and radius of these classical solutions with respect to the coupling constant λ. We compare our results with experimental data and determine whether this modified model can provide us with better theoretical predictions than the original one. In chapter 4, we use the rational map ansatz, introduced by Houghton, Manton and Sutcliffe, to approximate minimum energy multi-skyrmion solutions with B ≤ 9 of the SU(2) model and with B ≤ 6 of the SU(3) model. We compare our results with the ones obtained numerically and show that the rational map ansatz works just as well for the generalised model as for the pure Skyrme model, at least for B ≤ 5. In chapter 5, we use a generalisation of the rational map ansatz, introduced by Ioannidou, Piette and Zakrzewski, to construct analytically some topologically non-trivial solutions of the extended model in SU(3). These solutions are spherically symmetric and some of them can be interpreted as bound states of skyrmions. Finally, we use the same ansatz to construct low energy configurations of the SU(N) sixth order Skyrme model. (author)
Average inactivity time model, associated orderings and reliability properties
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure times of systems in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.
Efficient estimation of feedback effects with application to climate models
International Nuclear Information System (INIS)
Cacugi, D.G.; Hall, M.C.G.
1984-01-01
This work presents an efficient method for calculating the sensitivity of a mathematical model's result to feedback. Feedback is defined in terms of an operator acting on the model's dependent variables. The sensitivity to feedback is defined as a functional derivative, and a method is presented to evaluate this derivative using adjoint functions. Typically, this method allows the individual effect of many different feedbacks to be estimated with a total additional computing time comparable to only one recalculation. The effects on a CO2-doubling experiment of actually incorporating surface albedo and water vapor feedbacks in a radiative-convective model are compared with sensitivities calculated using adjoint functions. These sensitivities predict the actual effects of feedback with at least the correct sign and order of magnitude. It is anticipated that this method of estimating the effect of feedback will be useful for more complex models, where extensive recalculations for each of a variety of different feedbacks are impractical.
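The adjoint shortcut described in the abstract can be illustrated on a toy linear model; the matrix, forcing, and feedback operators below are hypothetical stand-ins, not the paper's climate model. One adjoint solve yields the sensitivity to every feedback, versus one re-solve per feedback for finite differences:

```python
import numpy as np

# Toy linear model: A x = b, scalar result J = c @ x.
# A "feedback" perturbs the operator: A -> A + eps * F_i.
# Direct approach: re-solve for every feedback F_i.
# Adjoint approach: one extra solve A.T @ lam = c gives
# dJ/deps = -lam @ (F_i @ x) for *every* F_i almost for free.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

x = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, c)          # single adjoint solve

feedbacks = [rng.standard_normal((n, n)) for _ in range(5)]
adjoint_sens = [-lam @ (F @ x) for F in feedbacks]

# Check against finite differences (one full re-solve per feedback)
eps = 1e-6
for F, s in zip(feedbacks, adjoint_sens):
    J_pert = c @ np.linalg.solve(A + eps * F, b)
    fd = (J_pert - c @ x) / eps
    assert abs(fd - s) < 1e-3 * max(1.0, abs(s))
```

The identity behind the last loop is dJ/deps = -c @ A⁻¹ F A⁻¹ b evaluated with the precomputed solution x and adjoint lam, which is why the marginal cost per feedback is a matrix-vector product rather than a solve.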
Testing static tradeoff theory against pecking order models of capital ...
African Journals Online (AJOL)
We test two models with the purpose of finding the best empirical explanation for the corporate financing choice of a cross section of 27 Nigerian quoted companies. The models were developed to represent the Static Tradeoff Theory and the Pecking Order Theory of capital structure with a view to making a comparison between ...
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
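A minimal sketch of the generic projection-based ingredient (POD-style snapshots plus Galerkin projection); the parameterized linear system below is a made-up stand-in for the expensive PDE, not the authors' problem:

```python
import numpy as np

# Snapshot-based projection: solve the full model at training
# parameters, build an SVD basis, then solve a tiny projected system
# at new parameters.
rng = np.random.default_rng(1)
n, n_snap, r = 200, 30, 5
A = np.eye(n) * 2 + 0.01 * rng.standard_normal((n, n))

modes = rng.standard_normal((n, 3))
def rhs(mu):                       # forcing lives in a low-dim space
    return modes @ np.array([1.0, mu, mu ** 2])

snapshots = np.column_stack([np.linalg.solve(A, rhs(mu))
                             for mu in np.linspace(0, 1, n_snap)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                       # reduced basis

Ar = V.T @ A @ V                   # reduced (r x r) operator
mu_new = 0.37
x_rom = V @ np.linalg.solve(Ar, V.T @ rhs(mu_new))
x_full = np.linalg.solve(A, rhs(mu_new))
rel_err = np.linalg.norm(x_rom - x_full) / np.linalg.norm(x_full)
```

Because the toy solutions live in a three-dimensional subspace, the rank-5 basis reproduces the full solve essentially exactly; for a real PDE the error is controlled by the decay of the discarded singular values.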
Latent Partially Ordered Classification Models and Normal Mixtures
Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith
2013-01-01
Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…
Next-to-leading order corrections to the valon model
Indian Academy of Sciences (India)
A seminumerical solution to the valon model at next-to-leading order (NLO) in the Laguerre polynomials is presented. We used the valon model to generate the structure of the proton with respect to the Laguerre polynomials method. The results are compared with H1 data and other parametrizations.
Partial-Order Reduction for GPU Model Checking
Neele, T.; Wijs, A.; Bosnacki, D.; van de Pol, Jan Cornelis; Artho, C; Legay, A.; Peled, D.
2016-01-01
Model checking using GPUs has seen increased popularity over recent years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose
International Nuclear Information System (INIS)
Harish, V.S.K.V.; Kumar, Arun
2016-01-01
Highlights: • A BES model based on first principles is developed and solved numerically. • Parameters of the lumped capacitance model are fitted using the proposed optimization routine. • Validations are shown for different types of building construction elements. • Step response excitations for outdoor air temperature and relative humidity are analyzed. - Abstract: Different control techniques, together with intelligent building technology (Building Automation Systems), are used to improve the energy efficiency of buildings. In almost all control projects, it is crucial to have building energy models with high computational efficiency in order to design and tune the controllers and simulate their performance. In this paper, a set of partial differential equations is formulated accounting for energy flow within the building space. These equations are then solved as conventional finite difference equations using the Crank–Nicolson scheme. Such a higher-order model is regarded as the benchmark model. An optimization algorithm has been developed, depicted through a flowchart, which minimizes the sum squared error between the step responses of the numerical and the optimal model. The optimal model of a construction element is an RC-network model whose R and C values are estimated using a non-linear time-invariant constrained optimization routine. The model is validated by comparing its step responses with those of two other RC-network models whose parameter values are selected based on certain criteria. Validations are shown for different types of building construction elements, viz., low, medium and heavy thermal capacity elements. Simulation results show that the optimal model follows the step responses of the numerical model more closely than the other two models.
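The core fitting idea — match a lumped RC step response to a high-order benchmark — can be sketched as follows. The 20-node diffusion chain and the first-order target are illustrative stand-ins for the paper's Crank–Nicolson benchmark and RC network, not the authors' routine:

```python
import numpy as np
from scipy.optimize import curve_fit

# Benchmark "wall": 20-node finite-difference chain (high-order model)
# with a unit temperature step applied at one face.
n = 20
h = 1.0 / n
L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h ** 2
b = np.zeros(n); b[0] = 1.0 / h ** 2     # step boundary forcing

t = np.linspace(0, 0.5, 200)
# Exact step response of x' = L x + b, x(0) = 0, via eigendecomposition
x_inf = np.linalg.solve(L, -b)           # steady state
w, P = np.linalg.eigh(L)
c0 = P.T @ (0 - x_inf)
resp = np.array([x_inf + P @ (np.exp(w * ti) * c0) for ti in t])[:, -1]

def first_order(t, tau, gain):           # 1R1C lumped response
    return gain * (1 - np.exp(-t / tau))

(tau_hat, gain_hat), _ = curve_fit(first_order, t, resp,
                                   p0=[0.1, 1.0], bounds=(0, np.inf))
```

Here `tau_hat` plays the role of the fitted R·C product and `gain_hat` the steady-state transmission; a real building-element fit would use more RC stages and constrained optimization, as the abstract describes.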
House thermal model parameter estimation method for Model Predictive Control applications
van Leeuwen, Richard Pieter; de Wit, J.B.; Fink, J.; Smit, Gerardus Johannes Maria
In this paper we investigate thermal network models with different model orders applied to various Dutch low-energy house types with high and low interior thermal mass and containing floor heating. Parameter estimations are performed by using data from TRNSYS simulations. The paper discusses results
Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F
2013-10-01
Vehicle operating speed measured on roadways is a critical component for a host of analyses in the transportation field, including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split models in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables, including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics, including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle routes on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate
Parameter Estimation of Spacecraft Fuel Slosh Model
Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam Charles
2004-01-01
Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble) which can cause devastating control issues. The rate at which nutation develops (defined by Nutation Time Constant (NTC can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refinement and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of Nutation Time Constant obtained through simulation.
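One standard way to estimate a damper coefficient in a mass-spring-damper slosh analogue is the logarithmic decrement of a decaying oscillation. The signal below is synthetic and the single-mode analogue is a hypothetical simplification of the paper's 3-D model:

```python
import numpy as np

# Synthetic decaying oscillation from a 1-DOF slosh analogue
zeta_true, wn = 0.05, 2 * np.pi          # damping ratio, natural freq
t = np.linspace(0, 10, 5000)
wd = wn * np.sqrt(1 - zeta_true ** 2)    # damped frequency
x = np.exp(-zeta_true * wn * t) * np.cos(wd * t)

# Successive positive peaks of the response
peaks = [i for i in range(1, len(t) - 1) if x[i - 1] < x[i] > x[i + 1]]
delta = np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]]))   # log decrement
zeta_hat = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)  # invert decrement
```

Given an experimentally measured nutation or slosh decay trace, the same two lines recover the effective damping ratio, which then fixes the damper coefficient c = 2·ζ·m·ωₙ of the lumped model.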
Group-ICA model order highlights patterns of functional brain connectivity
Directory of Open Access Journals (Sweden)
Ahmed eAbou Elseoud
2011-06-01
Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both individual subject and group levels. Altering ICA dimensionality (model order) estimation can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared to normal healthy controls. The results showed that the detected disease-related differences in functional connectivity alter as a function of ICA model order. The volume of between-group differences altered significantly as a function of ICA model order, reaching a maximum at model order 70 (which seems to be an optimal point that conveys the largest between-group difference) and stabilizing afterwards. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders show an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.
Modeling Human Behaviour with Higher Order Logic: Insider Threats
DEFF Research Database (Denmark)
Boender, Jaap; Ivanova, Marieta Georgieva; Kammuller, Florian
2014-01-01
it to the sociological process of logical explanation. As a case study on modeling human behaviour, we present the modeling and analysis of insider threats as a Higher Order Logic theory in Isabelle/HOL. We show how each step of the three-step process of sociological explanation can be seen in our modeling of the insider’s state......, its context within an organisation and the effects on security as outcomes of a theorem proving analysis....
Resource-estimation models and predicted discovery
International Nuclear Information System (INIS)
Hill, G.W.
1982-01-01
Resources have been estimated by predictive extrapolation from past discovery experience, by analogy with better explored regions, or by inference from evidence of depletion of targets for exploration. Changes in technology and new insights into geological mechanisms have occurred sufficiently often in the long run to form part of the pattern of mature discovery experience. The criterion that a meaningful resource estimate needs an objective measure of its precision or degree of uncertainty excludes 'estimates' based solely on expert opinion. This is illustrated by the development of error measures for several persuasive models of discovery and production of oil and gas in the USA, both annually and in terms of increasing exploration effort. Appropriate generalizations of the models resolve many points of controversy. This is illustrated using two USA data sets describing discovery of oil and of U3O8; the latter set highlights an inadequacy of available official data. Review of the oil-discovery data set provides a warrant for adjusting the time-series prediction to a higher resource figure for USA petroleum. (author)
Direction-of-Arrival Estimation Based on Sparse Recovery with Second-Order Statistics
Directory of Open Access Journals (Sweden)
H. Chen
2015-04-01
Traditional direction-of-arrival (DOA) estimation techniques perform Nyquist-rate sampling of the received signals and as a result require high storage. To reduce the sampling rate, we introduce level-crossing (LC) sampling, which captures samples whenever the signal crosses predetermined reference levels; the LC-based analog-to-digital converter (LC ADC) has been shown to efficiently sample certain classes of signals. In this paper, we focus on the DOA estimation problem using second-order statistics based on the LC samplings recorded on one sensor, along with the synchronous samplings of the other sensors; a sparse angle-space scenario can be found by solving an $\ell_1$ minimization problem, giving the number of sources and their DOAs. The experimental results show that our proposed method, when compared with some existing norm-based constrained optimization compressive sensing (CS) algorithms, as well as a subspace method, improves the DOA estimation performance, while using fewer samples than Nyquist-rate sampling and reducing sensor activity, especially for signals with long silent intervals.
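The sparse-recovery view of DOA estimation can be illustrated on a uniform linear array. The paper solves an ℓ1 program; as a compact stand-in the sketch below uses greedy orthogonal matching pursuit on the same steering-vector dictionary (array size, grid, and source angles are hypothetical choices):

```python
import numpy as np

M, d = 8, 0.5                       # sensors, spacing in wavelengths
grid = np.arange(-90, 91, 1.0)      # candidate DOAs (degrees)
def steer(theta):
    return np.exp(2j * np.pi * d * np.arange(M)
                  * np.sin(np.deg2rad(theta)))
A = np.column_stack([steer(th) for th in grid])

true_doas = [-20.0, 30.0]
y = sum(steer(th) for th in true_doas)  # noiseless snapshot

# Orthogonal matching pursuit: pick atom, refit jointly, repeat
residual, support = y.copy(), []
for _ in range(2):
    corr = np.abs(A.conj().T @ residual)
    support.append(int(np.argmax(corr)))
    As = A[:, support]
    coef, *_ = np.linalg.lstsq(As, y, rcond=None)
    residual = y - As @ coef

found = sorted(grid[i] for i in support)   # angles near the true DOAs
```

With only eight sensors the greedy picks can land on a neighboring 1° grid point, so the recovered angles should be read as "within grid resolution" of the true DOAs; the ℓ1 formulation in the paper trades this greediness for a convex program over the whole grid.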
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) Simulation and (b) lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.
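The emulator recipe the abstract outlines — decompose training output, regress the singular-vector weights on the parameters, predict new runs cheaply — can be sketched with a toy stand-in for the mechanistic model (linear regression plays the role of the "first-order" statistical model):

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(p):             # hypothetical mechanistic model
    x = np.linspace(0, 1, 100)
    return p[0] * np.sin(2 * np.pi * x) + p[1] * x

params = rng.uniform(-1, 1, size=(40, 2))           # training design
Y = np.array([expensive_model(p) for p in params])  # 40 runs x 100 outputs
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2
scores = U[:, :k] * s[:k]           # per-run weights of the k modes

# First-order (linear) map: parameters -> mode scores
X = np.column_stack([np.ones(len(params)), params])
B, *_ = np.linalg.lstsq(X, scores, rcond=None)

p_new = np.array([0.3, -0.7])
y_emu = (np.array([1.0, *p_new]) @ B) @ Vt[:k]      # emulated run
y_true = expensive_model(p_new)
rel_err = np.linalg.norm(y_emu - y_true) / np.linalg.norm(y_true)
```

The toy model is linear in its parameters, so the emulator is essentially exact; for a genuinely nonlinear simulator the score regression (and the residual error) is where the statistical modeling effort goes.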
DEFF Research Database (Denmark)
Abildskov, Jens; Constantinou, Leonidas; Gani, Rafiqul
1996-01-01
A simple modification of group contribution based models for estimation of liquid phase activity coefficients is proposed. The main feature of this modification is that contributions estimated from the present first-order groups in many instances are found insufficient since the first-order groups...... correlation/prediction capabilities, distinction between isomers and ability to overcome proximity effects....
Estimating order-picking times for return heuristic - equations and simulations
Directory of Open Access Journals (Sweden)
Grzegorz Tarczyński
2015-09-01
Background: A key element of the evaluation of warehouse operation is the average order-picking time. In warehouses where the order-picking process is carried out according to the "picker-to-part" rule, the order-picking time is usually proportional to the distance covered by the picker while picking items. This distance can be estimated by simulations or using mathematical equations. In the paper, only one-block rectangular warehouses, the case best described in the literature, are considered. Material and methods: For one-block rectangular warehouses there are five well-known routing heuristics. In the paper the author considers the return heuristic in two variants. The paper presents the well-known equations of Hall and De Koster for the average distance traveled by the picker while completing items from one pick list, and presents the author's own proposals for calculating the expected distance. Results: The results calculated using the mathematical equations (the formulas of Hall, De Koster and the author's own propositions) were compared with the average values obtained using computer simulations. In most cases the average error does not exceed 1% (except for Hall's equations). The simulations were carried out using the software Warehouse Real-Time Simulator. Conclusions: The order-picking time is a function of many variables and its optimization is not easy. It can be done in two stages: first, using mathematical equations, the set of potentially best variants is established; next, the results are verified using simulations. The results calculated using equations are not precise, but are available immediately. The simulations are more time-consuming, but allow the order-picking process to be analyzed more accurately.
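The equations-versus-simulation comparison can be reproduced in miniature for a single aisle. Under the return heuristic the picker enters and backtracks, so the in-aisle travel is twice the deepest pick location; for m picks uniform on [0, L] the order-statistics expectation is 2·L·m/(m+1). The aisle depth and pick count below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)
L_aisle, m, n_sim = 10.0, 5, 200_000

# Monte Carlo: m uniform pick locations per order, return routing
picks = rng.uniform(0, L_aisle, size=(n_sim, m))
sim_mean = (2 * picks.max(axis=1)).mean()

# Closed-form expectation of 2 * max of m uniforms
analytic = 2 * L_aisle * m / (m + 1)
rel_gap = abs(sim_mean - analytic) / analytic
```

A multi-aisle estimator adds the cross-aisle travel between visited aisles, which is exactly where the Hall and De Koster formulas referenced in the abstract differ.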
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2007-01-01
In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
Energy Technology Data Exchange (ETDEWEB)
Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but run much faster (microseconds instead of hours/days).
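The surrogate idea in its simplest form: train a cheap regression on a limited number of "expensive" runs, then evaluate the regression everywhere else. The scalar toy response and polynomial fit below are illustrative stand-ins for the report's simulation codes and surrogate types:

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_run(x):               # placeholder for a simulation code
    return np.exp(-x) * np.sin(3 * x)

x_train = np.linspace(0, 2, 25)     # 25 "costly" training runs
y_train = expensive_run(x_train)
coeffs = np.polyfit(x_train, y_train, deg=8)   # surrogate model

# Thousands of surrogate evaluations now cost microseconds each
x_test = rng.uniform(0, 2, 500)
err = np.max(np.abs(np.polyval(coeffs, x_test) - expensive_run(x_test)))
```

In a Monte Carlo risk loop the surrogate replaces `expensive_run` inside the sampler; the maximum deviation `err` over held-out inputs is the kind of accuracy check that decides whether the surrogate is trustworthy for that loop.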
Composite symmetry-protected topological order and effective models
Nietner, A.; Krumnow, C.; Bergholtz, E. J.; Eisert, J.
2017-12-01
Strongly correlated quantum many-body systems at low dimension exhibit a wealth of phenomena, ranging from features of geometric frustration to signatures of symmetry-protected topological order. In suitable descriptions of such systems, it can be helpful to resort to effective models, which focus on the essential degrees of freedom of the given model. In this work, we analyze how to determine the validity of an effective model by demanding it to be in the same phase as the original model. We focus our study on one-dimensional spin-1/2 systems and explain how nontrivial symmetry-protected topologically ordered (SPT) phases of an effective spin-1 model can arise depending on the couplings in the original Hamiltonian. In this analysis, tensor network methods feature in two ways: on the one hand, we make use of recent techniques for the classification of SPT phases using matrix product states in order to identify the phases in the effective model with those in the underlying physical system, employing Künneth's theorem for cohomology. As an intuitive paradigmatic model we exemplify the developed methodology by investigating the bilayered Δ chain. For strong ferromagnetic interlayer couplings, we find that the system transitions into exactly the same phase as an effective spin-1 model. However, for weak but finite coupling strength, we identify a symmetry broken phase differing from this effective spin-1 description. On the other hand, we underpin our argument with a numerical analysis making use of matrix product states.
Motion estimation by data assimilation in reduced dynamic models
International Nuclear Information System (INIS)
Drifi, Karim
2013-01-01
Motion estimation is a major challenge in the field of image sequence analysis. This thesis is a study of the dynamics of geophysical flows visualized by satellite imagery. Satellite image sequences are currently underused for the task of motion estimation. A good understanding of geophysical flows allows a better analysis and forecast of phenomena in domains such as oceanography and meteorology. Data assimilation provides an excellent framework for achieving a compromise between heterogeneous data, especially numerical models and observations. Hence, in this thesis we set out to apply variational data assimilation methods to estimate motion on image sequences. Since one of the major drawbacks of these assimilation techniques is the considerable computation time and memory required, we define and use a model reduction method to significantly decrease both. We then explore the possibilities that reduced models provide for motion estimation, particularly the possibility of strictly imposing some known constraints on the computed solutions. In particular, we show how to estimate a divergence-free motion with boundary conditions on a complex spatial domain.
Directory of Open Access Journals (Sweden)
Dilek Teker
2013-01-01
The aim of this research is to compose a new rating methodology and provide credit notches to 23 countries, of which 13 are developed and 10 are emerging. There is a varied literature explaining the determinants of credit ratings. Following the literature, we select 11 variables for our model, of which 5 are eliminated by factor analysis. We use specific dummies to investigate structural breaks in time and cross section, such as pre-crisis, post-crisis, BRIC membership, EU membership, OPEC membership, shipbuilder country and platinum-reserve country. We then run an ordered probit model and assign credit notches to the countries. We use FITCH ratings as a benchmark. Thus, at the end we compare the notches of FITCH with those derived from our estimated model.
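A bare-bones ordered probit, the model class used above, fitted by maximum likelihood on synthetic data. The single regressor and three "notches" are hypothetical stand-ins for the paper's country variables and rating scale:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
n, beta_true, cuts_true = 4000, 1.0, np.array([-0.5, 0.8])

# Latent-variable data generating process: y* = x*beta + N(0,1),
# observed category = which cutpoint interval y* falls into
x = rng.standard_normal(n)
ystar = beta_true * x + rng.standard_normal(n)
y = np.digitize(ystar, cuts_true)          # categories 0, 1, 2

def negloglik(params):
    beta, c0, dc = params
    cuts = np.array([c0, c0 + np.exp(dc)]) # enforce increasing cuts
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    p = (norm.cdf(edges[y + 1] - beta * x)
         - norm.cdf(edges[y] - beta * x))  # P(observed category)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

res = minimize(negloglik, x0=[0.5, 0.0, 0.0], method="Nelder-Mead")
beta_hat, c0_hat = res.x[0], res.x[1]
```

The estimated cutpoints are the analogue of the rating-notch thresholds, and the fitted probabilities per category are what one compares against the benchmark agency notches.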
Parameter estimation in fractional diffusion models
Kubilius, Kęstutis; Ralchenko, Kostiantyn
2017-01-01
This book is devoted to parameter estimation in diffusion models involving fractional Brownian motion and related processes. For many years now, standard Brownian motion has been (and still remains) a popular model of randomness used to investigate processes in the natural sciences, financial markets, and the economy. The substantial limitation in the use of stochastic diffusion models with Brownian motion is due to the fact that the motion has independent increments, and, therefore, the random noise it generates is “white,” i.e., uncorrelated. However, many processes in the natural sciences, computer networks and financial markets have long-term or short-term dependences, i.e., the correlations of random noise in these processes are non-zero, and slowly or rapidly decrease with time. In particular, models of financial markets demonstrate various kinds of memory and usually this memory is modeled by fractional Brownian diffusion. Therefore, the book constructs diffusion models with memory and provides s...
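The memory property described above is encoded entirely in the covariance of fractional Brownian motion, which also gives a direct (if O(n³)) way to simulate it; the Hurst index and grid below are arbitrary illustrative choices:

```python
import numpy as np

# Exact simulation of fBm on a grid via Cholesky factorization of
# Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H}).
rng = np.random.default_rng(6)
H, n, n_paths = 0.7, 64, 4000
t = np.arange(1, n + 1) / n                 # grid on (0, 1]

s_, t_ = np.meshgrid(t, t, indexing="ij")
cov = 0.5 * (s_ ** (2 * H) + t_ ** (2 * H) - np.abs(s_ - t_) ** (2 * H))
Lc = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # tiny jitter
paths = rng.standard_normal((n_paths, n)) @ Lc.T   # each row: one path

# Self-similarity check: Var B_H(t) = t^{2H}, so Var at t=1 is 1
var_end = paths[:, -1].var()
```

H > 1/2 gives positively correlated ("long-memory") increments and H < 1/2 anti-correlated ones; H = 1/2 recovers standard Brownian motion, for which the covariance above reduces to min(s, t). Faster exact samplers (circulant embedding) exist for long grids.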
PARAMETER ESTIMATION IN BREAD BAKING MODEL
Directory of Open Access Journals (Sweden)
Hadiyanto Hadiyanto
2012-05-01
Bread product quality is highly dependent on the baking process. A model for the development of product quality, which was obtained by using quantitative and qualitative relationships, was calibrated by experiments at a fixed baking temperature of 200°C, alone and in combination with 100 W microwave power. The model parameters were estimated in a stepwise procedure: first, heat and mass transfer related parameters; then the parameters related to product transformations; and finally product quality parameters. There was a fair agreement between the calibrated model results and the experimental data. The results showed that the applied simple qualitative relationships for quality performed above expectation. Furthermore, it was confirmed that the microwave input is most meaningful for the internal product properties and not for surface properties such as crispness and color. The model with adjusted parameters was applied in a quality-driven food process design procedure to derive a dynamic operation pattern, which was subsequently tested experimentally to calibrate the model. Despite the limited calibration with fixed operation settings, the model predicted well the behavior under dynamic convective operation and under combined convective and microwave operation. It is expected that the agreement between model and baking system could be improved further by performing calibration experiments at higher temperatures and various microwave power levels.
Venus gravity and topography: 60th degree and order model
Konopliv, A. S.; Borderies, N. J.; Chodas, P. W.; Christensen, E. J.; Sjogren, W. L.; Williams, B. G.; Balmino, G.; Barriot, J. P.
1993-01-01
We have combined the most recent Pioneer Venus Orbiter (PVO) and Magellan (MGN) data with the earlier 1978-1982 PVO data set to obtain a new 60th degree and order spherical harmonic gravity model and a 120th degree and order spherical harmonic topography model. Free-air gravity maps are shown over regions where the most marked improvement has been obtained (Ishtar-Terra, Alpha, Bell and Artemis). Gravity versus topography relationships are presented as correlations per degree and axes orientation.
Reduced order modeling of flashing two-phase jets
Energy Technology Data Exchange (ETDEWEB)
Gurecky, William, E-mail: william.gurecky@utexas.edu; Schneider, Erich, E-mail: eschneider@mail.utexas.edu; Ballew, Davis, E-mail: davisballew@utexas.edu
2015-12-01
Highlights: • Accident simulation requires the ability to quickly predict a two-phase flashing jet's damage potential. • A reduced order modeling methodology informed by experimental or computational data is described. • Zone-of-influence volumes are calculated for jets of various upstream thermodynamic conditions. - Abstract: In the event of a Loss of Coolant Accident (LOCA) in a pressurized water reactor, the escaping coolant produces a highly energetic flashing jet with the potential to damage surrounding structures. In LOCA analysis, the goal is often to evaluate many break scenarios in a Monte Carlo style simulation to evaluate the resilience of a reactor design. Therefore, in order to quickly predict the damage potential of flashing jets, it is of interest to develop a reduced order model that relates the damage potential of a jet to the pressure and temperature upstream of the break and the distance from the break to a given object upon which the jet is impinging. This work presents a framework for producing a Reduced Order Model (ROM) that may be informed by measured data, Computational Fluid Dynamics (CFD) simulations, or a combination of both. The model is constructed by performing regression analysis on the pressure field data, allowing the impingement pressure to be quickly reconstructed for any given upstream thermodynamic condition within the range of input data. The model is applicable to both free and fully impinging two-phase flashing jets.
Reverse time migration by Krylov subspace reduced order modeling
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations on the performance of reverse time migration are the intensive computation of the forward and backward simulations, the time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that addresses all of the aforementioned factors. Our method uses the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
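The workhorse behind Krylov subspace reduction is the Arnoldi iteration, which builds an orthonormal basis of span{b, Ab, A²b, ...} onto which the propagation operator is projected. The dense random matrix below is a toy stand-in for the discretized wave operator:

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis Q of the k-th Krylov subspace, with
    A @ Q[:, :k] == Q @ H (the Arnoldi relation)."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)
Q, H = arnoldi(A, b, 10)

assert np.allclose(A @ Q[:, :10], Q @ H)             # Arnoldi relation
assert np.allclose(Q.T @ Q, np.eye(11), atol=1e-8)   # orthonormal basis
```

In a reduced-order RTM setting the small Hessenberg matrix H replaces A in the time stepping, and the wavefield is reconstructed as Q times the reduced state, which is where the memory savings come from.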
Impact of Physics Parameterization Ordering in a Global Atmosphere Model
Donahue, Aaron S.; Caldwell, Peter M.
2018-02-01
Because weather and climate models must capture a wide variety of spatial and temporal scales, they rely heavily on parameterizations of subgrid-scale processes. The goal of this study is to demonstrate that the assumptions used to couple these parameterizations have an important effect on the climate of version 0 of the Energy Exascale Earth System Model (E3SM) General Circulation Model (GCM), a close relative of version 1 of the Community Earth System Model (CESM1). Like most GCMs, parameterizations in E3SM are sequentially split in the sense that parameterizations are called one after another with each subsequent process feeling the effect of the preceding processes. This coupling strategy is noncommutative in the sense that the order in which processes are called impacts the solution. By examining a suite of 24 simulations with deep convection, shallow convection, macrophysics/microphysics, and radiation parameterizations reordered, process order is shown to have a substantial impact on predicted climate. In particular, reordering of processes induces differences in net climate feedback that are as big as the intermodel spread in phase 5 of the Coupled Model Intercomparison Project. One reason why process ordering has such a large impact is that the effect of each process is influenced by the processes preceding it. Where output is written is therefore an important control on apparent model behavior. Application of k-means clustering demonstrates that the positioning of macro/microphysics and shallow convection plays a critical role in the model solution.
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, we present a simplified parsimonious higher-order multivariate Markov chain model with a new convergence condition (TPHOMMCM-NCC). Moreover, an estimation method for the parameters in TPHOMMCM-NCC is given. Numerical experiments illustrate the effectiveness of TPHOMMCM-NCC.
Adaptive Estimation of Heteroscedastic Money Demand Model of Pakistan
Directory of Open Access Journals (Sweden)
Muhammad Aslam
2007-07-01
Full Text Available For the problem of estimating the money demand model of Pakistan, the money supply (M1) shows heteroscedasticity of an unknown form. For the estimation of such a model, we compare two adaptive estimators, namely a nonparametric kernel estimator and a nearest neighbour regression estimator, with the ordinary least squares estimator and show the attractive performance of the adaptive estimators. These comparisons are made on the basis of the standard errors of the estimated coefficients, the standard error of regression, the Akaike Information Criterion (AIC) value, and the Durbin-Watson statistic for autocorrelation. We further show that the nearest neighbour regression estimator performs better than the nonparametric kernel estimator.
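The core of such adaptive estimation is a two-step scheme: fit OLS, smooth the squared residuals nonparametrically to estimate the unknown variance function, then reweight. A minimal sketch on simulated data (not the Pakistani money demand series; bandwidth and variance form are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(1.0, 10.0, n)
sigma = 0.2 * x                         # heteroscedasticity of "unknown" form
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

# Step 1: ordinary least squares.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
resid2 = (y - X @ beta_ols) ** 2

# Step 2: nonparametric kernel estimate of the variance function sigma^2(x).
h = 1.0                                 # bandwidth (assumed, not tuned)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
var_hat = (K @ resid2) / K.sum(axis=1)

# Step 3: adaptive (feasible weighted) least squares.
w = 1.0 / var_hat
beta_adapt = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
```

Replacing the Gaussian kernel smoother with a k-nearest-neighbour average of `resid2` gives the nearest neighbour variant compared in the paper.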
Numerical Analysis of Fractional Order Epidemic Model of Childhood Diseases
Directory of Open Access Journals (Sweden)
Fazal Haq
2017-01-01
Full Text Available The fractional order Susceptible-Infected-Recovered (SIR) epidemic model of childhood disease is considered. The Laplace–Adomian decomposition method is used to compute an approximate solution of the system of nonlinear fractional differential equations. We obtain the solutions of the fractional differential equations in the form of infinite series. The series solution of the proposed model converges rapidly to its exact value. The obtained results are compared with the classical case.
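The paper's Laplace–Adomian solution is an analytical series. As an independent numerical cross-check (a different scheme than the paper's), the same fractional SIR system can be time-stepped with an explicit Grünwald–Letnikov discretization; the order and rate parameters below are illustrative only:

```python
import numpy as np

alpha, beta, gamma = 0.9, 0.5, 0.2      # assumed derivative order and rates
N, h, steps = 1.0, 0.05, 400

# Grünwald–Letnikov binomial weights for (1 - z)^alpha.
w = np.zeros(steps + 1)
w[0] = 1.0
for j in range(1, steps + 1):
    w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

x = np.zeros((steps + 1, 3))
x[0] = [0.99, 0.01, 0.0]                # initial S, I, R fractions
x0 = x[0].copy()

def f(s, i, r):
    return np.array([-beta * s * i / N, beta * s * i / N - gamma * i,
                     gamma * i])

# Explicit GL stepping applied to x(t) - x(0), so constants have zero
# fractional derivative.
for n in range(1, steps + 1):
    memory = sum(w[j] * (x[n - j] - x0) for j in range(1, n + 1))
    x[n] = x0 + h ** alpha * f(*x[n - 1]) - memory

S, I, R = x[:, 0], x[:, 1], x[:, 2]
```

Because the right-hand sides sum to zero and the scheme is linear in the state history, total population S + I + R is conserved by construction, which is a useful sanity check on any implementation.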
Directory of Open Access Journals (Sweden)
Dimal A. Shah
2017-02-01
Full Text Available A simple and accurate method for the analysis of ibuprofen (IBU) and famotidine (FAM) in their combined dosage form was developed using second order derivative spectrophotometry. IBU and FAM were quantified using second derivative responses at 272.8 nm and 290 nm in the spectra of their solutions in methanol. The calibration curves were linear in the concentration ranges of 100–600 μg/mL for IBU and 5–25 μg/mL for FAM. The method was validated and found to be accurate and precise. The developed method was successfully applied for the estimation of IBU and FAM in their combined dosage form.
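The principle behind derivative spectrophotometry, that the second-derivative amplitude at an analyte's band scales linearly with its concentration even under spectral overlap, can be checked numerically with synthetic Gaussian bands. All band shapes and positions below are made up, not the measured IBU/FAM spectra:

```python
import numpy as np

# Wavelength grid (nm) and Gaussian stand-ins for absorption bands.
wl = np.linspace(240, 320, 801)
band = lambda c, mu, sd: c * np.exp(-0.5 * ((wl - mu) / sd) ** 2)

def d2(spectrum):
    # Numerical second derivative with respect to wavelength.
    return np.gradient(np.gradient(spectrum, wl), wl)

# Vary the "IBU-like" band concentration against a fixed interfering
# "FAM-like" band, and read the second derivative at the band centre.
concs = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])   # ug/mL
fixed_interferent = band(15.0, 290.0, 12.0)
idx = np.argmin(np.abs(wl - 273.0))
signal = [d2(band(c, 273.0, 8.0) + fixed_interferent)[idx] for c in concs]

slope, intercept = np.polyfit(concs, signal, 1)
r = np.corrcoef(concs, signal)[0, 1]    # calibration linearity
```

The interfering band shifts only the intercept, so the calibration stays perfectly linear, which is why derivative responses resolve overlapping analytes.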
The Ising model coupled to 2d orders
Glaser, Lisa
2018-04-01
In this article we take first steps toward coupling matter to causal set theory in the path integral. We explore the case of the Ising model coupled to the 2d discrete Einstein-Hilbert action, restricted to the 2d orders. We probe the phase diagram in terms of the Wick rotation parameter β and the Ising coupling j and find that the matter and the causal sets together give rise to an interesting phase structure. The couplings give rise to five different phases. The causal sets take on random or crystalline characteristics as described in Surya (2012 Class. Quantum Grav. 29 132001), and the Ising model can be correlated or uncorrelated on the random orders and correlated, uncorrelated or anti-correlated on the crystalline orders. We find that at least one new phase transition arises, in which the Ising spins push the causal set into the crystalline phase.
Robust simulation of buckled structures using reduced order modeling
International Nuclear Information System (INIS)
Wiebe, R.; Perez, R.A.; Spottswood, S.M.
2016-01-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever-present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use a (computationally expensive) truth model but require no time stepping of it are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties, would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties. (paper)
Gridded rainfall estimation for distributed modeling in western mountainous areas
Moreda, F.; Cong, S.; Schaake, J.; Smith, M.
2006-05-01
Estimation of precipitation in mountainous areas continues to be problematic. It is well known that radar-based methods are limited due to beam blockage. In these areas, in order to run a distributed model that accounts for spatially variable precipitation, we have generated hourly gridded rainfall estimates from gauge observations. These estimates will be used as basic data sets to support the second phase of the NWS-sponsored Distributed Hydrologic Model Intercomparison Project (DMIP 2). One of the major foci of DMIP 2 is to better understand the modeling and data issues in western mountainous areas in order to provide better water resources products and services to the Nation. We derive precipitation estimates using three data sources for the period of 1987-2002: 1) hourly cooperative observer (coop) gauges, 2) daily total coop gauges and 3) SNOw pack TELemetry (SNOTEL) daily gauges. The daily values are disaggregated using the hourly gauge values and then interpolated to approximately 4 km grids using an inverse-distance method. Following this, the estimates are adjusted to match monthly mean values from the Parameter-elevation Regressions on Independent Slopes Model (PRISM). Several analyses are performed to evaluate the gridded estimates for DMIP 2 experiments. These gridded inputs are used to generate mean areal precipitation (MAPX) time series for comparison to the traditional mean areal precipitation (MAP) time series derived by the NWS' California-Nevada River Forecast Center for model calibration. We use two of the DMIP 2 basins in California and Nevada: the North Fork of the American River (catchment area 885 sq. km) and the East Fork of the Carson River (catchment area 922 sq. km) as test areas. The basins are sub-divided into elevation zones. The North Fork American basin is divided into two zones above and below an elevation threshold. Likewise, the Carson River basin is subdivided into four zones. For each zone, the analyses include: a) overall
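The two numerical steps named in the abstract, inverse-distance interpolation of gauge values to grid cells followed by a multiplicative adjustment to a monthly climatology mean, can be sketched as follows; coordinates and values are made up, and the power-2 weighting is an assumption:

```python
import numpy as np

# Four hypothetical gauges (x, y in km) and their hourly rainfall (mm/h).
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([2.0, 4.0, 6.0, 8.0])

def idw(cell, pts, vals, power=2.0):
    # Inverse-distance weighting; a cell sitting on a gauge takes its value.
    d = np.linalg.norm(pts - cell, axis=1)
    if np.any(d < 1e-12):
        return float(vals[np.argmin(d)])
    w = d ** -power
    return float(w @ vals / w.sum())

cell_value = idw(np.array([5.0, 5.0]), gauges, rain)   # equidistant -> mean

# PRISM-style bias adjustment: scale the grid so its mean matches a
# climatological monthly mean (target value assumed here).
grid = np.array([[idw(np.array([i, j], float), gauges, rain)
                  for j in range(11)] for i in range(11)])
prism_monthly_mean = 5.5
grid *= prism_monthly_mean / grid.mean()
```

In the DMIP 2 workflow the scaling would be applied per month against the PRISM climatology rather than against a single scalar as in this toy version.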
Model order reduction for complex high-tech systems
Lutowska, A.; Hochstenbach, M.E.; Schilders, W.H.A.; Michielsen, B.; Poirier, J.R.
2012-01-01
This paper presents a computationally efficient model order reduction (MOR) technique for interconnected systems. This MOR technique preserves block structures and zero blocks and exploits separate MOR approximations for the individual sub-systems in combination with low rank approximations for the
Next-to-leading order corrections to the valon model
Indian Academy of Sciences (India)
G R Boroun and E Esfandyari, Physics Department, Razi University, Kermanshah 67149, Iran. Corresponding author e-mail: grboroun@gmail.com; boroun@razi.ac.ir. MS received 17 January 2014; revised 31 October 2014; accepted 21 November 2014.
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed
2015-07-01
This paper proposes a novel, low dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, because it is presented as a reduced order bilinear state space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which were performed considering parameters of the Acurex field under real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
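A bilinear state-space model has the form dx/dt = A x + u (N x) + B u, where the input (here, the fluid flow rate) multiplies the state. A minimal simulation sketch with illustrative matrices, not the Acurex collector model itself:

```python
import numpy as np

n = 5                                           # coarse spatial resolution
A = -0.5 * np.eye(n) + 0.4 * np.diag(np.ones(n - 1), -1)   # transport + loss
Nb = 0.1 * np.diag(np.ones(n - 1), -1)          # input-state (bilinear) term
B = np.zeros(n); B[0] = 1.0                     # inlet forcing

def simulate(u_of_t, x0, dt=0.01, T=30.0):
    # Forward-Euler integration of dx/dt = A x + u*(Nb x) + B u.
    x, out = np.array(x0, float), []
    for k in range(int(T / dt)):
        u = u_of_t(k * dt)
        x = x + dt * (A @ x + u * (Nb @ x) + B * u)
        out.append(x.copy())
    return np.array(out)

traj = simulate(lambda t: 1.0, np.zeros(n))
# For a constant input u, the steady state solves (A + u*Nb) x = -B u.
x_ss = np.linalg.solve(A + Nb, -B)
```

The bilinear structure is what makes flow-rate control of the outlet temperature tractable with the standard tools mentioned in the abstract.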
First-order estimate of the planktic foraminifer biomass in the modern ocean
Directory of Open Access Journals (Sweden)
R. Schiebel
2012-09-01
Full Text Available Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web is different from that of other protozoans and ranges above the base of heterotrophic consumers. Being secondary producers with an omnivorous diet, which ranges from algae to small metazoans, planktic foraminifers are not limited to a single food source, and are assumed to occur at a balanced abundance displaying the overall marine biological productivity at a regional scale. With a new non-destructive protocol developed from the bicinchoninic acid (BCA) method and nano-photospectrometry, we have analysed the protein-biomass, along with test size and weight, of 754 individual planktic foraminifers from 21 different species and morphotypes. From additional CHN analysis, it can be assumed that protein-biomass equals carbon-biomass. Accordingly, the average individual planktic foraminifer protein- and carbon-biomass amounts to 0.845 μg. Samples include symbiont-bearing and symbiont-barren species from the sea surface down to 2500 m water depth. Conversion factors between individual biomass and assemblage-biomass are calculated for test sizes between 72 and 845 μm (minimum test diameter). Assemblage-biomass data presented here include 1128 sites and water depth intervals. The regional coverage of data includes the North Atlantic, Arabian Sea, Red Sea, and Caribbean as well as literature data from the eastern and western North Pacific, and covers a wide range of oligotrophic to eutrophic waters over six orders of magnitude of planktic-foraminifer assemblage-biomass (PFAB). A first order estimate of the average global planktic foraminifer biomass production (>125 μm) ranges from 8.2–32.7 Tg C yr^{−1} (i.e. 0.008–0.033 Gt C yr^{−1}), and might be more than three times as high including neanic and juvenile individuals, adding up to 25–100 Tg C yr^{−1}. However, this is a first
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Energy Technology Data Exchange (ETDEWEB)
Wilkening, Jon
2008-12-10
Reynolds' lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds' equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^{2k+2}) and h enters into the error bound only through its first and third inverse moments ∫_0^1 h(x)^{−m} dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^{ℓ−1} ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
Integrable higher order deformations of Heisenberg supermagnetic model
International Nuclear Information System (INIS)
Guo Jiafeng; Yan Zhaowen; Wang Shikun; Wu Ke; Zhao Weizhong
2009-01-01
The Heisenberg supermagnet model is an integrable supersymmetric system and has a close relationship with the strongly correlated electron Hubbard model. In this paper, we investigate the integrable higher order deformations of Heisenberg supermagnet models with two different constraints: (i) S² = 3S − 2I for S ∈ USPL(2/1)/S(U(2)×U(1)) and (ii) S² = S for S ∈ USPL(2/1)/S(L(1/1)×U(1)). In terms of the gauge transformation, their corresponding gauge equivalent counterparts are derived.
Accelerating transient simulation of linear reduced order models.
Energy Technology Data Exchange (ETDEWEB)
Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad
2011-10-01
Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
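A textbook projection-based ROM (not the Xyce-specific implementation discussed above) shows the basic mechanics: collect transient snapshots, extract a POD basis via SVD, project the system matrices, and time-step in the reduced coordinates. The system below is an illustrative diffusion-chain stand-in for an RC ladder:

```python
import numpy as np

# Full-order linear system dx/dt = A x + B u (toy RC-ladder-like chain).
n, r = 300, 10                               # full and reduced dimensions
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
B = np.zeros(n); B[0] = 1.0

# Training transient (step input u = 1) to collect snapshots.
dt, steps = 0.01, 500
x = np.zeros(n); snaps = []
for k in range(steps):
    x = x + dt * (A @ x + B)
    snaps.append(x.copy())

# POD basis from the snapshot matrix; project the operators.
U, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
V = U[:, :r]
A_r, B_r = V.T @ A @ V, V.T @ B

# Replay the transient with both models and compare final states.
xr = np.zeros(r); x = np.zeros(n)
for k in range(steps):
    x = x + dt * (A @ x + B)
    xr = xr + dt * (A_r @ xr + B_r)          # r-dimensional time step
err = np.linalg.norm(V @ xr - x) / np.linalg.norm(x)
```

Each reduced time step costs O(r²) instead of the full model's O(n²) matvec, which is where the transient speedup discussed in the report has to come from; as the report notes, realizing it in a coupled circuit setting depends on implementation choices, not just on r being small.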
Modeling and analysis of fractional order DC-DC converter.
Radwan, Ahmed G; Emira, Ahmed A; AbdelAty, Amr M; Azar, Ahmad Taher
2017-07-11
Due to the non-idealities of commercial inductors, there is a heightened demand for better models that accurately describe their dynamic response. Accordingly, fractional order models of Buck, Boost and Buck-Boost DC-DC converters are presented in this paper. A detailed analysis is made for the two most common modes of converter operation: Continuous Conduction Mode (CCM) and Discontinuous Conduction Mode (DCM). Closed form time domain expressions are derived for inductor currents, voltage gain, average current, conduction time and power efficiency, where the effect of the fractional order inductor is found to be strongly present. For example, the peak inductor current at steady state increases with decreasing inductor order. Advanced Design Systems (ADS) circuit simulations are used to verify the derived formulas, where the fractional order inductor is simulated using the Valsa Constant Phase Element (CPE) approximation and a Generalized Impedance Converter (GIC). Different simulation results are introduced with good matching to the theoretical formulas for the three DC-DC converter topologies under different fractional orders. A comprehensive comparison with the recently published literature is presented to show the advantages and disadvantages of each approach. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Robust estimation of hydrological model parameters
Directory of Open Access Journals (Sweden)
A. Bárdossy
2008-11-01
Full Text Available The estimation of hydrological model parameters is a challenging task. With the increasing capacity of computational power, several complex optimization algorithms have emerged, but none of them yields a unique best parameter vector. The parameters of fitted hydrological models depend upon the input data. The quality of input data cannot be assured as there may be measurement errors for both input and state variables. In this study a methodology has been developed to find a set of robust parameter vectors for a hydrological model. To see the effect of observational error on parameters, stochastically generated synthetic measurement errors were applied to observed discharge and temperature data. With this modified data, the model was calibrated and the effect of measurement errors on parameters was analysed. It was found that the measurement errors have a significant effect on the best performing parameter vector. The erroneous data led to very different optimal parameter vectors. To overcome this problem and to find a set of robust parameter vectors, a geometrical approach based on Tukey's half-space depth was used. The depth of the set of N randomly generated parameters was calculated with respect to the set with the best model performance (the Nash-Sutcliffe efficiency was used for this study) for each parameter vector. Based on the depth of parameter vectors, one can find a set of robust parameter vectors. The results show that the parameters chosen according to the above criteria have low sensitivity and perform well when transferred to a different time period. The method is demonstrated on the upper Neckar catchment in Germany. The conceptual HBV model was used for this study.
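Tukey's half-space depth of a point is the smallest fraction of the cloud lying in any closed half-space through that point; deep points are surrounded on all sides and are therefore "robust" choices. A brute-force 2-D sketch (the direction count and the test cloud are arbitrary):

```python
import numpy as np

def tukey_depth(p, pts, n_dirs=360):
    # Scan directions over a half-circle; by symmetry this covers all
    # half-spaces. Depth = min over directions of the smaller side count.
    angles = np.linspace(0.0, np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(angles), np.sin(angles)])
    proj = (pts - p) @ dirs.T                     # (n_points, n_dirs)
    counts = np.minimum((proj >= 0).sum(0), (proj <= 0).sum(0))
    return counts.min() / len(pts)

rng = np.random.default_rng(3)
cloud = rng.normal(size=(500, 2))                 # "good" parameter vectors
d_centre = tukey_depth(np.array([0.0, 0.0]), cloud)   # deep, robust
d_edge = tukey_depth(np.array([4.0, 4.0]), cloud)     # shallow, fragile
```

In the paper's setting the cloud would be the well-performing HBV parameter vectors and the deepest vectors the robust calibration candidates; exact algorithms exist for higher dimensions, but the direction-scan version conveys the idea.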
Energy Technology Data Exchange (ETDEWEB)
Park, Ho Jin [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)
2015-05-15
In a Monte Carlo (MC) eigenvalue calculation, it is well known that the apparent variance of a local tally, such as pin power, differs considerably from the real variance. The MC method in eigenvalue calculations uses a power iteration method, in which the fission matrix (FM) and fission source density (FSD) serve as the operator and the solution. The FM is useful for estimating a variance and covariance because it can be calculated from a few cycles, even inactive ones. Recently, S. Carney has implemented higher order fission matrix (HOFM) capabilities into the MCNP6 MC code in order to extend the perturbation theory to second order. In this study, the HOFM capability based on the Hotelling deflation method was implemented into McCARD and used to predict the behavior of the real and apparent SD ratio. In simple 1D slab problems, Endo's theoretical model predicts the real-to-apparent SD ratio well. It was noted that Endo's theoretical model with the McCARD higher mode FS solutions from the HOFM yields a much better real-to-apparent SD ratio than that with the analytic solutions. In the near future, an application to a high dominance ratio problem, such as the BEAVRS benchmark, will be conducted.
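Hotelling deflation extracts higher modes by subtracting the converged fundamental mode from the operator and repeating power iteration. A sketch on a symmetric toy matrix standing in for a fission matrix (a real FM is non-symmetric, so left eigenvectors would be needed; the symmetric case keeps the demo short):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(50, 50))
F = M @ M.T                          # symmetric toy "fission matrix"

def power_iter(A, iters=2000):
    # Plain power iteration; returns the Rayleigh quotient and eigenvector.
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return float(v @ A @ v), v

k1, v1 = power_iter(F)               # fundamental mode (k-eigenvalue analog)
F_defl = F - k1 * np.outer(v1, v1)   # Hotelling deflation
k2, v2 = power_iter(F_defl)          # second mode
dominance_ratio = k2 / k1            # the quantity driving apparent-variance bias
```

The dominance ratio k2/k1 obtained this way is exactly the input that Endo-type models need to relate apparent to real variance.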
Low order physical models of vertical axis wind turbines
Craig, Anna; Dabiri, John; Koseff, Jeffrey
2016-11-01
In order to examine the ability of low-order physical models of vertical axis wind turbines to accurately reproduce key flow characteristics, experiments were conducted on rotating turbine models, rotating solid cylinders, and stationary porous flat plates (of both uniform and non-uniform porosities). From examination of the patterns of mean flow, the wake turbulence spectra, and several quantitative metrics, it was concluded that the rotating cylinders represent a reasonably accurate analog for the rotating turbines. In contrast, from examination of the patterns of mean flow, it was found that the porous flat plates represent only a limited analog for rotating turbines (for the parameters examined). These findings have implications for both laboratory experiments and numerical simulations, which have previously used analogous low order models in order to reduce experimental/computational costs. NSF GRF and SGF to A.C; ONR N000141211047 and the Gordon and Betty Moore Foundation Grant GBMF2645 to J.D.; and the Bob and Norma Street Environmental Fluid Mechanics Laboratory at Stanford University.
International Nuclear Information System (INIS)
Martin, Robert P.; Nutt, William T.
2011-01-01
Research highlights: → Historical recitation on the application of order-statistics models to nuclear power plant thermal-hydraulics safety analysis. → Interpretation of regulatory language regarding the 10 CFR 50.46 reference to a 'high level of probability'. → Derivation and explanation of order-statistics-based evaluation methodologies considering multi-variate acceptance criteria. → Summary of order-statistics models and recommendations to the nuclear power plant thermal-hydraulics safety analysis community. - Abstract: The application of order-statistics in best-estimate plus uncertainty nuclear safety analysis has received a considerable amount of attention from methodology practitioners, regulators, and academia. At the root of the debate are two questions: (1) what is an appropriate quantitative interpretation of 'high level of probability' in regulatory language appearing in the LOCA rule, 10 CFR 50.46, and (2) how best to mathematically characterize the multi-variate case. An original derivation is offered to provide a quantitative basis for 'high level of probability.' At the root of the second question is whether one should recognize a probability statement based on the tolerance region method of Wald and Guba et al. for multi-variate problems, one explicitly based on the regulatory limits, best articulated in the Wallis-Nutt 'Testing Method', or something else entirely. This paper reviews the origins of the different positions, key assumptions, limitations, and their relationship to addressing acceptance criteria. It presents a mathematical interpretation of the regulatory language, including a complete derivation of uni-variate order-statistics (as credited in AREVA's Realistic Large Break LOCA methodology) and an extension to multi-variate situations. Lastly, it provides recommendations for LOCA applications, endorsing the 'Testing Method' and addressing acceptance methods allowing for limited sample failures.
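The uni-variate order-statistics result at issue is the Wilks formula: the smallest number of code runs n such that the r-th largest output bounds the 95th percentile with 95% confidence follows from a binomial tail. A short calculation reproducing the standard sample sizes:

```python
import math

def wilks_n(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the `order`-th largest of n i.i.d. runs bounds
    the `coverage` quantile of the output with the given confidence."""
    n = order
    while True:
        # P(fewer than `order` runs exceed the coverage quantile).
        below = sum(math.comb(n, k) * (1 - coverage) ** k
                    * coverage ** (n - k) for k in range(order))
        if 1.0 - below >= confidence:
            return n
        n += 1

n_first = wilks_n(order=1)    # use the largest of n runs -> 59
n_second = wilks_n(order=2)   # discard the worst, use the 2nd largest -> 93
```

For order=1 the condition reduces to 1 − 0.95ⁿ ≥ 0.95, giving the familiar 59 runs of single-output 95/95 analysis; the multi-variate extensions debated in the paper change this count.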
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have an appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
Mitry, Mina
Often, computationally expensive engineering simulations can hinder the engineering design process. As a result, designers may turn to less computationally demanding approximate, or surrogate, models to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
Identification of the reduced order models of a BWR reactor
International Nuclear Information System (INIS)
Hernandez S, A.
2004-01-01
The present work analyzes the relative stability of a BWR-type reactor. It examines how well the parameters of a reduced order model can be identified so that the model reproduces a given instability condition, based on a real event that occurred at the La Salle plant under certain operating conditions of power and coolant flow. The parametric identification is carried out by means of a recursive least squares algorithm and an Output Error model, measuring the output power of the reactor while the instability is present, and assuming that it is produced by a step-like change in the reactivity of the system. An analytic comparison of the relative stability is also carried out between two responses: the original unstable response of the reactor and the response obtained by identifying the parameters of the reduced order model. The conclusion is that it is quite viable to fit a reduced order model to study the stability of a reactor, under the single condition that the reactivity dynamics be of step type. (Author)
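The recursive least squares identification of a low-order discrete model can be sketched on a toy second-order plant (not reactor data; the "true" coefficients and dithered step input are assumptions for the demo):

```python
import numpy as np

# Toy plant: y_k = 1.5 y_{k-1} - 0.7 y_{k-2} + u_{k-1} (stable poles).
a_true = np.array([1.5, -0.7])
rng = np.random.default_rng(5)

N = 500
u = np.ones(N) + 0.1 * rng.normal(size=N)    # step excitation plus dither
y = np.zeros(N)
for k in range(2, N):
    y[k] = a_true @ y[k - 2:k][::-1] + u[k - 1]

# Recursive least squares for theta = [a1, a2, b].
theta = np.zeros(3)
P = 1e4 * np.eye(3)                          # large P: uninformative prior
for k in range(2, N):
    phi = np.array([y[k - 1], y[k - 2], u[k - 1]])
    K = P @ phi / (1.0 + phi @ P @ phi)      # gain
    theta += K * (y[k] - phi @ theta)        # correct with prediction error
    P = P - np.outer(K, phi @ P)             # covariance update
```

With persistent excitation the recursion recovers the plant coefficients; in the paper's setting `y` would be the measured reactor power during the instability and the fitted model the reduced-order surrogate whose poles indicate relative stability.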
Advanced Fluid Reduced Order Models for Compressible Flow.
Energy Technology Data Exchange (ETDEWEB)
Tezaur, Irina Kalashnikova [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Fike, Jeffrey A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Barone, Matthew F. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Maddix, Danielle [Stanford Univ., CA (United States); Mussoni, Erin E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Balajewicz, Maciej [Univ. of Illinois, Urbana-Champaign, IL (United States)
2017-09-01
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
Modeling the self-assembly of ordered nanoporous materials
Energy Technology Data Exchange (ETDEWEB)
Monson, Peter [Univ. of Massachusetts, Amherst, MA (United States); Auerbach, Scott [Univ. of Massachusetts, Amherst, MA (United States)
2017-11-13
This report describes progress on a collaborative project on the multiscale modeling of assembly processes in the synthesis of nanoporous materials. Such materials are of enormous importance in modern technology, with applications in the chemical process industries, biomedicine and biotechnology, as well as microelectronics. The project focuses on two important classes of materials: i) microporous crystalline materials, such as zeolites, and ii) ordered mesoporous materials. In the first case the pores are part of the crystalline structure, while in the second the structures are amorphous on the atomistic length scale but surfactant templating gives rise to order on the length scale of 2 - 20 nm. We have developed a modeling framework that encompasses both of these kinds of materials. Our models focus on the assembly of corner-sharing silica tetrahedra in the presence of structure-directing agents. We emphasize a balance between sufficient realism in the models and computational tractability, given the complex many-body phenomena. We use both on-lattice and off-lattice models, and the primary computational tools are Monte Carlo simulations with sampling techniques and ensembles appropriate to specific situations. Our modeling approach is the first to capture silica polymerization, nanopore crystallization, and mesopore formation through computer-simulated self-assembly.
Comparison of different models for non-invasive FFR estimation
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
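One generic form of a reduced-order algebraic stenosis model combines a Poiseuille viscous term with a Bernoulli-type expansion-loss term. The sketch below is a hedged illustration of this class of model with invented geometry and standard blood properties; it is not the specific correlations benchmarked in the study.

```python
import math

def stenosis_pressure_drop(q, d_normal, d_sten, length, mu=0.0035, rho=1060.0):
    """Pressure drop [Pa] for flow q [m^3/s] across a stenotic segment:
    Poiseuille viscous loss plus a sudden-expansion (Borda-Carnot style) loss."""
    a0 = math.pi * d_normal**2 / 4.0          # normal lumen area [m^2]
    a_s = math.pi * d_sten**2 / 4.0           # stenotic lumen area [m^2]
    viscous = 128.0 * mu * length / (math.pi * d_sten**4) * q
    expansion = 0.5 * rho * (1.0 / a_s - 1.0 / a0) ** 2 * q**2
    return viscous + expansion

def ffr(p_aortic, dp):
    """FFR approximated as the distal-to-aortic mean pressure ratio."""
    return (p_aortic - dp) / p_aortic

# 50% diameter stenosis, hyperemic flow ~3 mL/s, mean aortic pressure 93 mmHg
dp = stenosis_pressure_drop(q=3e-6, d_normal=3e-3, d_sten=1.5e-3, length=1e-2)
value = ffr(p_aortic=93.0 * 133.322, dp=dp)
```

With these illustrative inputs the model yields an FFR in the clinically plausible intermediate range, which is exactly the regime where the choice between algebraic, 1D, 2D, and 3D models matters most.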
Performance of a reduced-order FSI model for flow-induced vocal fold vibration
Luo, Haoxiang; Chang, Siyuan; Chen, Ye; Rousseau, Bernard; PhonoSim Team
2017-11-01
Vocal fold vibration during speech production involves a three-dimensional unsteady glottal jet flow and three-dimensional nonlinear tissue mechanics. A full 3D fluid-structure interaction (FSI) model is computationally expensive even though it provides the most accurate information about the system. On the other hand, an efficient reduced-order FSI model is useful for fast simulation and analysis of the vocal fold dynamics, and can be applied in procedures such as optimization and parameter estimation. In this work, we study the performance of a reduced-order model compared with the corresponding full 3D model in terms of its accuracy in predicting the vibration frequency and deformation mode. In the reduced-order model, we use a 1D flow model coupled with a 3D tissue model that is the same as in the full 3D model. Two different hyperelastic tissue behaviors are assumed. In addition, the vocal fold thickness and subglottal pressure are varied for systematic comparison. The results show that the reduced-order model provides predictions consistent with the full 3D model across different tissue material assumptions and subglottal pressures. However, the vocal fold thickness has the greatest effect on the model accuracy, especially when the vocal fold is thin.
HOKF: High Order Kalman Filter for Epilepsy Forecasting Modeling.
Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee
2017-08-01
Epilepsy forecasting has been extensively studied using high-order time series obtained from scalp-recorded electroencephalography (EEG). An accurate seizure prediction system would not only help significantly improve patients' quality of life, but would also facilitate new therapeutic strategies to manage epilepsy. This paper thus proposes an improved Kalman Filter (KF) algorithm to mine seizure forecasts from neural activity by modeling three properties of the high-order EEG time series: noise, temporal smoothness, and tensor structure. The proposed High-Order Kalman Filter (HOKF) is an extension of the standard Kalman filter, for which higher-order modeling is limited. The efficient dynamics of the HOKF system preserve the tensor structure of the observations and latent states. As such, the proposed method offers two main advantages: (i) effectiveness: HOKF produces hidden variables that capture the major evolving trends, suitable for predicting neural activity even in the presence of missing values; and (ii) scalability: the wall-clock time of HOKF is linear with respect to the number of time-slices of the sequence. The HOKF algorithm is examined in terms of its effectiveness and scalability by conducting forecasting and scalability experiments with a real epilepsy EEG dataset. The results of the simulation demonstrate the superiority of the proposed method over the original Kalman Filter and other existing methods. Copyright © 2017 Elsevier B.V. All rights reserved.
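For context, the standard (first-order) Kalman filter that HOKF extends is a two-step predict/update recursion. The minimal sketch below shows one cycle in matrix form on a toy scalar random walk; the tensor-structured HOKF itself is not reproduced here.

```python
import numpy as np

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the standard Kalman filter."""
    x_pred = F @ x                        # state prediction
    P_pred = F @ P @ F.T + Q              # covariance prediction
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a scalar random walk from noisy observations (illustrative data)
F = H = np.eye(1)
Q, R = np.array([[0.01]]), np.array([[1.0]])
x, P = np.zeros(1), np.eye(1)
rng = np.random.default_rng(0)
truth = 2.0
for _ in range(100):
    z = np.array([truth]) + rng.normal(0.0, 1.0, 1)
    x, P = kf_step(x, P, z, F, H, Q, R)
```

The posterior variance `P` settles well below the measurement noise `R`, which is the basic smoothing behavior HOKF generalizes to tensor-valued EEG states.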
Reduced order modeling in topology optimization of vibroacoustic problems
DEFF Research Database (Denmark)
Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas
2017-01-01
complex 3D parts. The optimization process can therefore become highly time consuming due to the need to solve a large system of equations at each iteration. Projection-based parametric Model Order Reduction (pMOR) methods have successfully been applied for reducing the computational cost of material......There is an interest in introducing topology optimization techniques in the design process of structural-acoustic systems. In topology optimization, the design space must be finely meshed in order to obtain an accurate design, which results in large numbers of degrees of freedom when designing...... or size optimization in large vibroacoustic models; however, new challenges are encountered when dealing with topology optimization. Since a design parameter per element is considered, the total number of design variables becomes very large; this poses a challenge to most existing pMOR techniques, which...
Finite temperature CPN-1 model and long range Neel order
International Nuclear Information System (INIS)
Ichinose, Ikuo; Yamamoto, Hisashi.
1989-09-01
We study in d space-dimensions the finite temperature behavior of long range Neel order (LRNO) in the CP^(N-1) model as a low energy effective field theory of the antiferromagnetic Heisenberg model. For d≤1, or d≤2 at any nonzero temperature, LRNO disappears, in agreement with the Mermin-Wagner-Coleman theorem. For d=3 in the weak coupling region, LRNO exists below the critical temperature T_N (the Neel temperature). T_N decreases as the interlayer coupling becomes relatively weak compared with that within the Cu-O layers. (author)
The order of chaos on a Bianchi IX cosmological model
Energy Technology Data Exchange (ETDEWEB)
Bugalho, H; da Silva, A R; Ramos, J S
1986-12-01
The purpose of this paper is to analyze the chaotic behavior that can arise in a type-IX cosmological model using methods from dynamical systems theory and symbolic dynamics. Specifically, instead of the Belinski-Khalatnikov-Lifschitz model, we use the iterates of a monotonically increasing circle map with a discontinuity, and for the Hamiltonian dynamics of Misner's Mixmaster model we introduce the iterates of a noninvertible map. An equivalence between these two models can easily be established by translating them into symbolic dynamical terms. The resulting symbolic orbits can be inserted in an ordered tree structure, allowing an effective counting and labeling of all periodic orbits.
Reduced-Order Computational Model for Low-Frequency Dynamics of Automobiles
Directory of Open Access Journals (Sweden)
A. Arnoux
2013-01-01
Full Text Available A reduced-order model is constructed to predict, for the low-frequency range, the dynamical responses in the stiff parts of an automobile composed of stiff and flexible parts. The vehicle then has many elastic modes in this range due to the presence of many flexible parts and equipment. A nonstandard reduced-order model is introduced. The family of elastic modes is not used, and is replaced by an adapted vector basis of the admissible space of global displacements. Such a construction requires a decomposition of the domain of the structure into subdomains in order to control the spatial wavelength of the global displacements. The fast marching method is used to carry out the subdomain decomposition. A probabilistic model of uncertainties is introduced. The parameters controlling the level of uncertainties are estimated by solving a statistical inverse problem. The methodology is validated with a large computational model of an automobile.
AMEM-ADL Polymer Migration Estimation Model User's Guide
The user's guide of the Arthur D. Little Polymer Migration Estimation Model (AMEM) provides the information on how the model estimates the fraction of a chemical additive that diffuses through polymeric matrices.
Validity testing of third-order nonlinear models for synchronous generators
Energy Technology Data Exchange (ETDEWEB)
Arjona, M.A. [Division de Estudios de Posgrado e Investigacion, Instituto Tecnologico de La Laguna Torreon, Coah. (Mexico); Escarela-Perez, R. [Universidad Autonoma Metropolitana - Azcapotzalco, Departamento de Energia, Av. San Pablo 180, Col. Reynosa, C.P. 02200 (Mexico); Espinosa-Perez, G. [Division de Estudios Posgrado de la Facultad de Ingenieria Universidad Nacional Autonoma de Mexico (Mexico); Alvarez-Ramirez, J. [Universidad Autonoma Metropolitana -Iztapalapa, Division de Ciencias Basicas e Ingenieria (Mexico)
2009-06-15
Third-order nonlinear models are commonly used in control theory for the analysis of the stability of both open-loop and closed-loop synchronous machines. However, the ability of these models to describe the electrical machine dynamics has not been tested experimentally. This work focuses on this issue by addressing the parameter identification problem for third-order models of synchronous generators. For a third-order model describing the dynamics of the power angle δ, rotor speed ω and quadrature-axis transient EMF E'_q, it is shown that the parameters cannot be identified because of the effects of the unknown initial condition of E'_q. To avoid this situation, a model that incorporates the measured electrical power dynamics is considered, showing that state measurements guarantee the identification of the model parameters. Data obtained from a 7 kVA lab-scale synchronous generator and from a 150 MVA finite-element simulation were used to show that, at least for the worked examples, the estimated parameters display only moderate variations over the operating region. This suggests that third-order models can suffice to describe the main dynamical features of synchronous generators, and that they can be used to design and tune power system stabilizers and voltage regulators. (author)
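One common form of the third-order (one-axis) machine model connected to an infinite bus can be simulated directly. The sketch below integrates it with forward Euler; the parameter values and sign conventions are illustrative (texts differ in details), not those identified in the paper.

```python
import numpy as np

# Third-order (one-axis) model, machine-infinite-bus, illustrative constants:
#   d(delta)/dt  = omega
#   M d(omega)/dt = Pm - Pe - D*omega,          Pe = Eq' V sin(delta) / xd'
#   Td0' d(Eq')/dt = Ef - Eq' - (xd - xd')*id,  id = (Eq' - V cos(delta)) / xd'

def simulate(delta=0.5, omega=0.0, eq=1.05, t_end=20.0, dt=1e-3,
             M=0.1, D=0.05, Pm=0.8, V=1.0, xd=1.8, xd_p=0.3,
             Td0=6.0, Ef=1.2):
    for _ in range(int(t_end / dt)):
        pe = eq * V * np.sin(delta) / xd_p
        i_d = (eq - V * np.cos(delta)) / xd_p
        delta_new = delta + dt * omega
        omega_new = omega + dt * (Pm - pe - D * omega) / M
        eq_new = eq + dt * (Ef - eq - (xd - xd_p) * i_d) / Td0
        delta, omega, eq = delta_new, omega_new, eq_new
    return delta, omega, eq

delta_f, omega_f, eq_f = simulate()   # settles near a stable equilibrium
```

With these constants the rotor swing decays towards an equilibrium around δ ≈ 0.24 rad, which is the kind of trajectory the identification procedure in the paper would fit against measured data.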
Benefit Estimation Model for Tourist Spaceflights
Goehlich, Robert A.
2003-01-01
It is believed that the only potential means for a significant reduction of the recurrent launch cost, which would result in a stimulation of human space colonization, is to make the launcher reusable, to increase its reliability, and to make it suitable for new markets such as mass space tourism. But space projects with such long-range aspects are very difficult to finance, because even politicians would like to see a reasonable benefit during their term in office, so that they can explain the investment to the taxpayer. This forces planners to use benefit models instead of intuitive judgement to convince sceptical decision-makers to support new investments in space. Benefit models provide insights into complex relationships and force a better definition of goals. A new approach is introduced in the paper that makes it possible to estimate the benefits to be expected from a new space venture. This study identifies the main objective of human space exploration as ``improving the quality of life''. This main objective is broken down into sub-objectives, which can be analysed with respect to different interest groups. Such interest groups are the operator of a space transportation system, the passenger, and the government. For example, the operator is strongly interested in profit, the passenger is mainly interested in amusement, and the government is primarily interested in self-esteem and prestige. This leads to different individual satisfaction levels, which can be used in the optimisation process for reusable launch vehicles.
Macroeconomic Forecasts in Models with Bayesian Averaging of Classical Estimates
Directory of Open Access Journals (Sweden)
Piotr Białowolski
2012-03-01
Full Text Available The aim of this paper is to construct a forecasting model oriented towards predicting basic macroeconomic variables, namely: the GDP growth rate, the unemployment rate, and consumer price inflation. In order to select the set of the best regressors, Bayesian Averaging of Classical Estimators (BACE) is employed. The models are atheoretical (i.e. they do not reflect causal relationships postulated by macroeconomic theory) and the role of regressors is played by business and consumer tendency survey-based indicators. Additionally, survey-based indicators are included with a lag that makes it possible to forecast the variables of interest (GDP, unemployment, and inflation) for the four forthcoming quarters without the need to make any additional assumptions concerning the values of predictor variables in the forecast period. Bayesian Averaging of Classical Estimators is a method allowing a full and controlled overview of all econometric models which can be obtained from a particular set of regressors. In this paper the authors describe the method of generating a family of econometric models and the procedure for selecting a final forecasting model. Verification of the procedure is performed by means of out-of-sample forecasts of the main economic variables for the quarters of 2011. The accuracy of the forecasts implies that there is still a need to search for new solutions in atheoretical modelling.
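The core BACE idea of weighting every regressor subset by its approximate posterior probability can be sketched with a BIC approximation: each OLS model gets weight proportional to exp(-BIC/2). The data below are synthetic (real BACE applications, as in the paper, use survey indicators), so this is an illustration of the weighting mechanics only.

```python
import numpy as np
from itertools import combinations

def bace_weights(X, y, subsets):
    """BIC-approximated posterior model weights for OLS regressor subsets."""
    n = len(y)
    bics = []
    for s in subsets:
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(np.sum((y - Z @ beta) ** 2))
        bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
    bics = np.array(bics)
    w = np.exp(-(bics - bics.min()) / 2.0)  # shift for numerical stability
    return w / w.sum()

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
y = 2.0 * X[:, 0] + rng.standard_normal(200)   # only regressor 0 matters
subsets = [s for r in range(1, 5) for s in combinations(range(4), r)]
weights = bace_weights(X, y, subsets)
```

Nearly all posterior mass lands on subsets containing the truly relevant regressor, which is how BACE identifies "the set of the best regressors" before forecasting.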
Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model
Mo, Qianxing
2010-01-29
ChIP-chip experiments are procedures that combine chromatin immunoprecipitation (ChIP) and DNA microarray (chip) technology to study a variety of biological problems, including protein-DNA interaction, histone modification, and DNA methylation. The most important feature of ChIP-chip data is that the intensity measurements of probes are spatially correlated because the DNA fragments are hybridized to neighboring probes in the experiments. We propose a simple, but powerful Bayesian hierarchical approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic resolutions. The model parameters are estimated using the Gibbs sampler. The proposed method is illustrated using two publicly available data sets from Affymetrix and Agilent platforms, and compared with three alternative Bayesian methods, namely, Bayesian hierarchical model, hierarchical gamma mixture model, and Tilemap hidden Markov model. The numerical results indicate that the proposed method performs as well as the other three methods for the data from Affymetrix tiling arrays, but significantly outperforms the other three methods for the data from Agilent promoter arrays. In addition, we find that the proposed method has better operating characteristics in terms of sensitivities and false discovery rates under various scenarios. © 2010, The International Biometric Society.
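The conditional-update idea behind the Gibbs sampler used above can be shown on a far simpler model: a nearest-neighbour (first-order) 1D Ising chain with free ends. This is a minimal sketch of the sampling mechanics, not the paper's high-order model for ChIP-chip probes.

```python
import numpy as np

def gibbs_ising(n=100, beta=1.0, sweeps=200, seed=0):
    """Gibbs-sample a 1D Ising chain: resample each spin from its full
    conditional given its (at most two) neighbours."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):
        for i in range(n):
            h = (s[i - 1] if i > 0 else 0) + (s[i + 1] if i < n - 1 else 0)
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # P(s_i = +1 | rest)
            s[i] = 1 if rng.random() < p_up else -1
    return s

spins = gibbs_ising()
```

At this coupling strength neighbouring spins align far more often than not, the same spatial-smoothing effect the high-order Ising prior imposes on neighbouring probe states.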
Directory of Open Access Journals (Sweden)
Z. Meghnatisi
2009-06-01
Full Text Available Let X_{i1}, ..., X_{in_i} be a random sample from a gamma distribution with known shape parameter ν_i > 0 and unknown scale parameter β_i > 0, i = 1, 2, satisfying 0 < β_1 ≤ β_2. We consider the class of mixed estimators for the estimation of β_1 and β_2 under the reflected gamma loss function. It is shown that the minimum risk equivariant estimator of β_i, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a class of mixed estimators when it is known that the parameters are ordered. The inadmissible estimators in the class of mixed estimators are also derived. Finally, the results are extended to a subclass of the exponential family.
Identification of reduced-order model for an aeroelastic system from flutter test data
Directory of Open Access Journals (Sweden)
Wei Tang
2017-02-01
Full Text Available Recently, flutter active control using the linear parameter varying (LPV) framework has attracted a lot of attention. LPV control synthesis usually generates controllers that are at least of the same order as the aeroelastic models. A reduced-order model is therefore required by the synthesis to avoid a large computational cost and a high-order controller. This paper proposes a new procedure for the generation of accurate reduced-order linear time-invariant (LTI) models by using system identification from flutter test data. The proposed approach consists of two steps. First, the well-known poly-reference least squares complex frequency (p-LSCF) algorithm is employed for modal parameter identification from frequency response measurements. After parameter identification, the dominant physical modes are determined by clear stabilization diagrams and a clustering technique. In the second step, with prior knowledge of the physical poles, an improved frequency-domain maximum likelihood (ML) estimator is presented for building an accurate reduced-order model. Before ML estimation, an improved subspace identification that accounts for the pole constraints is also proposed for initializing the iterative procedure. Finally, the performance of the proposed procedure is validated with real flight flutter test data.
An Ordered Regression Model to Predict Transit Passengers’ Behavioural Intentions
Energy Technology Data Exchange (ETDEWEB)
Oña, J. de; Oña, R. de; Eboli, L.; Forciniti, C.; Mazzulla, G.
2016-07-01
Passengers’ behavioural intentions after experiencing transit services can be viewed as signals that show whether a customer will continue to use a company’s service. Users’ behavioural intentions can depend on a series of aspects that are difficult to measure directly. More recently, transit passengers’ behavioural intentions have been considered jointly with the concepts of service quality and customer satisfaction. Given the characteristics of the ways in which passengers’ behavioural intentions, service quality and customer satisfaction are evaluated, we argue that this kind of issue can also be analysed by applying ordered regression models. This work proposes an ordered probit model for analysing the service quality factors that can influence passengers’ behavioural intentions towards the use of transit services. The case study is the LRT of Seville (Spain), where a survey was conducted in order to collect the opinions of passengers about the existing transit service, and to obtain a measure of the aspects that can influence the intentions of users to continue using the transit service in the future. (Author)
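The ordered-probit mechanics can be sketched in a few lines: a latent index b·x is compared against ordered cutpoints, and category probabilities are differences of normal CDFs. The data and variable names below are invented (not the Seville survey items); this is a minimal 3-category illustration, not the paper's full specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Ordered probit: P(y = k | x) = Phi(c_k - b*x) - Phi(c_{k-1} - b*x),
# with c_0 = -inf and c_K = +inf.

def neg_loglik(params, x, y):
    b, c1, log_gap = params                 # log_gap keeps c2 > c1
    c2 = c1 + np.exp(log_gap)
    eta = b * x
    upper = np.select([y == 0, y == 1, y == 2],
                      [norm.cdf(c1 - eta), norm.cdf(c2 - eta),
                       np.ones_like(eta)])
    lower = np.select([y == 0, y == 1, y == 2],
                      [np.zeros_like(eta), norm.cdf(c1 - eta),
                       norm.cdf(c2 - eta)])
    return -float(np.sum(np.log(upper - lower + 1e-300)))

rng = np.random.default_rng(4)
x = rng.standard_normal(600)
latent = 1.0 * x + rng.standard_normal(600)     # true coefficient b = 1
y = np.digitize(latent, [-0.5, 0.5])            # observed categories 0, 1, 2
res = minimize(neg_loglik, x0=[0.0, -0.5, 0.0], args=(x, y),
               method="Nelder-Mead")
b_hat, c1_hat = res.x[0], res.x[1]
```

The fitted coefficient recovers the latent effect of the explanatory variable, which is how service-quality factors would be linked to intention categories in the paper's setting.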
Competing orders in the Hofstadter t-J model
Tu, Wei-Lin; Schindler, Frank; Neupert, Titus; Poilblanc, Didier
2018-01-01
The Hofstadter model describes noninteracting fermions on a lattice in the presence of an external magnetic field. Motivated by the plethora of solid-state phases emerging from electron interactions, we consider an interacting version of the Hofstadter model, including a Hubbard repulsion U. We investigate this model in the large-U limit corresponding to a t-J Hamiltonian with an external (orbital) magnetic field. By using renormalized mean-field theory supplemented by exact diagonalization calculations of small clusters, we find evidence for competing symmetry-breaking phases, exhibiting (possibly coexisting) charge, bond, and superconducting orders. Topological properties of the states are also investigated, and some of our results are compared to related experiments involving ultracold atoms loaded on optical lattices in the presence of a synthetic gauge field.
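The noninteracting starting point, the single-particle Hofstadter (Harper) Hamiltonian, is easy to construct explicitly. The sketch below builds it on an L×L torus with flux p/q per plaquette in the Landau gauge (L must be a multiple of q for consistent periodic boundaries); the interacting t-J physics of the paper is not reproduced here.

```python
import numpy as np

def hofstadter_spectrum(L=9, p=1, q=3, t=1.0):
    """Eigenvalues of the Hofstadter Hamiltonian on an L x L torus,
    Landau gauge: the y-hopping carries a Peierls phase depending on x."""
    n = L * L
    H = np.zeros((n, n), dtype=complex)
    phi = 2.0 * np.pi * p / q          # flux per plaquette
    idx = lambda x, y: x * L + y
    for x in range(L):
        for y in range(L):
            H[idx(x, y), idx((x + 1) % L, y)] = -t                 # x-hop
            H[idx(x, y), idx(x, (y + 1) % L)] = -t * np.exp(1j * phi * x)
    H = H + H.conj().T                  # Hermitize (adds reverse hoppings)
    return np.linalg.eigvalsh(H)

energies = hofstadter_spectrum()
```

At flux 1/3 the spectrum splits into q = 3 magnetic subbands, the band structure on top of which the Hubbard U generates the competing orders studied in the paper.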
Twisted quantum double model of topological order with boundaries
Bullivant, Alex; Hu, Yuting; Wan, Yidun
2017-10-01
We generalize the twisted quantum double model of topological orders in two dimensions to the case with boundaries by systematically constructing the boundary Hamiltonians. Given the bulk Hamiltonian defined by a gauge group G and a 3-cocycle in the third cohomology group of G over U(1), a boundary Hamiltonian can be defined by a subgroup K of G and a 2-cochain in the second cochain group of K over U(1). The consistency between the bulk and boundary Hamiltonians is dictated by what we call the Frobenius condition, which constrains the 2-cochain given the 3-cocycle. We offer a closed-form formula computing the ground-state degeneracy of the model on a cylinder in terms of the input data only, which can be naturally generalized to surfaces with more boundaries. We also explicitly write down the ground-state wave function of the model on a disk, also in terms of the input data only.
Topological order in an exactly solvable 3D spin model
International Nuclear Information System (INIS)
Bravyi, Sergey; Leemhuis, Bernhard; Terhal, Barbara M.
2011-01-01
Research highlights: • We study an exactly solvable spin model with six-qubit nearest-neighbor interactions on a 3D face-centered cubic lattice. • The ground space of the model exhibits topological quantum order. • Elementary excitations can be geometrically described as the corners of rectangular-shaped membranes. • The ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. • Logical operators acting on the encoded qubits are described in terms of closed strings and closed membranes. - Abstract: We study a 3D generalization of the toric code model introduced recently by Chamon. This is an exactly solvable spin model with six-qubit nearest-neighbor interactions on an FCC lattice whose ground space exhibits topological quantum order. The elementary excitations of this model, which we call monopoles, can be geometrically described as the corners of rectangular-shaped membranes. We prove that the creation of an isolated monopole separated from other monopoles by a distance R requires an operator acting on Ω(R²) qubits. Composite particles that consist of two monopoles (dipoles) and four monopoles (quadrupoles) can be described as end-points of strings. The peculiar feature of the model is that dipole-type strings are rigid, that is, such strings must be aligned with the face-diagonals of the lattice. For periodic boundary conditions the ground space can encode 4g qubits, where g is the greatest common divisor of the lattice dimensions. We describe a complete set of logical operators acting on the encoded qubits in terms of closed strings and closed membranes.
Estimating true evolutionary distances under the DCJ model.
Lin, Yu; Moret, Bernard M E
2008-07-01
Modern techniques can yield the ordering and strandedness of genes on each chromosome of a genome; such data already exists for hundreds of organisms. The evolutionary mechanisms through which the set of the genes of an organism is altered and reordered are of great interest to systematists, evolutionary biologists, comparative genomicists and biomedical researchers. Perhaps the most basic concept in this area is that of evolutionary distance between two genomes: under a given model of genomic evolution, how many events most likely took place to account for the difference between the two genomes? We present a method to estimate the true evolutionary distance between two genomes under the 'double-cut-and-join' (DCJ) model of genome rearrangement, a model under which a single multichromosomal operation accounts for all genomic rearrangement events: inversion, transposition, translocation, block interchange and chromosomal fusion and fission. Our method relies on a simple structural characterization of a genome pair and is both analytically and computationally tractable. We provide analytical results to describe the asymptotic behavior of genomes under the DCJ model, as well as experimental results on a wide variety of genome structures to exemplify the very high accuracy (and low variance) of our estimator. Our results provide a tool for accurate phylogenetic reconstruction from multichromosomal gene rearrangement data as well as a theoretical basis for refinements of the DCJ model to account for biological constraints. All of our software is available in source form under GPL at http://lcbb.epfl.ch.
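For the simplest setting, two circular single-chromosome genomes over the same genes given as signed gene orders, the parsimony DCJ distance is N − C, where N is the number of genes and C the number of cycles in the adjacency graph. The hedged sketch below computes that parsimony distance; the paper's statistical correction of it into a *true* distance estimate is not reproduced here.

```python
# Each gene g has two extremities: tail (g,'t') and head (g,'h').
# A circular signed order induces one adjacency per consecutive gene pair.

def adjacencies(order):
    ends = []
    for g in order:
        ends += [(g, 't'), (g, 'h')] if g > 0 else [(-g, 'h'), (-g, 't')]
    n = len(ends)
    return [(ends[i], ends[(i + 1) % n]) for i in range(1, n, 2)]

def dcj_distance(a, b):
    """Parsimony DCJ distance between two circular genomes: N - C."""
    def partner(adjs):
        m = {}
        for u, v in adjs:
            m[u], m[v] = v, u
        return m
    pa, pb = partner(adjacencies(a)), partner(adjacencies(b))
    seen, cycles = set(), 0
    for start in pa:                      # walk the adjacency graph,
        if start in seen:                 # alternating A-edges and B-edges
            continue
        cycles += 1
        v, use_a = start, True
        while v not in seen:
            seen.add(v)
            v = pa[v] if use_a else pb[v]
            use_a = not use_a
    return len(a) - cycles
```

For example, a single inversion gives distance 1 and a single transposition gives distance 2, matching the fact that DCJ charges one operation per cut-and-rejoin.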
Dynamic Diffusion Estimation in Exponential Family Models
Czech Academy of Sciences Publication Activity Database
Dedecius, Kamil; Sečkárová, Vladimíra
2013-01-01
Vol. 20, No. 11 (2013), pp. 1114-1117 ISSN 1070-9908 R&D Projects: GA MŠk 7D12004; GA ČR GA13-13502S Keywords: diffusion estimation * distributed estimation * parameter estimation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.639, year: 2013 http://library.utia.cas.cz/separaty/2013/AS/dedecius-0396518.pdf
UAV State Estimation Modeling Techniques in AHRS
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operations. Estimating the correct state improves navigation accuracy and allows flight missions to be achieved safely. One sensor configuration used in UAV state estimation is the Attitude Heading and Reference System (AHRS) with the application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
Temporal aggregation in first order cointegrated vector autoregressive models
DEFF Research Database (Denmark)
La Cour, Lisbeth Funding; Milhøj, Anders
We study aggregation - or sample frequencies - of time series, e.g. aggregation from weekly to monthly or quarterly time series. Aggregation usually gives shorter time series but spurious phenomena, in e.g. daily observations, can on the other hand be avoided. An important issue is the effect of ...... of aggregation on the adjustment coefficient in cointegrated systems. We study only first order vector autoregressive processes for n dimensional time series Xt, and we illustrate the theory by a two dimensional and a four dimensional model for prices of various grades of gasoline...
Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model
Directory of Open Access Journals (Sweden)
Jing-Huai Gao
2009-12-01
Full Text Available This paper proposes a new method for estimating seismic wavelets. Suppose a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency, and phase). We can then transform the estimation of the wavelet into determining these three parameters. The phase of the wavelet is estimated by constant-phase rotation of the seismic signal, while the other two parameters are obtained by the higher-order statistics (HOS) fourth-order cumulant matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, we can represent the HOS as a polynomial function of second-order statistics to improve the anti-noise performance and accuracy. In addition, the proposed method works well for short time series.
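A three-parameter wavelet of this kind can be illustrated as a Gaussian-enveloped cosine with scale, dominant frequency, and constant phase. This is one common parameterization chosen for illustration, not necessarily the exact analytic formula used in the paper.

```python
import numpy as np

def wavelet(t, scale, freq, phase):
    """Gaussian-enveloped cosine: envelope width = scale [s],
    carrier frequency = freq [Hz], constant phase = phase [rad]."""
    return np.exp(-(t / scale) ** 2) * np.cos(2.0 * np.pi * freq * t + phase)

t = np.linspace(-0.1, 0.1, 201)
w_zero = wavelet(t, scale=0.02, freq=30.0, phase=0.0)        # zero-phase
w_quad = wavelet(t, scale=0.02, freq=30.0, phase=np.pi / 2)  # 90-degree phase
```

The constant-phase rotation the paper exploits is visible directly: the zero-phase wavelet is symmetric about t = 0, while the 90-degree version is antisymmetric.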
Mechhoud, Sarra
2016-08-04
In this paper, boundary adaptive estimation of solar radiation in a solar collector plant is investigated. The solar collector is described by a 1D first-order hyperbolic partial differential equation where the solar radiation models the source term and only boundary measurements are available. Using boundary injection, the estimator is developed in the Lyapunov approach and consists of a combination of a state observer and a parameter adaptation law which guarantee the asymptotic convergence of the state and parameter estimation errors. Simulation results are provided to illustrate the performance of the proposed identifier.
Using Count Data and Ordered Models in National Forest Recreation Demand Analysis
Simões, Paula; Barata, Eduardo; Cruz, Luis
2013-11-01
This research addresses the need to improve our knowledge of the demand for national forests for recreation and offers an in-depth data analysis supported by the complementary use of count data and ordered models. From a policy-making perspective, while count data models enable the estimation of monetary welfare measures, ordered models allow for wider use of the database and provide a more flexible analysis of the data. The main purpose of this article is to analyse individual forest recreation demand and to derive a measure of its current use value. To allow a more complete analysis of the forest recreation demand structure, the econometric approach supplements count data models with ordered category models, using data obtained by means of an on-site survey in the Bussaco National Forest (Portugal). Overall, both models reveal that travel cost and substitute prices are important explanatory variables, that visits are a normal good, and that demographic variables seem to have no influence on demand. In particular, the estimated price and income elasticities of demand are quite low. Accordingly, it is possible to argue that travel cost (price) in isolation may be expected to have a low impact on visitation levels.
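The count-data side of such an analysis is typically a Poisson travel-cost model, where expected trips decay exponentially in travel cost and the fitted cost coefficient yields the classic per-trip consumer surplus −1/b₁. The sketch below fits this model by maximum likelihood on synthetic data (invented values, not the Bussaco survey).

```python
import numpy as np
from scipy.optimize import minimize

# Poisson trip-count model: E[trips] = exp(b0 + b1 * travel_cost).
# The Poisson negative log-likelihood (dropping the constant log(y!) term):

def neg_loglik(beta, x, y):
    eta = beta[0] + beta[1] * x
    return float(np.sum(np.exp(eta) - y * eta))

rng = np.random.default_rng(1)
cost = rng.uniform(1.0, 20.0, 500)                  # travel cost per visit
trips = rng.poisson(np.exp(1.5 - 0.08 * cost))      # true b0=1.5, b1=-0.08
res = minimize(neg_loglik, x0=[0.0, 0.0], args=(cost, trips),
               method="Nelder-Mead")
b0, b1 = res.x
surplus_per_trip = -1.0 / b1    # semi-log travel-cost consumer surplus
```

The negative fitted cost coefficient reproduces the downward-sloping demand found in the paper, and −1/b₁ gives the monetary use-value measure that only the count-data model (not the ordered model) can deliver.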
Directory of Open Access Journals (Sweden)
R. Locatelli
2013-10-01
question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariances matrices in order to limit the impact of transport model errors on estimated methane fluxes.
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2011-01-01
In this paper, two non-parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel-based approaches. The second estimator
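The target loss in quantile regression can be made concrete with a simple linear (rather than additive nonparametric) sketch: minimize the pinball (check) loss, which for τ = 0.5 reduces to median regression. The data are synthetic and the linear form is a simplification of the paper's setting.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(u, tau):
    """Check loss: tau*u for positive residuals, (tau-1)*u for negative."""
    return np.where(u >= 0.0, tau * u, (tau - 1.0) * u)

def fit_quantile(x, y, tau):
    obj = lambda b: float(np.sum(pinball(y - b[0] - b[1] * x, tau)))
    return minimize(obj, x0=[0.0, 0.0], method="Nelder-Mead").x

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 400)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, 400)   # true intercept 1, slope 2
b0, b1 = fit_quantile(x, y, tau=0.5)            # median regression
```

Changing `tau` traces out conditional quantiles other than the median, which is what the additive estimators in the paper do component by component.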
Performances of some estimators of linear model with ...
African Journals Online (AJOL)
The estimators are compared by examining the finite-sample properties of the estimators, namely: the sum of biases, sum of absolute biases, sum of variances, and sum of the mean squared errors of the estimated parameters of the model. Results show that when the autocorrelation level is small (ρ=0.4), the MLGD estimator is best except when ...
Antiferromagnetic order in the Hubbard model on the Penrose lattice
Koga, Akihisa; Tsunetsugu, Hirokazu
2017-12-01
We study antiferromagnetic order in the ground state of the half-filled Hubbard model on the Penrose lattice and investigate the effects of the quasiperiodic lattice structure. In the limit of infinitesimal Coulomb repulsion U → +0, the staggered magnetizations remain finite, and their values are determined by confined states, which are strictly localized with thermodynamic degeneracy. The magnetizations exhibit an exotic spatial pattern and have the same sign in each of the cluster regions, whose size ranges from 31 sites to infinity. With increasing U, they continuously evolve to those of the corresponding spin model in the U = ∞ limit. In both limits of U, local magnetizations exhibit a fairly intricate spatial pattern that reflects the quasiperiodic structure, but the pattern differs between the two limits. We have analyzed this pattern change by a mode analysis using the singular value decomposition method for the fractal-like magnetization pattern projected into the perpendicular space.
Venus spherical harmonic gravity model to degree and order 60
Konopliv, Alex S.; Sjogren, William L.
1994-01-01
The Magellan and Pioneer Venus Orbiter radiometric tracking data sets have been combined to produce a 60th degree and order spherical harmonic gravity field. The Magellan data include the high-precision X-band gravity tracking from September 1992 to May 1993 and post-aerobraking data up to January 5, 1994. Gravity models are presented from the application of Kaula's power rule for Venus and an alternative a priori method using surface accelerations. Results are given as vertical gravity acceleration at the reference surface, geoid, vertical Bouguer, and vertical isostatic maps with errors for the vertical gravity and geoid maps included. Correlation of the gravity with topography for the different models is also discussed.
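For orientation on the model's size, a spherical harmonic field complete to degree and order 60 carries (60+1)² = 3721 Stokes coefficients; this is standard spherical-harmonic counting, not a figure quoted from the record:

```python
def stokes_coefficient_count(nmax):
    """Coefficients in a spherical harmonic field complete to degree and
    order nmax: C(n, m) for m = 0..n, plus S(n, m) for m = 1..n."""
    c = (nmax + 1) * (nmax + 2) // 2  # cosine coefficients C_nm
    s = nmax * (nmax + 1) // 2        # sine coefficients S_nm (m >= 1)
    return c + s                      # equals (nmax + 1) ** 2

print(stokes_coefficient_count(60))  # 3721
```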
Ordered LOGIT Model approach for the determination of financial distress.
Kinay, B
2010-01-01
Nowadays, as a result of global competition, numerous companies face financial distress. Predicting such problems and taking proactive measures against them is quite important. Thus, the prediction of crisis and financial distress is essential for revealing the financial condition of companies. In this study, financial ratios relating to 156 industrial firms quoted on the Istanbul Stock Exchange are used, and probabilities of financial distress are predicted by means of an ordered logit regression model. The dependent variable is constructed by scaling the level of risk using Altman's Z-score. Thus, a model that can serve as an early warning system and predict financial distress is proposed.
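The dependent variable in this study is built from Altman's Z-score; a minimal sketch of the classic 1968 score and the conventional cut-offs follows (the study's exact scaling of risk levels is not reproduced here, and the example ratios are invented):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's (1968) Z-score from the five classic ratios: working
    capital, retained earnings, EBIT and sales over total assets, and
    market value of equity over total liabilities."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def risk_zone(z):
    """Conventional zones: distress below 1.81, safe above 2.99."""
    if z > 2.99:
        return "safe"
    if z >= 1.81:
        return "grey"
    return "distress"

# a healthy firm (ratios invented for illustration)
z = altman_z(0.2, 0.3, 0.2, 1.5, 1.2)  # 3.42 -> "safe"
```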
Pairing of parafermions of order 2: seniority model
International Nuclear Information System (INIS)
Nelson, Charles A
2004-01-01
As generalizations of the fermion seniority model, four multi-mode Hamiltonians are considered to investigate some of the consequences of the pairing of parafermions of order 2. Two- and four-particle states are explicitly constructed for H_A ≡ −G A†A with A† ≡ (1/2) Σ_{m>0} c†_m c†_{−m}, and for the distinct H_C ≡ −G C†C with C† ≡ (1/2) Σ_{m>0} c†_{−m} c†_m, and for the time-reversal invariant H^(−) ≡ −G(A† − C†)(A − C) and H^(+) ≡ −G(A† + C†)(A + C), which has no analogue in the fermion case. The spectra and degeneracies are compared with those of the usual fermion seniority model.
Quantifying and modeling birth order effects in autism.
Directory of Open Access Journals (Sweden)
Tychele Turner
Full Text Available Autism is a complex genetic disorder with multiple etiologies whose molecular genetic basis is not fully understood. Although a number of rare mutations and dosage abnormalities are specific to autism, these explain no more than 10% of all cases. The high heritability of autism and low recurrence risk suggest multifactorial inheritance from numerous loci, but other factors also intervene to modulate risk. In this study, we examine the effect of birth rank on disease risk, which is not expected under purely hereditary genetic models. We analyzed the data from three publicly available autism family collections in the USA for potential birth order effects and studied the statistical properties of three tests to show that adequate power to detect these effects exists. We detect statistically significant, yet varying, patterns of birth order effects across these collections. In multiplex families, we identify V-shaped effects where middle births are at high risk; in simplex families, we demonstrate linear effects where risk increases with each additional birth. Moreover, the birth order effect is gender-dependent in the simplex collection. It is currently unknown whether these patterns arise from ascertainment biases or biological factors. Nevertheless, further investigation of parental age-dependent risks yields patterns similar to those observed and could potentially explain part of the increased risk. A search for genes that considers these patterns is likely to increase statistical power and uncover novel molecular etiologies.
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has enormous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
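Of the estimators compared here, the Chao lower bound has the simplest closed form, N̂ = S_obs + f1²/(2·f2), where f1 and f2 are the numbers of individuals seen exactly once and exactly twice. A minimal sketch (the bias-corrected fallback for f2 = 0 is one common convention, not necessarily the one used in the paper):

```python
from collections import Counter

def chao_lower_bound(counts):
    """Chao's lower-bound population size estimate from a single list.

    counts: number of times each observed individual appeared (all >= 1).
    """
    freq = Counter(counts)
    s_obs = len(counts)              # individuals seen at least once
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # bias-corrected variant, used when no doubletons are observed
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# 6 singletons, 2 doubletons, 1 triple: 9 observed, estimate 9 + 36/4 = 18
print(chao_lower_bound([1, 1, 1, 1, 1, 1, 2, 2, 3]))  # 18.0
```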
Dynamics and phenomenology of higher order gravity cosmological models
Moldenhauer, Jacob Andrew
2010-10-01
I present here some new results about a systematic approach to higher-order gravity (HOG) cosmological models. The HOG models are derived from curvature invariants that are more general than the Einstein-Hilbert action. Some of the models exhibit late-time cosmic acceleration without the need for dark energy and fit some current observations. The open question is that there are an infinite number of invariants that one could select, and many of the published papers have stressed the need to find a systematic approach that will allow one to study methodically the various possibilities. We explore a new connection that we made between theorems from the theory of invariants in general relativity and these cosmological models. In summary, the theorems demonstrate that curvature invariants are not all independent from each other and that for a given Ricci Segre type and Petrov type (symmetry classification) of the space-time, there exists a complete minimal set of independent invariants (a basis) in terms of which all the other invariants can be expressed. As an immediate consequence of the proposed approach, the number of invariants to consider is dramatically reduced from infinity to four invariants in the worst case and to only two invariants in the cases of interest, including all Friedmann-Lemaitre-Robertson-Walker metrics. We derive models that pass stability and physical acceptability conditions. We derive dynamical equations and phase portrait analyses that show the promise of the systematic approach. We consider observational constraints from magnitude-redshift Supernovae Type Ia data, distance to the last scattering surface of the Cosmic Microwave Background radiation, and Baryon Acoustic Oscillations. We put observational constraints on general HOG models. We constrain different forms of the Gauss-Bonnet, f(G), modified gravity models with these observations. We show some of these models pass solar system tests. We seek to find models that pass physical and
Falk, Richard A.
The monograph examines the relationship of nuclear power to world order. The major purpose of the document is to stimulate research, education, dialogue, and political action for a just and peaceful world order. The document is presented in five chapters. Chapter I stresses the need for a system of global security to counteract dangers brought…
Radiation risk estimation based on measurement error models
Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya
2017-01-01
This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.
Daily Discharge Estimation in Talar River Using Lazy Learning Model
Directory of Open Access Journals (Sweden)
Zahra Abdollahi
2017-03-01
Full Text Available Introduction: River discharge, as one of the most important hydrological factors, has a vital role in physical, ecological, social and economic processes. Accurate and reliable prediction and estimation of river discharge have therefore been widely considered by many researchers in different fields, such as surface water management, design of hydraulic structures, flood control and ecological studies, at spatial and temporal scales. In recent decades, different techniques for short-term and long-term estimation of hourly, daily, monthly and annual discharge have been developed. However, short-term estimation models are less sophisticated and more accurate. Various global and local algorithms have been widely used to estimate hydrologic variables. The current study attempts to use the lazy learning approach to evaluate the adequacy of input data in order to follow the variation of discharge and also to simulate next-day discharge in the Talar River in the Kasilian Basin, which is located in the north of Iran with an area of 66.75 km2. Lazy learning is a local linear modelling approach in which generalization beyond the training data is delayed until a query is made to the system, as opposed to eager learning, where the system tries to generalize from the training data before receiving queries. Materials and Methods: The current study was conducted in the Kasilian Basin, located in the north of Iran with an area of 66.75 km2. The main river of this basin joins the Talar River near Valicbon village and then exits the watershed. The hydrometric station located near Valicbon village is equipped with a Parshall flume and a limnograph, which can record river discharge of up to about 20 cubic meters per second. In this study, daily discharge data recorded at Valicbon station from 2002 to 2012 were used to estimate the discharge of 19 September 2012. The mean annual discharge of the river, calculated from the available data, was about 0.441 cubic meters per second. To
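Lazy learning as described, local linear fitting deferred until a query arrives, can be sketched as a k-nearest-neighbour local regression. This is illustrative only: the lagged-discharge data below are synthetic, and the station's actual inputs and settings are not reproduced:

```python
import numpy as np

def lazy_predict(X, y, x_query, k=10):
    """Lazy-learning prediction: fit a local linear model to the k
    nearest training points only when a query arrives."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    A = np.column_stack([np.ones(k), X[idx]])    # intercept + features
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return float(coef @ np.concatenate([[1.0], np.atleast_1d(x_query)]))

# Toy example: lagged discharge as predictor of next-day discharge,
# with an exactly linear (synthetic) relation q_next = 0.8 q_lag + 0.05.
rng = np.random.default_rng(0)
q_lag = rng.uniform(0.1, 2.0, size=(200, 1))
q_next = 0.8 * q_lag[:, 0] + 0.05
pred = lazy_predict(q_lag, q_next, np.array([1.0]), k=10)
# the local linear fit recovers the relation: pred is close to 0.85
```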
High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.
Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong
2018-08-01
This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose a dynamic filtering solution inspired by the video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.
Control-oriented reduced order modeling of dipteran flapping flight
Faruque, Imraan
Flying insects achieve flight stabilization and control in a manner that requires only small, specialized neural structures to perform the essential components of sensing and feedback, achieving unparalleled levels of robust aerobatic flight on limited computational resources. An engineering mechanism to replicate these control strategies could provide a dramatic increase in the mobility of small scale aerial robotics, but a formal investigation has not yet yielded tools that both quantitatively and intuitively explain flapping wing flight as an "input-output" relationship. This work uses experimental and simulated measurements of insect flight to create reduced order flight dynamics models. The framework presented here creates models that are relevant for the study of control properties. The work begins with automated measurement of insect wing motions in free flight, which are then used to calculate flight forces via an empirically-derived aerodynamics model. When paired with rigid body dynamics and experimentally measured state feedback, both the bare airframe and closed loop systems may be analyzed using frequency domain system identification. Flight dynamics models describing maneuvering about hover and cruise conditions are presented for example fruit flies (Drosophila melanogaster) and blowflies (Calliphorids). The results show that biologically measured feedback paths are appropriate for flight stabilization and sexual dimorphism is only a minor factor in flight dynamics. A method of ranking kinematic control inputs to maximize maneuverability is also presented, showing that the volume of reachable configurations in state space can be dramatically increased due to appropriate choice of kinematic inputs.
Efficient estimation of semiparametric copula models for bivariate survival data
Cheng, Guang
2014-01-01
A semiparametric copula model for bivariate survival data is characterized by a parametric copula model of dependence and nonparametric models of two marginal survival functions. Efficient estimation for the semiparametric copula model has been recently studied for the complete data case. When the survival data are censored, semiparametric efficient estimation has only been considered for some specific copula models such as the Gaussian copulas. In this paper, we obtain the semiparametric efficiency bound and efficient estimation for general semiparametric copula models for possibly censored data. We construct an approximate maximum likelihood estimator by approximating the log baseline hazard functions with spline functions. We show that our estimates of the copula dependence parameter and the survival functions are asymptotically normal and efficient. Simple consistent covariance estimators are also provided. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2013 Elsevier Inc.
Miao, Xijiang; Mukhopadhyay, Rishi; Valafar, Homayoun
2008-10-01
Advances in NMR instrumentation and pulse sequence design have resulted in easier acquisition of Residual Dipolar Coupling (RDC) data. However, computational and theoretical analysis of this type of data has continued to challenge the international community of investigators because of their complexity and rich information content. Contemporary use of RDC data has required a priori assignment, which significantly increases the overall cost of structural analysis. This article introduces a novel algorithm that utilizes unassigned RDC data acquired from multiple alignment media (nD-RDC, n ⩾ 3) for simultaneous extraction of the relative order tensor matrices and reconstruction of the interacting vectors in space. Estimation of the relative order tensors and reconstruction of the interacting vectors can be invaluable in a number of endeavors. An example application has been presented where the reconstructed vectors have been used to quantify the fitness of a template protein structure to the unknown protein structure. This work has other important direct applications such as verification of the novelty of an unknown protein and validation of the accuracy of an available protein structure model in drug design. More importantly, the presented work has the potential to bridge the gap between experimental and computational methods of structure determination.
Mathematical model of transmission network static state estimation
Directory of Open Access Journals (Sweden)
Ivanov Aleksandar
2012-01-01
Full Text Available In this paper the characteristics and capabilities of the power transmission network static state estimator are presented. A solution process for the mathematical model, which includes the measurement errors and their processing, is developed. To evaluate the difference between the general state estimation model and the fast decoupled state estimation model, both models are applied to an example and the derived results are compared.
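A static state estimator of this kind typically reduces to weighted least squares on a linearized measurement model z = Hx + e. A minimal sketch with an invented three-measurement example (not the paper's network or data):

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """Weighted least-squares state estimate for a linear(ized)
    measurement model z = H x + e with independent errors."""
    H = np.asarray(H, float)
    z = np.asarray(z, float)
    W = np.diag(1.0 / np.asarray(sigma, float) ** 2)  # W = R^{-1}
    G = H.T @ W @ H                                   # gain matrix
    x_hat = np.linalg.solve(G, H.T @ W @ z)
    return x_hat, z - H @ x_hat                       # estimate, residuals

# two states observed directly plus one redundant sum measurement
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.02, 0.98, 2.04])
x_hat, r = wls_state_estimate(H, z, sigma=[0.01, 0.01, 0.02])
```

The redundancy lets the estimator reconcile the slightly inconsistent measurements according to their accuracies.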
Parameter Estimation of Partial Differential Equation Models
Xun, Xiaolei; Cao, Jiguo; Mallick, Bani; Maity, Arnab; Carroll, Raymond J.
2013-01-01
PDEs used in practice have no analytic solutions, and can only be solved with numerical methods. Currently, methods for estimating PDE parameters require repeatedly solving PDEs numerically under thousands of candidate parameter values, and thus
Directory of Open Access Journals (Sweden)
F. Ahmadi
2016-10-01
by a linear boundary. In this method, the samples nearest to the decision boundary are called support vectors; these vectors define the equation of the decision boundary. Classic intelligent simulation algorithms such as artificial neural networks usually minimize the absolute error or the sum of squared errors over the training data, whereas SVM models use the structural error minimization principle (5). Results and Discussion: Based on the results of the performance evaluations using the RMSE and R criteria, both the SVM and ANFIS models had high accuracy in predicting the reference evapotranspiration of north-west Iran. From the results of Tables 6 and 8, it can be concluded that both models had similar performance and can deliver high accuracy in modelling with different inputs. The ANFIS model achieved its maximum accuracy using the maximum, minimum and average temperature, sunshine and wind speed (pattern M8), while the SVM model achieved its maximum performance with the M8 pattern at the Urmia and Sanandaj stations and with the M9 pattern at the other stations. At all stations (apart from Sanandaj station) the SVM model had higher accuracy and less error than the ANFIS model, but this difference is not remarkable, and the SVM model used more input parameters than the ANFIS model for predicting evapotranspiration. Conclusion: In this research, in order to predict monthly reference evapotranspiration, ANFIS and SVM models were employed using data collected over a period of 38 years (1973-2010) at six synoptic stations located in the north-west of Iran. First, the monthly evapotranspiration of a reference crop was estimated by the FAO-Penman-Monteith method for the selected stations as the output of the SVM and ANFIS models. Then a regression equation between the meteorological parameters affecting evapotranspiration was fitted and different input patterns for the models were determined. Results showed that relative humidity, as the least effective parameter, was deleted from the model input. Also in this paper
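The RMSE and R criteria used to rank the models are standard measures; for reference (generic definitions, not tied to the study's data):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def r_coef(obs, sim):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(obs, sim)[0, 1])

# a perfect simulation gives rmse 0; a perfectly linear one gives R = 1
```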
A Probabilistic Cost Estimation Model for Unexploded Ordnance Removal
National Research Council Canada - National Science Library
Poppe, Peter
1999-01-01
...) contaminated sites that the services must decontaminate. Existing models for estimating the cost of UXO removal often require a high level of expertise and provide only a point estimate for the costs...
A posteriori model validation for the temporal order of directed functional connectivity maps.
Beltz, Adriene M; Molenaar, Peter C M
2015-01-01
A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
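The white noise tests on one-step-ahead prediction errors can be illustrated with a Ljung-Box portmanteau statistic on the residual autocorrelations (a generic sketch; the study's exact test battery may differ, and the series here are simulated):

```python
import numpy as np

def ljung_box_q(resid, h=10):
    """Ljung-Box portmanteau statistic over the first h residual
    autocorrelations; large values reject the white-noise hypothesis."""
    e = np.asarray(resid, float) - np.mean(resid)
    n = e.size
    denom = e @ e
    lags = np.arange(1, h + 1)
    rho = np.array([(e[:-k] @ e[k:]) / denom for k in lags])
    return n * (n + 2) * np.sum(rho ** 2 / (n - lags))

rng = np.random.default_rng(1)
noise = rng.standard_normal(500)      # white residuals
ar1 = np.empty(500)
ar1[0] = 0.0
for t in range(1, 500):               # residuals with unmodeled lag-1 dependence
    ar1[t] = 0.7 * ar1[t - 1] + noise[t]

CHI2_95_DF10 = 18.307  # 95% chi-square critical value, 10 degrees of freedom
# ljung_box_q(noise) typically falls below the critical value,
# while ljung_box_q(ar1) exceeds it by a wide margin.
```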
A posteriori model validation for the temporal order of directed functional connectivity maps
Directory of Open Access Journals (Sweden)
Adriene M. Beltz
2015-08-01
Full Text Available A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
A Mathematical Modelling Approach to One-Day Cricket Batting Orders
Bukiet, Bruce; Ovens, Matthews
2006-01-01
While scoring strategies and player performance in cricket have been studied, there has been little published work about the influence of batting order with respect to One-Day cricket. We apply a mathematical modelling approach to compute efficiently the expected performance (runs distribution) of a cricket batting order in an innings. Among other applications, our method enables one to solve for the probability of one team beating another or to find the optimal batting order for a set of 11 players. The influence of defence and bowling ability can be taken into account in a straightforward manner. In this presentation, we outline how we develop our Markov Chain approach to studying the progress of runs for a batting order of non-identical players along the lines of work in baseball modelling by Bukiet et al., 1997. We describe the issues that arise in applying such methods to cricket, discuss ideas for addressing these difficulties and note limitations on modelling batting order for One-Day cricket. By performing our analysis on a selected subset of the possible batting orders, we apply the model to quantify the influence of batting order in a game of One Day cricket using available real-world data for current players. Key Points: Batting order does affect the expected runs distribution in one-day cricket. One-day cricket has fewer data points than baseball, thus extreme values have a greater effect on estimated probabilities. Dismissals are rare and probabilities are very small by comparison to baseball. The probability distribution for lower order batsmen is potentially skewed due to increased risk taking. Full enumeration of all possible line-ups is impractical using a single average computer. PMID:24357943
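The authors use a Markov chain over batting states; a much cruder Monte Carlo sketch below conveys the underlying idea that the runs distribution depends on the ordered lineup. All outcome probabilities are invented, and the two-batsmen-at-the-crease bookkeeping is abstracted away:

```python
import random

# per-ball outcomes: runs scored, or a dismissal
OUTCOMES = [0, 1, 2, 4, 6, "out"]

def simulate_innings(lineup, balls=300, rng=None):
    """One simulated innings for an ordered lineup of 11 batsmen.

    lineup holds hypothetical per-ball outcome probabilities per batsman;
    the current batsman is indexed (crudely) by wickets fallen.
    """
    rng = rng or random.Random()
    runs, wickets = 0, 0
    for _ in range(balls):          # 300 balls = 50 overs
        if wickets == 10:
            break
        probs = lineup[min(wickets, len(lineup) - 1)]
        outcome = rng.choices(OUTCOMES, weights=probs)[0]
        if outcome == "out":
            wickets += 1
        else:
            runs += outcome
    return runs

rng = random.Random(42)
strong = [[0.40, 0.35, 0.10, 0.08, 0.04, 0.03]] * 11  # invented probabilities
weak = [[0.60, 0.20, 0.03, 0.02, 0.01, 0.14]] * 11
mean_strong = sum(simulate_innings(strong, rng=rng) for _ in range(300)) / 300
mean_weak = sum(simulate_innings(weak, rng=rng) for _ in range(300)) / 300
# the lineup with stronger batsmen scores more on average
```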
Theory and Low-Order Modeling of Unsteady Airfoil Flows
Ramesh, Kiran
Unsteady flow phenomena are prevalent in a wide range of problems in nature and engineering. These include, but are not limited to, aerodynamics of insect flight, dynamic stall in rotorcraft and wind turbines, leading-edge vortices in delta wings, micro-air vehicle (MAV) design, gust handling and flow control. The most significant characteristics of unsteady flows are rapid changes in the circulation of the airfoil, apparent-mass effects, flow separation and the leading-edge vortex (LEV) phenomenon. Although experimental techniques and computational fluid dynamics (CFD) methods have enabled the detailed study of unsteady flows and their underlying features, a reliable and inexpensive loworder method for fast prediction and for use in control and design is still required. In this research, a low-order methodology based on physical principles rather than empirical fitting is proposed. The objective of such an approach is to enable insights into unsteady phenomena while developing approaches to model them. The basis of the low-order model developed here is unsteady thin-airfoil theory. A time-stepping approach is used to solve for the vorticity on an airfoil camberline, allowing for large amplitudes and nonplanar wakes. On comparing lift coefficients from this method against data from CFD and experiments for some unsteady test cases, it is seen that the method predicts well so long as LEV formation does not occur and flow over the airfoil is attached. The formation of leading-edge vortices (LEVs) in unsteady flows is initiated by flow separation and the formation of a shear layer at the airfoil's leading edge. This phenomenon has been observed to have both detrimental (dynamic stall in helicopters) and beneficial (high-lift flight in insects) effects. To predict the formation of LEVs in unsteady flows, a Leading Edge Suction Parameter (LESP) is proposed. This parameter is calculated from inviscid theory and is a measure of the suction at the airfoil's leading edge. It
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are
A dynamic neural field model of temporal order judgments.
Hecht, Lauren N; Spencer, John P; Vecera, Shaun P
2015-12-01
Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information. (c) 2015 APA, all rights reserved.
Vortex network community based reduced-order force model
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya; Taira, Kunihiko
2017-11-01
We characterize vortical wake interactions by utilizing network theory and cluster-based approaches, and develop a data-inspired unsteady force model. In the present work, the vortical interaction network is defined by nodes representing vortical elements and edges quantified by induced-velocity measures amongst the vortices. The full vorticity field is reduced to a finite number of vortical clusters based on a network community-detection algorithm, which serves as a basis for a skeleton network that captures the essence of the wake dynamics. We use this reduced representation of the wake to develop a data-inspired reduced-order force model that can predict unsteady fluid forces on the body. The overall formulation is demonstrated for laminar flows around a canonical bluff body and for stalled flow over an airfoil. We also show the robustness of the present network-based model against noisy data, which motivates applications to turbulent flows and experimental measurements. Supported by the National Science Foundation (Grant 1632003).
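The induced-velocity edge weighting described above can be illustrated with a minimal sketch. Assuming 2-D point vortices, the speed vortex j induces at the location of vortex i follows the Biot-Savart magnitude |Γ_j|/(2πr_ij); the vortex positions and strengths below are hypothetical, and the paper's exact edge measure may differ.

```python
import math

# Hypothetical point vortices: (x, y, circulation Gamma).
vortices = [(0.0, 0.0, 1.0), (1.0, 0.0, -0.5), (0.5, 1.0, 0.8)]

def induced_speed(gamma, r):
    """2-D Biot-Savart magnitude of the velocity induced at distance r."""
    return abs(gamma) / (2.0 * math.pi * r)

n = len(vortices)
# Directed edge weights: speed vortex j induces at the location of vortex i.
A = [[0.0] * n for _ in range(n)]
for i, (xi, yi, _) in enumerate(vortices):
    for j, (xj, yj, gj) in enumerate(vortices):
        if i != j:
            r = math.hypot(xi - xj, yi - yj)
            A[i][j] = induced_speed(gj, r)
```

Because circulations differ, the matrix is directed (A[i][j] ≠ A[j][i] in general), which is what makes community detection on such a wake network informative.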
Estimation of Stochastic Volatility Models by Nonparametric Filtering
DEFF Research Database (Denmark)
Kanaya, Shin; Kristensen, Dennis
2016-01-01
/estimated volatility process replacing the latent process. Our estimation strategy is applicable to both parametric and nonparametric stochastic volatility models, and can handle both jumps and market microstructure noise. The resulting estimators of the stochastic volatility model will carry additional biases...... and variances due to the first-step estimation, but under regularity conditions we show that these vanish asymptotically and our estimators inherit the asymptotic properties of the infeasible estimators based on observations of the volatility process. A simulation study examines the finite-sample properties...
Directory of Open Access Journals (Sweden)
Salvador Lucas
2015-12-01
Full Text Available Recent developments in termination analysis for declarative programs emphasize the use of appropriate models for the logical theory representing the program at stake as a generic approach to prove termination of declarative programs. In this setting, Order-Sorted First-Order Logic provides a powerful framework to represent declarative programs. It also provides a target logic to obtain models for other logics via transformations. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis, in particular in termination analysis of declarative programs. We use convex domains to give domains to the different sorts of an order-sorted signature; we interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra and arithmetic constraint solving to synthesize the models.
Directory of Open Access Journals (Sweden)
Enrico Zio
2008-01-01
Full Text Available In the present work, the uncertainties affecting the safety margins estimated from thermal-hydraulic code calculations are captured quantitatively by resorting to the order statistics and the bootstrap technique. The proposed framework of analysis is applied to the estimation of the safety margin, with its confidence interval, of the maximum fuel cladding temperature reached during a complete group distribution blockage scenario in a RBMK-1500 nuclear reactor.
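As a sketch of how order statistics and the bootstrap combine to give a safety margin with a confidence interval, the following uses synthetic peak cladding temperatures and an illustrative limit; the percentile level, limit value, and sample are assumptions, not values from the study.

```python
import random

random.seed(42)
# Hypothetical peak cladding temperatures (deg C) from 100 code runs.
temps = [random.gauss(900.0, 40.0) for _ in range(100)]
limit = 1200.0  # illustrative regulatory limit

def upper_95th(sample):
    """95th-percentile estimate via the sample order statistic."""
    s = sorted(sample)
    k = max(0, min(len(s) - 1, int(0.95 * len(s))))
    return s[k]

# Point estimate of the safety margin.
margin = limit - upper_95th(temps)

# Bootstrap the sampling distribution of the margin.
boots = []
for _ in range(2000):
    resample = [random.choice(temps) for _ in temps]
    boots.append(limit - upper_95th(resample))
boots.sort()
ci_low = boots[int(0.025 * len(boots))]
ci_high = boots[int(0.975 * len(boots))]
```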
Recursive estimation of high-order Markov chains: Approximation by finite mixtures
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav
2016-01-01
Roč. 326, č. 1 (2016), s. 188-201 ISSN 0020-0255 R&D Projects : GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords : Markov chain * Approximate parameter estimation * Bayesian recursive estimation * Adaptive systems * Kullback–Leibler divergence * Forgetting Subject RIV: BC - Control Systems Theory Impact factor: 4.832, year: 2016 http://library.utia.cas.cz/separaty/2015/AS/karny-0447119.pdf
Computational design of patterned interfaces using reduced order models
International Nuclear Information System (INIS)
Vattre, A.J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M.J.
2014-01-01
Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. (authors)
Basic first-order model theory in Mizar
Directory of Open Access Journals (Sweden)
Marco Bright Caminati
2010-01-01
Full Text Available The author has submitted to Mizar Mathematical Library a series of five articles introducing a framework for the formalization of classical first-order model theory.In them, Goedel's completeness and Lowenheim-Skolem theorems have also been formalized for the countable case, to offer a first application of it and to showcase its utility.This is an overview and commentary on some key aspects of this setup.It features exposition and discussion of a new encoding of basic definitions and theoretical gears needed for the task, remarks about the design strategies and approaches adopted in their implementation, and more general reflections about proof checking induced by the work done.
Robust estimation and moment selection in dynamic fixed-effects panel data models
Cizek, Pavel; Aquaro, Michele
Considering linear dynamic panel data models with fixed effects, existing outlier–robust estimators based on the median ratio of two consecutive pairs of first-differenced data are extended to higher-order differencing. The estimation procedure is thus based on many pairwise differences and their
Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model
Acquah, H. de-Graft; Onumah, E. E.
2014-01-01
Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare the alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
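The Jondrow et al. (JLMS) approach referenced above computes the conditional mean of the inefficiency term given the composed residual. A sketch for the normal/half-normal production frontier, where the residual is ε = v − u with v ~ N(0, σ_v²) and u half-normal, follows; the numeric inputs in the test are illustrative.

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def jlms_inefficiency(eps, sigma_u, sigma_v):
    """E[u | eps] for eps = v - u, u ~ N+(0, su^2), v ~ N(0, sv^2)."""
    sigma = math.sqrt(sigma_u ** 2 + sigma_v ** 2)
    lam = sigma_u / sigma_v
    sigma_star = sigma_u * sigma_v / sigma
    a = eps * lam / sigma
    return sigma_star * (phi(a) / (1.0 - Phi(a)) - a)

def technical_efficiency(eps, sigma_u, sigma_v):
    """TE = exp(-E[u | eps]), between 0 and 1."""
    return math.exp(-jlms_inefficiency(eps, sigma_u, sigma_v))
```

More negative residuals imply larger estimated inefficiency and hence lower technical efficiency.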
Rakoto, Virgile; Lognonné, Philippe; Rolland, Lucie; Coïsson, Pierdavide; Drilleau, Mélanie
2017-04-01
Large underwater earthquakes (Mw > 7) can transmit part of their energy to the surrounding ocean through large sea-floor motions, generating tsunamis that propagate over long distances. The forcing effect of tsunami waves on the atmosphere generates internal gravity waves, which produce detectable ionospheric perturbations when they reach the upper atmosphere. These perturbations are frequently observed in the total electron content (TEC) measured with multi-frequency Global Navigation Satellite System (GNSS) data (e.g., GPS, GLONASS). In this paper, we performed for the first time an inversion of the sea level anomaly from GPS TEC data, using a least squares (LSQ) inversion based on a normal-mode summation modeling technique. Using the 2012 Haida Gwaii tsunami in the far field as a test case, we showed that the peak-to-peak amplitude of the sea level anomaly inverted with this method has an error below 10 %. Nevertheless, we cannot invert the second wave arriving 20 minutes later. This second wave is generally explained by coastal reflection, which the normal-mode modeling does not take into account. Our technique is then applied to two other tsunamis: the 2006 Kuril Islands tsunami in the far field, and the 2011 Tohoku tsunami in the closer field. This demonstrates that inversion using a normal-mode approach is able to estimate fairly well the amplitude of the first arrivals of a tsunami. In the future, we plan to invert the TEC data in real time in order to retrieve the tsunami height.
Volatility estimation using a rational GARCH model
Directory of Open Access Journals (Sweden)
Tetsuya Takaishi
2018-03-01
Full Text Available The rational GARCH (RGARCH) model has been proposed as an alternative GARCH model that captures the asymmetric property of volatility. In addition to the previously proposed RGARCH model, we propose an alternative RGARCH model called the RGARCH-Exp model that is more stable when dealing with outliers. We measure the performance of the volatility estimation by a loss function calculated using realized volatility as a proxy for true volatility and compare the RGARCH-type models with other asymmetric-type models such as the EGARCH and GJR models. We conduct empirical studies of six stocks on the Tokyo Stock Exchange and find that volatility estimation using the RGARCH-type models outperforms the GARCH model and is comparable to other asymmetric GARCH models.
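For context, the plain GARCH(1,1) baseline and a realized-volatility loss of the kind used in such comparisons can be sketched as follows. This is the standard GARCH recursion, not the RGARCH specification itself; the parameter values and the squared-return volatility proxy are illustrative assumptions.

```python
import math
import random

random.seed(0)

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

def qlike_loss(h, rv):
    """QLIKE loss of variance forecasts h_t against a realized-variance proxy rv_t."""
    return sum(math.log(ht) + vt / ht for ht, vt in zip(h, rv)) / len(h)

# Hypothetical daily returns; squared returns serve as a crude realized proxy.
returns = [random.gauss(0.0, 0.01) for _ in range(500)]
rv = [r * r for r in returns]
h = garch11_variance(returns, omega=1e-6, alpha=0.05, beta=0.9)
loss = qlike_loss(h, rv)
```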
Estimation of curve number by DAWAST model
Energy Technology Data Exchange (ETDEWEB)
Kim, Tai Cheol; Park, Seung Ki; Moon, Jong Pil [Chungnam National University, Taejon (Korea, Republic of)
1997-10-31
Determining the effective rainfall is one of the most important factors in estimating the design flood hydrograph. The SCS curve number (CN) method has been frequently used to estimate the effective rainfall of the synthesized design flood hydrograph for hydraulic structures. However, caution is needed when applying the SCS-CN method, originally developed in the U.S.A., to watersheds in Korea, because the characteristics of Korean watersheds and cropping patterns, especially paddy-land cultivation, are quite different from those in the U.S.A. A new CN method has been introduced. The maximum storage capacity, herein defined as U{sub max}, can be calibrated from stream flow data and converted to a new CN-I for the driest soil moisture condition in the given watershed. Effective rainfall for the design flood hydrograph can then be estimated with curve numbers developed for watersheds in Korea. (author). 14 refs., 5 tabs., 3 figs.
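The SCS-CN effective-rainfall computation referenced above can be sketched directly from the standard relations S = 25400/CN − 254 (in mm) and Q = (P − I_a)²/(P − I_a + S) with initial abstraction I_a = 0.2S; the calibrated Korean curve numbers themselves are not reproduced here.

```python
def max_retention_mm(cn):
    """Potential maximum retention S (mm) from a curve number (0 < CN <= 100)."""
    return 25400.0 / cn - 254.0

def effective_rainfall_mm(p, cn, ia_ratio=0.2):
    """SCS-CN effective rainfall (direct runoff) Q for a storm depth p (mm)."""
    s = max_retention_mm(cn)
    ia = ia_ratio * s  # initial abstraction
    if p <= ia:
        return 0.0     # all rainfall abstracted before runoff begins
    return (p - ia) ** 2 / (p - ia + s)
```

For CN = 100 (impervious), S = 0 and all rainfall becomes runoff; for small storms below the initial abstraction, runoff is zero.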
The model for estimation production cost of embroidery handicraft
Nofierni; Sriwana, IK; Septriani, Y.
2017-12-01
The embroidery industry is a type of micro industry that produces embroidery handicrafts. These industries are emerging in some rural areas of Indonesia. Embroidery products, such as scarves and clothes, express the cultural value of a certain region. The owner of an enterprise must calculate the cost of production before deciding how many products to accept from a customer. A calculation approach to production cost analysis is needed to assess the feasibility of each incoming order. This study proposes the design of an expert system (ES) to improve production management in the embroidery industry. The model uses a fuzzy inference system to estimate production cost. The research was conducted based on a survey and knowledge acquisition from stakeholders of the embroidery handicraft supply chain at Bukittinggi, West Sumatera, Indonesia. The model takes fuzzy inputs for quality, design complexity, and required working hours, and its output is useful for managing production cost in embroidery production.
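A zero-order Sugeno fuzzy inference sketch of the kind of cost model described might look as follows; the membership breakpoints, rule base, and cost levels are invented placeholders, not the paper's calibrated system.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_cost(complexity, hours):
    """Zero-order Sugeno inference: rule firing strengths blend fixed cost
    levels. Complexity is on a 0-10 scale, hours is handwork time; the cost
    levels (in IDR) are illustrative placeholders."""
    low_c, high_c = tri(complexity, -1, 0, 6), tri(complexity, 4, 10, 11)
    low_h, high_h = tri(hours, -1, 0, 30), tri(hours, 20, 50, 51)
    rules = [
        (min(low_c, low_h), 100_000),    # simple and quick -> low cost
        (min(low_c, high_h), 250_000),
        (min(high_c, low_h), 300_000),
        (min(high_c, high_h), 500_000),  # complex and long -> high cost
    ]
    total_w = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total_w if total_w else 0.0
```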
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
Z. Meghnatisi; N. Nematollahi
2009-01-01
Let X_{i1}, ..., X_{in_i} be a random sample from a gamma distribution with known shape parameter ν_i > 0 and unknown scale parameter β_i > 0, i = 1, 2, satisfying 0 < β_1 ≤ β_2. We consider the class of mixed estimators for estimation of β_1 and β_2 under the reflected gamma loss function. It has been shown that the minimum risk equivariant estimator of β_i, i = 1, 2, which is admissible when no information on the ordering of the parameters is given, is inadmissible and dominated by a cla...
Haussaire, J.-M.; Bocquet, M.
2015-08-01
Bocquet and Sakov (2013) have introduced a low-order model based on the coupling of the chaotic Lorenz-95 model which simulates winds along a mid-latitude circle, with the transport of a tracer species advected by this zonal wind field. This model, named L95-T, can serve as a playground for testing data assimilation schemes with an online model. Here, the tracer part of the model is extended to a reduced photochemistry module. This coupled chemistry meteorology model (CCMM), the L95-GRS model, mimics continental and transcontinental transport and the photochemistry of ozone, volatile organic compounds and nitrogen oxides. Its numerical implementation is described. The model is shown to reproduce the major physical and chemical processes being considered. L95-T and L95-GRS are specifically designed and useful for testing advanced data assimilation schemes, such as the iterative ensemble Kalman smoother (IEnKS) which combines the best of ensemble and variational methods. These models provide useful insights prior to the implementation of data assimilation methods on larger models. We illustrate their use with data assimilation schemes on preliminary, yet instructive numerical experiments. In particular, online and offline data assimilation strategies can be conveniently tested and discussed with this low-order CCMM. The impact of observed chemical species concentrations on the wind field can be quantitatively estimated. The impacts of the wind chaotic dynamics and of the chemical species non-chaotic but highly nonlinear dynamics on the data assimilation strategies are illustrated.
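The wind component of the L95-T/L95-GRS models is the standard Lorenz-95 system, which can be integrated with a short RK4 sketch; the 40-site configuration and forcing F = 8 are the usual defaults, assumed here.

```python
def l95_tendency(x, forcing=8.0):
    """Lorenz-95 right-hand side: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic indices (winds along a mid-latitude circle)."""
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    """One fourth-order Runge-Kutta step."""
    k1 = l95_tendency(x, forcing)
    k2 = l95_tendency([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], forcing)
    k3 = l95_tendency([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], forcing)
    k4 = l95_tendency([xi + dt * ki for xi, ki in zip(x, k3)], forcing)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# 40 sites, slightly perturbed from the unstable equilibrium x_i = F.
state = [8.0] * 40
state[0] = 8.01
for _ in range(100):  # integrate 100 steps of dt = 0.01
    state = rk4_step(state, 0.01)
```

The chaotic dynamics amplify the initial 0.01 perturbation, which is precisely why the model is a useful playground for data assimilation.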
On order reduction in hydrogen isotope distillation models
International Nuclear Information System (INIS)
Sarigiannis, D.A.
1994-01-01
The design integration of the fuel processing system for the next generation fusion reactor plants (such as ITER and beyond) requires the enhancement of safety features related to the operation of the system. The current drive for inherent safety of hazardous chemical plants warrants the minimization of active toxic or radioactive inventories and the identification of process pathways with minimal risk of accidental or routine releases. New mathematical and numerical tools have been developed for the dynamic simulation and optimization of the safety characteristics related to tritium in all its forms in the fusion fuel processing system. The separation of hydrogen isotopes by cryogenic distillation is a key process therein, due to the importance of the separation performance for the quality of the fuel mixture and the on site inventory, the increased energy requirements for cryogenic operation, and the high order of mathematical complexity required for accurate models, able to predict the transient as well as the steady state behavior of the process. The modeling methodology described here is a part of a new dynamic simulation code that captures the inventory dynamics of all the species in the fusion fuel processing plant. The significant reduction of the computational effort and time required by this code will permit designers to easily explore a variety of design and technology options and assess their impact on the overall power plant safety
MacNab, Ying C
2016-08-01
This paper is concerned with multivariate conditional autoregressive models defined by linear combinations of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the method offers a systematic and unified approach for formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and across variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework that includes order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressives is introduced. We tackle computational challenges and present solutions that are integral for Bayesian analysis of these models. We also discuss two ways of computing the deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on a spatial lattice and illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.
Lag space estimation in time series modelling
DEFF Research Database (Denmark)
Goutte, Cyril
1997-01-01
The purpose of this article is to investigate some techniques for finding the relevant lag-space, i.e. input information, for time series modelling. This is an important aspect of time series modelling, as it conditions the design of the model through the regressor vector a.k.a. the input layer...
Estimation of landfill emission lifespan using process oriented modeling
International Nuclear Information System (INIS)
Ustohalova, Veronika; Ricken, Tim; Widmann, Renatus
2006-01-01
Depending on the particular pollutants emitted, landfills may require service activities lasting from hundreds to thousands of years. Flexible tools allowing long-term predictions of emissions are of key importance to determine the nature and expected duration of maintenance and post-closure activities. A highly capable option is prediction based on models verified by experiments, which is fast, flexible, and allows the comparison of various possible operation scenarios in order to find the most appropriate one. The intention of the presented work was to develop an experimentally verified multi-dimensional predictive model capable of quantifying and estimating the processes taking place in landfill sites, where a coupled process description allows precise time and space resolution. This constitutive 2-dimensional model is based on the macromechanical theory of porous media (TPM) for a saturated thermo-elastic porous body. The model was used to simulate simultaneously occurring processes: organic phase transition, gas emissions, heat transport, and settlement behavior on a long time scale for municipal solid waste deposited in a landfill. The relationships between the properties (composition, pore structure) of a landfill and the conversion and multi-phase transport phenomena inside it were experimentally determined. In this paper, we present both the theoretical background of the model and the results of the simulations at a single point as well as in a vertical landfill cross section
Korsten, Maarten J.; Houkes, Z.
1990-01-01
A method is given to estimate the geometry and motion of a moving body surface from image sequences. To this aim a parametric model of the surface is used, in order to reformulate the problem to one of parameter estimation. After linearization of the model standard linear estimation methods can be
Results and Error Estimates from GRACE Forward Modeling over Antarctica
Bonin, Jennifer; Chambers, Don
2013-04-01
Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
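The weighted least squares projection onto pre-determined basins, with added damping playing the role of process-noise constraints, can be sketched on a toy two-basin problem; the design matrix, weights, and damping value are illustrative assumptions, not GRACE kernels.

```python
# Toy setup: two "basins", five observation points. A[i][j] = 1 if point i
# lies in basin j (a crude averaging kernel); d holds observed mass change.
A = [[1, 0], [1, 0], [0, 1], [0, 1], [0, 1]]
d = [2.1, 1.9, -0.9, -1.1, -1.0]
w = [1.0, 1.0, 0.5, 0.5, 0.5]   # observation weights
alpha = 1e-3                     # damping, akin to added process noise

# Normal equations (A^T W A + alpha*I) m = A^T W d, solved for 2 unknowns.
ata = [[sum(w[i] * A[i][j] * A[i][k] for i in range(5)) for k in range(2)]
       for j in range(2)]
atd = [sum(w[i] * A[i][j] * d[i] for i in range(5)) for j in range(2)]
ata[0][0] += alpha
ata[1][1] += alpha

# Closed-form solve of the 2x2 system.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
m = [(ata[1][1] * atd[0] - ata[0][1] * atd[1]) / det,
     (ata[0][0] * atd[1] - ata[1][0] * atd[0]) / det]
```

The recovered basin means sit near +2 and −1, the values the synthetic observations were built around, slightly shrunk by the damping term.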
Estimation of DSGE Models under Diffuse Priors and Data-Driven Identification Constraints
DEFF Research Database (Denmark)
Lanne, Markku; Luoto, Jani
We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem of multimo......We propose a sequential Monte Carlo (SMC) method augmented with an importance sampling step for estimation of DSGE models. In addition to being theoretically well motivated, the new method facilitates the assessment of estimation accuracy. Furthermore, in order to alleviate the problem...... the properties of the estimation method, and shows how the problem of multimodal posterior distributions caused by parameter redundancy is eliminated by identification constraints. Out-of-sample forecast comparisons as well as Bayes factors lend support to the constrained model....
Czech Academy of Sciences Publication Activity Database
Omelchenko, Vadym; Kaňková, Vlasta
2015-01-01
Roč. 84, č. 2 (2015), s. 267-281 ISSN 0862-9544 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords : Stochastic programming problems * empirical estimates * light and heavy tailed distributions * quantiles Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2015/E/omelchenko-0454495.pdf
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti; Lazarov, Raytcho; Zhou, Zhi
2013-01-01
initial data, i.e., ν ∈ H^2(Ω) ∩ H^1_0(Ω) and ν ∈ L^2(Ω). For the lumped mass method, the optimal L^2-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally
Filtered Carrier Phase Estimator for High-Order QAM Optical Systems
DEFF Research Database (Denmark)
Rozental, Valery; Kong, Deming; Corcoran, Bill
2018-01-01
We investigate, using Monte Carlo simulations, the performance characteristics and limits of a low-complexity filtered carrier phase estimator (F-CPE) in terms of cycle slip occurrences and SNR penalties. In this work, the F-CPE algorithm has been extended to include modulation formats whose oute...
Extreme Quantile Estimation in Binary Response Models
1990-03-01
in Cancer Research," Biometrika, Vol. 66, pp. 307-316. Hsi, B.P. [1969], "The Multiple Sample Up-and-Down Method in Bioassay," Journal of the American...New Method of Estimation," Biometrika, Vol. 53, pp. 439-454. Wetherill, G.B. [1976], Sequential Methods in Statistics, London: Chapman and Hall. Wu, C.F.J.
Robust Estimation and Forecasting of the Capital Asset Pricing Model
G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2013-01-01
textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more
Robust Estimation and Forecasting of the Capital Asset Pricing Model
G. Bian (Guorui); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)
2010-01-01
textabstractIn this paper, we develop a modified maximum likelihood (MML) estimator for the multiple linear regression model with underlying student t distribution. We obtain the closed form of the estimators, derive the asymptotic properties, and demonstrate that the MML estimator is more
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2009-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Efficient estimation of an additive quantile regression model
Cheng, Y.; de Gooijer, J.G.; Zerom, D.
2010-01-01
In this paper two kernel-based nonparametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a viable alternative to the method of De Gooijer and Zerom (2003). By
Performances Of Estimators Of Linear Models With Autocorrelated ...
African Journals Online (AJOL)
The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...
Crash data modeling with a generalized estimator.
Ye, Zhirui; Xu, Yueru; Lord, Dominique
2018-05-11
The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristic of over-dispersion. On occasion, however, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.
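The dispersion behavior that motivates the GEC model is easy to diagnose from data via the sample variance-to-mean ratio; a minimal sketch (the tolerance threshold is an arbitrary choice, not from the paper):

```python
def dispersion_ratio(counts):
    """Sample variance-to-mean ratio: ~1 for Poisson-like (equi-dispersed)
    counts, > 1 for over-dispersion, < 1 for under-dispersion."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

def classify(counts, tol=0.1):
    """Crude label from the ratio, with an arbitrary tolerance band."""
    r = dispersion_ratio(counts)
    if r > 1 + tol:
        return "over-dispersed"
    if r < 1 - tol:
        return "under-dispersed"
    return "approximately equi-dispersed"
```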
Energy Technology Data Exchange (ETDEWEB)
Lepez, V
2002-12-01
The aim of this thesis is to build a statistical model of the size distribution of oil and gas fields in a given sedimentary basin, covering both the fields that exist in the subsoil and those which have already been discovered. Estimating all the parameters of the model, via estimation of the density of the observations by model selection of piecewise polynomials with penalized maximum likelihood techniques, makes it possible to provide estimates of the total number of fields yet to be discovered, by size class. We assume that the set of underground field sizes is an i.i.d. sample from an unknown population with a Levy-Pareto law with unknown parameter. The set of already discovered fields is a sub-sample without replacement from the previous one which is 'size-biased'. The associated inclusion probabilities are to be estimated. We prove that the probability density of the observations is the product of the underlying density and of an unknown weighting function representing the sampling bias. An arbitrary partition of the size interval being set (called a model), the analytical solutions of likelihood maximization enable us to estimate both the parameter of the underlying Levy-Pareto law and the weighting function, which is assumed to be piecewise constant and based upon the partition. We add a monotonicity constraint on the latter, taking into account the fact that the bigger a field, the higher its probability of being discovered. Horvitz-Thompson-like estimators finally give the conclusion. We then allow our partitions to vary inside several classes of models and prove a model selection theorem which aims at selecting the best partition within a class, in terms of both the Kullback and Hellinger risks of the associated estimator. We conclude with simulations and various applications to real data from sedimentary basins of four continents, in order to illustrate theoretical as well as practical aspects of our model. (author)
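The size-biased sampling and Horvitz-Thompson step can be illustrated with a small simulation: Pareto field sizes, a discovery probability that increases with size, and an HT estimate of the total field count from the discovered subsample alone. The inclusion-probability form and all numbers are illustrative assumptions, not the thesis's fitted quantities.

```python
import random

random.seed(7)

def pareto_sizes(n, alpha=1.2, xm=1.0):
    """Draw n field sizes from a Pareto law via inverse-transform sampling."""
    return [xm / (1.0 - random.random()) ** (1.0 / alpha) for _ in range(n)]

sizes = pareto_sizes(5000)

def inclusion_prob(s, scale=50.0):
    """Size-biased discovery: bigger fields are more likely to be found."""
    return min(1.0, s / scale)

discovered = [s for s in sizes if random.random() < inclusion_prob(s)]

# Horvitz-Thompson estimate of the TOTAL number of fields, computed only
# from the discovered subsample and the inclusion probabilities.
n_hat = sum(1.0 / inclusion_prob(s) for s in discovered)
```

Each discovered field is up-weighted by the inverse of its chance of discovery, so the estimator is unbiased for the true total of 5000 despite seeing only the biased subsample.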
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
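A tornado-diagram pass of the kind used as the first reduction technique can be sketched generically: move each parameter to its low/high value with the others held at base, record the output swing, and sort. The loss function and parameter ranges below are invented stand-ins, not UCERF3 branches.

```python
def tornado(model, base, ranges):
    """Swing of a scalar model output when each parameter is moved to its
    low/high value with all others held at their base values; returned
    sorted largest-swing first, as in a tornado diagram."""
    swings = []
    for name, (lo, hi) in ranges.items():
        out_lo = model({**base, name: lo})
        out_hi = model({**base, name: hi})
        swings.append((name, abs(out_hi - out_lo)))
    return sorted(swings, key=lambda t: t[1], reverse=True)

# Hypothetical portfolio-loss model dominated by ground-motion scaling 'gm'.
def loss(p):
    return p["gm"] ** 2 + 0.1 * p["rate"] + 0.01 * p["mmax"]

base = {"gm": 1.0, "rate": 1.0, "mmax": 8.0}
ranges = {"gm": (0.5, 2.0), "rate": (0.8, 1.2), "mmax": (7.5, 8.5)}
ranking = tornado(loss, base, ranges)
```

Parameters at the bottom of the ranking are candidates for fixing at their base values, which is how the logic tree gets trimmed.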
Estimating confidence intervals in predicted responses for oscillatory biological models.
St John, Peter C; Doyle, Francis J
2013-07-29
The dynamics of gene regulation play a crucial role in cellular control, allowing the cell to express the right proteins to meet changing needs. Some needs, such as correctly anticipating the day-night cycle, require complicated oscillatory features. In the analysis of gene regulatory networks, mathematical models are frequently used to understand how a network's structure enables it to respond appropriately to external inputs. These models typically consist of a set of ordinary differential equations, describing a network of biochemical reactions, and unknown kinetic parameters, chosen such that the model best captures experimental data. However, since a model's parameter values are uncertain, and since dynamic responses to inputs are highly parameter-dependent, it is difficult to assess the confidence associated with these in silico predictions. In particular, models with complex dynamics, such as oscillations, must be fit with computationally expensive global optimization routines and cannot take advantage of existing measures of identifiability. Despite the difficulty of modelling them mathematically, limit cycle oscillations play a key role in many biological processes, including cell cycling, metabolism, neuron firing, and circadian rhythms. In this study, we employ an efficient parameter estimation technique to enable a bootstrap uncertainty analysis for limit cycle models. Since the primary role of systems biology models is the insight they provide on responses to rate perturbations, we extend our uncertainty analysis to include first order sensitivity coefficients. Using a literature model of circadian rhythms, we show how predictive precision is degraded with decreasing sample points and increasing relative error. Additionally, we show how this method can be used for model discrimination by comparing the output identifiability of two candidate model structures to published literature data. Our method permits modellers of oscillatory systems to confidently
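The bootstrap idea behind the abstract above can be illustrated on a much simpler fit: estimate a rate constant k in y = exp(-k t) by log-linear least squares, then resample (t, y) pairs to obtain a percentile confidence interval. The model, noise level and sample size are illustrative stand-ins for the limit-cycle fits discussed in the paper.

```python
import math
import random

random.seed(1)

k_true = 0.7
ts = [0.1 * i for i in range(1, 21)]
ys = [math.exp(-k_true * t) + random.gauss(0.0, 0.02) for t in ts]

def fit_k(ts, ys):
    # linear least squares on log(y); slope of log(y) vs t is -k
    pairs = [(t, math.log(y)) for t, y in zip(ts, ys) if y > 0]
    n = len(pairs)
    st = sum(t for t, _ in pairs)
    sl = sum(l for _, l in pairs)
    stt = sum(t * t for t, _ in pairs)
    stl = sum(t * l for t, l in pairs)
    return -(n * stl - st * sl) / (n * stt - st * st)

k_hat = fit_k(ts, ys)

# pairs bootstrap: refit on resampled (t, y) pairs
boots = []
for _ in range(200):
    idx = [random.randrange(len(ts)) for _ in ts]
    boots.append(fit_k([ts[i] for i in idx], [ys[i] for i in idx]))
boots.sort()
ci_low, ci_high = boots[4], boots[194]  # ~95% percentile interval
```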
Applicability of models to estimate traffic noise for urban roads.
Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M
2015-01-01
Traffic noise is a highly relevant environmental impact in cities. Models to estimate traffic noise, in turn, can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are more appropriate to estimate traffic noise in urban areas, since several available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out in five arterial urban roads of a Brazilian city. Together with geometric measurements of lane width and distance from noise meter to lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented, to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimations on urban roads in a condition of continuous flow. Key issues to apply such models to urban roads are highlighted.
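The Leq metric used above aggregates fluctuating noise by averaging sound energy rather than decibel values, which is how measured and predicted levels should be compared. The sample levels below are illustrative, spanning the measured 69-79 dB(A) range.

```python
import math

# Energy-equivalent sound level from a set of instantaneous levels in dB(A):
# convert to energy, average, convert back.
levels_dBA = [69.0, 72.0, 75.0, 79.0]
leq = 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_dBA)
                        / len(levels_dBA))
```

Note that the result (about 75.3 dB(A) here) sits well above the arithmetic mean of the levels, because the loudest intervals dominate the energy average.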
Conditional density estimation using fuzzy GARCH models
Almeida, R.J.; Bastürk, N.; Kaymak, U.; Costa Sousa, da J.M.; Kruse, R.; Berthold, M.R.; Moewes, C.; Gil, M.A.; Grzegorzewski, P.; Hryniewicz, O.
2013-01-01
Abstract. Time series data exhibits complex behavior including non-linearity and path-dependency. This paper proposes a flexible fuzzy GARCH model that can capture different properties of data, such as skewness, fat tails and multimodality in one single model. Furthermore, additional information and
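A plain (non-fuzzy) GARCH(1,1) recursion already reproduces the fat tails and volatility clustering the abstract refers to; the fuzzy variant in the paper combines several such regimes. The simulation below is a minimal sketch with hypothetical parameters (alpha + beta < 1 for stationarity), not the authors' model.

```python
import math
import random

random.seed(2)

# GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
omega, alpha, beta, n = 0.1, 0.1, 0.8, 50000
var = omega / (1.0 - alpha - beta)  # start at the unconditional variance
returns = []
for _ in range(n):
    r = math.sqrt(var) * random.gauss(0.0, 1.0)
    returns.append(r)
    var = omega + alpha * r * r + beta * var

sample_var = sum(r * r for r in returns) / n
uncond_var = omega / (1.0 - alpha - beta)  # = 1.0 with these parameters
```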
DEFF Research Database (Denmark)
Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen
2011-01-01
Abstract This paper discusses the concept of using rank variation concerning the stakeholder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis described is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders. Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives (Hobbs and Meier 2000). Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness...
DEFF Research Database (Denmark)
Jensen, Anders Vestergaard; Barfod, Michael Bruhn; Leleur, Steen
This paper discusses the concept of using rank variation concerning the stakeholder prioritising of importance criteria for exploring the sensitivity of criteria weights in multi-criteria analysis (MCA). Thereby the robustness of the MCA-based decision support can be tested. The analysis described is based on the fact that when using MCA as a decision-support tool, questions often arise about the weighting (or prioritising) of the included criteria. This part of the MCA is seen as the most subjective part and could give reasons for discussion among the decision makers or stakeholders. Furthermore, the relative weights can make a large difference in the resulting assessment of alternatives [1]. Therefore it is highly relevant to introduce a procedure for estimating the importance of criteria weights. This paper proposes a methodology for estimating the robustness of weights used in additive utility...
Error Estimates for a Semidiscrete Finite Element Method for Fractional Order Parabolic Equations
Jin, Bangti
2013-01-01
We consider the initial boundary value problem for a homogeneous time-fractional diffusion equation with an initial condition ν(x) and a homogeneous Dirichlet boundary condition in a bounded convex polygonal domain Ω. We study two semidiscrete approximation schemes, i.e., the Galerkin finite element method (FEM) and the lumped mass Galerkin FEM, using piecewise linear functions. We establish error estimates that are almost optimal with respect to the data regularity, including the cases of smooth and nonsmooth initial data, i.e., ν ∈ H²(Ω) ∩ H₀¹(Ω) and ν ∈ L²(Ω). For the lumped mass method, the optimal L²-norm error estimate is valid only under an additional assumption on the mesh, which in two dimensions is known to be satisfied for symmetric meshes. Finally, we present some numerical results that give insight into the reliability of the theoretical study. © 2013 Society for Industrial and Applied Mathematics.
Modelling of diffusion from equilibrium diffraction fluctuations in ordered phases
International Nuclear Information System (INIS)
Arapaki, E.; Argyrakis, P.; Tringides, M.C.
2008-01-01
Measurements of the collective diffusion coefficient D_c at equilibrium are difficult because they are based on monitoring low-amplitude concentration fluctuations generated spontaneously, which are difficult to measure experimentally. A new experimental method has recently been used to measure time-dependent correlation functions from diffraction intensity fluctuations and was applied to measure thermal step fluctuations. The method has not yet been applied to measure superstructure intensity fluctuations in surface overlayers and to extract D_c. With Monte Carlo simulations we study equilibrium fluctuations in Ising lattice gas models with nearest-neighbor attractive and repulsive interactions. The extracted diffusion coefficients are compared to the ones obtained from equilibrium methods. The new results are in good agreement with the results from the other methods, i.e., D_c decreases monotonically with coverage Θ for attractive interactions and increases monotonically with Θ for repulsive interactions. Even the absolute value of D_c agrees well with the results obtained with the probe area method. These results confirm that this diffraction-based method is a novel, reliable way to measure D_c, especially within the ordered region of the phase diagram when the superstructure spot has large intensity.
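A stripped-down version of the Monte Carlo setup above is a 1D lattice gas with particle hops accepted by the Metropolis rule; the 2D models in the paper work the same way with a richer neighbourhood. Lattice size, coupling and temperature below are illustrative assumptions.

```python
import math
import random

random.seed(3)

# 1D lattice gas, nearest-neighbour coupling J (J < 0 attractive), Metropolis hops.
L, J, beta = 100, -1.0, 1.0
occ = [1] * 40 + [0] * 60  # coverage 0.4
random.shuffle(occ)

def site_energy(i):
    return J * occ[i] * (occ[(i - 1) % L] + occ[(i + 1) % L])

for _ in range(20000):
    i = random.randrange(L)
    j = (i + random.choice((-1, 1))) % L  # attempt a hop to a neighbour site
    if occ[i] == occ[j]:
        continue  # hop changes nothing unless one site is occupied, one empty
    e_old = site_energy(i) + site_energy(j)
    occ[i], occ[j] = occ[j], occ[i]
    e_new = site_energy(i) + site_energy(j)
    if random.random() >= math.exp(-beta * (e_new - e_old)):
        occ[i], occ[j] = occ[j], occ[i]  # reject: restore the configuration
```

Particle number (the coverage Θ) is conserved by construction, which is the key constraint when extracting D_c from such simulations.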
Estimation of a multivariate mean under model selection uncertainty
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2014-05-01
Full Text Available Model selection uncertainty occurs when we select a model based on one data set and subsequently apply it for statistical inference, because the "correct" model is not selected with certainty. When the selection and inference are based on the same dataset, additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
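The James-Stein result the abstract leans on is easy to verify by simulation: for a normal mean in dimension p >= 3, shrinking the raw observations toward zero beats the componentwise MLE in total squared error. Dimension, true mean and trial count below are illustrative assumptions.

```python
import random

random.seed(4)

# Positive-part James-Stein estimator vs the MLE for a p-dimensional normal mean.
p, trials = 20, 500
theta = [0.5] * p  # true mean vector
mle_err = js_err = 0.0
for _ in range(trials):
    x = [t + random.gauss(0.0, 1.0) for t in theta]
    s2 = sum(v * v for v in x)
    shrink = max(0.0, 1.0 - (p - 2) / s2)  # positive-part shrinkage factor
    js = [shrink * v for v in x]
    mle_err += sum((a - b) ** 2 for a, b in zip(x, theta))
    js_err += sum((a - b) ** 2 for a, b in zip(js, theta))
```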
A nonparametric mixture model for cure rate estimation.
Peng, Y; Dear, K B
2000-03-01
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
A simulation of water pollution model parameter estimation
Kibler, J. F.
1976-01-01
A parameter estimation procedure for a water pollution transport model is elaborated. A two-dimensional instantaneous-release shear-diffusion model serves as representative of a simple transport process. Pollution concentration levels are arrived at via modeling of a remote-sensing system. The remote-sensed data are simulated by adding Gaussian noise to the concentration level values generated via the transport model. Model parameters are estimated from the simulated data using a least-squares batch processor. Resolution, sensor array size, and number and location of sensor readings can be found from the accuracies of the parameter estimates.
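The simulate-add-noise-refit loop described above can be sketched with a 1-D instantaneous-release diffusion model standing in for the two-dimensional shear-diffusion model, and a grid search standing in for the least-squares batch processor. All values are assumptions for illustration.

```python
import math
import random

random.seed(5)

# Fit the diffusion coefficient D to simulated "remote-sensed" concentrations.
D_true, t, M = 2.0, 1.0, 1.0

def conc(x, D):  # 1-D instantaneous point release at x = 0, time t
    return M / math.sqrt(4.0 * math.pi * D * t) * math.exp(-x * x / (4.0 * D * t))

xs = [-4.0 + 0.2 * i for i in range(41)]          # sensor locations
data = [conc(x, D_true) + random.gauss(0.0, 0.005) for x in xs]  # noisy readings

def sse(D):  # sum of squared residuals for a candidate D
    return sum((conc(x, D) - y) ** 2 for x, y in zip(xs, data))

D_hat = min((0.01 * k for k in range(50, 400)), key=sse)  # least-squares over a grid
```

Repeating this with fewer sensors or larger noise shows directly how estimation accuracy depends on sensor array size, which is the trade-off the abstract studies.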
Directory of Open Access Journals (Sweden)
Szymszal J.
2013-09-01
Full Text Available It has been found that an area where one can look for significant reserves in procurement logistics is the rational management of raw material stocks. Currently, the main purpose of projects which increase the efficiency of inventory management is to rationalise all the activities in this area, taking into account and minimising at the same time the total inventory costs. The paper presents a method for optimising the inventory level of raw materials under foundry plant conditions using two different control models. The first model is based on the estimate of an optimal level of the minimum emergency stock of raw materials, giving information about the need for an order to be placed immediately and about the optimal size of consignments ordered after the minimum emergency level has been reached. The second model is based on the estimate of a maximum inventory level of raw materials and an optimal order cycle. Optimisation of the presented models has been based on the prior selection and use of rational methods for forecasting the time series of deliveries of a chosen auxiliary material (ceramic filters) to a casting plant, including forecasting the mean size of the delivered batch of products and its standard deviation.
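The two control quantities described above have standard textbook forms: a reorder (emergency) level built from expected lead-time demand plus safety stock, and an economic order quantity for the consignment size. The demand figures below are hypothetical, not the paper's foundry data.

```python
import math

# Hypothetical demand for a consumable such as ceramic filters.
mean_daily_demand, sd_daily_demand = 40.0, 10.0
lead_time_days, z_service = 5, 1.65  # ~95% service level

# Model 1: emergency (reorder) level = expected lead-time demand + safety stock
safety_stock = z_service * sd_daily_demand * math.sqrt(lead_time_days)
reorder_level = mean_daily_demand * lead_time_days + safety_stock

# Size of the consignment ordered once that level is reached: classic EOQ
annual_demand, order_cost, unit_holding_cost = 40.0 * 250, 100.0, 2.0
eoq = math.sqrt(2.0 * annual_demand * order_cost / unit_holding_cost)
```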
Vishwakarma, Vinod
Modified Modal Domain Analysis (MMDA) is a novel method for the development of a reduced-order model (ROM) of a bladed rotor. This method utilizes proper orthogonal decomposition (POD) of Coordinate Measurement Machine (CMM) data of blade geometries and sector analyses using ANSYS. For the first time, a ROM of a geometrically mistuned industrial-scale rotor (Transonic rotor) with a large Finite Element (FE) model is generated using MMDA. Two methods for estimating the mass and stiffness mistuning matrices are used: (a) exact computation from sector FE analysis, and (b) estimates based on POD mistuning parameters. Modal characteristics such as mistuned natural frequencies, mode shapes and forced harmonic response are obtained from the ROM for various cases, and results are compared with full rotor ANSYS analysis and other ROM methods such as Subset of Nominal Modes (SNM) and Fundamental Model of Mistuning (FMM). Accuracy of the MMDA ROM is demonstrated with variations in the number of POD features and geometric mistuning parameters. It is shown for the aforementioned case (b) that the high accuracy of the ROM found in previous work with the Academic rotor does not directly translate to the Transonic rotor. Reasons for this mismatch are investigated and attributed to higher mistuning in the Transonic rotor. Alternate solutions, such as estimation of sensitivities via least squares and interpolation of mass and stiffness matrices on manifolds, are developed and their results discussed. Statistics such as the mean and standard deviation of forced harmonic response peak amplitude are obtained from random permutations, and are shown to give similar results to Monte Carlo simulations. These statistics are obtained and compared for a 3-degree-of-freedom (DOF) lumped parameter model (LPM) of a rotor, the Academic rotor and the Transonic rotor. A state estimator based on the MMDA ROM and a Kalman filter is also developed for offline or online estimation of the harmonic forcing function from
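The POD step at the heart of a method like MMDA can be sketched with synthetic data: the SVD of a snapshot matrix yields an orthonormal basis, and truncating it gives reduced coordinates. Shapes and noise level below are assumptions with no connection to the actual CMM data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "blade geometry deviation" snapshots: three dominant modes + noise,
# so a rank-3 POD basis reconstructs the data almost exactly.
n_points, n_blades = 200, 24
modes = rng.normal(size=(n_points, 3))
coords_true = rng.normal(size=(3, n_blades))
snapshots = modes @ coords_true + 1e-6 * rng.normal(size=(n_points, n_blades))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
reduced = U[:, :r].T @ snapshots  # ROM coordinates of each blade (r x n_blades)
recon = U[:, :r] @ reduced        # reconstruction from r POD modes
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
```

The singular values `s` also show how many POD features to retain, the trade-off the abstract studies when varying the number of POD features.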
Estimating High-Dimensional Time Series Models
DEFF Research Database (Denmark)
Medeiros, Marcelo C.; Mendes, Eduardo F.
We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
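Under an orthonormal design, the adaptive LASSO reduces to coordinatewise soft-thresholding with data-dependent weights: coefficients with small initial estimates receive large penalties and are zeroed out, which is the mechanism behind the oracle property. The design, coefficients and tuning values below are illustrative assumptions, not the paper's simulation setup.

```python
import random

random.seed(9)

beta_true = [3.0, 1.5, 0.0, 0.0, 2.0, 0.0]
beta_init = [b + random.gauss(0.0, 0.1) for b in beta_true]  # stand-in OLS fit
lam, gamma = 0.3, 1.0

def soft(b, t):  # soft-thresholding operator
    if abs(b) <= t:
        return 0.0
    return (abs(b) - t) * (1.0 if b > 0 else -1.0)

# adaLASSO: per-coefficient threshold lam / |beta_init|^gamma
beta_ada = [soft(b0, lam / (abs(b0) ** gamma + 1e-12)) for b0 in beta_init]
selected = [i for i, b in enumerate(beta_ada) if b != 0.0]
```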
Optimal covariance selection for estimation using graphical models
Vichik, Sergey; Oshman, Yaakov
2011-01-01
We consider a problem encountered when trying to estimate a Gaussian random field using a distributed estimation approach based on Gaussian graphical models. Because of constraints imposed by estimation tools used in Gaussian graphical models, the a priori covariance of the random field is constrained to embed conditional independence constraints among a significant number of variables. The problem is, then: given the (unconstrained) a priori covariance of the random field, and the conditiona...
Estimating Lead (Pb) Bioavailability In A Mouse Model
Children are exposed to Pb through ingestion of Pb-contaminated soil. Soil Pb bioavailability is estimated using animal models or with chemically defined in vitro assays that measure bioaccessibility. However, bioavailability estimates in a large animal model (e.g., swine) can be...
Estimating Dynamic Equilibrium Models using Macro and Financial Data
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Posch, Olaf; van der Wel, Michel
We show that including financial market data at daily frequency, along with macro series at standard lower frequency, facilitates statistical inference on structural parameters in dynamic equilibrium models. Our continuous-time formulation conveniently accounts for the difference in observation...... of the estimators and estimate the model using 20 years of U.S. macro and financial data....
mathematical models for estimating radio channels utilization
African Journals Online (AJOL)
2017-08-08
Mathematical models for radio channels utilization assessment by real-time flows transfer in ... data transmission networks application having dynamic topology ... Journal of Applied Mathematics and Statistics, 56(2): 85–90.
Linear Regression Models for Estimating True Subsurface ...
Indian Academy of Sciences (India)
The objective is to minimize the processing time and computer memory required to carry out inversion ... to the mainland by two long bridges. ... In this approach, the model converges when the squared sum of the differences ...
Seo, Seongwon; Hwang, Yongwoo
1999-08-01
Construction and demolition (C&D) debris is generated at the site of various construction activities. However, the amount of the debris is usually so large that it is necessary to estimate the amount of C&D debris as accurately as possible for effective waste management and control in urban areas. In this paper, an effective estimation method using a statistical model was proposed. The estimation process was composed of five steps: estimation of the life span of buildings; estimation of the floor area of buildings to be constructed and demolished; calculation of individual intensity units of C&D debris; and estimation of the future C&D debris production. This method was also applied in the city of Seoul as an actual case, and the estimated amount of C&D debris in Seoul in 2021 was approximately 24 million tons. Of this total amount, 98% was generated by demolition, and the main components of debris were concrete and brick.
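The intensity-unit step in the estimation process above amounts to multiplying the floor area to be demolished by a debris intensity per unit area and summing over building types. The figures below are hypothetical, not the Seoul data.

```python
# Total C&D debris = sum over building types of (floor area x debris intensity).
demolition_area_m2 = {"residential": 1_200_000, "commercial": 800_000}
intensity_t_per_m2 = {"residential": 1.0, "commercial": 0.5}  # tonnes per m^2

debris_t = sum(demolition_area_m2[k] * intensity_t_per_m2[k]
               for k in demolition_area_m2)  # total debris, tonnes
```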
International Nuclear Information System (INIS)
Brunner, W.; Focht, D.D.
1984-01-01
The kinetics of mineralization of carbonaceous substrates has been explained by a deterministic model which is applicable to either growth or nongrowth conditions in soils. The mixed-order nature of the model does not require a priori decisions about reaction order, discontinuity period of lag or stationary phase, or correction for endogenous mineralization rates. The integrated equation is simpler than the integrated form of the Monod equation because of the following: (i) only two, rather than four, interdependent constants have to be determined by nonlinear regression analysis, (ii) substrate or product formation can be expressed explicitly as a function of time, (iii) biomass concentration does not have to be known, and (iv) the required initial estimate for the nonlinear regression analysis can be easily obtained from a linearized form rather than from an interval estimate of a differential equation. ¹⁴CO₂ evolution data from soil have been fitted to the model equation. All data except those from irradiated soil gave better fits by residual sum of squares (RSS) when growth in soil was assumed to be linear (RSS = 0.71) as opposed to exponential (RSS = 2.87). The underlying reasons for growth (exponential versus linear), no growth, and relative degradation rates of substrates are consistent with the basic mechanisms from which the model is derived. 21 references
The Interaction Between Control Rods as Estimated by Second-Order One-Group Perturbation Theory
International Nuclear Information System (INIS)
Persson, Rolf
1966-10-01
The interaction effect between control rods is an important problem for the reactivity control of a reactor. The approach of second-order one-group perturbation theory is shown to be attractive due to its simplicity. Formulas are derived for fully inserted control rods in a bare reactor. For a single rod we introduce a correction parameter b, which to a good approximation is proportional to the strength of the absorber. For two or more rods we introduce an interaction function g(r_ij), which is assumed to depend only on the distance r_ij between the rods. The theoretical expressions are correlated with the results of several experiments in R0, ZEBRA and the Aagesta reactor, as well as with more sophisticated calculations. The approximate formulas are found to give quite good agreement with exact values, but in the case of about 8 or more rods higher-order effects are likely to be important.
DEFF Research Database (Denmark)
Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2011-01-01
Distributed weather radar precipitation measurements are used as rainfall input for an urban drainage model, to simulate the runoff from a small catchment of Denmark. It is demonstrated how the Generalized Likelihood Uncertainty Estimation (GLUE) methodology can be implemented and used to estimate...
Ballistic model to estimate microsprinkler droplet distribution
Directory of Open Access Journals (Sweden)
Conceição Marco Antônio Fonseca
2003-01-01
Full Text Available Experimental determination of microsprinkler droplet diameters is difficult and time-consuming. This determination, however, can be achieved using ballistic models. The present study aimed to compare simulated and measured values of microsprinkler droplet diameters. Experimental measurements were made using the flour method, and simulations used a ballistic model adopted by the SIRIAS computational software. Drop diameters quantified in the experiment varied between 0.30 mm and 1.30 mm, while the simulated values varied between 0.28 mm and 1.06 mm. The greatest differences between simulated and measured values were registered at the greatest radial distance from the emitter. The model's performance in simulating microsprinkler drop distribution was classified as excellent.
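A ballistic droplet model of the kind referred to above integrates the equations of motion with air drag; smaller droplets have less mass per unit frontal area, so they decelerate faster and land closer to the emitter. The Euler integration below is a simplified stand-in for the SIRIAS model, with all physical parameters (drag coefficient, diameters, launch conditions) assumed for illustration.

```python
import math

g, rho_air, rho_water, Cd, dt = 9.81, 1.2, 1000.0, 0.45, 1e-4

def flight_range(v0, angle_deg, d):
    # Euler integration of a droplet of diameter d under gravity + quadratic drag
    mass = rho_water * math.pi * d ** 3 / 6
    area = math.pi * d ** 2 / 4
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x, y = 0.0, 0.0
    while y >= 0.0:  # integrate until the droplet returns to emitter height
        v = math.hypot(vx, vy)
        drag = 0.5 * rho_air * Cd * area * v * v / mass  # deceleration magnitude
        ax, ay = -drag * vx / v, -g - drag * vy / v
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

r_small = flight_range(16.0, 25.0, 0.0003)  # 0.3 mm droplet
r_large = flight_range(16.0, 25.0, 0.0013)  # 1.3 mm droplet
```

This reproduces the qualitative behaviour noted in the abstract: model error matters most at the largest radial distances, which are reached only by the largest droplets.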
Estimation of high orders of the perturbation theory in quantum mechanics
International Nuclear Information System (INIS)
Seznec, Reynald.
1978-01-01
First of all, the simple case of an integral of one variable (zero-dimensional model) is examined to illustrate the methods and concepts used. A system of n quantum oscillators with O(n) symmetry (spherical model) is then studied. A theory of perturbations around the saddle point dominating the functional integral is developed (perturbation theory around the instanton). The fluctuation propagator is calculated explicitly. Some properties of the corresponding Feynman diagrams are also investigated. Methods are proposed to generalize the calculations to more complicated potentials. As an example of application, the calculation of the first correction to the Lipatovian term is given for the spherical model [fr
Estimating and managing uncertainties in order to detect terrestrial greenhouse gas removals
International Nuclear Information System (INIS)
Rypdal, Kristin; Baritz, Rainer
2002-01-01
Inventories of emissions and removals of greenhouse gases will be used under the United Nations Framework Convention on Climate Change and the Kyoto Protocol to demonstrate compliance with obligations. During the negotiation process of the Kyoto Protocol it was a concern that uptake of carbon in forest sinks can be difficult to verify. The reasons for the large uncertainties are high temporal and spatial variability and the lack of representative estimation parameters. Additional uncertainties will be a consequence of definitions made in the Kyoto Protocol reporting. In the Nordic countries the national forest inventories will be very useful for estimating changes in carbon stocks. The main uncertainty lies in the conversion from changes in tradable timber to changes in total carbon biomass. The uncertainties in the emissions of non-CO₂ carbon from forest soils are particularly high. On the other hand, the removals reported under the Kyoto Protocol will only be a fraction of the total uptake and are not expected to constitute a high share of the total inventory. It is also expected that the Nordic countries will be able to implement a high-tier methodology. As a consequence, total uncertainties may not be extremely high. (Author)
Estimation of some stochastic models used in reliability engineering
International Nuclear Information System (INIS)
Huovinen, T.
1989-04-01
This work studies the estimation of some stochastic models used in reliability engineering. In reliability engineering, continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. The maximum likelihood method is used to estimate distributions from observed data, which may be either complete or censored. We consider models based on homogeneous Poisson processes, such as the gamma-Poisson and lognormal-Poisson models, for analysis of failure intensity. We also study a beta-binomial model for analysis of failure probability. The parameters of three models are estimated by the matching moments method and, in the case of the gamma-Poisson and beta-binomial models, also by the maximum likelihood method. A great deal of the mathematical and statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failure phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of marked point processes for modelling common cause failures in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
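For the simplest lifetime model above, the exponential distribution, the maximum-likelihood estimate from right-censored data has a closed form: the number of observed failures divided by the total time on test. The rate, sample size and censoring time below are illustrative assumptions.

```python
import random

random.seed(7)

# MLE of an exponential failure rate under Type I (time) censoring.
lam_true, n, censor_time = 0.5, 5000, 3.0
failures, total_time = 0, 0.0
for _ in range(n):
    t = random.expovariate(lam_true)
    if t <= censor_time:
        failures += 1
        total_time += t
    else:
        total_time += censor_time  # censored unit contributes its exposure only

lam_hat = failures / total_time  # lambda_hat = failures / total time on test
```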
Ordering kinetics in model systems with inhibited interfacial adsorption
DEFF Research Database (Denmark)
Willart, J.-F.; Mouritsen, Ole G.; Naudts, J.
1992-01-01
. The results are related to experimental work on ordering processes in orientational glasses. It is suggested that the experimental observation of very slow ordering kinetics in, e.g., glassy crystals of cyanoadamantane may be a consequence of low-temperature activated processes which ultimately lead...
Abnormal Waves Modelled as Second-order Conditional Waves
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2005-01-01
The paper presents results for the expected second order short-crested wave conditional of a given wave crest at a specific point in time and space. The analysis is based on the second order Sharma and Dean shallow water wave theory. Numerical results showing the importance of the spectral densit...
Cokriging model for estimation of water table elevation
International Nuclear Information System (INIS)
Hoeksema, R.J.; Clapp, R.B.; Thomas, A.L.; Hunley, A.E.; Farrow, N.D.; Dearstone, K.C.
1989-01-01
In geological settings where the water table is a subdued replica of the ground surface, cokriging can be used to estimate the water table elevation at unsampled locations on the basis of values of water table elevation and ground surface elevation measured at wells and at points along flowing streams. The ground surface elevation at the estimation point must also be determined. In the proposed method, separate models are generated for the spatial variability of the water table and ground surface elevation and for the dependence between these variables. After the models have been validated, cokriging or minimum variance unbiased estimation is used to obtain the estimated water table elevations and their estimation variances. For the Pits and Trenches area (formerly a liquid radioactive waste disposal facility) near Oak Ridge National Laboratory, water table estimation along a linear section, both with and without the inclusion of ground surface elevation as a statistical predictor, illustrates the advantages of the cokriging model.
Frequency-domain reduced order models for gravitational waves from aligned-spin compact binaries
International Nuclear Information System (INIS)
Pürrer, Michael
2014-01-01
Black-hole binary coalescences are one of the most promising sources for the first detection of gravitational waves. Fast and accurate theoretical models of the gravitational radiation emitted from these coalescences are highly important for the detection and extraction of physical parameters. Spinning effective-one-body models for binaries with aligned spins have been shown to be highly faithful, but are slow to generate and thus have not yet been used for parameter estimation (PE) studies. I provide a frequency-domain singular value decomposition-based surrogate reduced order model that is thousands of times faster for typical system masses and has a faithfulness mismatch of better than ∼0.1% with the original SEOBNRv1 model for advanced LIGO detectors. This model enables PE studies up to signal-to-noise ratios (SNRs) of 20, and even up to 50 for total masses below 50 M⊙. This paper discusses various choices for approximations and interpolation over the parameter space that can be made for reduced order models of spinning compact binaries, provides a detailed discussion of errors arising in the construction and assesses the fidelity of such models. (paper)
Weibull Parameters Estimation Based on Physics of Failure Model
DEFF Research Database (Denmark)
Kostandyan, Erik; Sørensen, John Dalsgaard
2012-01-01
Reliability estimation procedures are discussed for the example of fatigue development in solder joints using a physics of failure model. The accumulated damage is estimated based on a physics of failure model, the Rainflow counting algorithm and the Miner’s rule. A threshold model is used...... for degradation modeling and failure criteria determination. The time dependent accumulated damage is assumed linearly proportional to the time dependent degradation level. It is observed that the deterministic accumulated damage at the level of unity closely estimates the characteristic fatigue life of Weibull...
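The damage-accumulation step can be sketched with Miner's rule and a Basquin S-N curve; the constants `C` and `m` and the rainflow-counted cycle list below are made-up illustrative values, not solder-joint data.

```python
# Linear damage accumulation (Miner's rule) with an illustrative Basquin S-N
# curve. C and m are made-up values, not fitted solder-joint parameters.
C, m = 1.0e12, 3.0

def cycles_to_failure(stress_range):
    # Basquin relation: N(S) = C * S**(-m)
    return C * stress_range ** (-m)

def miner_damage(cycle_counts):
    # cycle_counts: list of (stress_range, n_cycles), e.g. from rainflow counting
    return sum(n / cycles_to_failure(s) for s, n in cycle_counts)

# One "duty cycle" of counted load reversals (stress ranges in MPa, counts).
duty = [(80.0, 1000), (120.0, 200), (200.0, 10)]
d_per_duty = miner_damage(duty)

# Failure is predicted when the accumulated damage reaches unity.
life_in_duty_cycles = 1.0 / d_per_duty
print(round(d_per_duty, 6), round(life_in_duty_cycles, 1))  # → 0.000938 1066.6
```

This is the deterministic core; the paper wraps such a damage model in a probabilistic (Weibull) description of the fatigue life.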
A Dynamic Travel Time Estimation Model Based on Connected Vehicles
Directory of Open Access Journals (Sweden)
Daxin Tian
2015-01-01
Full Text Available With advances in connected vehicle technology, dynamic route guidance systems are gradually becoming indispensable tools for drivers. Traditional route guidance models are designed to direct a vehicle along the shortest path from the origin to the destination without considering dynamic traffic information. In this paper a dynamic travel time estimation model is presented which can collect and distribute traffic data based on connected vehicles. To estimate the real-time travel time more accurately, a road link dynamic dividing algorithm is proposed. The efficiency of the model is confirmed by simulations, and the experimental results demonstrate the effectiveness of the travel time estimation method.
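As a sketch of the general idea (the paper's dynamic link-dividing algorithm is not reproduced; the link lengths and probe speeds below are hypothetical), the travel time can be estimated per link from probe-vehicle speed reports and summed along the route.

```python
# Hypothetical sketch of link-based travel time estimation from
# connected-vehicle (probe) speed reports.
def link_travel_time(length_m, probe_speeds_mps):
    # The harmonic mean of spot speeds approximates the space-mean speed,
    # which is the appropriate average for computing travel time.
    n = len(probe_speeds_mps)
    space_mean = n / sum(1.0 / v for v in probe_speeds_mps)
    return length_m / space_mean

def route_travel_time(links):
    # links: list of (length_m, [probe speeds in m/s]) per road link
    return sum(link_travel_time(length, vs) for length, vs in links)

route = [(500.0, [10.0, 12.5, 8.0]),   # link 1: three probe reports
         (300.0, [5.0, 4.0]),          # link 2: congested segment
         (800.0, [16.0])]              # link 3: single probe
tt = route_travel_time(route)
print(round(tt, 1))   # → 168.3 (seconds)
```

Dividing links dynamically, as the paper proposes, serves to keep the speed within each link approximately homogeneous so that this per-link averaging remains valid.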
Biochemical transport modeling, estimation, and detection in realistic environments
Ortner, Mathias; Nehorai, Arye
2006-05-01
Early detection and estimation of the spread of a biochemical contaminant are major issues for homeland security applications. We present an integrated approach combining the measurements given by an array of biochemical sensors with a physical model of the dispersion and statistical analysis to solve these problems and provide system performance measures. We approximate the dispersion model of the contaminant in a realistic environment through numerical simulations of reflected stochastic diffusions describing the microscopic transport phenomena due to wind and chemical diffusion using the Feynman-Kac formula. We consider arbitrary complex geometries and account for wind turbulence. Localizing the dispersive sources is useful for decontamination purposes and estimation of the cloud evolution. To solve the associated inverse problem, we propose a Bayesian framework based on a random field that is particularly powerful for localizing multiple sources with small amounts of measurements. We also develop a sequential detector using the numerical transport model we propose. Sequential detection allows on-line analysis and detects whether a change has occurred. We first focus on the formulation of a suitable sequential detector that overcomes the presence of unknown parameters (e.g. release time, intensity and location). We compute a bound on the expected delay before false detection in order to decide the threshold of the test. For a fixed false-alarm rate, we obtain the detection probability of a substance release as a function of its location and initial concentration. Numerical examples are presented for two real-world scenarios: an urban area and an indoor ventilation duct.
These model-based estimates use two surveys, the Behavioral Risk Factor Surveillance System (BRFSS) and the National Health Interview Survey (NHIS). The two surveys are combined using novel statistical methodology.
State reduced order models for the modelling of the thermal behavior of buildings
Energy Technology Data Exchange (ETDEWEB)
Menezo, Christophe; Bouia, Hassan; Roux, Jean-Jacques; Depecker, Patrick [Institute National de Sciences Appliquees de Lyon, Villeurbanne Cedex, (France). Centre de Thermique de Lyon (CETHIL). Equipe Thermique du Batiment]. E-mail: menezo@insa-cethil-etb.insa-lyon.fr; bouia@insa-cethil-etb.insa-lyon.fr; roux@insa-cethil-etb.insa-lyon.fr; depecker@insa-cethil-etb.insa-lyon.fr
2000-07-01
This work is devoted to the field of building physics and related to the reduction of heat conduction models. The aim is to enlarge the model libraries of heat and mass transfer codes through limiting the considerable dimensions reached by the numerical systems during the modelling process of a multizone building. We show that the balanced realization technique, specifically adapted to the coupling of reduced order models with the other thermal phenomena, turns out to be very efficient. (author)
Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model
Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami
2017-06-01
A regression model represents the relationship between independent and dependent variables. In logistic regression the dependent variable is categorical, and the model is used to calculate odds; when the dependent variable has ordered levels, the model is an ordinal logistic regression. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. Parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The research yields a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.
Diffuse solar radiation estimation models for Turkey's big cities
International Nuclear Information System (INIS)
Ulgen, Koray; Hepbasli, Arif
2009-01-01
A reasonably accurate knowledge of the availability of the solar resource at any place is required by solar engineers, architects, agriculturists, and hydrologists in many applications of solar energy such as solar furnaces, concentrating collectors, and interior illumination of buildings. For this purpose, in the past, various empirical models (or correlations) have been developed in order to estimate the solar radiation around the world. This study deals with diffuse solar radiation estimation models along with statistical test methods used to statistically evaluate their performance. Models used to predict monthly average daily values of diffuse solar radiation are classified in four groups as follows: (i) From the diffuse fraction or cloudiness index, function of the clearness index, (ii) From the diffuse fraction or cloudiness index, function of the relative sunshine duration or sunshine fraction, (iii) From the diffuse coefficient, function of the clearness index, and (iv) From the diffuse coefficient, function of the relative sunshine duration or sunshine fraction. Empirical correlations are also developed to establish a relationship between the monthly average daily diffuse fraction or cloudiness index (K d ) and monthly average daily diffuse coefficient (K dd ) with the monthly average daily clearness index (K T ) and monthly average daily sunshine fraction (S/S o ) for the three big cities by population in Turkey (Istanbul, Ankara and Izmir). Although the global solar radiation on a horizontal surface and sunshine duration have been measured by the Turkish State Meteorological Service (STMS) across the whole country since 1964, the diffuse solar radiation has not been measured. The eight new models for estimating the monthly average daily diffuse solar radiation on a horizontal surface in the three big cities are validated, and thus, the most accurate model is selected for guiding future projects. The new models are then compared with the 32 models available in the
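A group (i) correlation, the diffuse fraction as a function of the clearness index, can be sketched as a least-squares polynomial fit; the data points below are synthetic stand-ins, not the Turkish measurements used in the study.

```python
import numpy as np

# Illustrative group (i) correlation: diffuse fraction Kd as a polynomial in
# the clearness index KT. The data points are synthetic example values.
KT = np.array([0.30, 0.40, 0.50, 0.60, 0.70])
Kd = np.array([0.75, 0.62, 0.48, 0.35, 0.26])

# Second-order polynomial Kd = a0 + a1*KT + a2*KT**2, fitted by least squares.
a2, a1, a0 = np.polyfit(KT, Kd, 2)
model = np.poly1d([a2, a1, a0])

# Statistical indicators commonly used to rank such correlations:
residuals = model(KT) - Kd
mbe = residuals.mean()                      # mean bias error
rmse = np.sqrt((residuals ** 2).mean())     # root mean square error
print(round(float(rmse), 4))
```

The same recipe, with the sunshine fraction S/So as the regressor, gives the group (ii) and (iv) forms; model selection then compares indicators such as MBE and RMSE across candidates.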
Estimation of periodic solutions number of first-order differential equations
Ivanov, Gennady; Alferov, Gennady; Gorovenko, Polina; Sharlay, Artem
2018-05-01
The paper deals with first-order differential equations under the assumption that the right-hand side is a periodic function of time and continuous in the set of arguments. V.A. Pliss obtained the first results for a particular class of equations and showed that a number of theorems cannot be continued. In this paper, it was possible to reduce the restrictions on the degree of smoothness of the right-hand side of the equation and obtain upper and lower bounds on the number of possible periodic solutions.
NONLINEAR PLANT PIECEWISE-CONTINUOUS MODEL MATRIX PARAMETERS ESTIMATION
Directory of Open Access Journals (Sweden)
Roman L. Leibov
2017-09-01
Full Text Available This paper presents a nonlinear plant piecewise-continuous model matrix parameters estimation technique using nonlinear model time responses and a random search method. One of the piecewise-continuous model application areas is defined. The results of applying the proposed approach to the formation of an aircraft turbofan engine piecewise-continuous model are presented.
Estimation methods for nonlinear state-space models in ecology
DEFF Research Database (Denmark)
Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro
2011-01-01
The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists, however it is not always clear, which is the appropriate method to choose. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance...
Extreme gust wind estimation using mesoscale modeling
DEFF Research Database (Denmark)
Larsén, Xiaoli Guo; Kruger, Andries
2014-01-01
, surface turbulence characteristics. In this study, we follow a theory that is different from the local gust concept as described above. In this theory, the gust at the surface is non-local; it is produced by the deflection of air parcels flowing in the boundary layer and brought down to the surface...... from the Danish site Høvsøre help us to understand the limitation of the traditional method. Good agreement was found between the extreme gust atlases for South Africa and the existing map made from a limited number of measurements across the country. Our study supports the non-local gust theory. While...... through turbulent eddies. This process is modeled using the mesoscale Weather Forecasting and Research (WRF) model. The gust at the surface is calculated as the largest winds over a layer where the averaged turbulence kinetic energy is greater than the averaged buoyancy force. The experiments have been...
A Polarimetric First-Order Model of Soil Moisture Effects on the DInSAR Coherence
Directory of Open Access Journals (Sweden)
Simon Zwieback
2015-06-01
Full Text Available Changes in soil moisture between two radar acquisitions can impact the observed coherence in differential interferometry: both coherence magnitude |Υ| and phase Φ are affected. The influence on the latter potentially biases the estimation of deformations. These effects have been found to be variable in magnitude and sign, as well as dependent on polarization, as opposed to predictions by existing models. Such diversity can be explained when the soil is modelled as a half-space with spatially varying dielectric properties and a rough interface. The first-order perturbative solution achieves, upon calibration with airborne L-band data, median correlations ρ at HH polarization of 0.77 for the phase Φ, of 0.50 for |Υ|, and of 0.56 for the phase triplets. The predictions are sensitive to the choice of dielectric mixing model, in particular the absorptive properties; the differences between the mixing models are found to be partially compensatable by varying the relative importance of surface and volume scattering. However, for half of the agricultural fields the Hallikainen mixing model cannot reproduce the observed sensitivities of the phase to soil moisture. In addition, the first-order expansion does not predict any impact on the HV coherence, which is however empirically found to display similar sensitivities to soil moisture as the co-pol channels HH and VV. These results indicate that the first-order solution, while not able to reproduce all observed phenomena, can capture some of the more salient patterns of the effect of soil moisture changes on the HH and VV DInSAR signals. Hence it may prove useful in separating the deformations from the moisture signals, thus yielding improved displacement estimates or new ways for inferring soil moisture.
A case study to estimate costs using Neural Networks and regression based models
Directory of Open Access Journals (Sweden)
Nadia Bhuiyan
2012-07-01
Full Text Available Bombardier Aerospace's high-performance aircraft and services set the utmost standard for the aerospace industry. A case study in collaboration with Bombardier Aerospace is conducted in order to estimate the target cost of a landing gear. More precisely, the study uses both a parametric model and neural network models to estimate the cost of main landing gears, a major aircraft commodity. A comparative analysis between the parametric model and the neural network models is conducted in order to determine the most accurate method to predict the cost of a main landing gear. Several trials are presented for the design and use of the neural network model. The analysis for the case under study shows the flexibility in the design of the neural network model. Furthermore, the performance of the neural network model is deemed superior to the parametric models for this case study.
NanoSafer vs. 1.1 - Nanomaterial risk assessment using first order modeling
DEFF Research Database (Denmark)
Jensen, Keld A.; Saber, Anne T.; Kristensen, Henrik V.
2013-01-01
for safe use of MN based on first order modeling. The hazard and case specific exposure assessments are combined for an integrated risk evaluation and final control banding. Requested material data are typically available from the producers' technical information sheets. The hazard data are given...... using the work room dimensions, ventilation rate, powder use rate, duration, and calculated or given emission rates. The hazard scaling is based on direct assessment. The exposure band is derived from estimated acute and work day exposure levels divided by a nano OEL calculated from the OEL...... to construct user specific work scenarios for exposure assessment is considered a highly versatile approach....
Model-based estimation for dynamic cardiac studies using ECT
International Nuclear Information System (INIS)
Chiao, P.C.; Rogers, W.L.; Clinthorne, N.H.; Fessler, J.A.; Hero, A.O.
1994-01-01
In this paper, the authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (Emission Computed Tomography). The authors construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. The authors also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, model assumptions and potential uses of the joint estimation strategy are discussed
Model-based estimation for dynamic cardiac studies using ECT.
Chiao, P C; Rogers, W L; Clinthorne, N H; Fessler, J A; Hero, A O
1994-01-01
The authors develop a strategy for joint estimation of physiological parameters and myocardial boundaries using ECT (emission computed tomography). They construct an observation model to relate parameters of interest to the projection data and to account for limited ECT system resolution and measurement noise. The authors then use a maximum likelihood (ML) estimator to jointly estimate all the parameters directly from the projection data without reconstruction of intermediate images. They also simulate myocardial perfusion studies based on a simplified heart model to evaluate the performance of the model-based joint ML estimator and compare this performance to the Cramer-Rao lower bound. Finally, the authors discuss model assumptions and potential uses of the joint estimation strategy.
Directory of Open Access Journals (Sweden)
S. Vukotic
2016-08-01
Full Text Available Digital polynomial-based interpolation filters implemented using the Farrow structure are used in Digital Signal Processing (DSP) to calculate the signal between its discrete samples. The two basic design parameters for these filters are the number of polynomial segments defining the finite length of the impulse response, and the order of the polynomials in each segment. The complexity of the implementation structure and the frequency-domain performance depend on these two parameters. This contribution presents estimation formulae for the length and polynomial order of polynomial-based filters for various types of requirements, including attenuation in the stopband, width of the transition band, deviation in the passband, and weighting in the passband/stopband.
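A minimal sketch of a Farrow-structure interpolator, using the classic Catmull-Rom cubic segments (four polynomial segments, third-order polynomials) rather than a filter optimized for the requirements above:

```python
# Farrow-structure interpolator with Catmull-Rom cubic coefficients
# (4 segments, order 3); a textbook choice, not an optimized design.
# Output interpolates between samples x[n] and x[n+1], for 0 <= mu < 1.
def farrow_cubic(x, n, mu):
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Fixed branch filters: polynomial-in-mu coefficients c0..c3 computed
    # from the four surrounding samples.
    c0 = x0
    c1 = 0.5 * (x1 - xm1)
    c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
    # Horner evaluation is the defining trait of the Farrow structure:
    # the branch filters are fixed and only mu changes at runtime.
    return ((c3 * mu + c2) * mu + c1) * mu + c0

# Interpolating samples of t**2 at integer t; this interpolator is exact
# for polynomial signals up to degree two.
x = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]
print(farrow_cubic(x, 2, 0.5))   # → 6.25
```

Increasing the number of segments or the polynomial order improves stopband attenuation and transition-band width at the cost of more branch filters, which is exactly the trade-off the estimation formulae quantify.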
A Novel Method for Decoding Any High-Order Hidden Markov Model
Directory of Open Access Journals (Sweden)
Fei Ye
2014-01-01
Full Text Available This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithm framework for decoding hidden Markov models including the first-order hidden Markov model and any high-order hidden Markov model.
Parameter estimation in stochastic rainfall-runoff models
DEFF Research Database (Denmark)
Jonsdottir, Harpa; Madsen, Henrik; Palsson, Olafur Petur
2006-01-01
A parameter estimation method for stochastic rainfall-runoff models is presented. The model considered in the paper is a conceptual stochastic model, formulated in continuous-discrete state space form. The model is small and a fully automatic optimization is, therefore, possible for estimating all...... the parameter values are optimal for simulation or prediction. The data originates from Iceland and the model is designed for Icelandic conditions, including a snow routine for mountainous areas. The model demands only two input data series, precipitation and temperature and one output data series...
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
Rosenblum, Michael; van der Laan, Mark J
2010-04-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation.
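The special case can be illustrated numerically: for a main-terms Poisson working model containing only an intercept and a binary treatment indicator, the ML coefficient of treatment equals the marginal log rate ratio, even when the outcome is deliberately not Poisson-distributed. The data below are synthetic.

```python
import numpy as np

# Synthetic trial data; the outcome is NOT Poisson, so the working model
# is misspecified by construction.
rng = np.random.default_rng(42)
n = 2000
t = rng.integers(0, 2, n).astype(float)                      # treatment arm
y = rng.negative_binomial(5, 0.6, n) + rng.binomial(3, 0.4, n) * t

# Fit the main-terms Poisson working model by Newton-Raphson.
X = np.column_stack([np.ones(n), t])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                  # score of the Poisson log-likelihood
    hess = X.T @ (X * mu[:, None])         # observed information
    beta += np.linalg.solve(hess, grad)

# The fitted treatment coefficient coincides with the marginal log rate ratio.
log_rr = np.log(y[t == 1].mean() / y[t == 0].mean())
print(np.allclose(beta[1], log_rr))   # → True
```

This equality follows from the Poisson score equations, which force the fitted means in each arm to match the sample means; it is the mechanism behind the asymptotic unbiasedness claimed for the misspecified working model.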
Basic problems solving for two-dimensional discrete 3 × 4 order hidden markov model
International Nuclear Information System (INIS)
Wang, Guo-gang; Gan, Zong-liang; Tang, Gui-jin; Cui, Zi-guan; Zhu, Xiu-chang
2016-01-01
A novel model is proposed to overcome the shortages of the classical hypothesis of the two-dimensional discrete hidden Markov model. In the proposed model, the state transition probability depends on not only immediate horizontal and vertical states but also on immediate diagonal state, and the observation symbol probability depends on not only current state but also on immediate horizontal, vertical and diagonal states. This paper defines the structure of the model, and studies the three basic problems of the model, including probability calculation, path backtracking and parameters estimation. By exploiting the idea that the sequences of states on rows or columns of the model can be seen as states of a one-dimensional discrete 1 × 2 order hidden Markov model, several algorithms solving the three questions are theoretically derived. Simulation results further demonstrate the performance of the algorithms. Compared with the two-dimensional discrete hidden Markov model, there are more statistical characteristics in the structure of the proposed model, therefore the proposed model theoretically can more accurately describe some practical problems.
Directory of Open Access Journals (Sweden)
Jiyuan Zhang
2014-09-01
Full Text Available The application of headspace-solid phase microextraction (HS-SPME has been widely used in various fields as a simple and versatile method, yet challenging in quantification. In order to improve the reproducibility in quantification, a mathematical model with its root in psychological modeling and chemical reactor modeling was developed, describing the kinetic behavior of aroma active compounds extracted by SPME from two different food model systems, i.e., a semi-solid food and a liquid food. The model accounted for both adsorption and release of the analytes from SPME fiber, which occurred simultaneously but were counter-directed. The model had four parameters and their estimated values were found to be more reproducible than the direct measurement of the compounds themselves by instrumental analysis. With the relative standard deviations (RSD of each parameter less than 5% and root mean square error (RMSE less than 0.15, the model was proved to be a robust one in estimating the release of a wide range of low molecular weight acetates at three environmental temperatures i.e., 30, 40 and 60 °C. More insights of SPME behavior regarding the small molecule analytes were also obtained through the kinetic parameters and the model itself.
A unified model for transfer alignment at random misalignment angles based on second-order EKF
International Nuclear Information System (INIS)
Cui, Xiao; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo; Mei, Chunbo
2017-01-01
In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on the large misalignment angle suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS), by decomposing the attitude matrix of the latter. Based on the Rodriguez parameters, a unified alignment model in the inertial frame with the linear state-space equation and a second order nonlinear measurement equation are established, without making any assumptions about the misalignment angles. Furthermore, we employ the Taylor series expansions on the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles. (paper)
A unified model for transfer alignment at random misalignment angles based on second-order EKF
Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo
2017-04-01
In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small misalignment angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on the large misalignment angle suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS), by decomposing the attitude matrix of the latter. Based on the Rodriguez parameters, a unified alignment model in the inertial frame with the linear state-space equation and a second order nonlinear measurement equation are established, without making any assumptions about the misalignment angles. Furthermore, we employ the Taylor series expansions on the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. Statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determination of the lower limit of the representative sample size that guarantees a satisfactory reliability of generalization proved to be very important in order to achieve cost efficiency of the research. [Projects of the Ministry of Science of the Republic of Serbia, nos. OI-173011, TR-37002 and III-43007]
Asiri, Sharefa M.; Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2017-01-01
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Asiri, Sharefa M.
2017-08-22
In this paper, an on-line estimation algorithm of the source term in a first order hyperbolic PDE is proposed. This equation describes heat transport dynamics in concentrated solar collectors where the source term represents the received energy. This energy depends on the solar irradiance intensity and the collector characteristics affected by the environmental changes. Control strategies are usually used to enhance the efficiency of heat production; however, these strategies often depend on the source term which is highly affected by the external working conditions. Hence, efficient source estimation methods are required. The proposed algorithm is based on modulating functions method where a moving horizon strategy is introduced. Numerical results are provided to illustrate the performance of the proposed estimator in open and closed loops.
Directory of Open Access Journals (Sweden)
Christer Dalen
2017-10-01
Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.
DEFF Research Database (Denmark)
Kaplan, Sigal; Prato, Carlo Giacomo
2012-01-01
of 2011. Method: The current study investigates the underlying risk factors of bus accident severity in the United States by estimating a generalized ordered logit model. Data for the analysis are retrieved from the General Estimates System (GES) database for the years 2005–2009. Results: Results show...... that accident severity increases: (i) for young bus drivers under the age of 25; (ii) for drivers beyond the age of 55, and most prominently for drivers over 65 years old; (iii) for female drivers; (iv) for very high (over 65 mph) and very low (under 20 mph) speed limits; (v) at intersections; (vi) because......Introduction: Recent years have witnessed a growing interest in improving bus safety operations worldwide. While in the United States buses are considered relatively safe, the number of bus accidents is far from being negligible, triggering the introduction of the Motor-coach Enhanced Safety Act...
Model improves oil field operating cost estimates
International Nuclear Information System (INIS)
Glaeser, J.L.
1996-01-01
A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important. Incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume that operating expense stays fixed in dollar terms throughout the lifetime of a field, nor is it correct to assume that operating costs stay fixed on a dollar per barrel basis.
Explicit estimating equations for semiparametric generalized linear latent variable models
Ma, Yanyuan
2010-07-05
We study generalized linear latent variable models without requiring a distributional assumption of the latent variables. Using a geometric approach, we derive consistent semiparametric estimators. We demonstrate that these models have a property which is similar to that of a sufficient complete statistic, which enables us to simplify the estimating procedure and explicitly to formulate the semiparametric estimating equations. We further show that the explicit estimators have the usual root n consistency and asymptotic normality. We explain the computational implementation of our method and illustrate the numerical performance of the estimators in finite sample situations via extensive simulation studies. The advantage of our estimators over the existing likelihood approach is also shown via numerical comparison. We employ the method to analyse a real data example from economics. © 2010 Royal Statistical Society.
Estimating varying coefficients for partial differential equation models.
Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J
2017-09-01
Partial differential equations (PDEs) are used to model complex dynamical systems in multiple dimensions, and their parameters often have important scientific interpretations. In some applications, PDE parameters are not constant but can change depending on the values of covariates, a feature that we call varying coefficients. We propose a parameter cascading method to estimate varying coefficients in PDE models from noisy data. Our estimates of the varying coefficients are shown to be consistent and asymptotically normally distributed. The performance of our method is evaluated by a simulation study and by an empirical study estimating three varying coefficients in a PDE model arising from LIDAR data. © 2017, The International Biometric Society.
Low-order models of wave interactions in the transition to baroclinic chaos
Directory of Open Access Journals (Sweden)
W.-G. Früh
1996-01-01
Full Text Available A hierarchy of low-order models, based on the quasi-geostrophic two-layer model, is used to investigate complex multi-mode flows. The different models were used to study distinct types of nonlinear interactions, namely wave-wave interactions through resonant triads, and zonal flow-wave interactions. The coupling strength of individual triads is estimated using a phase-locking probability density function. The flow of primary interest is a strongly modulated amplitude vacillation, whose modulation is coupled to intermittent bursts of weaker wave modes. This flow was found to emerge in a discontinuous bifurcation directly from a steady wave solution. Two mechanisms were found to result in this flow, one involving resonant triads, and the other involving zonal flow-wave interactions together with a strong β-effect. The results will be compared with recent laboratory experiments on multi-mode baroclinic waves in a rotating annulus of fluid subjected to a horizontal temperature gradient.
Comparing estimates of genetic variance across different relationship models.
Legarra, Andres
2016-02-01
Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
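The Dk statistic described above (average self-relationship minus the average of all relationships) is straightforward to compute from any relationship matrix. A minimal numpy sketch, with illustrative function names not taken from the paper:

```python
import numpy as np

def dk_statistic(K):
    """Average self-relationship minus the average of all
    (self- and across-) relationships in the matrix K."""
    K = np.asarray(K, dtype=float)
    return np.mean(np.diag(K)) - np.mean(K)

def referenced_variance(sigma2_hat, K):
    """Scale an estimated variance component to the chosen
    reference population, as suggested in the abstract."""
    return sigma2_hat * dk_statistic(K)

# For unrelated individuals (identity matrix), Dk approaches 1
# as the population grows: 1 - 1/n.
K = np.eye(100)
print(dk_statistic(K))  # 0.99
```

Kernel or identity-by-state matrices with larger off-diagonal averages give Dk well below 1, which is exactly the overestimation the abstract warns about.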
Bilinear reduced order approximate model of parabolic distributed solar collectors
Elmetennani, Shahrazed; Laleg-Kirati, Taous-Meriem
2015-01-01
This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low
Information matrix estimation procedures for cognitive diagnostic models.
Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei
2018-03-06
Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
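The sandwich-type covariance mentioned above has the generic form A⁻¹BA⁻¹, with the observed information as "bread" and the score cross-product as "meat". A minimal generic sketch (not the CDM-specific estimator of the paper):

```python
import numpy as np

def sandwich_cov(scores, observed_info):
    """Generic sandwich covariance A^{-1} B A^{-1}:
    'bread' = inverse observed information,
    'meat'  = empirical cross-product of per-observation scores."""
    B = scores.T @ scores                  # cross-product information ("meat")
    A_inv = np.linalg.inv(observed_info)   # inverse observed information ("bread")
    return A_inv @ B @ A_inv

# Toy check: normal mean model with unit variance.
x = np.array([1.0, 2.0, 3.0])
mu_hat = x.mean()
scores = (x - mu_hat).reshape(-1, 1)   # per-observation score at the MLE
info = np.array([[len(x)]])            # observed information for mu
print(sandwich_cov(scores, info))      # [[2/9]]
```

Under correct model specification the meat and bread cancel asymptotically; under misspecification they do not, which is why only the sandwich form stays robust.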
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties: it is consistent and asymptotically unbiased as the sample size increases to infinity, and the resulting parameter estimates have smaller variance than those of competing statistical methods as the sample size grows. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.
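Maximum likelihood fitting of a two-component mixture is typically carried out with the EM algorithm. A minimal Gaussian-mixture sketch (the paper's exact model specification may differ; initialization from quantiles is an illustrative assumption):

```python
import numpy as np

def norm_pdf(x, m, s):
    """Univariate normal density."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_two_gaussians(x, iters=200):
    """EM for a two-component univariate Gaussian mixture.
    Returns (weight of component 0, means, standard deviations)."""
    w = 0.5
    mu = np.quantile(x, [0.25, 0.75]).astype(float)  # crude init
    sd = np.array([x.std(), x.std()])
    for _ in range(iters):
        # E-step: responsibilities of component 0
        p0 = w * norm_pdf(x, mu[0], sd[0])
        p1 = (1 - w) * norm_pdf(x, mu[1], sd[1])
        r = p0 / (p0 + p1)
        # M-step: weighted moment updates
        w = r.mean()
        mu[0] = (r * x).sum() / r.sum()
        mu[1] = ((1 - r) * x).sum() / (1 - r).sum()
        sd[0] = np.sqrt((r * (x - mu[0]) ** 2).sum() / r.sum())
        sd[1] = np.sqrt(((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum())
    return w, mu, sd

# Demo on two well-separated synthetic components.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
w, mu, sd = em_two_gaussians(x)
```

Each EM iteration increases the likelihood, which is what makes the procedure a practical route to the ML asymptotics the abstract invokes.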
Linear and nonlinear stability analysis in BWRs applying a reduced order model
Energy Technology Data Exchange (ETDEWEB)
Olvera G, O. A.; Espinosa P, G.; Prieto G, A., E-mail: omar_olverag@hotmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico)
2016-09-15
Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. This model represents qualitatively the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region, the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but it is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert-Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The methodology was validated with simulated signals of the studied model. (Author)
Linear and nonlinear stability analysis in BWRs applying a reduced order model
International Nuclear Information System (INIS)
Olvera G, O. A.; Espinosa P, G.; Prieto G, A.
2016-09-01
Boiling Water Reactor (BWR) stability studies are generally conducted through nonlinear reduced order models (Rom) employing various techniques such as bifurcation analysis and time domain numerical integration. One of the models used for these studies is the March-Leuba Rom. This model represents qualitatively the dynamic behavior of a BWR through one-point reactor kinetics, a one-node representation of the heat transfer process in fuel, and a two-node representation of the channel thermal hydraulics to account for the void reactivity feedback. Here, we study the effect of this higher order model on the overall stability of the BWR. The change in the stability boundaries is determined by evaluating the eigenvalues of the Jacobian matrix. The nonlinear model is also integrated numerically to show that in the nonlinear region, the system evolves to stable limit cycles when operating close to the stability boundary. We also applied a new technique based on Empirical Mode Decomposition (Emd) to estimate a parameter linked with stability in a BWR. This instability parameter is not exactly the classical Decay Ratio (Dr), but it is linked with it. The proposed method decomposes the analyzed signal into different levels or mono-component functions known as intrinsic mode functions (Imf). One or more of these modes can be associated with the instability problem in BWRs. By tracking the instantaneous frequencies (calculated through the Hilbert-Huang Transform (HHT)) and the autocorrelation function (Acf) of the Imf linked to instability, the proposed parameter can be estimated. The methodology was validated with simulated signals of the studied model. (Author)
Estimating Structural Models of Corporate Bond Prices in Indonesian Corporations
Directory of Open Access Journals (Sweden)
Lenny Suardi
2014-08-01
Full Text Available This paper applies maximum likelihood (ML) approaches to implementing the structural model of corporate bonds, as suggested by Li and Wong (2008), in Indonesian corporations. Two structural models, the extended Merton and the Longstaff & Schwartz (LS) models, are used in determining prices, yields, yield spreads and probabilities of default. ML estimation is used to determine the volatility of firm value. Since firm value is an unobserved variable, Duan (1994) suggested that the first step of ML estimation is to derive the likelihood function for equity as the option on the firm value. The second step is to find the parameters, such as the drift and volatility of firm value, that maximize this function. The firm value itself is extracted by equating the pricing formula to the observed equity prices. Equity, total liabilities, bond price data and the firm's parameters (firm value, volatility of firm value, and default barrier) are substituted into the extended Merton and LS bond pricing formulas in order to value the corporate bonds. These models are applied to a sample of 24 bond prices in Indonesian corporations during the period 2001-2005, based on the criteria of Eom, Helwege and Huang (2004). The equity and bond price data were obtained from the Indonesia Stock Exchange for firms that issued equity and provided regular financial statements within this period. The result shows that both models, on average, underestimate the bond prices and overestimate the yields and yield spreads.
Estimating cardiovascular disease incidence from prevalence: a spreadsheet based model
Directory of Open Access Journals (Sweden)
Xue Feng Hu
2017-01-01
Full Text Available Abstract Background Disease incidence and prevalence are both core indicators of population health. Incidence is generally not as readily accessible as prevalence. Cohort studies and electronic health record systems are two major ways to estimate disease incidence. The former is time-consuming and expensive; the latter is not available in most developing countries. Alternatively, mathematical models can be used to estimate disease incidence from prevalence. Methods We proposed and validated a method to estimate the age-standardized incidence of cardiovascular disease (CVD), with prevalence data from successive surveys and mortality data from empirical studies. Hallett's method, designed for estimating HIV infections in Africa, was modified to estimate the incidence of myocardial infarction (MI) in the U.S. population and the incidence of heart disease in the Canadian population. Results Model-derived estimates were in close agreement with observed incidence from cohort studies and population surveillance systems. The method correctly captured the trend in incidence given sufficient waves of cross-sectional surveys. The estimated rate of MI decline in the U.S. population was in accordance with the literature. The method was superior to a closed-cohort approach for estimating the trend of population cardiovascular disease incidence. Conclusion It is possible to estimate CVD incidence accurately at the population level from cross-sectional prevalence data. This method has the potential to be used for age- and sex-specific incidence estimates, or to be expanded to other chronic conditions.
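The idea of backing incidence out of successive prevalence surveys can be illustrated with a one-step cohort relation. This is a deliberate simplification, not Hallett's method or the authors' spreadsheet model; `md` and `mh` are assumed interval mortality risks for diseased and disease-free individuals:

```python
def prevalence_next(p1, i, md, mh):
    """Prevalence at the next survey, given incidence i and
    interval mortality risks for diseased (md) and healthy (mh)."""
    diseased = p1 * (1 - md) + (1 - p1) * i * (1 - mh)
    healthy = (1 - p1) * (1 - i) * (1 - mh)
    return diseased / (diseased + healthy)

def incidence_from_prevalence(p1, p2, md, mh):
    """Invert the one-step relation above to recover incidence."""
    s = p1 * (1 - md)          # diseased who survive the interval
    h = (1 - p1) * (1 - mh)    # disease-free who survive the interval
    return (p2 * (s + h) - s) / h

# Round trip: simulate prevalence forward, then recover the incidence.
p2 = prevalence_next(0.10, 0.02, 0.05, 0.01)
print(round(incidence_from_prevalence(0.10, p2, 0.05, 0.01), 6))  # 0.02
```

With differential mortality (md > mh), prevalence can fall even while incidence is flat, which is exactly why naive prevalence differences are a poor incidence proxy.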
Relaxation approximations to second-order traffic flow models by high-resolution schemes
International Nuclear Information System (INIS)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-01-01
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed into a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
Asymptotics for Estimating Equations in Hidden Markov Models
DEFF Research Database (Denmark)
Hansen, Jørgen Vinsløv; Jensen, Jens Ledet
Results on asymptotic normality for the maximum likelihood estimate in hidden Markov models are extended in two directions. The stationarity assumption is relaxed, which allows for a covariate process influencing the hidden Markov process. Furthermore a class of estimating equations is considered...
Online State Space Model Parameter Estimation in Synchronous Machines
Directory of Open Access Journals (Sweden)
Z. Gallehdari
2014-06-01
The suggested approach is evaluated for a sample synchronous machine model. Estimated parameters are tested for different inputs at different operating conditions. The effect of noise is also considered in this study. Simulation results show that the proposed approach provides good accuracy for parameter estimation.
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
2002-01-01
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Parameter Estimation for a Computable General Equilibrium Model
DEFF Research Database (Denmark)
Arndt, Channing; Robinson, Sherman; Tarp, Finn
We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...
Person Appearance Modeling and Orientation Estimation using Spherical Harmonics
Liem, M.C.; Gavrila, D.M.
2013-01-01
We present a novel approach for the joint estimation of a person's overall body orientation, 3D shape and texture, from overlapping cameras. Overall body orientation (i.e. rotation around torso major axis) is estimated by minimizing the difference between a learned texture model in a canonical
Inverse Gaussian model for small area estimation via Gibbs sampling
African Journals Online (AJOL)
We present a Bayesian method for estimating small area parameters under an inverse Gaussian model. The method is extended to estimate small area parameters for finite populations. The Gibbs sampler is proposed as a mechanism for implementing the Bayesian paradigm. We illustrate the method by application to ...
Performances of estimators of linear auto-correlated error model ...
African Journals Online (AJOL)
The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
Nonparametric volatility density estimation for discrete time models
Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.
2005-01-01
We consider discrete time models for asset prices with a stationary volatility process. We aim at estimating the multivariate density of this process at a set of consecutive time instants. A Fourier-type deconvolution kernel density estimator based on the logarithm of the squared process is proposed
Jacobian projection reduced-order models for dynamic systems with contact nonlinearities
Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.
2018-02-01
In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced order models, the interface (nonlinear) degrees of freedom must still be solved in full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.
Parameter Estimates in Differential Equation Models for Population Growth
Winkel, Brian J.
2011-01-01
We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
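A gradient-search fit of the logistic model can be sketched with a damped Gauss-Newton iteration. The data below are synthetic, not the historical 1930s microbial series, and all names are illustrative:

```python
import numpy as np

def logistic(t, K, r, p0):
    """Logistic growth: P(t) = K / (1 + (K/p0 - 1) e^{-rt})."""
    return K / (1 + (K / p0 - 1) * np.exp(-r * t))

def fit_gauss_newton(f, t, y, theta, iters=100):
    """Damped Gauss-Newton least squares with a numerical Jacobian."""
    theta = np.array(theta, dtype=float)

    def sse_at(th):
        resid = y - f(t, *th)
        return (resid ** 2).sum()

    for _ in range(iters):
        resid = y - f(t, *theta)
        J = np.empty((t.size, theta.size))
        for j in range(theta.size):
            d = np.zeros_like(theta)
            d[j] = 1e-6 * max(1.0, abs(theta[j]))
            J[:, j] = (f(t, *(theta + d)) - f(t, *(theta - d))) / (2 * d[j])
        step, *_ = np.linalg.lstsq(J, resid, rcond=None)
        # Step halving: only accept a step that reduces the SSE.
        lam, best = 1.0, (resid ** 2).sum()
        while not (sse_at(theta + lam * step) < best) and lam > 1e-8:
            lam /= 2
        theta += lam * step
    return theta

# Synthetic data: carrying capacity K=1000, growth rate r=0.9, initial size p0=10.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 25)
y = logistic(t, 1000.0, 0.9, 10.0) + rng.normal(0, 5, t.size)
K_hat, r_hat, p0_hat = fit_gauss_newton(logistic, t, y, (800.0, 0.5, 5.0))
```

The Gauss-Newton step is always a descent direction for the sum of squares, so the halving loop guarantees monotone progress from a reasonable starting guess.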
Review Genetic prediction models and heritability estimates for ...
African Journals Online (AJOL)
edward
2015-05-09
May 9, 2015 ... Instead, through stepwise inclusion of type traits in the PH model, the .... Great Britain uses a bivariate animal model for all breeds, ... Štípková, 2012) and then applying linear models to the combined datasets with the ..... multivariate analyses, it is difficult to use indicator traits to estimate longevity early in life ...
Parameter estimation of electricity spot models from futures prices
Aihara, ShinIchi; Bagchi, Arunabha; Imreizeeq, E.S.N.; Walter, E.
We consider a slight perturbation of the Schwartz-Smith model for the electricity futures prices and the resulting modified spot model. Using the martingale property of the modified price under the risk neutral measure, we derive the arbitrage free model for the spot and futures prices. We estimate
Estimating the Competitive Storage Model with Trending Commodity Prices
Gouel , Christophe; LEGRAND , Nicolas
2017-01-01
We present a method to estimate jointly the parameters of a standard commodity storage model and the parameters characterizing the trend in commodity prices. This procedure allows the influence of a possible trend to be removed without restricting the model specification, and allows model and trend selection based on statistical criteria. The trend is modeled deterministically using linear or cubic spline functions of time. The results show that storage models with trend are always preferred ...
Modeling and estimation of a low degree geopotential model from terrestrial gravity data
Pavlis, Nikolaos K.
1988-01-01
The development of appropriate modeling and adjustment procedures for the estimation of harmonic coefficients of the geopotential, from surface gravity data was studied, in order to provide an optimum way of utilizing the terrestrial gravity information in combination solutions currently developed at NASA/Goddard Space Flight Center, for use in the TOPEX/POSEIDON mission. The mathematical modeling was based on the fundamental boundary condition of the linearized Molodensky boundary value problem. Atmospheric and ellipsoidal corrections were applied to the surface anomalies. Terrestrial gravity solutions were found to be in good agreement with the satellite ones over areas which are well surveyed (gravimetrically), such as North America or Australia. However, systematic differences between the terrestrial only models and GEMT1, over extended regions in Africa, the Soviet Union, and China were found. In Africa, gravity anomaly differences on the order of 20 mgals and undulation differences on the order of 15 meters, over regions extending 2000 km in diameter, occur. Comparisons of the GEMT1 implied undulations with 32 well distributed Doppler derived undulations gave an RMS difference of 2.6 m, while corresponding comparison with undulations implied by the terrestrial solution gave RMS difference on the order of 15 m, which implies that the terrestrial data in that region are substantially in error.
Functional Mixed Effects Model for Small Area Estimation.
Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou
2016-09-01
Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.
Development of simple kinetic models and parameter estimation for ...
African Journals Online (AJOL)
PANCHIGA
2016-09-28
Sep 28, 2016 ... estimation for simulation of recombinant human serum albumin ... and recombinant protein production by P. pastoris without requiring complex models. Key words: ..... SDS-PAGE and showed the same molecular size as.
COPS model estimates of LLEA availability near selected reactor sites
International Nuclear Information System (INIS)
Berkbigler, K.P.
1979-11-01
The COPS computer model has been used to estimate local law enforcement agency (LLEA) officer availability in the neighborhood of selected nuclear reactor sites. The results of these analyses are presented both in graphic and tabular form in this report
Censored rainfall modelling for estimation of fine-scale extremes
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection discussed.
Empirical model for estimating the surface roughness of machined ...
African Journals Online (AJOL)
Empirical model for estimating the surface roughness of machined ... as well as surface finish is one of the most critical quality measure in mechanical products. ... various cutting speed have been developed using regression analysis software.
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from a model's marginal likelihood and prior probability. This heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome it, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for the alternative conceptual models in a numerical experiment on a synthetic groundwater model. Since BMA predictions depend on the model posterior weights (or marginal likelihoods), this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating the conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: the marginal likelihoods repeatedly estimated by TIE show significantly less variability than those from the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
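The arithmetic and harmonic mean estimators mentioned above reduce to simple averages of sampled likelihoods; a log-space sketch (worked in logs for numerical stability; thermodynamic integration is omitted as it needs a tempering schedule):

```python
import numpy as np

def log_ame(log_liks_prior):
    """Arithmetic mean estimator: average likelihood over samples
    drawn from the prior (log-sum-exp for stability)."""
    m = log_liks_prior.max()
    return m + np.log(np.mean(np.exp(log_liks_prior - m)))

def log_hme(log_liks_post):
    """Harmonic mean estimator: harmonic mean of likelihoods over
    posterior samples (known to be unstable in practice)."""
    neg = -log_liks_post
    m = neg.max()
    return -(m + np.log(np.mean(np.exp(neg - m))))

def bma_weights(log_mls, log_priors):
    """Posterior model probabilities from log marginal likelihoods
    and log prior model probabilities."""
    lw = np.asarray(log_mls) + np.asarray(log_priors)
    lw -= lw.max()
    w = np.exp(lw)
    return w / w.sum()
```

Both estimators agree when the likelihood is flat over the samples; they diverge badly when the prior and posterior are dissimilar, which is the variability the abstract reports for HME-type estimators.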
Temporal Aggregation in First Order Cointegrated Vector Autoregressive models
DEFF Research Database (Denmark)
Milhøj, Anders; la Cour, Lisbeth Funding
2011-01-01
with the frequency of the data. We also introduce a graphical representation that will prove useful as an additional informational tool for deciding the appropriate cointegration rank of a model. In two examples based on models of time series of different grades of gasoline, we demonstrate the usefulness of our...
Calculus for cognitive scientists higher order models and their analysis
Peterson, James K
2016-01-01
This book offers a self-study program on how mathematics, computer science and science can be profitably and seamlessly intertwined. This book focuses on two variable ODE models, both linear and nonlinear, and highlights theoretical and computational tools using MATLAB to explain their solutions. It also shows how to solve cable models using separation of variables and the Fourier Series.
Partial-order reduction for GPU model checking
Neele, T.S.; Wijs, A.J.; Bošnački, D.; van de Pol, J.C.
2016-01-01
Model checking using GPUs has seen increased popularity over the last years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose
Higher-Order Hamiltonian Model for Unidirectional Water Waves
Bona, J. L.; Carvajal, X.; Panthee, M.; Scialom, M.
2018-04-01
Formally second-order correct, mathematical descriptions of long-crested water waves propagating mainly in one direction are derived. These equations are analogous to the first-order approximations of KdV- or BBM-type. The advantage of these more complex equations is that their solutions corresponding to physically relevant initial perturbations of the rest state may be accurate on a much longer timescale. The initial value problem for the class of equations that emerges from our derivation is then considered. A local well-posedness theory is straightforwardly established by a contraction mapping argument. A subclass of these equations possess a special Hamiltonian structure that implies the local theory can be continued indefinitely.
Neutrino masses and their ordering: global data, priors and models
Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.
2018-03-01
We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering or Inverted Ordering, with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parameterization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the
Optimal difference-based estimation for partially linear models
Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun
2017-01-01
Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve this goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Second, we propose a new estimator for the residual variance by fitting a linear regression to some difference-based estimators. Our proposed estimator achieves the asymptotically optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
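The simplest member of the family of difference sequences discussed above is the first-order ("Rice") estimator, which can be sketched in a few lines. This is a generic illustration of difference-based residual variance estimation, not the authors' optimal-sequence or cross-validated method; the trend function, noise level, and sample size below are made up:

```python
import numpy as np

def rice_variance_estimator(y):
    """First-order difference-based estimator of the residual variance:
    successive differences cancel a slowly varying trend, so
    E[(y_i - y_{i-1})^2] ≈ 2 * sigma^2."""
    d = np.diff(y)
    return np.sum(d**2) / (2 * (len(y) - 1))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)  # true sigma^2 = 0.09
print(round(rice_variance_estimator(y), 3))
```

The bias of this estimator grows with the roughness of the trend, which is exactly the regime where the abstract reports the largest gains for the proposed estimator.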
Simultaneous Parameters Identifiability and Estimation of an E. coli Metabolic Network Model
Directory of Open Access Journals (Sweden)
Kese Pontes Freitas Alberton
2015-01-01
Full Text Available This work proposes a procedure for simultaneous parameter identifiability and estimation in metabolic networks, in order to overcome difficulties associated with the lack of experimental data and the large number of parameters, a common scenario in the modeling of such systems. As a case study, the complex real problem of parameter identifiability of the Escherichia coli K-12 W3110 dynamic model was investigated; the model is composed of 18 ordinary differential equations and 35 kinetic rates, containing 125 parameters. With the procedure, the model fit was improved for most of the measured metabolites, with 58 parameters estimated, including 5 unknown initial conditions. The results indicate that the simultaneous parameter identifiability and estimation approach in metabolic networks is appealing, since the model could be fitted to most of the measured metabolites even when important measurements of intracellular metabolites and good initial parameter estimates were not available.
Estimation and prediction under local volatility jump-diffusion model
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In portfolio optimization and in risk hedging with options, the value of an option is evaluated using a volatility model, and various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations; combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better...
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
A generalized cellular automata approach to modeling first order ...
Indian Academy of Sciences (India)
... inhibitors deforming the allosteric site or inhibitors changing the structure of active ... Cell-based models with discrete state variables, such as Cellular Automata ... capture the essential features of a discrete real system, consisting of space, ...
A generalized cellular automata approach to modeling first order ...
Indian Academy of Sciences (India)
system, consisting of space, time and state, structured with simple local rules without ... Sensitivity analysis of a stochastic cellular automata model. 413 ..... Baetens J M and De Baets B 2011 Design and parameterization of a stochastic cellular.
Stripe order from the perspective of the Hubbard model
Energy Technology Data Exchange (ETDEWEB)
Devereaux, Thomas Peter
2018-03-01
A microscopic understanding of the strongly correlated physics of the cuprates must account for the translational and rotational symmetry breaking that is present across all cuprate families, commonly in the form of stripes. Here we investigate emergence of stripes in the Hubbard model, a minimal model believed to be relevant to the cuprate superconductors, using determinant quantum Monte Carlo (DQMC) simulations at finite temperatures and density matrix renormalization group (DMRG) ground state calculations. By varying temperature, doping, and model parameters, we characterize the extent of stripes throughout the phase diagram of the Hubbard model. Our results show that including the often neglected next-nearest-neighbor hopping leads to the absence of spin incommensurability upon electron-doping and nearly half-filled stripes upon hole-doping. The similarities of these findings to experimental results on both electron and hole-doped cuprate families support a unified description across a large portion of the cuprate phase diagram.
Reduced Order Models for Dynamic Behavior of Elastomer Damping Devices
Morin, B.; Legay, A.; Deü, J.-F.
2016-09-01
In the context of passive damping, various mechanical systems from the space industry use elastomer components (shock absorbers, silent blocks, flexible joints...). The material of these devices has frequency, temperature and amplitude dependent characteristics. The associated numerical models, using viscoelastic and hyperelastic constitutive behaviour, may become computationally too expensive during a design process. The aim of this work is to propose efficient reduced viscoelastic models of rubber devices. The first step is to choose an accurate material model that represent the viscoelasticity. The second step is to reduce the rubber device finite element model to a super-element that keeps the frequency dependence. This reduced model is first built by taking into account the fact that the device's interfaces are much more rigid than the rubber core. To make use of this difference, kinematical constraints enforce the rigid body motion of these interfaces reducing the rubber device model to twelve dofs only on the interfaces (three rotations and three translations per face). Then, the superelement is built by using a component mode synthesis method. As an application, the dynamic behavior of a structure supported by four hourglass shaped rubber devices under harmonic loads is analysed to show the efficiency of the proposed approach.
Offline estimation of decay time for an optical cavity with a low pass filter cavity model.
Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C
2012-08-01
This Letter presents offline estimation results for the decay-time constant for an experimental Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The cavity dynamics are modeled in terms of a low pass filter (LPF) with unity DC gain. This model is used by an extended Kalman filter (EKF) along with the recorded light intensity at the output of the cavity in order to estimate the decay-time constant. The estimation results using the LPF cavity model are compared to those obtained using the quadrature model for the cavity presented in previous work by Kallapur et al. The estimation process derived using the LPF model comprises two states as opposed to three states in the quadrature model. When considering the EKF, this means propagating two states and a (2×2) covariance matrix using the LPF model, as opposed to propagating three states and a (3×3) covariance matrix using the quadrature model. This gives the former model a computational advantage over the latter and leads to faster execution times for the corresponding EKF. It is shown in this Letter that the LPF model for the cavity with two filter states is computationally more efficient, converges faster, and is hence a more suitable method than the three-state quadrature model presented in previous work for real-time estimation of the decay-time constant for the cavity.
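The Letter's estimator is an EKF built on the low-pass-filter cavity model; as a point of reference, the decay-time constant of a noiseless-enough ring-down trace can also be recovered by a naive log-linear least-squares fit. The sketch below is that baseline only, not the EKF, and the ring-down parameters (tau = 10 µs, 1% multiplicative noise) are hypothetical:

```python
import numpy as np

# Hypothetical ring-down trace: I(t) = exp(-t / tau) with weak
# multiplicative noise (parameters are not from the Letter).
tau_true = 10e-6
t = np.linspace(0.0, 50e-6, 200)
rng = np.random.default_rng(1)
intensity = np.exp(-t / tau_true) * (1 + rng.normal(scale=0.01, size=t.size))

# Baseline estimate: regress log-intensity on time; the slope is -1/tau.
slope, _ = np.polyfit(t, np.log(intensity), 1)
tau_hat = -1.0 / slope
print(f"estimated tau = {tau_hat * 1e6:.2f} us")
```

An EKF-based estimator such as the one described above would instead update its decay-time estimate recursively as each intensity sample arrives, which is what makes the two-state LPF model's lower per-step cost matter for real-time use.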
Reduced Order Modeling of Combustion Instability in a Gas Turbine Model Combustor
Arnold-Medabalimi, Nicholas; Huang, Cheng; Duraisamy, Karthik
2017-11-01
Hydrocarbon fuel based propulsion systems are expected to remain relevant in aerospace vehicles for the foreseeable future. Design of these devices is complicated by combustion instabilities. The capability to model and predict these effects at reduced computational cost is a requirement for both design and control of these devices. This work focuses on computational studies on a dual swirl model gas turbine combustor in the context of reduced order model development. Full fidelity simulations are performed utilizing URANS and Hybrid RANS-LES with finite rate chemistry. Following this, data decomposition techniques are used to extract a reduced basis representation of the unsteady flow field. These bases are first used to identify sensor locations to guide experimental interrogations and controller feedback. Following this, initial results on developing a control-oriented reduced order model (ROM) will be presented. The capability of the ROM will be further assessed based on different operating conditions and geometric configurations.
The independent loss model with ordered insertions for the evolution of CRISPR spacers.
Baumdicker, F; Huebner, A M I; Pfaffelhuber, P
2018-02-01
Today, the CRISPR (clustered regularly interspaced short palindromic repeats) region within bacterial and archaeal genomes is known to encode an adaptive immune system. We rely on previous results on the evolution of the CRISPR arrays, which led to the ordered independent loss model, introduced by Kupczok and Bollback (2013). When focusing on the spacers (between the repeats), new elements enter a CRISPR array at rate θ at the leader end of the array, while all spacers present are lost at rate ρ along the phylogeny relating the sample. Within this model, we compute the distribution of distances of spacers which are present in all arrays in a sample of size n. We use these results to estimate the loss rate ρ from spacer array data for n=2 and n=3. Copyright © 2017 Elsevier Inc. All rights reserved.
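The generative side of the ordered independent loss model is easy to simulate on a single lineage: insertions at rate θ at the leader end, independent loss of each spacer at rate ρ. The sketch below is a Gillespie-style simulation under arbitrary assumed rates and time horizon; it does not reproduce the paper's spacer-distance distributions or the estimators of ρ for samples of size n = 2 and n = 3:

```python
import random

def simulate_crispr(theta, rho, t_max, seed=0):
    """Gillespie simulation of the ordered independent loss model on one
    lineage: new spacers enter at rate theta at the leader end, and each
    spacer currently present is lost independently at rate rho."""
    random.seed(seed)
    t, array, next_id = 0.0, [], 0
    while True:
        rate = theta + rho * len(array)
        t += random.expovariate(rate)
        if t > t_max:
            return array
        if random.random() < theta / rate:
            array.insert(0, next_id)  # insertion at the leader end
            next_id += 1
        else:
            array.pop(random.randrange(len(array)))  # uniform independent loss

arr = simulate_crispr(theta=5.0, rho=0.5, t_max=200.0)
print(len(arr))  # stationary mean array length is theta / rho = 10
```

Because insertions happen only at the leader end, the surviving spacers always appear in the order they were acquired, which is the property the model's distance computations rely on.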
Estimation of Nonlinear Dynamic Panel Data Models with Individual Effects
Directory of Open Access Journals (Sweden)
Yi Hu
2014-01-01
Full Text Available This paper suggests a generalized method of moments (GMM based estimation for dynamic panel data models with individual specific fixed effects and threshold effects simultaneously. We extend Hansen’s (Hansen, 1999 original setup to models including endogenous regressors, specifically, lagged dependent variables. To address the problem of endogeneity of these nonlinear dynamic panel data models, we prove that the orthogonality conditions proposed by Arellano and Bond (1991 are valid. The threshold and slope parameters are estimated by GMM, and asymptotic distribution of the slope parameters is derived. Finite sample performance of the estimation is investigated through Monte Carlo simulations. It shows that the threshold and slope parameter can be estimated accurately and also the finite sample distribution of slope parameters is well approximated by the asymptotic distribution.
Directory of Open Access Journals (Sweden)
Githure John I
2009-09-01
Full Text Available Abstract Background Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct some form of spatial error analysis to detect clusters of An. arabiensis aquatic habitats based on uncertainty residuals from individual sampled habitats. In this research, a method of error estimation for spatial simulation models was demonstrated using autocorrelation indices and eigenfunction spatial filters to distinguish among the effects of parameter uncertainty on a stochastic simulation of ecologically sampled Anopheles aquatic habitat covariates. A test for diagnostic checking of error residuals in an An. arabiensis aquatic habitat model may enable intervention efforts targeting clusters of productive habitats, based on larval/pupal productivity, by using the asymptotic distribution of parameter estimates from a residual autocovariance matrix. The models considered in this research extend a normal regression analysis previously considered in the literature. Methods Field and remote-sampled data were collected from July 2006 to December 2007 in the Karima rice-village complex in Mwea, Kenya. SAS 9.1.4® was used to explore univariate statistics, correlations and distributions, and to generate global autocorrelation statistics from the ecologically sampled datasets. A local autocorrelation index was also generated using spatial covariance parameters (i.e., Moran's Indices) in a SAS/GIS® database. Moran's statistic was decomposed into orthogonal and uncorrelated synthetic map pattern components using a Poisson model with a gamma-distributed mean (i.e., negative binomial regression). The eigenfunction
A new method to estimate parameters of linear compartmental models using artificial neural networks
International Nuclear Information System (INIS)
Gambhir, Sanjiv S.; Keppenne, Christian L.; Phelps, Michael E.; Banerjee, Pranab K.
1998-01-01
At present, the preferred tool for parameter estimation in compartmental analysis is an iterative procedure: weighted nonlinear regression. For a large number of applications, observed data can be fitted to sums of exponentials whose parameters are directly related to the rate constants/coefficients of the compartmental models. Since weighted nonlinear regression often has to be repeated for many different data sets, the process of fitting data from compartmental systems can be very time consuming. Furthermore, the minimization routine often converges to a local (as opposed to global) minimum. In this paper, we examine the possibility of using artificial neural networks instead of weighted nonlinear regression in order to estimate model parameters. We train simple feed-forward neural networks to produce as outputs the parameter values of a given model when kinetic data are fed to the networks' input layer. The artificial neural networks produce unbiased estimates and are orders of magnitude faster than regression algorithms. At noise levels typical of many real applications, the neural networks are found to produce lower variance estimates than weighted nonlinear regression in the estimation of parameters from mono- and biexponential models. These results are primarily due to the inability of weighted nonlinear regression to converge. These results establish that artificial neural networks are powerful tools for estimating parameters for simple compartmental models. (author)
Heterogeneous traffic flow modelling using second-order macroscopic continuum model
Mohan, Ranju; Ramadurai, Gitakrishnan
2017-01-01
Modelling heterogeneous traffic flow lacking in lane discipline is one of the emerging research areas of the past few years. The two main challenges in modelling are capturing the effect of the varying size of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section of road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to propose area occupancy as the concentration measure for heterogeneous traffic lacking in lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy instead of density as the measure of traffic concentration. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.
Fast prediction and evaluation of eccentric inspirals using reduced-order models
Barta, Dániel; Vasúth, Mátyás
2018-06-01
A large number of theoretically predicted waveforms are required by matched-filtering searches for the gravitational-wave signals produced by compact binary coalescence. In order to substantially alleviate the computational burden in gravitational-wave searches and parameter estimation without degrading the signal detectability, we propose a novel reduced-order-model (ROM) approach with applications to adiabatic 3PN-accurate inspiral waveforms of nonspinning sources that evolve on either highly or slightly eccentric orbits. We provide a singular-value-decomposition-based reduced-basis method in the frequency domain to generate reduced-order approximations of any gravitational waves with acceptable accuracy and precision within the parameter range of the model. We construct efficient reduced bases comprised of a relatively small number of the most relevant waveforms over the three-dimensional parameter space covered by the template bank (total mass 2.15 M⊙≤M ≤215 M⊙ , mass ratio 0.01 ≤q ≤1 , and initial orbital eccentricity 0 ≤e0≤0.95 ). The ROM is designed to predict signals in the frequency band from 10 Hz to 2 kHz for aLIGO and aVirgo design sensitivity. Besides moderating the data reduction, finer sampling of fiducial templates improves the accuracy of the surrogates. A considerable increase in speedup, from several hundred to thousands, can be achieved by evaluating surrogates for low-mass systems, especially when combined with high eccentricity.
Bayesian estimation of parameters in a regional hydrological model
Directory of Open Access Journals (Sweden)
K. Engeland
2002-01-01
Full Text Available This study evaluates the applicability of the distributed, process-oriented Ecomag model for prediction of daily streamflow in ungauged basins. The Ecomag model is applied as a regional model to nine catchments in the NOPEX area, using Bayesian statistics to estimate the posterior distribution of the model parameters conditioned on the observed streamflow. The distribution is calculated by Markov Chain Monte Carlo (MCMC) analysis. The Bayesian method requires formulation of a likelihood function for the parameters and three alternative formulations are used. The first is a subjectively chosen objective function that describes the goodness of fit between the simulated and observed streamflow, as defined in the GLUE framework. The second and third formulations are more statistically correct likelihood models that describe the simulation errors. The full statistical likelihood model describes the simulation errors as an AR(1) process, whereas the simple model excludes the auto-regressive part. The statistical parameters depend on the catchments and the hydrological processes and the statistical and the hydrological parameters are estimated simultaneously. The results show that the simple likelihood model gives the most robust parameter estimates. The simulation error may be explained to a large extent by the catchment characteristics and climatic conditions, so it is possible to transfer knowledge about them to ungauged catchments. The statistical models for the simulation errors indicate that structural errors in the model are more important than parameter uncertainties. Keywords: regional hydrological model, model uncertainty, Bayesian analysis, Markov Chain Monte Carlo analysis
Bayesian Nonparametric Model for Estimating Multistate Travel Time Distribution
Directory of Open Access Journals (Sweden)
Emmanuel Kidando
2017-01-01
Full Text Available Multistate models, that is, models with more than two distributions, are preferred over single-state probability models in modeling the distribution of travel time. Literature review indicated that the finite multistate modeling of travel time using lognormal distribution is superior to other probability functions. In this study, we extend the finite multistate lognormal model of estimating the travel time distribution to unbounded lognormal distribution. In particular, a nonparametric Dirichlet Process Mixture Model (DPMM) with stick-breaking process representation was used. The strength of the DPMM is that it can choose the number of components dynamically as part of the algorithm during parameter estimation. To reduce computational complexity, the modeling process was limited to a maximum of six components. Then, the Markov Chain Monte Carlo (MCMC) sampling technique was employed to estimate the parameters' posterior distribution. Speed data from nine links of a freeway corridor, aggregated on a 5-minute basis, were used to calculate the corridor travel time. The results demonstrated that this model offers significant flexibility in modeling to account for complex mixture distributions of the travel time without specifying the number of components. The DPMM modeling further revealed that freeway travel time is characterized by multistate or single-state models depending on the inclusion of onset and offset of congestion periods.
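The stick-breaking representation mentioned above can be sketched directly: each weight is a Beta-distributed fraction of the stick left over by the previous breaks. This illustrates only the weight construction, truncated at six components as in the study, not the full DPMM fit to travel-time data (the concentration parameter and seed below are arbitrary):

```python
import numpy as np

def stick_breaking(alpha, n_components, rng):
    """Truncated stick-breaking construction of Dirichlet-process mixture
    weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_components)
    v[-1] = 1.0  # truncate the last break so the weights sum to one
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(42)
w = stick_breaking(alpha=1.0, n_components=6, rng=rng)
print(w.sum())  # 1.0 up to floating-point error
```

In the mixture model, each weight would multiply one lognormal component; components whose weights shrink toward zero are effectively pruned, which is how the DPMM "chooses" the number of states.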
Bayesian Modeling of ChIP-chip Data Through a High-Order Ising Model
Mo, Qianxing; Liang, Faming
2010-01-01
approach to ChIP-chip data through an Ising model with high-order interactions. The proposed method naturally takes into account the intrinsic spatial structure of the data and can be used to analyze data from multiple platforms with different genomic
Ordering phase transition in the one-dimensional Axelrod model
Vilone, D.; Vespignani, A.; Castellano, C.
2002-12-01
We study the one-dimensional behavior of a cellular automaton aimed at the description of the formation and evolution of cultural domains. The model exhibits a non-equilibrium transition between a phase with all the system sharing the same culture and a disordered phase of coexisting regions with different cultural features. Depending on the initial distribution of the disorder the transition occurs at different values of the model parameters. This phenomenology is qualitatively captured by a mean-field approach, which maps the dynamics into a multi-species reaction-diffusion problem.
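A minimal sketch of the 1-D Axelrod dynamics described above is short: each site carries F cultural features with q possible traits each, and a site copies a differing trait from a neighbour with probability equal to their current overlap. The lattice size, F, q, and step count below are arbitrary choices for illustration, not the parameter values studied in the paper:

```python
import random

def axelrod_1d(n_sites, n_features, q, steps, seed=0):
    """Minimal 1-D Axelrod model: pick a site and its right neighbour;
    with probability equal to their cultural overlap, the site copies
    one of the neighbour's differing traits."""
    random.seed(seed)
    sites = [[random.randrange(q) for _ in range(n_features)] for _ in range(n_sites)]
    for _ in range(steps):
        i = random.randrange(n_sites - 1)
        a, b = sites[i], sites[i + 1]
        overlap = sum(x == y for x, y in zip(a, b)) / n_features
        if 0 < overlap < 1 and random.random() < overlap:
            k = random.choice([j for j in range(n_features) if a[j] != b[j]])
            a[k] = b[k]  # copy one differing cultural feature
    return sites

final = axelrod_1d(n_sites=50, n_features=3, q=2, steps=20000)
n_domains = 1 + sum(final[i] != final[i + 1] for i in range(len(final) - 1))
print(n_domains)  # number of cultural domains after relaxation
```

Counting the domains in the final configuration is the natural order parameter here: a single domain corresponds to the ordered (monocultural) phase, many frozen domains to the disordered phase.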
Hydrological model uncertainty due to spatial evapotranspiration estimation methods
Yu, Xuan; Lamačová, Anna; Duffy, Christopher; Krám, Pavel; Hruška, Jakub
2016-05-01
Evapotranspiration (ET) continues to be a difficult process to estimate in seasonal and long-term water balances in catchment models. Approaches to estimate ET typically use vegetation parameters (e.g., leaf area index [LAI], interception capacity) obtained from field observation, remote sensing data, national or global land cover products, and/or simulated by ecosystem models. In this study we attempt to quantify the uncertainty that spatial evapotranspiration estimation introduces into hydrological simulations when the age of the forest is not precisely known. The Penn State Integrated Hydrologic Model (PIHM) was implemented for the Lysina headwater catchment, located 50°03‧N, 12°40‧E in the western part of the Czech Republic. The spatial forest patterns were digitized from forest age maps made available by the Czech Forest Administration. Two ET methods were implemented in the catchment model: the Biome-BGC forest growth sub-model (1-way coupled to PIHM) and with the fixed-seasonal LAI method. From these two approaches simulation scenarios were developed. We combined the estimated spatial forest age maps and two ET estimation methods to drive PIHM. A set of spatial hydrologic regime and streamflow regime indices were calculated from the modeling results for each method. Intercomparison of the hydrological responses to the spatial vegetation patterns suggested considerable variation in soil moisture and recharge and a small uncertainty in the groundwater table elevation and streamflow. The hydrologic modeling with ET estimated by Biome-BGC generated less uncertainty due to the plant physiology-based method. The implication of this research is that overall hydrologic variability induced by uncertain management practices was reduced by implementing vegetation models in the catchment models.
Marginal Maximum Likelihood Estimation of Item Response Models in R
Directory of Open Access Journals (Sweden)
Matthew S. Johnson
2007-02-01
Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
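The generalized partial credit model whose estimation the paper addresses has a simple closed-form response function. The paper implements estimation in R; as a language-neutral illustration, here is the GPCM category-probability function with hypothetical item parameters (discrimination and step difficulties below are made up):

```python
import math

def gpcm_probs(theta, a, b_thresholds):
    """Category probabilities of the generalized partial credit model:
    P(X = k | theta) ∝ exp( sum_{j<=k} a * (theta - b_j) ),
    with category 0 as the reference (empty sum)."""
    logits = [0.0]
    for b in b_thresholds:
        logits.append(logits[-1] + a * (theta - b))
    expvals = [math.exp(l) for l in logits]
    total = sum(expvals)
    return [e / total for e in expvals]

# Hypothetical 3-category item: discrimination 1.2, steps at -0.5 and 0.8.
probs = gpcm_probs(theta=0.0, a=1.2, b_thresholds=[-0.5, 0.8])
print(round(sum(probs), 6))  # 1.0
```

Marginal maximum likelihood estimation integrates this likelihood over the latent trait distribution, which is the computation the paper's R functions carry out.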
Estimation of the Thurstonian model for the 2-AC protocol
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen; Lee, Hye-Seong; Brockhoff, Per B.
2012-01-01
The 2-AC protocol is a 2-AFC protocol with a “no-difference” option and is technically identical to the paired preference test with a “no-preference” option. The Thurstonian model for the 2-AC protocol is parameterized by δ and a decision parameter τ, the estimates of which can be obtained by fairly simple well-known methods. In this paper we describe how standard errors of the parameters can be obtained and how exact power computations can be performed. We also show how the Thurstonian model for the 2-AC protocol is closely related to a statistical model known as a cumulative probit model. This relationship makes it possible to extract estimates and standard errors of δ and τ from general statistical software, and furthermore, it makes it possible to combine standard regression modelling with the Thurstonian model for the 2-AC protocol. A model for replicated 2-AC data is proposed using cumulative......
Bayesian analysis for uncertainty estimation of a canopy transpiration model
Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.
2007-04-01
A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing the coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
[Using log-binomial model for estimating the prevalence ratio].
Ye, Rong; Gao, Yan-hui; Yang, Yi; Chen, Yue
2010-05-01
To estimate prevalence ratios using a log-binomial model with or without continuous covariates. Prevalence ratios for individuals' attitude towards smoking-ban legislation associated with smoking status, estimated using a log-binomial model, were compared with odds ratios estimated by a logistic regression model. In the log-binomial modeling, the maximum likelihood method was used when there were no continuous covariates, and the COPY approach was used if the model did not converge, for example due to the presence of continuous covariates. We examined the association between individuals' attitude towards smoking-ban legislation and smoking status in men and women. Prevalence ratio and odds ratio estimation provided similar results for the association in women, since smoking was not common. In men, however, the odds ratio estimates were markedly larger than the prevalence ratios due to the higher prevalence of the outcome. The log-binomial model did not converge when age was included as a continuous covariate, and the COPY method was used to deal with this situation. All analyses were performed in SAS. The prevalence ratio seemed to measure the association better than the odds ratio when the prevalence is high. SAS programs are provided to calculate prevalence ratios with or without continuous covariates in log-binomial regression analysis.
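The contrast between the two measures is easy to reproduce on a single 2x2 table. A minimal sketch with made-up counts (not the survey's data): when the outcome is common, the odds ratio markedly exceeds the prevalence ratio, which is why a log-binomial model estimating the prevalence ratio directly is preferred here.

```python
# Hypothetical 2x2 table; a,b = outcome yes/no among exposed,
# c,d = outcome yes/no among unexposed. Counts are illustrative only.
def prevalence_ratio(a, b, c, d):
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# High-prevalence outcome: the OR markedly exceeds the PR, as the abstract notes.
a, b, c, d = 60, 40, 30, 70          # 60% vs 30% outcome prevalence
pr = prevalence_ratio(a, b, c, d)    # 0.6 / 0.3 = 2.0
or_ = odds_ratio(a, b, c, d)         # (60*70) / (40*30) = 3.5
```

With a rare outcome the two measures nearly coincide, which matches the paper's observation for women.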
Proposed higher order continuum-based models for an elastic ...
African Journals Online (AJOL)
Three new variants of continuum-based models for an elastic subgrade are proposed. The subgrade is idealized as a homogenous, isotropic elastic layer of thickness H overlying a firm stratum. All components of the stress tensor in the subgrade are taken into account. Reasonable assumptions are made regarding the ...
Multilevel Higher-Order Item Response Theory Models
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
In the social sciences, latent traits often have a hierarchical structure, and data can be sampled from multiple levels. Both hierarchical latent traits and multilevel data can occur simultaneously. In this study, we developed a general class of item response theory models to accommodate both hierarchical latent traits and multilevel data. The…
Optimization of power rationing order based on fuzzy evaluation model
Zhang, Siyuan; Liu, Li; Xie, Peiyuan; Tang, Jihong; Wang, Canlin
2018-04-01
With the development of production and economic growth, China's electricity load has increased significantly. Over the years, to alleviate power shortages, China has adopted a series of policies and measures to accelerate electric power construction, which has promoted the rapid development of the power industry and led to great achievements in power construction. After this large-scale construction of power facilities, the grid's long-term power shortage has been relieved to some extent, but within a certain period development remains uneven: overall supply is still insufficient, and power restriction remains severe in some areas. It is therefore necessary to study the power rationing order.
Mixed Higher Order Variational Model for Image Recovery
Directory of Open Access Journals (Sweden)
Pengfei Liu
2014-01-01
Full Text Available A novel mixed higher order regularizer involving the first and second degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Exploiting this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under the majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme through experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.
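The FPG-MFISTA solver itself is not reproduced here; as a hedged illustration of its core building block, the scalar soft-thresholding (shrinkage) operator used in ISTA/FISTA-type iterations for L1-regularized problems:

```python
def soft_threshold(x, t):
    """Proximal operator of t*|x|: the shrinkage step in (M)FISTA-type solvers."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# One shrinkage step for min_x 0.5*(x - y)**2 + lam*|x|  (scalar sketch)
y, lam = 3.0, 1.0
x = soft_threshold(y, lam)   # 2.0
```

In the full algorithm this operator is applied componentwise to the image-derivative coefficients at each projected-gradient iteration.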
DEFF Research Database (Denmark)
Gørgens, Tue; Skeels, Christopher L.; Wurtz, Allan
This paper explores estimation of a class of non-linear dynamic panel data models with additive unobserved individual-specific effects. The models are specified by moment restrictions. The class includes the panel data AR(p) model and panel smooth transition models. We derive an efficient set...... of moment restrictions for estimation and apply the results to estimation of panel smooth transition models with fixed effects, where the transition may be determined endogenously. The performance of the GMM estimator, both in terms of estimation precision and forecasting performance, is examined in a Monte...
On the Estimation of Standard Errors in Cognitive Diagnosis Models
Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim
2018-01-01
Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…
Estimation of pure autoregressive vector models for revenue series ...
African Journals Online (AJOL)
This paper aims at applying multivariate approach to Box and Jenkins univariate time series modeling to three vector series. General Autoregressive Vector Models with time varying coefficients are estimated. The first vector is a response vector, while others are predictor vectors. By matrix expansion each vector, whether ...
Vacuum expectation values for four-fermion operators. Model estimates
International Nuclear Information System (INIS)
Zhitnitskij, A.R.
1985-01-01
Some simple models (a system with a heavy quark, a rarefied instanton gas) are used to investigate the problem of factorizability. Characteristics of the vacuum fluctuations responsible for saturation of the phenomenologically known four-fermion vacuum expectation values are discussed. A qualitative agreement between the model and phenomenological estimates is noted
Vacuum expectation values of four-fermion operators. Model estimates
International Nuclear Information System (INIS)
Zhitnitskii, A.R.
1985-01-01
Simple models (a system with a heavy quark, a rarefied instanton gas) are used to study problems of factorizability. A discussion is given of the characteristics of the vacuum fluctuations responsible for saturation of the phenomenologically known four-fermion vacuum expectation values. Qualitative agreement between the model and phenomenological estimates is observed
Estimation of pump operational state with model-based methods
International Nuclear Information System (INIS)
Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha
2010-01-01
Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
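A minimal sketch of the kind of model-based reasoning involved, assuming the standard centrifugal-pump affinity laws (flow ~ speed, head ~ speed^2, power ~ speed^3): once a QH/power operating point is known at a reference speed, its estimate at another speed follows without external measurements. The reference point below is hypothetical, not from the paper's pilot installations.

```python
# Affinity-law scaling of a pump operating point from an estimated motor speed.
def scale_operating_point(q0, h0, p0, n0, n):
    """Scale flow q0, head h0, shaft power p0 from speed n0 to speed n."""
    r = n / n0
    return q0 * r, h0 * r ** 2, p0 * r ** 3

# Halve the speed of a pump running at 100 m3/h, 20 m, 7.5 kW at 1450 rpm.
q, h, p = scale_operating_point(100.0, 20.0, 7.5, 1450.0, 725.0)
# q = 50.0 m3/h, h = 5.0 m, p = 0.9375 kW
```

A real frequency-converter implementation would combine such curves with the estimated motor torque and speed, as the paper describes.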
Simplification of an MCNP model designed for dose rate estimation
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Simplification of an MCNP model designed for dose rate estimation
Directory of Open Access Journals (Sweden)
Laptev Alexander
2017-01-01
Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Improved air ventilation rate estimation based on a statistical model
International Nuclear Information System (INIS)
Brabec, M.; Jilek, K.
2004-01-01
A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle time-varying ventilation rate. This is a major advantage, compared to some of the methods which are currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements
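The paper estimates a time-varying ventilation rate with a state-space Kalman filter; that machinery is not reproduced here. As a much simpler baseline, the classical concentration-decay method recovers a constant air exchange rate from two CO readings, assuming a well-mixed zone with constant outdoor level C_out, so that C(t) = C_out + (C0 - C_out) * exp(-lam * t).

```python
import math

def decay_rate(c0, ct, c_out, t):
    """Air exchange rate (1/h) from CO readings taken t hours apart."""
    return math.log((c0 - c_out) / (ct - c_out)) / t

# Synthetic decay: lam = 0.5 /h, outdoor 400 ppm, initial 1400 ppm.
lam_true = 0.5
ct = 400.0 + 1000.0 * math.exp(-lam_true * 2.0)  # reading after 2 h
lam_hat = decay_rate(1400.0, ct, 400.0, 2.0)     # recovers 0.5
```

The state-space approach in the paper generalizes this by letting the rate vary in time and by handling measurement noise through the Kalman recursion.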
Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks
Kyo, Koki
Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
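The Bayesian smoothness-prior machinery is beyond a short sketch, but the Box-Cox transform underlying the proposed models is simple to illustrate. The profile-likelihood grid search below for choosing the transformation parameter is a standard textbook device, not the author's estimation method.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of a positive observation y."""
    return math.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def boxcox_loglik(ys, lam):
    """Profile log-likelihood of lam (up to a constant), used to pick lam."""
    n = len(ys)
    z = [boxcox(y, lam) for y in ys]
    m = sum(z) / n
    var = sum((zi - m) ** 2 for zi in z) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(y) for y in ys)

ys = [math.exp(v) for v in (0.1, 0.5, 0.9, 1.4, 2.0)]   # toy positive data
lams = [i / 10 for i in range(-10, 11)]
best = max(lams, key=lambda l: boxcox_loglik(ys, l))    # grid-search MLE of lam
```

In the paper the transformed pointing-task times then enter the SH-model, with the learning-effect parameters given smoothness priors.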
A Reduced Order, One Dimensional Model of Joint Response
Energy Technology Data Exchange (ETDEWEB)
DOHNER,JEFFREY L.
2000-11-06
As a joint is loaded, the tangent stiffness of the joint reduces due to slip at interfaces. This stiffness reduction continues until the direction of the applied load is reversed or the total interface slips. Total interface slippage in joints is called macro-slip. For joints not undergoing macro-slip, when load reversal occurs the tangent stiffness immediately rebounds to its maximum value. This occurs due to stiction effects at the interface. Thus, for periodic loads, a softening and rebound hardening cycle is produced which defines a hysteretic, energy absorbing trajectory. For many jointed sub-structures, this hysteretic trajectory can be approximated using simple polynomial representations. This allows for complex joint substructures to be represented using simple non-linear models. In this paper a simple one dimensional model is discussed.
Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models
Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.
2014-12-01
We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
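The agreement statistics quoted above are straightforward to compute. A self-contained sketch with toy subbasin yields (not the SWAT/InVEST values), assuming no tied ranks: Pearson r measures agreement of the raw water-yield estimates, Spearman r agreement of the subbasin rankings used for prioritization.

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def spearman(xs, ys):
    """Spearman r = Pearson r of the ranks (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(xs), ranks(ys))

xs = [1.0, 2.0, 3.0, 4.0]      # toy yields from model A
ys = [1.1, 2.2, 2.9, 4.3]      # toy yields from model B
# pearson(xs, ys) > 0.99; spearman(xs, ys) == 1.0 (same ranking)
```

A high Spearman r with a mediocre Pearson r would indicate the models disagree on magnitudes but still rank subbasins consistently, which is the management-relevant comparison made in the abstract.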
Ali, M. F.; Mawdsley, J. A.
1987-09-01
An advection-aridity model for estimating actual evapotranspiration (ET) is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf and rye-grass at three sites in the U.K. The performance of the model is also compared with the API model proposed by Mawdsley and Ali (1979). The tests show that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, these tendencies appear to cancel each other. On a daily basis the performance of this model is of the same order as that of the API model: correlation coefficients of 0.62 and 0.68, respectively, were obtained between the model estimates and the lysimeter data. For periods greater than one day, the performance of both models generally improves.
A novel Gaussian model based battery state estimation approach: State-of-Energy
International Nuclear Information System (INIS)
He, HongWen; Zhang, YongZhi; Xiong, Rui; Wang, Chun
2015-01-01
Highlights: • The Gaussian model is employed to construct a novel battery model. • The genetic algorithm is used to implement model parameter identification. • The AIC is used to decide the best hysteresis order of the battery model. • A novel battery SoE estimator is proposed and verified with two kinds of batteries. - Abstract: State-of-energy (SoE) is a very important index for the battery management system (BMS) used in electric vehicles (EVs); it is indispensable for ensuring the safe and reliable operation of batteries. To estimate battery SoE accurately, the main work can be summarized in three aspects. (1) Considering that different kinds of batteries show different open circuit voltage behaviors, the Gaussian model is employed to construct the battery model; moreover, the genetic algorithm is employed to locate the optimal parameters of the selected battery model. (2) To determine an optimal tradeoff between battery model complexity and prediction precision, the Akaike information criterion (AIC) is used to determine the best hysteresis order of the combined battery model. A comparative analysis shows that the first-order hysteresis battery model is the best according to the AIC values. (3) The central difference Kalman filter (CDKF) is used to estimate the real-time SoE, and an erroneous initial SoE is considered to evaluate the robustness of the SoE estimator. Lastly, two kinds of lithium-ion batteries are used to verify the proposed SoE estimation approach. The results show that the maximum SoE estimation error is within 1% for both the LiFePO4 and LiMn2O4 battery datasets.
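The AIC-based order selection used in step (2) can be sketched generically: fit candidate model orders and keep the one minimizing AIC = n*ln(RSS/n) + 2k, where k is the number of parameters. The example below uses synthetic polynomial data rather than a battery model, purely to show the selection mechanism.

```python
import math

def fit_poly_rss(xs, ys, order):
    """Least-squares polynomial fit via normal equations; returns the RSS."""
    k = order + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):                         # Gaussian elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):               # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

def aic(n, rss, k):
    return n * math.log(rss / n) + 2 * k

# Truly first-order data with a small deterministic perturbation.
xs = [i / 10 for i in range(30)]
ys = [1.0 + 2.0 * x + (0.01 if i % 2 == 0 else -0.01) for i, x in enumerate(xs)]
best = min(range(4), key=lambda p: aic(len(xs), fit_poly_rss(xs, ys, p), p + 1))
# best == 1: extra orders barely reduce the RSS, so the 2k penalty dominates
```

The paper applies the same tradeoff to hysteresis orders of the Gaussian battery model, concluding that first order is best.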
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities for analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed persons in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations; therefore model based methods, which tend to
Parameter Estimation in Stochastic Grey-Box Models
DEFF Research Database (Denmark)
Kristensen, Niels Rode; Madsen, Henrik; Jørgensen, Sten Bay
2004-01-01
An efficient and flexible parameter estimation scheme for grey-box models in the sense of discretely, partially observed Ito stochastic differential equations with measurement noise is presented along with a corresponding software implementation. The estimation scheme is based on the extended...... Kalman filter and features maximum likelihood as well as maximum a posteriori estimation on multiple independent data sets, including irregularly sampled data sets and data sets with occasional outliers and missing observations. The software implementation is compared to an existing software tool...... and proves to have better performance both in terms of quality of estimates for nonlinear systems with significant diffusion and in terms of reproducibility. In particular, the new tool provides more accurate and more consistent estimates of the parameters of the diffusion term....
The problematic estimation of "imitation effects" in multilevel models
Directory of Open Access Journals (Sweden)
2003-09-01
Full Text Available It seems plausible that a person's demographic behaviour may be influenced by that among other people in the community, for example because of an inclination to imitate. When estimating multilevel models from clustered individual data, some investigators might perhaps feel tempted to try to capture this effect by simply including on the right-hand side the average of the dependent variable, constructed by aggregation within the clusters. However, such modelling must be avoided. According to simulation experiments based on real fertility data from India, the estimated effect of this obviously endogenous variable can be very different from the true effect. Also the other community effect estimates can be strongly biased. An "imitation effect" can only be estimated under very special assumptions that in practice will be hard to defend.
In vitro biological models in order to study BNCT
International Nuclear Information System (INIS)
Dagrosa, Maria A.; Kreimann, Erica L.; Schwint, Amanda E.; Juvenal, Guillermo J.; Pisarev, Mario A.; Farias, Silvia S.; Garavaglia, Ricardo N.; Batistoni, Daniel A.
1999-01-01
Undifferentiated thyroid carcinoma (UTC) lacks an effective treatment. Boron neutron capture therapy (BNCT) is based on the selective uptake of 10B-boronated compounds by some tumours, followed by irradiation with an appropriate neutron beam. The 11B formed by neutron capture decays, releasing 7Li, gamma rays and alpha particles, and the latter destroy the tumour. In order to explore the possibility of applying BNCT to UTC we have studied the biodistribution of BPA. In vitro studies: the uptake of p-10B-boronophenylalanine (BPA) by the UTC cell line ARO, primary cultures of normal bovine thyroid cells (BT) and human follicular thyroid adenoma (FA) cells was studied. No difference in BPA uptake was observed between proliferating and quiescent ARO cells. The uptake by quiescent ARO, BT and FA cells showed that the ARO/BT and ARO/FA ratios were 4 and 5, respectively (p < 0.001). These experimental results open the possibility of applying BNCT to the treatment of UTC. (author)
Development on electromagnetic impedance function modeling and its estimation
Energy Technology Data Exchange (ETDEWEB)
Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)
2015-09-30
Today electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, important and difficult problems remain to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite element numerical modeling of the impedances is developed based on the edge element method, whereas in the CSAMT case the efforts focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning the estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform, operating on the causal transfer functions, was used to deal with outliers (abnormal data) which are frequently superimposed on the normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three dimensional resistivity models, whilst the full-solution-based modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition
A Bayesian framework for parameter estimation in dynamical models.
Directory of Open Access Journals (Sweden)
Flávio Codeço Coelho
Full Text Available Mathematical models in biology are powerful tools for the study and exploration of complex dynamics. Nevertheless, bringing theoretical results to an agreement with experimental observations involves acknowledging a great deal of uncertainty intrinsic to our theoretical representation of a real system. Proper handling of such uncertainties is key to the successful usage of models to predict experimental or field observations. This problem has been addressed over the years by many tools for model calibration and parameter estimation. In this article we present a general framework for uncertainty analysis and parameter estimation that is designed to handle uncertainties associated with the modeling of dynamic biological systems while remaining agnostic as to the type of model used. We apply the framework to fit an SIR-like influenza transmission model to 7 years of incidence data in three European countries: Belgium, the Netherlands and Portugal.
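As a minimal (non-Bayesian) analogue of such model fitting, the sketch below simulates an SIR incidence curve with Euler steps and recovers the transmission rate beta by least squares over a grid. The data are synthetic, not the European incidence series, and the grid search stands in for the paper's full uncertainty-analysis framework.

```python
# Forward-simulate SIR incidence, then fit beta to "observed" incidence.
def sir_incidence(beta, gamma=0.5, s0=0.99, i0=0.01, steps=40, dt=0.25):
    s, i = s0, i0
    out = []
    for _ in range(steps):
        new_inf = beta * s * i * dt      # new infections this step
        s -= new_inf
        i += new_inf - gamma * i * dt
        out.append(new_inf)
    return out

data = sir_incidence(beta=1.5)                    # synthetic "observations"
grid = [1.0 + 0.1 * k for k in range(11)]         # candidate betas 1.0..2.0
sse = lambda b: sum((x - y) ** 2 for x, y in zip(sir_incidence(b), data))
beta_hat = min(grid, key=sse)                     # recovers 1.5
```

A Bayesian treatment, as in the paper, would replace the grid search with a prior over (beta, gamma) and return a posterior distribution rather than a point estimate.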
Multiscale Reduced Order Modeling of Complex Multi-Bay Structures
2013-07-01
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and OMICS data analysis.
Advanced empirical estimate of information value for credit scoring models
Directory of Open Access Journals (Sweden)
Martin Řezáč
2011-01-01
Full Text Available Credit scoring is a term for a wide spectrum of predictive models, and their underlying techniques, that aid financial institutions in granting credit. These methods decide who will get credit, how much credit they should get, and what further strategies will enhance the profitability of the borrowers to the lenders. Many statistical tools are available for measuring the quality, in the sense of predictive power, of credit scoring models. Because it is impossible to use a scoring model effectively without knowing how good it is, quality indexes like the Gini coefficient, the Kolmogorov-Smirnov statistic and the Information value are used to assess the quality of a given credit scoring model. The paper deals primarily with the Information value, sometimes called divergence. Commonly it is computed by discretising the data into bins using deciles, in which case one constraint must be met: the number of cases has to be nonzero for all bins. If this constraint is not fulfilled, there are practical procedures for preserving finite results. As an alternative to the empirical estimates, one can use kernel smoothing theory, which allows unknown densities to be estimated and, consequently, using some numerical method for integration, the Information value as well. The main contribution of this paper is the proposal and description of an empirical estimate with supervised interval selection. This advanced estimate is based on the requirement to have at least k observations of scores of both good and bad clients in each considered interval, where k is a positive integer. A simulation study shows that this estimate outperforms both the empirical estimate using deciles and the kernel estimate. Furthermore, it shows a high dependency on the choice of the parameter k: too small a value overestimates the Information value, and too large a value underestimates it. The adjusted square root of the number of bad clients seems to be a reasonable compromise.
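The basic empirical estimate can be sketched directly from its definition, IV = sum over bins of (g_i - b_i) * ln(g_i / b_i), where g_i and b_i are the shares of good and bad clients in bin i. The counts below are illustrative; the nonzero-bin constraint discussed above appears here as an explicit error.

```python
import math

def information_value(good_counts, bad_counts):
    """Empirical Information Value from binned good/bad client counts."""
    G, B = sum(good_counts), sum(bad_counts)
    iv = 0.0
    for g, b in zip(good_counts, bad_counts):
        if g == 0 or b == 0:
            # an empty class in any bin makes IV infinite; bins must be merged
            raise ValueError("empty class in a bin; merge or reselect bins")
        iv += (g / G - b / B) * math.log((g / G) / (b / B))
    return iv

good = [10, 20, 30, 40]   # good clients per score bin (illustrative)
bad  = [40, 30, 20, 10]   # bad clients per score bin
iv = information_value(good, bad)   # about 0.913: strong separation
```

The paper's supervised interval selection amounts to choosing the bins so that every bin has at least k goods and k bads, which guarantees the function above never hits the error branch.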
Perspectives on Modelling BIM-enabled Estimating Practices
Directory of Open Access Journals (Sweden)
Willy Sher
2014-12-01
Full Text Available BIM-enabled estimating processes do not replace or provide a substitute for the traditional approaches used in the architecture, engineering and construction industries. This paper explores the impact of BIM on these traditional processes. It identifies differences between the approaches used with BIM and other conventional methods, and between the various construction professionals that prepare estimates. We interviewed 17 construction professionals from client organizations, contracting organizations, consulting practices and specialist-project firms. Our analyses highlight several logical relationships between estimating processes and BIM attributes. Estimators need to respond to the challenges BIM poses to traditional estimating practices. BIM-enabled estimating circumvents long-established conventions and traditional approaches, and focuses on data management. Consideration needs to be given to the model data required for estimating, to the means by which these data may be harnessed when exported, to the means by which the integrity of model data is protected, to the creation and management of tools that work effectively and efficiently in multi-disciplinary settings, and to approaches that narrow the gap between virtual reality and actual reality. Areas for future research are also identified in the paper.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Line impedance estimation using model based identification technique
DEFF Research Database (Denmark)
Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus
2011-01-01
The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
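A hedged sketch of model-based R-L identification: if the measured voltage drop follows v = R*i + L*di/dt, the two parameters fall out of a two-unknown linear least-squares fit. The synthetic 50 Hz data below stand in for actual converter measurements; the paper's quasi-passive method is more involved.

```python
import math

def estimate_rl(i, di, v):
    """Least-squares R and L from samples of current, its derivative, and voltage."""
    s11 = sum(x * x for x in i)
    s12 = sum(x * y for x, y in zip(i, di))
    s22 = sum(y * y for y in di)
    b1 = sum(x * z for x, z in zip(i, v))
    b2 = sum(y * z for y, z in zip(di, v))
    det = s11 * s22 - s12 * s12           # normal-equation determinant
    R = (s22 * b1 - s12 * b2) / det
    L = (s11 * b2 - s12 * b1) / det
    return R, L

w = 2 * math.pi * 50                       # 50 Hz grid
ts = [k * 1e-4 for k in range(200)]        # one full period at 0.1 ms steps
i  = [10 * math.sin(w * t) for t in ts]
di = [10 * w * math.cos(w * t) for t in ts]
R_true, L_true = 0.4, 1e-3
v = [R_true * x + L_true * y for x, y in zip(i, di)]
R, L = estimate_rl(i, di, v)               # recovers 0.4 ohm and 1 mH
```

In practice di/dt would come from filtered numerical differentiation of sampled current, and the injected excitation would be designed to keep the normal equations well conditioned.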
Model Year 2017 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2016-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2012 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2011-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2013 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2012-12-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2011 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2010-11-01
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Model Year 2018 Fuel Economy Guide: EPA Fuel Economy Estimates
Energy Technology Data Exchange (ETDEWEB)
None
2017-12-07
The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. These estimates are provided by the U.S. Environmental Protection Agency in compliance with Federal Law. By using this Guide, consumers can estimate the average yearly fuel cost for any vehicle. The Guide is intended to help consumers compare the fuel economy of similarly sized cars, light duty trucks and special purpose vehicles.
Models of economic geography: dynamics, estimation and policy evaluation
Knaap, Thijs
2004-01-01
In this thesis, we examine economic geography models from a number of angles. We begin by placing the theory in the context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition ‘revolution.’ Next, we study the theoretical properties of these models, especially when firms are allowed to have different demand functions for intermediate goods. We then estimate the model using a dataset on US states and compute a number of counterfactuals....
Cucinotta, Francis A.; Yan, Congchong; Saganti, Premkumar B.
2018-01-01
Heavy ion absorption cross sections play an important role in radiation transport codes used in risk assessment and for shielding studies of galactic cosmic ray (GCR) exposures. Because of the composition of the GCR primary nuclei and the nuclear fragmentation that produces secondary nuclei, heavy ions of charge number Z with 3 ≤ Z ≤ 28 and mass number A with 6 ≤ A ≤ 60, representing about 190 isotopes, occur in GCR transport calculations. In this report we describe methods for developing a database of isotope-dependent heavy ion absorption cross sections for these interactions. Calculations with a 2nd-order optical model solution of the coupled-channel Eikonal form of the nucleus-nucleus scattering amplitude are compared to 1st-order optical model solutions. The 2nd-order model takes into account two-body correlations in the projectile and target ground-states, which are ignored in the 1st-order optical model. Parameter-free predictions are described using one-body and two-body ground state form factors for the isotopes considered and the free nucleon-nucleon scattering amplitude. Root mean square (RMS) matter radii for protons and neutrons are taken from electron and muon scattering data and nuclear structure models. We report on extensive comparisons to experimental data for energy-dependent absorption cross sections for over 100 isotopes of elements from Li to Fe interacting with carbon and aluminum targets. Agreement between model and experiment is generally within 10% for the 1st-order optical model and improves to less than 5% for the 2nd-order optical model in the majority of comparisons. Overall, the 2nd-order optical model leads to a reduction in absorption compared to the 1st-order optical model for heavy ion interactions, which influences estimates of nuclear matter radii.
A single model procedure for estimating tank calibration equations
International Nuclear Information System (INIS)
Liebetrau, A.M.
1997-10-01
A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
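The single-model idea, per-segment polynomials extended over the whole height domain and estimated simultaneously in one pass, can be illustrated on synthetic height/volume data. The breakpoint, polynomial degrees, and noise level below are invented; a real calibration would use standardized run data and also model between-run variability, which this sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.sort(rng.uniform(0, 200, 300))          # liquid height, cm
breakpoint = 100.0                             # segment boundary (assumed)

# True calibration: a different quadratic in each region of the tank,
# with a volume jump at the boundary (e.g. an internal structure).
v_true = np.where(h < breakpoint,
                  2.0 * h + 0.010 * h**2,
                  50.0 + 1.5 * h + 0.012 * h**2)
v = v_true + rng.normal(scale=2.0, size=h.size)

# Single-model fit: a global quadratic plus "extended" correction terms
# that switch on past the breakpoint; all segments estimated at once.
hp = np.clip(h - breakpoint, 0.0, None)
X = np.column_stack([np.ones_like(h), h, h**2, (h >= breakpoint), hp, hp**2])
coef, *_ = np.linalg.lstsq(X, v, rcond=None)
v_fit = X @ coef
rmse = np.sqrt(np.mean((v_fit - v_true) ** 2))
```

Because the extended terms are zero below the breakpoint, the lower-segment fit is undisturbed, while the upper segment gets its own offset and curvature, so no segment-by-segment estimation or post-hoc stitching is needed.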
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used that performs multiple MCMC runs with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated on the basis of different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
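The heating-coefficient idea can be checked on a conjugate Gaussian toy model where both the tempered-posterior expectation of the log-likelihood and the exact marginal likelihood are available in closed form. In a real application, the expectation at each heating coefficient would come from an MCMC run; all model values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, tau2 = 20, 1.0, 4.0                 # data size, noise var, prior var
y = rng.normal(0.5, np.sqrt(sigma2), n)        # synthetic observations

def mean_loglik(beta):
    """E[log L] under the tempered posterior p(theta | y, beta) ~ L^beta * prior.
    In practice this expectation is approximated by an MCMC run at each beta;
    the conjugate Gaussian model makes it available in closed form."""
    lam = beta * n / sigma2 + 1.0 / tau2       # tempered posterior precision
    m = (beta * y.sum() / sigma2) / lam        # tempered posterior mean
    ss = np.sum((y - m) ** 2) + n / lam        # E[sum_i (y_i - theta)^2]
    return -0.5 * n * np.log(2 * np.pi * sigma2) - ss / (2 * sigma2)

# Heating schedule concentrated near beta = 0 (where E[log L] changes fastest),
# then trapezoidal integration of d(log Z)/d(beta) = E_beta[log L].
beta = np.linspace(0.0, 1.0, 1001) ** 3
f = np.array([mean_loglik(b) for b in beta])
log_ml_ti = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(beta))

# Exact log marginal likelihood: y ~ N(0, sigma2*I + tau2*J).
C = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
sign, logdet = np.linalg.slogdet(C)
log_ml_exact = -0.5 * (n * np.log(2 * np.pi) + logdet
                       + y @ np.linalg.solve(C, y))
```

At `beta = 0` the "run" samples the prior and at `beta = 1` the posterior, matching the abstract's two limits; the integral over the schedule recovers the marginal likelihood without the geometric-mean estimator's convergence problem.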
Third order dielectric susceptibility in a model quantum paraelectric
International Nuclear Information System (INIS)
Martonak, R.; Tosatti, E.
1996-02-01
In the context of perovskite quantum paraelectrics, we study the effects of a quadrupolar interaction J_q, in addition to the standard dipolar one J_d. We concentrate here on the nonlinear dielectric response χ_P^(3), as the main response function sensitive to quadrupolar (in our case antiquadrupolar) interactions. We employ a 3D quantum four-state lattice model and mean-field theory. The results show that inclusion of quadrupolar coupling of moderate strength (J_q ∼ J_d/4) is clearly accompanied by a double change of sign of χ_P^(3) from negative to positive, near the quantum temperature T_Q where the quantum paraelectric behaviour sets in. We fit our χ^(3) to recent experimental data for SrTiO_3, where the sign change is identified close to T_Q ∼ 37 K. (author). 40 refs, 2 figs
Directory of Open Access Journals (Sweden)
Menon Carlo
2011-09-01
Full Text Available Abstract Background Several regression models have been proposed for estimation of isometric joint torque using surface electromyography (SEMG) signals. Common issues related to torque estimation models are degradation of model accuracy with passage of time, electrode displacement, and alteration of limb posture. This work compares the performance of the most commonly used regression models under these circumstances, in order to assist researchers with identifying the most appropriate model for a specific biomedical application. Methods Eleven healthy volunteers participated in this study. A custom-built rig, equipped with a torque sensor, was used to measure isometric torque as each volunteer flexed and extended his wrist. SEMG signals from eight forearm muscles, in addition to wrist joint torque data, were gathered during the experiment. Additional data were gathered one hour and twenty-four hours following the completion of the first data gathering session, for the purpose of evaluating the effects of passage of time and electrode displacement on accuracy of models. Acquired SEMG signals were filtered, rectified, normalized and then fed to models for training. Results It was shown that mean adjusted coefficient of determination (R_a^2) values decreased by 20%-35% for the different models after one hour, while altering arm posture decreased mean R_a^2 values by 64%-74%. Conclusions Model estimation accuracy drops significantly with passage of time, electrode displacement, and alteration of limb posture. Therefore model retraining is crucial for preserving estimation accuracy. Data resampling can significantly reduce model training time without losing estimation accuracy. Among the models compared, the ordinary least squares linear regression model (OLS) was shown to have high isometric torque estimation accuracy combined with very short training times.
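The best-performing OLS model can be sketched with surrogate data: eight envelope features standing in for the rectified, filtered, normalized SEMG channels, a linear map to torque, and the adjusted coefficient of determination used to score the models. The feature construction and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 8        # 8 forearm SEMG channels, as in the study

# Surrogate "rectified, filtered, normalized" SEMG envelopes in [0, 1].
X = rng.uniform(0.0, 1.0, (n_samples, n_channels))
w_true = rng.normal(0.0, 1.0, n_channels)      # hypothetical channel weights
torque = X @ w_true + 0.05 * rng.normal(size=n_samples)

# Ordinary least squares regression from SEMG features to joint torque.
A = np.column_stack([np.ones(n_samples), X])   # include an intercept
coef, *_ = np.linalg.lstsq(A, torque, rcond=None)
pred = A @ coef

# Adjusted coefficient of determination R_a^2 used to score the models.
p = A.shape[1] - 1
ss_res = np.sum((torque - pred) ** 2)
ss_tot = np.sum((torque - torque.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
r2_adj = 1.0 - (1.0 - r2) * (n_samples - 1) / (n_samples - p - 1)
```

Training is a single linear solve, which is consistent with the paper's observation that OLS combines good accuracy with very short training times; the reported accuracy drops correspond to testing this fit on sessions recorded later or in a different posture.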
A distributed approach for parameters estimation in System Biology models
International Nuclear Information System (INIS)
Mosca, E.; Merelli, I.; Alfieri, R.; Milanesi, L.
2009-01-01
Due to the lack of experimental measurements, biological variability and experimental errors, the values of many parameters of systems biology mathematical models are still unknown or uncertain. A possible computational solution is parameter estimation, that is, the identification of the parameter values that give the best model fit with respect to the experimental data. We have developed an environment to distribute each run of the parameter estimation algorithm on a different computational resource. The key feature of the implementation is a relational database that allows the user to swap the candidate solutions among the working nodes during the computations. A comparison of the distributed implementation with the parallel one showed that the presented approach enables a faster and better parameter estimation of systems biology models.
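The estimation step can be illustrated with a toy exponential-decay model. A deterministic coarse-to-fine candidate search stands in for the paper's distributed estimation algorithm: each candidate region could be evaluated on a separate computational resource, with the shared relational database used to exchange the best solutions between runs. All model values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 50)
x0_true, k_true = 5.0, 0.7             # illustrative "true" parameters
data = x0_true * np.exp(-k_true * t) + 0.05 * rng.normal(size=t.size)

def sse(x0, k):
    """Model-fitting error with respect to the (synthetic) experimental data."""
    return float(np.sum((x0 * np.exp(-k * t) - data) ** 2))

def best_on_grid(x0_lo, x0_hi, k_lo, k_hi, m=40):
    """One estimation run over a region of parameter space; in a distributed
    setting, each region would go to a different computational resource."""
    x0s, ks = np.linspace(x0_lo, x0_hi, m), np.linspace(k_lo, k_hi, m)
    return min((sse(a, b), a, b) for a in x0s for b in ks)

# Coarse pass over the whole parameter space, then refinement around the best
# candidate (this is where workers would swap candidates via the database).
_, x0_hat, k_hat = best_on_grid(0.5, 10.0, 0.05, 2.0)
_, x0_hat, k_hat = best_on_grid(x0_hat - 0.3, x0_hat + 0.3,
                                k_hat - 0.06, k_hat + 0.06)
```

Real systems-biology models would replace the closed-form solution with an ODE integration inside `sse`, which is exactly why farming candidate evaluations out to many nodes pays off.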
Correlation between the model accuracy and model-based SOC estimation
International Nuclear Information System (INIS)
Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing
2017-01-01
State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the inherent error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation has remained unknown. This study summarizes three equivalent-circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted with the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation of the standard deviation and normalized RMSE are studied and compared between the model error and the SOC estimation error. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: the dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
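A minimal sketch of model-based SOC estimation, assuming a one-state equivalent-circuit model (series resistance plus an invented quadratic OCV curve) rather than the Thevenin/PNGV/DP models compared in the paper. It shows how an extended Kalman filter pulls a deliberately wrong initial SOC toward the truth while coulomb counting propagates the state:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, Q = 1.0, 2.0 * 3600.0            # step (s), capacity in A*s (2 Ah, assumed)
R0 = 0.05                            # ohmic resistance of the cell model

def ocv(s):
    """Assumed open-circuit-voltage curve OCV(SOC), mildly nonlinear."""
    return 3.0 + 0.7 * s + 0.3 * s * s

# --- simulate a constant-current discharge with noisy voltage readings ---
N, i_load = 3000, 1.0
soc_true = 0.9 - i_load * dt * np.arange(N) / Q
v_meas = ocv(soc_true) - R0 * i_load + 0.005 * rng.normal(size=N)

# --- extended Kalman filter on the one-state SOC model ---
soc, P = 0.5, 0.1                    # deliberately wrong initial SOC
q, r = 1e-10, 0.005 ** 2             # process / measurement noise variances
est = np.empty(N)
for k in range(N):
    soc = soc - i_load * dt / Q      # predict: coulomb counting
    P = P + q
    H = 0.7 + 0.6 * soc              # d OCV / d SOC, linearized at the prediction
    v_pred = ocv(soc) - R0 * i_load
    K = P * H / (H * P * H + r)      # Kalman gain
    soc = soc + K * (v_meas[k] - v_pred)
    P = (1.0 - K * H) * P
    est[k] = soc
```

In this sketch the filter model matches the simulator exactly, so the estimate converges; the paper's point is that in practice the residual model error (Thevenin vs. PNGV vs. DP) cannot be removed by the filter and propagates almost linearly into the SOC error.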
Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms
Berhausen, Sebastian; Paszek, Stefan
2016-01-01
In recent years, system failures have occurred in many power systems all over the world, leaving large numbers of consumers without power supply. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, an up-to-date base of parameters of the models of generating units, including the models of synchronous generators, is necessary. This paper presents a method for parameter estimation of a synchronous generator nonlinear model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing an objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used to minimize the objective function. The paper also describes a filter system used for filtering the noisy measurement waveforms. Calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.
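The estimation loop, perturb the regulation channel with a pseudorandom signal and then minimize the mean-square error between measured and simulated waveforms, can be sketched on a first-order stand-in for the generator model. The two-stage grid search below stands in for the paper's hybrid minimization algorithm, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, N = 1e-3, 2000
# Pseudorandom binary disturbance injected into the regulation channel.
prbs = np.where(rng.random(N) < 0.5, -1.0, 1.0)

def simulate(tau, gain):
    """Discrete first-order stand-in for one transfer path of the model."""
    a = np.exp(-dt / tau)
    y, yk = np.empty(N), 0.0
    for k in range(N):
        yk = a * yk + (1.0 - a) * gain * prbs[k]
        y[k] = yk
    return y

tau_true, gain_true = 0.05, 2.0
measured = simulate(tau_true, gain_true) + 0.01 * rng.normal(size=N)

def mse(params):
    """Mean-square deviation between measured and simulated waveforms."""
    tau, gain = params
    return np.mean((simulate(tau, gain) - measured) ** 2)

# Coarse global scan followed by local refinement, standing in for the
# paper's hybrid (global + local) minimization of the objective function.
coarse = [(t0, g0) for t0 in np.linspace(0.01, 0.2, 15)
                   for g0 in np.linspace(0.5, 4.0, 15)]
t_b, g_b = min(coarse, key=mse)
fine = [(t0, g0) for t0 in np.linspace(0.5 * t_b, 1.5 * t_b, 21)
                 for g0 in np.linspace(0.8 * g_b, 1.2 * g_b, 21)]
tau_hat, gain_hat = min(fine, key=mse)
```

The broadband excitation is what makes both the time constant and the gain identifiable from a single record, which is the role the pseudorandom disturbance plays in the paper's measurement setup.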